Search Results

Search found 15698 results on 628 pages for 'keep alive'.


  • OpenVPN multiple servers on the same subnet, high availability

    - by andre
    Hey everyone. Let me start by saying that my Linux experience isn't super awesome, but I can usually find my way around things easily. Over at work we have an OpenVPN setup that's been due for some improvement for a while now. The main server (tap mode) runs in our office, behind a rather slow DSL connection. The main problem is that, since I'm usually out of the office, every time I want to access something on the virtual network I have to go through that server to get anywhere else.

    We have two servers up on 100 Mbit connections that we use for development and production purposes, about 3 more servers in the office (one of them behind a different T1 line for VOIP) and about two dozen clients who use the network on a daily basis from various locations. We've had situations where network routing (outside of our control) would not allow people to reach our main OpenVPN server whilst the other locations were reachable. Also, any time someone outside the office wants to fetch something from any of the servers (say, a 500 MB code repository), a whopping 20 KB/s download speed is just unacceptable these days (did I mention slow DSL? OK). We had to implement traffic shaping on this server since maxing out this connection was fairly trivial.

    I had the thought of running two (or more) OpenVPN servers in the network. These would have to have the same subnet, though, as our application relies on the virtual network's IP addresses for some of its core functionality. The clients would also preferably retain the same IP addresses, but that's not vital. For simplicity, let's call the current server office and the second server I'm setting up, cloud. Call the server on the T1 phone. This proved to be rather complex because as soon as I connect to cloud, I cannot see office. Any routes to a server that would go through office also do not work while I'm connected to cloud (no ping, nothing) and vice versa. There are no iptables rules that would be blocking the traffic either.

    Recently I came across this article on linuxjournal, but the solution they provide seems to cover only two servers and looks somewhat outdated (I can't even find much documentation, and their wiki is offline). They also state that adding more servers would be a complex task. Ideally I would like to keep the existing server office running the virtual network and also run the OpenVPN daemon on the cloud and phone servers (100 Mbit and very reliable connection, respectively) so that we're on safe ground in case of a hardware failure, DSL failure, etc.

    So, in essence, I'm looking for a highly available OpenVPN solution (fix, patch, hack, tweak, whatever you want to call it) that will accept connections on multiple hosts (2 or more) whilst keeping the same IP address subnet regardless of the server you connect to. Thanks for reading and sorry for the long post, I hope it gets the point across :P
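
    For reference, a minimal sketch of the kind of tap-mode server config described above; every path, name and address here is a placeholder rather than something taken from the real setup:

      # /etc/openvpn/server.conf - hypothetical tap-mode instance, one per site
      port 1194
      proto udp
      dev tap0
      ca   /etc/openvpn/ca.crt
      cert /etc/openvpn/office.crt
      key  /etc/openvpn/office.key
      dh   /etc/openvpn/dh1024.pem
      server-bridge 10.8.0.1 255.255.255.0 10.8.0.50 10.8.0.99
      client-to-client
      keepalive 10 120
      persist-key
      persist-tun

    If several such servers ever share one subnet, the address pools on the server-bridge lines would at least need to be non-overlapping per server; that alone does not solve the routing problem described, it only avoids address clashes.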


  • Is it possible to change an "Unidentified Network" into a "Home" or "Work" network on Windows 7

    - by Rhys
    I have a problem with Windows 7 RC (7100). I frequently use a crossover network cable on WinXP with static IP addresses to connect to various industrial devices (e.g. robots, pumps, valves or even other Windows PCs) that have Ethernet network ports. When I do this on Windows 7, the network connection is classed as an "Unidentified Network" in Network and Sharing Center and the public firewall profile is enforced by Windows. I do not want to change the public profile and would prefer to use the Home or Work profile instead. For other networks like Home and Work I'm able to click on them and change the classification; this is not available for unidentified networks.

    My questions are these: Is there a way to manually override the "Unidentified Network" classification? What tests are performed on the network that fail, therefore classifying it as an "Unidentified Network"?

    By googling (hitting mainly Vista issues) it seems that you need to ensure that the default gateway is not 0.0.0.0. I've done this. I've also tried to remove IPv6, but this does not seem possible on Windows 7.

    UPDATE: For those still having problems, here is the answer to my issue and the possible reasons why. Win7 keeps a list of the networks you visit by (I am assuming, but don't know for sure) the MAC ID of the device pointed to by the default gateway. The default gateway is usually the constant device in a network (i.e. the NAT or router), so it can be used to uniquely identify one network from another. The default gateway in the IPv4 properties panel must therefore point to an actual endpoint so Windows can keep track of it. If there is a device at the end of the default gateway, Windows will identify it and track it, remembering its settings.

    The way you can therefore fool Win7 is to point the default gateway at either your own IP address or the IP address of the target device you're communicating with. This has the side effect of expecting that target device to start routing packets for IP destinations outside your subnet. So when applications on Win7 try to communicate with the internet, those packets are passed on to the default gateway (either back to your own IP address or to a target device that is not a router) and eventually time out, because neither can route packets - which you can usually live with. This gets slightly complicated when you mix this type of connection with a real connection to the internet via WiFi. The wired network card usually has priority when routing because of the "interface metric", so some applications might not connect correctly.
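
    As an illustration of the gateway trick described in the update, the static assignment could be made from an elevated command prompt, pointing the gateway back at the machine itself; the interface name and addresses below are invented for the example:

      netsh interface ipv4 set address name="Local Area Connection" static 192.168.50.10 255.255.255.0 192.168.50.10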


  • Dell PowerEdge R720 - Corrupted RAID

    - by BT643
    Apologies in advance for the lengthy question. We have a Dell PowerEdge R720 server with:

    2 x 136GB SAS drives in RAID 1 for the OS (Ubuntu Server 12.04)
    6 x 3TB SATA drives in RAID 5 for data

    A few days ago we were getting errors when trying to access files on the large RAID 5 partition. We rebooted the server and got a message that the RAID controller had found a foreign config. We've had this before, and just needed to use Dell's RAID configuration utility to import the foreign config. Last time this worked, but this time it started doing a disk check and then we got this:

      FSCK has returned the following: "/dev/sdb1 inode 364738 has a bad extended attribute block 7 /dev/sdb1 unexpected inconsistency run fsck manually (i.e without -a or -p options) MOUNTALL fsck /ourdatapartition [1019] terminated with status 4 MOUNTALL filesystem has errors /ourdatapartition errors where found while checking the disk drive for /ourdatapartition Press F to fix errors, I to Ignore or M for Manual Recovery"

    We pressed F to try and fix the errors, but it eventually errored with:

      Inode 275841084, i_blocks is 167080, should be 0. Fix? yes
      Inode 275841141 has an invalid extend node (blk 2206761006, lblk 0) Clear? yes
      Inode 275841141, i_blocks is 227872, should be 0. Fix? yes
      Inode 275842303 has an invalid extend node (blk 2206760975, lblk 0) Clear? yes
      ....
      Error storing directory block information (inode=275906766, block=0, num=2699516178): Memory allocation failed
      /dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
      e2fsck: aborted
      /dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
      mountall: fsck /ourdatapartition [1286] terminated with status 9
      mountall: Unrecoverable fsck error: /ourdatapartition

    We noticed one of the drive lights was not lit at all and thought this drive may have failed and be the problem. We replaced the drive with a spare and tried "F" to repair it again, but we just keep getting the same error as above. In the RAID configuration utility, all drives show as "online" and "optimal". We do have this data on another replicated server, so we're not worried about "recovering" anything; we just want to get the system back online asap.

    The server has 64 or 32GB of memory, I can't remember off the top of my head, but either way, with a 14TB RAID, I think it may still not be enough. Thanks.

    EDIT - I checked the memory usage while fsck was running, as suggested, and after 2 or 3 minutes it was using up nearly all of our server's memory. When it failed after 5 minutes or so with the error in my post, the memory immediately freed up again.
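
    One avenue not mentioned in the post, offered as a hedged suggestion: e2fsck can be told to keep its working tables on disk instead of in RAM, which is the usual workaround when checking a very large filesystem dies with "Memory allocation failed". The scratch directory path is an example:

      # /etc/e2fsck.conf
      [scratch_files]
      directory = /var/cache/e2fsck

    followed by mkdir -p /var/cache/e2fsck and a re-run of fsck -f /dev/sdb1. The check runs much more slowly this way, but it is no longer limited by RAM.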


  • Windows server detected error with hard disk

    - by user53864
    We have a hosted Windows Server 2008 R2 machine and I work as the admin in a small company. The server hangs and restarts, and the hard disk seems to be damaged due to power fluctuation (even though we have an inverter); it shows the error message below on every reboot:

      Problem detected with the hard disk
      Press any key to continue

    It's a Seagate 1TB SATA hard disk and it boots after pressing Enter, so it's clear the disk is dying. Yes, it's under warranty, but the fact is that the warranty won't recover the licensed Windows Server 2008 installation or its data. Since it still boots, I backed up the required things and I am thinking of cloning the entire hard disk. The first thing that struck me was to check the Seagate site for a cloning tool; I found Seagate DiskWizard, but it isn't specified for Windows Server 2008. Please could anybody help with their best ideas for the following:

    Urgently: what's the best way (free of cost) for me to clone to a new, same-sized hard disk in my case?

    The license is one-time and I cannot use the same key again if I reinstall the server. Will the license be carried over to the new disk if it is cloned? Or is there a way to contact Microsoft, explain what happened, and obtain a new key at no charge?

    I want to take measures for the future. How do I keep two disks in continuous sync? Are mirroring and RAID (converting the disks to dynamic) the only options, or is there a better way I could do it at no additional charge?

    Any help is greatly appreciated. Thank you!

    EDIT 1: I started cloning the disk with Clonezilla and it was progressing properly in the GUI. But after some time the GUI was gone and there was a black screen with codes (they look like disk location numbers) going page by page (I have attached the screenshots, captured from my phone). Do you think it's actually cloning? I started in the morning and it's evening now. I have left the office to let it finish whatever it's trying to do and I'll check it tomorrow. I'm slowly losing hope and don't know what it's going to show tomorrow. Any ideas?
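
    If DiskWizard turns out not to play well with Server 2008, one free fallback (a sketch only, assuming the replacement disk is at least as large as the old one and the server is booted from a live CD so nothing is writing to either disk; device names are examples, double-check them with fdisk -l first) is a raw sector copy:

      dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync

    A sector-for-sector clone normally carries the Windows installation and its activation with it when only the disk changes, but that is an assumption worth confirming with Microsoft rather than a guarantee.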


  • Fedora 11 System - Failed Hard Drive Removed, and Boot gets GRUB Hard Disk Error

    - by Mindful
    Greetings, I have a machine with a 120GB ATA drive that has what I thought to be non-essential data on it. I also have a 320GB SATA hard drive with the OS/Application/Files (good data I want to keep). My 120GB ATA is failing I believe, as my computer kept slowing to a halt. However, when I move the drive from BIOS my computer will not start, says "GRUB Hard Disk Error". I know that my Fedora system has an LVM setup. I am looking to just remove the 120GB drive from "the mix", and just have one hard drive. How do I recover ? Thank you. I have access to a Linux Live CD right now and can make any changes. However, it won't boot into my OS - it fails. UPDATE: here's my Grub.Conf # grub.conf generated by anaconda # # Note that you do not have to rerun grub after making changes to this file # NOTICE: You have a /boot partition. This means that # all kernel and initrd paths are relative to /boot/, eg. # root (hd1,0) # kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00 # initrd /initrd-version.img #boot=/dev/sda1 default=0 timeout=5 splashimage=(hd1,0)/grub/splash.xpm.gz hiddenmenu title Fedora (2.6.30.10-105.2.23.fc11.i686.PAE) root (hd1,0) kernel /vmlinuz-2.6.30.10-105.2.23.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet initrd /initrd-2.6.30.10-105.2.23.fc11.i686.PAE.img title Fedora (2.6.30.9-102.fc11.i686.PAE) root (hd1,0) kernel /vmlinuz-2.6.30.9-102.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet initrd /initrd-2.6.30.9-102.fc11.i686.PAE.img title Fedora (2.6.27.24-170.2.68.fc10.i686.PAE) root (hd1,0) kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet initrd /initrd-2.6.27.24-170.2.68.fc10.i686.PAE.img title Fedora (2.6.27.24-170.2.68.fc10.i686) root (hd1,0) kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet initrd /initrd-2.6.27.24-170.2.68.fc10.i686.img title Fedora (2.6.27.21-170.2.56.fc10.i686) root (hd1,0) kernel /vmlinuz-2.6.27.21-170.2.56.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet initrd /initrd-2.6.27.21-170.2.56.fc10.i686.img title Fedora (2.6.27.19-170.2.35.fc10.i686) root (hd1,0) kernel /vmlinuz-2.6.27.19-170.2.35.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet initrd /initrd-2.6.27.19-170.2.35.fc10.i686.img title Upgrade to Fedora 10 (Cambridge) kernel /upgrade/vmlinuz preupgrade repo=hd::/var/cache/yum/preupgrade stage2=http://chi-10g-1-mirror.fastsoft.net/pub/linux/fedora/linux/releases/10/Fedora/i386/os/images/install.img ks=hd:UUID=f11769ba-29bc-46de-8c40-a949720a438e:/upgrade/ks.cfg initrd /upgrade/initrd.img title Win rootnoverify (hd0,0) chainloader +1
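
    A hedged sketch of the usual repair, assuming the 320GB SATA disk becomes the first BIOS disk (hd0) once the 120GB drive is removed: boot the live CD, reinstall GRUB onto that disk, and change every (hd1,0) reference in grub.conf to (hd0,0), including the splashimage line. In the legacy GRUB shell that Fedora of this era uses, the reinstall part looks like:

      grub> root (hd0,0)
      grub> setup (hd0)
      grub> quit

    /boot/grub/device.map may also still list both drives and want the stale entry removed.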


  • Switch flooding when bonding interfaces in Linux

    - by John Philips
    +--------+ | Host A | +----+---+ | eth0 (AA:AA:AA:AA:AA:AA) | | +----+-----+ | Switch 1 | (layer2/3) +----+-----+ | +----+-----+ | Switch 2 | +----+-----+ | +----------+----------+ +-------------------------+ Switch 3 +-------------------------+ | +----+-----------+----+ | | | | | | | | | | eth0 (B0:B0:B0:B0:B0:B0) | | eth4 (B4:B4:B4:B4:B4:B4) | | +----+-----------+----+ | | | Host B | | | +----+-----------+----+ | | eth1 (B1:B1:B1:B1:B1:B1) | | eth5 (B5:B5:B5:B5:B5:B5) | | | | | | | | | +------------------------------+ +------------------------------+ Topology overview Host A has a single NIC. Host B has four NICs which are bonded using the balance-alb mode. Both hosts run RHEL 6.0, and both are on the same IPv4 subnet. Traffic analysis Host A is sending data to Host B using some SQL database application. Traffic from Host A to Host B: The source int/MAC is eth0/AA:AA:AA:AA:AA:AA, the destination int/MAC is eth5/B5:B5:B5:B5:B5:B5. Traffic from Host B to Host A: The source int/MAC is eth0/B0:B0:B0:B0:B0:B0, the destination int/MAC is eth0/AA:AA:AA:AA:AA:AA. Once the TCP connection has been established, Host B sends no further frames out eth5. The MAC address of eth5 expires from the bridge tables of both Switch 1 & Switch 2. Switch 1 continues to receive frames from Host A which are destined for B5:B5:B5:B5:B5:B5. Because Switch 1 and Switch 2 no longer have bridge table entries for B5:B5:B5:B5:B5:B5, they flood the frames out all ports on the same VLAN (except for the one it came in on, of course). Reproduce If you ping Host B from a workstation which is connected to either Switch 1 or 2, B5:B5:B5:B5:B5:B5 re-enters the bridge tables and the flooding stops. After five minutes (the default bridge table timeout), flooding resumes. Question It is clear that on Host B, frames arrive on eth5 and exit out eth0. This seems ok as that's what the Linux bonding algorithm is designed to do - balance incoming and outgoing traffic. But since the switch stops receiving frames with the source MAC of eth5, it gets timed out of the bridge table, resulting in flooding. Is this normal? Why aren't any more frames originating from eth5? Is it because there is simply no other traffic going on (the only connection is a single large data transfer from Host A)? I've researched this for a long time and haven't found an answer. Documentation states that no switch changes are necessary when using mode 6 of the Linux interface bonding (balance-alb). Is this behavior occurring because Host B doesn't send any further packets out of eth5, whereas in normal circumstances it's expected that it would? One solution is to setup a cron job which pings Host B to keep the bridge table entries from timing out, but that seems like a dirty hack.
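
    The cron workaround mentioned at the end could be as small as the sketch below (the interval, interface and host name are illustrative; the point is just to source a frame from eth5 more often than the 5-minute ageing timer):

      # /etc/cron.d/refresh-bridge-entry
      */4 * * * * root ping -c 1 -I eth5 hostA >/dev/null 2>&1

    The cleaner fix, where the switches allow it, is usually to move away from balance-alb to a mode that presents a single MAC to the network, e.g. active-backup or 802.3ad/LACP (BONDING_OPTS="mode=802.3ad miimon=100" in the RHEL ifcfg file), at the cost of the receive-side balancing that balance-alb provides.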


  • Change Logon DPI setting in Windows 8.1

    - by jmc302005
    I love how M$ keeps making decisions for me about how I want my desktop to look. Now they have added per-user DPI settings. The problem this has created is that there is no adjustable DPI setting for the lock/logon screen. Let me explain: you can change the DPI setting to be the same across all displays, and this does affect the icons and font on the lock/logon screen. However, it does not affect any app/program that can run on the lock/logon screen.

    For example, I use a 44" flat-screen TV as the monitor on my desktop - big enough for me to sit in my recliner and use my computer. But I don't have a wireless keyboard, and it sucks having the wire from the keyboard running across the floor; plus I really don't want to keep a keyboard next to me. So I use the on-screen keyboard for logging in and quick typing (search, web addresses, etc.). The problem is that with the new DPI setup, the on-screen keyboard takes up nearly half the screen. Does M$ think we are all blind? Oh no, I remember: they think desktops should look like tablets and phones.

    I tried looking through the registry to see if I could find a setting for it. In the key HKEY_USERS\.DEFAULT\Control Panel\Desktop there is a string value named "LogicalDPIOverride" with a value of -1. I have a feeling this is where I can fix the issue. I tried changing the value to 0 and to 1 with no change in the result. Instead, I noticed that after logging out and back in, the -1 value was back in the registry. So now M$ has also added a way for us to not be able to change a setting in the registry. They are making it harder and harder for us power users to do anything with the settings in Windows. Soon we will all have the exact same Windows with absolutely no customization. OK, sorry for the quick rant.

    The real question here is: how can I change this default DPI behaviour? Can I use the LogPixels string that worked for DPI in Windows 7? Here are two screenshots, one of the lock screen and one of the logon screen: http://i.imgur.com/6RM5ufE.jpg http://i.imgur.com/cnY5bmm.jpg Please, any help will be appreciated.
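
    For what it's worth, the Windows 7-style LogPixels value can be pushed into the default profile from an elevated prompt; whether 8.1 still honours it on the logon screen is exactly what is in question here, so treat this as an experiment rather than a fix (96 = 100%, 120 = 125%):

      reg add "HKU\.DEFAULT\Control Panel\Desktop" /v LogPixels /t REG_DWORD /d 120 /f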


  • Add user in CentOS 5

    - by Ron
    I created a new user in my CentOS web server with useradd. Added a password with passwd. But I can't log in with the user via SSH. I keep getting 'access denied'. I checked to make sure that the password was assigned and that the account is active. /var/log/secure shows the following error: Aug 13 03:41:40 server1 su: pam_unix(su:auth): authentication failure; logname= uid=500 euid=0 tty=pts/0 ruser=rwade rhost= user=root Please help, Thanks Thanks for the responses so far: I should add that it is a VPS on a remote computer, fresh out of the box. I can log in as the root user quite fine. I can also su to the new user, but I cannot log in as the new user. Here is my sshd_config file: # $OpenBSD: sshd_config,v 1.73 2005/12/06 22:38:28 reyk Exp $ # This is the sshd server system-wide configuration file. See # sshd_config(5) for more information. # This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin # The strategy used for options in the default sshd_config shipped with # OpenSSH is to specify options with their default value where # possible, but leave them commented. Uncommented options change a # default value. #Port 22 #Protocol 2,1 Protocol 2 #AddressFamily any #ListenAddress 0.0.0.0 #ListenAddress :: # HostKey for protocol version 1 #HostKey /etc/ssh/ssh_host_key # HostKeys for protocol version 2 #HostKey /etc/ssh/ssh_host_rsa_key #HostKey /etc/ssh/ssh_host_dsa_key # Lifetime and size of ephemeral version 1 server key #KeyRegenerationInterval 1h #ServerKeyBits 768 # Logging # obsoletes QuietMode and FascistLogging #SyslogFacility AUTH SyslogFacility AUTHPRIV #LogLevel INFO # Authentication: #LoginGraceTime 2m #PermitRootLogin yes #StrictModes yes #MaxAuthTries 6 #RSAAuthentication yes #PubkeyAuthentication yes #AuthorizedKeysFile .ssh/authorized_keys # For this to work you will also need host keys in /etc/ssh/ssh_known_hosts #RhostsRSAAuthentication no # similar for protocol version 2 #HostbasedAuthentication no # Change to yes if you don't trust ~/.ssh/known_hosts for # RhostsRSAAuthentication and HostbasedAuthentication #IgnoreUserKnownHosts no # Don't read the user's ~/.rhosts and ~/.shosts files #IgnoreRhosts yes # To disable tunneled clear text passwords, change to no here! #PasswordAuthentication yes #PermitEmptyPasswords no PasswordAuthentication yes # Change to no to disable s/key passwords #ChallengeResponseAuthentication yes ChallengeResponseAuthentication no # Kerberos options #KerberosAuthentication no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes #KerberosGetAFSToken no # GSSAPI options #GSSAPIAuthentication no GSSAPIAuthentication yes #GSSAPICleanupCredentials yes GSSAPICleanupCredentials yes # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication mechanism. # Depending on your PAM configuration, this may bypass the setting of # PasswordAuthentication, PermitEmptyPasswords, and # "PermitRootLogin without-password". 
If you just want the PAM account and # session checks to run without PAM authentication, then enable this but set # ChallengeResponseAuthentication=no #UsePAM no UsePAM yes # Accept locale-related environment variables AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT AcceptEnv LC_IDENTIFICATION LC_ALL #AllowTcpForwarding yes #GatewayPorts no #X11Forwarding no X11Forwarding yes #X11DisplayOffset 10 #X11UseLocalhost yes #PrintMotd yes #PrintLastLog yes #TCPKeepAlive yes #UseLogin no #UsePrivilegeSeparation yes #PermitUserEnvironment no #Compression delayed #ClientAliveInterval 0 #ClientAliveCountMax 3 #ShowPatchLevel no #UseDNS yes #PidFile /var/run/sshd.pid #MaxStartups 10 #PermitTunnel no #ChrootDirectory none # no default banner path #Banner /some/path # override default of no subsystems Subsystem sftp /usr/libexec/openssh/sftp-server
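
    A few quick checks that often explain an "access denied" on a box like this; the username is the one from the log line above and the commands are standard CentOS tools:

      passwd -S rwade              # password set, or is the account locked (LK)?
      grep rwade /etc/passwd       # does the user have a real login shell, e.g. /bin/bash?
      chage -l rwade               # has the password or account expired?
      tail -f /var/log/secure      # watch the exact sshd/PAM message during a login attempt

    Note that the quoted failure is a su-to-root attempt, not the SSH login itself, so the sshd line logged at the moment of the failed login is the more telling one.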


  • How to convert DVD to iPad (also converts to iPods)?

    - by goodm
    DVD to iPad Converter (Also converts to iPods) DVD to iPad Converter is the easiest-to-use and fastest DVD to iPad converter for Apple iPad movie and iPad video. It can convert almost any kind of DVD to iPad movie or iPad video format. It is also a powerful DVD to iPad converter with a conversion speed that is much faster than real-time. With this converter, you can use your iPad as a portable DVD player and enjoy your favorite DVDs on your iPad. http://www.softseeking.com/prodail.aspx?proid=83" Features of this DVD to iPad Converter Three Running Modes --In Direct Mode, you can directly click the DVD menu to select the movie you want to rip. This mode is very easy for ripping DVD movies. --In Batch Mode, you can select the DVD titles or chapters you want to rip via a checkbox list. This mode is very easy for batch ripping music DVDs, MTV DVDs and episodic DVDs. --In 1-Click Mode, you just need one click to open a DVD, after which the rest of the task will be done automatically. This is a “designed-for-dummies” mode. Input Types You can convert almost any kind of DVD format to iPad. Output Splitting You can split your output video by DVD chapters and titles. Fully supports MTV DVDs and episodic DVDs. File Size and Quality Adjustment You can customize the output file size and corresponding video quality. Flexible Output Profiles You can easily customize the various video settings such as brightness, bit rate, etc. Language Selection for Subtitles and Audio Track In Direct mode and in Batch mode, you can select the subtitle and audio track language. (In 1-Click mode, the default language is chosen automatically). Video Crop You can crop your video to 16:9, 4:3, full screen, etc. Video Resize You can resize your video. For example, you can set it to "Keep aspect ratio" or "Stretch to fit screen." Other Converts DVD to MP3 audio. Supports Dolby, DTS Surround audio track. Converts to the latest iPhone, 4th generation iPad nano, nano chromatic, 2nd generation iPad touch, and Apple TV.


  • How to setup GIT repo on server with need for working dir (non- bare)

    - by OrangeTux
    I want to configure a Git repo for a website. Multiple users will have a clone of the repo on their local machine, and at the end of each day they push their work to the server. I can set up a bare repo, but I want a working dir/non-bare repository. The idea is that the working dir of the repository will be the root folder for the website, so at the end of each day all changes are directly visible. But I can't find a way to do this. Initializing the server repo with git init gives the following error when a client tries to push some files:

      git push origin master
      [email protected]'s password:
      Counting objects: 3, done.
      Writing objects: 100% (3/3), 227 bytes, done.
      Total 3 (delta 0), reused 0 (delta 0)
      remote: error: refusing to update checked out branch: refs/heads/master
      remote: error: By default, updating the current branch in a non-bare repository
      remote: error: is denied, because it will make the index and work tree inconsistent
      remote: error: with what you pushed, and will require 'git reset --hard' to match
      remote: error: the work tree to HEAD.
      remote: error:
      remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
      remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
      remote: error: its current branch; however, this is not recommended unless you
      remote: error: arranged to update its work tree to match what you pushed in some
      remote: error: other way.
      remote: error:
      remote: error: To squelch this message and still keep the default behaviour, set
      remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
      To ssh://[email protected]/home/orangetux/www/
      ! [remote rejected] master -> master (branch is currently checked out)
      error: failed to push some refs to 'ssh://[email protected]/home/orangetux/www/'

    So I'm wondering if this is the right way to set up a Git repo for a website. If so, how do I do this? If not, what is a better way to set up a Git repo for the development of a website?

    EDIT: "you can't push to a non-bare repository" - OK, clear. But what's the way to solve my problem? Create a bare repository on the server and have a clone of that repo on the same server in the htdocs folder? This looks a bit clumsy to me: to see the result of a commit I'd have to clone the repository each time.
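
    The standard pattern for the bare-repo-plus-live-working-tree idea in the edit is a post-receive hook that checks the pushed branch out into the web root. A sketch, reusing the paths from the question but otherwise hypothetical:

      # on the server, one-off setup
      git init --bare /home/orangetux/site.git

      # /home/orangetux/site.git/hooks/post-receive  (mark it executable with chmod +x)
      #!/bin/sh
      GIT_WORK_TREE=/home/orangetux/www git checkout -f master

    Developers then push to site.git and the hook refreshes /home/orangetux/www on every push, so nobody has to clone or pull on the server by hand.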


  • How can I create an appointment on a shared Outlook calendar

    - by roryhewitt
    This isn't as basic a question as it may seem. Hence all the descriptive text... I and my team use Outlook 2007. I have my own personal calendar and also I share a calendar with others in my team (which is mainly used to notify everyone of vacations etc.). The others in my team are NOT technical people. I would like to create a shortcut or template that any of us can use to create an appointment in the shared calendar. My initial though was to create a new appointment, but rather than actually put it in the calendar, I would save it as an Outlook Template (.oft) file. Once created, I would send this to my team and tell them to put it in their Templates file and put a shortcut on their desktop. Then, if they want to put a vacation in the shared calendar, they just double-click on the shortcut, change the dates etc. and then save & close it. However, when I do that, it doesn't save the fact that it's an appointment on the shared calendar - it just adds the appointment to the team member's personal calendar. There doesn't seem to be a way to specify a calendar in the template. I've also tried this by saving the template as a .ics or .vcs file, with no better luck. Additionally, if a team member adds an appointment to the shared calendar, other 'sharees' aren't notified, unless the appointment is actually created as a meeting and the other sharees are explicitly invited. I found this online (http://office.microsoft.com/en-us/outlook-help/keep-everyone-informed-about-time-away-from-the-office-HA010209819.aspx) which APPEARS to say that what I want to do isn't built in functionality (since it shows a bunch of steps to go through. I'd PREFER not to have to add this stuff to everyone else's personal calendar directly. So... Is this possible to do, natively (i.e. directly in Outlook)? Would a Sharepoint calendar make more sense and allow this functionality? Is there a way to do what I want which will allow the other team members to be notified? Like I said, I'm looking for as simple an interface as possible - these people aren't going to want to do much more than open something and change dates. Additionally, they're probably not going to have any fancy software on their PC's, although they will be up to date with Java and (maybe) .NET frameworks. Also, before anyone gets funny, yes, this has to work with Outlook 2007, as it's a corporate standard - we're not able to change that, even though e.g. Google Calendar would do this wonderfully. Obviously if this functionality is available in Outlook 2010, then fantastic - we might be able to upgrade. Thanks!
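
    For completeness, creating the item programmatically against the Outlook object model looks roughly like the sketch below; "HR Vacations" is a made-up recipient name and this assumes the shared calendar is the default calendar of a shared mailbox (a public-folder calendar would need a different folder lookup). It also does not address the notification problem:

      Set olApp = CreateObject("Outlook.Application")
      Set olNs = olApp.GetNamespace("MAPI")
      Set olRcp = olNs.CreateRecipient("HR Vacations")
      olRcp.Resolve
      Set olCal = olNs.GetSharedDefaultFolder(olRcp, 9)   ' 9 = olFolderCalendar
      Set olAppt = olCal.Items.Add(1)                     ' 1 = olAppointmentItem
      olAppt.Subject = "Vacation - J. Smith"
      olAppt.Start = CDate("2010-06-01 09:00")
      olAppt.AllDayEvent = True
      olAppt.Save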


  • freebsd-update from 8.3-RELEASE to 9.0-RELEASE: How to deal with dozens of diffs?

    - by Stefan Lasiewski
    I am upgrading a FreeBSD 8.3-RELEASE system to FreeBSD 9.0-RELEASE using freebsd-update. This is my first time performing a major version upgrade in FreeBSD. At one point in the process, freebsd-update performs a diff on files which are different then what is expected for the 9.0-RELEASE. It compares the current version on the system with the new changes added from 9.0-RELEASE. There are dozens of files in the list. Thus, I am presented with dozens and dozens of diffs which open in a vi window and look like this: The following file could not be merged automatically: /etc/ntp.conf Press Enter to edit this file in vi and resolve the conflicts manually... ### vi window opens <<<<<<< current version driftfile /etc/ntp/drift ======= # # $FreeBSD: release/9.0.0/etc/ntp.conf 195652 2009-07-13 05:51:33Z dwmalone $ # # Default NTP servers for the FreeBSD operating system. # # Don't forget to enable ntpd in /etc/rc.conf with: # ntpd_enable="YES" # # The driftfile is by default /var/db/ntpd.drift, check # /etc/defaults/rc.conf on how to change the location. # >>>>>>> 9.0-RELEASE restrict default notrust nomodify ignore And so on. This requires that I manually edit each file and remove the strings like <<<<<<< current version >>>>>>> 9.0-RELEASE and =======. As I discovered afterwards, if I don't remove these strings, they end up in the file afterwards. There are dozens of files which differ between 8.3 and 9.0, and I have a dozen local modifications myself. It appears that freebsd-update is using a diff, sdiff or mergemaster function of some sort, but I can't tell what it is doing exactly. Processing these files is tedious. Is there a way that I can just say "Accept new version" or "keep old version" or "Your merge is correct"? There has got to be an easier way to deal with these files. I must be missing something. This isn't a huge problem for one machine, but eventually I'll be doing this dozens of times and I want to find an easier way.
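
    As far as I know there is no "take theirs / take mine" switch in freebsd-update's merge step, but once the prompts are done it is cheap to double-check that no conflict markers were left behind (a sketch; adjust the paths to whatever was merged):

      grep -rlE '^(<<<<<<<|=======|>>>>>>>)' /etc

    Any file that turns up still needs its markers edited out before continuing with the next freebsd-update install step.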


  • Second HDD not seen by Windows 7 on Dell Xps l501x

    - by George
    I have a Dell XPS laptop (L501X). I replaced the original Seagate 500GB hard drive with an Intel 320 120GB SSD when I first purchased it a year ago, and it's been working great. The laptop boots in about 23 seconds, so the SSD is great. I have an Acronis image that I go back to every three months just to keep everything clean. The SSD is partitioned with one logical drive for my data.

    Recently I thought, since I am not using my optical drive often enough, I'd swap it out for an HDD caddy and add my Seagate 500GB hard drive. I ordered the caddy, placed the HDD in it, and tried to load Windows. It just hangs on the screen that should show the Windows logo. I have tried everything I know and searched online. I have uninstalled the SATA AHCI controller and let Windows reinstall it; it still will not boot into Windows. I should mention that the Seagate 500GB drive is the one that came with my laptop before I switched to the Intel SSD.

    Intel has this application called Intel Rapid Storage Technology which loads once in a while and shows the second hard drive, but when I restart it hangs again and Windows will not load. As soon as I remove the HDD caddy and restart, it loads Windows fine. I also formatted the Seagate 500GB HDD as NTFS and Windows still will not load. When I go into the BIOS it shows the fixed SSD and also "Sata ODD 500GB" instead of the optical drive, but it will not boot into Windows when the HDD caddy is present.

    There is nothing wrong with the caddy: I installed it in another laptop (an Asus) and Windows 7 loads without any glitch. I don't get it. I have also flashed the BIOS because Dell had a new version (A08). I also want to add that I refreshed Disk Management and Device Manager and the second drive does not appear. At this point I think it's a Windows issue, so before I reinstall Windows 7 Home Premium from scratch I wanted to see if there was anything I was missing. Any advice would be greatly appreciated.


  • "log on as a batch job" user rights removed by what GPO?

    - by LarsH
    I am not much of a server administrator, but get my feet wet when I have to. Right now I'm running some COTS software on a Windows 2008 Server machine. The software installer creates a few user accounts for running its processes, and then gives those users the right to "log on as a batch job". Every so often (e.g. yesterday at 2:52pm and this morning at 7:50am), those rights disappear. The software then stops working. I can verify that the user rights are gone by using secedit /export /cfg e:\temp\uraExp.inf /areas USER_RIGHTS and I have a script that does this every 30 seconds and logs the results with a timestamp, so I know when the rights disappear. What I see from the export is that in the "good" state, i.e. after I install the software and it's working correctly, the line for SeBatchLogonRight from the secedit export includes the user accounts created by the software. But every few hours (sometimes more), those user accounts are removed from that line. The same thing can be seen by using the GUI tool Local Security Policy > Security Settings > Local Policies > User Rights Assignment > Log on as a batch job: in the "good" state, that policy includes the needed user accounts, and in the bad state, the policy does not. Based on the above-mentioned logging script and the timestamps at which the user rights are being removed, I can see clearly that some GPOs are causing the change. The GPO Operational log shows GPOs being processed at exactly the right times. E.g.: Starting Registry Extension Processing. List of applicable GPOs: (Changes were detected.) Local Group Policy I have run GPOs on demand using gpupdate /force, and was able to verify that this caused the User Rights to be removed. We have looked over local group policies till our eyes are crossed, trying to figure out which one might be stripping these User Rights to "log on as a batch job." We have not configured any local group policies on this machine, that we know of; so is there a default local group policy that might typically do such a thing? Are there typical domain policies that would do this? I have been working with our IT staff colleagues to troubleshoot the problem, but none of them are really GPO experts... They wear many hats, and they do what they need to do in order to keep most things running. Any suggestions would be greatly appreciated!
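
    Two small things that can help pin this down, sketched here with example paths: gpresult records which GPOs actually applied (and when), and the 30-second export loop described above can log the right itself next to a timestamp so the change lines up with the GPO Operational log:

      rem ura_watch.cmd - illustrative polling loop
      :loop
      echo %date% %time% >> C:\temp\ura_log.txt
      secedit /export /cfg C:\temp\uraExp.inf /areas USER_RIGHTS
      findstr /C:"SeBatchLogonRight" C:\temp\uraExp.inf >> C:\temp\ura_log.txt
      gpresult /r /scope computer >> C:\temp\ura_log.txt
      timeout /t 30 >nul
      goto loop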


  • Creating a network link between 2 very close buildings

    - by Daniel Johnson
    I have a charity who have two adjacent, medium-sized, modern detached houses (in the UK): the buildings stand next to each other and are less than 5 metres apart. They have DSL connected to a single computer in one of the buildings. They want to add a network with wireless, and want it to work across both buildings. Being a charity they need to keep costs down. The network would be used for sharing Word documents, e-mail, browsing and Skyping. My initial thoughts were to connect the buildings with fibre. So:

    Option 1: Use fibre between the buildings. Sufficient cable and two TP-LINK MC100CM Fast Ethernet media converters. Cost ~£80.00. But there is the extra cost and hassle of running the cable down and up the external walls, lifting and relaying paving, and burying it underground. Never having fitted fibre, I'm also a little worried about going up the wall and then bending the cable at 90 degrees to go through the wall and into the building.

    Option 2: Use two TP-Link TL-WA7510N high-powered outdoor 5GHz 15dBi wireless antennas to connect the buildings. There is a clear line of sight at first-floor level. Cost ~£100, and much easier to fit than fibre! Is using the TL-WA7510Ns overkill? Is there something more suitable? I had hoped to use some Netgear kit, e.g. two DGN2200s, one in each house, and also use them to provide the wireless link between the buildings. However, in bridge mode wireless client association is not available, and repeater mode with client association only supports WEP security, which isn't strong enough. Is there something similar that would be up to the job?

    Option 3: Connect the buildings with UTP cable. My concerns here are the risk of electric shock due to a difference of potential between the buildings (or are they so close this shouldn't be an issue?) and protection from lightning strikes. Is fitting lightning arresters expensive? And what can be done to mitigate the risk of shock?

    This all falls outside my area of expertise so I would really appreciate some advice.


  • When upgrading from Vista to Windows 7 on a DELL laptop, how do I know which drivers to reinstall and in what order?

    - by msorens
    According to Dell's upgrade page for Vista to Windows 7, after using the upgrade assistant the final step is to install drivers. They refer to this page for the order of driver installation, listing 9 items. From there I go to the Dell Drivers and Downloads page, enter my system tag, and get a list of the downloads available for my specific box. That page, by the way, has a link to driver install instructions that lists 10 rather than 9 items. Going to Drivers Help in the side panel and clicking on "In what order should drivers be installed?" shows yet a third list, this one containing 13 items. Not surprisingly, the order of these 3 lists of drivers are not quite the same for the common items! Furthermore, of the 26 files Dell's site recommends for my machine, there are several not shown on any of the 3 lists! I can make determinations for some of these: 6 of them are "applications" so I know which of those I want and that they could probably be safely installed after all drivers. BIOS: I would think this should be unaffected by an OS upgrade so could be skipped. Two tools in the diagnostics category: could probably be done after all drivers. That leaves just a CD/DVD driver and a webcam driver unaccounted for. So my two related questions are these: How critical is the driver installation order and which one do I follow? (Keep in mind this is for an upgrade, not a fresh install.) Where in the order do I insert the CD/DVD and the webcam drivers (if needed) ? Dell's driver download page provides (in theory) the list of all downloads relevant to my specific machine, via the service tag. But do I actually need to reinstall all of them? some? none? How does one determine this? They do label each with Recommended or Optional, so do I need to reinstall all the recommended ones? (Part of the reason for my perplexed frown is that I wonder why I would need to reinstall a CD/DVD driver since I would already be using the drive to install the OS!)


  • What is the best way to achieve an RPO of zero and the lowest possible RTO (less than 15 minutes) with SQL 2008 R2?

    - by Adrian Hope-Bailie
    We are running a payments (EFT transaction processing) application which is processing high volumes of transactions 24/7 and are currently investigating a better way of doing DB replication to our disaster recovery site. Our current and previous strategies have included using both DoubleTake and Redgate to replicate data to a warm stand-by. DoubleTake is the supported solution from the payments software vendor however their (DoubleTake's) support in South Africa is very poor. We had a few issues and simply couldn't ever resolve them so we had to give up on DoubleTake. We have been using Redgate to manually read the data from the primary site (via queries) and write to the DR site but this is: A bad solution Getting the software vendor hot and bothered whenever we have support issues as it has a tendency to interfere with the payment application which is very DB intensive. We recently upgraded the whole system to run on SQL 2008 R2 Enterprise which means we should probably be looking at using some of the built-in replication features. The server has 2 fairly large databases with a mixture of tables containing highly volatile transactional data and pretty static configuration data. Replication would be done over a WAN link to a separate physical site and needs to achieve the following objectives. RPO: Zero loss - This is transactional data with financial impact so we can't lose anything. RTO: Tending to zero - The business depends on our ability to process transactions every minute we are down we are losing money I have looked at a few of the other questions/answers but none meet our case exactly: SQL Server 2008 failover strategy - Log shipping or replication? How to achieve the following RTO & RPO with logshipping only using SQL Server? What is the best of two approaches to achieve DB Replication? My current thinking is that we should use mirroring but I am concerned that for RPO:0 we will need to do delayed commits and this could impact the performance of the primary DB which is not an option. Our current DR process is to: Stop incoming traffic to the primary site and allow all in-flight transaction to complete. Allow the replication to DR to complete. Change network routing to route to DR site. Start all applications and services on the secondary site (Ideally we can change this to a warmer stand-by whereby the applications are already running but not processing any transactions). In other words the DR database needs to, as quickly as possible, catch up with primary and be ready for processing as the new primary. We would then need to be able to reverse this when we are ready to switch back. Is there a better option than mirroring (should we be doing log-shipping too) and can anyone suggest other considerations that we should keep in mind?
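
    If mirroring ends up being the route, an RPO of zero implies high-safety (synchronous) mode, which is a one-line change per database once the partnership exists; the database name and witness address below are placeholders:

      -- run on the principal after SET PARTNER has established the session
      ALTER DATABASE Payments SET PARTNER SAFETY FULL;   -- synchronous commits
      -- automatic failover additionally needs a witness:
      -- ALTER DATABASE Payments SET WITNESS = 'TCP://witness.example.local:5022';

    Synchronous mode is what introduces the commit latency worried about above, so it is worth measuring on the real WAN link before committing to it.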


  • How can I remove old log entries from a log file and archive them somewhere else in Linux?

    - by Mike B
    CentOS 4.x. I apologize in advance if this is not the appropriate place to ask this question; it pertains to a Linux server / IT admin task.

    I've got a log file on an old CentOS 4.x server and I want to remove log entries older than a certain date and place them in a new file for archive. Here's an example of the log format:

      2012-06-07 22:32:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
      2012-06-07 22:32:03,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
      2012-06-07 22:32:04,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
      2012-06-07 22:32:10,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
      2012-06-07 22:32:12,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
      2012-06-07 22:32:15,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
      2012-06-07 22:32:40,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
      2012-06-07 22:32:58,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
      2012-06-07 22:33:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
      2012-06-07 22:33:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
      2012-06-07 22:33:02,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|

    Essentially, I'm looking for a one-liner that will do the following:
    1. Find any events older than a provided YYYY-MM-DD and remove them from the primary log file.
    2. Take the deleted events from step 1 and put them in a new log file.
    3. (Optional) Compress the new archive log file holding the deleted events.

    I'm aware that there are log-rotation tools that do this, but this should just be a one-time task so I'd prefer not to set that up.

    Additional notes: If the date part is tricky or too resource-intensive, an alternative would be to just keep the last X number of lines and move the rest. I was originally thinking of something like tail -n 10000 > newfile.txt, but that would mean moving the "good" logs to a new file and then doing a name swap... and then I'd still need to remove the "good" entries from the archive. This particular log file is pretty large (1 GB) so I'd prefer the task to be as resource- and time-efficient as possible. The extra pipes in the log lines concern me and I'm not sure whether I'd need extra protection in the commands to stop them causing problems.
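
    A hedged sketch of the sort of one-liner being asked for: because the timestamps are ISO-style (YYYY-MM-DD ...), a plain string comparison in awk is enough to split the file, and the extra pipes are harmless since awk only looks at the first field. File names and the cutoff date are examples:

      awk -v cutoff="2012-06-01" '$1 < cutoff { print > "archive.log"; next } { print > "current.log" }' /var/log/app.log
      gzip archive.log

    Swapping current.log into place of the original should only be done while nothing is appending to the log, otherwise entries written during the split would be lost.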


  • Script to check a shared Exchange calendar and then email detail

    - by SJN
    We're running Server and Exchange 2003 here. There's a shared calendar which HR keep up-to-date detailing staff who are on leave. I'm looking for a VB Script (or alternate) which will extract the "appointment" titles of each item for the current day and then email the detail to a mail group, in doing so notifying the group with regard to which staff are on leave for the day. The resulting email body should be: Staff on leave today: Mike Davis James Stead @Paul Robichaux - ADO is the way I went for this in the end, here are the key component for those interested: Dim Rs, Conn, Url, Username, Password, Recipient Set Rs = CreateObject("ADODB.Recordset") Set Conn = CreateObject("ADODB.Connection") 'Configurable variables Username = "Domain\username" ' AD domain\username Password = "password" ' AD password Url = "file://./backofficestorage/domain.com/MBX/username/Calendar" 'path to user's mailbox and folder Recipient = "[email protected]" Conn.Provider = "ExOLEDB.DataSource" Conn.Open Url, Username, Password Set Rs.ActiveConnection = Conn Rs.Source = "SELECT ""DAV:href"", " & _ " ""urn:schemas:httpmail:subject"", " & _ " ""urn:schemas:calendar:dtstart"", " & _ " ""urn:schemas:calendar:dtend"" " & _ "FROM scope('shallow traversal of """"') " Rs.Open Rs.MoveFirst strOutput = "" Do Until Rs.EOF If DateDiff("s", Rs.Fields("urn:schemas:calendar:dtstart"), date) >= 0 And DateDiff("s", Rs.Fields("urn:schemas:calendar:dtend"), date) < 0 Then strOutput = strOutput & "<p><font size='2' color='black' face='verdana'><b>" & Rs.Fields("urn:schemas:httpmail:subject") & "</b><br />" & vbCrLf strOutput = strOutput & "<b>From: </b>" & Rs.Fields("urn:schemas:calendar:dtstart") & vbCrLf strOutput = strOutput & "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<b>To: </b>" & Rs.Fields("urn:schemas:calendar:dtend") & "<br /><br />" & vbCrLf End If Rs.MoveNext Loop Conn.Close Set Conn = Nothing Set Rec = Nothing After that, you can do what you like with srtOutput, I happened to use CDO to send an email: Set objMessage = CreateObject("CDO.Message") objMessage.Subject = "Subject" objMessage.From = "[email protected]" objMessage.To = Recipient objMessage.HTMLBody = strOutput objMessage.Send S


  • Why do I sometimes get 'sh: $'\302\211 ... ': command not found' in xterm/sh?

    - by amn
    Sometimes when I simply type a valid command like 'find ...', or anything really, I get back the following, which is completely unexpected and confusing (... is command name I type): sh: $'\302\211...': command not found There is some corruption going on I think. I don't use color in my prompt, I am using the Bash shell in POSIX mode as sh (chsh to /bin/sh and so on - $SHELL is sh). What is going on and why does this keep happening? Anything I can debug? I think this is more of an xterm issue than sh, or at least a combination of the two. Files, for context: My /etc/profile, as distributed with Arch Linux x86-64: # /etc/profile #Set our umask umask 022 # Set our default path PATH="/usr/local/sbin:/usr/local/bin:/usr/bin" export PATH # Load profiles from /etc/profile.d if test -d /etc/profile.d/; then for profile in /etc/profile.d/*.sh; do test -r "$profile" && . "$profile" done unset profile fi # Source global bash config if test "$PS1" && test "$BASH" && test -r /etc/bash.bashrc; then . /etc/bash.bashrc fi # Termcap is outdated, old, and crusty, kill it. unset TERMCAP # Man is much better than us at figuring this out unset MANPATH My /etc/shrc, which I created as a way to have sh parse some file on startup, when non-login shell. This is achieved using ENV variable set in /etc/environment with the line ENV=/etc/shrc: PS1='\u@\H \w \$ ' alias ls='ls -F --color' alias grep='grep -i --color' [ -f ~/.shrc ] && . ~/.shrc My ~/.profile, I am launching X when logging in through first virtual tty: [[ -z $DISPLAY && $XDG_VTNR -eq 1 ]] && exec xinit -- -dpi 111 My ~/.xinitc, as you can see I am using the system as a Virtual Box guest: xrdb -merge ~/.Xresources VBoxClient-all awesome & exec xterm And finally, my ~/.Xresources, no fancy stuff here I guess: *faceName: Inconsolata *faceSize: 10 xterm*VT100*translations: #override <Btn1Up>: select-end(PRIMARY, CLIPBOARD, CUT_BUFFER0) xterm*colorBDMode: true xterm*colorBD: #ff8000 xterm*cursorColor: S_red Since ~/.profile references among other things /etc/bash.bashrc, here is its content: # # /etc/bash.bashrc # # If not running interactively, don't do anything [[ $- != *i* ]] && return PS1='[\u@\h \W]\$ ' PS2='> ' PS3='> ' PS4='+ ' case ${TERM} in xterm*|rxvt*|Eterm|aterm|kterm|gnome*) PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033]0;%s@%s:%s\007" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"' ;; screen) PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033_%s@%s:%s\033\\" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"' ;; esac [ -r /usr/share/bash-completion/bash_completion ] && . /usr/share/bash-completion/bash_completion I have no idea what that case statement does, by the way, it does look a bit suspicious though, but then again, who am I to know.
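
    \302\211 is the UTF-8 encoding of U+0089, a C1 control character, so something between the keyboard and the shell is injecting a control byte in front of the command. A diagnostic sketch (not a fix) to see exactly which bytes the shell recorded for the last failing command:

      fc -ln -1 | od -c | head

    If the stray \302\211 shows up there, the corruption is already in the input the shell received (terminal, paste buffer, or keymap) rather than in anything sh itself is doing.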


  • Reasons for missing IP info in `last` output on pts logins?

    - by Mike Pennington
    I have five CentOS 6 linux systems at work, and encountered a rather strange issue that only seems to happen with my userid across all the linux systems I have... This is an example of the problem from entries I excepted from the last command... mpenning pts/19 Fri Nov 16 10:32 - 10:35 (00:03) mpenning pts/17 Fri Nov 16 10:21 - 10:42 (00:21) bill pts/15 sol-bill.local Fri Nov 16 10:19 - 10:36 (00:16) mpenning pts/1 192.0.2.91 Fri Nov 16 10:17 - 10:49 (12+00:31) kkim14 pts/14 192.0.2.225 Thu Nov 15 18:02 - 15:17 (4+21:15) gduarte pts/10 192.0.2.135 Thu Nov 15 12:33 - 08:10 (11+19:36) gduarte pts/9 192.0.2.135 Thu Nov 15 12:31 - 08:10 (11+19:38) kkim14 pts/0 :0.0 Thu Nov 15 12:27 - 15:17 (5+02:49) gduarte pts/6 192.0.2.135 Thu Nov 15 11:44 - 08:10 (11+20:25) kkim14 pts/13 192.0.2.225 Thu Nov 15 09:56 - 15:17 (5+05:20) kkim14 pts/12 192.0.2.225 Thu Nov 15 08:28 - 15:17 (5+06:49) kkim14 pts/11 192.0.2.225 Thu Nov 15 08:26 - 15:17 (5+06:50) dspencer pts/8 192.0.2.130 Wed Nov 14 18:24 still logged in mpenning pts/18 alpha-console-1. Mon Nov 12 14:41 - 14:46 (00:04) You can see two of my pts login entries above that do not have a source IP address associated with them. My CentOS machines have as many as six other users that share the systems, but the mpenning userid is the only one that has this issue. Approximately 5% of my logins see this issue, but no other usernames exhibit this behavior. Questions Given the kind of scripts I keep on these systems (which control much of our network infrastructure), I'm a little spooked by this and would like to understand what would cause my logins to occasionally miss source addresses. Is there anything (other than malicious activity) that would reasonably explain the behavior? Other than bash history timestamping, are there other things I can do to track the issue down? Informational Since this started happening, I enabled bash history time-stamping (i.e. HISTTIMEFORMAT="%y-%m-%d %T " in .bash_profile) and also added a few other bash history hacks; however, that does not give clues to what happened during the previous occurrences. All the systems run CentOS 6.3... [mpenning@typo ~]$ uname -a Linux typo.local 2.6.32-279.9.1.el6.x86_64 #1 SMP Tue Sep 25 21:43:11 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux [mpenning@typo ~]$ EDIT If I use last -i mpenning, I see entries like this... mpenning pts/19 0.0.0.0 Fri Nov 16 10:32 - 10:35 (00:03) mpenning pts/17 0.0.0.0 Fri Nov 16 10:21 - 10:42 (00:21)
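
    Since sshd logs the client address independently of wtmp, one more data point is to line the odd pts sessions up against the authentication log for the same timestamps (a sketch; adjust the paths for log rotation as needed):

      grep 'Accepted .* for mpenning' /var/log/secure*
      last -i mpenning | head

    A pts entry with no host (0.0.0.0 with -i) normally means the pseudo-terminal was opened locally, e.g. from screen/tmux, an X terminal, or a su/sudo session, rather than directly by sshd, which would narrow the search to what was running on the box at those times.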

    Read the article
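
    A small sketch related to the question above, assuming util-linux's utmpdump is available (it normally is on CentOS 6): it dumps the raw wtmp records behind last's output, so you can confirm whether the host/address fields were genuinely empty at login time or are merely not being displayed. The ~/.login_audit file in the second command is a hypothetical addition, not something from the original post.

        # Show the raw wtmp records for the affected user; empty bracketed
        # host/address fields mean the source really was not recorded.
        utmpdump /var/log/wtmp | grep mpenning

        # Hypothetical helper for future logins (e.g. appended from
        # ~/.bash_profile): record what sshd itself saw for each session.
        echo "$(date '+%F %T') SSH_CONNECTION=${SSH_CONNECTION:-none} tty=$(tty)" >> ~/.login_audit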

  • Email client won't connect to SMTP Authentication server

    - by Jason
    I'm having trouble setting up SMTP AUTH for my Ubuntu email server. I have followed Ubuntu's own guide for SMTP AUTH (https://help.ubuntu.com/14.04/serverguide/postfix.html), but my email client, Thunderbird, gives this error: "lost connection to SMTP-client 127.0.0.1." I can't add new users to Thunderbird either because of this connection problem. Do I have to alter any settings in Thunderbird? I also tried to make Thunderbird use SSL for IMAP, but that does not work either. I restarted Postfix and Dovecot to look for errors, but both run just fine. Prior to the SMTP AUTH changes, Thunderbird could connect to my server and send mail without problems. This is my main.cf file for Postfix; it looks just like the one in the Ubuntu guide above. readme_directory = no # TLS parameters #smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache myhostname = mail.mysite.com mydomain = mysite.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = $mydomain mydestination = mysite.com #relayhost = smtp.192.168.10.1.com mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.10.0/24 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all home_mailbox = Maildir/ mailbox_command = #SMTP AUTH smtpd_sasl_type = dovecot smtpd_recipient_restrictions=permit_mynetworks, permit_sasl_authenticated,reject_unauth_destination smtpd_sasl_local_domain = smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous broken_sasl_auth_clients = yes smtpd_tls_auth_only = no smtp_tls_security_level = may smtpd_tls_security_level = may smtp_tls_note_starttls_offer = yes smtpd_tls_key_file = /etc/ssl/private/smtpd.key smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem smtpd_tls_loglevel = 1 smtpd_tls_received_header = yes This is my Dovecot configuration in 10-master.conf: service imap-login { inet_listener imap { #port = 143 } inet_listener imaps { #port = 993 #ssl = yes } # Number of connections to handle before starting a new process. Typically # the only useful values are 0 (unlimited) or 1. 1 is more secure, but 0 # is faster. <doc/wiki/LoginProcess.txt> #service_count = 1 # Number of processes to always keep waiting for more connections. #process_min_avail = 0 # If you set service_count=0, you probably need to grow this. #vsz_limit = $default_vsz_limit } service pop3-login { inet_listener pop3 { #port = 110 } inet_listener pop3s { #port = 995 #ssl = yes } } service lmtp { unix_listener lmtp { #mode = 0666 } # Create inet listener only if you can't use the above UNIX socket #inet_listener lmtp { # Avoid making LMTP visible for the entire internet #address = #port = #} } service imap { # Most of the memory goes to mmap()ing files. You may need to increase this # limit if you have huge mailboxes. #vsz_limit = $default_vsz_limit # Max. number of IMAP processes (connections) #process_limit = 1024 } service pop3 { # Max. number of POP3 processes (connections) #process_limit = 1024 } service auth { unix_listener auth-userdb { #mode = 0600 #user = #group = } # Postfix smtp-auth unix_listener /var/spool/postfix/private/auth { mode = 0660 user = postfix } } service dict { # If dict proxy is used, mail processes should have access to its socket. # For example: mode=0660, group=vmail and global mail_access_groups=vmail unix_listener dict { #mode = 0600 #user = #group = } } I did add auth_mechanisms = plain login to 10-auth.conf as well. (A short troubleshooting sketch follows this entry.)

    Read the article
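
    A hedged troubleshooting sketch for the question above; the port and log path are the usual Ubuntu/Postfix defaults and are assumptions, not details taken from the original post.

        # 1. Confirm the Dovecot auth socket that Postfix is told to use
        #    exists with the ownership set in 10-master.conf:
        ls -l /var/spool/postfix/private/auth

        # 2. Talk to Postfix by hand and check that AUTH is offered after
        #    STARTTLS (type "EHLO test.local" once connected and look for a
        #    "250-AUTH PLAIN LOGIN" line):
        openssl s_client -connect 127.0.0.1:25 -starttls smtp

        # 3. Watch the mail log while Thunderbird retries the connection:
        tail -f /var/log/mail.log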

  • File copying software to do this kind of work... in Windows 7 32 bit

    - by Senthil
    I need software (Windows 7, 32-bit) to help me with this process: I have my documents, music, video clips, movies and pictures on my hard disk. They are not scattered around the system; everything is inside C:\Senthil\. At the end of every week, I want to plug in an external hard disk and run a program that makes sure whatever is inside C:\Senthil\ is also present on the external disk. Files deleted from C:\Senthil\ should be deleted there, new files should be copied, and so on; at the end of the process, every bit inside the source folder on my internal disk should be on my external disk. A couple of important requirements and points: I do NOT need multiple versions or historic versions. I don't need the previous versions of my files; I only want the latest copy to be present in my "backup". Incremental backup makes sense: if files were not touched since the last backup, they need not be copied. The size of my folder will run into GBs and in a year or two will go into TBs, but I will make sure the size of the external HDD is equal to or bigger than my source folder. I do not want it to run automatically, because when I accidentally delete a file in my source, it will delete the one in the backup (I know this is why we have versioning facilities). I just want to be able to run it manually so that I am in control of when the backup is made and what is backed up, and I should be able to pick something from the backup and restore it to the source folder in the above situation. Is there any software that will let me do exactly this? I don't want any other "smart" facility of the software to interfere with this process. I know what I want and the software can keep its smartness to itself :D The main reason I am asking this question is that I am a software developer and I could write this software myself, but I am a little constrained by time at the moment and I want to know if there is an existing program that can do this. Kindly don't worry about earthquakes or fire or snowstorms and bring up the "in case of a natural disaster your backup will also be in the damage zone and will be lost" argument, because: I will have bigger things to worry about than my holiday memories. I don't think I will digitally store any life-ruining documents. This backup is only to avoid the inconvenience of obtaining a new copy of stuff that I have, not to protect it against the end of the world. I am more worried about power surges in my area frying my system, hard disk failure, children who merrily hit Delete, teens who hit Shift + Delete, or myself getting a little careless at times! In short: is there a file/folder syncing program that listens to what I say and doesn't try to act smart? Please forgive me if I sound arrogant :) (A minimal mirroring sketch follows this entry.)

    Read the article
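
    One possible sketch for the workflow described above, using robocopy, which ships with Windows 7; the external drive letter E: and the log path are assumptions. /MIR mirrors the source tree, copying new and changed files and deleting files that no longer exist in the source, and it only touches files that differ, so repeated manual runs are effectively incremental.

        :: Run manually whenever a backup is wanted (drive letter assumed).
        robocopy C:\Senthil E:\Senthil /MIR /R:2 /W:5 /LOG:E:\senthil-backup.log

    Restoring an accidentally deleted file is then just a manual copy in the opposite direction.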

  • Apache2 Virtual Host broken; displayed default index.html on subdomain, but correct content on www.subdomain

    - by Robert K
    I've got a Linode configured as an Ubuntu 10.04.2 web server with Apache 2.2.14. I have a total of 4 sites, all defined under /etc/apache2/sites-available as virtual hosts. All sites have almost identical configurations, and all sites but the last work successfully. default: (www.)exampleadnetwork.com (www.)example.com reseller.example.com trouble: client1.example.com I keep getting this page when I visit the client1.example.com site: It works! This is the default web page for this server. The web server software is running but no content has been added, yet. In my ports.conf file I have the NameVirtualHost correctly set to my IP address on port 80. If I access the "www.client1.example.com" alias the site works! If I access it without the www I see the "It works" excerpt posted above. Even apache2ctl -S shows that my vhost file parses correctly and is added to the mix. My vhost configuration file is as follows: <VirtualHost 127.0.0.1:80> ServerAdmin [email protected] ServerName client1.example.com ServerAlias client1.example.com www.client1.example.com DocumentRoot /srv/www/client1.example.com/public_html/ ErrorLog /srv/www/client1.example.com/logs/error.log CustomLog /srv/www/client1.example.com/logs/access.log combined <directory /srv/www/client1.example.com/public_html/> Options -Indexes FollowSymLinks AllowOverride None Order allow,deny Allow from all </directory> </VirtualHost> The other sites are variations of: <VirtualHost 127.0.0.1:80> ServerAdmin [email protected] ServerName example.com ServerAlias example.com www.example.com DocumentRoot /srv/www/example.com/public_html/ ErrorLog /srv/www/example.com/logs/error.log CustomLog /srv/www/example.com/logs/access.log combined </VirtualHost> The only site that differs is the other subdomain: <VirtualHost 127.0.0.1:80> ServerAdmin [email protected] ServerName reseller.example.com ServerAlias reseller.example.com DocumentRoot /srv/www/reseller.example.com/public_html/ ErrorLog /srv/www/reseller.example.com/logs/error.log CustomLog /srv/www/reseller.example.com/logs/access.log combined </VirtualHost> Filenames are the FQDN without the www. prefix. I've followed this advice, but still cannot access the subdomain properly. (A configuration sketch follows this entry.)

    Read the article
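
    A hedged sketch for the question above, not a confirmed diagnosis: when ports.conf declares NameVirtualHost on the public IP while the <VirtualHost> stanzas bind 127.0.0.1:80, Apache's name-based matching never applies to requests arriving on the public address, and the default site is served instead. Using the same wildcard on both sides avoids the mismatch; the stanza below is illustrative and trimmed, not the asker's actual file.

        # /etc/apache2/ports.conf
        NameVirtualHost *:80
        Listen 80

        # sites-available/client1.example.com (trimmed)
        <VirtualHost *:80>
            ServerName client1.example.com
            ServerAlias www.client1.example.com
            DocumentRoot /srv/www/client1.example.com/public_html/
        </VirtualHost>

    After reloading, apache2ctl -S should list client1.example.com in the *:80 name-based group.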

< Previous Page | 557 558 559 560 561 562 563 564 565 566 567 568  | Next Page >