Search Results

Search found 5307 results on 213 pages for 'mr typing sounds'.


  • Deploy our own software using Puppet?

    - by Ken
    (Apologies in advance for the stupidity in this question. I'm normally a programmer, not a sysadmin, but I've taken it upon myself to automate some things, and clean up some other things which are automated but not in the prettiest way. :-) I've been looking around at various tools for automation of software deployment to a bunch of servers, like cfengine, Puppet, and Chef. So far, Puppet looks the most appealing, but I've certainly not committed to anything yet. These tools all look like they can do a great job of keeping a bunch of servers up-to-date with prepackaged software. What I don't get is: how does one use a tool (like Puppet) to manage deployments of our own internal software? I think I'm at a loss because I've seen a thousand tutorials showing how to keep Apache ensure => latest (which is pretty cool), but nothing that quite corresponds to my use-case today, which is something more like:
      1. a human being pushes The Button,
      2. pull branch A from the version-control repository B,
      3. run command C to compile it,
      4. copy the binaries D to servers E1 through E10,
      5. on each server, run command F to make all changes take effect.
    Puppet sounds great, and I totally see the advantage of declarative, idempotent configuration over some shell scripts, but I've not seen any tutorials for "you want to update your shell scripts to Puppet (or Chef, or cfengine) so here's what you should do". Is there such a thing? Is it obvious to other people how to take the things provided in the Puppet docs and replicate the behavior I want? Am I just not getting it? What it's sounding like to me, so far, is that the human being (#1) would manually package the software (#2 and #3) external to Puppet, manually update the Puppet config, which would trigger Puppet to update the servers ... maybe? (I'm a little confused here, as I'm sure you can tell.) Thanks!
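
    For reference, a minimal shell sketch of the push-button flow described in steps 2-5 above (every name below - repo, branch, build command, host list, restart command - is a placeholder for the asker's A-F, not anything Puppet-specific); an orchestration layer, or just this script run by hand, would be "The Button":

        #!/usr/bin/env bash
        # Hypothetical deploy sketch: all names are placeholders for A-F above.
        set -euo pipefail

        git clone --branch branch-A git@example.com:repo-B.git build   # pull A from B
        cd build
        make release                                                   # "command C"
        for host in server-e{1..10}; do                                # "servers E1..E10"
            scp -r dist/ "deploy@${host}:/opt/app/"                    # copy "binaries D"
            ssh "deploy@${host}" "sudo systemctl restart app"          # "command F"
        done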

    Read the article

  • Work from home on an iPad?

    - by Alex Basson
    The situation: My wife has a 13" MacBook Pro that she uses for email, Facebook, web surfing, and working from home. I'm about to buy us our first iPad. My wife's brother's computer just went belly-up, and she's contemplating giving him her MacBook and just using the iPad. The question is whether or not this is possible or realistic. Obviously, the iPad is well-suited for the email/web/Facebook tasks, but the working-from-home thing is an absolute must -- if the iPad can't handle that, it's a deal-breaker. For my wife, working from home means two things:
      1. Accessing her workplace computer's Windows Vista desktop, which she currently does via Remote Desktop.
      2. Editing Office documents locally, which she currently syncs via Dropbox.
    Being able to edit documents locally is important, because sometimes she will download documents and edit them when she doesn't have network access (e.g. on the subway). I'm more than happy to get a keyboard dock for her, so typing won't be an issue. Are there any iPad apps she can use to access her work computer and edit her work files? Thanks for any suggestions!

    Read the article

  • How to install Windows 7 on a MacBook with HDDs and no optical drive, without rEFIt

    - by user1238528
    I just removed the SuperDrive from my MacBook Pro and replaced it with an SSD. So now my laptop has an SSD and an HDD, but no optical drive. I have Lion on the SSD and I want to install Windows 7 on the HDD. Unfortunately, Boot Camp will only install Windows from the Windows DVD. I have made a bootable Windows 7 thumb drive, but my MacBook Pro won't boot off it. So my question is: how can I install Windows on the other HDD? I have thought about maybe using Oracle VirtualBox to install it on that hard drive, but I don't know if that would allow me to boot directly into Windows, and I really don't want to go down the whole virtualization route. I know I could just take out the SSD, put back in the optical drive, run the Windows 7 DVD, take the optical drive back out, and put the SSD back in. But that sounds like a nightmare. Also, I really don't want to use things like rEFIt. Any advice?

    Read the article

  • How can I implement ansible with per-host passwords, securely?

    - by supervacuo
    I would like to use ansible to manage a group of existing servers. I have created an ansible_hosts file, and tested successfully (with the -K option) with commands that only target a single host:

        ansible -i ansible_hosts host1 --sudo -K # + commands ...

    My problem now is that the user passwords on each host are different, but I can't find a way of handling this in Ansible. Using -K, I am only prompted for a single sudo password up-front, which then seems to be tried for all subsequent hosts without prompting:

        host1 | ...
        host2 | FAILED => Incorrect sudo password
        host3 | FAILED => Incorrect sudo password
        host4 | FAILED => Incorrect sudo password
        host5 | FAILED => Incorrect sudo password

    Research so far:
      - a StackOverflow question with one incorrect answer ("use -K") and one response by the author saying "Found out I needed passwordless sudo"
      - the Ansible docs, which say "Use of passwordless sudo makes things easier to automate, but it's not required." (emphasis mine)
      - this security StackExchange question which takes it as read that NOPASSWD is required
      - the article "Scalable and Understandable Provisioning...", which says: "running sudo may require typing a password, which is a sure way of blocking Ansible forever. A simple fix is to run visudo on the target host, and make sure that the user Ansible will use to login does not have to type a password"
      - the article "Basic Ansible Playbooks", which says "Ansible could log into the target server as root and avoid the need for sudo, or let the ansible user have sudo without a password, but the thought of doing either makes my spleen threaten to leap up my gullet and block my windpipe, so I don't" -- my thoughts exactly, but then how to extend beyond a single server?
      - ansible issue #1227, "Ansible should ask for sudo password for all users in a playbook", which was closed a year ago by mpdehaan with the comment "Haven't seen much demand for this, I think most people are sudoing from only one user account or using keys most of the time."
    So... how are people using Ansible in situations like these? Setting NOPASSWD in /etc/sudoers, reusing a password across hosts or enabling root SSH login all seem rather drastic reductions in security.
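
    One approach worth noting (a hedged sketch: the variable name assumes the Ansible 1.9+/2.x "become" vocabulary, and the host name and password are placeholders) is to keep a per-host sudo password as a vault-encrypted host variable instead of disabling password prompts:

        # hypothetical per-host secret, kept out of plain text with ansible-vault
        mkdir -p host_vars
        cat > host_vars/host2.yml <<'EOF'
        ansible_become_pass: "host2s-sudo-password"
        EOF
        ansible-vault encrypt host_vars/host2.yml

        # run with privilege escalation enabled, supplying only the vault password
        ansible -i ansible_hosts host2 -b -m ping --ask-vault-pass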

    Read the article

  • Is reliability reputation of mechanical keyboards overblown?

    - by Rarst
    A while back I worked up to finally buying a mechanical keyboard (~$100 range, "black" switches) and was initially quite content with the purchase. However, just outside the first year (read: as soon as the warranty expired) it started to develop repeat issues (press once, get a chain of the letter repeated) on multiple keys. It doesn't react to generic cleaning (up to compressed air), and searching the Internet shows a noticeable number of people with similar-to-identical issues, spanning years. This makes me severely hesitant to buy another mechanical keyboard, considering:
      - every other keyboard I ever owned, including ultra-cheap crap, managed to last longer than that
      - the typing experience is nice, but not lifechanging-fan-forever nice for me
      - my choice of mechanical keyboards is severely limited: not many brands are represented in the local market (and primarily crazy-looking gamer models), and the need for a Russian (not to mention Russian and Ukrainian, if possible) layout excludes international ordering
      - the price tag for the meek year of use I got out of it is plain demoralizing
    It is obvious mechanical keyboards have their fans, but shopping around for the "best fit" or getting into multiple-hundreds price tags is probably not something I am highly interested in. Considering my constraints and bad experience with reliability, is it practical for me to sink more money into buying mechanical keyboard(s) again? In other words - manufacturers are beaming about how crazy reliable mechanical keyboards are. Are active long-time users of such keyboards confidently of the same opinion?

    Read the article

  • connecting to server with multiple nics in other vlan

    - by Thierry
    I have a Windows 2003 server with 3 NICs on 3 VLANs (this is in domain 1). NIC 1 has a default gateway to my router/firewall (SonicWall). On NICs 2 and 3 I have left the gateway empty, because that is what is advised everywhere. Within this domain and VLANs 1-3 everything works fine. BUT... I have a second domain (domain 2) with a 4th VLAN (all 4 VLANs connected to the same router/firewall) from which my clients need to access the 2003 server in domain 1 (it's my antivirus management console for both domains). When I ping the server from VLAN 4 by its FQDN, it randomly chooses the IP of NIC 1, 2 or 3 on my 2003 server (logically, because that server is known in DNS with its 3 IP addresses, and that is needed for my VLANs 1-3). I don't really have a problem with that. BUT, I only get an answer from NIC 1 (which sounds logical to me, because it's the only one with a gateway). It is not a router problem, because I'm testing in this phase, and a ping from VLAN 4 to any machine in VLAN 1, 2 or 3 that has 1 NIC works just fine. If I add a gateway to NIC 2 and NIC 3, I get an answer from all 3 NICs and this works fine. But I know it's advised not to do that. Can anyone give me advice in this particular case? Would it really be a problem to add a gateway to NICs 2 and 3? They would be pointing to the same router/firewall (only with a different IP address, based on the VLAN). Or is there another good solution to fix this problem? Thanks in advance, Thierry.
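
    The standard way to reach a remote subnet from an interface that has no default gateway is a persistent static route rather than a second default gateway. A hedged sketch (the VLAN 4 subnet and the router's address on NIC 2's subnet are assumptions), run on the 2003 server:

        rem hypothetical subnets: VLAN 4 = 10.0.4.0/24, router's address on NIC 2's subnet = 10.0.2.1
        route -p add 10.0.4.0 mask 255.255.255.0 10.0.2.1
        route print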

    Read the article

  • Fast extraction of a time range from syslog logfile?

    - by mike
    I've got a logfile in the standard syslog format. It looks like this, except with hundreds of lines per second:

        Jan 11 07:48:46 blahblahblah...
        Jan 11 07:49:00 blahblahblah...
        Jan 11 07:50:13 blahblahblah...
        Jan 11 07:51:22 blahblahblah...
        Jan 11 07:58:04 blahblahblah...

    It doesn't roll at exactly midnight, but it'll never have more than two days in it. I often have to extract a timeslice from this file. I'd like to write a general-purpose script for this, that I can call like:

        $ timegrep 22:30-02:00 /logs/something.log

    ...and have it pull out the lines from 22:30, onward across the midnight boundary, until 2am the next day. There are a few caveats:
      - I don't want to have to bother typing the date(s) on the command line, just the times. The program should be smart enough to figure them out.
      - The log date format doesn't include the year, so it should guess based on the current year, but nonetheless do the right thing around New Year's Day.
      - I want it to be fast -- it should use the fact that the lines are in order to seek around in the file and use a binary search.
    Before I spend a bunch of time writing this, does it already exist?
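
    For reference, a minimal linear-scan sketch (no binary search, so it does not meet the speed requirement, and it prints matching times from both days if the file spans two); the function name simply mirrors the asker's hypothetical timegrep:

        # usage: timegrep 22:30-02:00 /logs/something.log
        timegrep() {
            local range="$1" file="$2"
            awk -v start="${range%-*}" -v end="${range#*-}" '
                { t = substr($3, 1, 5) }                 # HH:MM from "Jan 11 07:48:46"
                start <= end { if (t >= start && t <= end) print; next }
                { if (t >= start || t <= end) print }    # requested range wraps past midnight
            ' "$file"
        }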

    Read the article

  • Windows 7 starts getting sluggish over a few days

    - by munrobasher
    Myself and the other developer are running Windows 7 Enterprise 64-bit with 8GB RAM on different Gigabyte motherboards with quad-core Intel CPUs. Most of the time, it runs like a dream. We use VMware Workstation a lot (hence the 8GB) and that works well. Except... now and then, after the PCs have been on for a few days, the whole system starts getting really sluggish doing certain tasks. The other developer's system is far worse than mine, with it taking up to a minute to launch IE. Today, mine has gone sluggish but nowhere near as bad. For example, normally when I click on a new tab in IE, it's instant. Today, there's an obvious delay. Right-clicking in this window to trigger iSpell is normally instant; right now it takes about five seconds. I've got Resource Monitor open on my second monitor, and when I did that right-click, there was no obvious peak in CPU, disk or memory. A reboot does fix it, so it does sound like a resource issue, but I haven't a clue what might be to blame. The two computers have similarities (same spec) but also differences (like motherboard, RAM & CPU models). So I guess the question is: any pointers on diagnosing why a PC is sluggish? What could cause such a right-click slowdown in IE, for example? It sounds like such a simple operation.

    NOTE: whilst typing this message alone, it was fine performance-wise. I can click around the page no problem but right-click is still noticeably slow. Will reboot over lunch...

    Cheers, Rob.

    Read the article

  • Ubuntu xrandr rotate issue

    - by user83544
    I've just bought a second monitor for my PC which happens to be a pivot monitor. I've already read lots of forums related to my problem but haven't come across a solution - I have the same symptoms as dozens of posts, but no matter what I try it just doesn't work. I've already changed the xorg.conf file and added in the device section, just under Driver "nvidia", the following for my second monitor:

        Option "RandRRotation" "on"

    When I save and reboot, I try to rotate my screen with the nvidia X server settings by choosing the second monitor and clicking either "left" or "right" for the rotation. It immediately exits the nvidia settings window and does nothing. I tried within the terminal by typing:

        xrandr -o right

    I get the following error:

        X Error of failed request: BadMatch (invalid parameter attributes)
        Major opcode of failed request: 154 (RANDR)
        Minor opcode of failed request: 2 (RRSetScreenConfig)
        Serial number of failed request: 14
        Current serial number in output stream: 14

    I actually manage to rotate it with Option "Rotate" "CCW" instead of "RandRRotation". The problem with this solution is that you get the second monitor in the right position, but any window you open on that screen is practically unchangeable. You can't change the size nor move it, making it useless for reading PDFs, which is the main reason why I bought this second screen to help me write my thesis. Any help is really appreciated.

        hiram@hiram-linux:~$ sudo lshw -c video
        *-display
            description: VGA compatible controller
            product: nVidia Corporation
            vendor: nVidia Corporation
            physical id: 0
            bus info: pci@0000:01:00.0
            version: a1
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
            configuration: driver=nvidia latency=0
            resources: irq:16 memory:f8000000-f9ffffff memory:d8000000-dfffffff memory:d4000000-d7ffffff ioport:dc00(size=12 memory:fbd80000-fbdfffff
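
    For what it's worth, on a driver that exposes RandR 1.2 per-output rotation, the rotation can be requested for just the second monitor rather than the whole screen (a hedged sketch; the output name below is an assumption, so check xrandr's listing first):

        xrandr                                   # list outputs and find the pivot monitor's name
        xrandr --output DVI-I-1 --rotate left    # hypothetical output name; right/normal also valid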

    Read the article

  • Why does a document in Word 2007 stop recognizing the mouse after the document loses focus?

    - by alt234
    When I open a document in Word 2007, everything works fine: I can edit, highlight text, etc. However, the instant Word loses focus, when I focus back the document doesn't recognize anything the mouse does. The tabbed menu at the top seems to recognize the mouse but the document itself does not. I can scroll through via the scroll-wheel and I can type. However, typing just shows up where the mouse cursor last was before focus was taken away. I've tried clearing some Word data registry keys. I've also found that some Word add-ins can cause problems; LaserFiche is one I see mentioned a lot. As far as I can tell I have no add-ins though. Any ideas? It's crazy-annoying.

    UPDATE:
      - Word is the only program that has this problem.
      - Typically I have Toad (Oracle DB management app), an XP virtual machine with various apps running on it, Skype, Google Talk, and maybe a handful of other programs open at any given time... Windows Media Player, Outlook.
      - Yes, this happens even if nothing else is running. From a fresh restart as well.
      - I'm running Vista 64 with SP1.
      - According to Windows Update, I have the latest of everything. This has been happening for a couple of months now. Just never took the time to look into it because I usually never have to use Word.

    Read the article

  • UW-IMAP server, high load for one user

    - by Bruce Garlock
    We have been experiencing a very strange anomaly with one specific user on our UW-IMAP server. We have about 75 users using the server, and one particular user, who is about in the middle as far as used storage goes, keeps having issues with slow speed. Most of our users use Thunderbird 2 or Thunderbird 3 -- mostly 2, because of the performance issues we have had with 3. This user was on 3, and I downgraded him to 2. The performance has gotten better, but according to the imapd processes on the server, his username is using the most CPU % and CPU time. I've already done all the usual troubleshooting: started the profile from scratch, compacted folders, re-indexed, moved him to a newer, faster computer, etc. Still, this user's imapd process is always using the most CPU on the server. For troubleshooting, we set up another user which has more usage, folders, etc. than he does, but we don't see that user's imapd process taking up most of the CPU. So, it almost sounds like a particular email may be the culprit, but how can we find it, if that's the problem? This has been going on for a while, and he is a management person, so his patience is about to end. Does anyone have any ideas?
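
    If an oversized folder or message is the suspicion, one rough triage sketch (hedged: the username is a placeholder and the paths assume UW-IMAP's usual layout, with the inbox in /var/mail and other folders stored as mbox files under the user's home directory):

        du -ah /var/mail/problemuser /home/problemuser 2>/dev/null | sort -h | tail -20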

    Read the article

  • How to connect via SSH to a linux mint system that is connected via OpenVPN

    - by Hilyin
    Is there a way to make SSH traffic not get sent through the VPN, so that when my computer is connected to a VPN it can still be reached via SSH on its non-VPN IP? I am using Linux Mint 13. Thank you for your help! These are the instructions I followed to set up the VPN:
      1. Open Terminal.
      2. Type: sudo apt-get install network-manager-openvpn
      3. Press Y to continue.
      4. Type: sudo restart network-manager
      5. Download the BTGuard certificate (CA) by typing: sudo wget -O /etc/openvpn/btguard.ca.crt http://btguard.com/btguard.ca.crt
      6. Click on the Network Manager icon, expand VPN Connections, and choose Configure VPN.
      7. A Network Connections window will appear with the VPN tab open. Click Add.
      8. A Choose A VPN Connection Type window will open. Select OpenVPN in the drop-down menu and click Create.
      9. In the Editing VPN connection window, enter the following:
             Connection name: BTGuard VPN
             Gateway: vpn.btguard.com
             (Optional: manually select your server location by using ca.vpn.btguard.com for Canada or eu.vpn.btguard.com for Germany.)
             Type: select Password
             User name: username
             Password: password
             CA Certificate: browse and select this file: /etc/openvpn/btguard.ca.crt
      10. Click Advanced... near the bottom of the window. Under the General tab, check the box next to Use a TCP connection.
      11. Click OK, then click Apply. Setup complete!

    How to connect:
      1. Click on the Network Manager icon in the panel bar.
      2. Click on VPN Connections.
      3. Select BTGuard VPN.
      4. The Network Manager icon will begin spinning. You may be prompted to enter a password. If so, this is your system account keychain password, NOT your BTGuard password.
      5. Once connected, the Network Manager icon will have a lock next to it indicating you are browsing securely with BTGuard.
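
    One general-purpose approach is source-based policy routing, so that anything answered from the machine's LAN address leaves via the LAN gateway even while the VPN holds the default route. A hedged sketch (the interface name, addresses, and table number are assumptions; substitute whatever the LAN side actually uses):

        # hypothetical values - replace with the real LAN interface, address and gateway
        LAN_IF=eth0 LAN_IP=192.168.1.50 LAN_GW=192.168.1.1 TABLE=100
        sudo ip route add default via "$LAN_GW" dev "$LAN_IF" table "$TABLE"
        sudo ip rule add from "$LAN_IP/32" table "$TABLE"
        # incoming SSH aimed at $LAN_IP is now answered out $LAN_IF instead of the tunnel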

    Read the article

  • Building vs buying a server for an academic lab [closed]

    - by Roy
    I'm looking for advice on the classic build vs buy question. We need a new Linux server to run Matlab computations on in our lab (academic). The Matlab Parallel Computing Toolbox licence allows up to 12 local workers, so we are aiming at a 12-core server with 4GB memory per core (total of 48GB). The system will have an SSD for the OS and a RAID 5 array (4 x 2TB) for data. I looked around and found a (relatively) cheap vendor, Silicon Mechanics, that offers a system to our liking (specs below) for $6732. However, buying the components from Newegg costs only $4464! The difference is $2268, which is 50% of the base cost. If buying from a company can be thought of as a sort of insurance, basically my premiums are 50% of the base cost, which to me sounds like a lot. Of course any downtime is bad, but the work is not "mission critical", i.e. if it takes a few days to fix when it breaks, it's not the end of the world. If it takes weeks to months, then it's a problem. If it breaks 2-3 times in 3 years, not too bad. If it breaks every month, not good. In terms of build experience, I set up a Linux cluster in grad school (from existing computers) and I build my home PCs, but I have never built a server before. The server components I'm thinking about:
      - 1 x SUPERMICRO SYS-7046T-6F 4U Tower Server Barebone Dual LGA 1366 Intel 5520 DDR3 1333/1066/800 ($1,050)
      - 12 x Kingston 4GB 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10600) ECC Unbuffered Server Memory ($420)
      - 2 x Intel Xeon E5645 Westmere-EP 2.4GHz LGA 1366 80W Six-Core ($1,116)
      - 4 x Seagate Constellation ES 2TB 7200 RPM SATA 6.0Gb/s 3.5" ($1,040)
      - 1 x SAMSUNG Internal DVD Writer Black SATA ($20)
      - 1 x Intel 520 Series 2.5" 180GB SATA III MLC SSD ($300)
      - 1 x LSI LSI00281 PCI-Express 2.0 x8 MD2 Low profile SATA / SAS MegaRAID SAS 9260CV-4i Controller Card ($695)

    Read the article

  • Driver denied access to PCI card

    - by Corin
    Alright, I asked this on Stack Overflow (here) and they suggested trying Server Fault to get help on permissions. So here's the deal. We designed a custom PCI card and wrote the driver for it. It's been working for years without problems, but now we've encountered one particular installation where it doesn't work. The problem is that we cannot connect to the PCI card to begin communication with it. We tried replacing the card and had the same problem. We had the motherboard replaced, thinking the PCI slots were bad. That didn't help either. We tried the cards in a different computer and they all worked. So it seemed to be something specific to this computer. The Windows Device Manager indicates the device is working properly and seems to have all the correct driver info. We now have this troublesome computer back at the office for testing. With the help of some extra debug info in the driver, we determined that we cannot connect because access is denied. Sounds like a permissions issue to me. I should note that we are logged into the system as a local administrator. So what configuration option in Windows can prevent access to a device?

    Read the article

  • Setting up ssh config file with id_rsa through tunnel

    - by Rubens
    I've been struggling to set up a valid configuration to open a connection to a third machine, passing through an intermediate one, using an id_rsa file (which asks me for a password) to connect to that third machine. I've asked this question in another forum, but I've received no answer that could be considered very helpful. The problem, better described, goes as follows:

        Local machine:        user1@localhost
        Intermediary machine: user1@inter
        Remote target:        user2@final

    I'm able to do the entire connection using a pseudo-tty:

        ssh -t inter ssh user2@final

    (this will ask me the password for the id_rsa file I have on machine "inter"). However, to speed things up, I'd like to set up my .ssh/config file so that I can simply connect to machine "final" using:

        ssh final

    What I've got so far -- which does not work -- is, in my .ssh/config file:

        Host inter
            User user1
            HostName inter.com
            IdentityFile ~/.ssh/id_rsa

        Host final
            User user2
            HostName final.com
            IdentityFile ~/.ssh/id_rsa_2
            ProxyCommand ssh inter nc %h %p

    The id_rsa file is used to connect to the middle machine (this requires no password typing), and the id_rsa_2 file is used to connect to machine "final" (this one requests a password). I've tried mixing in some LocalForward and/or RemoteForward fields, and putting the id_rsa files on both the first and second machines, but I could not seem to succeed with any configuration whatsoever. Hope somebody can help me here! Regards!

    P.S.: the thread I've tried to get some help from: http://www.linuxquestions.org/questions/linux-general-1/proxycommand-on-ssh-config-file-4175433750/
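
    A variant worth noting (hedged: it assumes a reasonably recent OpenSSH on the local machine, roughly 5.4+ for -W and 7.3+ for ProxyJump, and it does not need nc installed on the middle host):

        # in ~/.ssh/config -- same effect as the nc-based ProxyCommand, without nc
        Host final
            User user2
            HostName final.com
            IdentityFile ~/.ssh/id_rsa_2
            ProxyCommand ssh -W %h:%p inter
            # on OpenSSH 7.3+, the single line "ProxyJump inter" can replace ProxyCommand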

    Read the article

  • Snow Leopard takes a long time to connect to Windows/Samba server

    - by hood
    We run a very heterogeneous network here: there are some XP, Vista, 7, Leopard, and Snow Leopard clients, and Windows 2003 (one remaining legacy app), 2008, and Linux servers. The main file server runs Ubuntu Linux, has been added to the Windows domain, and has been used for many years; SBS 2008 is the PDC (the 2003 and 2008 machines are on the domain also). In Leopard there were no problems at all authenticating to the file servers. We've upgraded one of the Leopard iMacs to Snow Leopard, and the same problem occurs on a new MBP which came with the newer OS, as well as on a clean install on another iMac. It does not matter whether it's connected through wired or wireless. In the Finder, when clicking on the server - whether on first boot or after it is connected - it will display "Connecting..." for up to a few minutes before either generally working (if the username/password is in the keychain) or displaying "Connection Failed" - at which point clicking "Connect As" and typing in the username/password will take some more time and eventually work. Sometimes it will display "Connecting..." indefinitely. (I've left it as long as 15 minutes before trying something else.) Accessing shares on the 2003 and SBS servers has the problem (so I don't think it's a Samba server issue). The Server 2008 Standard is connecting instantly at the moment. Accessing the share through an alias/Stacks doesn't have this problem. Leopard and Windows clients still have no problem. I've searched Google but it hasn't yielded any working results. How do I get rid of this delay?

    Read the article

  • Cisco access-list confusion

    - by LonelyLonelyNetworkN00b
    I'm having trouble implementing access lists on my ASA 5510 (8.2) in a way that makes sense to me. I have one access list for every interface on the device. The access lists are added to the interfaces via the access-group command. Let's say I have these:

        access-group WAN_access_in in interface WAN
        access-group INTERNAL_access_in in interface INTERNAL
        access-group Production_access_in in interface PRODUCTION

    WAN has security level 0, INTERNAL security level 100, and PRODUCTION security level 50. What I want to do is have an easy way to poke holes from Production to Internal. This seems to be pretty easy, but then the whole notion of security levels doesn't seem to matter any more. I then can't exit out the WAN interface: I would need to add an ANY ANY access list, which in turn opens access completely to the INTERNAL net. I could solve this by issuing explicit DENY ACEs for my internal net, but that sounds like quite the hassle. How is this done in practice? In iptables I would use logic something like this: if source equals production-subnet and outgoing interface equals WAN, ACCEPT.
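
    For illustration, the usual pattern on an ASA (a hedged sketch; the subnets, host, and port are assumptions) is to permit the specific Production-to-Internal holes, deny the rest of the Internal net, and then permit everything else so Internet-bound traffic still works:

        access-list Production_access_in extended permit tcp 10.0.50.0 255.255.255.0 host 10.0.100.10 eq 445
        access-list Production_access_in extended deny ip 10.0.50.0 255.255.255.0 10.0.100.0 255.255.255.0
        access-list Production_access_in extended permit ip 10.0.50.0 255.255.255.0 any
        access-group Production_access_in in interface PRODUCTION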

    Read the article

  • How can I configure myhostname to work with Postfix?

    - by John Kelly Ferguson
    I'm going through the process of setting up a Discourse forum on my server (Ubuntu 12.04 x64) and am getting stuck at the point where I have to configure mailers. I'm following Discourse's instructions and am stuck trying to configure Postfix for Mandrill. It says to check my fully-qualified domain name by typing:

        hostname -f

    When I enter hostname -f, I get localhost. As far as I know, entering hostname -f should return mydomainname.com. When I just enter hostname, I get mydomainname, which is correct because that is what I set my hostname to in /etc/hostname. Looking at some of my other settings, my /etc/hosts file reads:

        127.0.0.1 localhost mydomainname

        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

    And in my /etc/postfix/main.cf file, I have myhostname set like this:

        myhostname = mydomainname.mydomainname.com

    (Should this be myhostname = mail.mydomainname.com instead?) And mydestination is the following:

        mydestination = mydomainname.com, localhost, localhost.localdomain, localhost

    I'm not that familiar with configuring hostnames. I've been reading Postfix's instructions, but haven't been able to figure it out yet. Any help on how to get this to work would be greatly appreciated. Thanks.
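
    As background: hostname -f returns the canonical (first) name on the /etc/hosts line that the short hostname resolves to, so with the line above it returns localhost. A hedged sketch of the usual Debian/Ubuntu-style fix (the 127.0.1.1 convention; the domain is of course a placeholder):

        # /etc/hosts (sketch)
        127.0.0.1   localhost
        127.0.1.1   mydomainname.com   mydomainname

        # verify afterwards:
        #   hostname      -> mydomainname
        #   hostname -f   -> mydomainname.com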

    Read the article

  • Nginx proxy upstream cached?

    - by Julian H. Lam
    Attempting to resolve an issue that's been annoying me for a bit. I've distilled the symptoms into a set of reproducible steps:
      - I have two sites, siteA and siteB. They are both Node.js applications running on different ports (for the sake of example, 4567 and 4568).
      - Both applications have their own file in sites_available (plus a symlink from sites_enabled), which contain the directives proxy_pass http://node_siteA/ and proxy_pass http://node_siteB/ respectively, inside of a location block.
      - They also each have an upstream block (defined globally?):

            upstream node_siteA {
                server 127.0.0.1:4567;
            }
            upstream node_siteB {
                server 127.0.0.1:4568;
            }

      - Site A and Site B have nothing to do with each other.
      - Yes, I am restarting (reloading, actually) nginx every time I make a change.

    If I take down site B and attempt to access it via the web, I am served site A. Why is this?

    Thoughts:
      - Other times, when I create a new Site C, for example, nginx refuses to show me anything except "Welcome to nginx!" for ~5 minutes. This suggests a resolver timeout, perhaps?
      - When I access Site B after its config has been deleted, and it sends me to Site A, this sounds like nginx sending me to servers in a round-robin fashion...
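
    A plausible explanation worth checking (hedged, since the full config isn't shown): when no server block's server_name matches a request, nginx falls back to the default server for that port, which is the first one defined unless one is marked explicitly, so once siteB's vhost is removed its hostname lands on siteA. A minimal catch-all sketch makes the fallback explicit:

        # explicit default vhost, so requests for a missing site never fall
        # through to another application's server block
        server {
            listen 80 default_server;
            server_name _;
            return 444;    # nginx-specific: close the connection without a response
        }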

    Read the article

  • User-unique .vimrc file for servers as root user

    - by Scott
    I'm getting thrown into an IDE war at the office, where multiple users have root access on our servers and like to have everything their own way with vim. Unfortunately, we have our servers locked down enough that if you want to do anything, you need root access. Obviously (although this is obviously frowned upon), we get tired of typing sudo before each command, which would require constantly typing in the wonderfully complex passwords that are mandated on us, so naturally we all just execute sudo su - upon login to avoid all of this. Of course, when it comes to vim and custom .vimrc files, we are oftentimes stepping on someone else's custom .vimrc file, and some of these files have whacked-out functionality that may override behavior we have no idea about, much less have the patience to learn. When working as root on a Linux box, is there any way for all of us to still maintain our own .vimrc files without having to overwrite the file over and over again every time someone wants to use vim? Ideally, we have many virtual machines all with vim installed, so a universal solution across all servers would be best, and we do have our Microsoft Windows user-specific home directories mounted on the servers under /home/username. Any recommendations for accommodating this?
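
    One hedged workaround for the editing part specifically (it doesn't cover full root shells, but it keeps each person's own ~/.vimrc in play): sudoedit runs the invoking user's editor as that user and only writes the file back as root:

        # in each user's own shell profile
        export SUDO_EDITOR=vim
        # opens the file with *your* vim (and your ~/.vimrc); only the save is done as root
        sudoedit /etc/example/root-owned-file.conf    # hypothetical target file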

    Read the article

  • Host Name Resolution - ISA 2006 - VPN PPTP

    - by Brian Lee Jackson
    We are running an ISA 2006 server and the PPTP VPN connection works fine. Clients are able to connect to the internet and access Outlook, CRM, etc. The problem we are encountering is that host name resolution is not working. For example, when connected via VPN I can't ping any box other than the VPN server by its host name. Nslookup also fails. I can ping everything fine via IP address. But the clients need to be able to access their "mapped" drives over the VPN, which are all mapped by host name. I recently took over this position and it sounds like this used to work. What would be the best place to check first? I haven't had much exposure to ISA and have been reading up a bit on installation procedures, etc. DNS is hosted and running on our domain controller, as well as WINS. It isn't on the ISA box. Is there a firewall policy that perhaps got removed? What is usually required for host name resolution to pass through? Any help would be appreciated, thanks!

    Read the article

  • How do I keep a table in Sync across 4 db's to be used in SQL Replication Filtering?

    - by Refracted Paladin
    I have a WinForms data-entry application that uses 4 separate databases. This is an occasionally connected app that uses merge replication (SQL 2005) to stay in sync. This is working just fine. The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see code below), but I had to make a mapping table in order to do so. This mapping table consists of 3 columns: a PrimaryID (Guid), WorkerName (varchar), and ClientID (int). The problem is I need this table present in all FOUR databases in order to use it for the filter since, to my knowledge, views or cross-database queries are not allowed in a filter statement. What are my options? It seems like I would set it up to be maintained in 1 database and then use triggers to keep it updated in the other 3 databases. In order to be a part of the filter I have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether?

        SELECT <published_columns> FROM [dbo].[tblPlan] WHERE [ClientID] IN
            (select ClientID from [dbo].[tblWorkerOwnership] where WorkerID = SUSER_SNAME())

    This allows you to chain together filters; the next one sits below the first, so it only pulls from the first's filtered set:

        SELECT <published_columns> FROM [dbo].[tblPlan]
        INNER JOIN [dbo].[tblHealthAssessmentReview]
            ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]

    P.S. - I know how illogical the DB structure sounds. I didn't make it. I inherited it and was then told to make it a "disconnected app."
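
    If trigger-based mirroring does turn out to be the route, a hedged T-SQL sketch (table and column names are taken from the description above, the target database name is a placeholder, and a real version would need one such trigger per target database):

        -- Crude full refresh of the mapping table in one of the other databases;
        -- fine for a small table, repeated for each target database.
        CREATE TRIGGER dbo.trg_tblWorkerOwnership_Sync
        ON dbo.tblWorkerOwnership
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            DELETE FROM OtherDb.dbo.tblWorkerOwnership;
            INSERT INTO OtherDb.dbo.tblWorkerOwnership (PrimaryID, WorkerName, ClientID)
            SELECT PrimaryID, WorkerName, ClientID
            FROM dbo.tblWorkerOwnership;
        END;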

    Read the article

  • Howto setup neocomplcache?

    - by eddy
    I just started using vim and saw a cool plugin: neocomplcache (http://www.vim.org/scripts/script.php?script_id=2620). My problem is that I can't get it to work properly. After installing, I took the example config from the help files of neocomplcache and added the lines to my .vimrc. At first I wanted to create a simple latex file (there are snippets for tex). After typing "begi" a menu appears, and I can choose between the snippets with TAB or <C-n>. But how do I get them to expand? <C-k> does not work, but I don't understand why.

    ======== .vimrc: ========
    ....
        " Plugin key-mappings.
        imap <C-k> <Plug>(neocomplcache_snippets_expand)
        smap <C-k> <Plug>(neocomplcache_snippets_expand)
        inoremap <expr><C-g> neocomplcache#undo_completion()
        inoremap <expr><C-l> neocomplcache#complete_common_string()
        " Recommended key-mappings.
        " <CR>: close popup and save indent.
        inoremap <expr><CR> neocomplcache#smart_close_popup() ."\<CR>"
        " <TAB>: completion.
        inoremap <expr><TAB> pumvisible() ? "\<C-n>" : "\<TAB>"
        " <C-h>, <BS>: close popup and delete backword char.
        inoremap <expr><C-h> neocomplcache#smart_close_popup()."\<C-h>"
        inoremap <expr><BS> neocomplcache#smart_close_popup()."\<C-h>"
        inoremap <expr><C-y> neocomplcache#close_popup()
        inoremap <expr><C-e> neocomplcache#cancel_popup()
    ...

    Read the article

  • Google Chrome is running my system out of memory

    - by jasondavis
    I am running Windows 7 x64 with 12GB of RAM. I often have multiple windows and a ton of tabs open. I use the extension Session Buddy to restore all my windows and tabs once the memory gets too high. My 12GB of RAM will get up to around 93% used because of Chrome; I can then close Chrome down, restore the same number of windows and tabs, and it will only use about 25% of memory, then over several hours it increases back up to the 90% zone. It seems that when I close tabs, instead of freeing that memory up, it doesn't, and that is why memory usage keeps growing as new tabs are opened and closed -- it just adds up. This sounds like a huge bug in Chrome. Just as an example, I just re-booted my system; I only have 1 window with 4 tabs open, and Task Manager shows 29 chrome.exe processes. I then killed all Chrome processes and opened a Chrome window with just 1 tab, and it made 27 chrome.exe processes. Is this an issue that others have? More importantly, is there a fix?

    UPDATE: I just read that each plugin and extension creates a chrome.exe process. I then counted 24 extensions, so that helps explain a portion of the many processes. Still not sure about memory not being freed up though!

    Read the article

  • Does migrating 2 domain controllers between 2 datacentres require both virtual machines to be shut down at the same time?

    - by Imagineer
    I was attempting to migrate 2 virtual machines that are domain controllers between 2 datacentres running ESX 3.5 and ESX 4.1. I was advised to shut down both domain controllers at the same time during the migration process, to avoid USN rollback and other replication issues. The following are the steps I was planning to perform:
      1. Shut down both DCs.
      2. Copy both VMs' files across to the new datacentre using Veeam FastSCP (connecting to both vCentres through IP address instead of hostname).
      3. Power them up at the new datacentre.
      4. Configure the network interface/DNS/DHCP for both DCs in the new datacentre.
    I tried to use Veeam FastSCP rather than VMware Standalone Converter because it copies rather than converts. Someone also suggested that I use a backup and restore app like Veeam Backup and Replication. Sounds like a simple job, but after shutting down both DCs, the transfer rate using FastSCP was so slow it registered only 1KB/s, as opposed to the normal 1MB/s (or more). When that transfer attempt failed, I tried to cold clone both DCs, which resulted in both ESX hosts getting disconnected. I have tried troubleshooting by referring to this - VMware KB - Diagnosing an ESX Server that is Disconnected or Not Responding in VirtualCenter. It seems that the DNS being down was the cause of all the unusual occurrences. The moment I powered up the DCs via the VMware console command, the ESX hosts were able to connect to the vCentre again. How can I avoid such a pitfall again? Am I doing it correctly? Any help would be greatly appreciated! Thank you.

    Read the article
