Search Results

Search found 23555 results on 943 pages for 'command timeout'.

Page 394/943 | < Previous Page | 390 391 392 393 394 395 396 397 398 399 400 401  | Next Page >

  • Mount EC2 instance via SSH on Mac OS X

    - by darkporter
    OK, I just can't figure this out. I have an EC2 instance, which I'm able to SSH into just fine with: ssh -i XXXX.pem [email protected] I can even make it slick from the command line by creating a ~/.ssh/config with this in it: Host XXXX HostName XXXX User ubuntu IdentityFile ~/.ec2/XXXX.pem which allows me to simply do ssh XXXX with no -i option. Now, I want to mount this via SSH. I've tried MacFuse/SSHFS, MacFusion and ExpanDrive, but no luck. It's supposed to "just work", but the SSH-related command-line utilities and the Keychain Access program in OS X are confusing and opaque to me. From what I've read, these GUI programs don't care about .ssh/config; they care about the Keychain. Supposedly I can associate the domain name I'm connecting to with a particular "identity" private key file (.pem file), but I have no idea how. I tried this: ssh-add -K XXXX.pem which does add the key to the Keychain, but it's not associated with a particular domain. The GUI mounting programs I mentioned all just spin and do nothing when I try to connect passwordless. No Keychain prompt, no nothing. I've pretty much given up and I'm thinking about just setting up an SMB server, but I'd rather go over SSH since I believe it's possible.
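
    For reference, a minimal sketch of the kind of ~/.ssh/config entry described above, plus an sshfs invocation that names the key explicitly; the host, key and mount-point names here are placeholders, not the poster's actual values:

      Host ec2box
          HostName ec2-XX-XX-XX-XX.compute-1.amazonaws.com
          User ubuntu
          IdentityFile ~/.ec2/mykey.pem

      # sshfs (MacFUSE) hands unrecognised -o options straight to ssh, so the key
      # can be named on the command line even if a GUI tool ignores ~/.ssh/config
      mkdir -p ~/ec2
      sshfs ubuntu@ec2box:/home/ubuntu ~/ec2 -o IdentityFile=~/.ec2/mykey.pem -o reconnect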

    Read the article

  • Debian x86_64 + Nginx + PHP5-FPM optimization

    - by user55859
    I used to have a VPS (512MB) from Linode and I was running nginx + php5-fpm (which comes with php5.3.3) on Debian Lenny (i686). The total memory usage was about 90-100MB. Now I have another VPS (different hosting company) and I also run nginx + php5-fpm on Debian Lenny (x86_64). The system is 64-bit, so the memory usage is higher now, about 210-230MB, which I think is too much. Here is my php5-fpm.conf: pm = dynamic pm.max_children = 5 pm.start_servers = 2 pm.min_spare_servers = 2 pm.max_spare_servers = 5 pm.max_requests = 300 That's what the top command tells me: top - 15:36:58 up 3 days, 16:05, 1 user, load average: 0.00, 0.00, 0.00 Tasks: 209 total, 1 running, 208 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 99.9%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 532288k total, 469628k used, 62660k free, 28760k buffers Swap: 1048568k total, 408k used, 1048160k free, 210060k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 22806 www-data 20 0 178m 67m 31m S 1 13.1 0:05.02 php5-fpm 8980 mysql 20 0 241m 55m 7384 S 0 10.6 2:42.42 mysqld 22807 www-data 20 0 162m 43m 22m S 0 8.3 0:04.84 php5-fpm 22808 www-data 20 0 160m 41m 23m S 0 8.0 0:04.68 php5-fpm 25102 www-data 20 0 151m 30m 21m S 0 5.9 0:00.80 php5-fpm 10849 root 20 0 44100 8352 1808 S 0 1.6 0:03.16 munin-node 22805 root 20 0 145m 4712 1472 S 0 0.9 0:00.16 php5-fpm 21859 root 20 0 66168 3248 2540 S 1 0.6 0:00.02 sshd 21863 root 20 0 66028 3188 2548 S 0 0.6 0:00.06 sshd 3956 www-data 20 0 31756 3052 928 S 0 0.6 0:06.42 nginx 3954 www-data 20 0 31712 3036 928 S 0 0.6 0:06.74 nginx 3951 www-data 20 0 31712 3008 928 S 0 0.6 0:06.42 nginx 3957 www-data 20 0 31688 2992 928 S 0 0.6 0:06.56 nginx 3950 www-data 20 0 31676 2980 928 S 0 0.6 0:06.72 nginx 3955 www-data 20 0 31552 2896 928 S 0 0.5 0:06.56 nginx 3953 www-data 20 0 31552 2888 928 S 0 0.5 0:06.42 nginx 3952 www-data 20 0 31544 2880 928 S 0 0.5 0:06.60 nginx So, the question is: is there any way to use less memory? Btw, I have 16 cores and it would be nice to make use of them...
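
    For scale, a sketch of the kind of pool settings that are commonly lowered on a 512MB VPS; the numbers below are illustrative assumptions, not values tuned for this particular site:

      ; php5-fpm pool -- illustrative values for a small VPS
      pm = dynamic
      ; hard cap on worker processes (each worker in the top output above holds roughly 30-67MB resident)
      pm.max_children = 3
      pm.start_servers = 1
      pm.min_spare_servers = 1
      pm.max_spare_servers = 2
      ; recycle workers periodically so per-process memory cannot grow unbounded
      pm.max_requests = 200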

    Read the article

  • Bash shell prompt: where is $RET?

    - by Evgeni Sergeev
    I was reading this https://wiki.archlinux.org/index.php/Color_Bash_Prompt and ended up with the following: # Stores the status of each command in $RET PROMPT_COMMAND='RET=$?;' # A colour. RED_SHELL='\e[0;36m' # Prints "Status 1" if RET is 1, for example. RET_VISUALISE='$(if [[ $RET != 0 ]]; then echo -ne "Status \[$RED_SHELL\]$RET\n" && RET=0; fi;)' # What to print for each prompt. PS1="$RET_VISUALISE\[\e]0;\w\a\]\n\[\e[32m\]\u@\h \t \[\e[33m\]\w\[\e[0m\]\n\$ " This does almost what I want, except when I press Enter, Enter, Enter multiple times after a command that returned status != 0. In this case it prints "Status 1" every time I press Enter. This is what the && RET=0; part was supposed to get rid of. Also, I don't understand why env | grep RET only shows the PS1 contents. What is the scope of $RET?
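
    On the scope question, a tiny sketch of what is going on; this is ordinary bash behaviour, not anything specific to the wiki snippet. RET is a plain shell variable (not exported, so env never lists it), and the $( ... ) inside PS1 is expanded in a subshell, so the RET=0 in there can never reach the parent shell:

      RET=1                  # plain shell variable, as set by PROMPT_COMMAND
      env | grep -c '^RET='  # prints 0 -- not exported, so not in the environment
      export RET
      env | grep -c '^RET='  # prints 1 -- now it shows up

      ( RET=0 )              # same situation as RET=0 inside the $(...) of PS1
      echo $RET              # still 1 -- a subshell cannot modify its parent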

    Read the article

  • How do I set up a local DNS server on Mac OS X Lion?

    - by Peter Kovacs
    I had some serious lag resolving website addresses, and sometimes things simply wouldn't load (pages kept loading for 5+ minutes without even a timeout error). So I had set up a local DNS server/cache using BIND on Leopard and Snow Leopard. Now that I have Lion, I have the same problem, but the instructions no longer apply to Lion and I can't find a way to do it. Has anyone attempted this? Are there viable alternatives for DNS servers on OS X 10.7? For those who are wondering, I already tried several external DNS servers. Only my computer on the network has this issue.
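
    For anyone retracing the Leopard-era BIND setup, a minimal sketch of the named.conf options typically used for a local caching/forwarding resolver; the forwarder addresses are examples and paths on Lion may differ from older guides:

      options {
          directory "/var/named";
          listen-on port 53 { 127.0.0.1; };
          forwarders { 8.8.8.8; 8.8.4.4; };
          forward only;
          recursion yes;
      };

    The machine would then be pointed at itself, e.g. with networksetup -setdnsservers "Wi-Fi" 127.0.0.1 (the network service name varies per machine).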

    Read the article

  • Deploy our own software using Puppet?

    - by Ken
    (Apologies in advance for the stupidity in this question. I'm normally a programmer, not a sysadmin, but I've taken it upon myself to automate some things, and clean up some other things which are automated but not in the prettiest way. :-) I've been looking around at various tools for automation of software deployment to a bunch of servers, like cfengine, Puppet, and Chef. So far, Puppet looks the most appealing, but I've certainly not committed to anything yet. These tools all look like they can do a great job of keeping a bunch of servers up-to-date with prepackaged software. What I don't get is: how does one use a tool (like Puppet) to manage deployments of our own internal software? I think I'm at a loss because I've seen a thousand tutorials showing how to keep Apache ensure => latest (which is pretty cool), but nothing that quite corresponds to my use-case today, which is something more like: (1) a human being pushes The Button; (2) pull branch A from the version-control repository B; (3) run command C to compile it; (4) copy the binaries D to servers E1 through E10; (5) on each server, run command F to make all changes take effect. Puppet sounds great, and I totally see the advantage of declarative, idempotent configuration over some shell scripts, but I've not seen any tutorials for "you want to update your shell scripts to Puppet (or Chef, or cfengine) so here's what you should do". Is there such a thing? Is it obvious to other people how to take the things provided in the Puppet docs and replicate the behavior I want? Am I just not getting it? What it's sounding like to me, so far, is that the human being (#1) would manually package the software (#2 and #3) external to Puppet, manually update the Puppet config, which would trigger Puppet to update the servers ... maybe? (I'm a little confused here, as I'm sure you can tell.) Thanks!
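
    To make the comparison concrete, a rough sketch of how steps (2) through (5) above might look as Puppet resources; the module, path and command names are invented for illustration, and this is one possible shape rather than the canonical way to do it:

      # hypothetical manifest: deploy internal app "widget" from version control
      exec { 'fetch-widget':
        command => '/usr/bin/git clone -b branchA git://repoB.example.com/widget.git /opt/build/widget',
        creates => '/opt/build/widget',
      }

      exec { 'compile-widget':                       # step 3: run command C
        command     => '/opt/build/widget/commandC',
        cwd         => '/opt/build/widget',
        refreshonly => true,
        subscribe   => Exec['fetch-widget'],
      }

      file { '/opt/widget/bin':                      # step 4: ship binaries D
        ensure  => directory,
        source  => 'puppet:///modules/widget/bin',   # served from the puppet master
        recurse => true,
        notify  => Exec['activate-widget'],
      }

      exec { 'activate-widget':                      # step 5: run command F
        command     => '/opt/widget/bin/commandF',
        refreshonly => true,
      }

    "The Button" (step 1) stays outside Puppet in this sketch: a person updates the module or a version parameter, and the agents converge on the next run, which matches the poster's own reading at the end of the question.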

    Read the article

  • Equivalent of scp -l bandwidth_cap for .ssh/config?

    - by Mark Bennett
    Short form: You can limit the bandwidth scp uses with the -l switch; you pass a number in kbits/sec. I'd rather set this in my .ssh/config file for certain named machines. What's the equivalent named setting for -l? I haven't been able to find it. Follow-up question: generally, I'm not sure how to map back and forth between ssh command-line options and config names, short of doing Google searches or manually comparing man pages on a case-by-case basis. Is there a table that directly equates the two? Longer form of the first question, with context: I've started using ssh config quite a bit, especially now that I need to go through a proxy and do lots of port mappings. I even define the same machine more than once depending on what type of tunneling I need. However, when uploading a large file, it's difficult to do anything else on my machine. Even though I have more download bandwidth than up, I think that scp saturates the link so even my small requests can't reach the Internet. There's a fix for this, using the -l bandwidth command-line switch for scp: scp -l 1000 bigfile.zip titan: I'd like to use this in my config instead, so I'd create an additional named entry called "titan-upload" and use that as the target whenever I upload. So instead of: scp bigfile.zip titan: I'd say: scp bigfile.zip titan-upload Or even set different caps depending on where I am: scp bigfile.zip titan-upload-from-home vs. scp bigfile.zip titan-upload-from-work I'm generally on Mac and Linux.
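
    A sketch of the Host-alias half of that idea as it would sit in ~/.ssh/config; the hostname and username are placeholders, and note that this only gives the extra name a place to live, it does not itself cap bandwidth, which is exactly the missing piece the question is about:

      Host titan
          HostName titan.example.com
          User mark

      Host titan-upload
          HostName titan.example.com
          User mark
          # an scp -l equivalent would go here, if ssh_config offered one

    A shell-side workaround some people use instead is a small wrapper, e.g. alias scpslow='scp -l 1000', though that sidesteps .ssh/config entirely.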

    Read the article

  • Backup files from Linux client to Windows Server

    - by Andrew
    I'm trying to backup my files from my Linux box to my Windows Server 2008 as a push, and when I delete them from my Linux box, they remain on my Windows Server. I've found lots of sources that are similar, but most results were from Windows to Linux. I managed to find slightly more similar cases like Using rsync and cygwin to Sync Files from a Linux Server to a Windows Notebook PC, and rsync from Windows PC to remote Linux server, with the most similar being a backup from Linux to Windows Server, but through a pull from the Windows Server. Initially, I used Unison because I thought having the 2-way capability would come in handy, and I would just have to set some configurations to make it 1-way. Unfortunately, I couldn't find the right configuration, and only managed to synchronize using the command unison "profile" -ui text -auto -silent. When I deleted the files on my Linux box, the files in the Server got deleted too, which of course, isn't what I want. When I tried to find any options for Unison, I only discovered the -force option, which didn't help, since what I wanted was an incremental update to the Server. I found out I could achieve this from using rsync and the -a option (archive), which would keep adding files even if I deleted them from my Linux box. I installed Cygwin on my Windows Server, configured an SSH daemon, but I can't seem to get it working. I've also already configured Windows Firewall to open port 22 (both inbound and outbound). I used the following command from my Linux box: rsync -avrzn /folder/to/be/backed/up/ [email protected]:/cygdrive/c/place/to/store/backed/up/files (a - archive, v - verbose, r - recurse into subdirectories, z - compress, n - dryrun) but it just won't work. Can anyone help me out? I don't mind using either Unison or rsync, as long as it achieves what I want.
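
    One detail worth flagging in the command above, independent of the Cygwin/sshd setup: -n is rsync's dry-run flag (as the poster's own legend notes), so as written nothing is actually copied. A sketch of the same transfer without it, with the user and host shown generically:

      # -a already implies -r and preserves attributes; drop -n (dry run) to really copy.
      # Deletions on the Linux side are NOT propagated, because --delete is omitted.
      rsync -avz /folder/to/be/backed/up/ \
          user@windows-server:/cygdrive/c/place/to/store/backed/up/files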

    Read the article

  • [iptables] Why does 'iptables -A OUTPUT -j REJECT' at the end of the OUTPUT chain override the previous rules?

    - by Serge
    These are my iptables rules: iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -A OUTPUT -p tcp --dport 22 -j ACCEPT iptables -A OUTPUT -p udp --dport 22 -j ACCEPT iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT iptables -A OUTPUT -p udp --dport 53 -j ACCEPT iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT iptables -A INPUT -p tcp --dport 80 -j ACCEPT iptables -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name DEFAULT --rsource iptables -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --update --seconds 180 --hitcount 4 --name DEFAULT --rsource -j DROP iptables -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT iptables -A OUTPUT -j REJECT iptables -A INPUT -j REJECT iptables -A FORWARD -j REJECT I'm using a remote SSH connection to set them up, but after I set iptables -A OUTPUT -j REJECT my connection gets lost. I have read all the documentation for iptables and I can't figure it out; the global REJECTs for INPUT work well, because I can still access the web page, but I get a timeout for SSH. Any idea? Thanks
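
    For comparison, the usual companion to the INPUT state rule quoted above is an equivalent rule on OUTPUT, sketched here. Without it, the reply packets of an inbound SSH session (source port 22, not destination port 22) match nothing before the final REJECT:

      # allow replies to already-established connections to leave the box,
      # mirroring the ESTABLISHED,RELATED rule already present on INPUT
      iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      # (it must appear before the catch-all "iptables -A OUTPUT -j REJECT")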

    Read the article

  • Windows 7 scheduled task returns 0x2

    - by demmith
    I have identical scheduled tasks running in Windows XP Pro and Windows 7. The XP Pro one runs fine, the Windows 7 one always returns 0x2 (which means, "The system cannot find the file specified"; however, executing from the command line is no problem) in the Last Run Result column of the Task Scheduler UI. The scheduled task executes a .bat file daily. The .bat file contains a call to execute a Perl script. As I stated in the previous paragraph, it executes under XP without any trouble but under Windows 7, no dice. The task under Windows 7 is set to "run whether the user is logged on or not." In this case it is me, I am the only user of the system. It is also set to "Run with highest privileges." And it is not hidden. The .bat file executes perfectly well from the command line - it calls the Perl script as expected and the Perl script does its thing. I have searched far and wide looking for an appropriate answer to this issue. So far I have found nothing. What the devil is going on with this Win7 scheduled task? I am ready to pull my hair out.
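
    A sketch of the kind of defensive .bat wrapper often used for 0x2 cases: a task set to run "whether the user is logged on or not" may start in a different working directory and with a leaner PATH than an interactive prompt, so relative paths that work at the command line can fail under the scheduler. All paths below are invented examples:

      @echo off
      rem run from the folder this .bat lives in, not wherever Task Scheduler starts us
      cd /d "%~dp0"
      rem call the interpreter and the script by full path, and keep a log for the next failure
      "C:\Perl\bin\perl.exe" "C:\Scripts\nightly_job.pl" >> "C:\Scripts\nightly_job.log" 2>&1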

    Read the article

  • Syntax error on line 494 of httpd.conf: Cannot load .../php5apache2_2.dll into server

    - by pikachu
    I have been learning PHP. I had installed the Apache server on its own (not in a combination suite like USBWebserver). Now I'm trying to put my sites on a portable stick, using USBWebserver. I have used that program before to carry MySQL databases with me (and Apache worked as well, because I used the included phpMyAdmin for managing the databases), but now it doesn't work anymore. When I start the program, I keep seeing the message that Apache is offline. I've tried to start Apache from the command line (I don't know what that would do, but it was worth a try). I got an error message saying Syntax error on line 494 of C:/.../httpd.conf: Cannot load C:/.../php5apache2_2.dll into server: (The following is translated from Dutch) An initialization routine of the dynamic link library (dll-file) has failed. Line 494 says this: LoadModule php5_module "C:/Users/School/Downloads/USBWebserver v8_en/php/php5apache2_2.dll" My first Apache installation (its service) is not running. The ports are different. I also uninstalled the service (using the httpd.exe -k uninstall command). What could be the problem? Thanks for the help.

    Read the article

  • FTP upload stalls at same point every time on FileZilla

    - by John
    On two different FTP accounts, I am having problems uploading files. I can log in and see the contents of the directory, and start an upload. Using FileZilla, the transfer seems to always stall at either 0.9% or 1.2% (always those two numbers) and may simply hang, or keep restarting and then stop again at the same point. Windows XP's built-in FTP client is not great, but I get similar problems there... it starts uploading and after a short while I get a timeout error. FTP used to work fine, and I don't know if it's these accounts in particular (both have the same service provider, although purchased on opposite sides of the world) or if "FTP is broken on my PC"... can that even happen?!

    Read the article

  • Problem with ubuntu 10.10 running from USB drive

    - by Surjya Narayana Padhi
    Hi geeks, I recently downloaded Ubuntu 10.10, created a USB drive with it, and started running Ubuntu from that USB drive. But I am facing a lot of problems, and I keep wondering why it isn't as easy as Windows to do everything in Ubuntu. I always get some error message or have to install something. This time I am getting the following errors. I am trying to download and install Aircrack-ng, so I used the command sudo apt-get install aircrack-ng. But the installation stops with the following error: update-initramfs: deferring update (trigger activated) cp: cannot stat `/vmlinuz': No such file or directory dpkg: error processing bcmwl-kernel-source (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: initramfs-tools bcmwl-kernel-source E: Sub-process /usr/bin/dpkg returned an error code (1) I don't even have the aptitude command installed yet. Are all these errors because I am running Ubuntu from a USB drive? Is there any simple and easy way to go to the Ubuntu Software Center and download all the required essentials in one shot, and then Aircrack-ng? I could not find Aircrack-ng in the Ubuntu Software Center. Can anybody give me detailed steps to solve the problems above? I am frustrated searching for updates and installations, when some things work and others don't. Can anybody suggest how I should proceed after installing Ubuntu to run from a USB drive, so that I can use the OS like Windows: software downloads, wireless driver, sound, video, documents, C:, D:, all those things should be there. Please somebody help.

    Read the article

  • Mac dev folder missing, SSH not working

    - by SamGoody
    A few days ago, SSH stopped working. When I try logging in I get the following message: PTY allocation request failed on channel 0 stdin: is not a tty fatal: unrecognized command '' Connection to 74.52.61.194 closed. Web searches have shown me that there might be something wrong with /dev/std. But my computer lacks a /dev/ directory. There is an alias to /dev/ [hidden, but I've revealed hidden files to do this search], but when I try to open it I am told that it cannot find the folder it is aliasing. Now, many a web search tells me that without a dev folder the computer doesn't work, but it does seem to work, except for SSH. Also, are there any tools that can save my SSH preferences so that I don't have to type out the username@address, password and path every time, all of which are long and complex? I'm not looking for a FileZilla-type client, there are many of those; I'm looking for a command-line tool like PuTTY that lets me use bash on the remote machine. I'm on a MacBook Pro, latest version of Tiger.
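
    On the side question about saving SSH preferences, the usual command-line answer is a ~/.ssh/config entry plus a key pair, sketched below with placeholder names. This stores the host, user and key (a password as such cannot be stored, but a key makes it unnecessary):

      # ~/.ssh/config
      Host myserver
          HostName server.example.com
          User samgoody
          Port 22
          IdentityFile ~/.ssh/id_rsa

      # after which the long address collapses to:
      ssh myserver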

    Read the article

  • DIR-615 loses internet connection after 3 minutes

    - by Sirber
    I got a new D-Link DIR-615 router. The DSL modem connects fine, and connected PCs (wireless and wired) reach the internet fine too. After ~3 minutes, the connected PCs can no longer reach the internet: web pages time out, though sometimes Google Talk stays connected (working). From the router admin page, pings (to google.ca) work correctly, so the connection is active. pc -- router -- internet: fail; pc -- router: ok; router -- internet: ok. Could it be firewall-related? I've read there's an SPI firewall enabled.

    Read the article

  • Problem with running a script at startup as root?

    - by Usman Ajmal
    Hi. The main question: is there a way I can run one of my scripts 'completely' when Ubuntu's desktop appears, no matter whether root, an administrator, a desktop user or an unprivileged user is logged in? What does the script do? The script mounts a partition, looks for a file in that partition, and finally, on the basis of that file, decides whether to copy one partition to another. That copying is done via dd if=/dev/sda2 of=/dev/sda5 When does the script run fine? The script runs smoothly when I run it from the terminal with sudo ./my_copying_script This command asks me for the password of the currently logged-in user; I enter the password and the script starts working. When does the script NOT run fine? I want to run the script at startup. I set it as a startup program using the Startup Applications utility of Ubuntu. The script ran at startup but exited at the dd command, returning the following error: dd: opening '/dev/sda2': Permission denied On edk's suggestion I made root the owner of my_copying_script and set the SUID bit, so the permissions of my_copying_script are now (-rwsr-sr-x). edk's point of view was that once I set the SUID bit, the startup program would run with the permissions of its owner. I did that, but the same /dev/sda2 permission denied error came up. I then prefixed the dd with sudo, as mentioned below: sudo dd if=/dev/sda2 of=/dev/sda5 but this returned the following error: sudo: no tty present and no askpass program specified In other words, the mounting failed. If I run the script using sudo ./myProgram I don't face this problem and the drive gets mounted successfully.
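
    One commonly sketched alternative is to have root run the script outside the desktop session entirely, so no sudo (and therefore no tty) is involved; note as well that Linux ignores the SUID bit on interpreted scripts, which is consistent with the behaviour observed above. The path below is a placeholder, and this runs at boot rather than when the desktop appears, which may or may not be acceptable:

      # added to root's crontab (sudo crontab -e): run once at every boot, as root
      @reboot /path/to/my_copying_script

      # or, equivalently, a line near the end of /etc/rc.local (before "exit 0"):
      /path/to/my_copying_script &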

    Read the article

  • IBM Server searching for secondary server

    - by user1241438
    I just bought the following server: IBM System x3950 Server, 4 x 3.0GHz Dual Core, 32GB, 6 x 73.4GB 10K SAS RAID, 256MB BBWC, 2x Power, CD-RW/DVD. When I boot it up, it says "Searching for secondary server" and hangs there for almost 10 minutes. After 10 minutes, it reports a timeout searching for chassis 2, but after this it proceeds to boot the OS properly. My frustration is that I need to wait almost 15 minutes to boot every time. How do I prevent this error message?

    Read the article

  • Xen 4.1 host (dom0) with blktap disks ("tap:aio:") not connecting

    - by Manwe
    I have a problem using blktap with xen-4.1, running the Ubuntu Precise stock kernel with a xen-4.1 dom0. I get: [ 5.580106] XENBUS: Waiting for devices to initialise: 295s...290s. ... [ 300.580288] XENBUS: Timeout connecting to device: device/vbd/51713 (local state 3, remote state 1) And some syslog lines: May 17 13:07:30 localhost logger: /etc/xen/scripts/blktap: add XENBUS_PATH=backend/tap/10/51713 May 17 13:07:31 localhost logger: /etc/xen/scripts/blktap: Writing backend/tap/10/51713/hotplug-status connected to xenstore. This happens with tap:aio: disk lines; file:/ works. disk = [ 'tap:aio:/data/root.img,xvda1,w', ] The problem exists with both lucid and precise domU kernels, and both guests work in an Ubuntu hardy dom0 (host: 64bit, 2.6.24-28-xen, xen-3.3). The Precise host is: 3.2.0-24-generic #37-Ubuntu SMP Wed Apr 25 08:43:22 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux Distributor ID: Ubuntu Description: Ubuntu 12.04 LTS Release: 12.04 Codename: precise

    Read the article

  • Multiple logins with pam_mount means multiple (redundant) mounts ...

    - by Jamie
    I've configured pam_mount.so to automagically mount a CIFS share when users log in; the problem is that if a user logs in multiple times simultaneously, the mount command is repeated each time. So far this isn't a problem, but it's messy when you look at the output of the mount command. # mount /dev/sda1 on / type ext4 (rw,errors=remount-ro) proc on /proc type proc (rw,noexec,nosuid,nodev) none on /sys type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) none on /dev type devtmpfs (rw,mode=0755) none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) none on /dev/shm type tmpfs (rw,nosuid,nodev) none on /var/run type tmpfs (rw,nosuid,mode=0755) none on /var/lock type tmpfs (rw,noexec,nosuid,nodev) none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755) //srv1/UserShares/jrisk on /home/jrisk type cifs (rw,mand) //srv1/UserShares/jrisk on /home/jrisk type cifs (rw,mand) //srv1/UserShares/jrisk on /home/jrisk type cifs (rw,mand) I'm assuming I need to fiddle with either the pam.d/common-auth file or pam_mount.conf.xml to accomplish this. How can I instruct pam_mount.so to avoid duplicate mounts?

    Read the article

  • nginx automatic failover load balancing

    - by robinmag
    Hi, I'm using nginx and NginxHttpUpstreamModule for load balancing. My config is very simple: upstream lb { server 127.0.0.1:8081; server 127.0.0.1:8082; } server { listen 89; server_name localhost; location / { proxy_pass http://lb; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } But with this config, when one of the 2 backend servers is down, nginx still routes requests to it, which results in a timeout half of the time :( Is there any way to make nginx automatically route requests to the other server when it detects a downed server? Thank you.
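
    A sketch of the knobs nginx's upstream module already provides for this; the directives exist in stock nginx and only the numbers are illustrative:

      upstream lb {
          # after 3 failed attempts, consider a backend down for 30 seconds
          server 127.0.0.1:8081 max_fails=3 fail_timeout=30s;
          server 127.0.0.1:8082 max_fails=3 fail_timeout=30s;
      }

      # inside the existing location / block:
      proxy_next_upstream error timeout http_500 http_502 http_503;  # retry the other backend
      proxy_connect_timeout 2s;                                      # fail over quickly instead of hanging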

    Read the article

  • dnsmasq Client TTL

    - by user548971
    I have a situation where my hosts file is constantly changing. Because of this I don't want clients to cache ip addresses resolved using the hosts file. Here is the command that starts dnsmasq for me: /usr/sbin/dnsmasq -K -R -y -Z -b -E -S 8.8.8.8 -l /tmp/dhcp.leases -r /tmp/resolv.conf.auto --stop-dns-rebind --rebind-localhost-ok --dhcp-range=lan,192.168.2.2,192.168.2.249,255.255.255.0,12h -2 eth0 In looking at this site: http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html I see that the -T option has this description: -T, --local-ttl=<time> When replying with information from /etc/hosts or the DHCP leases file dnsmasq by default sets the time-to-live field to zero, meaning that the requester should not itself cache the information. This is the correct thing to do in almost all situations. This option allows a time-to-live (in seconds) to be given for these replies. This will reduce the load on the server at the expense of clients using stale data under some circumstances. My command doesn't have the -T option. Do I need it or does dnsmasq default TTL to zero without it?
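
    One quick way to see what clients are actually being handed is to query dnsmasq directly and read the TTL column of the answer; the router address and hostname below are placeholders:

      # the second field of the ANSWER line is the TTL dnsmasq advertises;
      # with the default (no -T / --local-ttl) it should be 0 for hosts-file entries
      dig @192.168.2.1 somehost.lan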

    Read the article

  • Sending mails via Mutt and Gmail: Duplicates

    - by Chris
    I'm trying to set up mutt with Gmail for the first time. It seems to work pretty well, however when I send a mail from mutt it appears twice in Gmail's Sent folder. (I assume it's also sent twice - I'm trying to validate that) My configuration (stripped of coloring): # A basic .muttrc for use with Gmail # Change the following six lines to match your Gmail account details set imap_user = "XX" set smtp_url = "[email protected]@smtp.gmail.com:587/" set from = "XX" set realname = "XX" # Change the following line to a different editor you prefer. set editor = "vim" # Basic config, you can leave this as is set folder = "imaps://imap.gmail.com:993" set spoolfile = "+INBOX" set imap_check_subscribed set hostname = gmail.com set mail_check = 120 set timeout = 300 set imap_keepalive = 300 set postponed = "+[Gmail]/Drafts" set record = "+[Gmail]/Sent Mail" set header_cache=~/.mutt/cache/headers set message_cachedir=~/.mutt/cache/bodies set certificate_file=~/.mutt/certificates set move = no set include set sort = 'threads' set sort_aux = 'reverse-last-date-received' set auto_tag = yes hdr_order Date From To Cc auto_view text/html bind editor <Tab> complete-query bind editor ^T complete bind editor <space> noop # Gmail-style keyboard shortcuts macro index,pager y "<enter-command>unset trash\n <delete-message>" "Gmail archive message" macro index,pager d "<enter-command>set trash=\"imaps://imap.googlemail.com/[Gmail]/Bin\"\n <delete-message>" "Gmail delete message" macro index,pager gl "<change-folder>" macro index,pager gi "<change-folder>=INBOX<enter>" "Go to inbox" macro index,pager ga "<change-folder>=[Gmail]/All Mail<enter>" "Go to all mail" macro index,pager gs "<change-folder>=[Gmail]/Starred<enter>" "Go to starred messages" macro index,pager gd "<change-folder>=[Gmail]/Drafts<enter>" "Go to drafts" macro index,pager gt "<change-folder>=[Gmail]/Sent Mail<enter>" "Go to sent mail" #Don't prompt on exit set quit=yes ## ================= #Color definitions ## ================= set pgp_autosign
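
    A tweak often suggested for this exact symptom, shown as a sketch: Gmail's SMTP server files outgoing mail into Sent Mail by itself, so with set record pointing at the same folder mutt saves a second copy alongside it. Telling mutt not to keep its own copy avoids the duplicate:

      # let Gmail handle the Sent Mail copy on its own
      set copy = no
      # (an equivalent approach is to leave copy alone and instead `unset record`)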

    Read the article

  • User-unique .vimrc file for servers as root user

    - by Scott
    I'm getting thrown into an IDE war at the office, where multiple users have root access on our servers and like to have everything their own way with VIM. Unfortunately, our servers are locked down enough that if you want to do anything, you need root access. We get tired of typing sudo before each command, which would mean constantly typing in the wonderfully complex passwords that are mandated on us, so naturally (although this is frowned upon) we all just execute sudo su - upon login to avoid all of this. Of course, when it comes to VIM and custom .vimrc files, we are oftentimes stepping on someone else's custom .vimrc, and these files contain some whacked-out functionality that may override behaviour we have no idea about, much less the patience to learn. When working as root on a Linux box, is there any way for all of us to maintain our own .vimrc without overwriting the file over and over again every time someone wants to use VIM? Ideally a universal solution across all servers would be best, since we have many virtual machines, all with VIM installed, and we do have our Microsoft Windows user-specific home directories mounted on the servers under /home/username. Any recommendations for accommodating this?
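
    One workaround worth sketching, since it keeps a single shared /root/.vimrc out of the picture: vim honours the VIMINIT environment variable (and a -u flag), so each admin can point the shared root session at their own file for the duration of their shell. Paths and usernames here are illustrative:

      # run after `sudo su -`, or put it in a snippet each admin sources:
      export VIMINIT='source /home/scott/.vimrc'

      # or per invocation, without touching the environment:
      vim -u /home/scott/.vimrc /etc/some/config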

    Read the article

  • How do I get started with the M-Project, a Mobile HTML5 JavaScript Framework, on Windows?

    - by Bruce Whealton
    The website for this great tool, called the M-Project, says that I will need to add a doskey like this: doskey espresso=node C:\Path\To\Espresso\bin\espresso.js $1 $2 $3 $4 (It is a tool for creating native mobile apps with the PhoneGap/Cordova library, and it seems like it would be very helpful in this process.) If I enter that at a command prompt in Windows 7 or 8, it's not going to stick around or persist. Is it an environment variable? The page at http://www.the-m-project.org/ says that it will work on Windows with some additional tools installed. The next line says that Node.js is needed, so I don't know if that is the additional tool mentioned above. Also, in an old discussion I read that one could just install Cygwin. What would that do? It doesn't actually install any Linux distribution. I did install Ubuntu 12.04 Server in VirtualBox because I thought it would be good to learn more about using Linux, as I manage websites that are on a dedicated host. Anyway, the suggestion to install Cygwin did not go into any details... I guess it would allow one to create a bash profile, which would only work in a Cygwin command-line window? Is that right? Isn't there a similar file one could use in Windows, or an environment variable one could set, to achieve the same result? Thanks, Bruce
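
    doskey macros are not environment variables and vanish with the console window; the usual way to make one persist, sketched here, is cmd.exe's AutoRun registry hook, which runs a command (in this case, loading a macro file) every time a new prompt opens. The macro text follows the question's example; the macro file path is invented:

      rem 1) put the macro in a file, e.g. C:\Users\Bruce\macros.doskey, containing the line:
      rem        espresso=node C:\Path\To\Espresso\bin\espresso.js $1 $2 $3 $4
      rem 2) register that file so every new cmd.exe window loads it:
      reg add "HKCU\Software\Microsoft\Command Processor" /v AutoRun /t REG_SZ /d "doskey /macrofile=C:\Users\Bruce\macros.doskey"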

    Read the article

  • Run a rails server on Amazon EC2 [on hold]

    - by Jashwant
    Context: I've tried the rubber gem, but it does not fulfill my requirements (I needed to deploy on an existing instance, so please don't recommend rubber). So, I followed this excellent tutorial: http://stackoverflow.com/questions/15535140/installing-ruby-2-0-and-rails-4-0-0beta-on-aws-ec2 Now I have Ruby 2.0 and Rails 4.0.0 running on AWS EC2. I successfully ran the server with RDS (MySQL) as the database and the default WEBrick as the server (using the command rails server). But I've read that WEBrick is a development server and shouldn't be used in production. What I tried: I googled and came up with some alternatives: Capistrano; Nginx or Apache with Passenger; Passenger with Capistrano; Unicorn; Puma. My question: What exactly are Capistrano and Passenger? Are they middleware to ease my deployment process? I don't see any difficulty in running the rails server command. If they are just middleware, does nginx with Passenger and Capistrano even make sense? Why would I add a learning curve (to learn nginx, Passenger and Capistrano configs) just to run my server? I can just use nginx to deploy my app, can't I? What combination should I use on Amazon EC2 (or maybe on some other production server)?
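
    For a sense of what "nginx with Passenger" amounts to in practice, a sketch of the server block involved; the paths are illustrative and the passenger_root / passenger_ruby values normally come from Passenger's own installer output (passenger-install-nginx-module), not from here:

      # in nginx.conf, inside the http block
      passenger_root /path/to/passenger;
      passenger_ruby /usr/local/bin/ruby;

      server {
          listen 80;
          server_name example.com;
          root /var/www/myapp/current/public;   # the Rails app's public/ directory
          passenger_enabled on;                 # Passenger runs the app; no separate `rails server` process
      }

    Capistrano sits one level above this: it is a deployment tool that pushes new code to the instance and restarts the app, and it is optional; nginx plus Passenger (or an app server such as Unicorn or Puma behind nginx) is what actually serves requests.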

    Read the article

< Previous Page | 390 391 392 393 394 395 396 397 398 399 400 401  | Next Page >