Search Results

Search found 7823 results on 313 pages for 'leaf dev'.

Page 161/313

  • Why is mount -a not mounting fuse drive properly when executed remotely (via Fabric)?

    - by Jim D
    This is a weird bug and I'm not sure where it's coming from. Here's a quick rundown of what I'm doing. I'm trying to mount a FUSE drive on an Amazon EC2 instance running Ubuntu 10.10 using s3fs (FUSE over Amazon S3). s3fs is compiled from source according to the instructions, and I've added an entry to /etc/fstab so that the drive mounts on boot. Here's what /etc/fstab looks like:

        # /etc/fstab: static file system information.
        # <file system>   <mount point>    <type>  <options>  <dump>  <pass>
        proc              /proc            proc    nodev,noexec,nosuid 0 0
        LABEL=uec-rootfs  /                ext4    defaults 0 0
        /dev/sda2         /mnt             auto    defaults,nobootwait,comment=cloudconfig 0 2
        /dev/sda3         none             swap    sw,comment=cloudconfig 0 0
        s3fs#mybucket     /mnt/s3/mybucket fuse    default_acl=public-read,use_cache=/tmp,allow_other 0 0

    The good news is that this works fine. On reboot the drive mounts correctly. I can also do:

        $ sudo umount /mnt/s3/mybucket
        $ sudo mount -a
        $ mountpoint /mnt/s3/mybucket
        /mnt/s3/mybucket is a mountpoint

    Great, right? Well, here's the problem. I'm using Fabric to automate the process of building and managing this instance, and I noticed I was getting this error message when using Fabric to build s3fs and set up the mount:

        mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected

    I isolated the problem and built a Fabric task that reproduces it:

        def remount_s3fs():
            sudo("mount -a")

    which does:

        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs'
        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: mount -a

    (And yes, I was sure to unmount it before running this task.) When I check the mount using mountpoint I get:

        $ mountpoint /mnt/s3/mybucket
        mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected

    But if I run sudo mount -a at the command line, it works. Hrm. Here is that fab task output again, this time in full debug mode:

        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs'
        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a"

    Again, I get that "transport endpoint is not connected" error. I've also tried copying and pasting the exact command into my ssh session (i.e. sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a") and it works fine. So... that's my problem. Any ideas?
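
    One direction to investigate (a sketch, not a confirmed fix): if the s3fs process is being killed when the pseudo-terminal Fabric allocates goes away, detaching the mount from the calling terminal may help. The setsid/redirection idea below is an assumption; the paths are the ones from the post. Recent Fabric versions also let you pass pty=False to sudo(), which is another thing to try.

        # run via Fabric's sudo() or plain ssh; a hedged workaround sketch
        sudo sh -c 'umount /mnt/s3/mybucket 2>/dev/null; setsid mount -a < /dev/null > /dev/null 2>&1'
        mountpoint /mnt/s3/mybucket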

    Read the article

  • Backing up data stored on Amazon S3

    - by Fiver
    I have an EC2 instance running a web server that stores users' uploaded files to S3. The files are written once and never change, but are retrieved occasionally by the users. We will likely accumulate somewhere around 200-500GB of data per year. We would like to ensure this data is safe, particularly from accidental deletions, and would like to be able to restore files that were deleted regardless of the reason.

    I have read about the versioning feature for S3 buckets, but I cannot seem to find whether recovery is possible for files with no modification history. See the AWS docs here on versioning: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html In those examples, they don't show the scenario where data is uploaded, never modified, and then deleted. Are files deleted in this scenario recoverable?

    Then we thought we might just back up the S3 files to Glacier using object lifecycle management: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html But it seems this will not work for us, as the file object is not copied to Glacier but moved to Glacier (more accurately, it seems it is an object attribute that is changed, but anyway...). So it seems there is no direct way to back up S3 data, and transferring the data from S3 to local servers may be time-consuming and may incur significant transfer costs over time.

    Finally, we thought we would create a new bucket every month to serve as a monthly full backup, and copy the original bucket's data to the new one on Day 1. Then, using something like duplicity (http://duplicity.nongnu.org/), we would synchronize the backup bucket every night. At the end of the month we would put the backup bucket's contents in Glacier storage, and create a new backup bucket using a new, current copy of the original bucket... and repeat this process. This seems like it would work and minimize the storage/transfer costs, but I'm not sure if duplicity allows bucket-to-bucket transfers directly without bringing data down to the controlling client first.

    So I guess there are a couple of questions here. First, does S3 versioning allow recovery of files that were never modified? Is there some way to "copy" files from S3 to Glacier that I have missed? Can duplicity or any other tool transfer files between S3 buckets directly to avoid transfer costs? Finally, am I way off the mark in my approach to backing up S3 data? Thanks in advance for any insight you could provide!
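
    A sketch of the versioning-plus-second-bucket idea using the AWS CLI (the CLI itself is an assumption; any S3 client with server-side copy would do, and the bucket names are placeholders):

        # Turn on versioning so a DELETE only adds a delete marker and the original
        # object version stays recoverable, even if it was never modified.
        aws s3api put-bucket-versioning \
            --bucket mybucket \
            --versioning-configuration Status=Enabled

        # Server-side copy into a separate backup bucket; objects are copied
        # bucket-to-bucket without passing through the machine running the command.
        aws s3 sync s3://mybucket s3://mybucket-backup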

    Read the article

  • Unencrypted Image of Truecrypt-Encrypted System Partition

    - by Dexter
    The general tenor around the internet seems to be that you can't create images of system partitions that have been encrypted (with TrueCrypt) other than with dd or similar sector-by-sector copy tools. These files however are very impractical given their size (and are obviously incompressible), which makes keeping multiple states/backups of your system partition rather expensive (especially considering current hdd prices). The problem is that backup tools (like Acronis True Image, Clonezilla, etc.) won't give you the option to create an image of (mounted/opened) TrueCrypt partitions, or that there is no recovery environment for restoring the backup that would allow running TrueCrypt before doing any actual restoring.

    After some trial and error, however, I believe I have found a very simple way. Since TrueCrypt (running in Linux) creates a virtual block device that it uses for mounting the unencrypted partitions into the file system, partclone can be used for creating/restoring images. What I did:

    1. Boot up a Linux live disk.
    2. Mount/open the drive/device/partition in TrueCrypt.
    3. Unmount the filesystem mount point again, like so: umount /media/truecryptX ("X" being the partition number assigned by TrueCrypt).
    4. Use partclone (this is what Clonezilla would do too, except that Clonezilla only offers to back up real drive partitions, not virtual block devices):

        partclone.ntfs -c -s /dev/mapper/truecryptX -o nameOfBackupFile

    For restoring, steps 1-3 remain the same, and step 4 is:

        partclone.ntfs -r -s nameOfBackupFile -o /dev/mapper/truecryptX

    A backup and test-restore of the system (with this method) seems to have worked fine (and the changed settings were reverted to the backup state). The backup file is ~40 GB (and compressible down to <8GB with 7zip/LZMA2 on the "fast" setting). I can't quite believe that I'm the only one who wants to create images of encrypted drives but doesn't want to waste 100GB on the backup of one single system state. So my question now is: given how simple this was, and that no one seems to mention anywhere that this is possible - did I miss something? Or did I do something wrong? Is there any situation that I didn't think of where this method will fail?

    Obviously, the backup file needs to be stored in some other encrypted place in order to still remain confidential, since it is unencrypted. Also, in order to do a full "bare metal" restore, one would have to actually first (re-)install Windows, encrypt it, and only then restore the backup file. The funny thing however is that you won't need to back up any partition tables, etc., since the reinstall will effectively take care of that. Is there anything else? This is imho still a lot better than having sector-by-sector images.

    Read the article

  • Windows XP Professional Version 2002 SP3 restarts when I try to Hibernate

    - by dario_ramos
    I googled this and tried some solutions, but nothing seems to work. In the past, Hibernate worked fine. Someone told me that this can be caused by specific hardware, and I have lots of that, because this is a dev machine which we use to develop modules that interact with lots of different hardware. Moreover, this machine also uses RAID, which might have something to do with it. Is there any way to troubleshoot this by looking at some log file or using some tool?

    Read the article

  • Use bootable vhd with Virtualbox

    - by user18151
    Hello, I want to create a bootable Windows 7 VHD using the steps mentioned at: http://www.microsoft.com/downloads/details.aspx?FamilyID=80ede31d-3509-407b-a896-0beea8705589&displaylang=en However, I wanted to know if I will be able to access the VHD using VirtualBox too. I intend to install VS2008 in the VM and use it in VirtualBox when doing quick work, and on native hardware when doing a lot of work. I don't want to mess up my actual Win7 installation with VS2008 dev work.

    Read the article

  • IIS7 - Basic Authentication Module missing?

    - by FlySwat
    I'd like to use basic HTTP authentication to keep people out of our dev site instance, since it is unfortunately exposed to the wild internet. However, in IIS7, the only authentication modes listed are Forms, Anonymous and Impersonation. Where did the "Basic Authentication" module go, and how can I get it back?

    Read the article

  • What is the garbage text that is being printed by wvdial in terminal?

    - by Hrishi
    When I dial using wvdial, it sometimes prints some garbage text into the terminal. This does not happen every time, but in the garbage text I can see some readable strings, which are often IRC logs (from XChat) or GET requests from the browser. One of my friends told me that this is probably something it's reading from /dev/random for random entropy, but I couldn't find any supporting information. What is this text, and why is it being printed to the terminal? See the picture below for an example:

    Read the article

  • http benchmarking?

    - by Sam Williams
    I'm running a Varnish + nginx (php-fpm) stack and I'm using ab, but it keeps messing up.

        [root@localhost src]# ab -k -n 100000 -c 750 http://192.168.135.12/index.php
        This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Copyright 2006 The Apache Software Foundation, http://www.apache.org/

        Benchmarking 192.168.135.12 (be patient)
        apr_socket_recv: Connection reset by peer (104)

    Is there anything else I can use? Or am I doing it wrong?
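
    A couple of hedged things to try (assumptions, not a diagnosis of the reset): a newer ApacheBench build supports -r to keep going on socket receive errors, lowering the concurrency often avoids the reset entirely, and siege is a common alternative load generator if it is installed.

        # keep running despite receive errors, at a lower concurrency (requires an ab build with -r)
        ab -r -k -n 100000 -c 200 http://192.168.135.12/index.php

        # alternative load generator, if available
        siege -b -c 200 -t 60S http://192.168.135.12/index.php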

    Read the article

  • Advice on moving a machine room to a new location?

    - by MikeJ
    Our company is moving to new offices in a couple of months, and I am responsible for looking after the move of the development servers in the company. Most of the dev equipment is in five 42U cabinets, plus a rack for switching/routing equipment. How do most people do this sort of thing? Move the cabinets whole, or extract the individual components and move the racks empty? Any advice on prep and shutdown before the move would be welcome.

    Read the article

  • Mount Windows share on Linux boot

    - by Delameko
    I'm running VirtualBox on Windows, with Ubuntu 10.04 installed as a VM. Whenever I log in I have to run the following command to mount my shared Windows web dev folder:

        sudo mount.vboxsf web_apps /mnt/web_apps

    Where can I put this line (minus the sudo) so that it will run once when Linux boots up? I'm guessing there must be a root .profile or .login script that runs at some point?
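
    A sketch of two boot-time options (assuming the Guest Additions vboxsf module is loaded at boot; the share and mount-point names are the ones from the post):

        # Option 1: an /etc/fstab entry so the share is mounted automatically
        web_apps   /mnt/web_apps   vboxsf   defaults   0   0

        # Option 2: add the mount command to /etc/rc.local, before the final "exit 0"
        # (rc.local runs as root, so no sudo is needed)
        mount.vboxsf web_apps /mnt/web_apps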

    Read the article

  • RHEL json gem installation requires make?

    - by Salahuddin559
    When I try to install the json gem (gem install json), it first fails because of a missing dev package. After fixing that, it fails with "sh: make: command not found" and "ERROR: Error installing json: ERROR: Failed to build gem native extension.". Why is it failing on make? Note this is not a Mac; this is RHEL 5 (4 or 5, not sure). Why is it not able to build the gem's native extension?
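
    A sketch of the usual cause (an assumption: the gem's C extension needs a compiler toolchain and the Ruby headers; the package names below are the stock RHEL ones and may differ if Ruby was installed another way):

        sudo yum install -y gcc make ruby-devel rubygems
        gem install json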

    Read the article

  • Shouldn't emesene themes change the emesene indicator icon?

    - by D Connors
    As it stands right now, I can't get the emesene icon in the indicator applet (or notification area, or whatever it's named) to change. No matter what icon theme I choose, even after installing new themes, the icon is always the default one. Is that standard behavior, or am I hitting a bug? I'm running the latest version of emesene from their PPA (1.6-dev) on Lucid Lynx.

    Read the article

  • Nagios shell script cannot be executed

    - by MeinAccount
    I'm trying to monitor GitLab with nagios. I've created the following command definition and shell script, but when checking the service I'm receiving the following e-mail. How can I solve this? The file is executable.

        [...] nagios : 3 incorrect password attempts ; TTY=unknown ; PWD=/ ; USER=git ; COMMAND=/bin/bash -c /var/lib/nagios/custom_plugins/check_gitlab.sh

    Command definition:

        define command {
            command_name custom_check_gitlab
            command_line /var/lib/nagios/custom_plugins/check_gitlab.sh
        }

    Shell script:

        #! /bin/sh
        # [...]
        RAILS_ENV="production"

        # Script variable names should be lower-case not to conflict with internal
        # /bin/sh variables such as PATH, EDITOR or SHELL.
        app_root="/home/git/gitlab"
        app_user="git"
        unicorn_conf="$app_root/config/unicorn.rb"
        pid_path="$app_root/tmp/pids"
        socket_path="$app_root/tmp/sockets"
        web_server_pid_path="$pid_path/unicorn.pid"
        sidekiq_pid_path="$pid_path/sidekiq.pid"

        ### Here ends user configuration ###

        # Switch to the app_user if it is not he/she who is running the script.
        if [ "$USER" != "$app_user" ]; then
            sudo -u "$app_user" -H -i $0 "$@"; exit;
        fi

        # Switch to the gitlab path, if it fails exit with an error.
        if ! cd "$app_root" ; then
            echo "Failed to cd into $app_root, exiting!"; exit 1
        fi

        ### Init Script functions

        check_pids(){
            if ! mkdir -p "$pid_path"; then
                echo "Could not create the path $pid_path needed to store the pids."
                exit 1
            fi
            # If there exists a file which should hold the value of the Unicorn pid: read it.
            if [ -f "$web_server_pid_path" ]; then
                wpid=$(cat "$web_server_pid_path")
            else
                wpid=0
            fi
            if [ -f "$sidekiq_pid_path" ]; then
                spid=$(cat "$sidekiq_pid_path")
            else
                spid=0
            fi
        }

        # Checks whether the different parts of the service are already running or not.
        check_status(){
            check_pids
            # If the web server is running kill -0 $wpid returns true, or rather 0.
            # Checks of *_status should only check for == 0 or != 0, never anything else.
            if [ $wpid -ne 0 ]; then
                kill -0 "$wpid" 2>/dev/null
                web_status="$?"
            else
                web_status="-1"
            fi
            if [ $spid -ne 0 ]; then
                kill -0 "$spid" 2>/dev/null
                sidekiq_status="$?"
            else
                sidekiq_status="-1"
            fi
        }

        check_pids
        check_status

        if [ "$web_status" != "0" -a "$sidekiq_status" != "0" ]; then
            echo "GitLab is not running."
            exit 2
        fi
        if [ "$web_status" != "0" ]; then
            printf "The GitLab Unicorn webserver is \033[31mnot running\033[0m.\n"
            exit 1
        fi
        if [ "$sidekiq_status" != "0" ]; then
            printf "The GitLab Sidekiq job dispatcher is \033[31mnot running\033[0m.\n"
            exit 1
        fi
        if [ "$web_status" = "0" -a "$sidekiq_status" = "0" ]; then
            printf "GitLab and all it's components are \033[32mup and running\033[0m.\n"
            exit 0
        fi
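
    A hedged guess at where to look, not a verified fix: the mail suggests the script's own sudo -u git call is running without a password or a TTY when Nagios invokes it. One sketch is to grant the nagios user passwordless sudo for exactly this invocation (the file name is arbitrary, and the command string may need adjusting to match what sudo actually logs; the log above shows /bin/bash -c ... because of the -i login shell):

        # /etc/sudoers.d/nagios-gitlab (edit with: visudo -f /etc/sudoers.d/nagios-gitlab)
        Defaults:nagios !requiretty
        nagios ALL=(git) NOPASSWD: /bin/bash -c /var/lib/nagios/custom_plugins/check_gitlab.sh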

    Read the article

  • Cross-platform restart of Apache

    - by l0b0
    I'd like to have a single command that'll restart Apache on any *nix OS. Currently I'm working with Ubuntu, which has:

        /usr/sbin/apache2ctl
        /usr/sbin/service
        no apachectl
        no httpd

    and Scientific Linux CERN 5, which has:

        /usr/sbin/apachectl
        /etc/init.d/httpd
        no apache2ctl
        no service

    I'd like to avoid using a hack like:

        which service 2>/dev/null || which /etc/init.d/httpd
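
    A sketch of a fall-through wrapper, using only the control commands named above (assumes it runs as root or under sudo, and that a plain restart is acceptable):

        #!/bin/sh
        # Try each known Apache control command in turn and restart with the first one found.
        if command -v apache2ctl >/dev/null 2>&1; then
            apache2ctl restart
        elif command -v apachectl >/dev/null 2>&1; then
            apachectl restart
        elif [ -x /etc/init.d/httpd ]; then
            /etc/init.d/httpd restart
        elif command -v service >/dev/null 2>&1; then
            service apache2 restart 2>/dev/null || service httpd restart
        else
            echo "no known Apache control command found" >&2
            exit 1
        fi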

    Read the article

  • Routing and arp

    - by adolgarev
    If I have this route:

        > ip ro
        192.168.14.0/24 dev eth0

    another host can obtain my MAC address. But if I flush the routing info:

        > ip ro flush table main

    ARP resolution doesn't work. Broadcast packets "Who has 192.168.14.149" reach eth0, but the OS (Linux) doesn't respond, despite eth0 having the address 192.168.14.149. What connection exists between routing and ARP resolution?

    Read the article

  • Why is the PDF format used?

    - by dan_vitch
    I will admit that I am new to the tech/dev field. It's becoming a trend that every time I have to work with PDFs, a part of me dies. Why is this format as ubiquitous as it seems to be? Is it just non-tech people who prefer PDFs?

    Read the article

  • How to tune TCP TIME_WAIT timeout on Solaris?

    - by Hongli Lai
    I'm trying to change the TCP TIME_WAIT timeout on Solaris. According to some Google results I need to run this command:

        ndd -set /dev/tcp tcp_time_wait_interval 60000

    However I get:

        operation failed: Not owner

    What am I doing wrong? I'm already running ndd as root. Is there another way to tune TIME_WAIT?

    Read the article

  • How to fix grub after moving root partition?

    - by Grzenio
    Hi, because I am using one of the new WD disks, I am trying to align my root partition with the physical sectors, as described here: http://community.wdc.com/t5/Desktop/Problem-with-WD-Advanced-Format-drive-in-LINUX-WD15EARS/m-p/10920#M631

    So I copied all files to a temporary location, deleted my partition (/dev/sda3), recreated it a few cylinders later (same name), and copied the files to the newly created partition. But now when I try to boot, I get my old GRUB menu, but after selecting my kernel version it hangs... Any idea how I can fix it?
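
    A sketch of one recovery path from a live CD, assuming the hang is because the recreated filesystem has a new UUID that the boot configuration still references (the device names and the GRUB-legacy menu.lst path are assumptions):

        sudo blkid /dev/sda3                          # note the new filesystem UUID
        sudo mount /dev/sda3 /mnt
        # update any stale UUID= (or device) references in /mnt/etc/fstab and
        # /mnt/boot/grub/menu.lst (or re-run update-grub from a chroot for GRUB 2)
        sudo grub-install --root-directory=/mnt /dev/sda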

    Read the article

  • Linux filesystem with inodes close on the disk

    - by pts
    I'd like to make ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2TB of total file size, and some directories containing as many as 10000 files. Which filesystem should I use and how should I configure it?

    As far as I understand, the reason why ls -laR is slow is that it has to stat(2) each inode (i.e. 1 million stat(2)s), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek. Here are some solutions I had in mind, none of which I am satisfied with:

    1. Create the filesystem on an SSD, because the seek operations on SSDs are fast. This wouldn't work, because a 2TB SSD doesn't exist, or it's prohibitively expensive.

    2. Create a filesystem which spans two block devices: an SSD and a disk; the disk contains file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)?

    3. Use find /media/myfs on ext2, ext3 or ext4, instead of ls -laR /media/myfs, because the former can take advantage of the d_type field (see the getdents(2) man page), so it doesn't have to stat. Unfortunately, this doesn't meet my requirements, because I need all file sizes as well, which find /media/myfs doesn't print.

    4. Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count of more than 1, but that's not a problem since I have only a few dozen such files in my use case.

    5. Adjust some settings in /proc or sysctl so that inodes are locked into system memory forever. This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? I don't like this idea, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also I'd like to lock the POSIX extended attributes in memory as well. What do I have to do for that?

    6. Use a filesystem which has an online defragmentation tool which can be instructed to relocate inodes to the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device fetched into the kernel in-memory cache without seeking, and then the stat(2) operations would be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?

    Read the article

  • Deny login from certain hosts if logging in with specific sql credentials

    - by Dave
    I want to stop some of our developers from connecting to the production SQL Server using a specific SQL account. They have rights to connect through Windows authentication with lower rights. They claim that changing the password would affect too many other processes running on our processing machine. So, for now, I want to deny access if they're connecting from their dev machines. Another way this would work is if I could just allow connections from one specific host.

    Read the article
