Search Results

Search found 31372 results on 1255 pages for 'ubuntu beginner'.


  • S3sync not working

    - by user57833
    Hello, I managed to get s3sync to upload my test folder to Amazon S3 and can see it in the AWS Management Console. Downloading the data back to a test folder results in the following error message:
      root@mybucketname:/var/s3sync# ./week_download.sh
      s3Prefix backups/weekly
      localPrefix /var/s3sync/testdown/weekly
      s3TreeRecurse mybucketname backups/weekly
      Creating new connection
      Trying command list_bucket mybucketname prefix backups/weekly max-keys 200 delimiter / with 100 retries left
      Response code: 200
      prefix found: /
      s3TreeRecurse mybucketname backups/weekly /
      Trying command list_bucket mybucketname prefix backups/weekly/ max-keys 200 delimiter / with 100 retries left
      Response code: 200
      S3 item backups/weekly/
      s3 node object init. Name: Path:backups/weekly Size:0 Tag:d41d8cd98f00b204e9800998ecf8427e Date:Fri Oct 29 14:21:53 UTC 2010
      local node object init. Name: Path:/var/s3sync/testdown/weekly/ Size: Tag: Date:
      source:
      dest:
      Update node
      s3sync.rb:638:in `initialize': No such file or directory - /var/s3sync/testdown/weekly/.s3syncTemp (Errno::ENOENT)
        from s3sync.rb:638:in `open'
        from s3sync.rb:638:in `updateFrom'
        from s3sync.rb:393:in `main'
        from s3sync.rb:735
    I am using the following download script:
      #!/bin/bash
      # script to download local directory upto s3
      cd /var/s3sync/
      export AWS_ACCESS_KEY_ID=nothing to see here
      export AWS_SECRET_ACCESS_KEY=nothing to see here
      export SSL_CERT_DIR=/var/s3sync/certs
      ruby s3sync.rb -r -v -d --progress --make-dirs mybucket:backups/weekly /var/s3sync/testdown
      # copy and modify line above for each additional folder to be synced
    Any ideas? Does the download script need to download to the source folder of the Amazon S3 upload, i.e. the testup folder? I was hoping that, in the event of a complete failure where the original folders no longer exist, it would just download everything for me. Note: I changed my bucket names to "mybucketname" so that they are not public!
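    A minimal sketch of one likely direction (an assumption, not a confirmed answer): the Errno::ENOENT on .s3syncTemp suggests the local destination directory does not yet exist, so pre-creating it before the sync should let s3sync write its temporary file there:
      # hypothetical fix: create the local target so s3sync can create
      # /var/s3sync/testdown/weekly/.s3syncTemp inside it
      mkdir -p /var/s3sync/testdown/weekly
      ruby s3sync.rb -r -v -d --progress --make-dirs mybucket:backups/weekly /var/s3sync/testdown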

    Read the article

  • Graceful logout in dwm

    - by Riateche
    I want dwm to close all windows gracefully when I press the quit hotkey. I like Unity's behaviour: it displays a list of windows denying logout (for example, editors with unsaved changes) and does not log out until all issues are resolved and the applications are closed. By default, dwm just ends the X session and all running applications are killed. I was thinking about writing a script that retrieves the list of all windows, gracefully closes them, and waits for their processes to finish. But I don't even know how to close the windows. The only way I know is using wmctrl, and that utility doesn't work with dwm.

    Read the article

  • LVM Extend... not sure of the filesystem

    - by Dan
    I would like to extend my LVM partition. First I did
      lvextend -L +100G /dev/server/home
    Now I still have to extend the filesystem. The tutorials tell me to use resize2fs, but that only works for ext2 and ext3. I'm not even sure what filesystem I have... fdisk /dev/server/home/ doesn't work... how do I know what kind of filesystem I have on my LVM partition?
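    A short sketch of the usual ways to identify the filesystem on a logical volume before growing it (device path taken from the question above):
      sudo blkid /dev/server/home        # prints TYPE="ext4", "xfs", ...
      sudo file -sL /dev/server/home     # reads the superblock directly
      df -T                              # if mounted, shows the fs type per mount point
      # for ext2/3/4 the grow step would then be:
      sudo resize2fs /dev/server/home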

    Read the article

  • Postfix: How to configure Postfix with virtual Dovecot mailboxes?

    - by user75247
    I have configured a Postfix mail server for two domains: domain1.com and domain2.com. In my configuration, domain1 has both virtual users with Maildirs and aliases to forward mail to local users (e.g. root, webmaster) and some small mailing lists. It also has some virtual mappings to non-local domains. Domain2, on the other hand, has only virtual alias mappings, mainly to corresponding 'users' at domain1 (e.g. mails to [email protected] should be forwarded to [email protected]). My problem is that currently Postfix accepts mail even for those users that don't exist in the system. Mail to existing users and /etc/aliases works fine. The Postfix documentation states that the same domain should never be specified in both mydestination and virtual_mailbox_maps, but if I specify mydestination as blank then Postfix validates recipients against virtual_mailbox_maps but rejects mail for local aliases of domain1.com.
    /etc/postfix/main.cf:
      myhostname = domain1.com
      mydomain = domain1.com
      mydestinations = $myhostname, localhost.$mydomain, localhost
      virtual_mailbox_domains = domain1.com
      virtual_mailbox_maps = hash:/etc/postfix/vmailbox
      virtual_mailbox_base = /home/vmail/domains
      virtual_alias_domains = domain2.com
      virtual_alias_maps = hash:/etc/postfix/virtual
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      virtual_transport = dovecot
    /etc/postfix/virtual:
      domain1.com            right-hand-content-does-not-matter
      firstname.lastname     user1
      [more aliases..]
      domain2.com            right-hand-content-does-not-matter
      @domain2.com           @domain1.com
    /etc/postfix/vmailbox:
      [email protected]    user1/Maildir
      [email protected]    user2/Maildir
    /etc/aliases:
      root: :include:/etc/postfix/aliases/root
      webmaster: :include:/etc/postfix/aliases/webmaster
      [etc..]
    Is this approach correct, or is there some other way to configure Postfix with Dovecot (virtual) Maildirs and Postfix aliases?
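    One hedged direction (a sketch only, not a verified configuration): keep domain1.com out of mydestination, list it only as a virtual mailbox domain so unknown recipients are rejected, and move the forwarding entries that used to rely on /etc/aliases into virtual_alias_maps. The addresses below are illustrative, not taken from the question:
      sudo postconf -e 'mydestination = localhost'
      sudo postconf -e 'virtual_mailbox_domains = domain1.com'
      sudo postconf -e 'virtual_alias_maps = hash:/etc/postfix/virtual'
      # /etc/postfix/virtual then also carries the old alias-style entries, e.g.:
      #   root@domain1.com       webmaster@domain1.com
      sudo postmap /etc/postfix/virtual && sudo postfix reload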

    Read the article

  • sudoer scheme to allow useful access to another web developer yet retain future control of a virtual private server

    - by Tchalvak
    Background: Virtual Private Server
    I have a virtual private server that I'm looking to host multiple websites on, and provide access to another web developer. I don't care about putting too many constraints on him, though I wouldn't mind isolating the site that he'll be developing from other sites on the server that I will develop.
    The problem: retain control
    Mainly what I want is to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote and other administrative functions that don't deal with web software. If I make him an admin, he can sudo su - and become root and remove root control from me, for example.
    I need him not to be able to:
    - take away other admin permissions
    - change the root password
    - have control over other security/administrative functions
    I would like him to still be able to:
    - install software (through apt-get)
    - restart apache
    - access mysql
    - configure mysql/apache
    - reboot
    - edit web-development configuration type files in /etc/
    Other Standard Setups would be happily considered
    I've never really set up a good sudoers file, so simple example setups would be very useful, even if they're only somewhat similar to the settings that I'm hoping for above (see the rough sketch below).
    Edit: I have not yet finalized permissions, so standard, useful sudo setups are certainly an option; the lists above are more what I'm hoping I can do, and I don't know that that setup can be done. I'm sure that people have solved this type of problem before somehow, though, and I'd like to go with something somewhat tested as opposed to something I've homegrown.
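    A rough, untested sudoers sketch along those lines (the username "webdev" and the command paths are assumptions; edit with visudo -f /etc/sudoers.d/webdev):
      # commands the developer may run as root, without granting full sudo
      Cmnd_Alias WEBDEV_CMDS = /usr/bin/apt-get, /usr/bin/mysql, \
                               /usr/sbin/service apache2 *, /sbin/reboot
      webdev ALL=(root) WEBDEV_CMDS
      # restricted editing of web-related config files via sudoedit
      webdev ALL=(root) sudoedit /etc/apache2/*, sudoedit /etc/mysql/*
    Note that unrestricted apt-get can still be leveraged to gain root (by installing an arbitrary package), so this only narrows the surface rather than fully enforcing the list above.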

    Read the article

  • Radeon HD4850 card, Ubuntu 12.04.1 LTS, installed fglrx drivers but still running VESA; how do I solve it?

    - by user113416
    I have an ATI HD4850 card and Ubuntu 12.04.1 LTS. To begin with, after a fresh install, I installed the fglrx drivers from System -> Additional Drivers; according to fglrxinfo everything was alright, but the system was running on vesa:sem. After that, I reinstalled and installed drivers according to many tutorials, but still got that problem. One of the tutorials was here: What is the correct way to install ATI Catalyst Video Drivers (fglrx)? Of course, I attempted to install the 12.6 driver. Now I have a fresh install of Ubuntu and don't want to touch anything without someone's support, because my three-day nightmare didn't give results. What must I do to get adequate performance from the video card? Thanks in advance.
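    A few diagnostic commands commonly used to confirm which driver is actually in use (a sketch, not a fix in itself):
      lspci -nnk | grep -A3 VGA                        # "Kernel driver in use: fglrx" vs "radeon"
      fglrxinfo                                        # renderer string should mention ATI/AMD, not VESA
      grep -iE 'fglrx|vesa|radeon' /var/log/Xorg.0.log | head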

    Read the article

  • delete multiple files on linux with spaces in file names

    - by raido
    I have a directory on my Linux box with over 10000 files that I have to delete. Running...
      sudo rm -rf /var/tmp/*
    gives the error message...
      sudo: unable to execute /bin/rm: Argument list too long
    The solution to this is to run
      sudo find /var/tmp | xargs sudo rm
    This only works for files with no spaces in the file name. However, some of the files have names with spaces in them and they are not deleted. For example, if a file is named 'A File With Spaces in the Name.dat', running the command gives me errors like this....
      rm: cannot remove `/var/tmp/A': No such file or directory
      rm: cannot remove `File': No such file or directory
      rm: cannot remove `With': No such file or directory
      rm: cannot remove `Spaces': No such file or directory
      rm: cannot remove `in': No such file or directory
      rm: cannot remove `the': No such file or directory
      rm: cannot remove `Name.dat': No such file or directory
    How do I pass the complete file path to xargs sudo rm without breaking up the file name?
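    The usual null-delimited variants (a sketch against the same /var/tmp path; -mindepth 1 keeps the directory itself from being removed):
      sudo find /var/tmp -mindepth 1 -print0 | xargs -0 sudo rm -rf
      # or let find do the deleting itself:
      sudo find /var/tmp -mindepth 1 -delete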

    Read the article

  • map Linux drives to Windows 7 for media streaming over the internet

    - by Ortix92
    I'm trying to map a Linux network drive to my Windows 7 laptop, however this laptop is not on the LAN. At home I simply use Samba, but this obviously won't work over the internet. I'm trying to avoid VPN, so if there are other solutions I would like to know about them. The reason I ask is because my university does this as well: we can simply map folders to our computers without VPN connections. I'm not sure what they are running as servers. The main reason is that I want to be able to access my files stored on my home server wherever I go. They are located in the /home/ folder (videos, music and pictures folders). I'm trying to keep my websites and media separate from each other. I wouldn't mind accessing them from a web interface either, but I would like to keep the directory structure intact. I remember an app like that coming with Winamp and running it on my Windows PC (as the server); unfortunately it doesn't work for Linux. Any ideas on what I could use? Would XBMC be able to help me out with this? I did do some research but I couldn't find any concrete answers.
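    One hedged possibility (names, paths and passwords below are placeholders): serve the media folder over WebDAV with Apache, which Windows 7 can map as a network drive without a VPN; universities often expose network folders this way, though that is only a guess here.
      sudo a2enmod dav dav_fs
      sudo htpasswd -c /etc/apache2/media.passwd mediauser
      # illustrative /etc/apache2/conf.d/media-dav.conf:
      #   Alias /media /home/mediauser
      #   <Location /media>
      #       DAV On
      #       AuthType Basic
      #       AuthName "home-media"
      #       AuthUserFile /etc/apache2/media.passwd
      #       Require valid-user
      #   </Location>
      sudo service apache2 reload
    Windows' built-in WebDAV client may refuse Basic auth over plain HTTP, and the credentials travel in the clear, so enabling HTTPS on that vhost is worth it in any case.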

    Read the article

  • Failing SSHFS connection drags down the system

    - by skerit
    From time to time my sshfs mount fails. All programs using the mount freeze when it happens; I can't even ls anything or use Nautilus. Is there a way to find out what the cause is and how to handle it? I've noticed regular SSH sessions to the server get their fair share of "Write failed: broken pipe" disconnects, too. If I wait long enough (and I'm talking about 20-ish minutes here) it will auto-reconnect and things start working again.
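    Mount options commonly suggested to keep a flaky sshfs link from wedging client programs, plus a way to drop a hung mount without waiting (host and paths are placeholders):
      sshfs user@host:/remote /mnt/remote \
          -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3
      # if the mount is already hung, lazy-unmount it:
      fusermount -uz /mnt/remote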

    Read the article

  • How do I remove partitions in the 'Places' section of the Gnome file dialog?

    - by Grumbel
    In the Gnome file dialog under 'Places' I can add and edit bookmarks to directories. That list, however, contains not only my bookmarks but also a list of all partitions on the system, which Gnome seems to gather automatically. I can't edit that list, as the right-click menu items for it are greyed out. How can I get rid of those automatically generated entries, or limit the list to just my bookmarks?

    Read the article

  • Network Bridging on Linux for OpenVPN

    - by Coyote
    I've been following all the OpenVPN bridge tutorials I can, but I'm still missing something. Does anyone know of a super detailed tutorial/explanation of bridging? If anyone has bridging running, can I get a copy of your interfaces file to see how you've got it going? (Obviously change the IP addresses, just please change them consistently.)
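    An illustrative /etc/network/interfaces bridge stanza of the kind those tutorials build (the addresses are placeholders, not a known-good config; the tap interface is usually attached by OpenVPN's up script rather than listed here):
      auto br0
      iface br0 inet static
          address 192.168.1.10
          netmask 255.255.255.0
          gateway 192.168.1.1
          bridge_ports eth0
          bridge_stp off
          bridge_fd 0
      # an OpenVPN bridge-start style script then typically runs:
      #   openvpn --mktun --dev tap0 && brctl addif br0 tap0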

    Read the article

  • Ubuntu 12.04 can't find root partition (it doesn't look for btrfs partitions), ends up with a kernel panic [closed]

    - by zalesz
    Possible Duplicate: There's an issue with an Alpha/Beta Release of Ubuntu, what should I do?
    I'm running Ubuntu 12.04 with kernel 3.2.0-17 and all partitions formatted as btrfs. Everything was OK until kernels 3.2.0-18/19. Now the system doesn't load; after trying to run it in recovery mode there is a message that a kernel panic occurred because there is no partition with ext3/4 (and some other partition types listed), but I don't see any btrfs-like type among them. Any ideas how to fix it? Best
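    A hedged first check from the older, still-booting kernel (the exact kernel package name is assumed from the question): make sure the newer kernel's initramfs actually contains the btrfs module, and rebuild it if not.
      lsinitramfs /boot/initrd.img-3.2.0-19-generic | grep btrfs
      sudo update-initramfs -u -k 3.2.0-19-generic
      sudo update-grub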

    Read the article

  • Stop sending packets to private IPs

    - by SlasherZ
    I have a problem: my server got locked down because it was sending packets to private IPs. My question is, what is the best solution to stop that? Here is the log that I got from my hosting provider:
      [Mon Jun 2 00:04:36 2014] forward-to-private:IN=br0 OUT=br0 PHYSIN=vm-44487.0 PHYSOUT=eth0 MAC=78:fe:3d:47:3d:20:00:1c:14:01:4e:cd:08:00 SRC=78.46.198.21 DST=192.168.249.128 LEN=1454 TOS=0x00 PREC=0x00 TTL=64 ID=58859 DF PROTO=UDP SPT=41366 DPT=41234 LEN=1434
      [Mon Jun 2 00:17:15 2014] forward-to-private:IN=br0 OUT=br0 PHYSIN=vm-44487.0 PHYSOUT=eth0 MAC=78:fe:3d:47:3d:20:00:1c:14:01:4e:cd:08:00 SRC=78.46.198.21 DST=192.168.249.128 LEN=1456 TOS=0x00 PREC=0x00 TTL=64 ID=52234 DF PROTO=UDP SPT=55430 DPT=41234 LEN=1436
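    Since the log shows the traffic being bridged out of a guest (PHYSIN=vm-44487.0), one hedged stopgap is to drop forwards to RFC1918 ranges on the host (interface name taken from the log; the real fix is stopping whatever inside the VM is emitting the packets):
      sudo iptables -I FORWARD -o br0 -d 10.0.0.0/8     -j DROP
      sudo iptables -I FORWARD -o br0 -d 172.16.0.0/12  -j DROP
      sudo iptables -I FORWARD -o br0 -d 192.168.0.0/16 -j DROP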

    Read the article

  • How can I view my video card's temperature via SSH?

    - by NT3RP
    I've finally managed to set up my two ATI Radeon 6950 video cards in my machine, but the cards can get quite hot. Based on the arrangement of my apartment, I want to be able to SSH into the machine and execute a command to find out the temperature. What I have tried so far is this...
      export DISPLAY=:0.0
      sudo aticonfig --adapter=0 --od-gettemperature
    However, when I do that via SSH, I get the following error:
      ERROR - X needs to be running to perform ATI Overdrive(TM) commands
    If I turn on X forwarding when I remote into the machine, then it just seems to affect my local machine instead of the remote machine. Am I doing this correctly? Is there a better way to monitor my video card's temperature?
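    A commonly cited workaround (a sketch, not verified here): point aticonfig at the X server already running locally on the remote box, including its authority file, instead of forwarding X11. The XAUTHORITY path below is an assumption and varies by display manager.
      sudo env DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 \
          aticonfig --adapter=all --od-gettemperature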

    Read the article

  • Enter response once prompt returns?

    - by mjb
    It's neither a secure idea nor one I'd recommend elsewhere, but I have a situation where occasionally it takes a while for my Ansible ad-hoc command to respond. I'd love to pipe, pass arguments, or do whatever is needed to push the required text into the prompt so I can walk away and know it will finish. Ex:
      $ ansible all -m shell -a "reboot" --ask-pass
      Password:
      blah blah blah it worked
    I'd love to send an argument or << or something to get the password in. Is that possible?
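    Two hedged ways around typing at the prompt (both leave the password in shell history, in line with the caveat above; "SECRET" is a placeholder, and the connection-password path typically needs the sshpass package installed):
      # pass the connection password as an extra variable instead of --ask-pass
      ansible all -m shell -a "reboot" -e "ansible_ssh_pass=SECRET"
      # or drive the prompt with expect
      expect -c 'spawn ansible all -m shell -a "reboot" --ask-pass
                 expect "SSH password:"
                 send "SECRET\r"
                 expect eof'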

    Read the article

  • Fresh install of nginx causes browser to download index.html instead of opening it

    - by 010110110101
    When I view http://localhost:90 in Chrome, the file is downloaded instead of displayed. This question has been asked a lot of times on SO, but about index.php files. My problem is a plain-Jane HTML file, not a PHP file; that hasn't been asked yet. I was hoping the solution would be similar, but I haven't been able to figure it out. Here's my example.com.conf:
      server {
          server_name localhost;
          listen 90;
          root /var/www/example.com/html
          index index.html
          location / {
              try_file $uri $uri/ =404;
          }
      }
    My index.html file contains only two words, no markup:
      Hello World
    I think it's the mime.types. The mime.types file has the entry for html in it. This is a fresh nginx install. nginx -t reports "test is successful".
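    A couple of quick checks often suggested for this symptom (a sketch; files downloading instead of rendering usually means the response goes out as application/octet-stream rather than text/html):
      curl -I http://localhost:90/                 # inspect the Content-Type header
      grep -n "mime.types" /etc/nginx/nginx.conf   # the http block should include it
      sudo nginx -t && sudo service nginx reload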

    Read the article

  • Running 'dd' command at startup?

    - by Usman Ajmal
    Hi, I have set a script to run at Linux startup. The script contains the following line of code:
      dd if=/dev/sda2 of=/dev/sda5 ?> result.txt
    Now, when my Linux desktop appears, result.txt contains
      dd: opening '/dev/sda2': Permission denied
    If I prefix the dd command with sudo, as in:
      sudo dd if=/dev/sda2 of=/dev/sda5 ?> result.txt
    result.txt contains
      sudo: no tty present and no askpass program specified
    Is there a way I can get around this problem? What I want is to copy the 2nd partition to the 5th when a user logs in, no matter if he is root, admin, Desktop or an unprivileged user. Thanks a lot as always.
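    Two hedged ways this is usually handled (paths and the user name are illustrative): run the copy from a root-owned boot hook so no sudo is involved, or grant that single dd command passwordless sudo so it also works from a login script.
      # option 1: /etc/rc.local (runs as root at boot), placed before "exit 0":
      dd if=/dev/sda2 of=/dev/sda5 2> /var/log/clone-result.txt
      # option 2: /etc/sudoers.d/clone (edit with visudo), then call "sudo dd ..." from the script:
      #   someuser ALL=(root) NOPASSWD: /bin/dd if=/dev/sda2 of=/dev/sda5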

    Read the article

  • Desktop directory disappears in gnome-terminal, then appears again, but all files in it are deleted

    - by Ingen
    I am able to see my Desktop with all its various links and files. But in the terminal, when I try to access the Desktop directory:
      cd ~/Desktop
    I get:
      bash: cd: /home/administrator/Desktop: No such file or directory
    Then I find I am unable to access any of the files on the Desktop when I click on them, although the file icons are there. Then the icons disappear after my clicking on them. Then I am able to access the Desktop directory in the terminal, but the directory is empty, i.e. all the files/folders have been deleted. What's going on? How can I fix this?

    Read the article

  • apt-get install not working in script

    - by isoman
    I created a small script that installs a set of Linux packages. Strangely, apt-get install always fails and tells me that the packages have not been found. Here is my script:
      #! /bin/bash
      sudo apt-get install python-software-properties
      sudo apt-get update
      sudo add-apt-repository ppa:pitti/postgresql
      sudo apt-get install xfce4 postgresql-9.0 pgadmin3 chromium-browser wine iftop
    What can I do to fix this? Thanks.
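    A hedged reordering sketch: after add-apt-repository the package index has to be refreshed again before packages from the new PPA become installable, which the script above never does.
      #!/bin/bash
      set -e
      sudo apt-get update
      sudo apt-get install -y python-software-properties
      sudo add-apt-repository ppa:pitti/postgresql
      sudo apt-get update
      sudo apt-get install -y xfce4 postgresql-9.0 pgadmin3 chromium-browser wine iftop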

    Read the article

  • Hi, I have a problem with Windows 7, so I used Ubuntu to bring back my files, but when I tried to open the files I got this message:

    - by user286972
    Error mounting /dev/sda1 at /media/ubuntu/B800C0C300C08A38: Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=999,gid=999,dmask=0077,fmask=0177" "/dev/sda1" "/media/ubuntu/B800C0C300C08A38"' exited with non-zero exit status 13:
      ntfs_attr_pread_i: ntfs_pread failed: Input/output error
      Failed to read NTFS $Bitmap: Input/output error
    NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details.
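    Because the log shows raw Input/output errors, the disk itself may be failing; a cautious step often suggested before any repair is to image the partition first (the target paths below are placeholders), and only then run chkdsk /f from Windows as the message advises:
      sudo apt-get install gddrescue
      sudo ddrescue -d /dev/sda1 /media/backupdisk/sda1.img /media/backupdisk/sda1.map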

    Read the article

  • Grub hangs intermittently on "Starting Up..."

    - by Griffo
    Hi all, I've had this problem for a while now. My Linux server is set to wake-on-LAN, but occasionally it halts at Grub's "Starting Up..." and goes no further. This is not due to additional hardware being attached, such as a flash drive, as I never plug anything into it. It may boot perfectly 40 times in a row and then hit this issue. Sometimes it hits the issue a couple of times in quick succession and then doesn't happen again for ages. I'm not sure how to diagnose it since it doesn't seem to be reproducible. Any help much appreciated. Thanks

    Read the article
