Search Results

Search found 698 results on 28 pages for 'rsync'.

Page 15 of 28

  • mrepo and grouplist/groupinstall? (mrepo not working as expected with groups)

    - by user52874
    All, I'm trying to set up mrepo so we can have internal repositories. After quite the slog, things seem to be working as expected EXCEPT for groups. From man createrepo:

        EXAMPLES
        Here is an example of a repository with a groups file. Note that the groups
        file should be in the same directory as the rpm packages (i.e. /path/to/rpms/comps.xml).

            createrepo -g comps.xml /path/to/rpms

    So here's what I'm doing:

        wget -c http://ftp.scientificlinux.org/linux/scientific/6/x86_64/os/repodata/comps-sl6-x86_64.xml
        cp comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/comps-sl6-x86_64.xml
        createrepo -g comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/

    Lots of output, no apparent errors or warnings. BUT, from a client:

        yum grouplist
        Loaded plugins: refresh-packagekit
        Setting up Group Process
        Error: No group data available for configured repositories

    Here's /etc/mrepo.conf:

        ### Configuration file for mrepo

        ### The [main] section allows to override mrepo's default settings
        ### The mrepo-example.conf gives an overview of all the possible settings
        [main]
        srcdir = /var/mrepo
        wwwdir = /var/www/mrepo
        confdir = /etc/mrepo.conf.d
        arch = x86_64
        mailto = root@localhost
        smtp-server = localhost
        pxelinux = /usr/lib/syslinux/pxelinux.0
        tftpdir = /tftpboot
        #rhnlogin = username:password

        ### Any other section is considered a definition for a distribution
        ### You can put distribution sections in /etc/mrepo.conf.d
        ### Examples can be found in the documentation.

    Here's /etc/mrepo.conf.d/sl6.mrepo:

        ### Scientific Linux 6
        [SL6]
        name = Scientific Linux 6
        release = 6
        arch = x86_64
        metadata = repomd repoview
        os = rsync://rsync.scientificlinux.org/scientific/$release/$arch/os/
        updates = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/
        security = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/security/
        fastbugs = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/fastbugs/
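
    A quick sanity check here (a diagnostic sketch; paths are taken from the commands above) is to confirm that the repodata the client actually downloads references a group file, and that the client is not using stale cached metadata:

        # On the mrepo host: repomd.xml should contain a <data type="group"> entry
        grep 'type="group"' /var/mrepo/SL6-x86_64/os/Packages/repodata/repomd.xml

        # On the client: clear cached metadata, then retry
        yum clean all
        yum grouplist

    If the grep finds nothing in the repodata directory that the client's baseurl actually points at, the comps file never made it into the metadata that mrepo publishes.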

  • Howto get exit code of a script started in screen session

    - by Bettina
    Hi folks, I am currently creating a backup script which uses screen to start a backup job with rsync inside a screen session. The backup jobs are started as follows:

        screen -dmS backup /usr/bin/rsync ...

    As soon as the rsync job is finished, the screen session is terminated automatically. To make sure that the backup was successful, I would like to check the exit code of the rsync job, but unfortunately I really don't know how to get the exit code after the screen session was terminated. Does someone have a good idea how to automatically check whether the rsync job was successful or not? Would be great if someone does. I already thought about using a temp file, like this:

        screen -dmS myScreen "rsync -av ... ; echo $? > /tmp/myExitCode"

    but this unfortunately does not work. Then I thought about using stderr, like in the example below:

        screen -dmS myScreen "rsync -av ... 2> /tmp/rsync-stderr"

    None of my ideas worked out so far, since stderr is not written when I use the command above. :-( Would be great if someone has a good idea or even a solution. Cheers, Bettina
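
    A minimal sketch of one way out (the rsync paths are illustrative): screen executes its argument directly rather than through a shell, so the ';', the '$?' and the redirection above never take effect. Wrapping the whole job in an explicit shell makes them work:

        # Single quotes keep $? from being expanded by the launching shell
        screen -dmS backup sh -c '/usr/bin/rsync -av /src user@host:/dst; echo $? > /tmp/backup.exit'

        # Later, in the calling script:
        if [ "$(cat /tmp/backup.exit 2>/dev/null)" = "0" ]; then
            echo "backup OK"
        else
            echo "backup failed"
        fi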

  • s3fs: how to force remount on errors?

    - by Alexander Gladysh
    I use s3fs 1.33 on Ubuntu 9.10. Regularly it gives me errors like this:

        rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
        rsync: close failed on "/mnt/s3/mybucket/filename": Software caused connection abort (103)
        rsync error: error in file IO (code 11) at receiver.c(731) [receiver=3.0.6]
        rsync: connection unexpectedly closed (86 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]

    Any attempt to work with the mounted directory after that gives this error:

        Transport endpoint is not connected

    To get rid of this, I have to remount. Is there a way to force a remount automatically?
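
    In the absence of anything built in, a rough cron watchdog can do it (a sketch; the bucket name and mount point are assumptions):

        #!/bin/sh
        # Remount s3fs if the mount point has gone dead
        MNT=/mnt/s3/mybucket
        if ! ls "$MNT" >/dev/null 2>&1; then
            fusermount -u -z "$MNT"    # lazy-unmount the dead endpoint
            s3fs mybucket "$MNT"
        fi

    Scheduled from root's crontab with something like: */5 * * * * /usr/local/sbin/s3fs-watchdog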

  • Is it possible to rsync your web site to another backup server and use the same .htaccess files?

    - by stephenmm
    I am trying to use rsync to replicate all the files from one web server to another server that could act as a backup if the first one went down. The problem I am having is that the .htaccess file requires AuthUserFile to contain the fully qualified path to the .htpasswd file, and I cannot make the paths the same on the two machines. Does anyone know how I might use the same .htaccess file on two different servers? Thanks for any help that can be provided.
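
    One low-tech workaround (a sketch; all paths here are made up) is to give both machines an identical path that resolves, via a symlink, to wherever the real file lives on each box, so AuthUserFile can be the same everywhere:

        # On server A, where the real file is /home/siteA/.htpasswd:
        sudo mkdir -p /srv/shared
        sudo ln -s /home/siteA/.htpasswd /srv/shared/.htpasswd

        # On server B, where the real file is /var/www/siteB/.htpasswd:
        sudo mkdir -p /srv/shared
        sudo ln -s /var/www/siteB/.htpasswd /srv/shared/.htpasswd

        # .htaccess on both servers can then read:
        #   AuthUserFile /srv/shared/.htpasswd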

  • How to make ssh/rsync/etc use a VLAN network interface?

    - by Annan
    A company I work for has a number of virtual servers with ElasticHosts. They are set up in such a way that eth1 is on a private VLAN connecting them to each other. This is so backups sent between servers are not charged at the same rate as external data transfer. My understanding of how VLANs and network interfaces work is sketchy at best. How can I make ssh, rsync, etc. transfer data through the VLAN?

    My final solution: I spent a while trying to figure this out. For all servers involved, edit /etc/sysconfig/network-scripts/ifcfg-eth1:

        DEVICE=eth1
        BOOTPROTO=static
        ONBOOT=yes
        HWADDR=YOUR_MAC_ADDR
        IPADDR=192.168.0.100
        NETMASK=255.255.255.0

    HWADDR should already be set, and the last octet of IPADDR should differ on each server. Then run, on all servers:

        /etc/init.d/network restart

    After this the IP addresses specified by IPADDR can be used directly, like any other IP address.
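
    Once the private addresses are up, routing takes care of the rest: anything addressed to 192.168.0.x leaves via eth1. So pointing the tools at the peer's private IP is usually all it takes (addresses below follow the example above):

        # Transfer over the VLAN by addressing the peer's private IP
        rsync -az /var/backups/ 192.168.0.101:/srv/backups/

        # Optionally pin the source address too, so traffic cannot fall
        # back to the public interface:
        rsync -az -e 'ssh -b 192.168.0.100' /var/backups/ 192.168.0.101:/srv/backups/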

  • BackupPC - are full backups really full when using rsync?

    - by mhost
    Hi, When you run a full backup in BackupPC and you use rsync as the transfer method, does it actually transfer the full backup source? Or does it only transfer the changes? The docs seem to imply that it would transfer the full thing and that only an incremental would transfer the changes. If this is the case, could I simply use incrementals only and never do a full backup? Given the way the backups are stored (using hard links to make each incremental appear full), I would think this would be the best method: incrementals only transfer the changes, yet each backup appears full. Thanks.

  • Slow Transfer Speeds from KVM host to client

    - by indian maiden
    I am trying to isolate the root cause of slow transfer speeds from my host OS to a KVM client. Both are Linux. Rsync on the host (192.168.1.72):

        rsync -auv --progress rut3.img /tmp/                 [54.09MB/s]

    Rsync to the client:

        rsync -auv --progress rut3.img 192.168.1.80:/tmp/    [25.52MB/s]

    I realize that there will be some TCP overhead on the transfer, but over 50%? Can someone enlighten me on what could be slowing down the transfers so much?
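
    Two things are worth separating here: the raw network path and the ssh layer (the remote rsync goes over ssh, so cipher overhead is in play, unlike the purely local copy). A quick sketch to isolate them, assuming iperf can be installed on both ends:

        # On the KVM guest:
        iperf -s

        # On the host:
        iperf -c 192.168.1.80

    If iperf shows near line rate, the network is fine and the suspects become ssh encryption on the host side or disk/virtio configuration in the guest.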

  • How do I keep folders synced and backed up between two macs using a Linux NAS (rsync?)

    - by Hultner
    I've got two primary computers, one Mac Pro and one MacBook Pro for when I'm on the go. I've also got a Linux server which also acts as NAS. Currently I back up the entire computers to an external drive with Time Machine, which is rather useless and doesn't sync anything. What I really want to do is to keep my important files synced between both computers and my NAS (which is running RAID 5). That way I'm not backing up easily replaceable system files, and I've got all my important files in 3 places, two of them running RAID, so at least 5 drives would have to crash at the same time before actual data loss occurs.

    Folders I want to keep synced are basically my photo, documents, development, mamp and work folders, and then I want to keep the user library folder backed up but not synced. I'm thinking that I'd have to use rsync but don't know how.

    Before suggesting Dropbox and similar services: I don't want to use them for several reasons, some of them being security (Dropbox obviously proved this), speed (sometimes I'll sync gigabytes of data and that will be significantly faster locally, and probably even through VPN, as I have a Gigabit pipe), space (space on my NAS is cheap and only practically limited by my needs), reliability (even if my internet were to go down I still need to be able to keep my files synced in case I'd need to go somewhere on the fly), and price (I already have all the hardware, and for the amount of gigabytes and bandwidth I'd need I doubt there's any free or cheap service). Those are my main reasons for wanting to keep it local.

    I'm sorry for any spelling or grammatical mistakes that I might have made. I'm writing this on my smartphone from a shaky train and English isn't my mother tongue. I gratefully appreciate any answers, even if they only partly solve my problem.
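
    As a starting point for the rsync part (a sketch; the NAS hostname and paths are made up, and true two-way sync is safer with a tool built for it, such as unison): push from the machine you just worked on, pull on the other one.

        # Push local changes to the NAS
        rsync -az --delete ~/Documents/ nas:/srv/sync/Documents/

        # Pull them down on the other Mac
        rsync -az --delete nas:/srv/sync/Documents/ ~/Documents/

    The --delete flag keeps the replicas identical but propagates deletions, so it is worth dry-running with -n until the direction of each sync is routine.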

  • Programmatically syncing with remote servers

    - by Joseph
    My application generates text files that need to be synced with remote servers, which may be Windows or Linux. The sync has to happen without the user's intervention. I tried rsync, but Windows doesn't come with rsync by default. Also, it is not possible to supply the password on the command line for rsync. Currently I'm going with FTP, but that seems like an inefficient way. Is there a way to rsync without user intervention? What are the ways to sync with a remote server programmatically? The app is on nodejs.
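
    On the password point specifically: when rsync talks to an rsync daemon (the host::module syntax, rather than rsync over ssh), the password can be supplied non-interactively (a sketch; host, module and paths are made up, and Windows targets would still need an rsync port such as cwRsync installed):

        # Option 1: a password file, which must not be world-readable
        echo 'secret' > /etc/backup.pass
        chmod 600 /etc/backup.pass
        rsync -az --password-file=/etc/backup.pass ./out/ backupuser@server::textfiles/

        # Option 2: the RSYNC_PASSWORD environment variable
        RSYNC_PASSWORD=secret rsync -az ./out/ backupuser@server::textfiles/

        # For rsync over ssh, key-based auth removes the password entirely:
        ssh-keygen -t rsa -N '' -f ~/.ssh/backup_key
        ssh-copy-id -i ~/.ssh/backup_key user@server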

  • Get Zipped Logs from a Remote Server

    - by Jonathan
    I am tasked with trying to find a way to download zipped logs from a remote server. There are quite a few of these logs and they are constantly being created. I do have limited ssh access to the remote server and can scp or rsync the files. However, due to the sheer size of these log files, I do not want to rsync all of them. The logs could reach terabytes, and for rsync to compare them all may take some time. I only want to get any file that was created or last updated within the past hour. I am also worried that I will rsync logs that are in the process of being created, so I was thinking of only rsyncing files that were last modified at least 3-5 minutes ago. Would anyone be so kind as to help me with such a process? Thank you in advance.
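
    A sketch of one way to do the selection remotely (the paths and log-name pattern are assumptions): let find pick files modified within the last hour but untouched for at least five minutes, then hand exactly that list to rsync.

        # Build the file list on the remote side: newer than 60 min, older than 5 min
        ssh user@remote "cd /var/log/app && find . -type f -name '*.gz' -mmin -60 -mmin +5" > /tmp/loglist

        # Fetch only those files
        rsync -az --files-from=/tmp/loglist user@remote:/var/log/app/ /srv/logs/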

  • NTFS disk mounted as fuseblk in Ubuntu 12.10 is very slow and gives a lot of errors when rsyncing. Is that not a rare thing?

    - by Pablo Marin-Garcia
    I am having problems with an NTFS disk mounted as fuseblk in my Ubuntu 12.10, attached over external USB 3. When I did a 1.1TB backup with rsync, the speed was 1-2MB/s (with an ext4 disk the speed was 70MB/s before and after trying the NTFS disk). Also, after one hour errors started to appear:

        rsync: write failed on "xxx": No such file or directory
        recv_files: "yyy" is a directory    # but this file is a FILE, not a dir??!!
        ...

    As this is the first time I have mounted NTFS in Linux for heavy usage (the data would be used in Windows afterwards), I would like to know if this kind of thing is common, or whether something simply became unstable in my system and a restart would probably have solved it. This leads me to these questions:

    - Can I trust fuse to manage NTFS disks? Or is it a problem of the NTFS tools in Linux not yet being totally stable for writing?
    - Are people still suffering from low performance with fuse-NTFS vs ext4 (in the past I have read people complaining about this)?
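
    To rule out mount options before blaming the driver, ntfs-3g's big_writes option is the usual first suggestion for throughput (a sketch; the device and mount point are assumptions):

        # Remount explicitly with ntfs-3g and larger write requests
        sudo umount /media/backup
        sudo mount -t ntfs-3g -o big_writes,noatime /dev/sdb1 /media/backup

        # Watch the kernel log during the copy; USB resets or I/O errors
        # there would point at the enclosure or cable rather than NTFS:
        tail -f /var/log/kern.log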

  • Splitting string into array upon token

    - by Gnutt
    I'm writing a script to perform an offsite rsync backup, and whenever the rsync line receives some output it goes into a single variable. I then want to split that variable into an array on the ^M token, so that I can send the pieces to two different logger sessions (so I get them on separate lines in the log).

    My current line to perform the rsync:

        result=$(rsync --del -az -e "ssh -i $cert" $source $destination 2>&1)

    Result in the log, when the server is unavailable:

        ssh: connect to host offsite port 22: Connection timed out^M
        rsync: connection unexpectedly closed (0 bytes received so far) [sender]
        rsync error: unexplained error (code 255) at io.c(601) [sender=3.0.7]
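
    A compact way to do the split in bash (a sketch): translate the carriage returns into newlines, read the result into an array, then log each element on its own line.

        # ^M is a literal carriage return; tr turns each one into a newline
        mapfile -t lines < <(printf '%s\n' "$result" | tr '\r' '\n')

        for line in "${lines[@]}"; do
            [ -n "$line" ] && logger -t offsite-backup "$line"
        done

    If the array itself isn't needed, the same pipeline works directly: printf '%s\n' "$result" | tr '\r' '\n' | logger -t offsite-backup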

  • Why I cannot copy install.wim from Windows 7 ISO to USB (in linux env)

    - by fastreload
    I need to make a bootable USB disk from a Windows 7 ISO. My USB stick is formatted to NTFS and the ISO is not corrupt. I can copy install.wim elsewhere, but I cannot copy it to the USB stick. I even tried rsync.

    rsync error:

        sources/install.wim
        rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
        rsync: write failed on "/media/52E866F5450158A4/sources/install.wim": Input/output error (5)
        rsync error: error in file IO (code 11) at receiver.c(322) [receiver=3.0.8]

    Stat for install.wim:

        File: `X15-65732 (2)/sources/install.wim'
        Size: 2188587580   Blocks: 4274600   IO Block: 4096   regular file
        Device: 801h/2049d   Inode: 671984   Links: 1
        Access: (0664/-rw-rw-r--)   Uid: ( 1000/ umur)   Gid: ( 1000/ umur)
        Access: 2011-10-17 22:59:54.754619736 +0300
        Modify: 2009-07-14 12:26:40.000000000 +0300
        Change: 2011-10-17 22:55:47.327358410 +0300

    fdisk -l:

        Disk /dev/sdd: 8103 MB, 8103395328 bytes
        196 heads, 32 sectors/track, 2523 cylinders, total 15826944 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xc3072e18

        Device Boot   Start        End    Blocks  Id  System
        /dev/sdd1 *      32   15826943   7913456   7  HPFS/NTFS/exFAT

    hdparm -I /dev/sdd:

        SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        ATA device, with non-removable media
            Model Number:       UF?F?A????U]r???U u??tF?f?`~
            Serial Number:      ?@??~|
            Firmware Revision:  ????V?
            Media Serial Num:   $I?vnladip raititnot baelErrrol aoidgn
            Media Manufacturer: o eparitgns syetmiM
        Standards:
            Used: unknown (minor revision code 0x0c75)
            Supported: 12 8 6
            Likely used: 12
        Configuration:
            Logical     max     current
            cylinders   17218   0
            heads       0       0
            sectors/track 128   0
            --
            Logical/Physical Sector size: 512 bytes
            device size with M = 1024*1024: 0 MBytes
            device size with M = 1000*1000: 0 MBytes
            cache/buffer size = unknown
        Capabilities:
            IORDY(may be)(cannot be disabled)
            Queue depth: 11
            Standby timer values: spec'd by Vendor
            R/W multiple sector transfer: Max = 0  Current = ?
            Recommended acoustic management value: 254, current value: 62
            DMA: not supported
            PIO: unknown
              * reserved 69[0]  * reserved 69[1]  * reserved 69[3]
              * reserved 69[4]  * reserved 69[7]
        Security:
            Master password revision code = 60253
            not supported
            not enabled
            not locked
            not frozen
            not expired: security count
            not supported: enhanced erase
            71112min for SECURITY ERASE UNIT. 172min for ENHANCED SECURITY ERASE UNIT.
        Integrity word not set (found 0xaa55, expected 0x80a5)
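
    Given that hdparm is reading back mangled model and serial strings, a failing stick is a plausible suspect. A destructive surface test would confirm it (a sketch; double-check the device node first, since -w erases everything on the stick):

        # Unmount, then write-test the whole device
        sudo umount /dev/sdd1
        sudo badblocks -wsv /dev/sdd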

  • Why does this rsnapshot exclude not work?

    - by bstpierre
    Rsnapshot passes excludes directly to rsync, but rsync's behavior appears inconsistent. I've simplified my rsnapshot backup test to the following directory tree (this tree will be backed up):

        gorilla:~# find /tmp/snaptest -exec file {} \;
        /tmp/snaptest: directory
        /tmp/snaptest/SKIPTHIS: directory
        /tmp/snaptest/SKIPTHIS/xyz: directory
        /tmp/snaptest/SKIPTHIS/xyz/testing: ASCII text
        /tmp/snaptest/SKIPTHIS/bar: ASCII text
        /tmp/snaptest/SKIPTHIS/foo: ASCII text
        /tmp/snaptest/SKIPTHIS.txt: ASCII text

    My config file:

        config_version  1.2
        snapshot_root   /tmp/backup-media
        no_create_root  1
        cmd_cp          /bin/cp
        cmd_rm          /bin/rm
        cmd_rsync       /usr/bin/rsync
        cmd_ssh         /usr/bin/ssh
        cmd_logger      /usr/bin/logger
        cmd_du          /usr/bin/du
        interval        hourly  6
        interval        daily   7
        interval        weekly  4
        interval        monthly 3
        verbose         3
        loglevel        3
        logfile         /media/maxtor-one-touch/rsnapshot.log
        lockfile        /media/maxtor-one-touch/backups/.rsnapshot.pid
        rsync_short_args    -a
        rsync_long_args     --delete --numeric-ids --relative --delete-excluded
        exclude         "SKIPTHIS/**"
        link_dest       1
        backup          /tmp/snaptest   snaptest

    The result:

        gorilla:~# rsnapshot -c /tmp/snaptest.conf hourly
        echo 12638 > /media/maxtor-one-touch/backups/.rsnapshot.pid
        mkdir -m 0755 -p /tmp/backup-media/hourly.0/
        /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
            --exclude="SKIPTHIS/**" /tmp/snaptest \
            /tmp/backup-media/hourly.0/snaptest
        touch /tmp/backup-media/hourly.0/
        rm -f /media/maxtor-one-touch/backups/.rsnapshot.pid

        gorilla:~# find /tmp/backup-media/ -exec file {} \;
        /tmp/backup-media/: directory
        /tmp/backup-media/hourly.0: directory
        /tmp/backup-media/hourly.0/snaptest: directory
        /tmp/backup-media/hourly.0/snaptest/tmp: sticky directory
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest: directory
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS: directory
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/xyz: directory
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/xyz/testing: ASCII text
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/bar: ASCII text
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/foo: ASCII text
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS.txt: ASCII text

    My confusion stems from the fact that if I copy-paste the rsync command echoed by rsnapshot, the SKIPTHIS directory is excluded! (I've tested with various other SKIPTHIS patterns with the same results.) Any idea what's going on?
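
    One detail worth checking (a guess, but it would explain why the pasted command behaves differently): rsnapshot does not invoke rsync through a shell, so the double quotes in the config line can end up as literal characters in the pattern, whereas a terminal strips them before rsync ever sees the pattern. If that is what is happening, writing the pattern unquoted should fix it:

        # in the rsnapshot config, tab-separated and without quotes:
        exclude     SKIPTHIS/**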

  • Debian server won't reboot from script

    - by Littlejon
    I have a script that is run to back up a server via rsync; after that script is run I want the server to reboot. My script is run as root from the crontab at 3am in the morning.

        #!/bin/bash

        HOST="email"
        RSYNC_OPTS="-a -v -v --progress --stats --delete"
        RSYNC_DEST="10.0.0.10::$HOST"
        BACKUP_LIST="/etc /home /root"
        TIMESTAMP="/timestamp-bkup-start.chk"
        TIMESTAMP2="/timestamp-bkup-stop.chk"

        touch $TIMESTAMP
        rsync $RSYNC_OPTS $TIMESTAMP $RSYNC_DEST

        for BACKUP_ITEM in $BACKUP_LIST; do
            rsync $RSYNC_OPTS $BACKUP_ITEM $RSYNC_DEST
        done

        /etc/init.d/zimbra stop
        sleep 60s

        rsync $RSYNC_OPTS /opt $RSYNC_DEST

        touch $TIMESTAMP2
        rsync $RSYNC_OPTS $TIMESTAMP2 $RSYNC_DEST

        echo `date +%Y%m%d%H%M` >> /var/log/reset
        reboot

        # $# shows number of args passed
        # $1 to access first variable
        #if [ $# -eq 1 ]; then
        #    if [ $1 = "withreboot" ]; then
        #        echo "rebooting...";
        #        echo `date +%Y%m%d%H%M` >> /var/log/reset
        #        /sbin/reboot
        #    fi
        #fi

    I have tried using init 6 rather than reboot. I have tried /sbin/reboot. I also have another basic script that just echoes to the reset log and runs reboot without issue. It is just with the script above that the server won't restart. If anyone has any theories that would be great, as I have run out of ideas. Thanks, Jon
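
    Since a bare script reboots fine, the first step is to see what this one actually says when cron runs it (a diagnostic sketch; the script path is a placeholder):

        # In root's crontab, capture everything the script prints:
        0 3 * * * /usr/local/bin/backup.sh >> /var/log/backup-cron.log 2>&1

        # And in the script, record reboot's own complaint if it fails:
        /sbin/reboot || echo "reboot failed with exit code $?" >> /var/log/reset

    If the log shows the script blocking earlier (for example an rsync that never returns), the reboot line may simply never be reached.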

  • Copying files between linux machines with strong authentication but without encryption

    - by Zizzencs
    I'm looking for a suitable program to copy files from one Linux machine to another one. The program should be able to do authentication, but it should not do encryption. The reason behind the latter is the lack of CPU power to do the encryption. I copy backups from ~70 machines to a single backup server simultaneously. The single server is an HP Proliant DL360 G7, with a 10 Gbps ethernet connection and an FC storage backend that can do 4 Gbps. Through FTP I can write ~400MB/sec to the storage (that's about what I want), but through ssh with arcfour I can only do ~100MB/sec while having 100% CPU usage. That's why I want file transfers not to be encrypted.

    The alternatives that I found not really suitable:

    - rcp: no authentication, forget it
    - FTP: making the authentication "secure" (at least preventing plain-text password exchange) is possible but not really easy, and I haven't found a method to force any FTP daemon to encrypt the control channel (for the authentication) and not encrypt the data channel (for data transfers)
    - SCP/SFTP: in fairly recent ssh(d) implementations you can't turn off encryption. The best you can do is use the arcfour cipher for the encryption, but it still uses too much CPU power for my needs.
    - rsync over ssh: same problems as with SCP/SFTP.
    - plain rsync: from the documentation of rsyncd: "The authentication protocol used in rsync is a 128 bit MD4 based challenge response system. This is fairly weak protection, though (with at least one brute-force hash-finding algorithm publicly available), so if you want really top-quality security, then I recommend that you run rsync over ssh." It's a no-go.

    Is there a protocol/program that can do exactly what I want? (A big plus would be if it could work on Windows as well and/or if it would support rsync-style copying/synchronization (e.g. copy only the differences).)
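
    For reference, the daemon-mode setup that gives authenticated-but-unencrypted rsync transfers (with the MD4 caveat quoted above still applying) looks roughly like this (module name and paths are made up):

        # /etc/rsyncd.conf on the backup server
        [backups]
            path = /srv/backups
            read only = no
            auth users = backupuser
            secrets file = /etc/rsyncd.secrets

        # /etc/rsyncd.secrets, mode 600, one user:password per line:
        #   backupuser:somepassword

        # On each of the ~70 clients:
        rsync -az --password-file=/etc/backup.pass /etc/ backupuser@server::backups/etc/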

  • How to Back Up Ubuntu the Easy Way with Déjà Dup

    - by Chris Hoffman
    Déjà Dup is a simple — yet powerful — backup tool included with Ubuntu. It offers the power of rsync with incremental backups, encryption, scheduling, and support for remote services. With Déjà Dup, you can quickly revert files to previous versions or restore missing files from a file manager window. It's a graphical frontend to Duplicity, which itself uses rsync, wrapping rsync's power in a simple interface.

  • Eclipse CDT setup for remote build

    - by Posco Grubb
    Is there a better way to set up Eclipse CDT for local editing and remote building? I am working on a C++ project that uses GNU make in Linux. The code is under CVS on a Linux server. When I'm in the lab, I use Eclipse CDT on a Linux-x64 PC. The project is built on a Linux-x86 PC. All the computers in the lab (including the CVS server) have NFS mounts.

    When I'm at home, I use Eclipse CDT on a Windows 7 PC. The Windows PC connects to the Linux CVS server via an SSH tunnel. To edit source, I rsync the C++ project under the Linux Eclipse workspace back to my Windows Eclipse workspace. (I can also do a remote CVS checkout on the Windows PC.) To build from home, I use a custom build command that SSHes to the Linux-x86 PC, rsyncs the C++ project from my Windows Eclipse workspace to my Linux Eclipse workspace, and then runs make on the Linux-x86 PC, specifying the correct path for the Makefile.

    In order to go back and forth between lab and home without committing my changes to CVS every time, I use rsync. When I transition from lab to home, I rsync sources to my Windows Eclipse workspace. When I build from home, the sources get rsynced back to the Linux Eclipse workspace. Is there a better, less wonky way to do this? (I'm NOT interested in remote debugging.)
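
    One way to make the home round trip less manual is to fold the push-and-build into a single wrapper that Eclipse runs as its build command (a sketch; hostnames and paths are placeholders):

        #!/bin/sh
        # remote-build.sh: push sources to the build box, then run make there
        SRC="$HOME/workspace/myproject/"
        HOST=buildbox
        DST=/home/me/workspace/myproject

        rsync -az --delete --exclude '.metadata' "$SRC" "$HOST:$DST/" &&
            ssh "$HOST" "make -C $DST"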

  • Lubuntu: neither shut-down nor restart works

    - by Rantanplan
    I have a freshly installed Lubuntu 14.04.1 (installed with the forcepae option on a laptop with a Pentium M processor). The only problem that I have found so far is that I cannot shut down or restart the laptop. It always continues showing "Lubuntu" and some dots. Pressing Esc, it says:

        wait-for-state stop/waiting
        * Stopping rsync daemon rsync                     [OK]
        * Asking all remaining processes to terminate…    [OK]
        * Killing all remaining processes…                [fail]
        ModemManager[597]: <info> Caught signal, shutting down…
        ModemManager[597]: <info> ModemManager is shut down
        nm-dispatcher.action: Could not get the system bus. Make sure the message bus daemon is running! Message: Did not receive a reply. Possible causes: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
        * Deactivating swap…                              [OK]
        * Will now halt

    The cursor remains blinking, but the only way to switch the laptop off is to hold the power key pressed for some seconds. I tried sudo shutdown -h now, sudo halt and sudo poweroff, all resulting in the same problem. I also tried adding acpi=force to GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" in /etc/default/grub and running sudo update-grub; then, using the taskbar's shut-down button led to an immediate stop of the laptop, equal to holding the power key pressed for some seconds.

    Next I followed the answer http://askubuntu.com/a/202481/288322. Now, I directly receive some messages during shut-down, starting:

        wait-for-state stop/waiting
        * Stopping rsync daemon rsync                     [OK]
        * Asking all remaining processes to terminate…    [OK]
        [  240.944277] INFO: task kworker/0:2:24 blocked for more than 120 seconds.
        [  240.944461]       Tainted: G S      3.13.0-34-generic #60-Ubuntu
        [  240.944623] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    followed by some more similar lines and then:

        * Killing all remaining processes…                [fail]
        ModemManager[576]: <info> Caught signal, shutting down…
        nm-dispatcher.action: Caught signal 15, shutting down...
        ModemManager[576]: <info> ModemManager is shut down
        * Deactivating swap…                              [OK]
        * Will now halt
        [  600.944276] INFO: task kworker/0:2:24 blocked for more than 120 seconds.
        [  600.944458]       Tainted: G S      3.13.0-34-generic #60-Ubuntu
        [  600.944619] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    Then, nothing more came during the next 5 minutes. If you know where I can find relevant error information, I will be happy to search for it.

  • Change to different user, or let different user execute a command

    - by WG-
    I have a problem. There is a server which I can access with an account by ssh; let's say the account is WG. Now there is a folder with the following permissions:

        drwxr-s---+ 855 vvz www-data 20K Aug 21 17:56 pictures

    I want to copy this folder using rsync. However, since I am not the user www-data but WG, I cannot execute rsync. So I want www-data to execute a rsync command. However, I do not possess sudo powers. My friend however tells me that I am actually able to execute the rsync command as www-data, but he will not tell me how. I asked him for some clues and he told me that it had something to do with a reverse shell (which I figured out to be that you connect by ssh to your server and then you connect back to your own server, or something). I also asked if it was by design or actually a flaw in the system. He tells me it is both. Furthermore, I think it has something to do with the group permissions. If I just make sure that I have the group permissions, then I can also read the files. Anybody have a clue?
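
    Before anything exotic: the trailing "+" on that mode string means an ACL is set, which may already grant WG (or one of WG's groups) what it needs (a diagnostic sketch; the destination host is a placeholder):

        # Show the full ACL; look for an entry naming WG or a group WG is in
        getfacl pictures

        # Check WG's group memberships; the group bits (r-s) give
        # www-data members read access, so membership alone would do:
        id WG

        # If either grants read, a plain rsync works without any tricks:
        rsync -az pictures/ WG@backuphost:pictures/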
