Search Results

Search found 5597 results on 224 pages for 'sudo rm rf'.


  • Difference between tcp recv buffer and tcp receive window size?

    - by pradeepchhetri
    The following command shows the TCP receive buffer size in bytes:

      $ cat /proc/sys/net/ipv4/tcp_rmem
      4096    87380   4001344

    where the three values signify the min, default and max values respectively. Then I tried to find the TCP window size using the tcpdump command:

      $ sudo tcpdump -n -i eth0 'tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn and port 80 and host google.com'
      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
      16:15:41.465037 IP 172.16.31.141.51614 > 74.125.236.73.80: Flags [S], seq 3661804272, win 14600, options [mss 1460,sackOK,TS val 4452053 ecr 0,nop,wscale 6], length 0

    I got the window size to be 14600, which is 10 times the MSS. Can anyone please tell me the relationship between the two?
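
    A small sketch of how the two relate, using the numbers from the question (the shift by the advertised wscale of 6 is the key point; the echoes are purely illustrative):

      # the SYN carries an unscaled 16-bit window; once window scaling is
      # negotiated, the receiver can advertise up to win << wscale, bounded
      # by the tcp_rmem maximum
      read min def max < /proc/sys/net/ipv4/tcp_rmem
      echo "initial window: 14600 bytes (10 * MSS of 1460)"
      echo "max scaled window: $(( 14600 << 6 )) bytes"
      echo "receive buffer ceiling: $max bytes"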

    Read the article

  • Can't start SSH on Ubuntu 12.10 AWS EC2

    - by Conor H
    So I've just started playing around with Ubuntu on Amazon EC2. I've just issued the following command to restart ssh, but it has now "killed" ssh:

      sudo /etc/init.d/ssh restart

    I can't seem to ssh to this instance anymore. Putty just gives me "connection refused". NOTE: In this case I just restarted SSH to see the result. I didn't change any settings. This was to confirm that the restart command itself was the problem and not any configs I made. What is the correct way to restart SSH? P.S. That usually works on other Ubuntu boxes. Thanks. EDIT: It is also worth noting that when I ran that command I was taken straight back to a prompt. I didn't get any output on the console.
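
    For reference, a sketch of the usual ways to restart OpenSSH on Ubuntu (service names as shipped by stock Ubuntu; run from an existing session or the EC2 console so you don't lock yourself out):

      sudo service ssh restart          # upstart job used on Ubuntu 12.x
      sudo status ssh                   # confirm the daemon actually came back
      sudo tail /var/log/auth.log       # look for sshd startup errors if it did not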

    Read the article

  • Linux WD30EZRX WD Green HDD & Blacx Duet 5G USB

    - by Adam
    I have connected a WD30EZRX WD Green HDD to a Thermaltake Blacx Duet 5G USB dock in Ubuntu 12.04. Everything seems fine, except that after the HDD idles for a while I get the error "ls: reading directory .: Input/output error", which is only fixed by unmounting and remounting the drive as root. I have the following line in /etc/fstab:

      UUID=AAF670E9F670B6E3 /media/3TB ntfs defaults,user,auto 0 0

    I have noticed that it seems to alternate between the /dev/sdc2 and /dev/sdd2 devices on remount. I did copy 1TB last night without issue in one sitting, but after x mins of idle it has the remount issue. Any tips/suggestions on how to proceed would be appreciated. Spent most of the night googling and all it's done is made me sad. Edit (tried as suggested):

      root@mediaserver:/media/3TB# sudo hdparm -B 255 -S 253 /dev/sdd2

      /dev/sdd2:
       setting Advanced Power Management level to disabled
       HDIO_DRIVE_CMD failed: Input/output error
       setting standby to 253 (vendor-specific)
       APM_level      = not supported

    Seems as if that didn't help with this particular drive.
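
    A sketch of one way to narrow this down: the jump from /dev/sdc2 to /dev/sdd2 suggests the dock drops off the USB bus while idle (log path assumes Ubuntu's default rsyslog setup):

      # watch the kernel log across an idle period; a USB disconnect/reconnect
      # right before the I/O errors would confirm the dock is powering down
      tail -f /var/log/kern.log | grep -iE 'usb|sd[a-z]'
      # mounting by UUID (as the fstab entry already does) at least survives the rename
      sudo mount -U AAF670E9F670B6E3 /media/3TB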

    Read the article

  • Constantly diminishing free space on fedora 17

    - by Varun Madiath
    I don't know how to explain this other than to say that my computer seems to magically run out of free space when it runs for a while. The output of df -h on my home directory is below:

      /dev/mapper/vg_vmadiath--dev-lv_home   50G   47G   0   100%   /home

    When I run sudo du -cks * | sort -rn | head -11 on /home I get the following output (I got this from "decreasing free space on fedora 12"):

      32744344  total
      32744328  vmadiath
      16        lost+found

    If I restart my system things seem to fix themselves and I'm left with about 20 or 25GB of free space. I'm running XFCE with XMonad as my window manager under Fedora 17. Programs I'm running include the XFCE terminal, grep, find, firefox, eclipse, libre-office writer, zsh, emacs. Any help will be greatly appreciated. I'll gladly give you any other output you might need.
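
    A minimal sketch of something du cannot see but a reboot would "fix": files that were deleted while a process still holds them open keep consuming space until that process exits.

      # list open-but-deleted files on the filesystem holding /home
      # (+L1 = link count less than 1); the SIZE/OFF column shows the space held
      sudo lsof +L1 /home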

    Read the article

  • Connect to wired and wireless networks at same time, Ubuntu

    - by Gary Chambers
    Currently, I have a media PC running Ubuntu 10.04 that I am trying to connect via a wired network cable directly to a NAS box, and wirelessly to the router. This works no problem after I run sudo /etc/init.d/networking restart, but I can't get both interfaces to come up on system startup. My /etc/network/interfaces file reads as follows:

      auto eth0
      iface eth0 inet static
          address 10.0.1.2
          netmask 255.255.254.0
          broadcast 10.0.1.255
          network 10.0.1.0

      auto wlan2
      iface wlan2 inet dhcp

    As I say, I know this works, because I can get it to work by restarting the network interfaces, but I can't bring them both up on system startup. Does anyone know why this might be?
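
    A sketch of an /etc/network/interfaces that brings both up at boot. The usual missing piece for wireless is the WPA stanza, since ifupdown has no credentials to associate with at startup; the SSID and passphrase below are placeholders:

      auto eth0
      iface eth0 inet static
          address 10.0.1.2
          netmask 255.255.254.0

      auto wlan2
      allow-hotplug wlan2
      iface wlan2 inet dhcp
          wpa-ssid "your-ssid"        # requires the wpasupplicant package
          wpa-psk  "your-passphrase"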

    Read the article

  • Puppet apache module causing 'Error 400 on SERVER: Invalid parameter identifier'

    - by Andy Shinn
    I am receiving the following error when trying to use the latest puppetlabs-apache module from github (https://github.com/puppetlabs/puppetlabs-apache):

      Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter identifier at /etc/puppet/environments/apache_update/modules/apache/manifests/mod.pp:40 on node zordon.mydomain.com
      Warning: Not using cache on failed catalog
      Error: Could not retrieve catalog; skipping run

    My node config looks like:

      node 'zordon.mydomain.com' {
        include template::common
        include template::puppetagent
        include template::lamp
        User::Create
        sudo::conf { 'joe':
          priority => 60,
          content  => 'joe ALL=(ALL) NOPASSWD: ALL',
          require  => User::Create['joe'],
        }
      }

    The template::lamp class is what uses the apache module:

      class template::lamp {
        include myfirewall
        Firewall Firewall
        class { 'apache': }
        class { 'apache::mod::php': }
        class { 'apache::mod::ssl': }
        class { 'mysql::server': }
      }

    It looks like serverfault markup is getting garbled on the Puppet realize statements. The User::Create and Firewall lines are just realizing a user and 2 firewall rules. I have verified that the /var/lib/puppet/lib/puppet/type/a2mod.rb type has the identifier parameter and it is the same MD5 as the server's. I am using Puppet 3.0.1 on both agent and master. Any idea what may cause this?
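
    One hedged thing to try for this class of error is clearing a stale plugin-synced copy of the module's custom types on the agent (paths below are the Puppet 3 defaults; this is a sketch, not a confirmed fix):

      # wipe the agent's cached lib dir so the a2mod type is re-synced fresh,
      # then run against the same environment
      sudo rm -rf /var/lib/puppet/lib/*
      sudo puppet agent --test --environment apache_update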

    Read the article

  • Trouble with NFS file sharing on Synology 211 NAS and Ubuntu Client

    - by Aglystas
    I'm attempting to set up NFS file sharing and keep getting the error mount.nfs: access denied by server while mounting 192.168.1.110:/myshared. Here is the exact command I'm using to mount:

      sudo mount -o nolock 192.168.1.110:/myshared /home/emiller/MyShared

    I have set 'Enabled NFS' in DSM and set NFS privileges in the Shares section of the control panel. Here is the /etc/exports entry from the NAS:

      volume1/myshared 192.168.1.*(rw,sync,no_wdelay,no_root_squash,insecure_locks,anonuid=0,anongid=0)

    I read some things about hosts.allow and hosts.deny, but it seems like if they are empty they aren't used for anything. I can see the share when I run showmount -e 192.168.1.110. Any help would be appreciated in this matter.
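
    A sketch of the client-side checks worth trying. Synology usually exports the full /volume1/... path rather than /myshared; that path, and forcing NFSv3, are assumptions here:

      showmount -e 192.168.1.110                 # confirm how the share is actually exported
      sudo mount -t nfs -o nolock,vers=3 \
          192.168.1.110:/volume1/myshared /home/emiller/MyShared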

    Read the article

  • Disable disk caches in AWS EBS for PostgreSQL?

    - by Alexandr Kurilin
    It's my understanding that, without correctly disabling OS-level and drive-level caching, there is a chance that in case of system failure the Write-Ahead Log might not be saved correctly and in fact might get corrupted, possibly preventing data recovery. I've already made sure that wal_sync_method=fdatasync, however I was unable to make any configuration changes with hdparm since I get the following:

      $ sudo hdparm -I /dev/xvdf

      /dev/xvdf:
       HDIO_DRIVE_CMD(identify) failed: Invalid argument

    Looks like that option is not available in the kind of setup you get in EC2. Am I missing anything here? Are there any other obvious caches I have to disable to ensure the WAL's safety?
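
    Since the virtual xvd device does not expose ATA commands, hdparm has nothing to talk to. A sketch of what can still be verified from inside the instance, assuming pg_test_fsync is available (it ships with reasonably recent PostgreSQL server/contrib packages):

      # run from a directory on the EBS volume that holds the data directory;
      # it exercises fdatasync-style flushes and compares sync methods
      pg_test_fsync -f ./pg_test_fsync_testfile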

    Read the article

  • Unable to open Synaptic manager, unable to install packages, what can I do?

    - by Omkant
    I have installed the sun java6-jdk package but it did not install completely. Since then I get an error whenever I try to install a new package. In the terminal, when I write "sudo apt-get install [any package name]", I get an error like this:

      E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)
      E: Unable to lock the administration directory (/var/lib/dpkg)

    Also, the Synaptic package manager is not opening, and none of the packages are downloading through any of the methods I know. Please help!
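
    A sketch of the usual recovery sequence for a stuck dpkg lock (only remove the lock after confirming no apt/dpkg process is actually running):

      ps aux | grep -E '[a]pt|[d]pkg'      # make sure nothing is mid-install
      sudo rm /var/lib/dpkg/lock           # clear the stale lock
      sudo dpkg --configure -a             # finish the interrupted java6-jdk install
      sudo apt-get install -f              # then resolve any broken dependencies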

    Read the article

  • Removing perl module installed with cpanm

    - by ZeC
    I am a Perl beginner. I wanted to use the Log::Log4perl module as I am familiar with how it works in Java. I used the cpanm script to download the module, but I ran it without "sudo", so it installed the module to my home directory, /home/amer/perl5. Afterwards, I installed it as a sudoer, but now I want to remove the installation in my home dir to avoid any conflicts in future. How can I do that? Here is my cmdline execution stack. Thanks and Regards!
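
    A sketch of cleaning up the per-user install, assuming nothing else was deliberately installed under /home/amer/perl5:

      ls ~/perl5/lib/perl5                 # see what actually lives there first
      rm -rf ~/perl5                       # drop the whole local library tree
      # newer cpanm (1.6+) can also uninstall a single distribution:
      # cpanm --uninstall Log::Log4perl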

    Read the article

  • Listing the routing table takes long time to complete

    - by Rafal Rawicki
    When I print the routes defined on my computer using route, it takes about 5 to 20 seconds to complete. Why does it take so much time? With VPN enabled:

      $ time sudo route
      Kernel IP routing table
      (...)

      real    0m21.423s
      user    0m0.000s
      sys     0m0.012s

    With no VPN this is about 5 seconds - still, a computer can do a lot in that time. I've repeated my measurements a few times, getting very similar results each try. My machine is Ubuntu with a 3.0.0 kernel, but as far as I know, route on other computers works the same way.
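
    A sketch that usually confirms the cause: plain route resolves every address to a hostname, and slow DNS (especially over a VPN) makes those lookups dominate the runtime. -n, or ip route, skips resolution:

      time sudo route -n        # numeric output: no reverse DNS lookups
      ip route show             # the modern equivalent, numeric by default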

    Read the article

  • How do I unmount a tmpfs that is missing from /etc/mtab?

    - by vrinek
    I have the following line in /etc/fstab:

      none /home/hydra/tmp tmpfs user,noauto,size=1000M,uid=1001,gid=1001 0 0

    I can do mount ~/tmp as user hydra and it gets mounted OK. The only problem is that even though it gets added to /proc/mounts, it does not get added to /etc/mtab. When I try a umount ~/tmp (again as hydra) it complains:

      umount: /home/hydra/tmp is not mounted (according to mtab)

    And when I try -f or -n, it complains that I am not root. Some more info on the system that manifests this problem:

      On sudo umount /home/hydra/tmp, the fs gets unmounted (I think I needed to use -f too)
      Debian version is testing
      mount --version - mount from util-linux 2.19.1 (with libblkid and selinux support)
      ls -l /etc/mtab - -rw-r--r-- 1 root root 921 Nov 14 09:08 /etc/mtab
      cat /proc/mounts | grep rootfs - rootfs / rootfs rw 0 0
      Neither /home, /home/hydra nor /home/hydra/tmp is a symbolic link
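
    A sketch of two ways around a stale /etc/mtab; the symlink trick is what newer distros ship by default, but treat applying it here as an assumption:

      sudo umount /home/hydra/tmp                  # root consults the kernel, not mtab
      sudo ln -sf /proc/self/mounts /etc/mtab      # keep mtab permanently in sync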

    Read the article

  • I run a command twice - I'm wondering if it'll be a problem

    - by Delirium tremens
    "In computing, tee is a command in various command-line interpreters (shells) such as Unix shells, 4DOS/4NT and Windows PowerShell, which displays or pipes the output of a command and copies it into a file or a variable. It is primarily used in conjunction with pipes and filters." I run echo FRAMEBUFFER=y | sudo tee /etc/initramfs-tools/conf.d/splash twice. I opened the conf.d folder in Nautilus, but there isn't a splash file nor directory. I expected a file to be there with FRAMEBUFFER=y inside but there isn't. Is this going to be a problem?

    Read the article

  • Installing Linux from External Card Reader

    - by Subhamoy Sengupta
    I have this problem. I was experimenting to see if I could use a memory card (SDHC) as a USB drive for all intents and purposes. When I put the card in a USB card reader, I can use it just like a regular USB stick, and it also shows up in the BBS popup menu as a USB stick. I then tried to create installation media out of it like this:

      sudo dd if=/path/to/image of=/dev/sdb

    When I tried to boot from it, simply nothing happened. The cursor blinked a couple of times and jumped to the GRUB of my pre-existing GNU/Linux installation. What am I missing here? Is this not doable? I tried this with Xubuntu 12.04 and Arch Linux, by the way. I have also tried UNetbootin instead of dd. Nothing happened differently.
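
    A sketch of a more careful write, mainly to rule out targeting the wrong node or an incomplete flush (the device name is whatever lsblk reports for the card reader, not necessarily /dev/sdb):

      lsblk -o NAME,SIZE,MODEL                 # double-check which node is the card
      sudo dd if=/path/to/image of=/dev/sdb bs=4M conv=fsync
      sync                                     # make sure everything hit the card before unplugging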

    Read the article

  • VMWare tools on Ubuntu Server 10.10 kernel source problem

    - by Hamid Elaosta
    After installing VMware Tools and running the vmware config script, the configurator needs my kernel headers to compile some modules. OK, so I'll give them to it, but it just won't work. It asks for the path of the directory of C header files that match my running kernel. If I run uname -r I get:

      2.6.35-22-generic-pae

    So I tell it the source path is /lib/modules/2.6.25-22-generic-pae/build/include and it returns "The directory of kernel headers (version @@VMWARE@@ UTS_RELEASE) does not match your running kernel (version 2.6.35-22-generic-pae)." I'm confused - can anyone offer suggestions, please? I installed the kernel source and headers myself using sudo apt-get install linux-headers-$(uname -r)
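
    A sketch of the path the installer usually wants on Ubuntu; note it must match the running 2.6.35 kernel, not the 2.6.25 path quoted above (exact prompt behaviour varies by VMware Tools version):

      sudo apt-get install linux-headers-$(uname -r)
      ls /usr/src/linux-headers-$(uname -r)/include      # this is the directory to give the installer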

    Read the article

  • man: command not found in zsh (Mac OS 10.5.8)

    - by Oscar
    I changed to zsh from the default (by changing the "Shells open with" preference in Terminal to "command (complete path)" set to /bin/zsh). While most things seem to work, I tried to see the man page for a command and got a "permission denied" message. When I tried sudo, I got "man: command not found". I changed back to the default shell (/bin/tcsh), and this is what I get when I open a new shell:

      Last login: Fri Nov 18 13:53:50 on ttys000
      Fri Nov 18 13:55:21 CST 2011
      /usr/bin/manpath: Permission denied.

    If I try man, I get the same "command not found" message. I guess there is something wrong in my PATH, but I have no idea how to fix it. "echo $PATH" (in tcsh) gives:

      /sw/bin:/sw/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/texbin

    In zsh, it gives:

      /usr/bin:/bin:/sw/bin:/usr/local/bin:/usr/local/teTeX/bin/powerpc-apple-darwin-current:/usr/sbin:/sbin:/usr/texbin:/usr/X11/bin

    Any ideas?
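
    A sketch of what to check from zsh: both the PATH ordering and the permissions on the man tools themselves, since tcsh complained about /usr/bin/manpath being unreadable too (paths taken from the question):

      echo $PATH
      ls -l /usr/bin/man /usr/bin/manpath        # should be mode 755, owned by root
      # if PATH is the problem, set it explicitly in ~/.zshrc, e.g.:
      # export PATH=/sw/bin:/sw/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/texbin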

    Read the article

  • UFW: force traffic thru OpenVPN tunnel / do not leak any traffic

    - by hotzen
    I have VPN access using OpenVPN and am trying to create a safe machine that does not leak traffic over non-VPN interfaces. Using the firewall UFW I try to achieve the following:

      Allow access from the LAN to the machine's web interface
      Otherwise only allow traffic on tun0 (the OpenVPN tunnel interface, once established)
      Reject (or forward?) any traffic over other interfaces

    Currently I am using the following rules (sudo ufw status):

      To                        Action       From
      --                        ------       ----
      192.168.42.11 9999/tcp    ALLOW        Anywhere            # allow web-interface
      Anywhere on tun0          ALLOW        Anywhere            # out only thru tun0
      Anywhere                  ALLOW OUT    Anywhere on tun0    # in only thru tun0

    My problem is that the machine is initially not able to establish the OpenVPN connection, since only tun0 is allowed and it is not yet established (chicken-and-egg problem). How do I allow creating the OpenVPN connection and from that point onward force every single packet to go through the VPN tunnel?
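
    A sketch of one way to break the chicken-and-egg: allow just the handshake to the VPN endpoint on the physical interface, and keep everything else on tun0. The endpoint address and 1194/udp below are placeholders; use the remote line from your .ovpn file:

      sudo ufw default deny outgoing
      sudo ufw allow out to 203.0.113.10 port 1194 proto udp   # the OpenVPN handshake only
      sudo ufw allow out on tun0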

    Read the article

  • How to install Imagick on Mac OS with XAMPP

    - by Cat
    I'm having some trouble installing Imagick on a Mac. I have XAMPP installed, and when I run 'sudo pecl install imagick', I get this error:

      checking ImageMagick MagickWand API configuration program... configure: error: not found. Please provide a path to MagickWand-config or Wand-config program.
      ERROR: `/private/tmp/pear/temp/imagick/configure --with-imagick="/Applications/XAMPP/xamppfiles/bin"' failed

    The error above shows up when I input /Applications/XAMPP/xamppfiles/bin after the prompt "Please provide the prefix of Imagemagick installation [autodetect]". Do I need to install some other library before running the Imagick install command?
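
    A sketch of the usual fix: pecl is looking for an ImageMagick development install that provides MagickWand-config, which XAMPP's bin directory does not ship. The Homebrew path below is an assumption:

      brew install imagemagick                  # or MacPorts / a source build
      which MagickWand-config                   # note the prefix it lives under, e.g. /usr/local
      sudo pecl install imagick                 # give that prefix at the ImageMagick prompt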

    Read the article

  • How do I get write access to ubuntu files from Windows?

    - by Steven
    I'm running Ubuntu 11.10 on my Virtual Machine as a web server. I've mounted the W:/ drive in Win 7 to my /www folder in Ubuntu. I can read the files, but I'm not able to write to the files. In Samba, I have created the following user:

      <www-data> = "<www-data>"

    And given guest ok for the www folder:

      [www]
      comment = Ubuntu WWW area
      path = /var/www
      browsable = yes
      guest ok = yes
      read only = no
      create mask = 0755
      ;directory mask = 0775
      force user = www-data
      force group = www-data

    I've also run sudo chmod -R 755 www to ensure correct rw access. What am I missing in order to get write access to my ubuntu files from Windows?
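
    A sketch of what typically blocks writes here: with force user = www-data, the files themselves must be writable by www-data, and chmod 755 only grants write to the owner. The commands assume the tree should be owned by www-data:

      sudo chown -R www-data:www-data /var/www
      sudo chmod -R 775 /var/www
      testparm                                  # sanity-check smb.conf
      sudo service smbd restart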

    Read the article

  • Deny users in a certain group access to dovecot

    - by celil
    I installed the dovecot-imapd package in Ubuntu, and my setup is as follows:

      $ sudo dovecot -n
      # 1.2.9: /etc/dovecot/dovecot.conf
      # OS: Linux 2.6.32-27-generic-pae i686 Ubuntu 10.04.1 LTS
      log_timestamp: %Y-%m-%d %H:%M:%S
      protocols: imaps
      login_dir: /var/run/dovecot/login
      login_executable: /usr/lib/dovecot/imap-login
      mail_privileged_group: mail
      mail_location: maildir:~/Maildir
      mbox_write_locks: fcntl dotlock
      auth default:
        passdb:
          driver: pam
        userdb:
          driver: passwd

    For security reasons I would like to deny all users that are in the admin group the ability to do imap login via dovecot. This is done in order to prevent a brute force attacker from discovering the admin passwords and obtaining administrator privileges on the system. How can this be achieved? Presumably, I will have to modify some settings in /etc/dovecot/dovecot.conf, but I am hesitant to change the default settings lest I create other security vulnerabilities.
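
    Since this setup authenticates through PAM, one hedged sketch is to reject the admin group at the PAM layer rather than touching dovecot.conf: add a pam_listfile line near the top of /etc/pam.d/dovecot (the group-list file name below is arbitrary):

      # /etc/pam.d/dovecot (before the common-auth include)
      auth    required    pam_listfile.so onerr=fail item=group sense=deny file=/etc/dovecot/denied_groups

      # then create the group list:
      #   echo admin | sudo tee /etc/dovecot/denied_groups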

    Read the article

  • Full HD video playback acceleration with mplayer on Ubuntu Lucid

    - by pts
    I know that for an NVidia card I can sudo apt-get install nvidia-current mplayer, reboot, and then use mplayer -vo vdpau -vc ffmpeg12vdpau,ffwmv3vdpau,ffvc1vdpau,ffh264vdpau FILE.mkv to get accelerated video playback of H.264 and other codecs, so even full HD videos can be played back with only little CPU. (And there are many other options, e.g. XBMC also supports VDPAU.) But how do I get accelerated video playback if I have a recent ATI or Intel video card on Ubuntu Lucid? How do I figure out if my video card has acceleration built in? The solution has to work with mplayer or mplayer2. It's OK for me to recompile mplayer(2), but I'd prefer installing both the kernel and the X.org X server from a binary package repository.
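
    For Intel/ATI cards the rough counterpart to VDPAU is VA-API; a sketch of how to check whether the installed driver exposes it at all (package names vary by release, and vainfo may live in libva-utils or not be in Lucid's default repos):

      sudo apt-get install vainfo
      vainfo              # lists supported decode profiles (e.g. H.264) if acceleration is available

    As far as I know, stock Lucid mplayer has no VA-API output, so actually using it would need a VA-API-capable mplayer build.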

    Read the article

  • How to automatically show USB camera or memory stick contents in Icewm?

    - by darenw
    I normally use a very lightweight Linux setup. No desktop like Gnome or KDE, just Icewm as the windows manager and nothing else that normal users might consider essential. Well, I do need a file manager - I use Thunar. Recently I've been trying Gnome. Whenever I shove a memory stick into a USB port, or connect my digital camera, it can automatically pop up a file manager showing all the goodies on that device. KDE does this too. I like this. Although quick at the command line, I like not having to go sudo to mount the device and all that. If I want to stick with a lightweight setup using Icewm+Thunar, is there something non-huge I can install to make external devices fire up a Thunar window, or otherwise make access to the contents brainlessly easy?
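
    A sketch of a lightweight route that fits an Icewm-only setup: run a small automounter from the window manager's startup script (udiskie is one such tool; whether your distro packages it is an assumption):

      sudo apt-get install udiskie        # or your distro's equivalent
      # in ~/.icewm/startup:
      udiskie &                           # mounts hotplugged drives under /media without sudo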

    Read the article

  • Linux - use dhcp again to get IP

    - by Markus Orreilly
    I had statically set my IP in Linux using:

      sudo ifconfig eth0 192.168.blah.blah

    Now I want it to go back to using DHCP to assign the IP. How do I do that? This is what I see when I run dhclient:

      Internet Systems Consortium DHCP Client V3.1.2
      Copyright 2004-2008 Internet Systems Consortium.
      All rights reserved.
      For info, please visit http://www.isc.org/sw/dhcp/

      Listening on LPF/eth0/08:00:27:9b:43:09
      Sending on   LPF/eth0/08:00:27:9b:43:09
      Sending on   Socket/fallback
      DHCPREQUEST of 192.168.56.104 on eth0 to 255.255.255.255 port 67
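
    A sketch of going back to DHCP on the spot; the flush step matters, otherwise the hand-set static address lingers alongside the leased one:

      sudo ip addr flush dev eth0     # drop the statically set address
      sudo dhclient eth0              # request a lease from the DHCP server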

    Read the article

  • sSMTP Configuration Question

    - by SevenCentral
    I've installed sSMTP on Ubuntu 10.04 via:

      sudo apt-get install ssmtp

    My configuration file is:

      #
      # Config file for sSMTP sendmail
      #
      # The person who gets all mail for userids < 1000
      # Make this empty to disable rewriting.
      [email protected]
      # The place where the mail goes. The actual machine name is required no
      # MX records are consulted. Commonly mailhosts are named mail.domain.com
      mailhub=smtp.gmail.com:587
      # Where will the mail seem to come from?
      #rewriteDomain=
      # The full hostname
      hostname=somedomain.com
      # Are users allowed to set their own From: address?
      # YES - Allow the user to specify their own From: address
      # NO - Use the system generated From: address
      #FromLineOverride=YES
      [email protected]
      authpass=****
      usestarttls=yes

    Am I transmitting my credentials in clear text? Is calling ssmtp a secure operation? Thanks.
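
    On the clear-text question, a sketch of how to see for yourself what happens on port 587: with usestarttls=yes the session should upgrade to TLS before AUTH, which you can mimic with openssl:

      openssl s_client -connect smtp.gmail.com:587 -starttls smtp -quiet
      # if the TLS handshake completes here, ssmtp's credentials also travel
      # inside TLS; without usestarttls=yes they would be sent base64-encoded
      # in the clear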

    Read the article

  • Why is my mdadm raid-1 recovery so slow?

    - by dimmer
    I'm running Ubuntu 10.04 on this system. My raid-1 restore started out fast but quickly became ridiculously slow (at this rate the restore will take 150 days!):

      dimmer@paimon:~$ cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : active raid1 sdc1[2] sdb1[1]
            1953513408 blocks [2/1] [_U]
            [====>................]  recovery = 24.4% (477497344/1953513408) finish=217368.0min speed=113K/sec

      unused devices: <none>

    Even though I have set the kernel variables to reasonably quick values:

      dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_min
      1000000
      dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_max
      100000000

    I am using two 2.0TB Western Digital hard disks, WDC WD20EARS-00M and WDC WD20EARS-00J. I believe they have been partitioned such that their sectors are aligned.

      dimmer@paimon:/sys$ sudo parted /dev/sdb
      GNU Parted 2.2
      Using /dev/sdb
      Welcome to GNU Parted! Type 'help' to view a list of commands.
      (parted) p
      Model: ATA WDC WD20EARS-00M (scsi)
      Disk /dev/sdb: 2000GB
      Sector size (logical/physical): 512B/512B
      Partition Table: gpt

      Number  Start   End     Size    File system  Name  Flags
       1      1049kB  2000GB  2000GB  ext4

      (parted) unit s
      (parted) p
      Number  Start  End          Size         File system  Name  Flags
       1      2048s  3907028991s  3907026944s  ext4

      (parted) q

      dimmer@paimon:/sys$ sudo parted /dev/sdc
      GNU Parted 2.2
      Using /dev/sdc
      Welcome to GNU Parted! Type 'help' to view a list of commands.
      (parted) p
      Model: ATA WDC WD20EARS-00J (scsi)
      Disk /dev/sdc: 2000GB
      Sector size (logical/physical): 512B/4096B
      Partition Table: gpt

      Number  Start   End     Size    File system  Name  Flags
       1      1049kB  2000GB  2000GB  ext4

    I am beginning to think that I have a hardware problem, otherwise I can't imagine why the mdadm restore should be so slow. I have done a benchmark on /dev/sdc using Ubuntu's disk utility GUI app, and the results looked normal, so I know that sdc is capable of writing faster than this. I also had the same problem on a similar WD drive that I RMAd because of bad sectors. I suppose it's possible they sent me a replacement with bad sectors too, although there are no SMART values showing them yet. Any ideas? Thanks.

    As requested, output of top sorted by cpu usage (notice there is ~0 cpu usage; iowait is also zero, which seems strange):

      top - 11:35:13 up 2 days, 9:40, 3 users, load average: 2.87, 2.58, 2.30
      Tasks: 142 total, 1 running, 141 sleeping, 0 stopped, 0 zombie
      Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem: 3096304k total, 1482164k used, 1614140k free, 617672k buffers
      Swap: 1526132k total, 0k used, 1526132k free, 535416k cached

        PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
         45 root  20   0     0    0    0 S    0  0.0  2:17.02 scsi_eh_0
          1 root  20   0  2808 1752 1204 S    0  0.1  0:00.46 init
          2 root  20   0     0    0    0 S    0  0.0  0:00.00 kthreadd
          3 root  RT   0     0    0    0 S    0  0.0  0:00.02 migration/0
          4 root  20   0     0    0    0 S    0  0.0  0:00.17 ksoftirqd/0
          5 root  RT   0     0    0    0 S    0  0.0  0:00.00 watchdog/0
          6 root  RT   0     0    0    0 S    0  0.0  0:00.02 migration/1
    dmesg errors, definitely looking like hardware:

      [202884.000157] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
      [202884.007015] ata5.00: failed command: FLUSH CACHE EXT
      [202884.013728] ata5.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
      [202884.013730]          res 40/00:00:ff:59:2e/00:00:35:00:00/e0 Emask 0x4 (timeout)
      [202884.033667] ata5.00: status: { DRDY }
      [202884.040329] ata5: hard resetting link
      [202889.400050] ata5: link is slow to respond, please be patient (ready=0)
      [202894.048087] ata5: COMRESET failed (errno=-16)
      [202894.054663] ata5: hard resetting link
      [202899.412049] ata5: link is slow to respond, please be patient (ready=0)
      [202904.060107] ata5: COMRESET failed (errno=-16)
      [202904.066646] ata5: hard resetting link
      [202905.840056] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
      [202905.849178] ata5.00: configured for UDMA/133
      [202905.849188] ata5: EH complete
      [203899.000292] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
      [203899.007096] ata5.00: failed command: IDENTIFY DEVICE
      [203899.013841] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in
      [203899.013843]          res 40/00:00:ff:f9:f6/00:00:38:00:00/e0 Emask 0x4 (timeout)
      [203899.041232] ata5.00: status: { DRDY }
      [203899.048133] ata5: hard resetting link
      [203899.816134] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
      [203899.826062] ata5.00: configured for UDMA/133
      [203899.826079] ata5: EH complete
      [204375.000200] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
      [204375.007421] ata5.00: failed command: IDENTIFY DEVICE
      [204375.014799] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in
      [204375.014800]          res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout)
      [204375.044374] ata5.00: status: { DRDY }
      [204375.051842] ata5: hard resetting link
      [204380.408049] ata5: link is slow to respond, please be patient (ready=0)
      [204384.440076] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
      [204384.449938] ata5.00: configured for UDMA/133
      [204384.449955] ata5: EH complete
      [204395.988135] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
      [204395.988140] ata5.00: failed command: IDENTIFY DEVICE
      [204395.988147] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in
      [204395.988149]          res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout)
      [204395.988151] ata5.00: status: { DRDY }
      [204395.988156] ata5: hard resetting link
      [204399.320075] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
      [204399.330487] ata5.00: configured for UDMA/133
      [204399.330503] ata5: EH complete
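
    Given the repeated ATA timeouts and link resets on ata5, a sketch of checking the member drive directly before blaming mdadm (smartmontools assumed installed; that ata5 maps to /dev/sdc is itself an assumption worth confirming in dmesg):

      sudo smartctl -a /dev/sdc | grep -iE 'pending|reallocated|crc'   # cable/sector trouble shows up here
      sudo smartctl -t short /dev/sdc        # then re-check with: smartctl -l selftest /dev/sdc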

    Read the article
