Search Results

Search found 85 results on 4 pages for 'hdparm'.

Page 1/4 | 1 2 3 4  | Next Page >

  • Connect an SSD externally such that hdparm is fully supported

    - by student
    I am just trying some hdparm magic with my new SSD (Samsung 840 Pro). However, I don't want to swap the drive in and out over and over, so it would be great if I could connect it externally to my laptop. I have a cheap SATA-USB adapter, but I suspect it doesn't support the ATA commands sent by hdparm. So what's the best way to do this? Are there SATA-USB adapters that fully support hdparm's commands? Would it be a good idea to buy a SATA-eSATA adapter to get full control over the drive?
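
    A quick, hedged way to test whether a given adapter passes ATA commands through is to issue an identify and watch for an error (here /dev/sdX stands for wherever the adapter shows up):

      sudo hdparm -I /dev/sdX   # succeeds only if IDENTIFY DEVICE reaches the drive
      sudo hdparm -C /dev/sdX   # a passthrough-capable bridge also reports power state

    A bridge that lacks SAT (SCSI/ATA Translation) passthrough typically fails here with "HDIO_DRIVE_CMD (identify) failed".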

    Read the article

  • How do hdparm's -S and -B options interact?

    - by user697683
    These two options seem confusing. For example: according to the man page -B 254 "does not permit spin-down". However, testing with -B 254 -S 1 the drive does spin down after 5 seconds. -B Query/set Advanced Power Management feature, if the drive supports it. A low value means aggressive power management and a high value means better performance. Possible settings range from values 1 through 127 (which permit spin-down), and values 128 through 254 (which do not permit spin-down). The highest degree of power management is attained with a setting of 1, and the highest I/O performance with a setting of 254. A value of 255 tells hdparm to disable Advanced Power Management altogether on the drive (not all drives support disabling it, but most do). -S Put the drive into idle (low-power) mode, and also set the standby (spindown) timeout for the drive. This timeout value is used by the drive to determine how long to wait (with no disk activity) before turning off the spindle motor to save power. Under such circumstances, the drive may take as long as 30 seconds to respond to a subsequent disk access, though most drives are much quicker. The encoding of the timeout value is somewhat peculiar. A value of zero means "timeouts are disabled": the device will not automatically enter standby mode. Values from 1 to 240 specify multiples of 5 seconds, yielding timeouts from 5 seconds to 20 minutes. Values from 241 to 251 specify from 1 to 11 units of 30 minutes, yielding timeouts from 30 minutes to 5.5 hours. A value of 252 signifies a timeout of 21 minutes. A value of 253 sets a vendor-defined timeout period between 8 and 12 hours, and the value 254 is reserved. 255 is interpreted as 21 minutes plus 15 seconds. Note that some older drives may have very different interpretations of these values.
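
    The interaction is easy to probe directly; a minimal sketch, with /dev/sdX as a placeholder:

      sudo hdparm -B 254 -S 1 /dev/sdX   # APM 254 plus a 5-second standby timeout
      sleep 10
      sudo hdparm -C /dev/sdX            # prints "standby" if the drive spun down anyway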

    Read the article

  • Changing drive nodes & hdparm

    - by Kalamalka Kid
    I am currently trying to create a command that runs at startup to kill the power on two of my very noisy hard drives. I have edited /etc/rc.local to include: sudo hdparm -y /dev/sdc sudo hdparm -y /dev/sdd exit 0 While I think this should work, the allocated drive letters keep switching around every time I reboot. I have sda, sdb, sdc, sdd, and sde, but they keep getting jumbled around, so the drive I wish to shut down is often no longer sdd, which makes shutting down the right drive at startup quite cumbersome. I had a perfectly functioning fstab file which disappeared, but I restored it from the backup into /etc: # <file system> <mount point> <type> <options> <dump> <pass> #Entry for /dev/sda1 : UUID=43c09daf-08a5-44f2-89b0-fc7c6f0d1e67 / ext4 errors=remount-ro 0 1 #Entry for /dev/sdd1 : UUID=443AFBAD7FE50945 /media/DX100 ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0 #Entry for /dev/sdb1 : UUID=FCE456F5E456B21E /media/GalaxyM83 ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0 #Entry for /dev/sdf1 : UUID=1CA057FDA057DBB8 /media/Holideck ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0 #Entry for /dev/sdc1 : UUID=7ABB49654B799D40 /media/JX3P ntfs defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0 Since the order of the drives changes on every boot, a quick workaround was to address the drives by UUID instead of device letter, by editing /etc/rc.local to include: hdparm -y /dev/disk/by-uuid/443AFBAD7FE50945 hdparm -y /dev/disk/by-uuid/7ABB49654B799D40 So I thought I was in the clear, as I heard both hard drives spin down during the boot sequence, BUT as soon as I log in both drives start up again! So now I have to figure out what is making them start up again after login, or find another way to get them to turn off. Is there some kind of command I can get to execute after login? I tried editing the startup applications to include an autossh entry with: autossh - sudo hdparm -y /dev/disk/by-uuid/7ABB49654B799D40 autossh - sudo hdparm -y /dev/disk/by-uuid/443AFBAD7FE50945 but this did not turn off the disks after login.
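
    One hedged workaround for the shifting letters is to address the drives by a name that never changes; besides /dev/disk/by-uuid, the kernel also populates /dev/disk/by-id, which works even for drives without a recognizable filesystem (the ID names below are hypothetical placeholders; list yours with ls -l /dev/disk/by-id):

      # /etc/rc.local sketch: spin down the noisy drives by stable ID, not by letter
      hdparm -y /dev/disk/by-id/ata-SAMSUNG_HD154UI_S1XXXXXXXXXXXX
      hdparm -y /dev/disk/by-id/ata-SAMSUNG_HD154UI_S2XXXXXXXXXXXX
      exit 0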

    Read the article

  • Using hdparm for better performance on Web Servers

    - by Rishav
    I just heard about using hdparm to optimize the hard disk performance of a server. Is this common practice? What file systems do you use? I generally deploy on the second-to-last release of Ubuntu for stability reasons; do you use other filesystems, or distributed file systems, from the get-go? Do the hdparm settings change for different file systems? I haven't tried this yet, so how much difference do changes like this make?
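
    Any tuning like this is best bracketed with a before/after measurement; a rough sketch (the read-ahead value is illustrative, not a recommendation):

      sudo hdparm -tT /dev/sda               # baseline cached and buffered read timings
      sudo blockdev --setra 4096 /dev/sda    # e.g. raise read-ahead to 4096 sectors
      sudo hdparm -tT /dev/sda               # re-measure and compare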

    Read the article

  • hdparm - how to secure erase SATA SSD over USB

    - by cc0
    I have been following this guide on how to secure erase an SSD (trying to improve the performance of mine, which currently only writes at about 30 MB/s sequential). However, I'm using a USB-SATA docking device to avoid having the drive frozen. Apparently with this setup the SATA device is recognized as a SCSI drive, which is giving me trouble. When I run "hdparm -I /dev/sda", I get the error: HDIO_DRIVE_CMD (identify) failed: Invalid Exchange After a lot of googling I can't seem to find anyone who has actually solved this problem. However, I have not tried to just go ahead and attempt the secure erase, so I'm not sure whether it would still work. I would love any and all input I can get on this, especially on whether a secure erase will still work with the drive recognized as a SCSI drive. The drive itself is a Samsung 256 GB SSD (PM800); I'm sure you can understand my reluctance to go through this procedure without feeling reasonably safe that I won't mess it up beyond repair.
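
    For reference, the usual hdparm secure-erase sequence looks like this (a sketch only; it assumes hdparm -I reports the drive as not frozen, and 'p' is a throwaway password). Over a bridge without ATA passthrough, these commands fail exactly the way -I does:

      sudo hdparm -I /dev/sda | grep -i frozen                      # must say "not frozen"
      sudo hdparm --user-master u --security-set-pass p /dev/sda    # set a temporary password
      sudo hdparm --user-master u --security-erase p /dev/sda       # issue SECURITY ERASE UNIT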

    Read the article

  • External drive hanging, load average through the roof

    - by Paul Tomblin
    I have an external USB drive, and I run an hourly rsync to it as a backup. This has been working fine for years. This weekend, I got two new 2Tb internal drives, and decided it was time to re-install Ubuntu from scratch to clear out all the old cruft. About once a day since the re-install, the backup script hangs hard, usually in the "rm -rf" I do before the rsync. By the time I notice the problem, my load average is in the stratosphere and climbing fast (one time, it was over 150), but anything that doesn't touch the drive seems to be running fine. One thing that I find suspicious is that something, I don't know what, is doing a "smartctl" and a "hdparm" command on the USB drive. I'm pretty sure smartctl isn't supposed to run on external drives. I can't figure out what's doing it, either. Here's part of ps auwwfx when it's hung: root 7310 0.0 0.0 4248 352 ? D 20:15 0:00 /sbin/hdparm -C /dev/sdd root 7808 0.0 0.0 17372 1632 ? D 20:15 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 8427 0.0 0.0 4248 356 ? D 20:20 0:00 /sbin/hdparm -C /dev/sdd root 8925 0.0 0.0 17372 1628 ? D 20:20 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 9529 0.0 0.0 4248 356 ? D 20:25 0:00 /sbin/hdparm -C /dev/sdd root 10026 0.0 0.0 17372 1628 ? D 20:25 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 10655 0.0 0.0 4248 356 ? D 20:30 0:00 /sbin/hdparm -C /dev/sdd root 11151 0.0 0.0 17372 1632 ? D 20:30 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 11774 0.0 0.0 4248 356 ? D 20:35 0:00 /sbin/hdparm -C /dev/sdd root 12271 0.0 0.0 17372 1628 ? D 20:35 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 12878 0.0 0.0 4248 352 ? D 20:40 0:00 /sbin/hdparm -C /dev/sdd root 13374 0.0 0.0 17372 1632 ? D 20:40 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 14011 0.0 0.0 4248 352 ? D 20:45 0:00 /sbin/hdparm -C /dev/sdd root 14507 0.0 0.0 17372 1628 ? D 20:45 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 15116 0.0 0.0 4248 352 ? D 20:50 0:00 /sbin/hdparm -C /dev/sdd root 15612 0.0 0.0 17372 1632 ? D 20:50 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 16223 0.0 0.0 4248 352 ? D 20:55 0:00 /sbin/hdparm -C /dev/sdd root 16734 0.0 0.0 17372 1632 ? D 20:55 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 17345 0.0 0.0 4248 352 ? D 21:00 0:00 /sbin/hdparm -C /dev/sdd root 17842 0.0 0.0 17372 1628 ? D 21:00 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 18463 0.0 0.0 4248 352 ? D 21:05 0:00 /sbin/hdparm -C /dev/sdd root 18960 0.0 0.0 17372 1628 ? D 21:05 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 19598 0.0 0.0 4248 356 ? D 21:10 0:00 /sbin/hdparm -C /dev/sdd root 20096 0.0 0.0 17372 1628 ? D 21:10 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 21280 0.0 0.0 4244 356 ? D 21:15 0:00 /sbin/hdparm -C /dev/sdd root 21784 0.0 0.0 17372 1632 ? D 21:15 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 22414 0.0 0.0 4244 356 ? D 21:20 0:00 /sbin/hdparm -C /dev/sdd root 22912 0.0 0.0 17372 1628 ? D 21:20 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 23541 0.0 0.0 4244 356 ? D 21:25 0:00 /sbin/hdparm -C /dev/sdd root 24038 0.0 0.0 17372 1632 ? D 21:25 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 24658 0.0 0.0 4244 356 ? D 21:30 0:00 /sbin/hdparm -C /dev/sdd root 25157 0.0 0.0 17372 1628 ? D 21:30 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd Why is this happening, and how can I stop it?
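
    One way to find what keeps spawning those commands is to walk up the parent chain of a stuck instance (sketch; PID 7310 is taken from the listing above):

      ps -o ppid= -p 7310                 # parent PID of the hung hdparm
      ps -fp "$(ps -o ppid= -p 7310)"     # show what that parent actually is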

    Read the article

  • Is wiper.sh working?

    - by Aleksander Blomskøld
    I'm setting up a server running Ubuntu Precise, and I'm trying to verify whether SSD TRIM is working. fstrim fails: ~ sudo fstrim -v / fstrim: /: FITRIM ioctl failed: Operation not supported So I tried wiper.sh, which ships with hdparm: wiper-3.5 sudo ./wiper.sh --verbose --commit /dev/sda1 wiper.sh: Linux SATA SSD TRIM utility, version 3.5, by Mark Lord. rootdev=/dev/sda1 fsmode2: fsmode=read-write /: fstype=ext4 freesize = 169502088 KB, reserved = 1695020 KB Preparing for online TRIM of free space on /dev/sda1 (ext4 mounted read-write at /). This operation could silently destroy your data. Are you sure (y/N)? y Creating temporary file (167807068 KB).. Syncing disks.. Beginning TRIM operations.. get_trimlist=/sbin/hdparm --fibmap WIPER_TMPFILE.11503 /dev/sda: trimming 3211263 sectors from 64 ranges succeeded trimming 3571713 sectors from 64 ranges succeeded trimming 3915776 sectors from 64 ranges succeeded (...) trimming 3657913 sectors from 60 ranges succeeded Removing temporary file.. Syncing disks.. Done. It seems to be working, but I'm wondering if it really is. Are there any cases where wiper.sh should work when fstrim doesn't? And is there any way to check whether the TRIM actually succeeded (other than trusting the wiper.sh log)?
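
    One hedged way to check a TRIM beyond the log is to read freed sectors back directly; on drives that report "deterministic read zero after TRIM" they should come back blank (the sector number below is a placeholder taken from a --fibmap listing):

      sudo hdparm --fibmap somefile                 # note one of the file's LBA ranges
      rm somefile && sync
      sudo ./wiper.sh --commit /dev/sda1            # re-run the TRIM
      sudo hdparm --read-sector 12345678 /dev/sda   # all zeroes suggests the TRIM took effect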

    Read the article

  • How can I unfreeze an SSD connected to a remote server?

    - by chmac
    I don't have physical access to the machine, so I can't unplug the drive. # hdparm -I /dev/sda | grep frozen frozen The advice I've read elsewhere is to hot-plug the drive: pull the power/SATA cables while the machine is running. That's not possible in this situation, since I don't have physical access. I've tried power-cycling the machine through the host's control panel a few times, but that hasn't worked. Is there any way I can unfreeze the drive without physical access?
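
    One commonly suggested trick, hedged because many servers cannot suspend at all (and a failed resume means a support ticket), is a brief suspend-to-RAM cycle; drives often come back unfrozen:

      sudo rtcwake -m mem -s 60                  # suspend to RAM, auto-wake after 60 seconds
      sudo hdparm -I /dev/sda | grep -i frozen   # hopefully now "not frozen"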

    Read the article

  • How to erase an SSD to restore factory performance in Linux?

    - by Andy B
    Due to big performance issues with an mdraid-1 array, I'd like to pull one of the devices (a Samsung 840 Pro) out of the array, erase it to restore factory performance, and re-add it to the array. The reason I want to do this to one of the SSDs is that the poor performance seems to be related to one specific SSD of the two (although they are the same brand, model and firmware version). But how do I erase an SSD from Linux? I should mention that hdparm indicates both drives are frozen at this time. Maybe because they are part of an md array? Thanks in advance!
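
    A sketch of the whole cycle, assuming the suspect member is /dev/sdb in array /dev/md0 (device names are placeholders, and the erase wipes the partition table, so it must be re-created before re-adding):

      sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1      # pull the member out
      sudo hdparm --user-master u --security-set-pass p /dev/sdb   # needs a non-frozen drive
      sudo hdparm --user-master u --security-erase p /dev/sdb      # ATA Secure Erase
      # re-create the partition(s) to match sda, then:
      sudo mdadm /dev/md0 --add /dev/sdb1                          # re-add and let it resync

    As for the frozen state: it is set by the BIOS at boot, not by md; a suspend/resume cycle or hot-replugging the drive is the usual way to clear it.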

    Read the article

  • Disk is spinning down each minute, unable to disable it

    - by lzap
    I played with the spindown and APM settings of my Samsung disks and now they spin down every minute. I want to disable this, but the drive does not seem to accept any spindown-time or APM value; nothing works, it's all the same. What values would be proper for it? I do not want it to spin down at all. /dev/sda: ATA device, with non-removable media Model Number: SAMSUNG HD154UI Serial Number: S1Y6J1KZ206527 Firmware Revision: 1AG01118 Standards: Used: ATA-8-ACS revision 3b Supported: 7 6 5 4 Configuration: Logical max current cylinders 16383 16383 heads 16 16 sectors/track 63 63 -- CHS current addressable sectors: 16514064 LBA user addressable sectors: 268435455 LBA48 user addressable sectors: 2930277168 Logical/Physical Sector size: 512 bytes device size with M = 1024*1024: 1430799 MBytes device size with M = 1000*1000: 1500301 MBytes (1500 GB) cache/buffer size = unknown Capabilities: LBA, IORDY(can be disabled) Queue depth: 32 Standby timer values: spec'd by Standard, no device specific minimum R/W multiple sector transfer: Max = 16 Current = 16 Advanced power management level: 60 Recommended acoustic management value: 254, current value: 0 DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6 udma7 Cycle time: min=120ns recommended=120ns PIO: pio0 pio1 pio2 pio3 pio4 Cycle time: no flow control=120ns IORDY flow control=120ns Commands/features: Enabled Supported: * SMART feature set Security Mode feature set * Power Management feature set * Write cache * Look-ahead * Host Protected Area feature set * WRITE_BUFFER command * READ_BUFFER command * NOP cmd * DOWNLOAD_MICROCODE * Advanced Power Management feature set Power-Up In Standby feature set * SET_FEATURES required to spinup after power up SET_MAX security extension Automatic Acoustic Management feature set * 48-bit Address feature set * Device Configuration Overlay feature set * Mandatory FLUSH_CACHE * FLUSH_CACHE_EXT * SMART error logging * SMART self-test Media Card Pass-Through * General Purpose Logging feature set * 64-bit World wide name * WRITE_UNCORRECTABLE_EXT command * {READ,WRITE}_DMA_EXT_GPL commands * Segmented DOWNLOAD_MICROCODE * Gen1 signaling speed (1.5Gb/s) * Gen2 signaling speed (3.0Gb/s) * Native Command Queueing (NCQ) * Host-initiated interface power management * Phy event counters * NCQ priority information DMA Setup Auto-Activate optimization Device-initiated interface power management * Software settings preservation * SMART Command Transport (SCT) feature set * SCT Long Sector Access (AC1) * SCT LBA Segment Access (AC2) * SCT Error Recovery Control (AC3) * SCT Features Control (AC4) * SCT Data Tables (AC5) Security: Master password revision code = 65534 supported not enabled not locked frozen not expired: security count supported: enhanced erase 326min for SECURITY ERASE UNIT. 326min for ENHANCED SECURITY ERASE UNIT. Logical Unit WWN Device Identifier: 50024e900300cca3 NAA : 5 IEEE OUI : 0024e9 Unique ID : 00300cca3 Checksum: correct I have the very same disk which I did not "tune" and it does not spin down, but I do not know where to read its settings from. hdparm only shows this: Advanced power management level: 60 Recommended acoustic management value: 254, current value: 0 Edit: It turned out the issue was the tuned daemon in RHEL 6. It was too aggressive; I turned off its disk tuning and the drives no longer spin down.
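
    For the record, the combination usually suggested to stop all spin-downs, per the hdparm man page (sketch):

      sudo hdparm -B 255 -S 0 /dev/sda   # disable APM (where supported) and the standby timer
      sudo hdparm -C /dev/sda            # should keep reporting "active/idle"
      sudo tuned-adm off                 # RHEL 6: stop the tuned profile that was re-tuning the disks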

    Read the article

  • execute script with sudo after login

    - by Kalamalka Kid
    i need to execute the following commands AFTER login. sudo hdparm -y /dev/disk/by-uuid/443AFBAD7FE50945 sudo hdparm -y /dev/disk/by-uuid/7ABB49654B799D40 (trying to edit rc.local does not work nor does using hdparm.conf because as soon as I log in the disks start up again). I have tried numerous things like bash files and autossh entries in the startup applications with no luck because sudo is involved.
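
    One hedged approach is to allow that one command to run password-less via sudoers, then call it from an ordinary startup entry ('kid' is a placeholder username):

      # /etc/sudoers.d/hdparm (always edit with visudo)
      kid ALL=(root) NOPASSWD: /sbin/hdparm

    After that, a Startup Applications entry such as sudo hdparm -y /dev/disk/by-uuid/443AFBAD7FE50945 can run unattended at login.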

    Read the article

  • How to change hard drive spin-down time?

    - by norpol
    If I run my notebook in battery mode, Ubuntu spins down my hard drive every few seconds. How can I fix that? I have tried sudo hdparm -S 127 /dev/sda* but I am not able to disable the spindown, and I can't find the reason for this "bug". Running Ubuntu 12.04; the bug has been present since 12.04. Recent updates are installed and the problem persists. I can change the setting via sudo hdparm -B 127, but if I change hdparm.conf, Ubuntu does not pick it up: /dev/sda { apm_battery=127 }
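
    For reference, a stanza in the format /etc/hdparm.conf expects (a sketch; note that on some releases pm-utils re-applies its own values on power events, overriding this file):

      /dev/sda {
          apm_battery = 254
          spindown_time = 0
      }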

    Read the article

  • My Linux server takes more than an hour to boot. Suggestions?

    - by jamieb
    I am building a CentOS 5.4 system that boots off a compact flash card using a card reader that emulates an IDE drive. It literally takes about an hour to boot. The ultra-slow part occurs when Grub is loading the kernel. Once that's done, the rest of the boot process only takes about a minute to get to a login prompt. Does anyone have any suggestions? I suspect that it may have to do with UDMA. Everything IDE-related in my BIOS seems to checkout. The read performance hdparm is telling me 1.77 MB/s. Ouch! (But even at that rate, it still shouldn't take an hour to decompress and load the kernel) [root@server ~]# hdparm -tT /dev/hdc /dev/hdc: Timing cached reads: 2444 MB in 2.00 seconds = 1222.04 MB/sec Timing buffered disk reads: 6 MB in 3.39 seconds = 1.77 MB/sec Trying to enable DMA is a no-go though: [root@server ~]# hdparm -d1 /dev/hdc /dev/hdc: setting using_dma to 1 (on) HDIO_SET_DMA failed: Operation not permitted using_dma = 0 (off) Here's some command outputs that might help: System [root@server ~]# uname -a Linux server.localdomain 2.6.18-164.el5xen #1 SMP Thu Sep 3 04:47:32 EDT 2009 i686 i686 i386 GNU/Linux PCI info: [root@server ~]# lspci -v 00:00.0 Host bridge: Intel Corporation 82945G/GZ/P/PL Memory Controller Hub (rev 02) Subsystem: Intel Corporation 82945G/GZ/P/PL Memory Controller Hub Flags: bus master, fast devsel, latency 0 Capabilities: [e0] Vendor Specific Information 00:02.0 VGA compatible controller: Intel Corporation 82945G/GZ Integrated Graphics Controller (rev 02) (prog-if 00 [VGA controller]) Subsystem: Intel Corporation 82945G/GZ Integrated Graphics Controller Flags: bus master, fast devsel, latency 0, IRQ 10 Memory at fdf00000 (32-bit, non-prefetchable) [size=512K] I/O ports at ff00 [size=8] Memory at d0000000 (32-bit, prefetchable) [size=256M] Memory at fdf80000 (32-bit, non-prefetchable) [size=256K] Capabilities: [90] Message Signalled Interrupts: 64bit- Queue=0/0 Enable- Capabilities: [d0] Power Management version 2 00:1d.0 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 (rev 01) (prog-if 00 [UHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 Flags: bus master, medium devsel, latency 0, IRQ 16 I/O ports at fe00 [size=32] 00:1d.1 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 (rev 01) (prog-if 00 [UHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 Flags: bus master, medium devsel, latency 0, IRQ 17 I/O ports at fd00 [size=32] 00:1d.2 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 (rev 01) (prog-if 00 [UHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 Flags: bus master, medium devsel, latency 0, IRQ 18 I/O ports at fc00 [size=32] 00:1d.3 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 (rev 01) (prog-if 00 [UHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 Flags: bus master, medium devsel, latency 0, IRQ 19 I/O ports at fb00 [size=32] 00:1d.7 USB Controller: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller (rev 01) (prog-if 20 [EHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller Flags: bus master, medium devsel, latency 0, IRQ 16 Memory at fdfff000 (32-bit, non-prefetchable) [size=1K] Capabilities: [50] Power Management version 2 Capabilities: [58] Debug port 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1) (prog-if 01 [Subtractive decode]) Flags: bus master, fast devsel, 
latency 0 Bus: primary=00, secondary=01, subordinate=01, sec-latency=32 I/O behind bridge: 0000d000-0000dfff Memory behind bridge: fde00000-fdefffff Prefetchable memory behind bridge: 00000000fdd00000-00000000fdd00000 Capabilities: [50] #0d [0000] 00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01) Subsystem: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge Flags: bus master, medium devsel, latency 0 Capabilities: [e0] Vendor Specific Information 00:1f.2 IDE interface: Intel Corporation 82801GB/GR/GH (ICH7 Family) SATA IDE Controller (rev 01) (prog-if 80 [Master]) Subsystem: Intel Corporation 82801GB/GR/GH (ICH7 Family) SATA IDE Controller Flags: bus master, 66MHz, medium devsel, latency 0, IRQ 17 I/O ports at <unassigned> I/O ports at <unassigned> I/O ports at <unassigned> I/O ports at <unassigned> I/O ports at f800 [size=16] Capabilities: [70] Power Management version 2 00:1f.3 SMBus: Intel Corporation 82801G (ICH7 Family) SMBus Controller (rev 01) Subsystem: Intel Corporation 82801G (ICH7 Family) SMBus Controller Flags: medium devsel, IRQ 17 I/O ports at 0500 [size=32] 01:04.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10) Subsystem: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ Flags: bus master, medium devsel, latency 32, IRQ 18 I/O ports at de00 [size=256] Memory at fdeff000 (32-bit, non-prefetchable) [size=256] Capabilities: [50] Power Management version 2 01:06.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10) Subsystem: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ Flags: bus master, medium devsel, latency 32, IRQ 17 I/O ports at dc00 [size=256] Memory at fdefe000 (32-bit, non-prefetchable) [size=256] Capabilities: [50] Power Management version 2 01:07.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10) Subsystem: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ Flags: bus master, medium devsel, latency 32, IRQ 19 I/O ports at da00 [size=256] Memory at fdefd000 (32-bit, non-prefetchable) [size=256] Capabilities: [50] Power Management version 2 hdparm ouput: [root@server ~]# hdparm /dev/hdc /dev/hdc: multcount = 0 (off) IO_support = 0 (default 16-bit) unmaskirq = 0 (off) using_dma = 0 (off) keepsettings = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 8146/16/63, sectors = 8211168, start = 0 [root@server ~]# hdparm -I /dev/hdc /dev/hdc: ATA device, with non-removable media Model Number: InnoDisk Corp. - iCF4000 4GB Serial Number: 20091023AACA70000753 Firmware Revision: 081107 Standards: Supported: 5 Likely used: 6 Configuration: Logical max current cylinders 8146 8146 heads 16 16 sectors/track 63 63 -- CHS current addressable sectors: 8211168 LBA user addressable sectors: 8211168 device size with M = 1024*1024: 4009 MBytes device size with M = 1000*1000: 4204 MBytes (4 GB) Capabilities: LBA, IORDY(can be disabled) Standby timer values: spec'd by Vendor R/W multiple sector transfer: Max = 2 Current = 2 DMA: mdma0 mdma1 mdma2 udma0 udma1 *udma2 udma3 udma4 Cycle time: min=120ns recommended=120ns PIO: pio0 pio1 pio2 pio3 pio4 Cycle time: no flow control=120ns IORDY flow control=120ns Commands/features: Enabled Supported: * Power Management feature set * WRITE_BUFFER command * READ_BUFFER command * NOP cmd * CFA feature set * Mandatory FLUSH_CACHE HW reset results: CBLID- above Vih Device num = 0 CFA power mode 1: enabled and required by some commands Maximum current = 100ma Checksum: correct
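
    A hedged first step is to ask the kernel why it refused DMA before forcing anything, and only then try setting a transfer mode the card already advertises (*udma2 above):

      dmesg | grep -iE 'hdc|dma'           # look for cable, BIOS, or blacklist messages
      sudo hdparm -X udma2 -d1 /dev/hdc    # risky per the man page; use with care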

    Read the article

  • Software that supports ATA Secure Erase Command

    - by vy32
    We have a lot of drives that need to be sanitized. NIST SP 800-88 recommends software that uses the ATA Secure Erase command. That's apparently the only way to be sure that the drive is properly wiped, due to bad-block remapping and such. I know that this functionality is available in hdparm. The problem with that approach is that it is inconsistent on multiple platforms, occasionally times out, doesn't have error-checking logic, and doesn't check the resulting drive to make sure that it has, in fact, been erased. So a proper program might use hdparm, but hdparm by itself isn't an answer. I'm looking for open source software that implements ATA Secure Erase. Ideally it will be a bootable disk image like DBAN, but it will use the ATA command.
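
    A minimal sketch of the checks such a wrapper would perform around hdparm, with a crude post-erase verification (the device name is a placeholder; many drives read back all-zero after a successful erase, but not all):

      dev=/dev/sdX
      sudo hdparm -I "$dev" | grep -q 'ENHANCED SECURITY ERASE' || echo "no enhanced erase"
      sudo hdparm -I "$dev" | grep -iq 'not.frozen' || { echo "drive frozen"; exit 1; }
      sudo hdparm --user-master u --security-set-pass p "$dev"
      sudo hdparm --user-master u --security-erase p "$dev"
      sudo cmp -n 1048576 "$dev" /dev/zero && echo "first MiB reads back blank"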

    Read the article

  • Various problems with software raid1 array built with Samsung 840 Pro SSDs

    - by Andy B
    I am bringing to ServerFault a problem that is tormenting me for 6+ months. I have a CentOS 6 (64bit) server with an md software raid-1 array with 2 x Samsung 840 Pro SSDs (512GB). Problems: Serious write speed problems: root [~]# time dd if=arch.tar.gz of=test4 bs=2M oflag=sync 146+1 records in 146+1 records out 307191761 bytes (307 MB) copied, 23.6788 s, 13.0 MB/s real 0m23.680s user 0m0.000s sys 0m0.932s When doing the above (or any other larger copy) the load spikes to unbelievable values (even over 100) going up from ~ 1. When doing the above I've also noticed very weird iostat results: Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1589.50 0.00 54.00 0.00 13148.00 243.48 0.60 11.17 0.46 2.50 sdb 0.00 1627.50 0.00 16.50 0.00 9524.00 577.21 144.25 1439.33 60.61 100.00 md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 md2 0.00 0.00 0.00 1602.00 0.00 12816.00 8.00 0.00 0.00 0.00 0.00 md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 And it keeps it this way until it actually writes the file to the device (out from swap/cache/memory). The problem is that the second SSD in the array has svctm and await roughly 100 times larger than the second. For some reason the wear is different between the 2 members of the array root [~]# smartctl --attributes /dev/sda | grep -i wear 177 Wear_Leveling_Count 0x0013 094% 094 000 Pre-fail Always - 180 root [~]# smartctl --attributes /dev/sdb | grep -i wear 177 Wear_Leveling_Count 0x0013 070% 070 000 Pre-fail Always - 1005 The first SSD has a wear of 6% while the second SSD has a wear of 30%!! It's like the second SSD in the array works at least 5 times as hard as the first one as proven by the first iteration of iostat (the averages since reboot): Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 10.44 51.06 790.39 125.41 8803.98 1633.11 11.40 0.33 0.37 0.06 5.64 sdb 9.53 58.35 322.37 118.11 4835.59 1633.11 14.69 0.33 0.76 0.29 12.97 md1 0.00 0.00 1.88 1.33 15.07 10.68 8.00 0.00 0.00 0.00 0.00 md2 0.00 0.00 1109.02 173.12 10881.59 1620.39 9.75 0.00 0.00 0.00 0.00 md0 0.00 0.00 0.41 0.01 3.10 0.02 7.42 0.00 0.00 0.00 0.00 What I've tried: I've updated the firmware to DXM05B0Q (following reports of dramatic improvements for 840Ps after this update). I have looked for "hard resetting link" in dmesg to check for cable/backplane issues but nothing. I have checked the alignment and I believe they are aligned correctly (1MB boundary, listing below) I have checked /proc/mdstat and the array is Optimal (second listing below). root [~]# fdisk -ul /dev/sda Disk /dev/sda: 512.1 GB, 512110190592 bytes 255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00026d59 Device Boot Start End Blocks Id System /dev/sda1 2048 4196351 2097152 fd Linux raid autodetect Partition 1 does not end on cylinder boundary. /dev/sda2 * 4196352 4605951 204800 fd Linux raid autodetect Partition 2 does not end on cylinder boundary. 
/dev/sda3 4605952 814106623 404750336 fd Linux raid autodetect root [~]# fdisk -ul /dev/sdb Disk /dev/sdb: 512.1 GB, 512110190592 bytes 255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0003dede Device Boot Start End Blocks Id System /dev/sdb1 2048 4196351 2097152 fd Linux raid autodetect Partition 1 does not end on cylinder boundary. /dev/sdb2 * 4196352 4605951 204800 fd Linux raid autodetect Partition 2 does not end on cylinder boundary. /dev/sdb3 4605952 814106623 404750336 fd Linux raid autodetect /proc/mdstat root # cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sdb2[1] sda2[0] 204736 blocks super 1.0 [2/2] [UU] md2 : active raid1 sdb3[1] sda3[0] 404750144 blocks super 1.0 [2/2] [UU] md1 : active raid1 sdb1[1] sda1[0] 2096064 blocks super 1.1 [2/2] [UU] unused devices: Running a read test with hdparm root [~]# hdparm -t /dev/sda /dev/sda: Timing buffered disk reads: 664 MB in 3.00 seconds = 221.33 MB/sec root [~]# hdparm -t /dev/sdb /dev/sdb: Timing buffered disk reads: 288 MB in 3.01 seconds = 95.77 MB/sec But look what happens if I add --direct root [~]# hdparm --direct -t /dev/sda /dev/sda: Timing O_DIRECT disk reads: 788 MB in 3.01 seconds = 262.08 MB/sec root [~]# hdparm --direct -t /dev/sdb /dev/sdb: Timing O_DIRECT disk reads: 534 MB in 3.02 seconds = 176.90 MB/sec Both tests increase but /dev/sdb doubles while /dev/sda increases maybe 20%. I just don't know what to make of this. As suggested by Mr. Wagner I've done another read test with dd this time and it confirms the hdparm test: root [/home2]# dd if=/dev/sda of=/dev/null bs=1G count=10 10+0 records in 10+0 records out 10737418240 bytes (11 GB) copied, 38.0855 s, 282 MB/s root [/home2]# dd if=/dev/sdb of=/dev/null bs=1G count=10 10+0 records in 10+0 records out 10737418240 bytes (11 GB) copied, 115.24 s, 93.2 MB/s So sda is 3 times faster than sdb. Or maybe sdb is doing also something else besides what sda does. Is there some way to find out if sdb is doing more than what sda does? UPDATE Again, as suggested by Mr. Wagner, I have swapped the 2 SSDs. And as he thought it would happen, the problem moved from sdb to sda. So I guess I'll RMA one of the SSDs. I wonder if the cage might be problematic. What is wrong with this array? Please help!
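
    To see whether sdb really receives different work than sda, one hedged approach is to capture the raw block-request stream on both members under the same load:

      sudo blktrace -d /dev/sda -d /dev/sdb -o trace -w 30   # record 30 s of block I/O
      blkparse -i trace                                      # compare per-device events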

    Read the article

  • Slow software RAID

    - by Jure1873
    I've got software RAID 1 for / and /home, and it seems I'm not getting the right speed out of it. Reading from md0 I get around 100 MB/sec; reading from sda or sdb I also get around 95-105 MB/sec. I thought I would get more read speed from two drives. I don't know what the problem is. I'm using kernel 2.6.31-18. hdparm -tT /dev/md0 /dev/md0: Timing cached reads: 2078 MB in 2.00 seconds = 1039.72 MB/sec Timing buffered disk reads: 304 MB in 3.01 seconds = 100.96 MB/sec hdparm -tT /dev/sda /dev/sda: Timing cached reads: 2084 MB in 2.00 seconds = 1041.93 MB/sec Timing buffered disk reads: 316 MB in 3.02 seconds = 104.77 MB/sec hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 2150 MB in 2.00 seconds = 1075.94 MB/sec Timing buffered disk reads: 302 MB in 3.01 seconds = 100.47 MB/sec Edit: RAID 1
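
    This is expected for md RAID 1: a single sequential reader is served from one mirror, so it tops out at one disk's speed; the second disk only helps with concurrent readers. A hedged way to see the aggregate (the offsets just keep the two jobs apart):

      sudo dd if=/dev/md0 of=/dev/null bs=1M count=2048 &
      sudo dd if=/dev/md0 of=/dev/null bs=1M count=2048 skip=4096 &
      wait   # the two jobs together can approach the combined throughput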

    Read the article

  • Why does my HDD keep waking up from standby?

    - by Pablo
    My hard drives, connected to an Ubuntu server, produce the following log entries exactly every 5 minutes. Nov 1 14:10:50 localhost kernel: [ 1602.884936] ata2.00: hard resetting link Nov 1 14:10:51 localhost kernel: [ 1603.226804] ata2.01: hard resetting link Nov 1 14:10:52 localhost kernel: [ 1604.274533] ata2.00: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Nov 1 14:10:52 localhost kernel: [ 1604.274548] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Nov 1 14:10:52 localhost kernel: [ 1604.356669] ata2.00: configured for UDMA/133 Nov 1 14:10:52 localhost kernel: [ 1604.375247] ata2.01: configured for UDMA/133 Nov 1 14:10:52 localhost kernel: [ 1604.375265] ata2: EH complete I don't think this is related to hard drive failure, because it happens for ALL connected hard drives and ONLY when I write spindown_time = 12 in /etc/hdparm.conf. The reason I add this value is to put the disks into standby mode after 60 seconds, which does happen after that period (checked with hdparm -C). My first thought was that smartd was running and spinning up the drives, but I couldn't find it in ps aux | grep smart. Additionally, iostat shows that nothing accessed those drives, since Blk_read and Blk_wrtn remain unchanged. I also killed all processes that might be touching the HDDs (e.g. Samba). So I guess the problem is solely with hdparm... I have no clue where that 5-minute value hides.
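
    One hedged way to catch the culprit red-handed is the kernel's block-dump facility, which logs every block access to dmesg (noisy; best with syslog writes kept to a minimum while it runs):

      echo 1 | sudo tee /proc/sys/vm/block_dump   # start logging block accesses
      sleep 310; dmesg | tail -50                 # wait past the 5-minute mark, then look
      echo 0 | sudo tee /proc/sys/vm/block_dump   # turn logging back off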

    Read the article

  • Unmounted disk still spins up regularly

    - by Erik Johansson
    I just added a disk, with partitions, but none of them are mounted. The disk still spins up every now and then. It goes like this: ### disk spins up hdparm -Y /dev/sdb;date /dev/sdb: issuing sleep command 9 feb 2011 23.37.08 CET ### disk spins up hdparm -Y /dev/sdb;date /dev/sdb: issuing sleep command 9 feb 2011 23.46.12 CET It also always spins up when I shut down the computer. Any tips are welcome, e.g. how can I figure out which process is accessing the disk? Are there any daemons doing this? I know it isn't a cron job.
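
    On desktop installs the usual suspect is the udisks polling daemon; it can be told to leave a disk alone for as long as the command runs (hedged sketch, udisks 1.x syntax):

      udisks --inhibit-polling /dev/sdb   # suppress periodic polling of this device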

    Read the article

  • Optimize bootup sequence

    - by ubuntudroid
    I'm on Ubuntu 11.04 (upgraded from 10.10) and suffering from really long boot times. It got so annoying that I decided to dive into bootchart analysis myself, so I installed bootchart and restarted the system, which generated this chart. However, I'm not really experienced in reading such stuff. What causes the long bootup sequence? Edit: Here is the output of hdparm -i /dev/sda: /dev/sda: Model=SAMSUNG HD501LJ, FwRev=CR100-12, SerialNo=S0MUJ1EQ102621 Config={ Fixed } RawCHS=16383/16/63, TrkSize=34902, SectSize=554, ECCbytes=4 BuffType=DualPortCache, BuffSize=16384kB, MaxMultSect=16, MultSect=16 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=976773168 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120} PIO modes: pio0 pio1 pio2 pio3 pio4 DMA modes: mdma0 mdma1 mdma2 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6 AdvancedPM=no WriteCache=enabled Drive conforms to: unknown: ATA/ATAPI-3,4,5,6,7 * signifies the current active mode And here is the output of hdparm -tT /dev/sda: /dev/sda: Timing cached reads: 2410 MB in 2.00 seconds = 1205.26 MB/sec Timing buffered disk reads: 258 MB in 3.02 seconds = 85.50 MB/sec

    Read the article

  • Copying from a CD-ROM is very slow in Ubuntu

    - by ???
    I'm using this command to copy a CD-ROM image: # dd if=/dev/sr0 of=./maverick.iso But it's very slow, at about 350 KB/sec. I've searched Google and tried the command # hdparm -vi /dev/sr0 /dev/sr0: HDIO_DRIVE_CMD(identify) failed: Bad address IO_support = 1 (32-bit) readonly = 0 (off) readahead = 256 (on) HDIO_GETGEO failed: Inappropriate ioctl for device Model=DVD-ROM UJDA775, FwRev=DA03, SerialNo= Config={ Fixed Removeable DTR<=5Mbs DTR>10Mbs nonMagnetic } RawCHS=0/0/0, TrkSize=0, SectSize=0, ECCbytes=0 BuffType=unknown, BuffSize=unknown, MaxMultSect=0 (maybe): CurCHS=0/0/0, CurSects=0, LBA=yes, LBAsects=0 IORDY=yes, tPIO={min:180,w/IORDY:120}, tDMA={min:120,rec:120} PIO modes: pio0 pio1 pio2 pio3 pio4 DMA modes: sdma0 sdma1 sdma2 mdma0 mdma1 mdma2 UDMA modes: udma0 udma1 *udma2 AdvancedPM=no Drive conforms to: ATA/ATAPI-5 T13 1321D revision 3: ATA/ATAPI-1,2,3,4,5 * signifies the current active mode It seems DMA is already on. And a device test gives: # hdparm -t /dev/sr0 /dev/sr0: Timing buffered disk reads: 2 MB in 6.58 seconds = 311.10 kB/sec # sudo hdparm -tT /dev/sr0 /dev/sr0: Timing cached reads: 2 MB in 2.69 seconds = 760.96 kB/sec Timing buffered disk reads: 4 MB in 5.19 seconds = 789.09 kB/sec The CD-ROM device and disc should be okay, because I can copy the disc very fast in Windows using the UltraISO utility. So I guess something is not configured right in Ubuntu. What could it be?
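
    Since DMA already looks enabled, one hedged thing to try is the drive's read-speed setting, which some setups lower for noise; hdparm can request a higher speed:

      sudo hdparm -E 48 /dev/sr0   # request 48x read speed, then re-run the dd copy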

    Read the article

  • Linux Read-Ahead Downsides

    - by JPerkSter
    Hi everyone, hope all is well. I have a question regarding read-ahead caching. Are there any downsides to raising the size of the read-ahead cache? On our farm we're currently running at 256, and upon raising it higher we are seeing significant throughput gains. [root@server~]# hdparm -tT /dev/sda /dev/sda: Timing cached reads: 7352 MB in 2.00 seconds = 3677.62 MB/sec Timing buffered disk reads: 244 MB in 3.10 seconds = 78.68 MB/sec [root@server ~]# blockdev --setra 10240 /dev/sda [root@server ~]# hdparm -tT /dev/sda /dev/sda: Timing cached reads: 11452 MB in 2.00 seconds = 5728.52 MB/sec Timing buffered disk reads: 422 MB in 3.17 seconds = 133.04 MB/sec We are running kernel 2.6. Thanks!
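
    The main downside is wasted I/O and page-cache pressure for random-access workloads, since each small read drags megabytes in behind it; sequential workloads, as measured above, benefit. To keep the setting across reboots (a sketch, rc.local style):

      blockdev --getra /dev/sda         # current read-ahead, in 512-byte sectors
      blockdev --setra 10240 /dev/sda   # 10240 sectors = 5 MiB read-ahead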

    Read the article

  • What is the recommended way to empty an SSD?

    - by Lekensteyn
    I've just received my new SSD, since the old one died. This Intel 320 SSD supports TRIM. For testing purposes, my dealer put malware, err, Windows on it. I want to get rid of it and install Kubuntu on it. It does not have to be a "secure wipe"; I just need to empty the disk in the most healthy way. I believe that dd if=/dev/zero of=/dev/sda just fills the blocks with zeroes, thereby costing another full write (correct me if I'm wrong). I've seen the answer How to enable TRIM, but it looks like it's suited for clearing empty blocks, not wiping the disk. hdparm seems to be the program to do it, but I'm not sure whether it clears the whole disk OR just cleans empty blocks. From its manual page: --trim-sector-ranges For Solid State Drives (SSDs). EXCEPTIONALLY DANGEROUS. DO NOT USE THIS OPTION!! Tells the drive firmware to discard unneeded data sectors, destroying any data that may have been present within them. This makes those sectors available for immediate use by the firmware's garbage collection mechanism, to improve scheduling for wear-leveling of the flash media. This option expects one or more sector range pairs immediately after the option: an LBA starting address, a colon, and a sector count, with no intervening spaces. EXCEPTIONALLY DANGEROUS. DO NOT USE THIS OPTION!! E.g. hdparm --trim-sector-ranges 1000:4 7894:16 /dev/sdz How can I make all blocks appear as empty using TRIM?
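
    On newer systems, util-linux ships a tool that does exactly this, issuing a discard (TRIM) across the whole device; a hedged sketch, since availability depends on the util-linux version:

      sudo blkdiscard /dev/sda   # TRIM every block on the device; all data is gone afterwards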

    Read the article

  • Hard drive clicking noise on Acer AO722

    - by Blank
    I'm running Ubuntu 11.10 on an Acer Aspire One 722. Whenever I'm on battery power I get a clicking sound from my hard drive every 5 seconds or so (this does not happen when the laptop is plugged in). I'm dual-booting with Windows 7 and I don't get the clicking sound in Windows. The clicking sound stops when I run the command: sudo hdparm -B 254 /dev/sda Also, according to: sudo smartctl -H /dev/sda my hard drive is healthy. Is this clicking sound something I can just ignore? Or is it a serious problem that will eventually damage my computer? If so, how would I fix it? I have tried adding hdparm -B 254 /dev/sda to my /etc/rc.local file, but I still run into the clicking problem if my computer boots while plugged in and is then unplugged. I'm also finding this fix unreliable: sometimes it works, sometimes it does not. Is this a good solution, and is there a better way of doing this? Also, would running my laptop with a -B value of 254 have any negative effects? (I read somewhere about a lower level protecting the hard drive from bumps.)
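
    One hedged way to make the fix survive AC/battery transitions is a pm-utils power hook, which is run on every power-source change (the path and behaviour assume pm-utils, as shipped on 11.10):

      #!/bin/sh
      # /etc/pm/power.d/95hdparm-apm  (make it executable)
      # pm-utils invokes this with "true" on battery and "false" on AC
      hdparm -B 254 /dev/sda

    As for -B 254 itself: it disables the aggressive head parking, trading some bump protection and battery life for silence and fewer load cycles.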

    Read the article

  • 2 drives, slow software RAID1 (md)

    - by bart613
    Hello, I've got a server from hetzner.de (EQ4) with 2* SAMSUNG HD753LJ drives (750G 32MB cache). OS is CentOS 5 (x86_64). Drives are combined together into two RAID1 partitions: /dev/md0 which is 512MB big and has only /boot partitions /dev/md1 which is over 700GB big and is one big LVM which hosts other partitions Now, I've been running some benchmarks and it seems like even though exactly the same drives, speed differs a bit on each of them. # hdparm -tT /dev/sda /dev/sda: Timing cached reads: 25612 MB in 1.99 seconds = 12860.70 MB/sec Timing buffered disk reads: 352 MB in 3.01 seconds = 116.80 MB/sec # hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 25524 MB in 1.99 seconds = 12815.99 MB/sec Timing buffered disk reads: 342 MB in 3.01 seconds = 113.64 MB/sec Also, when I run eg. pgbench which is stressing IO quite heavily, I can see following from iostat output: Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 0.00 231.40 0.00 298.00 0.00 9683.20 32.49 0.17 0.58 0.34 10.24 sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda2 0.00 231.40 0.00 298.00 0.00 9683.20 32.49 0.17 0.58 0.34 10.24 sdb 0.00 231.40 0.00 301.80 0.00 9740.80 32.28 14.19 51.17 3.10 93.68 sdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb2 0.00 231.40 0.00 301.80 0.00 9740.80 32.28 14.19 51.17 3.10 93.68 md1 0.00 0.00 0.00 529.60 0.00 9692.80 18.30 0.00 0.00 0.00 0.00 md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-0 0.00 0.00 0.00 0.60 0.00 4.80 8.00 0.00 0.00 0.00 0.00 dm-1 0.00 0.00 0.00 529.00 0.00 9688.00 18.31 24.51 49.91 1.81 95.92 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 0.00 152.40 0.00 330.60 0.00 5176.00 15.66 0.19 0.57 0.19 6.24 sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda2 0.00 152.40 0.00 330.60 0.00 5176.00 15.66 0.19 0.57 0.19 6.24 sdb 0.00 152.40 0.00 326.20 0.00 5118.40 15.69 19.96 55.36 3.01 98.16 sdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb2 0.00 152.40 0.00 326.20 0.00 5118.40 15.69 19.96 55.36 3.01 98.16 md1 0.00 0.00 0.00 482.80 0.00 5166.40 10.70 0.00 0.00 0.00 0.00 md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-1 0.00 0.00 0.00 482.80 0.00 5166.40 10.70 30.19 56.92 2.05 99.04 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 0.00 181.64 0.00 324.55 0.00 5445.11 16.78 0.15 0.45 0.21 6.87 sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda2 0.00 181.64 0.00 324.55 0.00 5445.11 16.78 0.15 0.45 0.21 6.87 sdb 0.00 181.84 0.00 328.54 0.00 5493.01 16.72 18.34 61.57 3.01 99.00 sdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb2 0.00 181.84 0.00 328.54 0.00 5493.01 16.72 18.34 61.57 3.01 99.00 md1 0.00 0.00 0.00 506.39 0.00 5477.05 10.82 0.00 0.00 0.00 0.00 md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-1 0.00 0.00 0.00 506.39 0.00 5477.05 10.82 28.77 62.15 1.96 99.00 And this is completely getting me confused. How come two exactly the same specced drives have such a difference in write speed (see util%)? I haven't really paid attention to those speeds before, so perhaps that something normal -- if someone could confirm I would be really grateful. Otherwise, if someone have seen such behavior again or knows what is causing such behavior I would really appreciate answer. 
    I'll also add that the "smartctl -a" and "hdparm -I" outputs are exactly the same for both drives and do not indicate any hardware problems. The slower drive has already been replaced twice (with new ones). I also asked for the drives to swap places, and then sda was slower and sdb quicker (so it was the same physical drive that was slow). The SATA cables have been changed twice already.
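
    Since the drives themselves report identically, one hedged next step is to compare what the kernel is doing with each, e.g. queue depth and I/O scheduler:

      cat /sys/block/sda/device/queue_depth /sys/block/sdb/device/queue_depth
      cat /sys/block/sda/queue/scheduler /sys/block/sdb/queue/scheduler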

    Read the article

  • High CPU load for 1:30 minutes when mounting ext4-raid partition

    - by sirion
    I have a raid 5 (software) with 5x2TB drives. I encrypted the raid with cryptsetup and put an ext4-partition on top. In the beginning opening and mounting the raid took less than 10 seconds, now (for a few weeks) mounting alone takes 1:30 minutes and the cpu stays around 93% the whole time: The output of "time sudo mount /dev/mapper/8000 /media/8000" is: real 1m31.952s user 0m0.008s sys 1m25.229s At the same time only one line is added to /var/log/syslog: kernel: [ 2240.921381] EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null) My Ubuntu-version is "12.04.1 LTS" and no updates are pending. I checked the partition with fsck, but it says that all is ok. The "cryptsetup luksOpen" command only takes a few seconds. I also tried changing the raid-bitmap (as it was suggested in some forum) but it did not change the behaviour. sudo mdadm --grow /dev/md0 -b internal and sudo mdadm --grow /dev/md0 -b none I had the idea that it might be the hardware being slow, but a read test with "sudo hdparm -t /dev/md0" spit out values between 62 and 159 MB/sec: Timing buffered disk reads: 382 MB in 3.00 seconds = 127.14 MB/sec Timing buffered disk reads: 482 MB in 3.02 seconds = 159.62 MB/sec Timing buffered disk reads: 190 MB in 3.03 seconds = 62.65 MB/sec Timing buffered disk reads: 474 MB in 3.02 seconds = 157.12 MB/sec Although I think it is strange that the read rate jumps by more than 100% - could that mean something? The speed test when reading from the mapped (decrypted) device shows similar behavior, although it is of course much slower. "sudo hdparm -t /dev/mapper/8000": Timing buffered disk reads: 56 MB in 3.02 seconds = 18.54 MB/sec Timing buffered disk reads: 122 MB in 3.09 seconds = 39.43 MB/sec Timing buffered disk reads: 134 MB in 3.02 seconds = 44.35 MB/sec The output of a verbose mount "mount -vvv /dev/mapper/8000 /media/8000" does not help much: mount: fstab path: "/etc/fstab" mount: mtab path: "/etc/mtab" mount: lock path: "/etc/mtab~" mount: temp path: "/etc/mtab.tmp" mount: UID: 0 mount: eUID: 0 mount: spec: "/dev/mapper/8000" mount: node: "/media/8000" mount: types: "(null)" mount: opts: "(null)" mount: you didn't specify a filesystem type for /dev/mapper/8000 I will try type ext4 mount: mount(2) syscall: source: "/dev/mapper/8000", target: "/media/8000", filesystemtype: "ext4", mountflags: -1058209792, data: (null) Any idea where I could find additional information on why mounting takes so long, or what additional tests I could run?
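
    With sys time dominating, the kernel itself is busy during the mount; a hedged way to see where the time goes:

      # terminal 1: the slow mount
      sudo mount /dev/mapper/8000 /media/8000
      # terminal 2, meanwhile: sample kernel hot spots
      sudo perf top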

    Read the article
