Search Results

Search found 242 results on 10 pages for 'mdadm'.

  • RAID md device is not removed from memory; how do I overcome this problem?

    - by santhosha
     I created a RAID 10 array, then removed two member drives from md11 one by one. After that I tried to edit the contents that were mounted (the mount stopped responding). When I then tried to remove the remaining members, it reported "device or resource busy" (the array is not removed from memory). I tried to kill the processes holding it, but that did not work either. For four days the resync has stayed at 8.0% and does not change. cat /proc/mdstat Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [linear] [raid10] md11 : active raid10 sde1[3] sdj1[4] 286743936 blocks 64K chunks 2 near-copies [4/1] [___U] [1:2:3:0] [=...................] resync = 8.0% (23210368/286743936) finish=289392.6min speed=15K/sec mdadm -D /dev/md11 /dev/md11: Version : 00.90.03 Creation Time : Sun Jan 16 16:20:01 2011 Raid Level : raid10 Array Size : 286743936 (273.46 GiB 293.63 GB) Device Size : 143371968 (136.73 GiB 146.81 GB) Raid Devices : 4 Total Devices : 2 Preferred Minor : 11 Persistence : Superblock is persistent Update Time : Sun Jan 16 16:56:07 2011 State : active, degraded, resyncing Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Layout : near=2, far=1 Chunk Size : 64K Rebuild Status : 8% complete UUID : 5e124ea4:79a01181:dc4110d3:a48576ea Events : 0.23 Number Major Minor RaidDevice State 0 0 0 0 removed 1 0 0 1 removed 4 8 145 2 faulty spare rebuilding /dev/sdj1 3 8 65 3 active sync /dev/sde1 umount /dev/md11 umount: /dev/md11: not mounted mdadm -S /dev/md11 mdadm: fail to stop array /dev/md11: Device or resource busy lsof /dev/md11 COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME mount 2128 root 3r BLK 9,11 4058 /dev/md11 mount 5018 root 3r BLK 9,11 4058 /dev/md11 mdadm 27605 root 3r BLK 9,11 4058 /dev/md11 mount 30562 root 3r BLK 9,11 4058 /dev/md11 badblocks 30591 root 3r BLK 9,11 4058 /dev/md11 kill -9 2128 kill -9 5018 kill -9 27605 kill -9 30562 kill -3 30591 mdadm -S /dev/md11 mdadm: fail to stop array /dev/md11: Device or resource busy lsof /dev/md11 COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME mount 2128 root 3r BLK 9,11 4058 /dev/md11 mount 5018 root 3r BLK 9,11 4058 /dev/md11 mdadm 27605 root 3r BLK 9,11 4058 /dev/md11 mount 30562 root 3r BLK 9,11 4058 /dev/md11 badblocks 30591 root 3r BLK 9,11 4058 /dev/md11 cat /proc/mdstat Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [linear] [raid10] md11 : active raid10 sde1[3] sdj1[4] 286743936 blocks 64K chunks 2 near-copies [4/1] [___U] [1:2:3:0] [=...................] resync = 8.0% (23210368/286743936) finish=289392.6min speed=15K/sec
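
     A hedged sketch of the usual escalation for a busy md device, assuming only the device names shown above (the failed kill attempts suggest the processes are stuck in uninterruptible I/O, in which case only clearing the underlying I/O, or a reboot, will release them):

       # list every process holding the block device; fuser sees kernel users lsof can miss
       fuser -vm /dev/md11
       # pause the stuck resync so the array stops issuing its own I/O
       echo idle > /sys/block/md11/md/sync_action
       # then retry the stop and confirm the array is gone from the kernel's view
       mdadm --stop /dev/md11
       cat /proc/mdstat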

    Read the article

  • How to stop RAID5 array while it is shown to be busy?

    - by RCola
     I have a RAID5 array and need to stop it, but when I try to stop it I get an error. # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1] 2120320 blocks level 5, 32k chunk, algorithm 2 [3/2] [_UU] unused devices: <none> # mdadm --stop mdadm: metadata format 00.90 unknown, ignored. mdadm: metadata format 00.90 unknown, ignored. mdadm: No devices given. # mdadm --stop /dev/md0 mdadm: metadata format 00.90 unknown, ignored. mdadm: metadata format 00.90 unknown, ignored. mdadm: fail to stop array /dev/md0: Device or resource busy and # lsof | grep md0 md0_raid5 965 root cwd DIR 8,1 4096 2 / md0_raid5 965 root rtd DIR 8,1 4096 2 / md0_raid5 965 root txt unknown /proc/965/exe # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1] 2120320 blocks level 5, 32k chunk, algorithm 2 [3/2] [_UU] # grep md0 /proc/mdstat md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1] # grep md0 /proc/partitions 9 0 2120320 md0 While booting, md1 is mounted OK but md0 failed for some unknown reason. # dmesg | grep md[0-9] [ 4.399658] raid5: allocated 3179kB for md1 [ 4.400432] raid5: raid level 5 set md1 active with 3 out of 3 devices, algorithm 2 [ 4.400678] md1: detected capacity change from 0 to 2121793536 [ 4.403135] md1: unknown partition table [ 38.937932] Filesystem "md1": Disabling barriers, trial barrier write failed [ 38.941969] XFS mounting filesystem md1 [ 41.058808] Ending clean XFS mount for filesystem: md1 [ 46.325684] raid5: allocated 3179kB for md0 [ 46.327103] raid5: raid level 5 set md0 active with 2 out of 3 devices, algorithm 2 [ 46.330620] md0: detected capacity change from 0 to 2171207680 [ 46.335598] md0: unknown partition table [ 46.410195] md: recovery of RAID array md0 [ 117.970104] md: md0: recovery done. # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sde1[0] sdf1[2] sdd1[1] 2120320 blocks level 5, 32k chunk, algorithm 2 [3/3] [UUU] md1 : active raid5 sdc2[0] sdf2[2] sde2[3](S) sdd2[1] 2072064 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
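
     A hedged sketch of what one might check next, assuming nothing beyond the output above (the "metadata format 00.90 unknown" warning typically comes from a metadata=00.90 token in mdadm.conf that newer mdadm wants written as 0.90, and "Device or resource busy" usually means another layer still holds md0; the conf path below is the Debian/Ubuntu default):

       # find out what still holds md0 (filesystems, LVM/device-mapper, other arrays)
       ls /sys/block/md0/holders/
       grep md0 /proc/mounts
       # silence the metadata warning, then retry the stop
       sed -i 's/metadata=00.90/metadata=0.90/' /etc/mdadm/mdadm.conf
       mdadm --stop /dev/md0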

    Read the article

  • Can I use dmraid instead of md (mdadm) to make software RAID-1 and RAID-1+0 volumes?

    - by Don MacAskill
    On a related question about SSDs and TRIM (see: Possible to get SSD TRIM (discard) working on ext4 + LVM + software RAID in Linux? ), it turns out that dmraid may now (or shortly) support TRIM on RAID-1. Typically, we've used md (via mdadm) to create our RAID-1 volumes, then used LVM to create volume groups, then formatted with the file system of our choice (ext4 lately). We've been doing this for years, and Google & ServerFault searches seem to confirm this is the most common way of doing software RAID with volume management. Google searches seem to suggest that dmraid is use for so-called 'fakeRAID' configurations where there's some level of hardware 'help' in the form of RAID BIOS in the controller, which we don't have (and don't want to use - we'd like a fully software solution). Since we'd like to use TRIM on our SSDs, and since md doesn't seem to (yet?) support TRIM, I'm wondering if it's possible to use dmraid instead of md to create RAID-1 (and RAID-1+0) volumes in software, with no hardware support (ie, just plugged into a dumb SATA/SAS bus)?

    Read the article

  • Does the *physical* order/location of drives in an mdadm-managed RAID-10 array matter?

    - by locuse
     I've set up a 4-drive RAID-10 array using mdadm-managed software RAID on an x86_64 box. It's up & running and works as expected: cat /proc/mdstat md127 : active raid10 sdc2[2] sdd2[3] sda2[0] sdb2[1] 1951397888 blocks super 1.2 512K chunks 2 far-copies [4/4] [UUUU] bitmap: 9/466 pages [36KB], 2048KB chunk At the moment the four SATA drives are physically plugged into the motherboard's first four SATA ports. I'd like to gather the necessary/complete info for catastrophic recovery. Reading, starting here, http://neil.brown.name/blog, and the mailing list, I'm not yet completely confident I have it right. I understand 'drive order matters'. Is it the logical and/or the physical order that matters? If I unplugged the four drives in this array and plugged them each back into different ports on the motherboard or a PCI card, as long as I've changed nothing in the software config, will the array correctly auto-re-assemble?
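
     A hedged sketch of the recovery notes one might capture here, on the assumption (consistent with md assembling members by superblock UUID and per-device role rather than by SATA port) that the physical port order does not matter as long as nothing assembles by literal /dev/sdX names:

       # record the array by UUID so assembly never depends on device names or ports
       mdadm --detail --scan >> /etc/mdadm/mdadm.conf
       # keep each member's superblock details (UUID, role, layout) somewhere off the box
       mdadm --examine /dev/sd[abcd]2 > /root/md127-examine.txt
       mdadm --detail /dev/md127 > /root/md127-detail.txt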

    Read the article

  • LVM disappeared after disk replacement on RAID10

    - by user142295
    here my problem: I am running ubuntu 12.04 on a raid10 (4 disks), on top of which I installed an lvm with two volume groups (one for /, one for /home). The layout of the disks are as follows: Disk /dev/sda: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0003f3b6 Device Boot Start End Blocks Id System /dev/sda1 * 63 481949 240943+ 83 Linux /dev/sda2 481950 2910640634 1455079342+ fd Linux raid autodetect /dev/sda3 2910640635 2930272064 9815715 82 Linux swap / Solaris Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00069785 Device Boot Start End Blocks Id System /dev/sdb1 63 2910158684 1455079311 fd Linux raid autodetect /dev/sdb2 2910158685 2930272064 10056690 82 Linux swap / Solaris Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdc1 63 2910158684 1455079311 fd Linux raid autodetect /dev/sdc2 2910158685 2930272064 10056690 82 Linux swap / Solaris Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000f14de Device Boot Start End Blocks Id System /dev/sdd1 63 2910158684 1455079311 fd Linux raid autodetect /dev/sdd2 2910158685 2930272064 10056690 82 Linux swap / Solaris The first disk (/dev/sda) contains the /boot partition on /dev/sda1. I use grub2 to boot the system off this partition. On top of this raid10 I installed two volume groups, one for /, one for /home. This system worked well, I even exchanged two disks during the last two years. It always worked. But not this time. For the first time, /dev/sda broke. I do not know if this is an issue – I know I would have struggled anyways to overcome the problem with /boot installed on that disk and grub2 installed on the mbr of /dev/sda. 
Anyway, I did what I always did: start Knoppix, fire up the RAID: sudo mdadm --examine --scan which returns ARRAY /dev/md127 UUID=0dbf4558:1a943464:132783e8:19cdff95 start it up: sudo mdadm --assemble /dev/md127 fail the failing disk (SMART event): sudo mdadm /dev/md127 --fail /dev/sda2 remove the failing disk: sudo mdadm /dev/md127 --remove /dev/sda2 stop the RAID: sudo mdadm -S /dev/md127 take out the disk, replace it with a new one, create the same partitions as on the failing one, add it to the RAID: sudo mdadm --assemble /dev/md127 sudo mdadm /dev/md127 --add /dev/sda2 wait 4 hours. All looks fine: cat /proc/mdstat returns: Personalities : [raid10] md127 : active raid10 sda2[0] sdd1[3] sdc1[2] sdb1[1] 2910158464 blocks 64K chunks 2 near-copies [4/4] [UUUU] unused devices: <none> and sudo mdadm --detail /dev/md127 returns /dev/md127: Version : 0.90 Creation Time : Wed Jun 10 13:08:46 2009 Raid Level : raid10 Array Size : 2910158464 (2775.34 GiB 2980.00 GB) Used Dev Size : 1455079232 (1387.67 GiB 1490.00 GB) Raid Devices : 4 Total Devices : 4 Preferred Minor : 127 Persistence : Superblock is persistent Update Time : Thu Mar 21 16:27:40 2013 State : clean Active Devices : 4 Working Devices : 4 Failed Devices : 0 Spare Devices : 0 Layout : near=2 Chunk Size : 64K UUID : 0dbf4558:1a943464:132783e8:19cdff95 (local to host Microknoppix) Events : 0.4824680 Number Major Minor RaidDevice State 0 8 2 0 active sync /dev/sda2 1 8 17 1 active sync /dev/sdb1 2 8 33 2 active sync /dev/sdc1 3 8 49 3 active sync /dev/sdd1 However, there is no trace of the volume groups. Rebooting into Knoppix does not help. Restarting the old system (I actually replugged and re-added the failing disk for that – the system begins to start, but then fails to see the / partition – no wonder if the volume group is gone) does not help. sudo vgscan, sudo vgdisplay, sudo lvs, sudo lvdisplay, sudo vgscan --mknodes all returned No volume groups found. I am completely at a loss. Can anyone tell me if and how I can recover my data? Thanks in advance!
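
     A hedged sketch of non-destructive checks one might run from the Knoppix shell, assuming only the output above (note that /proc/mdstat now shows sda2 alongside sdb1/sdc1/sdd1, so it is worth confirming the new sda2 really starts where the old one did before anything else):

       # is any LVM physical-volume label still visible on the array?
       pvscan
       pvck /dev/md127
       # look for LVM metadata text near the start of the array without writing anything
       dd if=/dev/md127 bs=1M count=1 2>/dev/null | strings | grep -i -A4 'volume group'
       # if a copy of /etc/lvm/backup/<vgname> survives on the root filesystem,
       # vgcfgrestore can rewrite the VG metadata without touching the data itself:
       # vgcfgrestore -f /path/to/backup/<vgname> <vgname>    # hypothetical path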

    Read the article

  • Creating properly aligned partitions on a replacement disk

    - by Marius Gedminas
    I've a typical small office server with two hard disks configured for RAID-1 (mirroring). Each disk has several partitions: one for swap, the others paired in several /dev/mdX arrays. Every couple of years one of the disks dies and is replaced. The replacement typically goes something like this: # copy partition table from the remaining good disk to the empty replacement disk # (instead of /dev/good_disk and /dev/new_disk I use /dev/sda and /dev/sdb, as appropriate) sfdisk -d /dev/good_disk | sfdisk /dev/new_disk # install boot loader grub-install /dev/new_disk # create swap partition reusing the same UUID, so I don't need to edit /etc/fstab mkswap /dev/new_disk1 -U xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx # hot-add the new partitions to my RAID arrays mdadm /dev/md0 -a /dev/new_disk2 mdadm /dev/md1 -a /dev/new_disk5 mdadm /dev/md2 -a /dev/new_disk6 mdadm /dev/md3 -a /dev/new_disk7 mdadm /dev/md4 -a /dev/new_disk8 The disks were originally partitioned with cfdisk back in 2009, and so the partition table is aligned traditionally to cylinder boundaries (255 heads * 63 sectors). This is not the optimum configuration for new 4K-sector drives. My question is: how can I create a set of partitions for the new disk and ensure they're properly aligned, and have correct sizes for my RAID arrays (rounding up is acceptable, I suppose, but rounding down is definitely not)?
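
     A hedged sketch of one way to get 1 MiB-aligned partitions at least as large as the old ones, assuming a reasonably recent parted and the question's /dev/good_disk and /dev/new_disk placeholders (the start/end values below are illustrative, not taken from the original layout):

       # dump the old table in sectors so the exact sizes are on record
       sfdisk -d /dev/good_disk > old-table.txt
       # build the new table with parted, letting it align partition starts to 1 MiB
       parted -a optimal /dev/new_disk mklabel msdos
       parted -a optimal /dev/new_disk mkpart primary linux-swap 1MiB 4GiB
       parted -a optimal /dev/new_disk mkpart primary 4GiB 100%
       # md accepts a member that is the same size or larger, so rounding up is fine
       mdadm /dev/md0 -a /dev/new_disk2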

    Read the article

  • How to access a fake raid?

    - by maaartinus
     I have a fake RAID which I wanted to access using mdadm /dev/md0 -A -c 128 -l stripe --verbose /dev/sda /dev/sdc which should be right, as far as I understand the man page. But I get the message mdadm: option -l not valid in assemble mode Leaving the offending option out leads to mdadm: failed to create /dev/md0 and (despite --verbose) no more information. I'm assuming that -A requires some mdadm-specific header which is obviously missing. I probably need to use "build" instead of assemble, but from the description I'm really unsure whether this is a non-destructive operation. Is it? What exactly should I do? UPDATE: I see I haven't made it clear that the array already exists as fake RAID (I can't give the details about my mainboard right now). It looks like it does nothing except interleave blocks, so I hoped it could easily be done with mdadm, too. Maybe I'm completely wrong, but all the info I've found was concerned with booting from fake RAID, which I don't really need. I'd be happy with read access for now.
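
     A hedged sketch of the two usual approaches, assuming the stripe set really has no md superblock (which is why --assemble fails); the read-only steps matter because building with the wrong chunk size or disk order and then writing would damage the data:

       # option 1: let dmraid read the BIOS fake-RAID metadata and expose /dev/mapper nodes
       dmraid -ay
       ls /dev/mapper/
       # option 2: describe the stripe by hand; --build writes no metadata to the disks
       mdadm --build /dev/md0 --level=0 --raid-devices=2 --chunk=128 /dev/sda /dev/sdc
       blockdev --setro /dev/md0      # keep it read-only until the layout is confirmed
       mount -o ro /dev/md0 /mnt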

    Read the article

  • RAID1: can't replace faulty spare (marked again as 'faulty spare' within seconds)

    - by user212475
    I got a problem that I cannot solve: Our fileserver runs XUbuntu and 3 RAID1s. One has a problem since monday: it consists of sdb and sdc. sdb was marked as faulty by mdadm for unknown reasons. I used --remove to remove it from the RAID and then to add it by --add. All was fine, re-syncing started but never got above 0% and after a few seconds, sdb was again marked as 'faulty spare' (and therefore the RAID degraded, but clean). So I saved the first 512 byte of the old sdb to a file, bought a new HDD of same size (4TB), shut down the computer and replaced sdb physically, switched the computer back on and wrote the 512 byte back to the new drive to have the same partition info as the old drive (both are the same type, from same company). But the new drive shows the same behaviour as the old: I can add, re-syncing starts and after a few seconds its marked as 'faulty spare'. Here exactly what i did: mdadm --remove /dev/md/1 /dev/sdb maadm --detail /dev/md/1 gives me: /dev/md/1: Version : 1.2 Creation Time : Sat Jun 8 22:32:05 2013 Raid Level : raid1 Array Size : 3906887360 (3725.90 GiB 4000.65 GB) Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB) Raid Devices : 2 Total Devices : 1 Persistence : Superblock is persistent Update Time : Thu Nov 7 06:56:13 2013 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 0 Spare Devices : 0 Name : File-Server:1 (local to host File-Server) UUID : 44ed561f:b733e946:e69820f4:aba9b223 Events : 2424 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 32 1 active sync /dev/sdc mdadm --add /dev/md/1 /dev/sdb mdadm --detail /dev/md/1 gives me: Version : 1.2 Creation Time : Sat Jun 8 22:32:05 2013 Raid Level : raid1 Array Size : 3906887360 (3725.90 GiB 4000.65 GB) Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Thu Nov 7 06:57:49 2013 State : clean, degraded, recovering Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Rebuild Status : 0% complete Name : File-Server:1 (local to host File-Server) UUID : 44ed561f:b733e946:e69820f4:aba9b223 Events : 2431 Number Major Minor RaidDevice State 2 8 16 0 faulty spare rebuilding /dev/sdb 1 8 32 1 active sync /dev/sdc and after a few seconds: /dev/md/1: Version : 1.2 Creation Time : Sat Jun 8 22:32:05 2013 Raid Level : raid1 Array Size : 3906887360 (3725.90 GiB 4000.65 GB) Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Thu Nov 7 06:57:50 2013 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Name : File-Server:1 (local to host File-Server) UUID : 44ed561f:b733e946:e69820f4:aba9b223 Events : 2436 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 32 1 active sync /dev/sdc 2 8 16 - faulty spare /dev/sdb same behaviour if I zero the superblock (mdadm --zero-superblock /dev/sdb) before adding sdb. I do all commands as root and the system holds 3 more 4TB drives, ie the mainboard can handle them. The old harddrive was checked for errors using badblocks, but all is fine. Does anybody have any idea, what the problem is?
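
     A hedged sketch of how one might catch the real failure, assuming only the output above (a rebuild that dies within seconds is often caused by read errors on the surviving member, here sdc, rather than by the disk that was just replaced):

       # clear the kernel ring buffer, retry the add, and read the first error it logs
       dmesg -c > /dev/null
       mdadm --zero-superblock /dev/sdb
       mdadm /dev/md/1 --add /dev/sdb
       dmesg | tail -n 50
       # check SMART on both members, not just the replacement
       smartctl -a /dev/sdb | grep -iE 'reallocated|pending|uncorrect'
       smartctl -a /dev/sdc | grep -iE 'reallocated|pending|uncorrect'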

    Read the article

  • Cannot load from raid with grub

    - by Andrew Answer
     I have a RAID1 array on my Ubuntu 12.04 LTS system and my /dev/sda HDD was replaced several days ago. I used these commands to do the replacement: # go to superuser sudo bash # see RAID state mdadm -Q -D /dev/md0 # State should be "clean, degraded" # remove broken disk from RAID mdadm /dev/md0 --fail /dev/sda1 mdadm /dev/md0 --remove /dev/sda1 # see partitions fdisk -l # shutdown computer shutdown now # physically replace old disk by new # start system again # see partitions fdisk -l # copy partitions from sdb to sda sfdisk -d /dev/sdb | sfdisk /dev/sda # recreate id for sda sfdisk --change-id /dev/sda 1 fd # add sda1 to RAID mdadm /dev/md0 --add /dev/sda1 # see RAID state mdadm -Q -D /dev/md0 # State should be "clean, degraded, recovering" # to see status you can use cat /proc/mdstat After the rebuild completed, "fdisk -l" says that /dev/md0 does not have a valid partition table. So 1) "update-grub" finds only the /dev/sda and /dev/sdb Linux installs, not /dev/md0, and 2) "dpkg-reconfigure grub-pc" says "GRUB failed to install the following devices /dev/md0". I cannot boot my system except from /dev/sdb1 and /dev/sda1, and then only in DEGRADED mode... This is my partial fdisk -l output: Disk /dev/sdb: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000667ca Device Boot Start End Blocks Id System /dev/sdb1 * 63 940910984 470455461 fd Linux raid autodetect /dev/sdb2 940910985 976768064 17928540 5 Extended /dev/sdb5 940911048 976768064 17928508+ 82 Linux swap / Solaris Disk /dev/md0: 481.7 GB, 481746288640 bytes 2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/md0 doesn't contain a valid partition table Can anybody resolve this issue? This is giving me a big headache.
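
     A hedged sketch of the usual arrangement for GRUB on an md RAID1 root, assuming the layout in the question ("doesn't contain a valid partition table" is expected when the filesystem sits directly on md0, so that message on its own is harmless; the boot loader belongs in the MBR of each member disk, not on /dev/md0):

       # put GRUB on both members so the machine boots from whichever disk survives
       grub-install /dev/sda
       grub-install /dev/sdb
       # regenerate grub.cfg and the initramfs so the md root is assembled at boot
       update-grub
       update-initramfs -u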

    Read the article

  • Cannot install grub to RAID1 (md0)

    - by Andrew Answer
     I have a RAID1 array on my Ubuntu 12.04 LTS system and my /dev/sda HDD was replaced several days ago. I used these commands to do the replacement: # go to superuser sudo bash # see RAID state mdadm -Q -D /dev/md0 # State should be "clean, degraded" # remove broken disk from RAID mdadm /dev/md0 --fail /dev/sda1 mdadm /dev/md0 --remove /dev/sda1 # see partitions fdisk -l # shutdown computer shutdown now # physically replace old disk by new # start system again # see partitions fdisk -l # copy partitions from sdb to sda sfdisk -d /dev/sdb | sfdisk /dev/sda # recreate id for sda sfdisk --change-id /dev/sda 1 fd # add sda1 to RAID mdadm /dev/md0 --add /dev/sda1 # see RAID state mdadm -Q -D /dev/md0 # State should be "clean, degraded, recovering" # to see status you can use cat /proc/mdstat This is my mdadm output after the sync: /dev/md0: Version : 0.90 Creation Time : Wed Feb 17 16:18:25 2010 Raid Level : raid1 Array Size : 470455360 (448.66 GiB 481.75 GB) Used Dev Size : 470455360 (448.66 GiB 481.75 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Thu Nov 1 15:19:31 2012 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 92e6ff4e:ed3ab4bf:fee5eb6c:d9b9cb11 Events : 0.11049560 Number Major Minor RaidDevice State 0 8 1 0 active sync /dev/sda1 1 8 17 1 active sync /dev/sdb1 After the rebuild completed, "fdisk -l" says that /dev/md0 does not have a valid partition table. This is my fdisk -l output: Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00057d19 Device Boot Start End Blocks Id System /dev/sda1 * 63 940910984 470455461 fd Linux raid autodetect /dev/sda2 940910985 976768064 17928540 5 Extended /dev/sda5 940911048 976768064 17928508+ 82 Linux swap / Solaris Disk /dev/sdb: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000667ca Device Boot Start End Blocks Id System /dev/sdb1 * 63 940910984 470455461 fd Linux raid autodetect /dev/sdb2 940910985 976768064 17928540 5 Extended /dev/sdb5 940911048 976768064 17928508+ 82 Linux swap / Solaris Disk /dev/md0: 481.7 GB, 481746288640 bytes 2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/md0 doesn't contain a valid partition table This is my grub install output: root@answe:~# grub-install /dev/sda /usr/sbin/grub-setup: warn: Attempting to install GRUB to a disk with multiple partition labels or both partition label and filesystem. This is not supported yet.. /usr/sbin/grub-setup: error: embedding is not possible, but this is required for cross-disk install. root@answe:~# grub-install /dev/sdb Installation finished. No error reported. So 1) "update-grub" finds only the /dev/sda and /dev/sdb Linux installs, not /dev/md0, and 2) "dpkg-reconfigure grub-pc" says "GRUB failed to install the following devices /dev/md0". I cannot boot my system except from /dev/sdb1 and /dev/sda1, and then only in DEGRADED mode... Can anybody resolve this issue? This is giving me a big headache.
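
     A hedged sketch aimed at the grub-setup warning above, assuming only the output shown (the "multiple partition labels or both partition label and filesystem" message usually means a stray filesystem or RAID signature is still sitting on the bare /dev/sda left over from the replacement; wipefs availability on 12.04 is assumed):

       # see which signature grub-setup is tripping over on the raw disk
       blkid -p /dev/sda
       wipefs /dev/sda                 # with no options it only lists signatures
       # remove just that signature; the offset is hypothetical, take it from the listing
       # wipefs -o 0x<offset> /dev/sda
       # then retry the failed install and regenerate the config
       grub-install /dev/sda
       update-grub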

    Read the article

  • Reshape linux md raid5 that is already being reshaped?

    - by smammy
    I just converted my RAID1 array to a RAID5 array and added a third disk to it. I'd like to add a fourth disk without waiting fourteen hours for the first reshape to complete. I just did this: mdadm /dev/md0 --add /dev/sdf1 mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0_n3.bak The entry in /proc/mdstat looks like this: md0 : active raid5 sdf1[2] sda1[0] sdb1[1] 976759936 blocks super 0.91 level 5, 64k chunk, algorithm 2 [3/3] [UUU] [>....................] reshape = 1.8% (18162944/976759936) finish=834.3min speed=19132K/sec Now I'd like to do this: mdadm /dev/md0 --add /dev/sdd1 mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md8_n4.bak Is this safe, or do I have to wait for the first reshape operation to complete? P.S.: I know I should have added both disks first, and then reshaped from 2 to 4 devices, but it's a little late for that.

    Read the article

  • Linux Software RAID recovery

    - by Zoredache
    I am seeing a discrepancy between the output of mdadm --detail and mdadm --examine, and I don't understand why. This output mdadm --detail /dev/md2 /dev/md2: Version : 0.90 Creation Time : Wed Mar 14 18:20:52 2012 Raid Level : raid10 Array Size : 3662760640 (3493.08 GiB 3750.67 GB) Used Dev Size : 1465104256 (1397.23 GiB 1500.27 GB) Raid Devices : 5 Total Devices : 5 Preferred Minor : 2 Persistence : Superblock is persistent Seems to contradict this. (the same for every disk in the array) mdadm --examine /dev/sdc2 /dev/sdc2: Magic : a92b4efc Version : 0.90.00 UUID : 1f54d708:60227dd6:163c2a05:89fa2e07 (local to host) Creation Time : Wed Mar 14 18:20:52 2012 Raid Level : raid10 Used Dev Size : 1465104320 (1397.23 GiB 1500.27 GB) Array Size : 2930208640 (2794.46 GiB 3000.53 GB) Raid Devices : 5 Total Devices : 5 Preferred Minor : 2 The array was created like this. mdadm -v --create /dev/md2 \ --level=raid10 --layout=o2 --raid-devices=5 \ --chunk=64 --metadata=0.90 \ /dev/sdg2 /dev/sdf2 /dev/sde2 /dev/sdd2 /dev/sdc2 Each of the 5 individual drives have partitions like this. Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00057754 Device Boot Start End Blocks Id System /dev/sdc1 2048 34815 16384 83 Linux /dev/sdc2 34816 2930243583 1465104384 fd Linux raid autodetect Backstory So the SATA controller failed in a box I provide some support for. The failure was a ugly and so individual drives fell out of the array a little at a time. While there are backups, we the are not really done as frequently as we really need. There is some data that I am trying to recover if I can. I got additional hardware and I was able to access the drives again. The drives appear to be fine, and I can get the array and filesystem active and mounted (using read-only mode). I am able to access some data on the filesystem and have been copying that off, but I am seeing lots of errors when I try to copy the most recent data. When I am trying to access that most recent data I am getting errors like below which makes me think that the array size discrepancy may be the problem. Mar 14 18:26:04 server kernel: [351588.196299] dm-7: rw=0, want=6619839616, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.196309] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.196313] dm-7: rw=0, want=6619839616, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.199260] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.199264] dm-7: rw=0, want=20647626304, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.202446] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.202450] dm-7: rw=0, want=19973212288, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.205516] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.205520] dm-7: rw=0, want=8009695096, limit=6442450944
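
     A hedged sketch of how one might pin down which size is real, assuming only the output above (--detail reports what the running kernel computed from the five members, --examine reports what was written into each 0.90 superblock, and the dm-7 "limit=" errors come from whatever device-mapper/LVM volume sits on top of md2):

       # the kernel's current idea of the array size, in 1K blocks and in 512-byte sectors
       cat /proc/mdstat
       blockdev --getsz /dev/md2
       # every member's superblock view; they should at least agree with one another
       mdadm --examine /dev/sd[cdefg]2 | grep -E 'Array Size|Used Dev Size|Raid Devices'
       # and the sizes of the volumes layered on md2
       dmsetup table
       lvs --units s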

    Read the article

  • WD MBWE II (White Strip Light) 2TB - unable to access data

    - by user210477
    I have a WD MBWE II (White Strip Light) 2TB - (WD20000H2NC-00) Was working fine until a few days ago. I guess there was a power failure and after that I am unable to access the 'Public' or the 'Download' folder anymore. I have been searching for answers everywhere but came up empty handed. Web GUI still works, SSH works. I hooked up both the drives on my PC and UFS Explorer sees the drive. But so far I am unable to retrieve any of my data. I do not remember what RAID setting I used when I first got the drive. I can see from GUI that it is set as "Stripe". The drive contains 10 years of family pictures which I really do not want to loose. Sadly and stupidly, I didn't even keep a backup of this drive. Can somebody please help or point me in the right direction. Thank you in advance for your help. Disk Utility on Ubuntu reports 1405 bad sectors on one drive. How can I retrieve my data? Please help. Logs below: ~ # mdadm --detail /dev/md[012345678] /dev/md0: Version : 0.90 Creation Time : Wed Jul 15 08:36:17 2009 Raid Level : raid1 Array Size : 1959872 (1914.26 MiB 2006.91 MB) Used Dev Size : 1959872 (1914.26 MiB 2006.91 MB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:53:29 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 04f7a661:98983b3b:26b29e4f:9b646adb Events : 0.266 Number Major Minor RaidDevice State 0 8 1 0 active sync /dev/sda1 1 8 17 1 active sync /dev/sdb1 /dev/md1: Version : 0.90 Creation Time : Wed Jul 15 08:36:18 2009 Raid Level : raid1 Array Size : 256896 (250.92 MiB 263.06 MB) Used Dev Size : 256896 (250.92 MiB 263.06 MB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Wed Oct 30 22:08:21 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : aaa7b859:c475312d:efc5a766:6526b867 Events : 0.10 Number Major Minor RaidDevice State 0 8 2 0 active sync /dev/sda2 1 8 18 1 active sync /dev/sdb2 /dev/md2: Version : 0.90 Creation Time : Sat Sep 25 10:01:26 2010 Raid Level : raid0 Array Size : 1947045760 (1856.85 GiB 1993.77 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:30:53 2013 State : active Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Chunk Size : 64K UUID : 01dae60a:6831077b:77f74530:8680c183 Events : 0.97 Number Major Minor RaidDevice State 0 8 4 0 active sync /dev/sda4 1 8 20 1 active sync /dev/sdb4 /dev/md3: Version : 0.90 Creation Time : Wed Jul 15 08:36:18 2009 Raid Level : raid1 Array Size : 987904 (964.91 MiB 1011.61 MB) Used Dev Size : 987904 (964.91 MiB 1011.61 MB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 3 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:26:33 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 3f4099f2:72e6171b:5ba962fd:48464a62 Events : 0.54 Number Major Minor RaidDevice State 0 8 3 0 active sync /dev/sda3 1 8 19 1 active sync /dev/sdb3 mdadm: md device /dev/md4 does not appear to be active. mdadm: md device /dev/md5 does not appear to be active. mdadm: md device /dev/md6 does not appear to be active. mdadm: md device /dev/md7 does not appear to be active. mdadm: md device /dev/md8 does not appear to be active. 
~ # cat /etc/mtab securityfs /sys/kernel/security securityfs rw 0 0 /dev/md2 /DataVolume xfs rw,usrquota 0 0 /dev/md4 /ExtendVolume xfs rw,usrquota 0 0 ~ # df -k Filesystem 1k-blocks Used Available Use% Mounted on /dev/md0 1929044 145092 1685960 8% / /dev/md3 972344 123452 799500 13% /var /dev/ram0 63412 20 63392 0% /mnt/ram ~ # mdadm -D /dev/md2 /dev/md2: Version : 0.90 Creation Time : Sat Sep 25 10:01:26 2010 Raid Level : raid0 Array Size : 1947045760 (1856.85 GiB 1993.77 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:30:53 2013 State : active Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Chunk Size : 64K UUID : 01dae60a:6831077b:77f74530:8680c183 Events : 0.97 Number Major Minor RaidDevice State 0 8 4 0 active sync /dev/sda4 1 8 20 1 active sync /dev/sdb4 ~ # mdadm -D /dev/md4 mdadm: md device /dev/md4 does not appear to be active. ~ # mount /dev/root on / type ext3 (rw,noatime,data=ordered) proc on /proc type proc (rw) sys on /sys type sysfs (rw) /dev/pts on /dev/pts type devpts (rw) securityfs on /sys/kernel/security type securityfs (rw) /dev/md3 on /var type ext3 (rw,noatime,data=ordered) /dev/ram0 on /mnt/ram type tmpfs (rw) ~ # cat /var/log/messages Oct 29 18:04:50 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 29 18:04:59 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 29 18:04:59 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 29 18:17:45 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 29 18:17:53 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 29 18:17:53 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 00:50:11 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 00:50:19 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 00:50:19 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 16:29:47 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 16:30:00 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 16:30:00 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 18:27:22 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 18:27:30 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 18:27:30 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 19:06:03 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 19:06:10 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 19:06:10 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 19:14:58 shmotashNAS daemon.warn wixEvent[3462]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. 
Oct 30 19:20:05 shmotashNAS daemon.alert wixEvent[3462]: Thermal Alarm - System temperature exceeded threshold.(66 degrees) Oct 30 19:58:29 shmotashNAS daemon.alert wixEvent[3462]: HDD SMART - HDD 1 SMART Health Status: Failed. Oct 30 22:05:39 shmotashNAS daemon.info init: Starting pid 13043, console /dev/null: '/usr/bin/killall' Oct 30 22:05:39 shmotashNAS syslog.info System log daemon exiting. Oct 30 22:08:09 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 30 22:08:09 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:08:19 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:08:25 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:08:37 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:08:44 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 22:08:46 [Version 01.09.00.96] ++++++++++++++ Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: ****** database does not exist. ret = -1, creating path Oct 30 22:08:49 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 30 22:08:50 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 30 22:08:52 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 30 22:08:52 shmotashNAS syslog.info miocrawler: === Walking directory done. Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:09:10 shmotashNAS daemon.info init: Starting pid 4605, console /dev/null: '/bin/touch' Oct 30 22:09:10 shmotashNAS daemon.info init: Starting pid 4607, console /dev/ttyS0: '/sbin/getty' Oct 30 22:09:10 shmotashNAS daemon.info wixEvent[3557]: System Startup - System startup. Oct 30 22:09:16 shmotashNAS daemon.warn wixEvent[3557]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 30 22:14:14 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:14:21 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:14:21 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:29:36 shmotashNAS daemon.warn wixEvent[3557]: System Reboot - System will reboot. Oct 30 22:29:40 shmotashNAS daemon.info init: Starting pid 5974, console /dev/null: '/usr/bin/killall' Oct 30 22:29:40 shmotashNAS syslog.info System log daemon exiting. 
Oct 30 22:47:56 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 30 22:47:56 shmotashNAS daemon.warn wixEvent[3461]: Network Link - NIC 1 link is down. Oct 30 22:48:02 shmotashNAS daemon.info wixEvent[3461]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:48:02 shmotashNAS daemon.info wixEvent[3461]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 22:48:09 [Version 01.09.00.96] ++++++++++++++ Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Oct 30 22:48:10 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 30 22:48:10 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === Walking directory done. Oct 30 22:48:27 shmotashNAS daemon.info init: Starting pid 4079, console /dev/null: '/bin/touch' Oct 30 22:48:27 shmotashNAS daemon.info init: Starting pid 4080, console /dev/ttyS0: '/sbin/getty' Oct 30 22:48:28 shmotashNAS daemon.info wixEvent[3461]: System Startup - System startup. Oct 30 22:49:01 shmotashNAS daemon.warn wixEvent[3461]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 30 23:51:11 shmotashNAS daemon.warn wixEvent[3461]: System Reboot - System will reboot. Oct 30 23:51:16 shmotashNAS daemon.info init: Starting pid 6498, console /dev/null: '/usr/bin/killall' Oct 30 23:51:16 shmotashNAS syslog.info System log daemon exiting. Oct 30 23:54:19 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 30 23:55:37 shmotashNAS daemon.info wixEvent[3476]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 23:55:37 shmotashNAS daemon.info wixEvent[3476]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 23:55:44 [Version 01.09.00.96] ++++++++++++++ Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Oct 30 23:55:45 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 30 23:55:45 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === Walking directory done. 
Oct 30 23:55:58 shmotashNAS daemon.info init: Starting pid 4115, console /dev/null: '/bin/touch' Oct 30 23:55:58 shmotashNAS daemon.info init: Starting pid 4116, console /dev/ttyS0: '/sbin/getty' Oct 30 23:55:58 shmotashNAS daemon.info wixEvent[3476]: System Startup - System startup. Oct 30 23:56:33 shmotashNAS daemon.warn wixEvent[3476]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 31 00:29:14 shmotashNAS auth.info sshd[5409]: Server listening on 0.0.0.0 port 22. Oct 31 00:31:25 shmotashNAS auth.info sshd[5486]: Accepted password for root from 192.168.1.100 port 50785 ssh2 Oct 31 00:33:44 shmotashNAS auth.info sshd[5565]: Accepted password for root from 192.168.1.100 port 50817 ssh2 Oct 31 00:36:39 shmotashNAS daemon.info init: Starting pid 5680, console /dev/null: '/usr/bin/killall' Oct 31 00:36:39 shmotashNAS syslog.info System log daemon exiting. Oct 31 00:40:44 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 31 00:40:51 shmotashNAS daemon.info wixEvent[3464]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 31 00:40:51 shmotashNAS daemon.info wixEvent[3464]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:31 - 00:41:00 [Version 01.09.00.96] ++++++++++++++ Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 31 00:41:01 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === Walking directory done. Oct 31 00:41:14 shmotashNAS daemon.info init: Starting pid 4101, console /dev/null: '/bin/touch' Oct 31 00:41:14 shmotashNAS daemon.info init: Starting pid 4102, console /dev/ttyS0: '/sbin/getty' Oct 31 00:41:15 shmotashNAS daemon.info wixEvent[3464]: System Startup - System startup. Oct 31 00:41:47 shmotashNAS daemon.warn wixEvent[3464]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 31 01:13:19 shmotashNAS daemon.info init: Starting pid 5385, console /dev/null: '/usr/bin/killall' Oct 31 01:13:19 shmotashNAS syslog.info System log daemon exiting. Nov 1 13:26:25 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Nov 1 13:26:32 shmotashNAS daemon.info wixEvent[3471]: Network Link - NIC 1 link is up 100 Mbps full duplex. Nov 1 13:26:32 shmotashNAS daemon.info wixEvent[3471]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:11:01 - 13:26:38 [Version 01.09.00.96] ++++++++++++++ Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: mc_db_init ... 
Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Nov 1 13:26:39 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Nov 1 13:26:39 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === inotify init done. Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === Walking directory done. Nov 1 13:26:52 shmotashNAS daemon.info init: Starting pid 4078, console /dev/null: '/bin/touch' Nov 1 13:26:52 shmotashNAS daemon.info init: Starting pid 4079, console /dev/ttyS0: '/sbin/getty' Nov 1 13:26:52 shmotashNAS daemon.info wixEvent[3471]: System Startup - System startup. Nov 1 13:27:28 shmotashNAS daemon.warn wixEvent[3471]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Nov 1 13:44:48 shmotashNAS auth.info sshd[5375]: Accepted password for root from 192.168.1.103 port 50217 ssh2 Nov 1 13:51:08 shmotashNAS auth.info sshd[5894]: Accepted password for root from 192.168.1.103 port 50380 ssh2
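
     A hedged sketch of the standard first move for this situation, assuming the two member partitions are sda4 and sdb4 as on the NAS and show up under some /dev/sdX name on the rescue PC (with SMART already reporting a failed drive and 1405 bad sectors, every further read attempt risks losing more, so one images both members first and works on the copies):

       # copy each RAID0 member partition onto good storage, leaving bad areas for later passes
       ddrescue -n /dev/sdX4 /mnt/rescue/member1.img /mnt/rescue/member1.log
       ddrescue -n /dev/sdY4 /mnt/rescue/member2.img /mnt/rescue/member2.log
       # then reassemble the 64K-chunk stripe read-only from the images
       losetup /dev/loop1 /mnt/rescue/member1.img
       losetup /dev/loop2 /mnt/rescue/member2.img
       mdadm --assemble --readonly /dev/md2 /dev/loop1 /dev/loop2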

    Read the article

  • Booting Debian5 (Lenny) on 2.6.16 Kernel

    - by bk
    Due to a proprietary kernel module that I don't have the source to and is very picky about what kernel versions it will load into (even with modprobe --f), I find myself in need of running a 2.6.16.XX kernel on my Debian5 machine. The machine boots fine with the 2.6.26-2 stock kernel, and I have successfully build and booted 2.6.26 and 2.6.31 based kernels by making a .deb and the ndoing dpkg -i. However, when I do the same approach for 2.6.16, the kernel hangs at boot. I'm testing this in a VMWare image, so I don't think its an issue of newer hardware not supported by the older kernel. For a working kernel, at boot I get: Uncompressing Linux.. OK booting the kernel Loading, please wait... mdadm: No devices listed in the conf file were found kinit name_to_dev_t /dev/hda5 (dev5,3) ... With 2.6.16.60, I never get the kinit message. It hangs after the mdadm line. There are no mdadm arrays on this machine, so I doubt its an issue inside the mdadm stuff, which is supposed to just error out as it does in the 2.6.26 case above, but for some reason I'm getting stuck getting into kinit. I've been banging my head against this wall so I'm very open to suggestions on how to go about troubleshooting this.
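
     A hedged sketch of how one might see where the 2.6.16 boot actually stops, assuming the initramfs was built by Debian's initramfs-tools (the next message after the mdadm line should come from kinit resolving the root device, so the usual suspects are the root disk getting a different name under the older IDE/SATA drivers, or a missing controller module in the initramfs):

       # at the GRUB prompt, append one of these to the 2.6.16 kernel line:
       #   break=premount   -> drop to a busybox shell just before the root mount
       #   rootdelay=15     -> give slow device detection more time to settle
       # inside the busybox shell, check what this kernel calls the disk:
       cat /proc/partitions
       ls /dev/hda* /dev/sda*
       # back in the running system, confirm the controller driver is in that initramfs
       # (adjust the file name to the actual 2.6.16 initrd):
       zcat /boot/initrd.img-2.6.16.60 | cpio -t | grep -iE 'ata|ide'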

    Read the article

  • Unable to access intel fake RAID 1 array in Fedora 14 after reboot

    - by Sim
    Hello everyone, 1st I am relatively new to linux (but not to *nix). I have 4 disks assembled in the following intel ahci bios fake raid arrays: 2x320GB RAID1 - used for operating systems md126 2x1TB RAID1 - used for data md125 I have used the raid of size 320GB to install my operating system and the second raid I didn't even select during the installation of Fedora 14. After successful partitioning and installation of Fedora, I tried to make the second array available, it was possible to make it visible in linux with mdadm --assembe --scan , after that I created one maximum size partition and 1 maximum size ext4 filesystem in it. Mounted, and used it. After restart - a few I/O errors during boot regarding md125 + inability to mount the filesystem on it and dropped into repair shell. I commented the filesystem in fstab and it booted. To my surprise, the array was marked as "auto read only": [root@localhost ~]# cat /proc/mdstat Personalities : [raid1] md125 : active (auto-read-only) raid1 sdc[1] sdd[0] 976759808 blocks super external:/md127/0 [2/2] [UU] md127 : inactive sdc[1](S) sdd[0](S) 4514 blocks super external:imsm md126 : active raid1 sda[1] sdb[0] 312566784 blocks super external:/md1/0 [2/2] [UU] md1 : inactive sdb[1](S) sda[0](S) 4514 blocks super external:imsm unused devices: <none> [root@localhost ~]# and the partition in it was not available as device special file in /dev: [root@localhost ~]# ls -l /dev/md125* brw-rw---- 1 root disk 9, 125 Jan 6 15:50 /dev/md125 [root@localhost ~]# But the partition is there according to fdisk: [root@localhost ~]# fdisk -l /dev/md125 Disk /dev/md125: 1000.2 GB, 1000202043392 bytes 19 heads, 10 sectors/track, 10281682 cylinders, total 1953519616 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x1b238ea9 Device Boot Start End Blocks Id System /dev/md125p1 2048 1953519615 976758784 83 Linux [root@localhost ~]# I tried to "activate" the array in different ways (I'm not experienced with mdadm and the man page is gigantic so I was only browsing it looking for my answer) but it was impossible - the array would still stay in "auto read only" and the device special file for the partition it will not be in /dev. It was only after I recreated the partition via fdisk that it reappeared in /dev... until next reboot. So, my question is - How do I make the array automatically available after reboot? 
Here is some additional information: 1st I am able to see the UUID of the array in blkid: [root@localhost ~]# blkid /dev/sdc: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3" /dev/sdd: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3" /dev/md126p1: UUID="60C8D9A7C8D97C2A" TYPE="ntfs" /dev/md126p2: UUID="3d1b38a3-b469-4b7c-b016-8abfb26a5d7d" TYPE="ext4" /dev/md126p3: UUID="1Msqqr-AAF8-k0wi-VYnq-uWJU-y0OD-uIFBHL" TYPE="LVM2_member" /dev/mapper/vg00-rootlv: LABEL="_Fedora-14-x86_6" UUID="34cc1cf5-6845-4489-8303-7a90c7663f0a" TYPE="ext4" /dev/mapper/vg00-swaplv: UUID="4644d857-e13b-456c-ac03-6f26299c1046" TYPE="swap" /dev/mapper/vg00-homelv: UUID="82bd58b2-edab-4b4b-aec4-b79595ecd0e3" TYPE="ext4" /dev/mapper/vg00-varlv: UUID="1b001444-5fdd-41b6-a59a-9712ec6def33" TYPE="ext4" /dev/mapper/vg00-tmplv: UUID="bf7d2459-2b35-4a1c-9b81-d4c4f24a9842" TYPE="ext4" /dev/md125: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3" /dev/sda: TYPE="isw_raid_member" /dev/md125p1: UUID="420adfdd-6c4e-4552-93f0-2608938a4059" TYPE="ext4" [root@localhost ~]# Here is how /etc/mdadm.conf looks like: [root@localhost ~]# cat /etc/mdadm.conf # mdadm.conf written out by anaconda MAILADDR root AUTO +imsm +1.x -all ARRAY /dev/md1 UUID=89f60dee:e46a251f:7475814b:d4cc19a9 ARRAY /dev/md126 UUID=a8775c90:cee66376:5310fc13:63bcba5b ARRAY /dev/md125 UUID=b9a1149f:ae114fc8:a6000d77:354dc42a [root@localhost ~]# here is how /proc/mdstat looks like after I recreate the partition in the array so that it becomes available: [root@localhost ~]# cat /proc/mdstat Personalities : [raid1] md125 : active raid1 sdc[1] sdd[0] 976759808 blocks super external:/md127/0 [2/2] [UU] md127 : inactive sdc[1](S) sdd[0](S) 4514 blocks super external:imsm md126 : active raid1 sda[1] sdb[0] 312566784 blocks super external:/md1/0 [2/2] [UU] md1 : inactive sdb[1](S) sda[0](S) 4514 blocks super external:imsm unused devices: <none> [root@localhost ~]# Detailed output regarding the array in subject: [root@localhost ~]# mdadm --detail /dev/md125 /dev/md125: Container : /dev/md127, member 0 Raid Level : raid1 Array Size : 976759808 (931.51 GiB 1000.20 GB) Used Dev Size : 976759940 (931.51 GiB 1000.20 GB) Raid Devices : 2 Total Devices : 2 Update Time : Fri Jan 7 00:38:00 2011 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 30ebc3c2:b6a64751:4758d05c:fa8ff782 Number Major Minor RaidDevice State 1 8 32 0 active sync /dev/sdc 0 8 48 1 active sync /dev/sdd [root@localhost ~]# and /etc/fstab, with /data commented (the filesystem that is on this array): # # /etc/fstab # Created by anaconda on Thu Jan 6 03:32:40 2011 # # Accessible filesystems, by reference, are maintained under '/dev/disk' # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info # /dev/mapper/vg00-rootlv / ext4 defaults 1 1 UUID=3d1b38a3-b469-4b7c-b016-8abfb26a5d7d /boot ext4 defaults 1 2 #UUID=420adfdd-6c4e-4552-93f0-2608938a4059 /data ext4 defaults 0 1 /dev/mapper/vg00-homelv /home ext4 defaults 1 2 /dev/mapper/vg00-tmplv /tmp ext4 defaults 1 2 /dev/mapper/vg00-varlv /var ext4 defaults 1 2 /dev/mapper/vg00-swaplv swap swap defaults 0 0 tmpfs /dev/shm tmpfs defaults 0 0 devpts /dev/pts devpts gid=5,mode=620 0 0 sysfs /sys sysfs defaults 0 0 proc /proc proc defaults 0 0 [root@localhost ~]# Thanks in advance to everyone that even read this whole issue :-)
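
     A hedged sketch of what one might try, assuming only the output above ("active (auto-read-only)" is normal for an array that has not been written since assembly; it clears on the first write or with --readwrite, and the missing /dev/md125p1 node can usually be brought back by re-reading the partition table rather than re-creating the partition):

       # clear the auto-read-only state
       mdadm --readwrite /dev/md125
       # ask the kernel to re-read md125's partition table so md125p1 reappears
       partprobe /dev/md125
       ls -l /dev/md125p1
       # and mount by filesystem UUID in /etc/fstab so the entry survives any renumbering:
       # UUID=420adfdd-6c4e-4552-93f0-2608938a4059  /data  ext4  defaults,nofail  0  2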

    Read the article

  • Linux Software Raid runs checkarray on the First Sunday of the Month? Why?

    - by mgjk
    It looks like Debian has a default to run checkarray on the first Sunday of the month. This causes massive performance problems and heavy disk usage for 12 hours on my 2TB mirror. Doing this "just in case" is bizarre to me. Discovering data out of sync between the two disks, without quorum, would be a failure anyway: this massive checking could only tell me that I have an unrecoverable drive failure and corrupt data. Which is nice, but not all that helpful. Is it necessary? Given that I have no disk errors and no reason to believe my disks have failed, why is this check necessary? Should I take it out of my cron? /etc/cron.d# tail -1 /etc/cron.d/mdadm 57 0 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --quiet Thanks for any insight,
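    For what it's worth, a minimal sketch of two ways to switch the check off or stop one that is in progress (the AUTOCHECK variable is assumed to exist in /etc/default/mdadm as on stock Debian; md0 is a placeholder array name):
        # Disable the scheduled monthly check
        sed -i 's/^AUTOCHECK=true/AUTOCHECK=false/' /etc/default/mdadm
        # (or simply comment out the line in /etc/cron.d/mdadm shown above)
        # Stop a check that is already running on a given array
        echo idle > /sys/block/md0/md/sync_action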

    Read the article

  • Intermittent apt-get 'no installation candidate' error on fabric deploy

    - by jberryman
    I'm experiencing a strange issue with a fabric script I'm using to bootstrap a server on EC2. I launch a stock Ubuntu 12.04 AMI, wait for it to start, then proceed with: with settings(host_string="ubuntu@%s" % i.dns_name, connection_attempts=30): sudo('apt-get -qy update') sudo('apt-get -qy install --no-install-recommends mdadm') # don't install postfix #etc... The apt-get update appears to run fine and gives no errors; however, about 2/3 of the time, installing mdadm throws a "no installation candidate" error. When I ssh into the server and run apt-get install mdadm I get the same error. If I run apt-get update again by hand, the package then installs fine. Any ideas on what might be happening, or ideas for debugging?
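    One debugging direction is to see whether retrying the update/install step makes the error go away, which would point at the package lists simply not being ready yet on the freshly booted AMI; a rough shell sketch to run on the instance (retry count and sleep interval are arbitrary):
        #!/bin/sh
        # Retry apt-get update + install a few times; a just-booted AMI can
        # still be rewriting sources.list via cloud-init on the first pass.
        for attempt in 1 2 3; do
            apt-get -qy update &&
                apt-get -qy install --no-install-recommends mdadm && break
            echo "attempt $attempt failed, retrying in 10s" >&2
            sleep 10
        done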

    Read the article

  • Resize2fs at 81h and counting

    - by Adam
    Setup: 12x 1TB drives in a RAID6 (mdadm); cryptsetup running on top of the mdadm array; LVM running on the encrypted device; ext4 on the LVM. Background: I added a new drive to the RAID (increasing from 11 to 12 drives) and 'bubbled' up through the layers (mdadm, etc.) to resize the ext4 partition. This machine is used as a centralized repository for photography and as a backup server (for both Windows and Mac machines), so bringing it down to add the drive and wait for the resizing wasn't really an option. So I started the resize operation several days ago. htop is reporting the resize2fs operation as running for 81h now. dmesg and syslog are both clear, and the drives are still accessible. The resize command reports it has started an online resize of the partition, so the process IS running, and it is burning through 100% of one of my cores. Question: is it normal for the operation to take this long, or has something gone horribly wrong? Where would I start looking for signs of trouble?
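    A small sketch of checks that might separate "slow but progressing" from "stuck", assuming the filesystem is mounted at a placeholder path such as /srv/photos:
        # Is the filesystem actually growing? df should creep upward over time.
        watch -n 60 df -h /srv/photos
        # Is the array doing real I/O, or is resize2fs only spinning on CPU?
        iostat -x 5              # from the sysstat package
        dmesg | tail -n 20       # look for fresh ext4/md messages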

    Read the article

  • How to re-add RAID-10 dropped drive?

    - by thiesdiggity
    I have a problem that I can't seem to solve. We have a Ubuntu server setup with RAID-10 and two of the drives dropped out of the array. When I try to re-add them using the following command: mdadm --manage --re-add /dev/md2 /dev/sdc1 I get the following error message: mdadm: Cannot open /dev/sdc1: Device or resource busy When I do a "cat /proc/mdstat" I get the following: Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [r$ md2 : active raid10 sdb1[0] sdd1[3] 1953519872 blocks 64K chunks 2 near-copies [4/2] [U__U] md1 : active raid1 sda2[0] sdc2[1] 468853696 blocks [2/2] [UU] md0 : active raid1 sda1[0] sdc1[1] 19530688 blocks [2/2] [UU] unused devices: <none> When I run "/sbin/mdadm --detail /dev/md2" I get the following: /dev/md2: Version : 00.90 Creation Time : Mon Sep 5 23:41:13 2011 Raid Level : raid10 Array Size : 1953519872 (1863.02 GiB 2000.40 GB) Used Dev Size : 976759936 (931.51 GiB 1000.20 GB) Raid Devices : 4 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Thu Oct 25 09:25:08 2012 State : active, degraded Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Layout : near=2, far=1 Chunk Size : 64K UUID : c6d87d27:aeefcb2e:d4453e2e:0b7266cb Events : 0.6688691 Number Major Minor RaidDevice State 0 8 17 0 active sync /dev/sdb1 1 0 0 1 removed 2 0 0 2 removed 3 8 49 3 active sync /dev/sdd1 Output of df -h is: Filesystem Size Used Avail Use% Mounted on /dev/md1 441G 2.0G 416G 1% / none 32G 236K 32G 1% /dev tmpfs 32G 0 32G 0% /dev/shm none 32G 112K 32G 1% /var/run none 32G 0 32G 0% /var/lock none 32G 0 32G 0% /lib/init/rw tmpfs 64G 215M 63G 1% /mnt/vmware none 441G 2.0G 416G 1% /var/lib/ureadahead/debugfs /dev/mapper/RAID10VG-RAID10LV 1.8T 139G 1.6T 8% /mnt/RAID10 When I do a "fdisk -l" I can see all the drives needed for the RAID-10. The RAID-10 is part of the /dev/mapper, could that be the reason why the device is coming back as busy? Anyone have any suggestions on what I can try to get the drives back into the array? Any help would be greatly appreciated. Thanks!
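    Worth noting from the /proc/mdstat output above: /dev/sdc1 is already an active member of md0, which on its own would explain "Device or resource busy" when trying to add it to md2. A short sketch of how one might confirm what is holding the partition (standard commands, nothing specific to this box):
        # What does the partition's own superblock say it belongs to?
        mdadm --examine /dev/sdc1
        # Which kernel device currently holds sdc1 (md0, md2, dm-*, ...)?
        ls /sys/block/sdc/sdc1/holders/
        # And list device-mapper targets in case LVM/dm is the holder instead
        dmsetup ls --tree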

    Read the article

  • Can't re-mount existing RAID10 on Ubuntu

    - by Zoran
    I saw similar questions, but didn't find a solution to my problem. After a power cut, one of the RAID10 arrays (4 disks) appears to be malfunctioning. I can make the array active, but cannot mount it. It is always the same error: mount: you must specify the filesystem type. So, here is what I have when I type mdadm --detail /dev/md0: /dev/md0: Version : 00.90.03 Creation Time : Tue Sep 1 11:00:40 2009 Raid Level : raid10 Array Size : 1465148928 (1397.27 GiB 1500.31 GB) Used Dev Size : 732574464 (698.64 GiB 750.16 GB) Raid Devices : 4 Total Devices : 3 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Mon Jun 11 09:54:27 2012 State : clean, degraded Active Devices : 3 Working Devices : 3 Failed Devices : 0 Spare Devices : 0 Layout : near=2, far=1 Chunk Size : 64K UUID : 1a02e789:c34377a1:2e29483d:f114274d Events : 0.166 Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 0 0 1 removed 2 8 48 2 active sync /dev/sdd 3 8 64 3 active sync /dev/sde In /etc/mdadm/mdadm.conf I have: by default, scan all partitions (/proc/partitions) for MD superblocks. alternatively, specify devices to scan, using wildcards if desired. DEVICE partitions auto-create devices with Debian standard permissions CREATE owner=root group=disk mode=0660 auto=yes automatically tag new arrays as belonging to the local system HOMEHOST <system> instruct the monitoring daemon where to send mail alerts MAILADDR root definitions of existing MD arrays ARRAY /dev/md0 level=raid10 num-devices=4 UUID=1a02e789:c34377a1:2e29483d:f114274d ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9b592be7:c6a2052f:2e29483d:f114274d This file was auto-generated... So, my question is: how can I mount the md0 array (md1 mounted without a problem) in order to preserve the existing data? 
One more thing, fdisk -l command gives the following result: Disk /dev/sdb: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x660a6799 Device Boot Start End Blocks Id System /dev/sdb1 * 1 88217 708603021 83 Linux /dev/sdb2 88218 91201 23968980 5 Extended /dev/sdb5 88218 91201 23968948+ 82 Linux swap / Solaris Disk /dev/sdc: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x0008f8ae Device Boot Start End Blocks Id System /dev/sdc1 1 88217 708603021 83 Linux /dev/sdc2 88218 91201 23968980 5 Extended /dev/sdc5 88218 91201 23968948+ 82 Linux swap / Solaris Disk /dev/sdd: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x4be1abdb Device Boot Start End Blocks Id System Disk /dev/sde: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xa4d5632e Device Boot Start End Blocks Id System Disk /dev/sdf: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xdacb141c Device Boot Start End Blocks Id System Disk /dev/sdg: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xdacb141c Device Boot Start End Blocks Id System Disk /dev/md1: 750.1 GB, 750156251136 bytes 2 heads, 4 sectors/track, 183143616 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk identifier: 0xdacb141c Device Boot Start End Blocks Id System Warning: ignoring extra data in partition table 5 Warning: ignoring extra data in partition table 5 Warning: ignoring extra data in partition table 5 Warning: invalid flag 0x7b6e of partition table 5 will be corrected by w(rite) Disk /dev/md0: 1500.3 GB, 1500312502272 bytes 255 heads, 63 sectors/track, 182402 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x660a6799 Device Boot Start End Blocks Id System /dev/md0p1 * 1 88217 708603021 83 Linux /dev/md0p2 88218 91201 23968980 5 Extended /dev/md0p5 ? 121767 155317 269488144 20 Unknown And one more thing. When using mdadm --examine command, here ise result: mdadm -v --examine --scan /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sd ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9b592be7:c6a2052f:2e29483d:f114274d devices=/dev/sdf ARRAY /dev/md0 level=raid10 num-devices=4 UUID=1a02e789:c34377a1:2e29483d:f114274d devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde md0 has 3 devices which are active. Can someone instruct me how to solve this issue? If it is possible, I would like not to removing faulty HDD. Please advise
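    One hedged observation from the fdisk output above: md0 contains a partition table with /dev/md0p1 on it, so the filesystem may live on that partition rather than on the raw md device, which would explain mount's "you must specify the filesystem type" complaint. A sketch (the mount point is a placeholder):
        # Make sure the kernel has device nodes for the partitions inside md0
        partprobe /dev/md0        # or: kpartx -a /dev/md0
        # Check what is actually on the first partition before mounting it
        blkid /dev/md0p1
        # Then mount the partition, not the bare array
        mount /dev/md0p1 /mnt/data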

    Read the article

  • Linux boot on a raid1 software raid ?

    - by azera
    Hello I am trying to convert my single disk boot to a raid1 boot So far here is what i have: I sucessfully create the raid 1 as degraded with the new drive alone, I copied all the data on it I can mount that raid 1, see its files etc I already have a raid5 that is working on the same box (although not booting on it) I have installed grub on both drive When grub boot, it loads the kernel alright, but during the kernel boot it fails to load the "root block device" The kernel tells me : 1 - detected that root device is an md device 2 - determining root devices 3 - mounting root 4 - mounting /dev/md125 on /newroot failed: input/output error. Please enter another root device: ... At this point, if I enter /dev/sda3 (my "old" root device that isn't converted to raid yet) everything boots fine without the root. The /dev/md125 device is indeed created but it seems to be created after the error happens, as in it creates it after loading the device, when mdadm is loaded. Somehow it looks like it can't/doesn't load the raid array before it needs to mount it, and I don't know how I can solve that. My config files (taken from the system once it boots with sda3 as root device): $ cat /etc/mdadm.conf ARRAY /dev/md/md0-r5 metadata=0.90 UUID=1a118934:c831bdb3:64188b84:66721085 ARRAY /dev/md125 metadata=0.90 UUID=48ec4190:a80d4dde:64188b84:66721085 $ cat /proc/mdstat Personalities : [raid1] [raid6] [raid5] [raid4] [raid0] [raid10] md125 : active raid1 sdc3[1] 477853312 blocks [2/1] [_U] md127 : active raid5 sdd[0] sdf[3] sdb[2] sde[1] 4395415488 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU] unused devices: <none> $ cat /boot/grub/menu.lst default 0 timeout 8 splashimage=(hd0,0)/boot/grub/splash.xpm.gz title Gentoo Linux 2.6.31-r10 root (hd0,0) #kernel /boot/kernel-genkernel-x86_64-2.6.31-gentoo-r10 root=/dev/ram0 real_root=/dev/sda3 kernel /boot/kernel-genkernel-x86_64-2.6.31-gentoo-r10 root=/dev/md125 md=125,/dev/sdc3,/dev/sda3 initrd /boot/initramfs-genkernel-x86_64-2.6.31-gentoo-r10 # blkid /dev/sda1: UUID="89fee223-b845-4e0a-8a0b-e6cf695d5bcf" TYPE="ext2" /dev/sda2: UUID="a72296a8-d7d4-447f-a34b-ee920fd1a767" TYPE="swap" /dev/sda3: UUID="97eb0a6a-c385-4a9d-bf74-c0bab1fa4dc1" TYPE="ext3" /dev/sdb: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member" /dev/sdc1: UUID="d36537fd-19a0-b8a3-6418-8b8466721085" TYPE="linux_raid_member" /dev/sdd: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member" /dev/sde: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member" /dev/md127: UUID="13a41589-4cf1-4c04-91ca-37484182c783" TYPE="ext4" /dev/sdf: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member" /dev/sdc2: UUID="a1916397-1b48-45d7-9f98-73aa521e882f" TYPE="swap" /dev/sdc3: UUID="48ec4190-a80d-4dde-6418-8b8466721085" TYPE="linux_raid_member" /dev/md125: UUID="c947ed64-1d4d-4d1d-b4d2-24669fff916e" SEC_TYPE="ext2" TYPE="ext3" # mdadm -E mdadm: No devices to examine # fdisk -l Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xe975e9fc Device Boot Start End Blocks Id System /dev/sda1 1 5 40131 83 Linux /dev/sda2 6 1311 10490445 82 Linux swap / Solaris /dev/sda3 1312 60801 477853425 83 Linux Disk /dev/sdc: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xe975e9fc Device Boot Start End Blocks Id System /dev/sdc1 1 5 40131 83 Linux /dev/sdc2 6 1311 10490445 82 
Linux swap / Solaris /dev/sdc3 1312 60801 477853425 83 Linux Disk /dev/md125: 489.3 GB, 489321791488 bytes 2 heads, 4 sectors/track, 119463328 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk identifier: 0x00000000 Disk /dev/md125 doesn't contain a valid partition table
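    Two directions that are commonly tried for this kind of "root md not assembled in time" failure, sketched here under the assumption that genkernel is building the initramfs (as menu.lst suggests) and that the option names match your genkernel version:
        # Direction 1: rely on in-kernel autodetection, which with 0.90
        # metadata requires the member partition type to be 0xfd
        # (Linux raid autodetect); sdc3 is currently type 83.
        fdisk /dev/sdc            # t -> partition 3 -> fd -> w
        # Direction 2: rebuild the initramfs with mdadm support and tell it
        # to assemble arrays at boot
        genkernel --mdadm initramfs
        # then append "domdadm" to the kernel line in /boot/grub/menu.lst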

    Read the article

  • Problems migrating software RAID 5 to new server (linux)

    - by leleu
    I have a CentOS setup with sw RAID5 that holds my data. Well, the server died, so I bought another box to migrate my drives to. Only thing is, I cannot get the RAID array rebuilt (not even sure it needs rebuilding, might just need the /dev/md0 mapping created... but I don't even know how to determine what I need!) Some details: RAID5 software (used mdadm) 4x 250GB drives (2 are SATA, 2 are EIDE -- would this matter? It worked fine in the other box...) latest CentOS distro built using mdadm I've got a decent amount of experience with standard linux stuff, but the hardware level stuff runs me in circles. I've spent some time googling and elsewhere here on SF, so please be kind for my newbie questions :). My question is this: how can I diagnose the problem? For all I know, I'm using the wrong device blocks when I try to rebuild the array, but I can't find the command to display only the devices that have some physical attachment. Is there some simple way for me to run mdadm, having it scan over all my physical drives, and say "hey, drives 2,5,6,7 are a software array, want me to mount it?" I basically just took the drives from my old box and put it into my new one. They show up in the BIOS. What steps do I need to take in order to get the array up, running, and mounted? Thanks in advance!
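    A minimal sketch of the discovery step being asked about: mdadm can scan every block device for RAID superblocks and report which ones belong to an array, so there is no need to guess device names (the /dev/md0 array name and the mount point below are assumptions):
        # Show every device carrying an md superblock and the array it was in
        mdadm --examine --scan -v
        # Assemble whatever mdadm recognises, then check the result
        mdadm --assemble --scan
        cat /proc/mdstat
        # A read-only mount is a reasonable first test
        mount -o ro /dev/md0 /mnt/raid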

    Read the article

  • RAID degraded on Ubuntu server

    - by reano
    We're having a very weird issue at work. Our Ubuntu server has 6 drives, set up with RAID1 as follows: /dev/md0, consisting of: /dev/sda1 /dev/sdb1 /dev/md1, consisting of: /dev/sda2 /dev/sdb2 /dev/md2, consisting of: /dev/sda3 /dev/sdb3 /dev/md3, consisting of: /dev/sdc1 /dev/sdd1 /dev/md4, consisting of: /dev/sde1 /dev/sdf1 As you can see, md0, md1 and md2 all use the same 2 drives (split into 3 partitions). I also have to note that this is done via ubuntu software raid, not hardware raid. Today, the /md0 RAID1 array shows as degraded - it is missing the /dev/sdb1 drive. But since /dev/sdb1 is only a partition (and /dev/sdb2 and /dev/sdb3 are working fine), it's obviously not the drive that's gone AWOL, it seems the partition itself is missing. How is that even possible? And what could we do to fix it? My output of cat /proc/mdstat: Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md1 : active raid1 sda2[0] sdb2[1] 24006528 blocks super 1.2 [2/2] [UU] md2 : active raid1 sda3[0] sdb3[1] 1441268544 blocks super 1.2 [2/2] [UU] md0 : active raid1 sda1[0] 1464710976 blocks super 1.2 [2/1] [U_] md3 : active raid1 sdd1[1] sdc1[0] 2930133824 blocks super 1.2 [2/2] [UU] md4 : active raid1 sdf2[1] sde2[0] 2929939264 blocks super 1.2 [2/2] [UU] unused devices: <none> FYI: I tried the following: mdadm /dev/md0 --add /dev/sdb1 But got this error: mdadm: add new device failed for /dev/sdb1 as 2: Invalid argument Output of mdadm --detail /dev/md0 is: /dev/md0: Version : 1.2 Creation Time : Sat Dec 29 17:09:45 2012 Raid Level : raid1 Array Size : 1464710976 (1396.86 GiB 1499.86 GB) Used Dev Size : 1464710976 (1396.86 GiB 1499.86 GB) Raid Devices : 2 Total Devices : 1 Persistence : Superblock is persistent Update Time : Thu Nov 7 15:55:07 2013 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 0 Spare Devices : 0 Name : lia:0 (local to host lia) UUID : eb302d19:ff70c7bf:401d63af:ed042d59 Events : 26216 Number Major Minor RaidDevice State 0 8 1 0 active sync /dev/sda1 1 0 0 1 removed
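    A hedged sketch of the usual next diagnostics when --add fails this way; note that --zero-superblock destroys the old metadata on sdb1, so it should only be run once you are certain sdb1 really is the partition that should rejoin md0:
        # What does sdb1's own superblock claim about itself?
        mdadm --examine /dev/sdb1
        # Compare sizes: an sdb1 smaller than sda1 would also make --add fail
        blockdev --getsz /dev/sda1
        blockdev --getsz /dev/sdb1
        # If the metadata is merely stale, wipe it and add sdb1 as a new member
        mdadm --zero-superblock /dev/sdb1
        mdadm /dev/md0 --add /dev/sdb1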

    Read the article

< Previous Page | 3 4 5 6 7 8 9 10  | Next Page >