Search Results

Search found 1900 results on 76 pages for 'xserve raid'.

Page 6/76

  • raid md device is not removed from memory, how to overcome this problem

    - by santhosha
    I created a RAID 10 array and then removed two member devices from md11, one by one. After that I tried to edit content on the still-mounted filesystem (the mount became unresponsive). When I then tried to remove the remaining devices, I got "device or resource busy" (the array is not removed from memory). I tried to terminate the processes holding it, but that did not work either. For four days now the resync has been stuck at 8.0% and does not change. cat /proc/mdstat Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [linear] [raid10] md11 : active raid10 sde1[3] sdj1[4] 286743936 blocks 64K chunks 2 near-copies [4/1] [___U] [1:2:3:0] [=...................] resync = 8.0% (23210368/286743936) finish=289392.6min speed=15K/sec mdadm -D /dev/md11 /dev/md11: Version : 00.90.03 Creation Time : Sun Jan 16 16:20:01 2011 Raid Level : raid10 Array Size : 286743936 (273.46 GiB 293.63 GB) Device Size : 143371968 (136.73 GiB 146.81 GB) Raid Devices : 4 Total Devices : 2 Preferred Minor : 11 Persistence : Superblock is persistent Update Time : Sun Jan 16 16:56:07 2011 State : active, degraded, resyncing Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Layout : near=2, far=1 Chunk Size : 64K Rebuild Status : 8% complete UUID : 5e124ea4:79a01181:dc4110d3:a48576ea Events : 0.23 Number Major Minor RaidDevice State 0 0 0 0 removed 1 0 0 1 removed 4 8 145 2 faulty spare rebuilding /dev/sdj1 3 8 65 3 active sync /dev/sde1 umount /dev/md11 umount: /dev/md11: not mounted mdadm -S /dev/md11 mdadm: fail to stop array /dev/md11: Device or resource busy lsof /dev/md11 COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME mount 2128 root 3r BLK 9,11 4058 /dev/md11 mount 5018 root 3r BLK 9,11 4058 /dev/md11 mdadm 27605 root 3r BLK 9,11 4058 /dev/md11 mount 30562 root 3r BLK 9,11 4058 /dev/md11 badblocks 30591 root 3r BLK 9,11 4058 /dev/md11 kill -9 2128 kill -9 5018 kill -9 27605 kill -9 30562 kill -3 30591 mdadm -S /dev/md11 mdadm: fail to stop array /dev/md11: Device or resource busy lsof /dev/md11 COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME mount 2128 root 3r BLK 9,11 4058 /dev/md11 mount 5018 root 3r BLK 9,11 4058 /dev/md11 mdadm 27605 root 3r BLK 9,11 4058 /dev/md11 mount 30562 root 3r BLK 9,11 4058 /dev/md11 badblocks 30591 root 3r BLK 9,11 4058 /dev/md11 cat /proc/mdstat Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [linear] [raid10] md11 : active raid10 sde1[3] sdj1[4] 286743936 blocks 64K chunks 2 near-copies [4/1] [___U] [1:2:3:0] [=...................] resync = 8.0% (23210368/286743936) finish=289392.6min speed=15K/sec
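    A minimal sketch of how one might force the stuck array out of memory, assuming the stale mounts and the badblocks scan shown by lsof are what is pinning /dev/md11 (device names follow the output above; adapt as needed):

        # lazily detach any stale mounts still referencing the array
        sudo umount -l /dev/md11
        # kill whatever still holds the block device open (the mount and badblocks processes above)
        sudo fuser -vkm /dev/md11
        # freeze the never-ending resync so the md thread releases the device
        echo frozen | sudo tee /sys/block/md11/md/sync_action
        # stopping the array should now succeed
        sudo mdadm --stop /dev/md11
        # optional: wipe the member superblocks before reusing the disks
        sudo mdadm --zero-superblock /dev/sde1 /dev/sdj1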

    Read the article

  • format/build raid 5 with one 4k drive, three 512b

    - by skidawgz
    I have 4 WD 1TB drives which I want to build into a 4x1TB RAID 5 array. I am not sure what course of action to take next. How do I configure my 4th drive (sde) to align with the rest? Will this affect performance? I received this message (which brings me here to ask these questions): The device presents a logical sector size that is smaller than the physical sector size. Aligning to a physical sector (or optimal I/O) size boundary is recommended, or performance may be impacted. fdisk -l shows: Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xf324ba09 Device Boot Start End Blocks Id System /dev/sdb1 2048 1953525167 976761560 fd Linux raid autodetect Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes 81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x38bcc1f0 Device Boot Start End Blocks Id System /dev/sdc1 2048 1953525167 976761560 fd Linux raid autodetect Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes 81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x570f77e7 Device Boot Start End Blocks Id System /dev/sdd1 2048 1953525167 976761560 fd Linux raid autodetect Disk /dev/sde: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0xeb665e7b Device Boot Start End Blocks Id System
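    A hedged sketch of one way to partition the 4K-sector drive so it lines up with the other three before building the array; a 2048-sector start is aligned for both 512-byte and 4096-byte sectors, and device names follow the fdisk output above:

        # give /dev/sde a partition starting at sector 2048, like sdb1/sdc1/sdd1
        sudo parted -a optimal /dev/sde mklabel msdos
        sudo parted -a optimal /dev/sde mkpart primary 2048s 100%
        sudo parted /dev/sde set 1 raid on
        # then build the RAID 5 from the four aligned partitions
        sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
            /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1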

    Read the article

  • How to access a fake raid?

    - by maaartinus
    I have a fake raid, which I wanted to access using mdadm /dev/md0 -A -c 128 -l stripe --verbose /dev/sda /dev/sdc which should be right, as far as I understand the man page. But I get the message mdadm: option -l not valid in assemble mode Leaving the offending option out leads to mdadm: failed to create /dev/md0 and (despite verbose) no more information. I'm assuming that -A requires some mdadm-specific header which is obviously missing. I probably need to use "build" instead of assemble, but from the description I'm really unsure whether this is a non-destructive operation. Is it? What exactly should I do? UPDATE: I see I haven't made clear that the array already exists as a fake RAID (I can't give the details about my mainboard right now). It looks like it does nothing except interleave blocks, so I hoped it could easily be done with mdadm, too. Maybe I'm completely wrong, but all the info I've found was concerned with booting from fake RAID, which I don't really need. I'd be happy with read access for now.
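    A sketch of the two usual read-only routes, assuming the stripe really carries no md superblock: dmraid, which interprets the BIOS fake-RAID metadata, or mdadm --build, which (unlike --create) writes no metadata at all:

        # option 1: let dmraid map the vendor fake-RAID set
        sudo dmraid -ay
        ls /dev/mapper/          # the striped set should show up here
        # option 2: hand-build a superblock-less stripe; --build is non-destructive
        sudo mdadm --build /dev/md0 --level=stripe --raid-devices=2 --chunk=128 \
            /dev/sda /dev/sdc
        # mount whatever filesystem lives on it read-only first
        sudo mount -o ro /dev/md0 /mnt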

    Read the article

  • Installing 10.04 server on HP xw9400 Workstation with RAID 5

    - by Dave Long
    I have a workstation that was given to me that is a friggen powerhouse, so I figured I would set it up as my development and demo server. This is my first experience installing Ubuntu onto a RAID array and so far it has not been a fun one. I have been following the Advanced Installation guide for installing Ubuntu 10.04 server, and it says that there will be an option on the Partition Disks screen to manually create the partitions, but the only options I have are: Configure iSCSI volumes Undo changes to partitions Finish partitioning and write changes to disk Just before I got to that screen I got a message that said: One or more drives containing Serial ATA RAID configurations have been found. Do you wish to activate these RAID devices? It doesn't matter whether I answer yes or no to that, I still get the same Partition Disks screen. When I try to select Finish partitioning and write changes to disk I just get the No root file system error. Has anyone else experienced this, and how do I get past it? Can I not run Ubuntu on this machine?
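    One thing that may be worth checking from a live session is whether stale Serial ATA RAID metadata is hiding the plain disks from the partitioner; a hedged sketch, safe only if you do not want to keep the existing fake-RAID set:

        # list any BIOS fake-RAID metadata the disks still carry
        sudo dmraid -r
        # erase that metadata so the installer sees the disk as plain SATA
        sudo dmraid -r -E /dev/sda
        # alternatively, answer "yes" to activating the RAID devices and partition
        # the mapped device that appears under /dev/mapper instead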

    Read the article

  • recovering raid 0 hard disk

    - by Hiawatha
    I ran into a huge (for me) problem. I was running a dual boot system (Win 7 / Linux) and at some point I decided to try Fedora (I am new to Linux). My hard disk configuration: 3 hard disks of 1 TB each, 2 set up as RAID 0 with Windows running on it and 1 for Linux. After installing it from a live USB I found out that Windows 7 is no longer in GRUB, and booting it shows a RAID error. I installed Ubuntu back, ran Disk Utility and checked: one of the disks in the RAID 0 set now shows a failed (READ) error. The first has 5 bad sectors and the second has 1 bad sector. Now I don't know what to do or how to repair it. I also don't know which data I could provide to get help. I tried ntfsfix and got this output: Mounting volume... NTFS signature is missing. FAILED Attempting to correct errors... NTFS signature is missing. FAILED Failed to startup volume: Invalid argument NTFS signature is missing. Trying the alternate boot sector Unrecoverable error Volume is corrupt. You should run chkdsk. #sudo ntfs-3g -o force,rw /dev/sdb /media/windows NTFS signature is missing. Failed to mount '/dev/sdb': Invalid argument The device '/dev/sdb' doesn't seem to have a valid NTFS. Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
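    Since the two Windows disks were a fake RAID 0 set, the NTFS volume only exists on the combined mapped device, not on /dev/sdb itself, which would explain why ntfsfix finds no NTFS signature there. A sketch of how one might look for the striped volume first (the mapper name below is only an example):

        # activate the BIOS fake-RAID set so the striped volume appears under /dev/mapper
        sudo dmraid -ay
        ls /dev/mapper/
        # hypothetical name - use whatever dmraid actually created
        sudo mount -t ntfs-3g -o ro /dev/mapper/isw_example_Volume0p1 /media/windows
        # only attempt repairs (ntfsfix, or chkdsk from Windows) on that mapped device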

    Read the article

  • Failed to install GRUB on a separate '/boot' partition on a fake RAID 0 (12.04LTS)

    - by gerben
    I'm having some problems getting GRUB configured for Ubuntu 12.04LTS on a fake RAID 0. I can either get the GRUB rescue prompt at startup, or just a GRUB prompt but I cannot boot to Ubuntu manually. How can I configure the GRUB to actually use the Ubuntu install? The steps taken: Installing Ubuntu on fake raid The Ubuntu installer cannot install Ubuntu on the drive. After defining the partitions to use it fails with "Error: ???", pressing OK terminates the installer. Therefore, I used GParted to configure the partitions: /dev/mapper/sil_agadaccfacbg : (the RAID configuration, created partition): /dev/mapper/sil_agadaccfacbg1:ext2, 200MiB, (with 'boot' flag) /dev/mapper/sil_agadaccfacbg3:ext2, 67.75GiB, (which will contain Ubuntu) /dev/mapper/sil_agadaccfacbg2:extended, 1.00GiB, (for swap) Contains: /dev/mapper/sil_agadaccfacbg5: unknown Because of the fake-RAID, I already mounted the destination partitions before running the Ubuntu installer: > mkdir /mnt/boot > sudo mount /dev/mapper/sil_agadaccfacbg1 /mnt/boot > mkdir /mnt/ubuntu > sudo mount /dev/mapper/sil_agadaccfacbg3 /mnt/ubuntu In the installer I chose the following partition usage: /dev/mapper/sil_agadaccfacbg1 ext2, mount at /boot (209MB) /dev/mapper/sil_agadaccfacbg3 ext2, mount at / (72751MB) /dev/mapper/sil_agadaccfacbg5 swap Device for boot loader installation: /dev/mapper/sil_agadaccfacbg, linux device-mapper (striped) (74.0GB) This will install Ubuntu, but will fail to install GRUB (it seems to use /dev/sda no matter which one I choose) Installing GRUB with dpkg-reconfigure I followed this guide, but adapted it for two partitions: sudo mount /dev/mapper/sil_agadaccfacbg3 /mnt/ubuntu sudo mount --bind /dev /mnt/ubuntu/dev sudo mount --bind /proc /mnt/ubuntu/proc sudo mount --bind /sys /mnt/ubuntu/sys sudo mount /dev/mapper/sil_agadaccfacbg1 /mnt/boot sudo mount --bind /boot /mnt/boot sudo chroot /mnt/ubuntu dpkg-reconfigure grub-pc However, it does not ask where to install GRUB (I should choose /dev/mapper/sil_agadaccfacbg somewhere..) After reboot I get the GRUB rescue prompt with message no such device Installing GRUB with grub-install After the same mount commands as above, I continued with: > sudo grub-install --root-directory=/mnt/boot /dev/mapper/sil_agadaccfacbg This gives the following message: /usr/sbin/grub-probe: error: cannot find a device for /mnt/boot/boot/grub (is /dev mounted?) It does succeed when mounting just the boot partition : sudo mount /dev/mapper/sil_agadaccfacbg1 /mnt sudo grub-install --root-directory=/mnt/ /dev/mapper/sil_agadaccfacbg This finishes with: Installation finished. No error reported. After reboot I get the GRUB console, with welcome text. Attempting to manually start Ubuntu: ls (hd0) (hd0,msdos3) : (Ubuntu install partition) (hd0,msdos1) : (Ubuntu boot partition) (hd1) (hd1,msdos1) : (Ubuntu live USB) ls (hd0,msdos3)/ contains: - vmlinuz - lib/ - tmp/ - initrd.img - mnt/ - var/ - proc/ - boot/ - root/ - etc/ - run/ - media/ - sbin/ - bin/ - selinux/ - dev/ - srv/ - home/ - sys/ ls (hd0,msdos1)/ contains: -grub/ -boot/ -initrd.img-3.8.0-29-generic -vmlinuz-3.8.0.29-generic -config-3.8 linux (hd0,msdos3)/vmlinuz This returns "error: out of disk" Installing GRUB on Ubuntu partition with grub-install > sudo mount /dev/mapper/sil_agadaccfacbg3 /mnt > sudo grub-install --root-directory=/mnt/ /dev/mapper/sil_agadaccfacbg This finishes with message: > Installation finished. No error reported. After reboot get the message "error: out of disk" and the GRUB rescue prompt. 
Configuring GRUB with grub-mkconfig Attempting to run grub-mkconfig with different destinations results in the same message: /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?). Remarks: Initially I didn't use a separate /boot partition, but the GRUB install then also failed. Because some mention that a small partition at the beginning of the drive is necessary on old machines, I retried with a /boot partition This is a single boot (no other OS's installed/used)
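    A sketch of the chroot layout that usually lets grub-install and grub-probe find their devices: mount the boot partition inside the chroot at /boot (not at a separate /mnt/boot) and bind /dev, /proc and /sys before entering it. Device names follow the question:

        sudo mount /dev/mapper/sil_agadaccfacbg3 /mnt/ubuntu
        sudo mount /dev/mapper/sil_agadaccfacbg1 /mnt/ubuntu/boot
        for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt/ubuntu$d; done
        sudo chroot /mnt/ubuntu
        # inside the chroot: install to the whole mapped disk, then regenerate the config
        grub-install /dev/mapper/sil_agadaccfacbg
        update-grub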

    Read the article

  • Reconstructing the disk order in RAID 6 with 7 disks

    - by rkotulla
    A little background to this question first: I am running a RAID-6 within a QNAP TS869L external RAID/NAS system. I started with 5 disks of 3 TB each back in the day, and later added another 2 disks of 3TB to the RAID. The QNAP internals handled the growing and re-syncing etc, and everything seemed to be perfectly fine. About 2 weeks ago, I had one of the disks (disk #5; disk #2 has gone bad in the meantime) fail, and somehow (I have no idea why) disks 1 and 2 also got kicked out of the array. I replaced disk #5, but the RAID didn't start working again. After some calls to QNAP technical support, they re-created the array (using mdadm --create --force --assume-clean ...), but the resulting array couldn't find a filesystem, and I was kindly referred to contact a data recovery company that I can't afford. After some digging through old log files, resetting the disk to factory default, etc, I found a few errors that were made during this re-create - I wish I still had some of the original metadata, but unfortunately I don't (I definitely learned that lesson). I'm currently at the point where I know the correct chunk size (64K) and metadata version (1.0; the factory default was 0.9, but from what I read 0.9 doesn't handle disks over 2 TB, and mine are 3 TB), and I can now find the ext4 filesystem that should be on the disks. The only variable left to determine is the right disk order! I started with the description found in answer #4 of "Recover RAID 5 data after created new array instead of re-using" but am a little confused about what the order should be for a proper RAID-6. RAID-5 is pretty well documented in a number of places, but RAID-6 much less so. Also, does the layout, i.e. the distribution of parity and data chunks across the disks, change after growing the array from 5 to 7 disks, or does the re-sync re-organize them the way a native 7-disk RAID-6 would have been laid out?
Thanks some more mdadm output that might be helpful: mdadm version: [~] # mdadm --version mdadm - v2.6.3 - 20th August 2007 mdadm details from one of the disks in the array: [~] # mdadm --examine /dev/sda3 /dev/sda3: Magic : a92b4efc Version : 1.0 Feature Map : 0x0 Array UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa Name : 0 Creation Time : Tue Jun 10 10:27:58 2014 Raid Level : raid6 Raid Devices : 7 Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB) Array Size : 29286975360 (13965.12 GiB 14994.93 GB) Used Size : 5857395072 (2793.02 GiB 2998.99 GB) Super Offset : 5857395368 sectors State : clean Device UUID : 7c572d8f:20c12727:7e88c888:c2c357af Update Time : Tue Jun 10 13:01:06 2014 Checksum : d275c82d - correct Events : 7036 Chunk Size : 64K Array Slot : 0 (0, 1, failed, 3, failed, 5, 6) Array State : Uu_u_uu 2 failed mdadm details for the array in the current disk-order (based on my best guess reconstructed from old log-files) [~] # mdadm --detail /dev/md0 /dev/md0: Version : 01.00.03 Creation Time : Tue Jun 10 10:27:58 2014 Raid Level : raid6 Array Size : 14643487680 (13965.12 GiB 14994.93 GB) Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB) Raid Devices : 7 Total Devices : 5 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Tue Jun 10 13:01:06 2014 State : clean, degraded Active Devices : 5 Working Devices : 5 Failed Devices : 0 Spare Devices : 0 Chunk Size : 64K Name : 0 UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa Events : 7036 Number Major Minor RaidDevice State 0 8 3 0 active sync /dev/sda3 1 8 19 1 active sync /dev/sdb3 2 0 0 2 removed 3 8 51 3 active sync /dev/sdd3 4 0 0 4 removed 5 8 99 5 active sync /dev/sdg3 6 8 83 6 active sync /dev/sdf3 output from /proc/mdstat (md8, md9, and md13 are internally used RAIDs holding swap, etc; the one I'm after is md0) [~] # more /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] md0 : active raid6 sdf3[6] sdg3[5] sdd3[3] sdb3[1] sda3[0] 14643487680 blocks super 1.0 level 6, 64k chunk, algorithm 2 [7/5] [UU_U_UU] md8 : active raid1 sdg2[2](S) sdf2[3](S) sdd2[4](S) sdc2[5](S) sdb2[6](S) sda2[1] sde2[0] 530048 blocks [2/2] [UU] md13 : active raid1 sdg4[3] sdf4[4] sde4[5] sdd4[6] sdc4[2] sdb4[1] sda4[0] 458880 blocks [8/7] [UUUUUUU_] bitmap: 21/57 pages [84KB], 4KB chunk md9 : active raid1 sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sda1[0] sdb1[1] 530048 blocks [8/7] [UUUUUUU_] bitmap: 37/65 pages [148KB], 4KB chunk unused devices: <none>
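    A hedged sketch of the usual brute-force approach to the disk order: recreate the array with --assume-clean for each candidate permutation and test whether a valid ext4 filesystem appears, never writing to the filesystem itself. Recreating does rewrite the md superblocks (which have already been recreated once here), so only try it with the known-good chunk size and metadata version, and keep the two failed slots as "missing":

        # one candidate order; slots 2 and 4 stay missing, as in the current array
        sudo mdadm --stop /dev/md0
        sudo mdadm --create /dev/md0 --assume-clean --level=6 --raid-devices=7 \
            --chunk=64 --metadata=1.0 \
            /dev/sda3 /dev/sdb3 missing /dev/sdd3 missing /dev/sdg3 /dev/sdf3
        # test without writing: read-only fsck, then a read-only mount
        sudo fsck.ext4 -n /dev/md0
        sudo mount -o ro /dev/md0 /mnt
        # if the filesystem is garbage, stop the array and try the next permutation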

    Read the article

  • Will Ubuntu break my RAID 0 array?

    - by Chad
    I am upgrading an older machine today with new Motherboard, RAM, and CPU. Then I am going to do a fresh install of Ubuntu 64bit. Currently the old machine has an 80gb system drive, and a 4TB RAID 0 array. The old Motherboard has no SATA ports, so I used a SATA card. Ubuntu set up the old RAID array, will it still recognize the array on a newer machine? Are there any steps I should take to ensure the array isn't damaged? It's non-crucial data, but I would rather not start over if it can be avoided. Thanks.
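    Assuming the old array is Linux software RAID (mdadm) rather than the SATA card's own BIOS RAID, a short sketch of how one might record its identity before the move and re-assemble it on the fresh install:

        # on the old system: note the array definition and UUID
        sudo mdadm --detail --scan
        # on the new install: scan the members and assemble the array
        sudo mdadm --examine --scan
        sudo mdadm --assemble --scan
        # make it persistent so it comes up on every boot
        sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
        sudo update-initramfs -u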

    Read the article

  • Well supported Hardware Raid Controller

    - by ftiaronsem
    Hello everyone, I am currently planning to buy a hardware RAID controller. This became necessary since I am running Linux and Windows in parallel and now need redundancy for both OSes (I am going to use RAID 1 / mirroring). Therefore I am searching for a hardware RAID controller that is well supported by Linux / Ubuntu (reporting SMART values, stats for the hard drives, etc.). The controller should have four SATA ports and, if possible, fit in a PCIe x1 slot. I would greatly appreciate it if you could suggest some devices. Thanks in advance

    Read the article

  • Ubuntu 13.10 software raid

    - by Piotr Belniak
    I already had Ubuntu installed on my desktop PC, with software RAID 5 configured (3 partitions: /, swap and home). That system had been upgraded from 11.04 all the way to 13.04 and was quite messy, so I decided to install a fresh system on the existing partitions. First of all I found that there is no alternate version of the installer (which I had used to create the previous installation), so I started with the regular image. I installed the mdadm tools and assembled the partitions - fdisk shows them properly - then started the installation, and everything went fine until the GRUB installation - that part fails regardless of which partition I use as a target. On the other hand, neither openSUSE nor the Ubuntu 12.04 alternate installer has any problem installing GRUB on this configuration; unfortunately the Ubuntu 12.04 - 12.10 upgrade fails because of some Xorg issues ;(. Maybe someone has experience with installing the Ubuntu 13.10 GRUB on RAID 5 partitions and could give me a hint how to solve my problem. Thanks in advance, Piotr
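    One workaround that is sometimes suggested is to finish the installation without a boot loader and then install GRUB from a chroot onto each member disk's MBR; a sketch, assuming the root filesystem lives on /dev/md0 and the members are sda/sdb/sdc:

        # from the live session, after the installer has copied the system
        sudo mdadm --assemble --scan
        sudo mount /dev/md0 /mnt
        for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt
        # inside the chroot: put GRUB on every disk so any of them can boot the box
        grub-install /dev/sda
        grub-install /dev/sdb
        grub-install /dev/sdc
        update-grub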

    Read the article

  • How does btrfs RAID work in degraded mode?

    - by turbo
    My idea was that (using loopback devices) it works like this Create the raid array sudo mkfs.btrfs -m raid1 -d raid1 /dev/loop1 /dev/loop2 You mount them sudo mount /dev/loop1 /mnt and mark them touch goodcondition You unmount and simulate disk failure (remove disk or delete loopback device loop2 in my case) You mount degraded -o degraded and mark again touch degraded You add the bad disk again sudo btrfs dev add /dev/loop2 You rebalance sudo btrfs fi ba /mnt And Raid 1 should work again. But that's not the case. sudo btrfs fi show: Total devices 3 FS bytes used 28.00KB devid 3 size 4.00GB used 264.00MB path /dev/loop1 devid 2 size 4.00GB used 272.00MB path /dev/loop2 *** Some devices missing The file degraded lives on loop1 but not on loop2 when loop2 is mounted in degraded mode. Why is that?
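    For what it's worth, re-adding the same device creates a new btrfs device entry rather than healing the old one, so the stale "missing" entry normally has to be dropped before the balance; a sketch under that assumption:

        sudo mount -o degraded /dev/loop1 /mnt
        sudo btrfs device add /dev/loop2 /mnt
        # drop the entry for the copy that is no longer present
        sudo btrfs device delete missing /mnt
        # rewrite chunks so both raid1 copies exist again
        sudo btrfs balance start /mnt
        sudo btrfs filesystem show /mnt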

    Read the article

  • Failed install 12.04 on Intel Hardware Raid with Large Partition (> 2TB)

    - by Michael Wiles
    I have Intel Hardware RAID on the motherboard. I have ten 2 TB HDDs that I've configured as RAID 1+0 to be one big 8 TB volume. Now I'm trying to install Ubuntu 12.04 on it. After installing with the default desktop installation disk I get a blank screen with a flashing cursor. If I try the alternate installer's guided partitioning option I get error: out of disk. and the grub prompt. If I boot with the rescue disk or similar I can drop into a shell and view the disk. Everything also installs without an issue. I don't know what to do...
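    An 8 TB boot volume is beyond what an MBR partition table can address, which may explain both the "out of disk" error and the blank screen; a hedged sketch of the usual BIOS workaround with GPT plus a tiny bios_grub partition (the alternative is booting via UEFI or a small separate boot disk):

        # from the installer's manual partitioner or a live shell
        sudo parted /dev/sda mklabel gpt
        sudo parted /dev/sda mkpart bios_grub 1MiB 3MiB
        sudo parted /dev/sda set 1 bios_grub on
        # create / (and swap) after that, then point the boot loader at /dev/sda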

    Read the article

  • Ubuntu installation accuses false RAID

    - by rOim
    I'm trying to install an Ubuntu 12.04 / Win7 dual boot on my machine (with 2 750GB HDs). The problem is that during partitioning the installer says I'm using RAID and shows only one HD, with 1.5TB. I have, however, disabled RAID in the setup, installed Windows 7 only on HD 1, and want to keep Ubuntu on HD 2. I'm afraid to "resize" the partitions using the installer since that could mess up my Win7 installation. I did try the alternate installer, but had the same results.

    Read the article

  • Can I create a hybrid software-RAID array with disks of different sizes?

    - by stueng
    Products such as Synology offer something called Synology Hybrid RAID http://www.synology.com/us/products/features/RAID.php This RAID type allows you to make the best use of the disks available by using all the disk space available, as long as at least two disks share the same increased size, where a typical RAID setup would simply "throw away" the extra space. I would like to build a NAS with 4 disks available. I will begin by populating it with 3 X 3TB to give me 6TB usable. By the time I have filled this 6TB I imagine that 4TB disks will have come down in price, so at this stage I would add a 4th 4TB disk to give me an additional 3TB of space. When I next run out of space I will replace one of the original 3TB disks with a 4TB disk, giving me an additional 1TB of space. This is not possible with a typical RAID configuration, only with these "hybrid RAID" types. I am wondering if I can achieve a similar "hybrid RAID" with Ubuntu, or another Linux distro?
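    If a btrfs-based pool is acceptable, its raid1 profile already behaves much like the hybrid approach described above: it mirrors chunks across whichever devices have free space, so mixed sizes are used as long as any two disks can hold each chunk. A minimal sketch (device names and mount point are just examples):

        # three 3TB disks today
        sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd
        sudo mount /dev/sdb /srv/nas
        # later: grow the pool with the new 4TB disk and rebalance
        sudo btrfs device add /dev/sde /srv/nas
        sudo btrfs balance start /srv/nas
        sudo btrfs filesystem df /srv/nas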

    Read the article

  • Not enough components to start the RAID array?

    - by urig
    I'm trying to retrieve data from a "Western Digital MyBook World Edition (white light)" NAS device. This is basically an embedded Linux box with a 1TB HDD in it formatted in ext3. It stopped booting one day for no apparent reason. I have extracted the HDD from the NAS device and installed it in a desktop machine running Ubuntu 10.10 in the hope of accessing the files on the drive. Unfortunately, Ubuntu has not been able to mount the drive automatically. Having started up Disk Utility I see the drive as a multi disk device called "Array (Array)" showing Metadata Version 0.90.0. The device state is: "Not Running, not enough components to start". When I click the "Start RAID Array" button I get an error saying: "Not enough components to start the RAID array". Can you please tell me which components are missing and how to install them to get access to the drive's filesystem?
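    On these MyBook boards the data volume is usually a single-member md array, so it can often be started by pointing mdadm at the right partition and forcing it to run degraded; a sketch (the partition number is a guess - check the --examine output first):

        # find which partition carries the 0.90 md superblock
        sudo mdadm --examine /dev/sdb*
        # start the array even though the "other" members are missing
        sudo mdadm --assemble --run /dev/md0 /dev/sdb4
        # then mount it read-only to copy the data off
        sudo mount -o ro /dev/md0 /mnt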

    Read the article

  • Ubuntu 14.04 install on hardware with RAID

    - by nolak
    I've just built a new machine using an ASUS Mobo with 8GB of RAM and set up RAID 10 with 4 new HDD. I want to install Ubuntu 14.04 on it and so far have tried to do so 3 times with no luck. The installation seems to work fine but after it has finished it restarts and all I get is a black screen. Is there something I am missing? I've completed the install on other machines before without issue I just never tried it on a machine with RAID set up.

    Read the article

  • Red Hat 5.3 on HP Proliant DL380 G5 and failed drive on RAID controller

    - by thinkdreams
    I have a development ERP server here in my office that I assist with support on, and originally the DBA requested a single drive setup for some of the drives on the server. Thus the hardware RAID controller (an HP embedded controller) looks like: c0d0 (2 drive) RAID-1 c0d1 (2 drive) RAID-1 c0d2 (1 drive) No RAID <-- Failed c0d3 (1 drive) No RAID c0d4 (1 drive) No RAID c0d5 (1 drive) No RAID c0d2 has failed. I replaced the drive immediately with a spare using the hot-swap, but c0d2 continues to mark itself as failed, even when I umount the partition. I'm loath to reboot the server since I'm concerned about the server coming back up in rescue mode, but I'm afraid that's the only way to get the system to re-read the drive. I assumed there was some sort of auto-detection routine for this, but I haven't been able to figure out the proper procedure. I have installed the HP ACU CLI utilities, so I can see the hardware RAID setup. I'd really like to find out what the proper procedure should have been, where I went wrong, and how to correct it now. Obviously, it goes without saying that I should NOT have listened to the DBA and should have set the drives up as RAID-1 throughout, as was my first instinct. He wasn't worried about data loss, but it sure would have been easier to replace the failed drive. :)
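    With the ACU CLI installed, the controller's own view of the replaced drive is probably the place to start; a sketch of the read-only status commands (the slot number is an assumption - take it from the first command's output):

        # overall controller / array / logical drive layout
        hpacucli ctrl all show config
        # physical and logical drive status for the controller in slot 0
        hpacucli ctrl slot=0 pd all show status
        hpacucli ctrl slot=0 ld all show status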

    Read the article

  • What effect does RAID stripe size have on read-ahead settings?

    - by stbrody
    I'm trying to figure out the correct read-ahead values to set on a RAID10 array, and I'm wondering if the RAID stripe size should factor into my considerations. I've heard conflicting information about this in the past. I once heard that you should always set your read-ahead value to a multiple of the RAID stripe size, and never below the stripe size, because that is the minimum amount of data the RAID controller will ever try to read at once. Someone else told me, however, that setting read-ahead below the stripe size is fine, and can, in fact, increase the amount of parallel reads you can do across devices in the array, increasing performance and decreasing load on the array. So which is it? Do read-ahead settings that aren't multiples of the stripe size make sense or not?
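    Whichever rule you follow, read-ahead is a cheap runtime setting to experiment with on the array's block device; a quick sketch, with the full-stripe arithmetic spelled out for a hypothetical 6-disk RAID10 with a 256 KiB chunk:

        # read-ahead is expressed in 512-byte sectors
        sudo blockdev --getra /dev/md0
        # hypothetical example: 6 disks, RAID10 near-2, 256 KiB chunk
        #   data disks = 6 / 2 = 3; full stripe = 3 * 256 KiB = 768 KiB = 1536 sectors
        sudo blockdev --setra 1536 /dev/md0
        # benchmark sequential reads before and after changing it
        sudo hdparm -t /dev/md0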

    Read the article

  • Recover Intel Matrix Raid Configuration

    - by Catalin DICU
    Hello, I had 2 HDDs in an Intel Matrix RAID configuration on a motherboard with an Intel ICH9R. I had some RAID 0 partitions and one RAID 1 partition. Somehow, when replacing my video card, I partially unplugged the power connector from one of the HDDs. I booted and only one disk was showing. So I turned off the PC and plugged the power connector back in correctly, and now both HDDs are showing as "Non-RAID Disk". Is there a way to restore the RAID configuration from before? In fact I don't really remember how my partitions were configured; I had 2x100GB + 1x296GB in RAID 0 and one 50GB in RAID 1 (using 2x320GB HDDs), but I'm not sure how many volumes there were or how the partitions were allocated on the volumes. Is there a tool to find that out? Thanks
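    Before recreating anything it may be worth checking, from a Linux live CD/USB, whether any Intel Matrix (IMSM) metadata survived, and letting a partition scanner reconstruct the old layout; a cautious, read-only sketch:

        # mdadm understands IMSM containers - see if any metadata is left on the disks
        sudo mdadm --examine /dev/sda /dev/sdb
        sudo mdadm --examine --scan
        # if the metadata is gone, scan the raw disks for the old partition layout
        # (testdisk is read-only until you explicitly write a partition table)
        sudo testdisk /dev/sda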

    Read the article

  • Linux Software RAID: How to fsck on hard drive?

    - by Rick-Rainer Ludwig
    We have a Linux server running with software RAID 1. We see some issues in /var/log/messages like: unreadable sector. I want to perform a complete fsck on the drive to get some more information, but fsck /dev/md0 comes back clean because of the software RAID layer in between. How can I check the real hard drive? Do I need to disassemble the whole RAID? How do I deal with the inconsistency in the partition due to the additional software RAID header? Does anyone have a good idea for this?
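    Since the filesystem only exists on /dev/md0, the members themselves are better checked at the block level; a sketch that keeps the RAID assembled and writes nothing to the member disks (assuming an ext3/ext4 filesystem on md0):

        # force a full check of the filesystem on the md device (while unmounted)
        sudo fsck.ext4 -f /dev/md0
        # health of the physical disk behind the "unreadable sector" messages
        sudo smartctl -a /dev/sda
        sudo smartctl -t long /dev/sda      # offline surface test, non-destructive
        # read-only surface scan of the raw member (no -w/-n, so nothing is written)
        sudo badblocks -sv /dev/sda
        # or let md itself verify the mirror
        echo check | sudo tee /sys/block/md0/md/sync_action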

    Read the article

  • Looking for an actual experience of RAID 5 2 drive failure?

    - by Brian
    I'm wondering if anyone has any personal experience of RAID 5 2 drive failure with large drives? As I understand it, the theory is that with large 1-2TB drives, if one drive fails in the raid set, it needs to rebuild everything so is thus hitting all the other drives very hard, and the chance of another failure goes up, especially if the drives were from the same manufacturing batch. And if you lose another drive, you lose all the data. This is usually explained after the statement "RAID is not backup" which I agree with. The theory of this makes sense, and I understand it, but does it really happen?

    Read the article

  • Raid 5 with 4 disks on Debian automatically creates a spare drive

    - by Razer
    I'm trying to create a RAID 5 with 4x 2TB disks on Debian 6. I followed the instructions from: http://zackreed.me/articles/38-software-raid-5-in-debian-with-mdadm I created the RAID with the following command: sudo mdadm --create --verbose /dev/md0 --auto=yes --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 After creating the RAID mdadm --detail /dev/md0 shows me: /dev/md0: Version : 1.2 Creation Time : Mon Jun 11 18:14:26 2012 Raid Level : raid5 Array Size : 5860535808 (5589.04 GiB 6001.19 GB) Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB) Raid Devices : 4 Total Devices : 4 Persistence : Superblock is persistent Update Time : Mon Jun 11 18:14:26 2012 State : clean, degraded Active Devices : 3 Working Devices : 4 Failed Devices : 0 Spare Devices : 1 Layout : left-symmetric Chunk Size : 512K Name : rsserver:0 (local to host rsserver) UUID : a68c3c99:1ef865e9:5a8a7bdc:64710ed8 Events : 0 Number Major Minor RaidDevice State 0 8 17 0 active sync /dev/sdb1 1 8 33 1 active sync /dev/sdc1 2 8 49 2 active sync /dev/sdd1 3 0 0 3 removed 4 8 65 - spare /dev/sde1 Why is there a spare drive? I didn't create one. I don't want to use a spare drive.
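    As far as I know this is mdadm's normal RAID 5 creation behaviour: the array starts degraded with the last disk attached as a spare, parity is rebuilt onto it, and once the recovery finishes that "spare" becomes a regular active member. A quick way to watch this happen:

        # progress of the initial rebuild onto /dev/sde1
        watch cat /proc/mdstat
        sudo mdadm --detail /dev/md0
        # when recovery completes, State should read "clean" and /dev/sde1
        # should be listed as "active sync" rather than "spare"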

    Read the article

  • How to move a windows machine properly from RAID 1 to raid 10? [migrated]

    - by goober
    Goal I would like to add two more hard drives to my current RAID 1 setup and create a RAID 0 setup on top of the two RAID 1 setups (which I believe is referred to as "RAID 10"). Components Involved Intel P68 Chipset Motherboard 4 SATA ports that can be configured for Raid An intel SSD cache that sits in front of the RAID, and a 64 GB SSD configured in that manner Two 1TB HDDs configured in RAID 1 OS: Windows 7 Professional Resources Consulted so far I found a great resource on LinuxQuestions.org for a good "best practices" process for Linux machines, but I'd like to develop a similar process that I know works on Windows Machines.

    Read the article

  • Linux/OS X dualboot on a Macbook Pro with RAID

    - by GaretJax
    I'd like to install Gentoo Linux on my Macbook Pro by keeping my current OS X installation. I currently have OS X installed on a RAID 0 on two 160GB Intel SSDs and I'd like to create a new partition for Gentoo alongside OS X without losing the RAID setup but, from what I read on the net, Apple's software RAID is poorly (read "not at all") supported: BootCamp refuses to create a windows partition on a RAID volume rEFIt is not able to boot an OS from a software RAID even Apple's recovery partition for Lion can't be created on a RAID volume Is there a way to dual boot my Macbook while keeping the RAID solution?

    Read the article
