Search Results

Search found 1864 results on 75 pages for 'raid'.


  • What will happen with my RAID5 after motherboard change?

    - by abatishchev
    Currently I have an ASUS P5Q-EM and 3 HDDs in RAID5 using its on-board RAID controller, an Intel ICH10R. I want to buy a new motherboard, for example a Gigabyte GA-EQ45M-S2, which also has an on-board RAID controller, but an Intel ICH10DO. What will happen to my data on the RAID5? Will I have to re-create the array from scratch and lose all my data? Is such an array soft RAID or soft-hard? What if my current motherboard breaks? What will happen to my data then?
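
    For what it's worth, ICH10R arrays are firmware ("fake") RAID: the Intel Matrix (IMSM) metadata lives on the disks themselves, so an IMSM-capable board or OS driver can usually pick the array back up. A minimal way to check from a Linux live CD before swapping boards (a sketch; device names are examples):

      # Dump the RAID metadata stored on a member disk
      sudo mdadm --examine /dev/sda

      # List every array mdadm can reconstruct from on-disk metadata
      sudo mdadm --examine --scan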


  • RAID Read/Write Speed Gradually Slows

    - by Nalandial
    This is actually a server at home, but I felt it was sufficiently complicated not to belong on Super User, and it could easily apply to a professional situation. I have a file server running Debian (Lenny 5.0.4) with an XFS LVM on top of a RAID 5, with the OS drive separate from the RAID. It's also running Apache, Samba, and PostgreSQL. Side note: before anyone asks, I'm using RAID5 because I get more bang for the buck on raw drive space and still have some fault tolerance. When the box is freshly started (after a shutdown or reboot), reading/writing to its Samba share maxes out the gigabit network connection. Over time this slowly degrades, eventually dropping below 10MB/s; however, after a reboot the speed returns to maxing out the connection. Why is this happening, and is there a way to 'clear out' whatever's causing it without taking the server down? Thanks in advance!
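
    A few non-disruptive checks that can narrow down this kind of gradual slowdown (a sketch, assuming a stock Lenny install; the LVM device name is an example):

      # Watch whether dirty pages or cache pressure grow with uptime
      grep -E 'MemFree|Cached|Dirty|Writeback' /proc/meminfo

      # XFS fragmentation creeps up on long-running file servers
      sudo xfs_db -r -c frag /dev/mapper/vg0-share

      # Drop clean caches; if speed recovers, the cache is implicated rather than the disks
      sync && echo 3 | sudo tee /proc/sys/vm/drop_caches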


  • Making GRUB see RAID 0 under Ubuntu 10.10 LiveCD

    - by unknownthreat
    I recently installed Windows 7, expecting that it would alter GRUB, and it did. I've been following various guides, and I am always stuck at GRUB being unable to detect the usual RAID content. I've tried running: sudo grub > root (hd0,0) and GRUB complains it couldn't find my hard disk. So I tried: find (hd0,0) and it complains that it couldn't find anything. So I tried: find /boot/grub/stage1 and it said "file not found". So what now? How can I make GRUB see RAID 0 under the Ubuntu 10.10 LiveCD?
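
    One approach that often lets a live session reach fake-RAID stripes before reinstalling GRUB (a sketch, assuming a BIOS/firmware RAID 0 handled by dmraid; the mapper names below are placeholders that dmraid will report):

      sudo apt-get install dmraid           # in the live session
      sudo dmraid -ay                       # activate the firmware RAID set(s)
      ls /dev/mapper/                       # the striped set and its partitions appear here
      sudo mount /dev/mapper/isw_xxxx_Volume01 /mnt    # root partition of the set
      sudo grub-install --root-directory=/mnt /dev/mapper/isw_xxxx_Volume0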


  • Linux software raid robustness for raid1 vs other raid levels

    - by Waxhead
    I have a raid5 running and now also a raid1 that I set up yesterday. Since raid5 calculates parity, it should be able to catch silent data corruption on one disk. For raid1, however, the disks are just mirrors. The more I think about it, the more I figure that raid1 is actually quite risky. Sure, it will save me from a disk failure, but it might not be as good when it comes to protecting the data on the disk (which is actually more important to me). How does Linux software raid actually store raid1-type data on disk? How does it know which spindle is giving corrupt data (if the disk (subsystem) is not reporting any errors)? If raid1 really gives me disk protection rather than data protection, are there some tricks I can do with mdadm to create a two-disk "raid5-like" setup? E.g. lose capacity but still keep redundancy for the data too!?
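
    As a data point, md can at least count disagreements between raid1 mirrors, even though with two plain copies it cannot tell which side is correct. A scrub sketch (assuming the array is /dev/md0):

      # Start a consistency check ("scrub") of the mirror
      echo check | sudo tee /sys/block/md0/md/sync_action

      # Watch progress
      cat /proc/mdstat

      # Sectors that differed between the mirrors during the last check
      cat /sys/block/md0/md/mismatch_cnt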


  • Simple mdadm RAID 1 not activating spare

    - by Nick Liu
    I had created two 2TB HDD partitions (/dev/sdb1 and /dev/sdc1) in a RAID 1 array called /dev/md0 using mdadm on Ubuntu 12.04 LTS Precise Pangolin. The command sudo mdadm --detail /dev/md0 used to indicate both drives as active sync. Then, for testing, I failed /dev/sdb1, removed it, then added it again with sudo mdadm /dev/md0 --add /dev/sdb1. Watching cat /proc/mdstat showed the array rebuilding, but I didn't want to spend hours watching it, so I assumed the software knew what it was doing. After the progress bar was no longer showing, cat /proc/mdstat displays:

      md0 : active raid1 sdb1[2](S) sdc1[1]
            1953511288 blocks super 1.2 [2/1] [U_]

    And sudo mdadm --detail /dev/md0 shows:

      /dev/md0:
              Version : 1.2
        Creation Time : Sun May 27 11:26:05 2012
           Raid Level : raid1
           Array Size : 1953511288 (1863.01 GiB 2000.40 GB)
        Used Dev Size : 1953511288 (1863.01 GiB 2000.40 GB)
         Raid Devices : 2
        Total Devices : 2
          Persistence : Superblock is persistent
          Update Time : Mon May 28 11:16:49 2012
                State : clean, degraded
       Active Devices : 1
      Working Devices : 2
       Failed Devices : 0
        Spare Devices : 1
                 Name : Deltique:0 (local to host Deltique)
                 UUID : 49733c26:dd5f67b5:13741fb7:c568bd04
               Events : 32365

          Number   Major   Minor   RaidDevice   State
             1       8       33       0         active sync   /dev/sdc1
             1       0        0       1         removed
             2       8       17       -         spare   /dev/sdb1

    I've been told that mdadm automatically replaces removed drives with spares, but /dev/sdb1 isn't being moved into the expected position, RaidDevice 1.

    UPDATE (30 May 2012): A badblocks destructive read-write test of the entire /dev/sdb yielded no errors, as expected; both HDDs are new. As of the latest edit, I assembled the array with this command: sudo mdadm --assemble --force --no-degraded /dev/md0 /dev/sdb1 /dev/sdc1 The output was: mdadm: /dev/md0 has been started with 1 drive (out of 2) and 1 rebuilding. Rebuilding looks like it's progressing normally:

      md0 : active raid1 sdc1[1] sdb1[2]
            1953511288 blocks super 1.2 [2/1] [U_]
            [>....................]  recovery = 0.6% (13261504/1953511288) finish=2299.7min speed=14060K/sec

      unused devices: <none>

    I'm now waiting on this rebuild, but I'm expecting /dev/sdb1 to become a spare just like the five or six times that I've tried rebuilding before.

    UPDATE (31 May 2012): Yeah, it's still a spare. Ugh!

    UPDATE (01 June 2012): I'm trying Adrian Kelly's suggested command: sudo mdadm --assemble --update=resync /dev/md0 /dev/sdb1 /dev/sdc1 Waiting on the rebuild now... My questions are: Why isn't the spare drive becoming active sync? How can I make the spare drive become active?
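
    A sequence that commonly clears a stuck (S) spare when the disk itself is healthy (a sketch; it wipes md metadata on /dev/sdb1 only and leaves the surviving mirror untouched):

      sudo mdadm /dev/md0 --remove /dev/sdb1    # detach the stuck spare
      sudo mdadm --zero-superblock /dev/sdb1    # erase its stale superblock
      sudo mdadm /dev/md0 --add /dev/sdb1       # re-add as a brand-new member
      watch cat /proc/mdstat                    # rebuild should now fill RaidDevice 1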


  • Using old RAID configured disk after new disk has been used in the controller

    - by Narendra
    I have a Dell PowerEdge T100 server with a Dell SAS 6 controller and two hard disks in RAID 1. Last week the server died, taking one of the RAID 1 hard disks with it. We sent the server for repair, and the problem with the PSU was fixed. But the repair guys also checked the RAID controller by configuring a new RAID with their own test hard disk. Now, if I install one working RAID 1 disk and one new disk, will the RAID controller let me continue my old RAID 1, resync onto the new disk, and carry on? What I fear is that the RAID controller will expect the test hard disk from the repair guys, forcing me to reconfigure RAID 1 and wipe the working disk. If so, do I have to back up the working disk, reconfigure RAID 1, and reinstall? Or is there a better way? Note: I'm using the Dell SAS configuration utility (press Ctrl+C after BIOS) to manage the RAID.


  • misaligned raid partition in Ubuntu 10.04

    - by Linux Jedi
    I attached two identical hard drives to my Linux machine. Then, using GParted, I formatted the first 1024 MB at the beginning of each drive as Linux swap space. Then I went into System → Administration → Disk Utility, and in there chose File → Create → RAID Array. I selected the remaining space on each of the two identical hard drives and created a striped RAID array. After the array was created, a warning message appeared: "The partition is misaligned by 522240 bytes. This may result in very poor performance. Repartitioning is suggested." What do I do now? As far as I can tell, the partitions are identical.
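
    A way to verify and fix the alignment from a terminal (a sketch; device and partition numbers are examples, and recreating a partition destroys its contents):

      # Does partition 2 start on an optimally aligned boundary?
      sudo parted /dev/sda align-check optimal 2

      # Recreating the data partition starting at 1MiB satisfies common alignment rules
      sudo parted -a optimal /dev/sda mkpart primary 1MiB 100%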


  • slow software raid

    - by Jure1873
    I've got software RAID 1 for / and /home, and it seems I'm not getting the right speed out of it. Reading from md0 I get around 100 MB/sec. Reading from sda or sdb I get around 95-105 MB/sec. I thought I would get more speed (while reading data) from two drives. I don't know what the problem is. I'm using kernel 2.6.31-18.

      hdparm -tT /dev/md0
      /dev/md0:
       Timing cached reads:   2078 MB in  2.00 seconds = 1039.72 MB/sec
       Timing buffered disk reads:  304 MB in  3.01 seconds = 100.96 MB/sec

      hdparm -tT /dev/sda
      /dev/sda:
       Timing cached reads:   2084 MB in  2.00 seconds = 1041.93 MB/sec
       Timing buffered disk reads:  316 MB in  3.02 seconds = 104.77 MB/sec

      hdparm -tT /dev/sdb
      /dev/sdb:
       Timing cached reads:   2150 MB in  2.00 seconds = 1075.94 MB/sec
       Timing buffered disk reads:  302 MB in  3.01 seconds = 100.47 MB/sec

    Edit: RAID 1
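
    This is expected behavior rather than a fault: md RAID 1 serves a single sequential stream from one mirror and only balances separate requests across disks. Two concurrent readers show the benefit (a sketch; file paths are examples):

      # Run two sequential readers at once; md can serve each from a different mirror
      dd if=/home/big1.iso of=/dev/null bs=1M &
      dd if=/home/big2.iso of=/dev/null bs=1M &
      wait   # aggregate throughput should approach the sum of both drives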


  • Replacing a non-failing drive in a RAID-0 array

    - by TallFurryMan
    I have a Windows 7 machine booting from a RAID-0 pair of 500GB disks, controlled by an ICH9R. One of them was indicating an end-to-end SMART failure, so I added a spare disk as a temporary workaround before receiving another to replace the failing one. The RAID-0 rebuilt onto the spare and dropped the failing disk from the array, as expected. Now that I have received the new drive, what are my options for reintegrating it into the array? My first thought was to simply clone the temporary disk to the new one while the array is offline, but shouldn't there be a way to force a second rebuild, just as if the temporary drive had a warning, and drop the temporary one from the array?
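
    If the ICH9R option ROM offers no way to force a second migration, the offline clone mentioned above is straightforward from a Linux live CD (a sketch; /dev/sdb as the temporary disk and /dev/sdc as the new one are examples, and the command overwrites the target, so double-check device names):

      # Clone the temporary member onto the new drive, keeping a resume log
      sudo ddrescue -f /dev/sdb /dev/sdc /root/clone.log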


  • Defeating the RAID5 write hole with ZFS (but not RAID-Z) [closed]

    - by Michael Shick
    I'm setting up a long-term storage system for keeping personal backups and archives. I plan to run RAID5, starting with a relatively small array and adding devices over time to expand storage. I may also want to convert to RAID6 down the road when the array gets large. Linux md is a perfect fit for this use case since it allows both of the changes I want on a live array, and performance isn't at all important. Low cost is also great. Now, I also want to defend against file corruption, so it looked like RAID-Z1 would be a good fit, but evidently I would only be able to add whole additional RAID-Z1 sets rather than individual drives. I want to be able to add drives one at a time, and I don't want to have to give up another device for parity with every expansion. So at this point, it looks like I'll be using a plain ZFS filesystem on top of an md RAID5 array. That brings me to my primary question: Will ZFS be able to correct or at least detect corruption resulting from the RAID5 write hole? Additionally, any other caveats or advice for such a setup are welcome. I'll probably be using Debian, but I'll definitely be using Linux since I'm familiar with it, so that means only as new a version of ZFS as is available for Linux (via ZFS-FUSE or so).
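
    Partly answering the primary question: on a single vdev, ZFS checksums will detect corruption but have nothing to repair from unless extra copies are kept. A sketch of the md-backed pool (the pool name is a placeholder; syntax as in ZFS-FUSE-era releases):

      # Create a pool on the md RAID5 device; ZFS sees a single vdev, so no self-healing yet
      sudo zpool create tank /dev/md0

      # Store two copies of every block so detected errors can also be corrected (2x space cost)
      sudo zfs set copies=2 tank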


  • Assembling Software RAID in Live CD for data recovery

    - by Maletor
    I need help recovering some data that's on my RAID, which is on LVM on my server running Ubuntu. What happened is that I deleted the logical volume that controlled my swap space, which was on a partition across drives sda2, sdb2, sdc2, and sdd2 in RAID1. This foobared my whole system for one reason or another. Booting leaves me at grub rescue with an error saying that it is an unknown filesystem. When I boot a live CD I can see my RAID arrays, and I can even start them up. However, they don't appear to be mounted anywhere, so I can't see the data. I am in the live CD now and have done sudo apt-get install mdadm lvm2, so it should be possible to mount them correctly; I just can't see why it isn't. Any help is appreciated. Here is some output. By the way, there are 3 RAIDs: 1) /boot, 100MB, RAID1; 2) swap, 10GB, RAID1; 3) root, 990GB, RAID5.

      ubuntu@ubuntu:~$ df -h
      Filesystem            Size  Used Avail Use% Mounted on
      aufs                  124M  101M   18M  86% /
      none                  2.0G  324K  2.0G   1% /dev
      /dev/sde1             2.0G  826M  1.2G  42% /cdrom
      /dev/loop0            667M  667M     0 100% /rofs
      none                  2.0G  164K  2.0G   1% /dev/shm
      tmpfs                 2.0G   28K  2.0G   1% /tmp
      none                  2.0G   92K  2.0G   1% /var/run
      none                  2.0G     0  2.0G   0% /var/lock
      none                  2.0G     0  2.0G   0% /lib/init/rw
      /dev/md1               91M   73M   15M  84% /media/5ac3dbf1-a6c5-409c-96ae-edc6e27992c7

      ubuntu@ubuntu:~$ cat /etc/fstab
      aufs / aufs rw 0 0
      tmpfs /tmp tmpfs nosuid,nodev 0 0
      /dev/sda2 swap swap defaults 0 0
      /dev/sdb2 swap swap defaults 0 0
      /dev/sdc2 swap swap defaults 0 0
      /dev/sdd2 swap swap defaults 0 0
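
    Starting the arrays is only half the job in a live session; the LVM stack on top of the RAID5 has to be activated and mounted by hand (a sketch; if the arrays are already running, skip straight to vgscan, and take the real VG/LV names from its output, as the ones below are placeholders):

      sudo mdadm --assemble --scan        # start every array found in on-disk superblocks
      sudo vgscan                         # discover volume groups on the md devices
      sudo vgchange -ay                   # activate their logical volumes
      sudo lvs                            # note the root LV's actual name
      sudo mount /dev/mapper/vg0-root /mnt    # then browse the data under /mnt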


  • Optimizing Disk I/O & RAID on Windows SQL Server 2005

    - by David
    I've been monitoring our SQL Server for a while and have noticed that I/O hits 100% every so often in Task Manager and Perfmon. I have normally been able to correlate this spike with SUSPENDED processes in SQL Server Management Studio when I execute "exec sp_who2". The RAID controller is managed by LSI MegaRAID Storage Manager. We have the following setup: the system drive (Windows) on RAID 1 with two 280GB drives, and SQL on a RAID 10 (two mirrored pairs of 280GB drives in two different spans). This is a database that is hammered during the day but is pretty inactive at night. The DB size is currently about 13GB, and it is used by approximately 200 (and growing) users a day. I have a couple of ideas I'm toying with: 1) checking indexes and reindexing some tables; 2) adding an additional RAID 1 (with two new, smaller HDs) and moving SQL's log data file (LDF) onto the new RAID. For #2, my question is this: would we really be increasing disk performance (I/O) by moving data off the RAID 10 onto a RAID 1? RAID 10 obviously has better performance than RAID 1. Furthermore, SQL must write to the transaction logs before writing to the database. But on the flip side, we'll be reducing the amount of data written to the RAID 10, which is where all of the "meat" is, thereby increasing that RAID's performance for read requests. Is there any way to find out what our current limiting factor is (the drives vs. the RAID controller)? If the limiting factor is the drives, then maybe adding the additional RAID 1 makes sense. But if the limiting factor is the controller itself, then I think we're approaching this the wrong way. Finally, are we just wasting our time? Should we instead focus our efforts on #1 (reindexing tables, reducing network latency where possible, etc.)?


  • Windows Software RAID 5 Drive Failure Notification

    - by Wayne Hartman
    I plan on creating a Windows software RAID 5 array but need to know when a drive goes bad. I don't want to have to check the server all the time, so how can I have an email sent when a drive goes kaput or otherwise has problems? Keying off the event log would be OK, but how does one set up notifications on it when the exact event ID(s) may not be known?


  • Force RAID to read "exiled" disk?

    - by user197015
    We have a RAID 6 array (Infortrend EonStor DS S16F) that recently had two disks fail. Immediately before replacing these two disks, a third, good disk was accidentally ejected from the array. After reinserting this disk, it is marked as "exiled" by the array's firmware, so even after replacing the two failed disks with new ones, the array refuses to rebuild the logical volume and remains inaccessible. Since the temporarily ejected disk is still functional and nothing has been written to the array since it was ejected, it seems that it should theoretically be possible to recover all the data on the array. But how can we convince the array to use the data from the "exiled" disk? Thanks for any help or advice you can offer.


  • RAID 0 Volatile Volume Cache Mode configuration

    - by SnippetSpace
    I discovered that IRST has an option to set a cache mode for my three-SSD RAID 0 array. I've read the documentation from Intel and have some questions: Are there any overall benefits/risks to enabling a cache mode? As I'm on a laptop, would write-back be recommended? I read that it increases the chance of data loss on power interruption. What is the difference between how Windows handles data integrity and how the Intel driver does? Read-only mode seems to have the benefit of faster reads; does it have any downsides? Thanks for your help, guys!


  • Access Western Digital My Book World II RAID array on my Ubuntu Linux

    - by ZeDalaye
    Hi, my WD My Book World II (Blue Rings) NAS has overheated; I think the motherboard is dead. I extracted the disks and plugged them into my desktop PC running Ubuntu Linux. The disks seem to be alive: they are spinning and the BIOS recognizes them, but Ubuntu is not able to boot as long as these drives are plugged in. After a few minutes I get an initramfs shell explaining that the root disk is not available. I suspect that one of the WD drives took precedence over my system disk? Assuming Ubuntu can be made to boot and can see my Western Digital disks, is it possible to access the RAID 0 array? How? Many thanks for your help. Regards, -- Pierre Yager
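
    The My Book World II builds its volumes with Linux md, so once the boot-order problem is fixed, a desktop Ubuntu can usually assemble the array directly (a sketch; the partition numbers are examples, so check what --examine reports first):

      # Inspect the RAID metadata the NAS left on each disk
      sudo mdadm --examine /dev/sdb4 /dev/sdc4

      # Assemble the data array and mount it read-only for recovery
      sudo mdadm --assemble /dev/md0 /dev/sdb4 /dev/sdc4
      sudo mount -o ro /dev/md0 /mnt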


  • Mac OS X Server 10.6 - Apple's software mirrored RAID worth it?

    - by Arko
    Hi, I am installing an Intel Xserve (quad-core Xeon) with Snow Leopard Server (10.6) on two 80GB 7200rpm SATA HDs. I created a mirrored RAID set using Disk Utility with those two drives, and all went fine. I then asked myself whether this is really a good idea. I know that a hardware RAID system would be better, but what about this software RAID? Do you have any feedback on this? Will it work fine if one HD breaks down? Does this affect performance? [UPDATE] In short: hardware RAID is better than software RAID, which is better than none. Thank you all for the answers; they were very helpful, especially Gordon's script to monitor failures, as Apple's software RAID is pretty silent about a drive failure.
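
    In the same spirit as the monitoring script mentioned in the update, a minimal cron-able check (a sketch, assuming working local mail; the recipient address is a placeholder):

      #!/bin/sh
      # Mail an alert whenever the AppleRAID set reports anything other than Online
      STATUS=$(diskutil appleraid list | awk '/Status:/ {print $2; exit}')
      if [ "$STATUS" != "Online" ]; then
          echo "AppleRAID status: $STATUS" | mail -s "Xserve RAID alert" admin@example.com
      fi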


  • Monitor LSI 3ware raid controller on ESXi

    - by aseq
    This concerns a server that runs ESXi (v. 4.x or 5.x) installed on drives that are configured as a RAID 10 using an LSI 3ware 9750 RAID controller. I would like to know if there is a way to monitor the LSI 3ware series of controllers, in particular the 9750, through ESXi, and hopefully also run the monitoring daemon LSI provides. I know you can set up a cron job to execute tw_cli over SSH on the ESXi server, but that's not really ideal. I am not using vCenter, by the way. It would be nice to have more than just monitoring working, since the 3ware software has a very useful web client besides tw_cli.
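
    Failing a native agent, the cron-over-SSH workaround can at least live on a separate monitoring host (a sketch; the tw_cli path on the ESXi side and the alert address are assumptions):

      #!/bin/sh
      # Poll the first controller over SSH and mail if any unit looks unhealthy
      OUT=$(ssh root@esxi-host /opt/3ware/tw_cli /c0 show)
      echo "$OUT" | grep -Eq 'DEGRADED|REBUILDING|INOPERABLE' && \
          echo "$OUT" | mail -s "3ware alert on esxi-host" admin@example.com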


  • Simultaneous read/write to RAID array slows server to a crawl

    - by Jeff Leyser
    Fairly beefy NFS/SMB server (32GB RAM, 2 quad-core Xeons) with an LSI MegaRAID 8888ELP controlling 12 drives configured into 3 different arrays. Five 2TB drives are grouped into a RAID 6 array. As expected, write performance to the array is slow. However, sustained, simultaneous read/write to the array (whether through NFS or locally) seems to practically block any other access to anything else on the controller. For example, if I do: cp /home/joe/BigFile /home/joe/BigFileCopy where BigFile is 20G, then even a simple ls /home/jane will take many tens of seconds to complete. In addition, an ls /backup will also take many tens of seconds, even though /backup is a different array on the same controller. As soon as the cp is done, everything is back to normal. cp /home/joe/BigFile /backup/BigFile does not exhibit this behavior; it happens only when reading from and writing to the same array.
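
    Two knobs worth checking before blaming the controller (a sketch; sda stands for whichever block device the OS sees for the virtual disk, and the values shown are starting points, not tuned answers):

      # Which I/O elevator is active? The bracketed name is the one in use
      cat /sys/block/sda/queue/scheduler

      # CFQ often keeps small reads responsive under a streaming read/write load
      echo cfq | sudo tee /sys/block/sda/queue/scheduler

      # Cap how much dirty data can accumulate before writeback throttles the writer
      echo 5 | sudo tee /proc/sys/vm/dirty_ratio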


  • I've created a software RAID on SUSE EL 10 and now I need to monitor it

    - by Thomas B.
    I have created a software RAID using the YaST2 GUI on SUSE ES 10/11. The RAID works great and it's a RAID 5. I have 5 drives: they are cheap 2TB cases that have two 1TB drives (Serial ATA) in each case, and I connect them via eSATA to the motherboard. The problem I have, as this is "cheap" storage, is that when one of the 5 drives goes out on the RAID, I seem to have no logs of any issues, and it gets harder and harder to write to it until it dies. I use Samba to share the 4TB partition to the PCs in my home on a gigabit network. My question is this: are there any good (free) tools in Linux to monitor a RAID, or the drives in the RAID, to detect any problems? I haven't found any yet and was just wondering if some exist.
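
    mdadm itself ships a monitor for exactly this (a sketch; the mail address is a placeholder, and pairing it with smartd covers per-drive health too):

      # One-shot test: report the current state of all arrays by mail and exit
      sudo mdadm --monitor --scan --oneshot --test --mail=admin@example.com

      # Or run it as a daemon that mails on Fail/DegradedArray events
      sudo mdadm --monitor --scan --daemonise --mail=admin@example.com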


  • Zero-channel RAID for High-Performance MySQL Server (IBM ServeRAID 8k): Any Experience/Recommendation?

    - by prs563
    We are getting this IBM rack-mount server, and it has an IBM ServeRAID 8k storage controller with zero-channel RAID and 256MB of battery-backed cache. It can support RAID 10, which we need for our high-performance MySQL server with 4 x 15K RPM 300GB SAS HDDs. This is mission-critical and we want as much bandwidth and performance as possible. Is this a good card, or should we replace it with another IBM RAID card? The IBM ServeRAID 8k SAS controller option provides 256 MB of battery-backed 533 MHz DDR2 standard power memory in a fixed mounting arrangement. The device attaches directly to the IBM planar, which can provide full RAID capability.

      Manufacturer:          IBM
      Manufacturer Part #:   25R8064
      Cost Central Item #:   10025907
      Product Description:   IBM ServeRAID 8k SAS - Storage controller (zero-channel RAID) - RAID 0, 1, 5, 6, 10, 1E
      Device Type:           Storage controller (zero-channel RAID) - plug-in module
      Buffer Size:           256 MB
      Supported Devices:     Disk array (RAID)
      Max Storage Devices:   8
      RAID Level:            RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 1E
      Manufacturer Warranty: 1 year warranty


  • RAID 0 with two SSD drives only: should I use on-board/software RAID or a RAID card/controller?

    - by Wes
    I am looking at a two-drive-only SSD RAID 0 configuration and was wondering if I would get better performance/speed from a RAID controller card versus just using the software RAID on my motherboard. I have heard conflicting reports. Again, I only plan on running two SSD drives in a RAID 0 config. I have no problem spending the extra money for a good controller, but only if I am going to benefit performance-wise; otherwise I will just use the software RAID that my HP 180-T came with (Intel 3.33 GHz, 6-core, 12 GB of DDR3). I have a huge external drive for all storage and am not concerned about data loss, just looking for pure speed. And if a controller will benefit my performance, what type of card would one suggest?
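
    Rather than relying on conflicting reports, both setups can be measured with the same tools used elsewhere on this page (a sketch; device and mount names are examples, and the dd test writes to a scratch file, not a raw device):

      # Sequential read throughput of the assembled array
      sudo hdparm -tT /dev/md0    # or the controller's virtual disk, e.g. /dev/sda

      # Sequential write throughput, bypassing the page cache
      dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=8192 oflag=direct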


  • Explain difference in SQLIO numbers for RAID 0 versus RAID 5 over 6 disks

    - by markn
    When using the SQLIO benchmark tool on a 4-core Dell server with six 15k 450GB (fast) drives in RAID 0, we found the max throughput was 2MB per second. But when configured as RAID 5, we get 30MB per second. It seems that the RAID controller, a Dell PERC 5/i integrated controller, is maxing out the throughput per disk. With RAID 5, I expect to get a bump due to striping, but not a 15x difference. Like good programmers, we suspect the hardware, but we could be missing something. This is predominantly write traffic.
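
    For reproducibility, it helps to pin down the exact SQLIO invocation and run it identically against both configurations; a representative command line (a sketch; the test file, duration, and queue depth are arbitrary choices):

      REM 2-minute random 64KB write test, 2 threads, 8 outstanding I/Os, latency stats
      sqlio -kW -t2 -s120 -o8 -frandom -b64 -LS E:\testfile.dat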


  • Resize a RAID 1 volume on OS X Snow Leopard - how? (Note: software raid)

    - by Emmel
    I've scoured the Internet in search of an answer to this question, and as usual with OSX-related topics, I often don't find deep-dive technical explanations sufficient to feel confident doing dangerous things. Here is my question: I have a Mac Pro running OS X 10.6.2. I have, as my main root/boot disk, a RAID 1 volume called "Mirror1". Mirror1 is comprised of two 1 TB disks. Mirror1, however, is fixed at 640 GB. That's because I originally took a 640GB disk, bought a terabyte disk, and mirrored them (using diskutil appleraid enable); when it synced, I removed the 640GB disk and replaced it with a second 1 TB disk, and synced again. Voila! A single 640 GB disk replaced by two 1 TB disks in a mirror... Actually, no. There's still something missing from the equation: Mirror1 needs to be expanded from 640GB to 1 TB to match the partition sizes on each of those disks. How do I do this? Perhaps the diskutil output will help:

      -> diskutil list
      /dev/disk0
         #:                       TYPE NAME                    SIZE       IDENTIFIER
         0:      GUID_partition_scheme                        *1.0 TB     disk0
         1:                        EFI                         209.7 MB   disk0s1
         2:                 Apple_RAID                         999.9 GB   disk0s2
         3:                 Apple_Boot Boot OSX                134.2 MB   disk0s3
      /dev/disk1
         #:                       TYPE NAME                    SIZE       IDENTIFIER
         0:      GUID_partition_scheme                        *1.0 TB     disk1
         1:                        EFI                         209.7 MB   disk1s1
         2:                 Apple_RAID                         999.9 GB   disk1s2
         3:                 Apple_Boot Boot OSX                134.2 MB   disk1s3
      /dev/disk2
         #:                       TYPE NAME                    SIZE       IDENTIFIER
         0:      GUID_partition_scheme                        *640.1 GB   disk2
         1:                        EFI                         209.7 MB   disk2s1
         2:                  Apple_HFS Mac Disk 2              536.7 GB   disk2s2
         3:       Microsoft Basic Data BOOTCAMP                103.1 GB   disk2s3
      /dev/disk3
         #:                       TYPE NAME                    SIZE       IDENTIFIER
         0:                  Apple_HFS Mirror1                *639.8 GB   disk3

      -> diskutil appleraid list
      AppleRAID sets (1 found)
      ===============================================================================
      Name:                 Macintosh HD
      Unique ID:            1953F864-B474-4EB6-8E69-41834EBD0247
      Type:                 Mirror
      Status:               Online
      Size:                 639.8 GB (639791038464 Bytes)
      Rebuild:              manual
      Device Node:          disk3
      -------------------------------------------------------------------------------
      #  Device Node   UUID                                  Status
      -------------------------------------------------------------------------------
      0  disk1s2       25109BAE-5697-40EA-B612-0217851444F7  Online
      1  disk0s2       11B83AB0-8148-4DB6-8761-DEF08C855F8D  Online
      ===============================================================================

    Thanks in advance.
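
    Before anything destructive, it may be worth asking diskutil what it is willing to do with the set (a sketch; whether a live resize works on an AppleRAID volume under 10.6 is an assumption to verify, which is why the non-destructive limits query comes first):

      # Show the minimum and maximum sizes this volume could be resized to
      diskutil resizeVolume disk3 limits

      # If the reported maximum is about 1 TB, attempt the live grow
      sudo diskutil resizeVolume disk3 999G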

