Search Results

Search found 22986 results on 920 pages for 'array difference'.

  • HP SmartArray P400: How to repair failed logical drive?

    - by TegtmeierDE
    I have an HP server with a Smart Array P400 controller (incl. 256 MB cache with battery backup) and a logical drive whose failed physical drive I replaced, but which does not rebuild. This is how it looked when I detected the error:

        ~# /usr/sbin/hpacucli ctrl slot=0 show config

        Smart Array P400 in Slot 0 (Embedded)    (sn: XXXX)

           array A (SATA, Unused Space: 0 MB)
              logicaldrive 1 (698.6 GB, RAID 1, OK)
              physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 750 GB, OK)
              physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 750 GB, OK)

           array B (SATA, Unused Space: 0 MB)
              logicaldrive 2 (2.7 TB, RAID 5, Failed)
              physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 750 GB, OK)
              physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 750 GB, OK)
              physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SATA, 750 GB, OK)
              physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SATA, 750 GB, Failed)
              physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SATA, 750 GB, OK)

           unassigned
              physicaldrive 2I:1:8 (port 2I:box 1:bay 8, SATA, 750 GB, OK)
        ~#

    I thought I had drive 2I:1:8 configured as a spare for array A and array B, but it seems this was not the case :-(. I noticed the problem due to I/O errors on the host, even though only one physical drive of the RAID 5 had failed. Does someone know why this could happen? The logical drive should go into "Degraded" mode but still be fully accessible from the host OS!? I first tried to add the unassigned drive 2I:1:8 as a spare to logical drive 2, but this was not possible:

        ~# /usr/sbin/hpacucli ctrl slot=0 array B add spares=2I:1:8

        Error: This operation is not supported with the current configuration.
               Use the "show" command on devices to show additional details
               about the configuration.
        ~#

    Interestingly, it is possible to add the unassigned drive to the first array without problems. I thought maybe the controller put the array into "failed" state due to the missing spare and protects failed arrays from modification. So I tried to re-enable the logical drive (in order to add the spare afterwards):

        ~# /usr/sbin/hpacucli ctrl slot=0 ld 2 modify reenable

        Warning: Any previously existing data on the logical drive may
                 not be valid or recoverable. Continue? (y/n) y

        Error: This operation is not supported with the current configuration.
               Use the "show" command on devices to show additional details
               about the configuration.
        ~#

    But as you can see, re-enabling the logical drive was not possible either. Next I replaced the failed drive by hot-swapping it with the unassigned drive. The status now looks like this:

        ~# /usr/sbin/hpacucli ctrl slot=0 show config

        Smart Array P400 in Slot 0 (Embedded)    (sn: XXXX)

           array A (SATA, Unused Space: 0 MB)
              logicaldrive 1 (698.6 GB, RAID 1, OK)
              physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 750 GB, OK)
              physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 750 GB, OK)

           array B (SATA, Unused Space: 0 MB)
              logicaldrive 2 (2.7 TB, RAID 5, Failed)
              physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 750 GB, OK)
              physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 750 GB, OK)
              physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SATA, 750 GB, OK)
              physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SATA, 750 GB, OK)
              physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SATA, 750 GB, OK)
        ~#

    The logical drive is still not accessible. Why is it not rebuilding? What can I do?

    FYI, this is the configuration of my controller:

        ~# /usr/sbin/hpacucli ctrl slot=0 show

        Smart Array P400 in Slot 0 (Embedded)
           Bus Interface: PCI
           Slot: 0
           Serial Number: XXXX
           Cache Serial Number: XXXX
           RAID 6 (ADG) Status: Enabled
           Controller Status: OK
           Chassis Slot:
           Hardware Revision: Rev E
           Firmware Version: 5.22
           Rebuild Priority: Medium
           Expand Priority: Medium
           Surface Scan Delay: 15 secs
           Surface Analysis Inconsistency Notification: Disabled
           Raid1 Write Buffering: Disabled
           Post Prompt Timeout: 0 secs
           Cache Board Present: True
           Cache Status: OK
           Accelerator Ratio: 25% Read / 75% Write
           Drive Write Cache: Disabled
           Total Cache Size: 256 MB
           No-Battery Write Cache: Disabled
           Cache Backup Power Source: Batteries
           Battery/Capacitor Count: 1
           Battery/Capacitor Status: OK
           SATA NCQ Supported: True
        ~#

    Thanks for your help in advance.
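
    For what it's worth, two read-only diagnostics that might narrow down why the rebuild is blocked (hedged; same hpacucli syntax as above) are the per-logical-drive and per-drive detail views, which usually name the blocking condition:

        ~# /usr/sbin/hpacucli ctrl slot=0 ld 2 show
        ~# /usr/sbin/hpacucli ctrl slot=0 pd all show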

  • How to get rid of a stubborn 'removed' device in mdadm

    - by T.J. Crowder
    One of my server's drives failed, so I removed the failed drive from all three relevant arrays, had the drive swapped out, and then added the new drive to the arrays. Two of the arrays worked perfectly. The third added the drive back as a spare, and there's an odd "removed" entry in the mdadm details. I tried both mdadm /dev/md2 --remove failed and mdadm /dev/md2 --remove detached as suggested here and here; neither complained, but neither had any effect, either. Does anyone know how I can get rid of that entry and get the drive added back properly? (Ideally without resyncing a third time; I've already had to do it twice and it takes hours. But if that's what it takes, that's what it takes.) The new drive is /dev/sda, the relevant partition is /dev/sda3. Here's the detail on the array:

        # mdadm --detail /dev/md2
        /dev/md2:
                Version : 0.90
          Creation Time : Wed Oct 26 12:27:49 2011
             Raid Level : raid1
             Array Size : 729952192 (696.14 GiB 747.47 GB)
          Used Dev Size : 729952192 (696.14 GiB 747.47 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 2
            Persistence : Superblock is persistent

            Update Time : Tue Nov 12 17:48:53 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 1

                   UUID : 2fdbf68c:d572d905:776c2c25:004bd7b2 (local to host blah)
                 Events : 0.34665

            Number   Major   Minor   RaidDevice   State
               0       0        0        0        removed
               1       8       19        1        active sync   /dev/sdb3
               2       8        3        -        spare         /dev/sda3

    If it's relevant, it's a 64-bit server. It normally runs Ubuntu, but right now I'm in the data centre's "rescue" OS, which is Debian 7 (wheezy). The "removed" entry was there the last time I was in Ubuntu (it won't, currently, boot from the disk), so I don't think it's some Ubuntu/Debian conflict (and they are, of course, closely related).

    Update: Having done extensive tests with test devices on a local machine, I'm just plain getting anomalous behavior from mdadm with this array. For instance, with /dev/sda3 removed from the array again, I did this:

        mdadm /dev/md2 --grow --force --raid-devices=1

    And that got rid of the "removed" device, leaving me just with /dev/sdb3. Then I nuked /dev/sda3 (wrote a file system to it, so it didn't have the raid fs anymore), then:

        mdadm /dev/md2 --grow --raid-devices=2

    ...which gave me an array with /dev/sdb3 in slot 0 and "removed" in slot 1, as you'd expect. Then

        mdadm /dev/md2 --add /dev/sda3

    ...added it, as a spare again. (Another 3.5 hours down the drain.) So with the rebuilt spare in the array, given that mdadm's man page says

        RAID-DEVICES CHANGES
        ...
        When the number of devices is increased, any hot spares that
        are present will be activated immediately.

    ...I grew the array to three devices, to try to activate the "spare":

        mdadm /dev/md2 --grow --raid-devices=3

    What did I get? Two "removed" devices, and the spare. And yet when I do this with a test array, I don't get this behavior. So I nuked /dev/sda3 again, used it to create a brand-new array, and am copying the data from the old array to the new one:

        rsync -r -t -v --exclude 'lost+found' --progress /mnt/oldarray/* /mnt/newarray

    This will, of course, take hours. Hopefully when I'm done, I can stop the old array entirely, nuke /dev/sdb3, and add it to the new array. Hopefully, it won't get added as a spare!
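
    One sequence that is often suggested for a disk that keeps coming back as a spare (a hedged sketch: it assumes a stale md superblock on /dev/sda3 is what confuses things, and it costs another resync):

        # remove the spare, wipe its stale md metadata, then add it back
        mdadm /dev/md2 --remove /dev/sda3
        mdadm --zero-superblock /dev/sda3
        mdadm /dev/md2 --add /dev/sda3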

  • Degraded RAID5 and no md superblock on one of the remaining drives

    - by ark1214
    This is actually on a QNAP TS-509 NAS. The RAID is basically a Linux RAID. The NAS was configured with RAID 5 across 5 drives (/dev/md0 with /dev/sd[abcde]3). At some point, /dev/sde failed and the drive was replaced. While rebuilding (not yet completed), the NAS rebooted itself and /dev/sdc dropped out of the array. Now the array can't start because essentially 2 drives have dropped out. I disconnected /dev/sde and hoped that /dev/md0 could resume in degraded mode, but no luck. Further investigation shows that /dev/sdc3 has no md superblock. The data should be good, since the array was unable to assemble after /dev/sdc dropped off. All the searches I have done show how to reassemble the array assuming 1 bad drive. But I think I just need to restore the superblock on /dev/sdc3, and that should bring the array up in degraded mode, which will allow me to back up my data and then proceed with rebuilding by adding /dev/sde. Any help would be greatly appreciated.

    mdstat does not show /dev/md0:

        # cat /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
        md5 : active raid1 sdd2[2](S) sdc2[3](S) sdb2[1] sda2[0]
              530048 blocks [2/2] [UU]

        md13 : active raid1 sdd4[3] sdc4[2] sdb4[1] sda4[0]
              458880 blocks [5/4] [UUUU_]
              bitmap: 40/57 pages [160KB], 4KB chunk

        md9 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
              530048 blocks [5/4] [UUUU_]
              bitmap: 33/65 pages [132KB], 4KB chunk

    mdadm shows /dev/md0 is still there:

        # mdadm --examine --scan
        ARRAY /dev/md9 level=raid1 num-devices=5 UUID=271bf0f7:faf1f2c2:967631a4:3c0fa888
        ARRAY /dev/md5 level=raid1 num-devices=2 UUID=0d75de26:0759d153:5524b8ea:86a3ee0d spares=2
        ARRAY /dev/md0 level=raid5 num-devices=5 UUID=ce3e369b:4ff9ddd2:3639798a:e3889841
        ARRAY /dev/md13 level=raid1 num-devices=5 UUID=7384c159:ea48a152:a1cdc8f2:c8d79a9c

    With /dev/sde removed, here is the mdadm examine output showing that sdc3 has no md superblock:

        # mdadm --examine /dev/sda3
        /dev/sda3:
                  Magic : a92b4efc
                Version : 00.90.00
                   UUID : ce3e369b:4ff9ddd2:3639798a:e3889841
          Creation Time : Sat Dec 8 15:01:19 2012
             Raid Level : raid5
          Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
             Array Size : 5854278400 (5583.08 GiB 5994.78 GB)
           Raid Devices : 5
          Total Devices : 4
        Preferred Minor : 0

            Update Time : Sat Dec 8 15:06:17 2012
                  State : active
         Active Devices : 4
        Working Devices : 4
         Failed Devices : 1
          Spare Devices : 0
               Checksum : d9e9ff0e - correct
                 Events : 0.394

                 Layout : left-symmetric
             Chunk Size : 64K

              Number   Major   Minor   RaidDevice   State
        this     0       8        3        0        active sync   /dev/sda3
           0     0       8        3        0        active sync   /dev/sda3
           1     1       8       19        1        active sync   /dev/sdb3
           2     2       8       35        2        active sync   /dev/sdc3
           3     3       8       51        3        active sync   /dev/sdd3
           4     4       0        0        4        faulty removed

        [~] # mdadm --examine /dev/sdb3
        /dev/sdb3:
                  Magic : a92b4efc
                Version : 00.90.00
                   UUID : ce3e369b:4ff9ddd2:3639798a:e3889841
          Creation Time : Sat Dec 8 15:01:19 2012
             Raid Level : raid5
          Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
             Array Size : 5854278400 (5583.08 GiB 5994.78 GB)
           Raid Devices : 5
          Total Devices : 4
        Preferred Minor : 0

            Update Time : Sat Dec 8 15:06:17 2012
                  State : active
         Active Devices : 4
        Working Devices : 4
         Failed Devices : 1
          Spare Devices : 0
               Checksum : d9e9ff20 - correct
                 Events : 0.394

                 Layout : left-symmetric
             Chunk Size : 64K

              Number   Major   Minor   RaidDevice   State
        this     1       8       19        1        active sync   /dev/sdb3
           0     0       8        3        0        active sync   /dev/sda3
           1     1       8       19        1        active sync   /dev/sdb3
           2     2       8       35        2        active sync   /dev/sdc3
           3     3       8       51        3        active sync   /dev/sdd3
           4     4       0        0        4        faulty removed

        [~] # mdadm --examine /dev/sdc3
        mdadm: No md superblock detected on /dev/sdc3.

        [~] # mdadm --examine /dev/sdd3
        /dev/sdd3:
                  Magic : a92b4efc
                Version : 00.90.00
                   UUID : ce3e369b:4ff9ddd2:3639798a:e3889841
          Creation Time : Sat Dec 8 15:01:19 2012
             Raid Level : raid5
          Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
             Array Size : 5854278400 (5583.08 GiB 5994.78 GB)
           Raid Devices : 5
          Total Devices : 4
        Preferred Minor : 0

            Update Time : Sat Dec 8 15:06:17 2012
                  State : active
         Active Devices : 4
        Working Devices : 4
         Failed Devices : 1
          Spare Devices : 0
               Checksum : d9e9ff44 - correct
                 Events : 0.394

                 Layout : left-symmetric
             Chunk Size : 64K

              Number   Major   Minor   RaidDevice   State
        this     3       8       51        3        active sync   /dev/sdd3
           0     0       8        3        0        active sync   /dev/sda3
           1     1       8       19        1        active sync   /dev/sdb3
           2     2       8       35        2        active sync   /dev/sdc3
           3     3       8       51        3        active sync   /dev/sdd3
           4     4       0        0        4        faulty removed

    fdisk output shows the /dev/sdc3 partition is still there:

        [~] # fdisk -l

        Disk /dev/sdx: 128 MB, 128057344 bytes
        8 heads, 32 sectors/track, 977 cylinders
        Units = cylinders of 256 * 512 = 131072 bytes

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdx1               1           8        1008   83  Linux
        /dev/sdx2               9         440       55296   83  Linux
        /dev/sdx3             441         872       55296   83  Linux
        /dev/sdx4             873         977       13440    5  Extended
        /dev/sdx5             873         913        5232   83  Linux
        /dev/sdx6             914         977        8176   83  Linux

        Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1          66      530113+  83  Linux
        /dev/sda2              67         132      530145   82  Linux swap / Solaris
        /dev/sda3             133      182338  1463569695   83  Linux
        /dev/sda4          182339      182400      498015   83  Linux

        Disk /dev/sda4: 469 MB, 469893120 bytes
        2 heads, 4 sectors/track, 114720 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes

        Disk /dev/sda4 doesn't contain a valid partition table

        Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *           1          66      530113+  83  Linux
        /dev/sdb2              67         132      530145   82  Linux swap / Solaris
        /dev/sdb3             133      182338  1463569695   83  Linux
        /dev/sdb4          182339      182400      498015   83  Linux

        Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1               1          66      530125   83  Linux
        /dev/sdc2              67         132      530142   83  Linux
        /dev/sdc3             133      182338  1463569693   83  Linux
        /dev/sdc4          182339      182400      498012   83  Linux

        Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdd1               1          66      530125   83  Linux
        /dev/sdd2              67         132      530142   83  Linux
        /dev/sdd3             133      243138  1951945693   83  Linux
        /dev/sdd4          243139      243200      498012   83  Linux

        Disk /dev/md9: 542 MB, 542769152 bytes
        2 heads, 4 sectors/track, 132512 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes

        Disk /dev/md9 doesn't contain a valid partition table

        Disk /dev/md5: 542 MB, 542769152 bytes
        2 heads, 4 sectors/track, 132512 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes

        Disk /dev/md5 doesn't contain a valid partition table
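
    Since all the surviving superblocks agree on the geometry, one approach that is often suggested (a hedged sketch, not a guaranteed fix: it rewrites the superblocks, so image the disks or work on overlays first) is to re-create the array in place with the exact original parameters, the original device order, and the failed slot marked missing:

        mdadm --create /dev/md0 --assume-clean \
              --metadata=0.90 --level=5 --raid-devices=5 --chunk=64 \
              --layout=left-symmetric \
              /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 missing

    --assume-clean prevents an initial resync, and the device order must match the RaidDevice numbers in the --examine output above; a wrong order or chunk size produces garbage, which is why a read-only fsck is the first thing to run afterwards.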

  • Reconstructing the disk order in RAID 6 with 7 disks

    - by rkotulla
    A little background to this question first: I am running a RAID-6 within a QNAP TS-869L external RAID/NAS system. I started with 5 disks of 3 TB each back in the day, and later added another 2 disks of 3 TB to the RAID. The QNAP internals handled the growing and re-syncing etc., and everything seemed to be perfectly fine. About 2 weeks ago, I had one of the disks (disk #5; disk #2 had gone bad in the meantime) fail, and somehow (I have no idea why) disks 1 and 2 also got kicked out of the array. I replaced disk #5, but the RAID didn't start working again. After some calls to QNAP technical support, they re-created the array (using mdadm --create --force --assume-clean ...), but the resulting array couldn't find a filesystem, and I was kindly referred to a data recovery company that I can't afford. After some digging through old log files, resetting the disks to factory default, etc., I found a few errors that were made during this re-create; I wish I still had some of the original metadata, but unfortunately I don't (I definitely learned that lesson). I'm currently at the point where I know the correct chunk size (64K) and metadata version (1.0; the factory default was 0.9, but from what I read 0.9 doesn't handle disks over 2 TB, and mine are 3 TB), and I can now find the ext4 filesystem that should be on the disks. The only variable left to determine is the right disk order! I started using the description found in answer #4 of "Recover RAID 5 data after created new array instead of re-using", but am a little confused about what the order should be for a proper RAID-6. RAID-5 is pretty well documented in a number of places, but RAID-6 much less so. Also, does the layout, i.e. the distribution of parity and data chunks across the disks, change after growing the array from 5 to 7 disks, or does the re-sync re-organize them the way a native 7-disk RAID-6 would have been laid out?

    Thanks! Some more mdadm output that might be helpful:

    mdadm version:

        [~] # mdadm --version
        mdadm - v2.6.3 - 20th August 2007

    mdadm details from one of the disks in the array:

        [~] # mdadm --examine /dev/sda3
        /dev/sda3:
                  Magic : a92b4efc
                Version : 1.0
            Feature Map : 0x0
             Array UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
                   Name : 0
          Creation Time : Tue Jun 10 10:27:58 2014
             Raid Level : raid6
           Raid Devices : 7

          Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
             Array Size : 29286975360 (13965.12 GiB 14994.93 GB)
              Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
           Super Offset : 5857395368 sectors
                  State : clean
            Device UUID : 7c572d8f:20c12727:7e88c888:c2c357af

            Update Time : Tue Jun 10 13:01:06 2014
               Checksum : d275c82d - correct
                 Events : 7036

             Chunk Size : 64K

             Array Slot : 0 (0, 1, failed, 3, failed, 5, 6)
            Array State : Uu_u_uu 2 failed

    mdadm details for the array in the current disk order (based on my best guess, reconstructed from old log files):

        [~] # mdadm --detail /dev/md0
        /dev/md0:
                Version : 01.00.03
          Creation Time : Tue Jun 10 10:27:58 2014
             Raid Level : raid6
             Array Size : 14643487680 (13965.12 GiB 14994.93 GB)
          Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
           Raid Devices : 7
          Total Devices : 5
        Preferred Minor : 0
            Persistence : Superblock is persistent

            Update Time : Tue Jun 10 13:01:06 2014
                  State : clean, degraded
         Active Devices : 5
        Working Devices : 5
         Failed Devices : 0
          Spare Devices : 0

             Chunk Size : 64K

                   Name : 0
                   UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
                 Events : 7036

            Number   Major   Minor   RaidDevice   State
               0       8        3        0        active sync   /dev/sda3
               1       8       19        1        active sync   /dev/sdb3
               2       0        0        2        removed
               3       8       51        3        active sync   /dev/sdd3
               4       0        0        4        removed
               5       8       99        5        active sync   /dev/sdg3
               6       8       83        6        active sync   /dev/sdf3

    Output from /proc/mdstat (md8, md9, and md13 are internally used RAIDs holding swap, etc.; the one I'm after is md0):

        [~] # more /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
        md0 : active raid6 sdf3[6] sdg3[5] sdd3[3] sdb3[1] sda3[0]
              14643487680 blocks super 1.0 level 6, 64k chunk, algorithm 2 [7/5] [UU_U_UU]

        md8 : active raid1 sdg2[2](S) sdf2[3](S) sdd2[4](S) sdc2[5](S) sdb2[6](S) sda2[1] sde2[0]
              530048 blocks [2/2] [UU]

        md13 : active raid1 sdg4[3] sdf4[4] sde4[5] sdd4[6] sdc4[2] sdb4[1] sda4[0]
              458880 blocks [8/7] [UUUUUUU_]
              bitmap: 21/57 pages [84KB], 4KB chunk

        md9 : active raid1 sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sda1[0] sdb1[1]
              530048 blocks [8/7] [UUUUUUU_]
              bitmap: 37/65 pages [148KB], 4KB chunk

        unused devices: <none>
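
    In case it helps others in the same spot, a hedged sketch of the brute-force way to find the order: re-create the array with --assume-clean for each candidate permutation (destructive to the superblocks, so only on images or dm overlays of the disks) and keep the one whose filesystem checks out read-only:

        mdadm --stop /dev/md0
        mdadm --create /dev/md0 --assume-clean --metadata=1.0 \
              --level=6 --raid-devices=7 --chunk=64 \
              /dev/sda3 /dev/sdb3 missing /dev/sdd3 missing /dev/sdg3 /dev/sdf3
        fsck.ext4 -n /dev/md0      # read-only check; repeat with the next order

    As far as I can tell, a completed reshape from 5 to 7 disks leaves the stripes laid out exactly as a natively created 7-disk RAID-6 (left-symmetric by default) would be, so the layout itself should not be an extra variable.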

  • What's the difference in content between the Ubuntu and Lubuntu Software Centers? Why aren't winners of the app showdown available in the latter?

    - by vasa1
    What's the difference in content between the Ubuntu and Lubuntu Software Centers? I've seen Are softwares installable on ubuntu also installable on lubuntu?, which indicates that the content should be the same. However, I looked for Ridual, OrthCal, Cuttlefish and MenuLibre. While all four are available in the Ubuntu Software Center, they aren't listed in the Lubuntu Software Center. Will they eventually be listed? Who decides? I'm asking because one of the criteria in the apps showdown, if I understood correctly, was desktop integration. If by "desktop" Unity is specifically implied, then it would make sense not to include them in the Lubuntu Software Center. Is that correct? I'm on 12.04.

  • Insert or Update using Oracle and PL/SQL

    - by Shane
    I have a PL/SQL function that performs an update/insert on an Oracle database to maintain a target total, and returns the difference between the existing value and the new value. Here is the code I have so far:

        FUNCTION calcTargetTotal(accountId varchar2, newTotal numeric) RETURN number is
            oldTotal   numeric(20,6);
            difference numeric(20,6);
        begin
            difference := 0;
            begin
                select value into oldTotal from target_total
                 where account_id = accountId
                   for update of value;
                if (oldTotal != newTotal) then
                    update target_total set value = newTotal
                     where account_id = accountId;
                    difference := newTotal - oldTotal;
                end if;
            exception
                when NO_DATA_FOUND then
                begin
                    difference := newTotal;
                    insert into target_total (account_id, value)
                    values (accountId, newTotal);
                -- sometimes a race condition occurs and this stmt fails
                -- in those cases try to update again
                exception
                    when DUP_VAL_ON_INDEX then
                    begin
                        difference := 0;
                        select value into oldTotal from target_total
                         where account_id = accountId
                           for update of value;
                        if (oldTotal != newTotal) then
                            update target_total set value = newTotal
                             where account_id = accountId;
                            difference := newTotal - oldTotal;
                        end if;
                    end;
                end;
            end;
            return difference;
        end calcTargetTotal;

    This works as expected in unit tests with multiple threads, never failing. However, when loaded on a live system we have seen it fail with a stack trace looking like this:

        ORA-01403: no data found
        ORA-00001: unique constraint () violated
        ORA-01403: no data found

    The line numbers (which I have removed since they are meaningless out of context) verify that the first select fails due to no data, the insert fails due to uniqueness, and the second select is failing with no data, which should be impossible. From what I have read in other threads, a MERGE statement is also not atomic and could suffer similar problems. Does anyone have any ideas how to prevent this from occurring?
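
    One pattern that is often suggested for this (a hedged sketch, not a drop-in replacement; table and column names as above): loop, try the update first, and only insert when the update touched no rows, falling back to the update again if a concurrent insert wins the race:

        -- sketch: race-tolerant upsert loop
        loop
            update target_total
               set value = newTotal
             where account_id = accountId;
            exit when SQL%ROWCOUNT > 0;    -- row existed and is now updated
            begin
                insert into target_total (account_id, value)
                values (accountId, newTotal);
                exit;                      -- we created the row
            exception
                when DUP_VAL_ON_INDEX then
                    null;                  -- another session inserted first: retry the update
            end;
        end loop;

    Computing the old-versus-new difference would still need the SELECT ... FOR UPDATE first, but the loop guarantees the sequence converges instead of falling through two exception handlers.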

  • How to do RLE (run length encoding) in C# on a byte array?

    - by CuriousCoder
    I am trying to XOR two bitmap files (their byte arrays) to produce a byte array that can be used to change image A into image B or vice versa. I am sending this over the network so I would like to do some basic compression before this happens. Is there a way to do RLE (run length encoding) in C# (using a built-in, or fast reliable 3rd party library) on a byte array for this purpose? Notes: If you are going to suggest an alternative to my approach please keep in mind that the decompression and transformation on the remote machine has to be as quick and efficient as possible.
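
    For what it's worth, plain RLE is only a few lines by hand; a minimal sketch (count/value pairs, runs capped at 255), which pays off exactly when the XOR'd diff is mostly zero runs:

        using System.Collections.Generic;

        static byte[] RleEncode(byte[] input)
        {
            var output = new List<byte>();
            int i = 0;
            while (i < input.Length)
            {
                byte value = input[i];
                int run = 1;                      // length of the run, capped at 255
                while (run < 255 && i + run < input.Length && input[i + run] == value)
                    run++;
                output.Add((byte)run);            // emit (count, value) pair
                output.Add(value);
                i += run;
            }
            return output.ToArray();
        }

    The built-in alternative is System.IO.Compression.DeflateStream (or GZipStream), which compresses zero-heavy XOR diffs well and decompresses quickly on the remote end.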

  • Better to wait for Drupal 7 to start learning Drupal, or is there no big difference from Drupal 6?

    - by artmania
    Hi friends, I'm going to start some projects in the next few weeks. I currently have my own custom CMS, but I want to go with an open-source CMS solution from now on, and as I researched, Drupal is COOL for any kind of project, from small scope to the most complex ones. I have never used Drupal before; I know nothing about it yet. Would it be worth working with Drupal 6 for now, or better to wait for Drupal 7? Is there any big difference in the way it works? I would feel silly spending days or weeks learning Drupal 6 now if, in a few months when the Drupal 7 beta is out, I face a new learning curve and trouble upgrading my current Drupal 6 projects to Drupal 7. Thanks a lot!! I appreciate any advice!

  • What is the difference in WCF when using KnownType and ServiceKnownType?

    - by Paul Speranza
    I have a service that returns an array of Animal, but the list can contain Cats, Dogs, etc., which all extend Animal. I know I need to use either the KnownType or ServiceKnownType attribute, on the entity class or the service class, respectively. What is the difference between the two attributes? I prefer ServiceKnownType because it is applied on the service, exactly where it is needed and called for, as opposed to KnownType, which is applied on my entity. To me, applying it on the entity class means knowing too far ahead how my entity class is being used. For now I have it on my entity and it works like a charm, but I am looking for guidance here as to best practices and usefulness. Thanks, Paul Speranza
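
    A sketch of where each attribute sits (class names borrowed from the question; the attributes themselves are the standard WCF ones):

        using System.Runtime.Serialization;
        using System.ServiceModel;

        // KnownType: declared on the data contract itself.
        [DataContract]
        [KnownType(typeof(Cat))]
        [KnownType(typeof(Dog))]
        public class Animal { }

        // ServiceKnownType: declared on the service contract instead.
        [ServiceContract]
        [ServiceKnownType(typeof(Cat))]
        [ServiceKnownType(typeof(Dog))]
        public interface IAnimalService
        {
            [OperationContract]
            Animal[] GetAnimals();
        }

    Functionally both end up feeding the same list of types to the DataContractSerializer; the difference is scope: KnownType applies wherever the contract is serialized, while ServiceKnownType applies only to operations on the contract it decorates.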

  • Convert byte array from Oracle RAW to System.Guid?

    - by Cory McCarty
    My app interacts with both Oracle and SQL Server databases using a custom data access layer written in ADO.NET using DataReaders. Right now I'm having a problem with the conversion between GUIDs (which we use for primary keys) and the Oracle RAW datatype. Inserts into Oracle are fine (I just use the ToByteArray() method on System.Guid). The problem is converting back to System.Guid when I load records from the database. Currently, I'm passing the byte array I get from ADO.NET into the constructor for System.Guid. This appears to be working, but the GUIDs that appear in the database do not correspond to the GUIDs I'm generating in this manner. I can't change the database schema or the query (since it's reused for SQL Server). I need code to convert the byte array from Oracle into the correct Guid.
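
    One hedged guess: Guid.ToByteArray() stores the first three Guid fields little-endian, so the same 16 bytes round-trip fine through new Guid(byte[]), but RAW values written in display order (by another tool, or by hand) come back scrambled. A sketch of a hypothetical helper that flips those groups on load:

        using System;

        static Guid GuidFromDisplayOrderRaw(byte[] raw)
        {
            byte[] b = (byte[])raw.Clone();
            Array.Reverse(b, 0, 4);  // Data1 (4 bytes)
            Array.Reverse(b, 4, 2);  // Data2 (2 bytes)
            Array.Reverse(b, 6, 2);  // Data3 (2 bytes)
            return new Guid(b);      // the last 8 bytes are already in order
        }

    Comparing Guid.ToString() on both sides against the RAW hex in the database should show quickly whether this byte-order flip is the mismatch.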

  • Performance: use a BinaryReader on a MemoryStream to read a byte array, or read directly?

    - by Virtlink
    I would like to know whether using a BinaryReader on a MemoryStream created from a byte array (byte[]) would reduce performance significantly. There is binary data I want to read, and I get that data as an array of bytes. I am currently deciding between two approaches to read the data, and will have to implement many reading methods accordingly. After each reading action I need the position right after the read data, and therefore I am considering using a BinaryReader. The first, non-BinaryReader approach:

        object Read(byte[] data, ref int offset);

    The second approach:

        object Read(BinaryReader reader);

    Such Read() methods will be called very often, in succession, on the same data until all of it has been read. So using a BinaryReader feels more natural, but does it have much impact on performance?
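
    To make the comparison concrete, a sketch of one primitive under each approach (hypothetical method names):

        using System;
        using System.IO;

        // Manual approach: BitConverter plus a caller-maintained offset.
        static int ReadInt32Manual(byte[] data, ref int offset)
        {
            int value = BitConverter.ToInt32(data, offset);
            offset += sizeof(int);
            return value;
        }

        // Reader approach: the stream tracks the position for you.
        static int ReadInt32Reader(BinaryReader reader)
        {
            return reader.ReadInt32();
        }

    A MemoryStream constructed over an existing byte[] does not copy it, so the reader mostly adds a method call and a bounds check per read; that is usually noise, though it can show up in the very tightest loops.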

  • In Protobuf-net how can I pass an array of type object with objects of different types inside, knowing the possible types in advance?

    - by cloudraven
    I am trying to migrate existing code that uses XmlSerializer to protobuf-net, due to the increased performance it offers. However, I am having problems with this specific case. I have an object[] that holds parameters which are going to be sent to a remote host (a sort of custom mini-RPC facility). I know the set of types from which these parameters can come, but I cannot tell in advance in which order they are going to be sent. I have three constraints. The first is that I am running on the Compact Framework, so I need something that works there. Second, as I mentioned, performance is a big concern (on the serializing side), so I would rather avoid using a lot of reflection there if possible. And most important, I care about the order in which these parameters were sent. Using XmlSerializer it was easy, just adding XmlInclude; but for fields there is nothing equivalent, as far as I know, in protobuf-net. So, is there a way to do this? Here is a simplified example:

        [Serializable]
        [XmlInclude(typeof(MyType1)),
         XmlInclude(typeof(MyType2)),
         XmlInclude(typeof(MyType3))]
        public class Message
        {
            public object[] parameters;

            public Message(object[] parms)
            {
                parameters = parms;
            }
        }

        Message m = new Message(new object[] { new MyType1(), 33, "test", new MyType3(), new MyType3() });
        MemoryStream ms = new MemoryStream();
        XmlSerializer xml = new XmlSerializer(typeof(Message));
        xml.Serialize(ms, m);

    That will just work with XmlSerializer, but if I try to convert it to protobuf-net I get a "No default encoding for Object" message. The best I came up with is to use generics and [ProtoInclude], as seen in this example. Since I can have different object types within the array, this doesn't quite make it. I added a generic List for each potential type and a property with [ProtoIgnore] of type object[] to add them and get them. I have to use reflection when adding them (to know in which list to put each item), which is not desirable, and I still can't preserve the ordering, as I just extract all the items from each list one by one and put them into a new object[] array in the property get. I wonder if there is a way to accomplish this?
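
    One shape that does work in protobuf-net (a hedged sketch with hypothetical wrapper names; repeated fields keep their order on the wire, which covers the ordering constraint):

        using System.Collections.Generic;
        using ProtoBuf;

        [ProtoContract]
        [ProtoInclude(10, typeof(IntParam))]
        [ProtoInclude(11, typeof(StringParam))]   // one ProtoInclude per known type
        public abstract class Param { }

        [ProtoContract]
        public class IntParam : Param { [ProtoMember(1)] public int Value; }

        [ProtoContract]
        public class StringParam : Param { [ProtoMember(1)] public string Value; }

        [ProtoContract]
        public class Message
        {
            [ProtoMember(1)] public List<Param> Parameters;  // wire order == list order
        }

    Each parameter gets boxed into its small wrapper once on send, which costs an allocation per item but keeps reflection off the hot path.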

  • Is there any difference between null and 0 when assigning to pointers in unsafe code?

    - by Eloff
    This may seem odd, but in C (size_t)(void*)0 == 0 is not guaranteed by the language spec. Compilers are allowed to use any value they want for null (although they almost always use 0.) In C#, you can assign null or (T*)0 to a pointer in unsafe code. Is there any difference? (long)(void*)0 == 0 (guaranteed or not? put another way: IntPtr.Zero.ToInt64() == 0) MSDN has this to say about IntPtr.Zero: "The value of this field is not equivalent to null." Well if you want to be compatible with C code, that makes a lot of sense - it'd be worthless for interop if it didn't convert to a C null pointer. But I want to know if IntPtr.Zero.ToInt64() == 0 which may be possible, even if internally IntPtr.Zero is some other value (the CLR may or may not convert null to 0 in the cast operation) Not a duplicate of this question
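
    On Microsoft's CLR the all-bits-zero behaviour is at least easy to observe (a sketch; compile with /unsafe, and observing it is of course not the same as a spec guarantee):

        using System;

        class NullProbe
        {
            static unsafe void Main()
            {
                int* p1 = null;
                int* p2 = (int*)0;
                Console.WriteLine(p1 == p2);                   // True here
                Console.WriteLine((long)p1);                   // 0 here
                Console.WriteLine(IntPtr.Zero.ToInt64() == 0); // True: IntPtr.Zero is the zero value
            }
        }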

  • Why is an array size in LLVM, created by LLVM's own tools, not good enough for LLVM?

    - by vava
    I'm getting a strange error message from the following code:

        ArrayType * arrayType = ArrayType::get(Type::getInt32Ty(ctx), 0);
        stack = builder->CreateAlloca(arrayType,
                                      ConstantArray::get(arrayType, std::vector<Constant *>()));

    This code compiles fine, but during execution it gives me:

        llvm::Value* getAISize(llvm::LLVMContext&, llvm::Value*):
        Assertion `Amt->getType() == Type::getInt32Ty(Context) &&
        "Malloc/Allocation array size is not a 32-bit integer!"' failed.

    What's interesting is that I never said what type the array size should be; it was created somewhere inside helper functions. So I naturally see no good reason for the assert to be there. But I must be doing something wrong anyway; can you help me spot the problem? Thanks in advance. PS. LLVM, as good as it is, lacks documentation tremendously.
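
    If I'm reading that assertion right, the second argument of CreateAlloca is the element count (an i32 Value*), not an initializer, and the code above hands it a ConstantArray. A hedged sketch of the likely fix:

        // Allocate a single [0 x i32]; the count argument must be an i32 constant.
        llvm::ArrayType *arrayType = llvm::ArrayType::get(llvm::Type::getInt32Ty(ctx), 0);
        llvm::Value *count = llvm::ConstantInt::get(llvm::Type::getInt32Ty(ctx), 1);
        llvm::AllocaInst *stack = builder->CreateAlloca(arrayType, count);

    (An initial value, if one is wanted, would be stored separately after the alloca.)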

  • How do I store an array of a particular type into my settings file?

    - by joebeazelman
    For some reason, I can't seem to store an array of my class into the settings. Here's the code:

        var newLink = new Link();
        Properties.Settings.Default.Links = new ArrayList();
        Properties.Settings.Default.Links.Add(newLink);
        Properties.Settings.Default.Save();

    In my Settings.Designer.cs I specified the field to be an ArrayList:

        [global::System.Configuration.UserScopedSettingAttribute()]
        [global::System.Diagnostics.DebuggerNonUserCodeAttribute()]
        public global::System.Collections.ArrayList Links
        {
            get
            {
                return ((global::System.Collections.ArrayList)(this["Links"]));
            }
            set
            {
                this["Links"] = value;
            }
        }

    For some reason, it won't save any of the data even though the Link class is serializable and I've tested it.
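
    One hedged guess, since the designer snippet itself looks fine: settings are serialized with XmlSerializer by default, and an ArrayList of arbitrary objects tends to round-trip as empty. Forcing binary serialization is a commonly suggested workaround, assuming Link is marked [Serializable]:

        [global::System.Configuration.UserScopedSettingAttribute()]
        [global::System.Configuration.SettingsSerializeAsAttribute(global::System.Configuration.SettingsSerializeAs.Binary)]
        public global::System.Collections.ArrayList Links
        {
            get { return ((global::System.Collections.ArrayList)(this["Links"])); }
            set { this["Links"] = value; }
        }

    Note that hand edits to Settings.Designer.cs can be regenerated away by the designer, so re-check the file after using the Settings editor.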

  • 3 methods for adding a "Product" through Entity Framework. What's the difference?

    - by Kohan
    Reading the MSDN article titled "Working with ObjectSet (Entity Framework)", it shows two examples of how to add a Product: one for 3.5 and another for 4.0. http://msdn.microsoft.com/en-us/library/ee473442.aspx Through my lack of knowledge I am possibly completely missing something here, but I never added a Product like this:

        // In .NET Framework 3.5 SP1, use the following code: (ObjectQuery)
        using (AdventureWorksEntities context = new AdventureWorksEntities())
        {
            // Add the new object to the context.
            context.AddObject("Products", newProduct);
        }

        // New in .NET Framework 4, use the following code: (ObjectSet)
        using (AdventureWorksEntities context = new AdventureWorksEntities())
        {
            // Add the new object to the context.
            context.Products.AddObject(newProduct);
        }

    I would not have done it either way, and would just have used:

        // (My familiar way)
        using (AdventureWorksEntities context = new AdventureWorksEntities())
        {
            // Add the new object to the context.
            context.AddToProducts(newProduct);
        }

    What's the difference between these three ways? Is "my way" just another way of using an ObjectQuery? Thanks, Kohan

  • Is there a difference between boost iostream mapped file and boost interprocess mapped file?

    - by Yijinsei
    I want to map a binary file into memory; however, I am not sure how to create the file to be mapped into the system. I read the documentation several times and realized there are two mapped-file implementations, one in Boost.Iostreams and the other in Boost.Interprocess. Do you guys have any idea on how to create a mapped file in shared memory? I am trying to allow a multi-threaded program to read a large array of doubles written in a binary file format. Also, what is the difference between the mapped file in Iostreams and the one in Interprocess?
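
    For the read-only, multi-threaded case described, a minimal sketch with the Boost.Iostreams flavour (assuming the doubles were written raw, in native byte order, to a hypothetical data.bin):

        #include <boost/iostreams/device/mapped_file.hpp>
        #include <cstddef>

        int main()
        {
            // Map the whole file read-only; all threads in this process
            // can read through the same pointer concurrently.
            boost::iostreams::mapped_file_source file("data.bin");
            const double *values = reinterpret_cast<const double *>(file.data());
            std::size_t count = file.size() / sizeof(double);
            // ... hand (values, count) to the worker threads ...
        }

    The practical difference, roughly: Boost.Iostreams maps an ordinary file and behaves like a stream device, while Boost.Interprocess (file_mapping/mapped_region) is built for sharing between processes and adds naming and lifetime management. For read-only access by the threads of one process, either works; the Iostreams one is usually less ceremony.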

  • What's the difference between Process and ProcessStartInfo in C#?

    - by JimDel
    What's the difference between Process and ProcessStartInfo? I've used both to launch external programs, but there has to be a reason there are two ways to do it. Here are two examples.

        Process notePad = new Process();
        notePad.StartInfo.FileName = "notepad.exe";
        notePad.StartInfo.Arguments = "ProcessStart.cs";
        notePad.Start();

    and

        ProcessStartInfo startInfo = new ProcessStartInfo();
        startInfo.FileName = "notepad.exe";
        startInfo.Arguments = "ProcessStart.cs";
        Process.Start(startInfo);

    Thanks

  • "EXC_BAD_ACCESS: Unable to restore previously selected frame" Error, Array size?

    - by Job
    Hi there, I have an algorithm for creating the sieve of Eratosthenes and pulling primes from it. It lets you enter a maximum value for the sieve, and the algorithm gives you the primes below that value and stores them in a C-style array. Problem: Everything works fine with values up to 500,000; however, when I enter a larger value, it gives me the following error message in Xcode while running:

        Program received signal: "EXC_BAD_ACCESS".
        warning: Unable to restore previously selected frame.
        Data Formatters temporarily unavailable, will re-try after a 'continue'.
        (Not safe to call dlopen at this time.)

    My first idea was that I didn't use large enough variables, but as I am using 'unsigned long long int', this should not be the problem. Also, the debugger points me to a place in my code where a slot in the array gets assigned a value. Therefore I wonder: is there a maximum limit to an array? If yes, should I use NSArray instead? If no, what is causing this error, based on this information?

    EDIT: This is what the code looks like (it's not complete, for it fails at the last line posted). I'm using garbage collection.

        /*--------------------------SET UP--------------------------*/
        unsigned long long int upperLimit = 550000; //
        unsigned long long int sieve[upperLimit];
        unsigned long long int primes[upperLimit];
        unsigned long long int indexCEX;
        unsigned long long int primesCounter = 0;

        // Fill sieve with 2 to upperLimit
        for(unsigned long long int indexA = 0; indexA < upperLimit-1; ++indexA) {
            sieve[indexA] = indexA+2;
        }

        unsigned long long int prime = 2;

        /*-------------------------CHECK & FIND----------------------------*/
        while(!((prime*prime) > upperLimit)) {
            // check off all multiples of prime
            for(unsigned long long int indexB = prime-2; indexB < upperLimit-1; ++indexB) {
                // Multiple of prime = 0
                if(sieve[indexB] != 0) {
                    if(sieve[indexB] % prime == 0) {
                        sieve[indexB] = 0;
                    }
                }
            }

            /*---------------- Search for next prime ---------------*/
            // index of current prime + 1
            unsigned long long int indexC = prime - 1;
            while(sieve[indexC] == 0) {
                ++indexC;
            }
            prime = sieve[indexC];

            // Store prime in primes[]
            primes[primesCounter] = prime; // This is where the code fails if upperLimit > 500000
            ++primesCounter;

            indexCEX = indexC + 1;
        }

    As you may or may not see, I am -very much- a beginner. Any other suggestions are welcome, of course :)
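
    For what it's worth, the symptom matches a stack overflow rather than any array limit: two 550,000-element unsigned long long arrays are about 8.8 MB of local variables, while iPhone threads get roughly 1 MB (main thread) or 512 KB (secondary threads) of stack. A hedged sketch of the usual fix, moving the arrays to the heap:

        #include <stdlib.h>

        unsigned long long *sieve  = calloc(upperLimit, sizeof *sieve);  /* zero-filled */
        unsigned long long *primes = calloc(upperLimit, sizeof *primes);
        if (sieve == NULL || primes == NULL) {
            /* handle allocation failure */
        }

        /* ... same algorithm, same indexing ... */

        free(sieve);
        free(primes);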

  • How do I create an empty array/matrix in NumPy?

    - by Ben
    I'm sure I must be being very dumb, but I can't figure out how to use an array or matrix in the way that I would normally use a list. I.e., I want to create an empty array (or matrix) and then add one column (or row) to it at a time. At the moment the only way I can find to do this is like:

        mat = None
        for col in columns:
            if mat is None:
                mat = col
            else:
                mat = hstack((mat, col))

    Whereas if it were a list, I'd do something like this:

        list = []
        for item in data:
            list.append(item)

    Is there a way to use that kind of notation for NumPy arrays or matrices? (Or a better way -- I'm still pretty new to python!)
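
    A hedged sketch of the idiom that usually replaces this pattern: keep appending to a plain list, then let NumPy build the array once at the end (repeated hstack copies the whole array on every iteration):

        import numpy as np

        cols = []
        for col in columns:          # ordinary list append: cheap
            cols.append(col)
        mat = np.column_stack(cols)  # one allocation and copy at the end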

  • Is there any performance difference between Debug and Release?

    - by Lazlo
    I'm using MySql Connector .NET to load an account and transfer it to the client. This operation is rather intensive, considering the child elements of the account that have to be loaded. In Debug mode, it takes at most 1 second to load the account; the average is 500 ms. In Release mode, it takes from 1 to 4 seconds to load the account; the average is 1500 ms. Since there is no #if DEBUG directive or the like in my code, I'm wondering where the difference is coming from. Is there a project build option I could change? Or does MySql Connector .NET behave differently depending on the build mode?

  • What is the difference between "./somescript.sh" and ". ./somescript.sh"?

    - by Peter
    This question may sound silly to you. Today I was following some instructions to install a piece of software on Linux. There was a script that needed to be run first; it sets some environment variables. The instructions told me to execute . ./setup.sh, but I made the mistake of executing ./setup.sh, so the environment was not set. Finally I noticed this and proceeded. I want to know what exactly the difference between the two is. I am completely new to Linux, so please be as elaborate as possible.
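
    A small illustration of the difference with a hypothetical setup.sh: . ./setup.sh sources the script into the current shell, so its exports survive, while ./setup.sh runs it in a child process whose environment vanishes when it exits.

        $ cat setup.sh
        export INSTALL_DIR=/opt/myapp

        $ ./setup.sh; echo "INSTALL_DIR=$INSTALL_DIR"
        INSTALL_DIR=

        $ . ./setup.sh; echo "INSTALL_DIR=$INSTALL_DIR"
        INSTALL_DIR=/opt/myapp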

  • iPhone SDK: array initialised in viewDidLoad and released in dealloc is causing a leak?

    - by Skeep
    Hi all, I am new to iPhone development and I am getting ready to submit my first app. I have an array initialised in viewDidLoad and released in dealloc; however, it is causing a leak. My question is: why would this happen? Is it because dealloc has not run? The NSMutableArray called listOfItems is declared in my .h file for the UITableView and is used to populate the table. In the viewDidLoad method I have this code:

        listOfItems = [[NSMutableArray alloc] init];

    and in dealloc I have:

        [listOfItems release];

    When watching leaks in Instruments, I get an NSArray leak which, when I drill down, points to the line:

        listOfItems = [[NSMutableArray alloc] init];

    Am I putting all this in the right place? Thanks
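
    One hedged guess worth checking: viewDidLoad can run more than once if the view is unloaded after a memory warning, and each extra run re-allocates the array over the old pointer. A defensive sketch:

        - (void)viewDidLoad {
            [super viewDidLoad];
            if (listOfItems == nil) {               // don't leak a previous array
                listOfItems = [[NSMutableArray alloc] init];
            }
        }

        - (void)viewDidUnload {
            [super viewDidUnload];
            [listOfItems release];                  // balance the alloc when the view goes away
            listOfItems = nil;
        }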

  • What is the difference between Multiple R-squared and Adjusted R-squared in a single-variate least squares regression?

    - by fmark
    Could someone explain to the statistically naive what the difference between Multiple R-squared and Adjusted R-squared is? I am doing a single-variate regression analysis as follows:

        v.lm <- lm(epm ~ n_days, data=v)
        print(summary(v.lm))

    Results:

        Call:
        lm(formula = epm ~ n_days, data = v)

        Residuals:
            Min      1Q  Median      3Q     Max
        -693.59 -325.79   53.34  302.46  964.95

        Coefficients:
                    Estimate Std. Error t value Pr(>|t|)
        (Intercept)  2550.39      92.15  27.677   <2e-16 ***
        n_days        -13.12       5.39  -2.433   0.0216 *
        ---
        Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

        Residual standard error: 410.1 on 28 degrees of freedom
        Multiple R-squared: 0.1746, Adjusted R-squared: 0.1451
        F-statistic: 5.921 on 1 and 28 DF, p-value: 0.0216

    Apologies for the newbieness of this question.
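
    For reference, the two are linked by a closed formula, and the output above checks out against it with n = 30 observations and p = 1 predictor (hence the 28 residual degrees of freedom):

        adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)
                     = 1 - (1 - 0.1746) * 29 / 28
                     = 0.1451  (to 4 places)

    The adjustment penalizes every added predictor, so adjusted R-squared rises only when a new variable explains more than chance alone would.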

  • What's the difference between the 'DES' class and the 'DESCryptoServiceProvider' class?

    - by IbrarMumtaz
    All I can make out is that one of them is the base class for all DES algorithm implementations to derive from, and the latter is a wrapper for the cryptographic service provider (CSP) implementation of the DES algorithm. The reason I ask is that I am going over .NET security, and the official MS training book simply refers to the DES class, but another official MS book refers to the DESCryptoServiceProvider class. What's the difference between these two? When would you use either of them? What do I need to know as far as the 70-536 exam is concerned? I am asking my question from an educational point of view, as far as the 70-536 exam is concerned. Thanks in advance. Ibrar
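
    A two-line sketch of how the pair is normally used (on the .NET Framework, DES.Create() returns the default concrete implementation, which is DESCryptoServiceProvider):

        using System.Security.Cryptography;

        DES des1 = DES.Create();                   // program against the abstract base
        DES des2 = new DESCryptoServiceProvider(); // or name the CSP-backed class directly

    Either object is used the same way afterwards (CreateEncryptor(), etc.); the base-class route just leaves the concrete algorithm choice to the framework or configuration.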
