Search Results

Search found 21914 results on 877 pages for 'array job'.

  • HP SmartArray P400: How to repair failed logical drive?

    - by TegtmeierDE
    I have an HP server with a SmartArray P400 controller (incl. 256 MB cache / battery backup) with a logical drive whose failed physical drive has been replaced but which does not rebuild. This is how it looked when I detected the error:

        ~# /usr/sbin/hpacucli ctrl slot=0 show config
        Smart Array P400 in Slot 0 (Embedded) (sn: XXXX)
           array A (SATA, Unused Space: 0 MB)
              logicaldrive 1 (698.6 GB, RAID 1, OK)
              physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 750 GB, OK)
              physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 750 GB, OK)
           array B (SATA, Unused Space: 0 MB)
              logicaldrive 2 (2.7 TB, RAID 5, Failed)
              physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 750 GB, OK)
              physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 750 GB, OK)
              physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SATA, 750 GB, OK)
              physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SATA, 750 GB, Failed)
              physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SATA, 750 GB, OK)
           unassigned
              physicaldrive 2I:1:8 (port 2I:box 1:bay 8, SATA, 750 GB, OK)

    I thought I had drive 2I:1:8 configured as a spare for array A and array B, but it seems this was not the case :-(. I noticed the problem due to I/O errors on the host, even though only one physical drive of the RAID 5 had failed. Does someone know why this could happen? The logical drive should go into "Degraded" mode but still be fully accessible from the host OS!?

    I first tried to add the unassigned drive 2I:1:8 as a spare to logicaldrive 2, but this was not possible:

        ~# /usr/sbin/hpacucli ctrl slot=0 array B add spares=2I:1:8
        Error: This operation is not supported with the current configuration.
               Use the "show" command on devices to show additional details about the configuration.

    Interestingly, it is possible to add the unassigned drive to the first array without problems. I thought maybe the controller put the array into "failed" state due to the missing spare and protects failed arrays from modification. So I tried to re-enable the logical drive (to add the spare afterwards):

        ~# /usr/sbin/hpacucli ctrl slot=0 ld 2 modify reenable
        Warning: Any previously existing data on the logical drive may not be valid or recoverable. Continue? (y/n) y
        Error: This operation is not supported with the current configuration.
               Use the "show" command on devices to show additional details about the configuration.

    But as you can see, re-enabling the logical drive was not possible either. Now I have replaced the failed drive by hot-swapping it with the unassigned drive. The status now looks like this:

        ~# /usr/sbin/hpacucli ctrl slot=0 show config
        Smart Array P400 in Slot 0 (Embedded) (sn: XXXX)
           array A (SATA, Unused Space: 0 MB)
              logicaldrive 1 (698.6 GB, RAID 1, OK)
              physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 750 GB, OK)
              physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 750 GB, OK)
           array B (SATA, Unused Space: 0 MB)
              logicaldrive 2 (2.7 TB, RAID 5, Failed)
              physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 750 GB, OK)
              physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 750 GB, OK)
              physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SATA, 750 GB, OK)
              physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SATA, 750 GB, OK)
              physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SATA, 750 GB, OK)

    The logical drive is still not accessible. Why is it not rebuilding? What can I do? FYI, this is the configuration of my controller:

        ~# /usr/sbin/hpacucli ctrl slot=0 show
        Smart Array P400 in Slot 0 (Embedded)
           Bus Interface: PCI
           Slot: 0
           Serial Number: XXXX
           Cache Serial Number: XXXX
           RAID 6 (ADG) Status: Enabled
           Controller Status: OK
           Chassis Slot:
           Hardware Revision: Rev E
           Firmware Version: 5.22
           Rebuild Priority: Medium
           Expand Priority: Medium
           Surface Scan Delay: 15 secs
           Surface Analysis Inconsistency Notification: Disabled
           Raid1 Write Buffering: Disabled
           Post Prompt Timeout: 0 secs
           Cache Board Present: True
           Cache Status: OK
           Accelerator Ratio: 25% Read / 75% Write
           Drive Write Cache: Disabled
           Total Cache Size: 256 MB
           No-Battery Write Cache: Disabled
           Cache Backup Power Source: Batteries
           Battery/Capacitor Count: 1
           Battery/Capacitor Status: OK
           SATA NCQ Supported: True

    Thanks for your help in advance.

  • How to get rid of a stubborn 'removed' device in mdadm

    - by T.J. Crowder
    One of my server's drives failed, so I removed the failed drive from all three relevant arrays, had the drive swapped out, and then added the new drive to the arrays. Two of the arrays worked perfectly. The third added the drive back as a spare, and there's an odd "removed" entry in the mdadm details. I tried both mdadm /dev/md2 --remove failed and mdadm /dev/md2 --remove detached as suggested here and here; neither complained, but neither had any effect, either.

    Does anyone know how I can get rid of that entry and get the drive added back properly? (Ideally without resyncing a third time; I've already had to do it twice and it takes hours. But if that's what it takes, that's what it takes.) The new drive is /dev/sda, and the relevant partition is /dev/sda3. Here's the detail on the array:

        # mdadm --detail /dev/md2
        /dev/md2:
                Version : 0.90
          Creation Time : Wed Oct 26 12:27:49 2011
             Raid Level : raid1
             Array Size : 729952192 (696.14 GiB 747.47 GB)
          Used Dev Size : 729952192 (696.14 GiB 747.47 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 2
            Persistence : Superblock is persistent
            Update Time : Tue Nov 12 17:48:53 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 1
                   UUID : 2fdbf68c:d572d905:776c2c25:004bd7b2 (local to host blah)
                 Events : 0.34665

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8       19        1      active sync   /dev/sdb3
               2       8        3        -      spare         /dev/sda3

    If it's relevant, it's a 64-bit server. It normally runs Ubuntu, but right now I'm in the data centre's "rescue" OS, which is Debian 7 (wheezy). The "removed" entry was there the last time I was in Ubuntu (it won't, currently, boot from the disk), so I don't think it's some Ubuntu/Debian conflict (and they are, of course, closely related).

    Update: Having done extensive tests with test devices on a local machine, I'm just plain getting anomalous behaviour from mdadm with this array. For instance, with /dev/sda3 removed from the array again, I did this:

        mdadm /dev/md2 --grow --force --raid-devices=1

    That got rid of the "removed" device, leaving me just with /dev/sdb3. Then I nuked /dev/sda3 (wrote a file system to it, so it didn't have the RAID fs anymore), then:

        mdadm /dev/md2 --grow --raid-devices=2

    ...which gave me an array with /dev/sdb3 in slot 0 and "removed" in slot 1, as you'd expect. Then

        mdadm /dev/md2 --add /dev/sda3

    ...added it, as a spare again. (Another 3.5 hours down the drain.) So, with the rebuilt spare in the array, and given that mdadm's man page says:

        RAID-DEVICES CHANGES
        ...
        When the number of devices is increased, any hot spares that are
        present will be activated immediately.

    ...I grew the array to three devices, to try to activate the "spare":

        mdadm /dev/md2 --grow --raid-devices=3

    What did I get? Two "removed" devices, and the spare. And yet when I do this with a test array, I don't get this behaviour. So I nuked /dev/sda3 again, used it to create a brand-new array, and am copying the data from the old array to the new one:

        rsync -r -t -v --exclude 'lost+found' --progress /mnt/oldarray/* /mnt/newarray

    This will, of course, take hours. Hopefully when I'm done, I can stop the old array entirely, nuke /dev/sdb3, and add it to the new array. Hopefully it won't get added as a spare!
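    (For readers comparing notes: a minimal sketch of the sequence commonly suggested for a drive that keeps coming back as a spare. Hedged: not verified against this array; the device names are taken from the question.)

        # Hedged sketch: drop the stuck "spare", wipe its stale 0.90
        # superblock, and re-add it so md treats it as a brand-new disk.
        mdadm /dev/md2 --remove /dev/sda3        # detach the spare slot
        mdadm --zero-superblock /dev/sda3        # clear the old metadata
        mdadm /dev/md2 --add /dev/sda3           # should rebuild into slot 0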

  • Degraded RAID5 and no md superblock on one of remaining drive

    - by ark1214
    This is actually on a QNAP TS-509 NAS. The RAID is basically a Linux RAID. The NAS was configured with RAID 5 using 5 drives (/md0 with /dev/sd[abcde]3). At some point, /dev/sde failed and the drive was replaced. While rebuilding (and not yet complete), the NAS rebooted itself and /dev/sdc dropped out of the array. Now the array can't start because essentially 2 drives have dropped out. I disconnected /dev/sde and hoped that /md0 could resume in degraded mode, but no luck. Further investigation shows that /dev/sdc3 has no md superblock. The data should be good, since the array was unable to assemble after /dev/sdc dropped off.

    All the searches I have done showed how to reassemble the array assuming 1 bad drive. But I think I just need to restore the superblock on /dev/sdc3, and that should bring the array up in degraded mode, which will allow me to back up my data and then proceed with rebuilding after adding /dev/sde. Any help would be greatly appreciated.

    mdstat does not show /dev/md0:

        # cat /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
        md5 : active raid1 sdd2[2](S) sdc2[3](S) sdb2[1] sda2[0]
              530048 blocks [2/2] [UU]

        md13 : active raid1 sdd4[3] sdc4[2] sdb4[1] sda4[0]
              458880 blocks [5/4] [UUUU_]
              bitmap: 40/57 pages [160KB], 4KB chunk

        md9 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
              530048 blocks [5/4] [UUUU_]
              bitmap: 33/65 pages [132KB], 4KB chunk

    mdadm shows that /dev/md0 is still there:

        # mdadm --examine --scan
        ARRAY /dev/md9 level=raid1 num-devices=5 UUID=271bf0f7:faf1f2c2:967631a4:3c0fa888
        ARRAY /dev/md5 level=raid1 num-devices=2 UUID=0d75de26:0759d153:5524b8ea:86a3ee0d spares=2
        ARRAY /dev/md0 level=raid5 num-devices=5 UUID=ce3e369b:4ff9ddd2:3639798a:e3889841
        ARRAY /dev/md13 level=raid1 num-devices=5 UUID=7384c159:ea48a152:a1cdc8f2:c8d79a9c

    With /dev/sde removed, here is the mdadm examine output showing that sdc3 has no md superblock:

        # mdadm --examine /dev/sda3
        /dev/sda3:
          Magic : a92b4efc
          Version : 00.90.00
          UUID : ce3e369b:4ff9ddd2:3639798a:e3889841
          Creation Time : Sat Dec 8 15:01:19 2012
          Raid Level : raid5
          Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
          Array Size : 5854278400 (5583.08 GiB 5994.78 GB)
          Raid Devices : 5
          Total Devices : 4
          Preferred Minor : 0
          Update Time : Sat Dec 8 15:06:17 2012
          State : active
          Active Devices : 4
          Working Devices : 4
          Failed Devices : 1
          Spare Devices : 0
          Checksum : d9e9ff0e - correct
          Events : 0.394
          Layout : left-symmetric
          Chunk Size : 64K

              Number   Major   Minor   RaidDevice State
        this     0       8        3        0      active sync   /dev/sda3
           0     0       8        3        0      active sync   /dev/sda3
           1     1       8       19        1      active sync   /dev/sdb3
           2     2       8       35        2      active sync   /dev/sdc3
           3     3       8       51        3      active sync   /dev/sdd3
           4     4       0        0        4      faulty removed

        [~] # mdadm --examine /dev/sdb3
        /dev/sdb3:
          Magic : a92b4efc
          Version : 00.90.00
          UUID : ce3e369b:4ff9ddd2:3639798a:e3889841
          Creation Time : Sat Dec 8 15:01:19 2012
          Raid Level : raid5
          Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
          Array Size : 5854278400 (5583.08 GiB 5994.78 GB)
          Raid Devices : 5
          Total Devices : 4
          Preferred Minor : 0
          Update Time : Sat Dec 8 15:06:17 2012
          State : active
          Active Devices : 4
          Working Devices : 4
          Failed Devices : 1
          Spare Devices : 0
          Checksum : d9e9ff20 - correct
          Events : 0.394
          Layout : left-symmetric
          Chunk Size : 64K

              Number   Major   Minor   RaidDevice State
        this     1       8       19        1      active sync   /dev/sdb3
           0     0       8        3        0      active sync   /dev/sda3
           1     1       8       19        1      active sync   /dev/sdb3
           2     2       8       35        2      active sync   /dev/sdc3
           3     3       8       51        3      active sync   /dev/sdd3
           4     4       0        0        4      faulty removed

        [~] # mdadm --examine /dev/sdc3
        mdadm: No md superblock detected on /dev/sdc3.

        [~] # mdadm --examine /dev/sdd3
        /dev/sdd3:
          Magic : a92b4efc
          Version : 00.90.00
          UUID : ce3e369b:4ff9ddd2:3639798a:e3889841
          Creation Time : Sat Dec 8 15:01:19 2012
          Raid Level : raid5
          Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
          Array Size : 5854278400 (5583.08 GiB 5994.78 GB)
          Raid Devices : 5
          Total Devices : 4
          Preferred Minor : 0
          Update Time : Sat Dec 8 15:06:17 2012
          State : active
          Active Devices : 4
          Working Devices : 4
          Failed Devices : 1
          Spare Devices : 0
          Checksum : d9e9ff44 - correct
          Events : 0.394
          Layout : left-symmetric
          Chunk Size : 64K

              Number   Major   Minor   RaidDevice State
        this     3       8       51        3      active sync   /dev/sdd3
           0     0       8        3        0      active sync   /dev/sda3
           1     1       8       19        1      active sync   /dev/sdb3
           2     2       8       35        2      active sync   /dev/sdc3
           3     3       8       51        3      active sync   /dev/sdd3
           4     4       0        0        4      faulty removed

    fdisk output shows that the /dev/sdc3 partition is still there:

        [~] # fdisk -l

        Disk /dev/sdx: 128 MB, 128057344 bytes
        8 heads, 32 sectors/track, 977 cylinders
        Units = cylinders of 256 * 512 = 131072 bytes

           Device Boot  Start  End     Blocks      Id  System
        /dev/sdx1        1      8      1008        83  Linux
        /dev/sdx2        9      440    55296       83  Linux
        /dev/sdx3        441    872    55296       83  Linux
        /dev/sdx4        873    977    13440       5   Extended
        /dev/sdx5        873    913    5232        83  Linux
        /dev/sdx6        914    977    8176        83  Linux

        Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot  Start   End     Blocks      Id  System
        /dev/sda1   *    1       66     530113+     83  Linux
        /dev/sda2        67      132    530145      82  Linux swap / Solaris
        /dev/sda3        133     182338 1463569695  83  Linux
        /dev/sda4        182339  182400 498015      83  Linux

        Disk /dev/sda4: 469 MB, 469893120 bytes
        2 heads, 4 sectors/track, 114720 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Disk /dev/sda4 doesn't contain a valid partition table

        Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot  Start   End     Blocks      Id  System
        /dev/sdb1   *    1       66     530113+     83  Linux
        /dev/sdb2        67      132    530145      82  Linux swap / Solaris
        /dev/sdb3        133     182338 1463569695  83  Linux
        /dev/sdb4        182339  182400 498015      83  Linux

        Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot  Start   End     Blocks      Id  System
        /dev/sdc1        1       66     530125      83  Linux
        /dev/sdc2        67      132    530142      83  Linux
        /dev/sdc3        133     182338 1463569693  83  Linux
        /dev/sdc4        182339  182400 498012      83  Linux

        Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot  Start   End     Blocks      Id  System
        /dev/sdd1        1       66     530125      83  Linux
        /dev/sdd2        67      132    530142      83  Linux
        /dev/sdd3        133     243138 1951945693  83  Linux
        /dev/sdd4        243139  243200 498012      83  Linux

        Disk /dev/md9: 542 MB, 542769152 bytes
        2 heads, 4 sectors/track, 132512 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Disk /dev/md9 doesn't contain a valid partition table

        Disk /dev/md5: 542 MB, 542769152 bytes
        2 heads, 4 sectors/track, 132512 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Disk /dev/md5 doesn't contain a valid partition table
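    (For comparison with answers: the generally cited way to recreate a missing superblock in place is a forced create with --assume-clean. A hedged sketch only; the chunk size, layout, metadata version, and device order below are read off the surviving superblocks above and must be exactly right, with 'missing' standing in for the removed /dev/sde3.)

        # Hedged sketch: rewrites only the metadata; --assume-clean
        # prevents a destructive resync across the data area.
        mdadm --stop /dev/md0
        mdadm --create /dev/md0 --assume-clean --metadata=0.90 \
              --level=5 --chunk=64 --layout=left-symmetric --raid-devices=5 \
              /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 missing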

  • Reconstructing the disk order in RAID 6 with 7 disks

    - by rkotulla
    A little background to this question first: I am running a RAID-6 within a QNAP TS869L external RAID/NAS system. I started with 5 disks of 3 TB each back in the day, and later added another 2 disks of 3 TB to the RAID. The QNAP internals handled the growing and re-syncing etc., and everything seemed to be perfectly fine.

    About 2 weeks ago, I had one of the disks (disk #5; disk #2 has gone bad in the meantime) fail, and somehow (I have no idea why) disks 1 and 2 also got kicked out of the array. I replaced disk #5, but the RAID didn't start working again. After some calls to QNAP technical support, they re-created the array (using mdadm --create --force --assume-clean ...), but the resulting array couldn't find a filesystem, and I was kindly referred to contact a data recovery company that I can't afford. After some digging through old log files, resetting the disk to factory default, etc., I found a few errors that were made during this re-create. I wish I still had some of the original metadata, but unfortunately I don't (I definitely learned that lesson).

    I'm currently at the point where I know the correct chunk size (64K) and metadata version (1.0; the factory default was 0.9, but from what I read 0.9 doesn't handle disks over 2 TB, and mine are 3 TB), and I can now find the ext4 filesystem that should be on the disks. The only variable left to determine is the right disk order! I started using the description found in answer #4 of "Recover RAID 5 data after created new array instead of re-using", but am a little confused about what the order should be for a proper RAID-6. RAID-5 is pretty well documented in a number of places, but RAID-6 much less so. Also, does the layout, i.e. the distribution of parity and data chunks across the disks, change after growing the array from 5 to 7 disks, or does the re-sync re-organize them into the layout a native 7-disk RAID-6 would have?

    Thanks! Some more mdadm output that might be helpful. mdadm version:

        [~] # mdadm --version
        mdadm - v2.6.3 - 20th August 2007

    mdadm details from one of the disks in the array:

        [~] # mdadm --examine /dev/sda3
        /dev/sda3:
          Magic : a92b4efc
          Version : 1.0
          Feature Map : 0x0
          Array UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
          Name : 0
          Creation Time : Tue Jun 10 10:27:58 2014
          Raid Level : raid6
          Raid Devices : 7
          Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
          Array Size : 29286975360 (13965.12 GiB 14994.93 GB)
          Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
          Super Offset : 5857395368 sectors
          State : clean
          Device UUID : 7c572d8f:20c12727:7e88c888:c2c357af
          Update Time : Tue Jun 10 13:01:06 2014
          Checksum : d275c82d - correct
          Events : 7036
          Chunk Size : 64K
          Array Slot : 0 (0, 1, failed, 3, failed, 5, 6)
          Array State : Uu_u_uu 2 failed

    mdadm details for the array in the current disk order (based on my best guess, reconstructed from old log files):

        [~] # mdadm --detail /dev/md0
        /dev/md0:
          Version : 01.00.03
          Creation Time : Tue Jun 10 10:27:58 2014
          Raid Level : raid6
          Array Size : 14643487680 (13965.12 GiB 14994.93 GB)
          Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
          Raid Devices : 7
          Total Devices : 5
          Preferred Minor : 0
          Persistence : Superblock is persistent
          Update Time : Tue Jun 10 13:01:06 2014
          State : clean, degraded
          Active Devices : 5
          Working Devices : 5
          Failed Devices : 0
          Spare Devices : 0
          Chunk Size : 64K
          Name : 0
          UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
          Events : 7036

            Number   Major   Minor   RaidDevice State
               0       8        3        0      active sync   /dev/sda3
               1       8       19        1      active sync   /dev/sdb3
               2       0        0        2      removed
               3       8       51        3      active sync   /dev/sdd3
               4       0        0        4      removed
               5       8       99        5      active sync   /dev/sdg3
               6       8       83        6      active sync   /dev/sdf3

    Output from /proc/mdstat (md8, md9, and md13 are internally used RAIDs holding swap etc.; the one I'm after is md0):

        [~] # more /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
        md0 : active raid6 sdf3[6] sdg3[5] sdd3[3] sdb3[1] sda3[0]
              14643487680 blocks super 1.0 level 6, 64k chunk, algorithm 2 [7/5] [UU_U_UU]

        md8 : active raid1 sdg2[2](S) sdf2[3](S) sdd2[4](S) sdc2[5](S) sdb2[6](S) sda2[1] sde2[0]
              530048 blocks [2/2] [UU]

        md13 : active raid1 sdg4[3] sdf4[4] sde4[5] sdd4[6] sdc4[2] sdb4[1] sda4[0]
              458880 blocks [8/7] [UUUUUUU_]
              bitmap: 21/57 pages [84KB], 4KB chunk

        md9 : active raid1 sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sda1[0] sdb1[1]
              530048 blocks [8/7] [UUUUUUU_]
              bitmap: 37/65 pages [148KB], 4KB chunk

        unused devices: <none>
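    (For reference: a hedged sketch of the brute-force approach to the ordering question, i.e. try candidate orders with --assume-clean and test whether the filesystem appears. Note that --create rewrites the superblocks each time; orders.txt and the device names are illustrative, not taken from a verified recovery.)

        # Hedged sketch: one candidate order per line in orders.txt, e.g.
        #   /dev/sda3 /dev/sdb3 missing /dev/sdd3 missing /dev/sdg3 /dev/sdf3
        while read -r order; do
            mdadm --stop /dev/md0 2>/dev/null
            mdadm --create /dev/md0 --run --assume-clean --metadata=1.0 \
                  --level=6 --chunk=64 --raid-devices=7 $order
            dumpe2fs -h /dev/md0 >/dev/null 2>&1 && echo "looks right: $order"
        done < orders.txt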

  • Professional Development – Difference Between Bio, CV and Resume

    - by Pinal Dave
    Applying for work can be very stressful – you want to put your best foot forward, and it can be very hard to sell yourself to a potential employer while highlighting your best characteristics and answering questions.  On top of that, some jobs require different application materials – a biography (or bio), a curriculum vitae (or CV), or a resume.  These things seem so interchangeable, so what is the difference?

    Resume

    Let's start with the one most of us have heard of – the resume.  A resume is a summary of your job and education history.  If you have ever applied for a job, you will have used a resume.  The ability to write a good resume that highlights your best characteristics and emphasizes your qualifications for a specific job is a skill that will take you a long way in the world.  For such an essential skill, it is unfortunately one that many people struggle with.

    So let's discuss what makes a great resume.  First, make sure that your name and contact information are at the top, in large print (a slightly larger font than the rest of the text – size 14 or 16 if the rest is size 12, for example).  You need to make sure that if you catch the recruiter's attention, they know how to get hold of you.

    As for qualifications, be quick and to the point.  Make your job title and the company the headline, and include your skills, accomplishments, and qualifications as bullet points.  Use good action verbs, like "finished," "arranged," "solved," and "completed."  Include hard numbers – don't just say you "changed the filing system," say that you "revolutionized the storage of over 250 files in less than five days."  Doesn't that sentence sound much more powerful?

    Curriculum Vitae (CV)

    Now let's talk about curricula vitae, or "CVs".  A CV is more like an expanded resume.  The same rules still apply: put your name front and center, keep your contact info up to date, and summarize your skills with bullet points.  However, CVs are often required in more technical fields – like science, engineering, and computer science.  This means that you need to really highlight your education and technical skills.

    Difference Between Resume and CV

    Resumes are expected to be one or two pages long – CVs can be as many pages as necessary.  If you are one of those people lucky enough to feel limited by the size constraint of resumes, a CV is for you!  On a CV you can expand on your projects, highlight really exciting accomplishments, and include more educational experience – including GPA and test scores from the GRE or MCAT (as applicable).  You can also include awards, associations, teaching and research experience, and certifications.  A CV is a place to really expand on all your experience and how great you will be in this particular position.

    Biography (Bio)

    Chances are, you already know what a bio is, and you have even read a few of them.  Think about the one or two paragraphs that every author includes on the back flap of a book.  Think about the sentences under a blogger's photo on every "About Me" page.  That is a bio.  It is a way to quickly highlight your life experiences.  It is essentially the way you would introduce yourself at a party.

    Where a bio is required for a job, though, chances are they won't want to know about where you were born and how many pets you have.  This is a way to summarize your entire job history in a quick-to-read format – and sometimes during a job hunt, being able to get to the point and grab the recruiter's interest is the best way to get your foot in the door.  Think of a bio as your entire resume put into words.

    Most bios have a standard format.  In paragraph one, talk about your most recent position and your accomplishments there, specifically how they relate to the job you are applying for.  If you have teaching or research experience, training experience, certifications, or management experience, talk about them in paragraph two.  Paragraphs three and four are for highlighting publications, education, certifications, associations, etc.  To wrap up your bio, provide your contact info and availability (dates and times).

    Where to Use What?

    For most positions, you will know exactly what kind of application to use, because the job announcement will state what materials are needed – resume, CV, bio, cover letter, skill set, etc.  If there is any confusion, choose whatever the industry standard is (CV for technical fields, resume for everything else), or choose whichever of your documents is the strongest.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: About Me, PostADay, Professional Development, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

  • Why everybody should do Sales!

    - by FelixWehmeyer
    I speak with many business students and ask them what job they want to get into. Most of them tell me they want a job in marketing, management consulting or finance. I hardly ever hear "Sales, that is what I want to do", and I often wonder why. I would like to start with a quote from Zig Ziglar, a successful salesman: "Nothing happens until someone sells something."

    But to get back to the main point: why wouldn't you want to get into sales? When people think of sales, they picture a typical salesman in their head and think that selling is scary and all about manipulating, pressuring and pushing someone into buying something they don't need. Are these stereotypes accurate? I don't believe so. So why should you want to be in sales? If you think about selling as providing the solution for the problem and talking about the benefits of making a decision, then every job in this world comes out of selling. In every job you deal with coworkers whom you want to convince of your ideas, or you convince your boss that the project you want to work on is good for the company.

    These days, consumers and businesses are very well informed about services and products. When we are talking about highly complex products, such as IT solutions, businesses don't accept a run-of-the-mill salesman who is pushing a sale. These are often long projects where salespeople have a consulting and leading role. Salespeople need to be able to consult companies and customers on their problem and convince a client that their solution is the best fit.

    Beyond the fact that sales is by far not as scary and shady as you thought, there are a few points that will make you want to consider a sales career:

    - Negotiating skills – When you are in sales, you will learn how to negotiate. Salespeople learn to listen to their customers and try to make them happy, overcoming objections and coming to a final agreement that both parties are happy with.

    - Persistence/Challenge – As a salesperson you will often hear a negative answer; in a sales role you will start to embrace this and see a 'no' as a challenge, not as a rejection. This attitude change can help you a lot in your career, but also in your personal life. You will become more optimistic and gain a go-getter attitude.

    - Salary – As salespeople are seen as the moneymakers for the company, companies often reward their sales teams generously. Most likely in a sales role you will receive a good base salary, and often you get nice bonuses on top of that based on your performance. Oracle is, for instance, the company that offers the highest average commission in the world. Further, you can expect many other benefits, as companies know that there is a high demand for good salespeople.

    - Teamwork – Sales is a lot like having your own business: you are responsible for your own territory or set of clients, and you are the one who is responsible for the revenue coming from that territory. So in order to generate that revenue you will have to work together with many departments and people. Every (potential) client could be seen as a different project, and you are the project leader.

    - Understanding customers and the business – Of any job that you could choose, sales will give you the most insight into the market. Salespeople are usually well connected, talk with different customers, learn about the market and stay up to date about all the latest changes. Even if you want to change to a different role in the long run, you have a great head start, as you understand the market and customers like no one else.

    - Job security – Look at all the job postings out there; many of them are sales-related. So if you want a steady job, plenty of choice and companies willing to invest in you, sales could be something for you.

    Are you interested in exploring a sales career? At Oracle we are always looking for good sales professionals and fresh graduates who want to get into sales! For many languages, such as Flemish, Dutch, German, French, Swedish and Norwegian (and more), we are currently looking for graduates who want to develop their career at Oracle. Please have a look at this article for the experience of a Business Development Consultant at Oracle in Dublin. Want to learn more about this job? Check out this link or send an email to jessica.ebbelaar-at-oracle.com! Have a look at our website http://campus.oracle.com for all of our other latest sales and non-sales vacancies!

  • Running Upstart user jobs on startup

    - by dgel
    I am running Ubuntu Server 11.04. I have created an Upstart user job as described here. I have the following file at /home/myuser/.init/sensors.conf:

        start on started mysql
        stop on stopping mysql
        chdir /home/myuser/mydir/project
        exec /home/myuser/mydir/env/bin/python /home/myuser/mydir/project/manage.py sensors
        respawn
        respawn limit 10 90

    As myuser I can start, stop, and reload the job fine; it works perfectly:

        $ start sensors
        sensors start/running, process 1332
        $ stop sensors
        sensors stop/waiting

    The problem is that the job is not starting automatically at boot when mysql starts. After a fresh boot, mysql is running but my sensors job is not. What's strange is that although the job doesn't begin on bootup, if I use sudo to restart mysql it does indeed start my job. The following commands are run as myuser from a fresh startup:

        $ status sensors
        sensors stop/waiting
        $ sudo restart mysql
        mysql start/running, process 1209
        $ status sensors
        sensors start/running, process 1229

    The documentation for Upstart user jobs is pretty limited. What is the correct technique to have a user job start automatically on startup of the system? I know I can just throw something in rc.local to start it, or I could move my sensors.conf to /etc/init, but I'm curious whether there is a way to do it using just Upstart.
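    (For comparison: a hedged sketch of the /etc/init fallback mentioned above. System jobs run as root on this Upstart version, so the sketch drops privileges with su; the paths are the ones from the question, and this variant is untested.)

        # /etc/init/sensors.conf -- hedged sketch of the system-job variant
        start on started mysql
        stop on stopping mysql
        respawn
        respawn limit 10 90
        chdir /home/myuser/mydir/project
        exec su -s /bin/sh -c 'exec /home/myuser/mydir/env/bin/python /home/myuser/mydir/project/manage.py sensors' myuser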

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min. These working-set jobs start out at perhaps 40 MB/min and, over the course of the backup job, slowly drop so low that the BE job-rate display in "current jobs" goes blank. Since we usually are only doing changed files for one day, the job is usually small and finishes overnight, and we don't worry about the slowness. But we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode, and I sent them that log and a VXgather from the BE host, and they had no fix/workaround. To give an idea, the mentioned working-set job has been running for the last 3 1/2 hours and has backed up just under 10 MEGAbytes... I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless.

  • Is it possible to have a conditional formatting cell "visually cycle" through all the formats that evaluated true?

    - by Ben
    Like the title says: in Excel, when a cell has multiple conditional formatting rules that evaluate true, is it possible to have the cell "visually cycle" through all the formats that evaluated true? If not, suggestions on what to do would be appreciated!

    I'm creating an employee schedule for a business that has multiple job areas that each need an employee assigned to cover them. The schedule is currently set up with the dates on the top row, the employee list down the left column, and each employee's assigned job area cross-referencing with the date on the top row.

    Originally it was set up so that if every required job area didn't have someone assigned to it, the date would (via conditional formatting) change to red. I've now set it up so that if a condition isn't met, the date changes to the color of the job area that doesn't have an employee assigned to it. However, there are cases where multiple job areas don't have an employee assigned, and the date will only change color based on the first condition that isn't met. It'd be nice if there were some way for the date cell to cycle through the different colors that correspond to the job areas where no one is assigned. I have a hunch that's not possible, though.

    If it is possible, I'd love to know how to do it. And if it isn't, I would appreciate any suggestions on how I can modify the Excel sheet to make it easier to identify the job areas that don't have anyone assigned to them. FYI, this schedule goes out months in advance.

  • Sharing build artifacts between jobs in Hudson

    - by programming panda
    Hi, I'm trying to set up our build process in Hudson. Job 1 will be a super-fast (hopefully) continuous integration build job that will be built frequently. Job 2 will be responsible for running a comprehensive test suite, at a regular interval or triggered manually. Job 3 will be responsible for running analysis tools across the codebase (much like Job 2). I tried using the "Advanced Project Options" > "use custom workspace" feature so that code compiled in Job 1 can be used in Jobs 2 and 3. However, it seems that all build artifacts remain inside that Job 1 workspace. Am I doing this right? Is there a better way of doing this? I guess I'm looking for something similar to a build-pipeline setup, so that things can be shared and the appropriate jobs can be executed in stages. (I also considered using 'batch tasks', but it seems those can't be scheduled, only triggered manually?) Any suggestions are welcome. Thanks!

  • Advice on optimizing speed for a Stored Procedure that uses Views

    - by Belliez
    Based on a previous question, and with a lot of help from Damir Sudarevic (thanks), I have the following SQL code, which works but is very slow. Can anyone suggest how I can speed this up? I am now using SQL Server Express 2008 (not 2005 as per my original question).

    What this code does is retrieve parameters and their associated values from several tables and rotate the table into a form that can be easily compared. It's fine for one or two rows of data, but now that I am testing with 100 rows, running GetJobParameters takes over 7 minutes to complete. Any advice is gratefully accepted; thank you in advance.

        /***********************************************************************************************
        ** CREATE A VIEW (VIRTUAL TABLE) TO ALLOW EASIER RETRIEVAL OF PARAMETERS
        ************************************************************************************************/
        CREATE VIEW dbo.vParameters AS
        SELECT m.MachineID   AS [Machine ID]
              ,j.JobID       AS [Job ID]
              ,p.ParamID     AS [Param ID]
              ,t.ParamTypeID AS [Param Type ID]
              ,m.Name        AS [Machine Name]
              ,j.Name        AS [Job Name]
              ,t.Name        AS [Param Type Name]
              ,t.JobDataType AS [Job DataType]
              ,x.Value       AS [Measurement Value]
              ,x.Unit        AS [Unit]
              ,y.Value       AS [JobDataType]
        FROM dbo.Machines AS m
        JOIN dbo.JobFiles     AS j ON j.MachineID   = m.MachineID
        JOIN dbo.JobParams    AS p ON p.JobFileID   = j.JobID
        JOIN dbo.JobParamType AS t ON t.ParamTypeID = p.ParamTypeID
        LEFT JOIN dbo.JobMeasurement AS x ON x.ParamID = p.ParamID
        LEFT JOIN dbo.JobTrait       AS y ON y.ParamID = p.ParamID
        GO

        -- Step 2
        CREATE VIEW dbo.vJobValues AS
        SELECT [Job Name]
              ,[Param Type Name]
              ,COALESCE(CAST([Measurement Value] AS varchar(50)), [JobDataType]) AS [Val]
        FROM dbo.vParameters
        GO

        /***********************************************************************************************
        ** GET JOB PARAMETERS FROM THE VIEW JUST CREATED
        ************************************************************************************************/
        CREATE PROCEDURE GetJobParameters
        AS
        -- Step 3
        DECLARE @Params TABLE (
             id int IDENTITY (1,1)
            ,ParamName varchar(50)
        );
        INSERT INTO @Params (ParamName)
        SELECT DISTINCT [Name] FROM dbo.JobParamType

        -- Step 4
        DECLARE @qw TABLE (
             id int IDENTITY (1,1)
            ,txt nchar(300)
        )
        INSERT INTO @qw (txt)
        SELECT 'SELECT'
        UNION
        SELECT '[Job Name]';

        INSERT INTO @qw (txt)
        SELECT ',MAX(CASE [Param Type Name] WHEN ''' + ParamName + ''' THEN Val ELSE NULL END) AS [' + ParamName + ']'
        FROM @Params
        ORDER BY id;

        INSERT INTO @qw (txt)
        SELECT 'FROM dbo.vJobValues'
        UNION
        SELECT 'GROUP BY [Job Name]'
        UNION
        SELECT 'ORDER BY [Job Name]';

        -- Step 5
        --SELECT txt FROM @qw
        DECLARE @sql_output VARCHAR (MAX)
        SET @sql_output = ''              -- NULL + '' = NULL, so we need to have a seed
        SELECT @sql_output =              -- string to avoid losing the first line.
               COALESCE (@sql_output + txt + char (10), '')
        FROM @qw

        EXEC (@sql_output)
        GO
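    (One direction worth trying, as a hedged sketch not tested against this schema: SQL Server 2008 has a native PIVOT operator, which the hand-built MAX(CASE ...) string above is effectively reimplementing. Building the column list once and pivoting directly may be faster and is certainly shorter.)

        -- Hedged sketch: dynamic column list plus native PIVOT over vJobValues.
        DECLARE @cols nvarchar(MAX), @sql nvarchar(MAX);
        SELECT @cols = STUFF((SELECT ',' + QUOTENAME(Name)
                              FROM dbo.JobParamType
                              GROUP BY Name
                              ORDER BY Name
                              FOR XML PATH('')), 1, 1, '');
        SET @sql = N'SELECT [Job Name], ' + @cols + N'
                     FROM dbo.vJobValues
                     PIVOT (MAX(Val) FOR [Param Type Name] IN (' + @cols + N')) AS p
                     ORDER BY [Job Name];';
        EXEC sp_executesql @sql;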

  • What is an efficient strategy for multiple threads posting jobs and waiting for response from a single thread?

    - by jakewins
    In Java, what is an efficient solution to the following problem: I have multiple threads (10-20 or so) generating jobs ("job creators"), and a single thread capable of performing them ("the worker"). Once a job creator has posted a job, it should wait for the job to finish, yielding no result other than "it's done", before it keeps going.

    For sending the jobs to the worker thread, I think a ring buffer or similar standard fan-in setup would perhaps be a good approach? But for a job creator to find out that her job has been done, I'm not so sure. The job creators could sleep, and the worker could interrupt them when done. Or each job creator could have an atomic boolean that it checks, and that the worker sets. I dunno, neither of those feels very nice. I'd like to do it with as few locks as possible (none, if at all possible). So to be clear: what I'm looking for is speed, not necessarily simplicity.

    Does anyone have any suggestions? Links to reading about concurrency strategies would also be very welcome!
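    (A minimal sketch of one standard arrangement, for readers comparing approaches; class names are illustrative. Each job carries its own latch, the creators block on it, and the worker releases it when done.)

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.CountDownLatch;

        // Hedged sketch: creators post to a bounded queue and block on a
        // per-job latch; the single worker drains the queue and signals.
        final class Job {
            final Runnable work;
            final CountDownLatch done = new CountDownLatch(1);
            Job(Runnable work) { this.work = work; }
        }

        final class SingleWorker {
            private final BlockingQueue<Job> queue = new ArrayBlockingQueue<>(1024);

            SingleWorker() {
                Thread worker = new Thread(() -> {
                    try {
                        while (true) {
                            Job job = queue.take();   // blocks until a job arrives
                            job.work.run();
                            job.done.countDown();     // wake the waiting creator
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
                worker.setDaemon(true);
                worker.start();
            }

            // Called by the 10-20 creator threads: post, then wait for "done".
            void submitAndWait(Runnable work) throws InterruptedException {
                Job job = new Job(work);
                queue.put(job);
                job.done.await();
            }
        }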

  • XML Serialization and Deserialization in C#

    - by SOF User
    <job id="ID00004" name="PeakValCalcO">
        <uses file="Seismogram_FFI_0_1_ID00003.grm" link="input" />
        <uses file="PeakVals_FFI_0_1_ID00003.bsa" link="output" />
    </job>
    <job id="ID00005" name="SeismogramSynthesis" >
        <uses file="FFI_0_1_txt.variation-s07930-h00000" link="input" />
        <uses file="Seismogram_FFI_0_1_ID00005.grm" link="output" />
    </job>

    Let's say I have this XML and I want to convert it into a .NET object. How can I do this? I tried it, but it didn't work correctly: the id/name/file/link values are XML attributes, so the properties need [XmlAttribute], the nested <uses> elements need an [XmlElement] mapping, and the result of Deserialize has to be cast to jobs, not job:

        [XmlRoot("jobs")]
        public class jobs : List<job> { }

        public class job
        {
            [XmlAttribute]
            public string id { get; set; }
            [XmlAttribute]
            public string name { get; set; }
            [XmlElement("uses")]
            public List<uses> Files { get; set; }
        }

        public class uses
        {
            [XmlAttribute]
            public string file { get; set; }
            [XmlAttribute]
            public string link { get; set; }
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            XmlSerializer serializer = new XmlSerializer(typeof(jobs));
            TextReader tr = new StreamReader("CyberShake_100.xml");
            jobs b = (jobs)serializer.Deserialize(tr);
            tr.Close();
        }

  • How to do RLE (run length encoding) in C# on a byte array?

    - by CuriousCoder
    I am trying to XOR two bitmap files (their byte arrays) to produce a byte array that can be used to change image A into image B, or vice versa. I am sending this over the network, so I would like to do some basic compression before that happens. Is there a way to do RLE (run-length encoding) in C# (using a built-in, or a fast, reliable 3rd-party library) on a byte array for this purpose? Note: if you are going to suggest an alternative to my approach, please keep in mind that the decompression and transformation on the remote machine have to be as quick and efficient as possible.
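    (For what it's worth, a minimal hand-rolled RLE sketch; the BCL has no built-in RLE, with GZipStream in System.IO.Compression being the closest built-in alternative. A sketch only, not a tuned implementation.)

        using System.Collections.Generic;

        static class Rle
        {
            // Hedged sketch: (count, value) pairs, count capped at 255 so each
            // run fits in two bytes. XOR'd bitmap diffs are mostly zero runs,
            // which is exactly where this pays off.
            public static byte[] Encode(byte[] data)
            {
                var output = new List<byte>();
                int i = 0;
                while (i < data.Length)
                {
                    byte value = data[i];
                    int run = 1;
                    while (run < 255 && i + run < data.Length && data[i + run] == value)
                        run++;
                    output.Add((byte)run);
                    output.Add(value);
                    i += run;
                }
                return output.ToArray();
            }

            public static byte[] Decode(byte[] data)
            {
                var output = new List<byte>();
                for (int i = 0; i < data.Length; i += 2)
                    for (int j = 0; j < data[i]; j++)
                        output.Add(data[i + 1]);
                return output.ToArray();
            }
        }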

  • Convert byte array from Oracle RAW to System.Guid?

    - by Cory McCarty
    My app interacts with both Oracle and SQL Server databases using a custom data access layer written in ADO.NET using DataReaders. Right now I'm having a problem with the conversion between GUIDs (which we use for primary keys) and the Oracle RAW datatype. Inserts into oracle are fine (I just use the ToByteArray() method on System.Guid). The problem is converting back to System.Guid when I load records from the database. Currently, I'm using the byte array I get from ADO.NET to pass into the constructor for System.Guid. This appears to be working, but the Guids that appear in the database do not correspond to the Guids I'm generating in this manner. I can't change the database schema or the query (since it's reused for SQL Server). I need code to convert the byte array from Oracle into the correct Guid.
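    (A hedged observation that may explain the mismatch: Guid.ToByteArray() stores the first three fields little-endian, so the RAW hex that Oracle displays will not visually match the Guid's string form even though new Guid(bytes) round-trips the same bytes correctly. If the display forms must correspond, a sketch of the byte swap:)

        using System;

        static class GuidRaw
        {
            // Hedged sketch: flip the first three fields between the .NET
            // little-endian byte layout and the straight big-endian layout
            // that Oracle displays for a RAW(16).
            public static byte[] FlipEndian(byte[] raw)
            {
                var b = (byte[])raw.Clone();
                Array.Reverse(b, 0, 4);  // Data1 (int)
                Array.Reverse(b, 4, 2);  // Data2 (short)
                Array.Reverse(b, 6, 2);  // Data3 (short)
                return b;                // Data4 (8 bytes) is unchanged
            }

            public static Guid FromOracleRaw(byte[] raw)
            {
                return new Guid(FlipEndian(raw));
            }
        }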

  • Performance: use a BinaryReader on a MemoryStream to read a byte array, or read directly?

    - by Virtlink
    I would like to know whether using a BinaryReader on a MemoryStream created from a byte array (byte[]) would reduce performance significantly. There is binary data I want to read, and I get that data as an array of bytes. I am currently deciding between two approaches for reading the data, and have to implement many reading methods accordingly. After each reading action, I need the position right after the read data, and therefore I am considering using a BinaryReader.

    The first, non-BinaryReader approach:

        object Read(byte[] data, ref int offset);

    The second approach:

        object Read(BinaryReader reader);

    Such Read() methods will be called very often, in succession, on the same data until all of it has been read. So using a BinaryReader feels more natural, but does it have much impact on performance?
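    (For illustration, a hedged side-by-side sketch of the two signatures specialised to int. Note that a MemoryStream over a byte[] wraps the array without copying it, so the remaining overhead is mostly the extra method calls per read.)

        using System;
        using System.IO;

        static class Readers
        {
            // Approach 1: track the offset yourself; no extra objects.
            static int ReadInt32(byte[] data, ref int offset)
            {
                int value = BitConverter.ToInt32(data, offset);
                offset += 4;   // position right after the read data
                return value;
            }

            // Approach 2: let BinaryReader track the stream position.
            static int ReadInt32(BinaryReader reader)
            {
                return reader.ReadInt32();
            }

            static void Main()
            {
                byte[] data = BitConverter.GetBytes(42);
                int offset = 0;
                Console.WriteLine(ReadInt32(data, ref offset));
                using (var reader = new BinaryReader(new MemoryStream(data)))
                    Console.WriteLine(ReadInt32(reader));
            }
        }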

  • In Protobuf-net how can I pass an array of type object with objects of different types inside, knowi

    - by cloudraven
    I am trying to migrate existing code that uses XmlSerializer to protobuf-net because of the increased performance it offers. However, I am having problems with this specific case: I have an object[] that includes parameters that are going to be sent to a remote host (a sort of custom mini-RPC facility). I know the set of types from which these parameters can be drawn, but I cannot tell in advance in which order they are going to be sent.

    I have three constraints. The first is that I am running on the Compact Framework, so I need something that works there. Second, as I mentioned, performance is a big concern (on the serializing side), so I would rather avoid using a lot of reflection there if possible. And most importantly, I care about the order in which these parameters were sent. Using XmlSerializer it was easy, just adding XmlInclude, but for fields there is nothing equivalent in protobuf-net as far as I know. So, is there a way to do this? Here is a simplified example:

        [Serializable]
        [XmlInclude(typeof(MyType1)),
         XmlInclude(typeof(MyType2)),
         XmlInclude(typeof(MyType3))]
        public class Message
        {
            public object[] parameters;
            public Message(object[] parms)
            {
                parameters = parms;
            }
        }

        Message m = new Message(new object[] { new MyType1(), 33, "test", new MyType3(), new MyType3() });
        MemoryStream ms = new MemoryStream();
        XmlSerializer xml = new XmlSerializer(typeof(Message));
        xml.Serialize(ms, m);

    That will just work with XmlSerializer, but if I try to convert it to protobuf-net I get a "No default encoding for Object" message. The best I came up with is to use generics and [ProtoInclude] as seen in this example. Since I can have different object types within the array, this doesn't quite cut it. I added a generic List for each potential type and a property with [ProtoIgnore] of type object[] to add and get them. I have to use reflection when adding them (to know in which list to put each item), which is not desirable, and I still can't preserve the ordering, as I just extract all the items from each list one by one and put them into a new object[] array on the property get. I wonder if there is a way to accomplish this?
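    (A hedged sketch of one workaround, assuming the set of parameter types is closed: wrap each parameter in a small envelope contract with one optional field per known type, so the ordered List itself preserves the parameter order. MyType1/MyType3 are the question's types and would need [ProtoContract] of their own; this is untested on the Compact Framework.)

        using System.Collections.Generic;
        using ProtoBuf;

        // Hedged sketch: exactly one field per envelope is non-null.
        [ProtoContract]
        public class Param
        {
            [ProtoMember(1)] public int?    AsInt    { get; set; }
            [ProtoMember(2)] public string  AsString { get; set; }
            [ProtoMember(3)] public MyType1 AsType1  { get; set; }
            [ProtoMember(4)] public MyType3 AsType3  { get; set; }

            // Unwrap whichever field was set.
            public object Value
            {
                get { return (object)AsInt ?? (object)AsString ?? (object)AsType1 ?? (object)AsType3; }
            }
        }

        [ProtoContract]
        public class Message
        {
            [ProtoMember(1)] public List<Param> Parameters { get; set; }
        }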

  • How do I store an array of a particular type into my settings file?

    - by joebeazelman
    For some reason, I can't seem to store an array of my class in the settings. Here's the code:

        var newLink = new Link();
        Properties.Settings.Default.Links = new ArrayList();
        Properties.Settings.Default.Links.Add(newLink);
        Properties.Settings.Default.Save();

    In my Settings.Designer.cs I specified the field to be an ArrayList:

        [global::System.Configuration.UserScopedSettingAttribute()]
        [global::System.Diagnostics.DebuggerNonUserCodeAttribute()]
        public global::System.Collections.ArrayList Links
        {
            get
            {
                return ((global::System.Collections.ArrayList)(this["Links"]));
            }
            set
            {
                this["Links"] = value;
            }
        }

    For some reason, it won't save any of the data, even though the Link class is serializable and I've tested it.
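    (A hedged note on one likely culprit: user-scoped settings serialize as XML by default, and an untyped ArrayList of a custom type often fails silently to round-trip. A typed collection declared as the setting's type, with the serialization mode stated explicitly, tends to be more reliable; a sketch, not a verified fix:)

        using System.Collections.Generic;

        // Hedged sketch for Settings.Designer.cs: a typed list instead of
        // ArrayList, with XML serialization requested explicitly.
        [global::System.Configuration.UserScopedSetting()]
        [global::System.Configuration.SettingsSerializeAs(
            global::System.Configuration.SettingsSerializeAs.Xml)]
        public List<Link> Links
        {
            get { return (List<Link>)this["Links"]; }
            set { this["Links"] = value; }
        }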

  • Why is an array size created by LLVM's internal tools not good enough for LLVM?

    - by vava
    I'm getting a strange error message from the following code:

        ArrayType *arrayType = ArrayType::get(Type::getInt32Ty(ctx), 0);
        stack = builder->CreateAlloca(arrayType,
                                      ConstantArray::get(arrayType, std::vector<Constant *>()));

    This code compiles fine, but it gives me the following during execution:

        llvm::Value* getAISize(llvm::LLVMContext&, llvm::Value*):
        Assertion `Amt->getType() == Type::getInt32Ty(Context) &&
        "Malloc/Allocation array size is not a 32-bit integer!"' failed.

    What's interesting is that I never said what type the array size should be; it was created somewhere inside helper functions. So I naturally see no good reason for the assert to be there. But I must be doing something wrong anyway; can you help me spot the problem? Thanks in advance.

    PS. LLVM, as good as it is, is lacking documentation tremendously.
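    (A hedged reading of the assert: CreateAlloca's second argument is the number of elements to allocate, an i32 Value, not an initializer, so passing a ConstantArray there is what trips it. A sketch of the likely fix; the "stack" name is from the question:)

        // Hedged sketch: pass no array size (defaults to one element of the
        // given type), or an explicit i32 count; never an initializer value.
        llvm::ArrayType *arrayType =
            llvm::ArrayType::get(llvm::Type::getInt32Ty(ctx), 0);
        stack = builder->CreateAlloca(arrayType, nullptr, "stack");
        // or, with an explicit element count:
        // llvm::Value *count =
        //     llvm::ConstantInt::get(llvm::Type::getInt32Ty(ctx), 1);
        // stack = builder->CreateAlloca(arrayType, count, "stack");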

  • How do I create an empty array/matrix in NumPy?

    - by Ben
    I'm sure I must be being very dumb, but I can't figure out how to use an array or matrix in the way that I would normally use a list. I.e., I want to create an empty array (or matrix) and then add one column (or row) to it at a time. At the moment, the only way I can find to do this is like:

        mat = None
        for col in columns:
            if mat is None:
                mat = col
            else:
                mat = hstack((mat, col))

    Whereas if it were a list, I'd do something like this:

        list = []
        for item in data:
            list.append(item)

    Is there a way to use that kind of notation for NumPy arrays or matrices? (Or a better way -- I'm still pretty new to Python!)
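    (For what it's worth, a short sketch of the usual list-then-convert idiom; "columns" is the name from the question. Growing an array with hstack inside a loop copies the whole array every iteration, whereas converting once at the end does not.)

        import numpy as np

        # Hedged sketch: collect pieces in a plain Python list, then build
        # the array in a single step at the end.
        cols = []
        for col in columns:          # 'columns' as in the question
            cols.append(col)
        mat = np.column_stack(cols)  # or np.vstack(rows) for row-wise building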

  • iPhone SDK: array initialised in viewDidLoad and released in dealloc is causing a leak?

    - by Skeep
    Hi all, I am new to iPhone development and I am getting ready to submit my first app. I have an array initialised in viewDidLoad and released in dealloc, yet it is causing a leak. My question is: why would this happen? Is it because dealloc has not run? My code is as follows. The NSMutableArray called listOfItems is declared in my .h file for the UITableView and is used to populate the table. In the viewDidLoad method I have this code:

        listOfItems = [[NSMutableArray alloc] init];

    and in dealloc I have:

        [listOfItems release];

    When watching leaks in Instruments, I get an NSArray leak which, when I drill down, points to the line:

        listOfItems = [[NSMutableArray alloc] init];

    Am I putting all this in the right place? Thanks

  • How to deal with array of checkbox values in Ruby 1.9.1?

    - by Ola Tuvesson
    In Ruby 1.9.1, strings are no longer enumerable, and String#each is undefined. This causes a problem when parsing values for an array of generated checkboxes, since the field value, when submitted, becomes an array of strings. For example:

        <% for map in Map.find(:all) %>
            <%= check_box_tag "listing[map_ids][]", map.id %>
            <%= map.title %>
        <% end %>

    This will result in an error in process_parameter_filter:

        undefined method `each' for "1":String

    because map_ids is being passed as "map_ids"=["1","2"] (if checkboxes with values 1 and 2 have been checked, that is). What is the recommended way to fix this?
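    (A hedged sketch of the usual application-side fix: iterate the array itself, which is still enumerable in 1.9, rather than calling each on a string. If the failure is inside Rails' own parameter filtering, as the process_parameter_filter trace suggests, a Rails version compatible with 1.9 is more likely the real fix.)

        # Hedged sketch: params[:listing][:map_ids] arrives as ["1", "2"].
        # Array() normalises nil / single values; only String#each is gone.
        map_ids = Array(params[:listing][:map_ids]).map(&:to_i)
        map_ids.each do |id|
          # ... associate Map.find(id) with the listing ...
        end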

  • How do you link to an action that takes an array as a parameter (RedirectToAction and/or ActionLink)

    - by Andrew
    I have an action defined like so:

        public ActionResult Foo(int[] bar) { ... }

    URLs like this will work as expected:

        .../Controller/Foo?bar=1&bar=3&bar=5

    I have another action that does some work and then redirects to the Foo action above for some computed values of bar. Is there a simple way of specifying the route values with RedirectToAction or ActionLink so that the URLs get generated like the example above? These don't seem to work:

        return RedirectToAction("Foo", new { bar = new[] { 1, 3, 5 } });
        return RedirectToAction("Foo", new[] { 1, 3, 5 });
        <%= Html.ActionLink("Foo", "Foo", new { bar = new[] { 1, 3, 5 } }) %>
        <%= Html.ActionLink("Foo", "Foo", new[] { 1, 3, 5 }) %>

    However, for a single item in the array, these do work:

        return RedirectToAction("Foo", new { bar = 1 });
        <%= Html.ActionLink("Foo", "Foo", new { bar = 1 }) %>

    When setting bar to an array, it redirects to the following:

        .../Controller/Foo?a=System.Int32[]

    Finally, this is with ASP.NET MVC 2 RC. Thanks.
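    (A hedged sketch of one common workaround: RouteValueDictionary cannot hold duplicate keys, so emit one indexed entry per element; the default model binder also accepts the bar[0]=1&bar[1]=3 form for arrays. Untested against the MVC 2 RC specifically.)

        // Hedged sketch: one dictionary entry per array element.
        var values = new RouteValueDictionary();
        int[] bar = { 1, 3, 5 };
        for (int i = 0; i < bar.Length; i++)
            values["bar[" + i + "]"] = bar[i];
        return RedirectToAction("Foo", values);
        // produces .../Controller/Foo?bar%5B0%5D=1&bar%5B1%5D=3&bar%5B2%5D=5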

  • How do I know an array of structures was allocated in the Large Object Heap (LOH) in .NET?

    - by AMissico
    After some experimentation using CLR Profiler, I found that:

        Node[,] n = new Node[100,23]; // 84,028 bytes, is not placed in LOH
        Node[,] n = new Node[100,24]; // 86,428 bytes, is

        public struct Node
        {
            public int Value;
            public Point Point;
            public Color Color;
            public bool Handled;
            public Object Tag;
        }

    During run-time, how do I know whether an array of structures (or any array) was allocated in the Large Object Heap (LOH)?
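    (A hedged runtime check, relying on the observable behaviour that large objects are allocated directly in generation 2: a freshly allocated object that already reports the maximum generation can only have come from the LOH, i.e. roughly >= 85,000 bytes, or >= 8,000 elements for double[].)

        using System;

        static class LohCheck
        {
            // Hedged sketch: only meaningful for objects checked right
            // after allocation, before any GC could promote them.
            static bool LikelyOnLoh(object o)
            {
                return GC.GetGeneration(o) == GC.MaxGeneration;
            }

            static void Main()
            {
                Console.WriteLine(LikelyOnLoh(new byte[80000]));  // expected: False
                Console.WriteLine(LikelyOnLoh(new byte[90000]));  // expected: True
            }
        }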
