Search Results

Search found 37101 results on 1485 pages for 'array based'.


  • Fixed Resource Monitor Graph Scale in Windows Server 2008 R2

    - by Clever Human
    In Windows Server 2008 R2's Resource Monitor, is there a way to set the scale of the various graphs to be constant values instead of variable based on data? It seems to me that the utility of a graph is to get a quick overview glance at the values those graphs are showing. So if I look at the CPU graph and the line is up near the top, I can know immediately that something is using all my CPU and go investigate what. I don't really care if the CPU is jumping between .01% and 2%. Or if the network usage monitor is up near the top, I will know that all my bandwidth is being used up, and go figure out what. But the way things are now, the graphs are meaningless because the scales constantly shift. If you look at the network usage graph in one second it might have a scale out of 100kbps, and the next second have a scale based on 1mbps! So... is there a registry key or something that will peg the scale of these graphs to logical maximums?

    Read the article

  • Intel ICH9/10R RAID 5 drive failure

    - by davpen
    About a year ago I was using the native Intel ICH9R RAID 5 on an Intel P35-based motherboard. The system was running Vista x64, and when one of the drives failed, Vista blue-screened on boot until I had figured out which drive had failed and removed it (a rather nerve-wracking hit-and-miss affair). The same thing happened some months later on another similar system, so it wasn't a one-off. This wasn't the robust RAID 5 drive-failure behavior I would have hoped for and expected. I moved to a HighPoint RocketRAID 2300 and haven't had any problems, although I have yet to have a drive fail with this setup. But I am now looking to build a new system based on an i7 and Windows 7. At the moment HighPoint doesn't have drivers for Windows 7, so I am considering moving back to the onboard Intel RAID. Yes, I know that I might get away with using the Vista drivers, but I don't really want to take that chance with critical data. The question, then, is: has anyone else experienced a drive failure with Intel RAID, and how did the OS and drivers handle it? Is it safe to go back?

    Read the article

  • Configuring SASL support in libmemcached

    - by John Keyes
    I'm trying to build libmemcached with SASL support on OS X Mountain Lion. I have built memcached (1.4.15) with SASL support:

        $ memcached -S -vv
        Initialized SASL.
        slab class 1: chunk size 96 perslab 10922
        ...
        slab class 42: chunk size 1048576 perslab 1
        <17 server listening (binary)
        <18 server listening (binary)
        <19 send buffer was 9216, now 3728270
        <20 send buffer was 9216, now 3728270
        <19 server listening (udp)
        <20 server listening (udp)
        ...

    I am trying to build libmemcached with SASL support too. I have tried the following:

        $ ./configure --prefix=/usr/local \
            --with-memcached-sasl=/usr/local/bin/memcached
        ...
        $ ./configure --prefix=/usr/local \
            --with-memcached-sasl="/usr/local/bin/memcached -S"
        ...

    But the resulting configuration summary is the same for both:

        Configuration summary for libmemcached version 1.0.11
          * Installation prefix:    /usr/local
          * System type:            apple-darwin12.2.0
          * Host CPU:               x86_64
          * C Compiler:             i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)
          * C Flags:                -O2 -Werror -Wall -Wextra -std=c99 -Wbad-function-cast -Wmissing-prototypes -Wnested-externs -Woverride-init
          * C++ Compiler:           i686-apple-darwin11-llvm-g++-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)
          * C++ Flags:              -O2 -Werror -Wall -Wextra -Wpragmas -D_FORTIFY_SOURCE=2 -Waddress -Wchar-subscripts -Wcomment -Wctor-dtor-privacy -Wfloat-equal -Wformat=2 -Wmissing-field-initializers -Wmissing-noreturn -Wnon-virtual-dtor -Wnormalized=id -Woverloaded-virtual -Wpointer-arith -Wredundant-decls -Wshadow -Wshorten-64-to-32 -Wsign-compare -Wstrict-overflow=1 -Wswitch-enum -Wundef -Wunused-variable -Wwrite-strings -fwrapv -ggdb
          * CPP Flags:              -I/usr/local/include
          * Assertions enabled:     no
          * Debug enabled:          no
          * Warnings as failure:    no
          * SASL support:

    Am I doing something incorrectly? Thanks.
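
    The blank "SASL support:" line makes me think configure never enabled the SASL code path at all, rather than disliking the memcached path. The next thing I plan to try is roughly this (untested sketch; it assumes this 1.0.11 tree actually exposes an --enable-sasl switch and uses --with-memcached only to locate a binary for the test suite, which ./configure --help should confirm):

        $ ./configure --help | grep -i sasl        # confirm which SASL-related options exist
        $ ./configure --prefix=/usr/local \
              --enable-sasl \
              --with-memcached=/usr/local/bin/memcached
        $ grep -i sasl config.log                  # if it still fails, see which check rejected SASL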

    Read the article

  • How to diagnose remote assistance problem

    - by cantabilesoftware
    I have a long-standing issue with Remote Assistance between a home and work PC. My wife and I both use MSN Messenger, and I used to be able to control her PC at home via MSN Remote Assistance. Some time ago this stopped working and I don't know why. We're both running the latest versions of MSN Live Messenger and I've checked that the appropriate firewall ports are open, but it still doesn't work and MSN just says something useless like "The person isn't responding". Any suggestions for how I can diagnose this?

    More info: I just tried direct Remote Desktop between the work PC and home PC and it works fine, so I presume all the appropriate ports are open. Just Remote Assistance doesn't work. I'd like to get RA working so I can demonstrate how to do things remotely. With Remote Desktop the person at the other end gets booted off and can't see; with Remote Assistance they can follow along step by step. Some comments below suggest using other solutions, which is fine and they do work, but there must be a way to diagnose RA and get it working.

    Experimenting with this some more: the notebook that I was using at work today that refused to connect works fine for Remote Assistance when I bring it home. So I guess this must be a problem with our network configuration at work. I've checked that 3389 is open on the firewall on the office router, and Remote Desktop works both ways... just not Remote Assistance. I've read that Remote Assistance won't work if client and server are both behind non-UPnP/NAT routers; if one has UPnP it's supposed to work. The office router doesn't have UPnP enabled but my home one does. I've also scoured the event logs on both ends; nothing noteworthy (unless I'm looking in the wrong spot).

    Note (copied from comment): I've just tried ShowMyPC, which is based on VNC, and it works, but I'd still like to figure out what's wrong with RA; it's just bugging me. The question is only about Remote Assistance, no need to propose solutions based on other programs. [/edit by Gnoupi]

    Read the article

  • Getting a Dell E6320 with an i7 to work with 3 monitors at 1920x1080 each

    - by MadBoy
    I want to buy a Dell E6320, which comes with an Intel Core i7-2620M (2.70GHz, 4MB cache, dual core) and Intel HD Graphics 3000. The laptop will come with a docking station, and I want to connect 3 monitors to that docking station so that working at home would give me some additional boost. The docking station will only allow me to connect 2 monitors, so I'm looking at the following other options:

    1. A Matrox TRIPLEHEAD2GO Digital Edition or TRIPLEHEAD2GO DP Edition. But reading the Matrox support page, the Intel GPU can't run the highest resolution with 3 monitors connected, and it gets even worse since it seems the monitors would have to be able to work at 50Hz. Also, I'm not sure, but it seems that Matrox doesn't split the output into 3 separate monitors but simply presents one big desktop space (which is a bit the opposite of what I need).
    2. Buy 2, or maybe just 1, USB-based monitor, but that would also mean having 1 or 2 monitors different from the main one, unless I buy 3 USB-based monitors, which would mean more money to spend. Also, I found only a couple of models, and most of them require USB 3.0 and no other cables to plug in (nice but costly; I couldn't find a decent monitor that sends the signal over USB only while taking power normally). But the docking station has only one USB 3.0 port. Can I use a hub and still get it to work?
    3. Find some converters from digital to USB (I think DisplayLink does some?).
    4. Buy a different laptop, but what kind? I need it to be an i7, small (13"), fast and lightweight. At the same time it needs a docking station that I can use at home to connect 3 external monitors.
    5. Some other suggested solution...

    Edit: I need 3 monitors for work in terms of coding in Visual Studio or having Word/Excel/Outlook open. Nothing fancy. Maybe some movie once in a while.

    Read the article

  • RAID-Z inaccessible after putting one disk offline

    - by varesa
    I have installed FreeNAS on a test server, with 3x 1Tb drives. They are set up in raidz. I tried to offline one of the disks (from the FreeNAS web UI), and the array became degraded, as I think it should. The problem is with the array becoming inaccessible after that. I thought a raid like that should be able to run fine with one of the disks missing. At least very soon after I offlined and pulled out the disk, the iSCSI share disappeared from an ESXi host's datastores. I also ssh'd into the FreeNAS server, and tried just executing ls /mnt/raid (/mnt/raid/ being the mount point). The whole terminal froze, not accepting ^C or anything.

        # zpool status -v
          pool: raid
         state: DEGRADED
        status: One or more devices are faulted in response to IO failures.
        action: Make sure the affected devices are connected, then run 'zpool clear'.
           see: http://www.sun.com/msg/ZFS-8000-HC
         scrub: none requested
        config:

            NAME                                            STATE     READ WRITE CKSUM
            raid                                            DEGRADED      1    30     0
              raidz1                                        DEGRADED      4    56     0
                gptid/c8c9e44c-08e1-11e2-9ba6-001b212a83ea  ONLINE        3    60     0
                gptid/c96f32d5-08e1-11e2-9ba6-001b212a83ea  ONLINE        3    63     0
                gptid/ca208205-08e1-11e2-9ba6-001b212a83ea  OFFLINE       0     0     0

        errors: Permanent errors have been detected in the following files:

                /mnt/raid/
                raid/iscsivol:<0x0>
                raid/iscsivol:<0x1>

    Have I understood the workings of a raidz wrong, or is there something else going on? It would not be nice to have the same thing happen on a production system...
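
    For reference, what strikes me in that output is that the two members still ONLINE also show non-zero READ/WRITE error counters, which (if I read the ZFS docs right) would explain why a raidz1 with one member offlined stops serving data. Once the pulled disk is back in the chassis, my rough plan is this (untested sketch; the gptid is just the one from the listing above):

        # zpool online raid gptid/ca208205-08e1-11e2-9ba6-001b212a83ea
        # zpool clear raid          # clear the fault state and retry the outstanding I/O
        # zpool status -v raid      # watch the resilver
        # zpool scrub raid          # verify parity once the resilver finishes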

    Read the article

  • How to set up mysql storage for certain rsyslog input matches?

    - by ylluminate
    I'm draining various logs from Heroku to an rsyslog Linux (Ubuntu) server and am starting to have a little more to bite off than I can chew in terms of working with my log histories. I need to be able to drill back in time based on more flexible details, with more flexible access than what the standard syslog file(s) provide. I'm thinking that logging to MySQL may be the correct approach, but how do I set this up so that it pulls only certain log entries into a table based on an identifier? For example, I see a long hex string identifying each log entry from a certain Heroku app instance. I assume that I can just pipe those into the MySQL socket vs ALL rsyslog input into MySQL... Could someone please direct me to a resource that can walk me through the process of setting something like this up, or simply provide some details that can help? I have 15+ years of Unix experience, so I just need some nudging in the right direction as I've not really done a tremendous amount of work with syslog daemons previously in terms of pooling various servers into one. Additionally, I'd be interested in any log review tools that could make drilling through log arrangements like this more handy for developers.
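
    From what I've pieced together so far, the rsyslog side might look roughly like this (a sketch only, assuming the ommysql output module is installed, e.g. the rsyslog-mysql package on Ubuntu, which also ships a createDB.sql that creates the default Syslog database and SystemEvents table; the hex token and credentials below are made up):

        # /etc/rsyslog.d/60-heroku-mysql.conf
        $ModLoad ommysql

        # write only entries whose message contains this app's drain token to MySQL,
        # then discard them so they are not also written to the normal files
        # (newer rsyslog spells the discard '& stop')
        :msg, contains, "d.1a2b3c4d" :ommysql:localhost,Syslog,rsyslogusr,rsyslogpwd
        & ~

    Everything that doesn't match keeps flowing to the existing log files, so this could run alongside the current setup while I test it.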

    Read the article

  • Move the uploads folder in Wordpress

    - by Victor Hurdugaci
    Currently, my WordPress uploads folder is located in \wp-content\uploads. Initially there was no structure, so all files were put there. After a while it was changed to upload the files into \wp-content\uploads\YEAR\MONTH. Now that folder contains a mix of files (those starting with + are folders), like:

        +wp-content
        | +2010
        | | +02
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +01
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +2009
        | | +12
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +11
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +..
        | | | ..
        | Unstructured-file-1
        | Unstructured-file-2
        | ...
        | Unstructured-file-n

    Based on the dates of the unstructured files, I would like to move them into a structured hierarchy (based on date, move each one to a folder \wp-content\uploads\YEAR\MONTH). Now, my questions are:

    1. Where do I write and execute a script to do the moving (I don't have full access to the server, just to cPanel and to the WordPress admin page)?
    2. What must be updated so that the links in posts that reference the unstructured files point to the new location of those files?
    3. Not fully related to the previous description: is it alright to move the whole uploads folder to another location, like \uploads?

    PS: Moving the files/updating the database manually is not an option :)
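
    For question 2, as far as I understand the WordPress schema, the references live in wp_posts.post_content for links embedded in posts and in the _wp_attached_file rows of wp_postmeta for attachments, so the script would need something along these lines per moved file (illustrative SQL only, paths made up; it could be run from phpMyAdmin in cPanel, after a database backup):

        -- example: File-1 was moved into 2010/02/
        UPDATE wp_posts
           SET post_content = REPLACE(post_content,
                                      '/wp-content/uploads/File-1',
                                      '/wp-content/uploads/2010/02/File-1');

        UPDATE wp_postmeta
           SET meta_value = REPLACE(meta_value, 'File-1', '2010/02/File-1')
         WHERE meta_key = '_wp_attached_file';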

    Read the article

  • How do I run multiple MVC apps within a subdomain on IIS7?

    - by Matthew Patrick Cashatt
    Hello and thanks for looking.

    Background: I am currently wrapping up a development contract and the client would like me to push a build of the application to their IIS 7-based server, on which they would like to run multiple MVC apps. One of the issues I have right off the bat is that this server is already a subdomain on their larger network. So, if I enter SERVERNAME in my browser, it automatically directs to SERVERNAME.COMPANYNAME.COM. Now, this is just fine if I place my application in the default website/root. In this scenario, clicking a link that requests admin.html directs to SERVERNAME.COMPANYNAME.COM/admin.html as usual. BUT they want me to place the app in a subdomain on this server so that they can also run other apps on the same server. So I assume that I need MYAPP.SERVERNAME.COMPANYNAME.COM, but I have no idea how to do that. Complicating matters is that my app and the future ones they wish to install are all MVC-based, which intercepts and rewrites URLs. I assume that this takes care of itself if I can just successfully get my app into a subdomain to begin with.

    What I have tried:

    - Creating a new site on the server in its own app pool
    - Setting the binding for that site to MYAPP.SERVERNAME.COMPANYNAME.COM
    - Setting the binding for that site to MYAPP
    - Setting the binding for that site to MYAPP.SERVERNAME
    - Setting the binding for that site to MYAPP.SERVERNAME.COM
    - Setting the binding for that site to MYAPP.COMPANYNAME.COM

    Nothing is working. Am I missing something simple here? Thanks, Matt
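
    For what it's worth, my current guess is that host-header bindings alone aren't enough: MYAPP.SERVERNAME.COMPANYNAME.COM also has to resolve in the company's DNS (an A or CNAME record pointing at this server), otherwise requests never arrive carrying that host header at all. Assuming that record exists, the IIS 7 side would look roughly like this with appcmd (site name and path are made up):

        %windir%\system32\inetsrv\appcmd add apppool /name:MyAppPool
        %windir%\system32\inetsrv\appcmd add site /name:MyApp /physicalPath:C:\inetpub\myapp ^
            /bindings:http/*:80:myapp.servername.companyname.com
        %windir%\system32\inetsrv\appcmd set app "MyApp/" /applicationPool:MyAppPool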

    Read the article

  • SSH attack CentOS Amazon EC2

    - by user37143
    Hi, I run a few RightScale CentOS AMI-based instances on Amazon EC2. Two months back I found that our SSHD security was compromised (I had added hosts.allow and hosts.deny rules for SSH). So I created new instances, set up IP-based SSH access that allows only our IPs through the AWS firewall (ec2-authorize), and changed SSH from the default port 22 to some other port. But two days back I found I was not able to log in to the server, and when I tried on port 22 the SSH connection succeeded, and I found that sshd_config had been changed. When I tried to edit sshd_config I found root had no write permission on the file. So I tried a chmod and it said access denied for the 'root' user. This is very strange. I checked the secure log and history and found nothing informative. I have PHP, Ruby on Rails, Java and WordPress apps running on these servers. This time I did a chkrootkit scan and found nothing. I renamed the /etc/ssh folder and reinstalled OpenSSH through yum. I have faced this on 3 instances on CentOS (5.2, 5.4). I have instances on Debian as well; those are working fine. Is this a CentOS/RightScale issue? What security measures should I take to prevent this? Please help, this is very critical. Thanks
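
    One guess I am chasing, based on the "root cannot chmod its own file" symptom: rootkits often set the immutable attribute on files they replace, which would produce exactly that error. A quick diagnostic sketch, nothing more:

        lsattr /etc/ssh/sshd_config        # an 'i' in the flags means the immutable bit is set
        chattr -i /etc/ssh/sshd_config     # clears it
        rpm -Va openssh-server             # checks whether the packaged sshd files were altered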

    Read the article

  • How do I reinitialise a failed RAID 5 drive using terminal on Ubuntu Server

    - by Stephen
    I've currently put together a new system and part of that has been creating a software RAID 5 using 'mdadm' in Ubuntu Server. I successfully got to the point where I create the array using:

        sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

    I left it to do its thing overnight then used the following command to check on it:

        watch cat /proc/mdstat

    To which the following was returned:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid5 sdd1[4](S) sdc1[2] sdb1[1] sda1[0](F)
              5860535808 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/2] [_UU_]

        unused devices: <none>

    It appears that one has failed (and I'm not too savvy with why another is a spare). So, just to be sure that something else isn't amiss I wanted to try and re-engage the failed drive. Can someone explain how I can do that and what I should do with the spare (if anything). And also how do I know when synchronisation is complete? The tutorial I used to get this far is located here: http://sonniesedge.co.uk/2009/06/13/software-raid-5-on-ubuntu-904/

    Many thanks!

    p.s. Here is some extra information that may help:

        sudo mdadm --detail /dev/md0
        /dev/md0:
                Version : 1.2
          Creation Time : Mon Jun 18 21:14:21 2012
             Raid Level : raid5
             Array Size : 5860535808 (5589.04 GiB 6001.19 GB)
          Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
           Raid Devices : 4
          Total Devices : 4
            Persistence : Superblock is persistent
            Update Time : Mon Jun 18 21:50:26 2012
                  State : clean, FAILED
         Active Devices : 2
        Working Devices : 3
         Failed Devices : 1
          Spare Devices : 1
                 Layout : left-symmetric
             Chunk Size : 512K
                   Name : myraidbox:0  (local to host myraidbox)
                   UUID : a269ee94:a161600c:fb1665e7:bd2f27b3
                 Events : 13

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8       17        1      active sync   /dev/sdb1
               2       8       33        2      active sync   /dev/sdc1
               3       0        0        3      removed

               0       8        1        -      faulty spare   /dev/sda1
               4       8       49        -      spare   /dev/sdd1
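
    My current plan for re-engaging /dev/sda1, pieced together from the mdadm man page (so treat it as a sketch, and only if the disk itself checks out healthy); as I understand it, sdd1 shows as (S) because the last member stays a spare until the initial build completes, which it cannot do while sda1 is faulted:

        sudo smartctl -a /dev/sda                  # sanity-check the disk before trusting it again
        sudo mdadm /dev/md0 --remove /dev/sda1     # drop the faulted member
        sudo mdadm /dev/md0 --add /dev/sda1        # re-add it; a resync should start
        cat /proc/mdstat                           # shows a progress bar and finish estimate during the resync
        sudo mdadm --detail /dev/md0               # State returns to 'clean' when synchronisation is complete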

    Read the article

  • Knowledge and user generated content management system to track files, research, proposals, etc.?

    - by Eshwar
    I'll try to keep it short. Here's the scenario: we have employees all over the world performing similar work, i.e. research, generating PowerPoint slides, Word documents, graphics, etc. Many times a lot of this previous work can be reused for another future project. The current arrangement is email and phone calls, which as you would agree is quick if you know where to look, but otherwise archaic and very, very inefficient. So I am looking for software that will allow me to do the following:

    - Tag files, e.g. an investor presentation on cellphone usage in Kenya would be tagged investor, cellphone, kenya
    - Manage references, e.g. if we read something on the internet, we should be able to paste that link in some fashion and tag it as above
    - Preferably cloud based, so that it can be accessed by anybody; additionally it would be nice (though NOT a must) to have access levels (director, manager, everyone)
    - A nice interface that non technically savvy folks can warm up to ;)
    - A desktop app would be handy so that people don't always have to click upload or something
    - A tree-based system is inefficient in this case because content is usually linked across branches and also people might not quite agree on one format of a tree. Tagging works around this very nicely.

    What I have considered so far:

    - Evernote (for its more professional look)
    - Springpad (for its versatility with content)
    - Mendeley (this is a research manager and in some ways ideal, but I fear it's limited to PDFs)

    The goal is that when somebody wants to look for a document, they don't have to ask a colleague; they can just search with keywords and all relevant information shows up. Thanks!

    Read the article

  • What to do when a device has no driver for Windows 7 but it has Vista, XP drivers

    - by Mehper C. Palavuzlar
    This has always been a bothersome matter for me. Some devices (printers, scanners, etc.) have drivers for older versions of Windows (Vista, XP, 2000, NT) but no driver for Windows 7. What are my chances to install such devices on Windows 7? Example case: I have a Sharp printer & scanner (Sharp AR-122E N) which I have used for my old Windows XP based PC. Now I want to install it on a Windows 7 x64 based PC. Windows 7 cannot load its driver. I used the original driver CD but when I run the setup.exe (which is included in AR122EN111.exe, 6713KB), it says Cannot install driver on this operating system. Supported operating systems are: Windows 2000, XP, Vista. I tried to install the driver using compatibility settings. I tried Windows Vista and Windows XP SP3, but to no avail. The setup gave the same error. I also googled for Windows 7 driver for "Sharp AR-122E N" but it only listed the original driver that I tried. The official site of Sharp does not even list the driver for this product. In the past, the compatibility setting workaround did work for some devices, but this time it failed. What else can I do to overcome this problem?

    Read the article

  • I cannot access Windows Update at all

    - by Cardinal fang
    I have been unable to access the Windows Update site for a couple of weeks now. I just get a message saying "Internet Explorer cannot display the webpage" and saying I have connection problems. The same thing happens with any other Microsoft site I try to access. Automatic Updates also does not work. I can access every other website I've surfed to.

    I've tried Googling the problem, and based on what other sites have suggested I have cleared my cache and temp files. I've scanned my hard drive with my antivirus in case I have a virus (nada). I've tried turning off my firewall and anti-virus (I run ZoneAlarm). I've downloaded Spybot and scanned my drive with that in case something was missed by ZoneAlarm (again nada). Based on suggestions from the smart cookies on the Bad Science forum, I've used nslookup to check my translation isn't wonky (got all the info they said I should get). I've also tried navigating there directly using the IP address I was given (nope).

    I normally access the internet through a 3 mobile broadband connection, but have also tried connecting using a mate's wi-fi connection in case it was something on my mobile modem interfering. I run Windows XP SP3 with Internet Explorer 7 and ZoneAlarm Internet Security Suite as my anti-virus/firewall. Any suggestions?
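
    Two more things I am planning to check, since it is only Microsoft sites that fail (just a sketch of the commands, based on what I have read about this symptom on XP):

        :: the hosts file is a classic place for malware to black-hole update servers
        notepad %SystemRoot%\system32\drivers\etc\hosts

        :: the WinHTTP proxy used by Automatic Updates is separate from the IE proxy
        proxycfg
        proxycfg -d

        :: repair a damaged Winsock stack, then reboot
        netsh winsock reset catalog
        ipconfig /flushdns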

    Read the article

  • Scaling databases with cheap SSD hard drives

    - by Dennis Kashkin
    Hey guys! I hope that many of you are working with high traffic database-driven websites, and chances are that your main scalability issues are in the database. I noticed a couple of things lately:

    - Most large databases require a team of DBAs in order to scale. They constantly struggle with limitations of hard drives and end up with very expensive solutions (SANs or large RAIDs, frequent maintenance windows for defragging and repartitioning, etc.). The actual annual cost of maintaining such databases is in the $100K-$1M range, which is too steep for me :)
    - Finally, we got several companies like Intel, Samsung, FusionIO, etc. that just started selling extremely fast yet affordable SSD hard drives based on SLC Flash technology. These drives are 100 times faster in random read/writes than the best spinning hard drives on the market (up to 50,000 random writes per second). Their seek time is pretty much zero, so the cost of random I/O is the same as sequential I/O, which is awesome for databases. These SSD drives cost around $10-$20 per gigabyte, and they are relatively small (64GB).

    So, there seems to be an opportunity to avoid the HUGE costs of scaling databases the traditional way by simply building a big enough RAID 5 array of SSD drives (which would cost only a few thousand dollars). Then we don't care if the database file is fragmented, and we can afford 100 times more disk writes per second without having to spread the database across 100 spindles.

    Is anybody else interested in this? I've been testing a few SSD drives and can share my results. If anybody on this site has already solved their I/O bottleneck with SSDs, I would love to hear your war stories!

    PS. I know that there are plenty of expensive solutions out there that help with scalability, for example the time-proven RAM-based SANs. I want to be clear that even $50K is too expensive for my project. I have to find a solution that costs no more than $10K and does not take much time to implement.

    Read the article

  • What is the optimum way to secure a company wide wiki?

    - by Mark Robinson
    We have a wiki which is used by over half our company. Generally it has been very positively received. However, there is a concern over security: not letting confidential information fall into the wrong hands (i.e. competitors). The default answer is to create a complicated security matrix defining who can read which document (wiki page) based on who created it. Personally I think this mainly solves the wrong problem, because it creates barriers within the company instead of a barrier to the external world. But some are concerned that people at a customer site might share information with a customer which then goes to the competitor. The administration of such a matrix is a nightmare because (1) the matrix is based on department and not projects (this is a matrix organisation), and (2) in a wiki all pages are by definition dynamic, so what is confidential today might not be confidential tomorrow (but the history is always readable!). Apart from the security matrix, we've considered restricting content on the wiki to non-super-secret stuff, but of course that needs to be monitored. Another solution (the current one) is to monitor views and report anything suspicious (e.g. one person at a customer site having 2000 views in two days was reported). Again, this is not ideal because it does not directly imply a wrong motive. Does anyone have a better solution? How can a company-wide wiki be made secure and yet keep its low-threshold USP? BTW we use MediaWiki with Lockdown to exclude some administrative staff.
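
    For context, the kind of restriction MediaWiki plus Lockdown lets us express in LocalSettings.php looks roughly like this (illustrative only; the group and namespace names are made up, and MediaWiki's own documentation is clear that page-level read security in a wiki is never watertight):

        # core MediaWiki group permissions
        $wgGroupPermissions['*']['read']    = false;  # no anonymous reading
        $wgGroupPermissions['user']['read'] = true;   # any logged-in employee can read

        # with Extension:Lockdown, restrict reading of one namespace to a trusted group
        $wgExtraNamespaces[100] = 'Confidential';
        $wgNamespacePermissionLockdown[100]['read'] = array('management');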

    Read the article

  • Windows 8 x64 with VMWare Workstation or inside ESXi

    - by Dommer
    I need to run several virtual machines on a Core i7-920 box with 12GB of RAM and a 256GB SSD to host the VMs. It also has a Highpoint RocketRaid 2720SGL RAID controller with a 12TB RAID 5 array. I want one of my VMs to run Windows 8 x64, to have access to the RAID array as a native disk (not as networked drives, and it needs to run at full speed) and to be able to send files quickly across the network. Initially I thought I'd try to do this using ESXi 5, but I have been unable to find any working RAID drivers for the RR2720SGL and it is not on the HCL for ESXi 5. In light of this, I have installed Windows 8 x64 on the hardware and am thinking of installing VMware Workstation and running my VMs inside there. I guess my questions are these:

    1. How does VMware Workstation 9 perform compared to ESXi 5? In the real world, I mean? Presumably installing Win 8 as the host OS will give me way better performance for that Win 8 machine than Win 8 running under ESXi?
    2. I should stick with Windows 8 x64 as the host OS, right?
    3. If I install a domain controller VM inside my Win 8 box and join the Win 8 machine to that domain, am I insane (I would guess the Win 8 machine wouldn't see the domain controller until it finished starting everything up, but I don't think that matters)?!
    4. Is it feasible to give metrics like this, and if so, what is the likely value of x? 25%? 50%? 75%? "Win 8 under ESXi runs x% as fast as Win 8 installed bare metal."

    Read the article

  • How to properly remove a disk from a PERC 6/i RAID controller?

    - by Stefano Borini
    I have a Dell T710 with a PERC 6/i RAID controller. The current RAID setup has 2 x 500 GB hard drives (with the OS) and 6 x 1000 GB hard drives (in RAID-6, currently empty). I would like to take one 1000 GB disk physically out to keep as an immediate spare in case of a crash, and configure the remaining 5 x 1000 GB as a single VD in RAID-6. This is all nice and clean and works, until I realized that the display on the machine reports the lack of the 8th disk as an error. It's marked as an error, but appears to be a warning, since the machine is fully functional. My question is: what is the best way to keep one disk as a spare out of the array? Should I remove the disk from its cradle and insert the empty cradle back into the array? Or should I just silence the error on the display in some way (how?). I know that what I am doing sounds pretty strange, but this is academia and getting hold of a spare disk could take weeks. Better to have one ready in my drawer for any emergency.

    Read the article

  • Should I use "Raid 5 + spare" or "Raid 6"?

    - by Trevor Boyd Smith
    What is "Raid 5 + Spare" (excerpt from User Manual, Sect 4.17.2, P.54): RAID5+Spare: RAID 5+Spare is a RAID 5 array in which one disk is used as spare to rebuild the system as soon as a disk fails (Fig. 79). At least four disks are required. If one physical disk fails, the data remains available because it is read from the parity blocks. Data from a failed disk is rebuilt onto the hot spare disk. When a failed disk is replaced, the replacement becomes the new hot spare. No data is lost in the case of a single disk failure, but if a second disk fails before the system can rebuild data to the hot spare, all data in the array will be lost. What is "Raid 6" (excerpt from User Manual, Sect 4.17.2, P.54): RAID6: In RAID 6, data is striped across all disks (minimum of four) and a two parity blocks for each data block (p and q in Fig. 80) is written on the same stripe. If one physical disk fails, the data from the failed disk can be rebuilt onto a replacement disk. This Raid mode can support up to two disk failures with no data loss. RAID 6 provides for faster rebuilding of data from a failed disk. Both "Raid 5 + spare" and "Raid 6" are SO similar ... I can't tell the difference. When would "Raid 5 + Spare" be optimal? And when would "Raid 6" be optimal"? The manual dumbs down the different raid with 5 star ratings. "Raid 5 + Spare" only gets 4 stars but "Raid 6" gets 5 stars. If I were to blindly trust the manual I would conclude that "Raid 6" is always better. Is "Raid 6" always better?

    Read the article

  • Laptop authentication/logon via accelerometer tilt, flip, and twist

    - by wonsungi
    Looking for another application/technology: A number of years ago, I read about a novel way to authenticate and log on to a laptop. The user simply had to hold the laptop in the air and execute a simple series of tilts and flips to the laptop. By logging accelerometer data, this creates a unique signature for the user. Even if an attacker watched and repeated the exact same motions, the attacker could not replicate the user's movements closely enough. I am looking for information about this technology again, but I can't find anything. It may have been an actual feature on a laptop, or it may have just been a research project. I think I read about it in a magazine like Wired. Does anyone have more information about authentication via unique accelerometer signatures? Here are the closest articles I have been able to find: Knock-based commands for your Linux laptop Shake Well Before Use: Authentication Based on Accelerometer Data[PDF] Inferring Identity using Accelerometers in Television Remote Controls User Evaluation of Lightweight User Authentication with a Single Tri-Axis Accelerometer Identifying Users of Portable Devices from Gait Pattern with Accelerometers[PDF] 3D Signature Biometrics Using Curvature Moments[PDF] MoViSign: A novel authentication mechanism using mobile virtual signatures

    Read the article

  • HP DL185 - very slow disk read speed

    - by fistameeny
    Hi, I have an HP DL185 G6 server (12-disk model) with the following spec:

    - Quad Core Xeon 2.27GHz
    - 6GB RAM
    - HP P212 RAID controller with battery backup
    - 2 x 128GB 15K SAS 3.5" (RAID-1 for the operating system)
    - 4 x 750GB 7.5K SAS 3.5" (RAID-5 for the data, 2TB usable space)

    The operating system is Ubuntu Server 9.10. Both drives have been formatted as EXT4. We are finding that the read speed of the RAID-5 array is poor. Disk test results below:

        sudo hdparm -tT /dev/cciss/c0d1p1

        /dev/cciss/c0d1p1:
         Timing cached reads:   15284 MB in  2.00 seconds = 7650.18 MB/sec
         Timing buffered disk reads:   74 MB in  3.02 seconds = 24.53 MB/sec

    For info, the RAID-1 array performs as follows:

        sudo hdparm -tT /dev/cciss/c0d0p1

        /dev/cciss/c0d0p1:
         Timing cached reads:   15652 MB in  2.00 seconds = 7834.26 MB/sec
         Timing buffered disk reads:  492 MB in  3.01 seconds = 163.46 MB/sec

    We thought this was because, with no battery, read/write cache is disabled. We have bought and installed the battery backup and have used the HP bootable CD to change the cache settings to 50% read / 50% write and check that cache is enabled on the drives and the controller. Is there something I'm missing?
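
    Two things I still plan to rule out, based on what I have been reading about cciss devices (treat the exact numbers as guesses):

        # block-device read-ahead: the default is often tiny, which hurts RAID-5 sequential reads
        sudo blockdev --getra /dev/cciss/c0d1
        sudo blockdev --setra 16384 /dev/cciss/c0d1    # try a larger value, then re-run hdparm

        # the controller's own view of the cache, assuming HP's hpacucli utility is installed
        sudo hpacucli ctrl all show config detail | grep -i -A2 cache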

    Read the article

  • Tools for displaying a multidimensional data table?

    - by ShreevatsaR
    [Apologies if this sort of question is off-topic for SuperUser. Please redirect to the right place if so.] There is a 3-dimensional array of values. (That is, instead of a table/2-dimensional array with values in a grid, the values can be thought of in a cube instead.) Is there a way to display this "cube" interactively, ideally on a webpage? Specifically, given the data, it would work something like this: the user selects two of the 3 variables. He then sees a "stack" of tables, one for each value of the third variable (cross-sections, in other words). By selecting the appropriate table from the stack, he can see the (i,j,k) value he wants. The "technology" for displaying such a thing (stacked tables, rotation, etc.) already exists, so this seems the sort of thing that someone ought to have written already. To be clear: I don't need sophisticated graphics necessarily, just the ability to select from cross-sections of variables. But I have no experience with (say, for displaying on a webpage) what web gadgets exist, so I'm clueless how to even search for one. (Google searches like "multidimensional data visualization" didn't throw up anything useful. Google Spreadsheets can do a few kinds of charts which can be embedded on a webpage, but I cannot tell if this is one of them.) [I can imagine how it ought to work for higher dimensions. For four-dimensions, instead of selecting just a stack, you'd first select an (i,j) from an "outer table", which would show all (k,l) values for that (i,j). For higher dimensions, inductively: you select (i,j), and then repeat what you'd do with 2 fewer dimensions.] So has this been written? Is this easy to write? Where ought one to look for such a thing?

    Read the article

  • Move every 3 rows into a column in Excel

    - by Eliane El Asmr
    Please, I need your help. I need to move every 3 rows into a new column.

    Let's suppose I have this (each value on its own row):

        Ambassade de France
        S.E. M. Patrice PAOLI
        01-420000-420150
        Ambassade de France
        Mme. Jamilé Anan
        01-420000-420150
        Ambassade de France
        Mme. Marie Maamari
        01-420000-420150

    I need them to be like this (three columns per row):

        Ambassade de France | S.E. M. Patrice PAOLI | 01-420000-420150
        Ambassade de France | Mme. Jamilé Anan      | 01-420000-420150
        Ambassade de France | Mme. Marie Maamari    | 01-420000-420150

    I have this code. Can you help me please? It's giving me an "Out of range" error. What should I change? It's urgent. (The code is for every 7 rows; I need it for every 3.)

        Sub Every7()
            Dim i As Integer, j As Integer, cl As Range
            Dim myarray(100, 6) As Integer
            'I don't know what your data is. Mine is integer data
            'Change 100 to however many rows you have in your original data, divided by seven, round up
            'remember arrays start at zero, so 6 really is 7
            If MsgBox("Is your entire data selected?", vbYesNo, "Data selected?") <> vbYes Then
                MsgBox ("First select all your data")
            End If
            'Read data into array
            For Each cl In Selection.Cells
                Debug.Print cl.Value
                myarray(i, j) = cl.Value
                If j = 6 Then
                    i = i + 1
                    j = 0
                Else
                    j = j + 1
                End If
            Next
            'Now paste the array for your data into a new worksheet
            Worksheets.Add
            Range(Cells(1, 1), Cells(101, 7)) = myarray
        End Sub

    Thank you.

    Read the article

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer using Ubuntu. The problem is, we also have developer tools that only work with 'officially' supported Linux distributions that use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.) We could of course just tell them to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :)

    Which brings me to some questions: is there any good/easy/recommended way to run something like Ubuntu as a host and CentOS 5.x as a guest OS (with which system: Xen, KVM, VMware, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option just recently included in RHEL 5.4, but if hardware support for virtualization like Intel-VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. I would be very interested to hear how anyone else has addressed this kind of issue.

    EDIT: The target audience / users of this kind of system would be developers; each one needs to run locally licensed commercial software, so building out some separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there was some step-by-step guide out there to build your own pre-built distribution that had e.g. CentOS 5.x and Ubuntu Desktop as a guest.
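
    On the Intel-VT/AMD-V point, checking whether a given developer PC can host KVM guests at all is quick (a sketch; kvm-ok comes from Ubuntu's cpu-checker package, if it is available for the release in question):

        egrep -c '(vmx|svm)' /proc/cpuinfo    # 0 means no hardware virtualization, or it is disabled in the BIOS
        kvm-ok                                # friendlier summary on Ubuntu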

    Read the article

  • How to diagnose storage system scaling problems?

    - by Unknown
    We are currently testing the maximum sequential read throughput of a storage system (48 disks in total behind two HP P2000 arrays) connected to an HP DL580 G7 running RHEL 5 with 128 GB of memory. Initial testing has been done mainly by running dd commands like this, in parallel for each disk:

        dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=3000

    However, we have been unable to scale the results from one array (maximum throughput of 1.3 GB/s) to two (almost the same throughput). Each array is connected to a dedicated host bus adapter, so they should not be the bottleneck. The disks are currently in a JBOD configuration, so each disk can be addressed directly. I have two questions:

    1. Is running multiple dd commands in parallel really a good way to test maximum read throughput? We have noticed very high SWAPIN-% numbers in iotop, which I find hard to explain because the target is /dev/null.
    2. How should we proceed in trying to find the reason for the scaling problem? Do you think the server itself is the bottleneck here, or could there be some Linux parameters that we have overlooked?
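
    On the first question, one refinement we are considering (a sketch; device names are just examples): plain dd reads go through the page cache, which both creates memory pressure (probably the SWAPIN we see) and hides what the arrays can actually deliver, so O_DIRECT reads, or a purpose-built tool, should give more honest numbers:

        for dev in /dev/mapper/mpath1 /dev/mapper/mpath2; do
            dd if=$dev of=/dev/null bs=1M count=3000 iflag=direct &
        done
        wait

        # or, with fio, which also controls queue depth and reports aggregate bandwidth:
        # fio --name=seqread --rw=read --bs=1M --direct=1 --numjobs=8 --filename=/dev/mapper/mpath1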

    Read the article
