Search Results

Search found 59449 results on 2378 pages for 'disk error'.

Page 181/2378

  • SQL Server: One 12-drive RAID-10 array or 2 arrays of 8-drives and 4-drives

    - by ben
    Setting up a box for SQL Server 2008, which would give the best performance (heavy OLTP)? The more drives in a RAID-10 array, the better the performance, but would losing 4 drives to dedicate them to the transaction logs give us more performance overall? Option A: 12 drives in RAID-10 plus one hot spare. Option B: 8 drives in RAID-10 for the database and 4 drives in RAID-10 for the transaction logs, plus 2 hot spares (one for each array). We have 14 drive slots to work with, and it's an older PowerVault that doesn't support global hot spares.
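
    As a rough back-of-the-envelope comparison of the two layouts (the ~180 IOPS per spindle and the 2x RAID-10 write penalty below are assumptions, not measurements; plug in figures for the actual drives):

        # Rough sketch only: real OLTP behaviour depends on controller cache,
        # queue depth, and how sequential the log writes really are.
        IOPS_PER_SPINDLE = 180        # assumption: ~15K SAS drive, random I/O
        RAID10_WRITE_PENALTY = 2      # each write lands on both mirror members

        def raid10_write_iops(spindles):
            return spindles * IOPS_PER_SPINDLE / RAID10_WRITE_PENALTY

        print("12-drive array, data + log mixed:", raid10_write_iops(12))  # ~1080
        print("8-drive data array:", raid10_write_iops(8))                 # ~720
        print("4-drive log array:", raid10_write_iops(4))                  # ~360

    The split gives the data files fewer raw IOPS, but the log array sees almost purely sequential writes, so the real question is whether keeping random data I/O away from the log's sequential write pattern is worth more than the extra spindles.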

    Read the article

  • Simple Workstation Imaging Solution?

    - by Will
    Hey guys, I need a fairly cheap imaging solution for Windows XP corporate desktops. Ideally, I'd be able to set up a desktop exactly as we want it, create an image, deploy this image to a server, then boot a new desktop to a CD/USB Drive/Network and quickly set up the workstation. Ideally, each computer would also have a unique workstation name. Any ideas? Right now I'm using a custom built Linux DD solution, but it's slow, not network-based, can't image multiple computers at the same time as there's only one copy on a USB drive, and can't uniquely name the computers. Thanks, Will

    Read the article

  • Stack Overflow Error

    - by dylanisawesome1
    I recently created a recursive cave algorithm, and would like to have more extensive caves, but get a stack overflow after recursing a couple of times. Any advice? Here's my code:

        for (int i = 0; i < 100; i++) {
            int rand = new Random().nextInt(100);
            if (rand <= 20) {
                if (curtile.bounds.y - 40 > 500 + new Random().nextInt(20))
                    digDirection(Direction.UP);
            }
            if (rand <= 40 && rand > 20) {
                if (curtile.bounds.y + 40 < m.height)
                    digDirection(Direction.DOWN);
            }
            if (rand <= 60 && rand > 40) {
                if (curtile.bounds.x - 40 > 0)
                    digDirection(Direction.LEFT);
            }
            if (rand <= 80 && rand > 60) {
                if (curtile.bounds.x + 40 < m.width)
                    digDirection(Direction.RIGHT);
            }
        }
    }

    public void digDirection(Direction d) {
        if (new Random().nextInt(100) <= 10) {
            new Miner(curtile, map);
            // Tried a Thread.sleep(2) here to avoid the stack overflow. Didn't work.
        }
    }
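
    Since the overflow comes from each new Miner recursively spawning more Miners, one fix is to drive the digging from an explicit worklist instead of the call stack. A minimal sketch of that pattern in Python (the 40-pixel step and the roughly-20%-per-direction chance mirror the code above; the names and bounds are assumptions):

        import random
        from collections import deque

        def carve_caves(start, width, height, max_tiles=10_000):
            """Iterative cave digger: a deque of pending tiles replaces recursion,
            so depth is limited by memory rather than by the call stack."""
            pending = deque([start])      # tiles still to dig from
            carved = set()
            while pending and len(carved) < max_tiles:
                x, y = pending.popleft()
                if (x, y) in carved:
                    continue
                carved.add((x, y))
                # up, down, left, right, each taken with ~20% probability
                for dx, dy in ((0, -40), (0, 40), (-40, 0), (40, 0)):
                    nx, ny = x + dx, y + dy
                    if random.random() < 0.2 and 0 <= nx < width and 0 <= ny < height:
                        pending.append((nx, ny))
            return carved

        # usage sketch: carve_caves((400, 600), width=1600, height=1200)

    The same rewrite works in Java with an ArrayDeque of tile coordinates in place of spawning a new Miner per step.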

    Read the article

  • Ghost Image - Windows asks for activation when deployed to a VM

    - by Chris Sobolewski
    I have several images created with Ghost Solution Suite (v11, I believe); the images have been in use for a few years now, but I am finally at the point where I have enough time to attempt to virtualize them for easier updates. I am running VMware and attempting to image the virtual machines with my Ghost image files. For my images I am running sysprep with minisetup and using reseal. The image deploys successfully, however when I start the VM for the first time, it demands Windows activation. This doesn't happen when I image a physical computer, even a different model with different hardware. The idea of virtualizing my images becomes rather worthless if I am unable to deploy them without having to activate every time (especially as Microsoft keeps declaring our volume licence key invalid for activations). Does anyone know why it is asking for activation on a virtual machine, but not a physical PC? How can I prevent this?

    Read the article

  • "Fails to get size of gamma" error when trying to set resolution

    - by Max Payne
    On 11.10 my maximum allowed resolution is 1024x768, while my monitor supports 1280x800 on Windows. I've seen a method to solve this via xrandr, but I always get a message saying it fails to get the size of gamma:

        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 640 x 480, current 1024 x 768, maximum 1024 x 768
        default connected 1024x768+0+0 0mm x 0mm
           1024x768       61.0*
           800x600        61.0
           640x480        60.0

    Is there any other way to add the 1280x800 resolution on my laptop, or any workaround for this? Thanks in advance.
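
    If the driver simply isn't advertising the mode, the usual workaround is to generate a modeline with cvt and register it with xrandr. A sketch of those steps, wrapped in Python only for readability (the output name "default" comes from the listing above; this is an assumption-laden workaround that only helps if the hardware and driver can actually drive 1280x800, and some drivers will still refuse the mode):

        import subprocess

        def add_mode(width=1280, height=800, output="default"):
            """Generate a modeline with cvt, then register and select it via xrandr."""
            cvt = subprocess.run(["cvt", str(width), str(height)],
                                 capture_output=True, text=True, check=True)
            # cvt prints e.g.: Modeline "1280x800_60.00"  83.50  1280 1344 1480 1680 ...
            modeline = next(l for l in cvt.stdout.splitlines() if l.startswith("Modeline"))
            fields = modeline.split()[1:]            # mode name followed by timing numbers
            name = fields[0].strip('"')
            subprocess.run(["xrandr", "--newmode", name, *fields[1:]], check=True)
            subprocess.run(["xrandr", "--addmode", output, name], check=True)
            subprocess.run(["xrandr", "--output", output, "--mode", name], check=True)

        add_mode()

    The equivalent three xrandr commands can of course be typed by hand; the longer-term fix is usually installing the proper graphics driver so the mode shows up on its own.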

    Read the article

  • Error setting up Data Protection Manager 2010 Agents / Network "Unauthenticated" in network settings

    - by Bowsa
    I'm not sure if the two are connected, but I suspect they are. Basically I'm trying to set up Data Protection Manager 2010 on a fresh install of Server 2008 R2 in an SBS 2003 domain. Everything went fine until trying to install agents across the network. Upon clicking Add, I get the following error message: "Unable to connect to the Active Directory Domain Services database. Make sure that the DPM server is a member of a domain and that the controller is running. Also verify that there is network connectivity between the DPM server and the domain controller. ID: 7". As usual (worryingly), the MSDN support for 2010 products is nearly nonexistent; clicking the error ID simply gives a page-not-found error. So after 2 days of Googling and trying various fixes (DNS settings, adding permissions to GPO objects, rejoining the domain and many more) I thought I'd ask here in the hope that someone out there may have had this issue before. Any help greatly appreciated!

    Read the article

  • How to turn on SMTP relaying? Getting error: "SMTP Error: The following recipients failed [address]"

    - by Susan
    Attempting to configure some software on our server to send email via SMTP. When testing it, we get this error: "SMTP Error: The following recipients failed [address]". Based on my research and a reply from the software's support team, this error usually occurs because relaying is not allowed on the SMTP server from the IP address of the web server. The suggested solution is to "go to the configuration of your SMTP server and turn relaying on for your IP address". My question is: how do I do this? Is this done from WHM? cPanel?
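
    Before digging through WHM, it can help to confirm that relaying really is what's being refused. A quick test run from the web server itself, using Python's standard smtplib (the host and addresses below are placeholders, not values from the question):

        import smtplib

        SMTP_HOST = "mail.example.com"              # placeholder: your SMTP server
        FROM_ADDR = "app@example.com"               # a domain the server hosts
        TO_ADDR = "someone@external-example.org"    # an external recipient forces a relay

        try:
            with smtplib.SMTP(SMTP_HOST, 25, timeout=10) as server:
                server.set_debuglevel(1)            # prints the full SMTP conversation
                server.sendmail(FROM_ADDR, TO_ADDR, "Subject: relay test\r\n\r\ntest body")
            print("Relay accepted for this IP.")
        except smtplib.SMTPRecipientsRefused as exc:
            # A "550 relay not permitted" style response from the server lands here.
            print("Relay refused:", exc.recipients)

    If the debug output shows the server rejecting RCPT TO with a relaying error, the fix is indeed on the mail server side (on a WHM/cPanel box the MTA is normally Exim, and relaying is granted either per IP or by having the software authenticate via SMTP).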

    Read the article

  • A little tidbit on Team Build 2010 and error MSB3147

    - by Enrique Lima
    The problem? Performing a build on a ClickOnce solution would not succeed because setup.bin could not be located. OK, now what? Researched from corner to corner: install, re-install, update. Found some interesting posts about fixing the issue, but most of them were focused on Team Foundation Server/Team Build 2008, and some others on 2005. The other interesting tidbit was the frequent suggestion to modify the registry to help Team Build find the bootstrapper. Background info: this was a migration I posted about a few days ago, from a 32-bit TFS implementation to a full 64-bit TFS implementation. Now, the project has binaries and dependencies on x86 (this piece of information became essential to moving from a failed build to a successful build). So, what's the fix? The trick in this case was to go back into the build type and check the properties/configuration. Upon further investigation, I found the following: edit the Build Definition, select Process, expand 3. Advanced, look for MSBuild Platform, and switch it from Auto to X86. Ran the build, and success!

    Read the article

  • linux build iso error

    - by Neil
    I played with Linux customization, and when I want to build the ISO back I get this error:

        $ mkisofs -r -o rhel.iso -b isolinux/isolinux.bin -c isolinux/boot.cat ./
        INFO: UTF-8 character encoding detected by locale settings.
              Assuming UTF-8 encoded filenames on source filesystem,
              use -input-charset to override.
        Unknown file type (unallocated) ./.. - ignoring and continuing.
        Using RELEA000.HTM;1 for /RELEASE-NOTES-pt_BR.html (RELEASE-NOTES-U1-pt_BR.html)
        Size of boot image is 20 sectors -> mkisofs: Error - boot image './isolinux/isolinux.bin' has not an allowable size.

    I didn't change isolinux.bin, so why do I have this error?
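
    The error usually means mkisofs is treating isolinux.bin as a floppy-emulation boot image (which must be exactly 1.2, 1.44 or 2.88 MB); isolinux boots in no-emulation mode, so the boot options have to say so. A sketch of the usual invocation, wrapped in Python only so the flags are easy to annotate (the paths are the ones from the question):

        import subprocess

        cmd = [
            "mkisofs", "-r", "-o", "rhel.iso",
            "-b", "isolinux/isolinux.bin",
            "-c", "isolinux/boot.cat",
            "-no-emul-boot",          # isolinux is a no-emulation boot image
            "-boot-load-size", "4",   # load 4 512-byte sectors at boot
            "-boot-info-table",       # let mkisofs patch layout info into isolinux.bin
            "./",
        ]
        subprocess.run(cmd, check=True)

    Running the plain mkisofs command with those three extra options should behave the same way.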

    Read the article

  • RAID10 without write-back cache = horrible write performance?

    - by Harry Mexican
    I have just provisioned a dedicated server on singlehop. I'm running it through some tests to know what to expect performance-wise. On the I/O side (with 4 1TB disks in RAID 10) I get:

        write cache disabled: 200 MB/s read throughput, 30 MB/s write throughput

    I thought that was really low compared to my desktop HD, which gets 150/150 or so. So I had a chat with them and they suggested enabling the write cache. New results:

        write cache enabled: 280 MB/s read, 260 MB/s write

    which is great and all, but means I'd have to add a BBU for an additional monthly cost. Is it normal for the write throughput to be 1/4 of a regular drive on RAID 10 if you don't have write cache? It almost feels like it's intentionally bad to force you to pony up for the BBU. I'd be happy with normal non-RAID performance of 150/150.

    Read the article

  • Windows XP Pro SP3 stop error code 0x000000F4

    - by Andrew Ford
    I have a system running Windows XP Pro SP3. My wife is the primary user of this system, and the last several days in a row, when she tries to log in to the system it gives her a BSOD with the stop error code 0x000000F4. The rest of the error codes change every time she tries to log on, but that one seems to stay constant. The error it gives me is: A process or thread crucial to system operation has unexpectedly exited or been terminated. The machine will also get stuck in a boot loop, where it keeps giving me the screen saying that windows did not start successfully, and to choose how to start windows. I have tried Safe Mode, Last known Good Config, and Normally, and all have resulted in either a return to the boot loop, or BSOD. What might this mean? How can I fix it, without doing a clean install?

    Read the article

  • ESX 4.0 space: DASD, NAS, or ?

    - by thormj
    I put together an ESX box for better management, but its performance is a WTF item; I'm a noob at dealing with ESX, so I'm looking for a laundry list of reading material to help me straighten this out so I can go back to .NET programming.

    Current storage system: we're running RAID 5 + hot spare (8x500 GB spindles) on a PERC6i in a Dell 2910. Due to ESX limitations, the PERC is showing the storage as 1x2TB + 1x800GB "partitions." I'm not sure of the setup's configuration (stride / stripe / ???) at all.

    Our applications: we have an SBS server as well as a minor (2x50 GB, but growing at 10 GB/month) database server. The application that lives on the database VM is CPU- and I/O-intensive; it's a database-churning exercise mixed in with a lot of computation on the data (fixing that performance is what I'm supposed to be working on).

    Performance issue: when I do a backup, restore, or worse (copy a backup from one VM to another to move it to the QA VM), the entire system slows to a crawl (even "unrelated" VMs). I originally thought a DASD situation would be quite good since you had PCI-X bandwidth, but the system-wide slowdown is killing productivity.

    Questions: What should I do to make an intelligent decision about NAS vs RAID vs SAN vs DASD? Are there sweet spots/ugly spots in the storage setup? Can you use an SSD PCI-X card in ESX for the tempdb - good or bad idea? Is there any way to "share" some image in a copy-on-write fashion? Most of the backup-copy-restore is to "put a clean image on the dev boxes"; if I could have them "share" the master image, the "big copy" (2x50 GB) would only need to be done once per week instead of once per dev per week. (Runtime performance isn't a concern with the dev boxes, but the backup/copy/restore kills production, SBS, and everything else on the box.)

    Read the article

  • Simultaneous read/write to RAID array slows server to a crawl

    - by Jeff Leyser
    Fairly beefy NFS/SMB server (32 GB RAM, 2 quad-core Xeons) with an LSI MegaRAID 8888ELP controlling 12 drives configured into 3 different arrays. Five 2TB drives are grouped into a RAID 6 array. As expected, write performance to the array is slow. However, sustained, simultaneous read/write to the array (whether through NFS or done locally) seems to practically block any other access to anything else on the controller. For example, if I do:

        cp /home/joe/BigFile /home/joe/BigFileCopy

    where BigFile is 20 GB, then even a simple ls /home/jane will take many tens of seconds to complete. In addition, an ls /backup will also take many tens of seconds, even though /backup is a different array on the same controller. As soon as the cp is done, everything is back to normal. cp /home/joe/BigFile /backup/BigFile does not exhibit this behavior; it's only when doing read/write to the same array.
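
    One common mitigation while investigating the controller side is simply to throttle the bulk copy so it can't monopolize the array. A sketch (the 50 MB/s cap is an arbitrary starting point, and the paths are the question's):

        import subprocess

        SRC = "/home/joe/BigFile"
        DST = "/home/joe/BigFileCopy"

        # rsync's --bwlimit takes KB/s, so 50000 is roughly 50 MB/s.
        subprocess.run(["rsync", "--bwlimit=50000", SRC, DST], check=True)

        # Alternatively, give the copy idle I/O priority; this mainly helps
        # when the CFQ scheduler is in use.
        subprocess.run(["ionice", "-c3", "cp", SRC, DST], check=True)

    Throttling doesn't explain the underlying behavior, which likely comes from the RAID 6 array's limited write capacity filling the controller's queue, but it keeps ls and the other arrays responsive in the meantime.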

    Read the article

  • Can't access USB drive anymore

    - by marie
    I have a 32 GB LaCie CooKey USB flash disk that doesn't show in the Computer window, but it's visible as a device.

        cmd> diskpart

        DISKPART> list disk

          Disk ###  Status     Size
          --------  ---------  ------
          Disk 0    Online     149 G
          Disk 1    No Media   0

        DISKPART> select disk 1

        Disk 1 is now the selected disk.

        DISKPART> clean

        Virtual Disk Service error: There is no media in the device.

    It also appears in the Disk Management tool, but the box is empty. Is there anything I can do, or is it dead?

    Output from ChipGenius:

        Description: [F:]USB Mass Storage Device(LaCie CooKey)
        Device Type: Mass Storage Device
        Protocal Version: USB 2.00
        Current Speed: High Speed
        Max Current: 200mA
        USB Device ID: VID = 059F PID = 103B
        Serial Number: 070535924B170C18
        Device Vendor: LaCie
        Device Name: CooKey
        Device Revision: 0100
        Manufacturer: LaCie
        Product Model: CooKey
        Product Revision: PMAP
        Controller Vendor: Phison
        Controller Part-Number: PS2251-67(PS2267) - F/W 06.08.53 [2012-09-26]
        Flash ID code: 983AA892 - Toshiba [TLC]
        Tools on web: http://dl.mydigit.net/special/up/phison.html

    Read the article

  • STOP: c000021a {Fatal System Error} The initial session process or system process terminated unexpectedly

    - by christof
    I'm encountering this error after expanding disk space on a virtual machine using Hyper-V:

        STOP: c000021a {Fatal System Error}
        The initial session process or system process terminated unexpectedly
        with a status of (0x00000000) (0xc000012d 0x001003f0).

    The virtual server is Windows Server 2008 R2 Enterprise Edition, which is also a domain controller. I've tried to repair Windows, but there is no restore point. From the command line I've tried sfc /SCANNOW /OFFBOOTDIR /OFFWINDIR, but I got the error "Windows Resource Protection could not perform the requested operation."

    Read the article

  • Problems Running Cherokee Web Server Admin - config_reader.c:249 - Parsing error

    - by Sebastian
    I'm running Cherokee web server 0.99.30 on Ubuntu Hardy, and I have been having some issues getting the admin to run properly. When I run sudo cherokee-admin -b:

        Login:
          User:              admin
          One-time Password: {password}

        Web Interface:
          URL:               http://localhost:9090/

        [20/11/2009 22:57:29.733] (error) config_reader.c:249 - Parsing error
        Cherokee Web Server 0.99.30 (Nov 20 2009): Listening on port ALL:9090,
        TLS disabled, IPv6 disabled, using epoll, 4096 fds system limit,
        max. 2041 connections, caching I/O, single thread

    When I go to the admin page I get a 503 Service Unavailable error page. Any idea about how I could fix this? Thanks

    Read the article

  • 403 forbidden error when I attempt to install an ssh server

    - by vino suryono
    I have a problem when I try to install an SSH server on Ubuntu 14.04 LTS. What I've done:

        sudo apt-get update == succeeded
        sudo apt-get upgrade == succeeded
        sudo apt-get install ssh == failed

    The notification I got:

        Err http://archive.ubuntu.com/ubuntu/ trusty-updates/main openssh-sftp-server i386 1:6.6p1-2ubuntu2
          403 Forbidden
        E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/o/openssh/openssh-sftp-server_6.6p1-2ubuntu2_i386.deb  403 Forbidden
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Read the article

  • VPN Error 868 when connecting even if using IP address

    - by Fr33dan
    I am trying to connect to a public VPN from VPNGate. However when I attempt to connect to a VPN from the list using MS-SSTP protocol I get the following error: Error 868: The remote connection was not made because the name of the remote access server did not resolve. If I open a command prompt and ping the address in question it resolves to the IP shown on the listing. If I configure the VPN using that IP address directly I still receive the error even though the name no longer needs to resolve. This was working yesterday but it seems the VPN I was using has been removed from the list. What is happening and how can I fix it?

    Read the article

  • Could not log-in properly but shows no error in joomla

    - by saeha
    This is what I did. I added variables in \libraries\joomla\database\table\user.php:

        var $img_content = null;   // contains the blob type data
        var $img_name    = null;
        var $img_type    = null;

    Then I added this code in \components\com_user\controller.php:

        $file = JRequest::getVar( 'pic', '', 'files', 'array' );
        if (isset($file['name'])) {
            jimport('joomla.filesystem.file');
            $fileName = $file['name'];
            $tmpName  = $file['tmp_name'];
            $fileSize = $file['size'];
            $fileType = $file['type'];
            $fp = fopen($tmpName, 'r');
            $content = fread($fp, filesize($tmpName));
            //$content = addslashes($content);
            fclose($fp);
            $user->set('img_name', $fileName);
            $user->set('img_type', $fileType);
            $user->set('img_content', $content);
        }

    That works fine, but I found a problem logging in with a new user that has an uploaded photo; other users with an empty img_content field can log in properly. What happens is that when I log in with the user that has an uploaded photo, it doesn't redirect properly, it just returns to the login page. But when I log in through the backend using another user, which is a super admin, I can see that user appear as logged in. I started saving the images in the database because I was having problems with the upload directory after I uploaded the site. I think the login is affected by the blob data in the database. Could that be the problem? What could be the solution? -saeha

    Read the article

  • NVIDIA 560 TI driver install ubuntu 14.04 leads to "missing on display" error

    - by allthosemiles
    Currently on my Ubuntu 14.04 install, I boot it up, install the latest updates, and attempt to install the NVIDIA drivers from xorg-edgers while following the top answer on this post: Installing Nvidia Drivers. It installs 304 for my card, and when I check glxinfo | grep OpenGL I get about 8 lines that read:

        Xlib:  extension "NV-GLX" missing on display ":0"

    I did sudo apt-get install nvidia-current to get the latest. It installed, and I didn't see any errors in the terminal. I'm not sure what I'm doing wrong here. I'm not a complete beginner with Linux, so I can find my way around just fine, but I can't find a solution to this issue. Should I have done one of these two instead?

        sudo apt-get install nvidia-304
        sudo apt-get install nvidia-graphics-drivers-304

    Read the article

  • Error # 1045 - Cannot Log in to MySQL server -> phpmyadmin

    - by SilverLight
    We have installed phpMyAdmin on a Windows machine running IIS 7.0. We are able to connect to MySQL using the command line, but we are not able to connect using phpMyAdmin. The error displayed is:

        Error #1045 Cannot log in to the MySQL server.

    Can somebody please help?

        PHP Version 5.4.0
        mysqlnd 5.0.10 - 20111026 - $Revision: 323634 $
        phpMyAdmin-3.5.4-rc1-all-languages.7z

    EDIT: I followed the link below with no success; I changed the password as described, but phpMyAdmin still shows the same error: C.5.4.1.1. Resetting the Root Password: Windows Systems. Thanks.
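
    To separate a credentials problem from a connection-path problem, a quick check from the same machine with the mysql-connector-python package (assumed to be installed; fill in the user and password that phpMyAdmin is configured to use):

        import mysql.connector  # pip install mysql-connector-python

        # phpMyAdmin typically connects over TCP, so test 127.0.0.1 explicitly:
        # the command line may be logging in as 'root'@'localhost' while phpMyAdmin
        # ends up matching a different account/host entry or uses a stale password
        # from config.inc.php.
        try:
            conn = mysql.connector.connect(host="127.0.0.1", user="root",
                                           password="your_password")
            print("TCP login OK")
            conn.close()
        except mysql.connector.Error as exc:
            print("Login failed:", exc)

    If this fails with the same 1045, the account's host grants or password are the issue; if it succeeds, the values in phpMyAdmin's config.inc.php are the more likely culprit.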

    Read the article

  • Error message downloading documents

    - by Scarlet Riley
    Hi, I hope someone can help me. I have a 3-year-old MacBook Pro, and lately I've been getting an error message when I try to download something to my machine. This is what it says:

        Download error
        /Users/scarletriley/Downloads/the name of document goes here - no spaces of course!
        could not be opened, because an unknown error occurred.
        Try saving to a disk first and then opening the file.

    What does this mean, and what should I do to fix it? Thanks! Scarlet

    Read the article

  • Wine installation error Ubuntu 13.04 64 bit

    - by Ocean
    Wine fails to install correctly, as there apparently are unresolved dependencies. I've tried through the Software Center and the terminal - same problem. And I can't uninstall what's installed either. I've followed steps I've found in other threads here, but they don't seem to remedy the problem. I still can't install the package completely (yet it appears in the menu), nor can I uninstall it, because the package is reported as not installed. If anyone knows what to do, I'd appreciate any help. Thanks. :)

    Read the article

  • Problem with testsaslauthd and kerberos5 ("saslauthd internal error")

    - by danorton
    The error message “saslauthd internal error” seems like a catch-all for saslauthd, so I’m not sure if it’s a red herring, but here’s the brief description of my problem. This Kerberos command works fine:

        $ echo getprivs | kadmin -p username -w password
        Authenticating as principal username with password.
        kadmin: getprivs
        current privileges: GET ADD MODIFY DELETE

    But this SASL test command fails:

        $ testsaslauthd -u username -p password
        0: NO "authentication failed"

    saslauthd works fine with "-a sasldb", but the above is with "-a kerberos5". This is the most detail I seem to be able to get from saslauthd:

        saslauthd[]: auth_krb5: krb5_get_init_creds_password: -1765328353
        saslauthd[]: do_auth : auth failure: [user=username] [service=imap] [realm=] [mech=kerberos5] [reason=saslauthd internal error]

    Kerberos seems happy:

        krb5kdc[](info): AS_REQ (4 etypes {18 17 16 23}) 127.0.0.1: ISSUE: authtime 1298779891, etypes {rep=18 tkt=18 ses=18}, username at REALM for krbtgt/DOMAIN at REALM

    I'm running Ubuntu 10.04 (lucid) with the latest updates, namely Kerberos 5 release 1.8.1 and saslauthd 2.1.23. Thanks for any clues.

    Read the article
