Search Results

Search found 19975 results on 799 pages for 'disk queue length'.


  • Is vSphere's Data Recovery appliance 'production-ready'?

    - by Chopper3
    I have a smallish lab environment (16 x ESX4iU1 hosts and VC4U1) that I periodically want to back up. Normally in production we snap to secondary SAN boxes, then take disk-based VTL backups via NetBackup which eventually migrate to off-site removable disks, but this seems like overkill for my own kit. I've spent a bit of time with vSphere's 'Data Recovery' appliance; it was easy enough to set up and I've not really run into any issues with it, but that doesn't mean I trust it fully. Have you had any experiences with it, positive or negative, that would help me decide whether to trust it or pay Symantec for more licences? Thank you in advance.

    Read the article

  • VMWare Server lck file keeps coming back

    - by muncherelli
    I am running VMware Server 2.0 on a Debian Lenny system as the host OS. I am getting this error when I try to start a virtual machine: Cannot open the disk '/var/lib/vmware/Virtual Machines//.vmdk' or one of the snapshot disks it depends on. Reason: Failed to lock the file. So I looked around on the web and found that I need to delete the .lck folder and file to clear this error. This seems to happen any time I reboot my Debian server, and the virtual machines sometimes do not recover, so these lck files are causing real problems. Should I create a cron script that does an rm *.lck on each of my machines on reboot? Looking for any direction on how to resolve this. It seems that when I issue a "reboot" command it is perhaps not gracefully shutting down the VMware containers, so the lock files are left intact?
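
    If the stale locks turn out to be the only problem, a minimal cleanup sketch could run from cron at boot, before any VM starts. The path below assumes the stock /var/lib/vmware/Virtual Machines layout from the error message; adjust it to match your guests:

        # /etc/cron.d entry (drop the 'root' field if using 'crontab -e'):
        # remove stale VMware lock files left behind by an unclean shutdown
        @reboot root find "/var/lib/vmware/Virtual Machines" -name '*.lck' -exec rm -rf {} +

    The cleaner long-term fix is making the host shut guests down gracefully (e.g. running /etc/init.d/vmware stop in the shutdown sequence), so the locks are released rather than deleted.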

    Read the article

  • How can I disable logging in Tomcat 7?

    - by WilliamMayor
    I have a Tomcat 7 server running in a VM that has very little disk space (20G). Over the course of a few days Tomcat will fill the space with logging info (usually about 15G before it runs out). I've tried turning down the log level (from INFO to SEVERE) in the logging.properties file, I've also tried sending the log info to /dev/null. It doesn't seem to work as I still get a full log directory after no time at all. Can I put a file size limit on the log files? Is something overriding the properties I'm setting? Where can I find this information? My Google Fu just returns information about logging from within an application using JULI.
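
    If the properties changes keep getting ignored, one stopgap that works regardless of what is writing the logs is to cap them from outside Tomcat with logrotate. A minimal sketch, assuming the logs live under /var/log/tomcat7 (the path and limits are placeholders to adjust):

        # install a policy that truncates Tomcat's logs in place; copytruncate
        # matters because Tomcat keeps catalina.out open while writing
        sudo tee /etc/logrotate.d/tomcat7 >/dev/null <<'EOF'
        /var/log/tomcat7/*.log /var/log/tomcat7/catalina.out {
            size 100M
            rotate 3
            compress
            missingok
            notifempty
            copytruncate
        }
        EOF
        # logrotate normally runs daily from cron; dry-run the new policy to verify
        sudo logrotate --debug /etc/logrotate.d/tomcat7

    With size 100M and rotate 3 the log directory stays around 400 MB plus compressed archives, well inside a 20 GB disk.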

    Read the article

  • Identify SATA hard drive

    - by Rob Nicholson
    Very similar question to: Physically Identify the failed hard drive, but for Windows 2003 this time. Scenario: four identical SATA hard drives plugged into the motherboard (no RAID controller here), configured in Windows as a single spanned volume. One of them is starting to fail with the error "The driver detected a controller error on \Device\Harddisk3". How do you cross-reference Harddisk3 to the physical SATA connection on the motherboard so you know which drive to replace? I know replacing this drive will trash the spanned array, requiring it to be rebuilt anyway, so my rough-and-ready solution is: delete the spanned partition; create individual partitions on each drive labelled E:, F:, G: and H: and work out which one is Harddisk3; then power down, remove each disk one at a time, and power up until the drive letter disappears. But this seems a rather crude method of identifying the drive. The SATA connectors will be numbered on the motherboard, but I appreciate this might not cross-match to what Windows calls them. Thanks, Rob.

    Read the article

  • curl can't verify cert using capath, but can with cacert option

    - by phylae
    I am trying to use curl to connect to a site using HTTPS, but curl is failing to verify the SSL cert:

        $ curl --verbose --capath ./certs/ --head https://example.com/
        * About to connect() to example.com port 443 (#0)
        *   Trying 1.1.1.1... connected
        * Connected to example.com (1.1.1.1) port 443 (#0)
        * successfully set certificate verify locations:
        *   CAfile: none
            CApath: ./certs/
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS alert, Server hello (2):
        * SSL certificate problem, verify that the CA cert is OK. Details:
        error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        * Closing connection #0
        curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
        error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        More details here: http://curl.haxx.se/docs/sslcerts.html

        curl performs SSL certificate verification by default, using a "bundle"
        of Certificate Authority (CA) public keys (CA certs). If the default
        bundle file isn't adequate, you can specify an alternate file using
        the --cacert option. If this HTTPS server uses a certificate signed
        by a CA represented in the bundle, the certificate verification
        probably failed due to a problem with the certificate (it might be
        expired, or the name might not match the domain name in the URL).
        If you'd like to turn off curl's verification of the certificate,
        use the -k (or --insecure) option.

    I know about the -k option, but I do actually want to verify the cert. The certs directory has been properly hashed with c_rehash . and it contains: a Verisign intermediate cert and two self-signed certs. The above site should be verified with the Verisign intermediate cert. When I use the --cacert option instead (and point directly at the Verisign cert), curl is able to verify the SSL cert:

        $ curl --verbose --cacert ./certs/verisign-intermediate-ca.crt --head https://example.com/
        * About to connect() to example.com port 443 (#0)
        *   Trying 1.1.1.1... connected
        * Connected to example.com (1.1.1.1) port 443 (#0)
        * successfully set certificate verify locations:
        *   CAfile: ./certs/verisign-intermediate-ca.crt
            CApath: /etc/ssl/certs
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSL connection using RC4-SHA
        * Server certificate:
        *        subject: C=US; ST=State; L=City; O=Company; OU=ou1; CN=example.com
        *        start date: 2011-04-17 00:00:00 GMT
        *        expire date: 2012-04-15 23:59:59 GMT
        *        common name: example.com (matched)
        *        issuer: C=US; O=VeriSign, Inc.; OU=VeriSign Trust Network; OU=Terms of use at https://www.verisign.com/rpa (c)10; CN=VeriSign Class 3 Secure Server CA - G3
        * SSL certificate verify ok.
        > HEAD / HTTP/1.1
        > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
        > Host: example.com
        > Accept: */*
        >
        < HTTP/1.1 404 Not Found
        HTTP/1.1 404 Not Found
        < Cache-Control: must-revalidate,no-cache,no-store
        Cache-Control: must-revalidate,no-cache,no-store
        < Content-Type: text/html;charset=ISO-8859-1
        Content-Type: text/html;charset=ISO-8859-1
        < Content-Length: 1267
        Content-Length: 1267
        < Server: Jetty(7.2.2.v20101205)
        Server: Jetty(7.2.2.v20101205)
        <
        * Connection #0 to host example.com left intact
        * Closing connection #0
        * SSLv3, TLS alert, Client hello (1):

    In addition, if I try hitting one of the sites using a self-signed cert and the --capath option, it also works. (Let me know if I should post an example of that.) This implies that curl is finding the cert directory and that it is properly hashed. Finally, I am able to verify the SSL cert with openssl, using its -CApath option:

        $ openssl s_client -CApath ./certs/ -connect example.com:443
        CONNECTED(00000003)
        depth=3 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
        verify return:1
        depth=2 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
        verify return:1
        depth=1 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
        verify return:1
        depth=0 /C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
        verify return:1
        ---
        Certificate chain
         0 s:/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
           i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        <cert removed>
        -----END CERTIFICATE-----
        subject=/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
        issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 1563 bytes and written 435 bytes
        ---
        New, TLSv1/SSLv3, Cipher is RC4-SHA
        Server public key is 2048 bit
        Secure Renegotiation IS NOT supported
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : RC4-SHA
            Session-ID: D65C4C6D52E183BF1E7543DA6D6A74EDD7D6E98EB7BD4D48450885188B127717
            Session-ID-ctx:
            Master-Key: 253D4A3477FDED5FD1353D16C1F65CFCBFD78276B6DA1A078F19A51E9F79F7DAB4C7C98E5B8F308FC89C777519C887E2
            Key-Arg   : None
            Start Time: 1303258052
            Timeout   : 300 (sec)
            Verify return code: 0 (ok)
        ---
        QUIT
        DONE

    How can I get curl to verify this cert using the --capath option?
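
    One avenue worth checking here: this curl links OpenSSL 0.9.8k, which looks up --capath certificates by the old-style subject hash, while c_rehash from an OpenSSL 1.0.x installation creates new-style hash links only. A minimal check, reusing the verisign-intermediate-ca.crt file name from above (this is an assumption about the mismatch, not a confirmed diagnosis):

        # hash the intermediate both ways; if they differ and the directory only
        # contains a link for the first, an OpenSSL 0.9.8 curl will never find it
        openssl x509 -noout -subject_hash -in certs/verisign-intermediate-ca.crt
        openssl x509 -noout -subject_hash_old -in certs/verisign-intermediate-ca.crt   # flag exists in OpenSSL 1.0.x
        # add the old-style link by hand if it is missing
        cd certs && ln -s verisign-intermediate-ca.crt \
            "$(openssl x509 -noout -subject_hash_old -in verisign-intermediate-ca.crt).0"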

    Read the article

  • Troubleshooting a crash with Windows 7

    - by AngryHacker
    I have a folder with several thousand videos (all .MPG extensions). When I open the folder, the videos show up fine, but as I start scrolling down, Windows Explorer crashes. In the Event Viewer, I see this:

        Faulting application name: Explorer.EXE, version: 6.1.7600.16450, time stamp: 0x4aebab8d
        Faulting module name: ntdll.dll, version: 6.1.7600.16559, time stamp: 0x4ba9b802
        Exception code: 0xc0000374
        Fault offset: 0x00000000000c6df2
        Faulting process id: 0x954
        Faulting application start time: 0x01cbb1b71edf3b51
        Faulting application path: C:\Windows\Explorer.EXE
        Faulting module path: C:\Windows\SYSTEM32\ntdll.dll
        Report Id: ee987372-1dc4-11e0-8e06-406186ea9135

    I suspect that one of the videos has bad metadata. I removed the Length column and it was still crashing. I then removed the Date column and the problem disappeared. How do I go about troubleshooting this problem, or at least identifying the file that's causing the issue?

    Read the article

  • Installing Linux Mint in one partition

    - by sha404
    So, I have a disk with an MBR setup (image below). I've managed to free up 50 GB of unallocated space for installing Linux Mint 14, and I want to keep the current Windows OS too (but I don't want Mint inside Windows). Now, I've seen in some tutorials that Linux Mint needs several partitions, for the bootloader, swap, and home. I don't like having so many partitions, and the MBR limit probably won't let me create more than one now. So, is it possible to install Linux Mint in one partition only? If it is really impossible, then what's the minimum number, and how can I accomplish that? Thanks in advance.

    Read the article

  • Chrome developer tools - network panel gaps

    - by Chris Nicholson
    In the Chrome developer tools, under the Network tab, I'm curious to know what is happening during the gaps. If you look at my image below, I have highlighted in orange the areas where these gaps exist. Where I'm able to load a lot of my page from cache, it's a shame these large gaps occur, as they make up most of my page load time. What exactly is happening in this time? EDIT: Okay, I found this answer, which essentially sums up my question, so here's a different one: does anyone know a good method to reduce the length of these gaps? Presumably (albeit rather extreme) if I inlined all my CSS in the page, there wouldn't be a delay between loading the CSS file and loading the images.

    Read the article

  • How can I improve performance over SMB/CIFS for an application that has poor write speeds?

    - by Jeremy
    I have a third-party application that reads several large files and generates a third large file. Its performance is quite good when the generated file is stored on "local storage", i.e. either a direct-attached or iSCSI-based disk. The source files that are read can be stored remotely on our NAS and accessed via SMB with little effect on performance. However, if we attempt to write the target file to any kind of SMB/CIFS share (Samba or Windows Server), performance drops almost ten-fold, which is unacceptably slow in our case. Writing files to network shares is not otherwise slow: I can copy large files to SMB shares and get great performance, near what I would expect is possible given the disks and network in question. My theory is that this application's problem with SMB shares has something to do with a lack of write caching over the share, and perhaps lots of network round-trips. Is this plausible, and is there anything that can be done about it?
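
    One way to test that theory before blaming the share: compare a large sequential write against the same volume of data written as small synchronous chunks, which roughly mimics an application forcing a network round-trip per write. A sketch with GNU dd (the mount point /mnt/share is a placeholder for your CIFS mount):

        # sequential 1 GiB write, flushed once at the end -- should be near wire speed
        dd if=/dev/zero of=/mnt/share/seq.bin bs=1M count=1024 conv=fdatasync
        # the same 1 GiB in 64 KiB synchronous writes -- one round-trip per write
        dd if=/dev/zero of=/mnt/share/sync.bin bs=64k count=16384 oflag=dsync

    If the second run shows the same ten-fold drop the application sees, the bottleneck is small synchronous writes rather than raw SMB throughput.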

    Read the article

  • How to mount remote samba share from local host with multiple groups?

    - by Dragos
    I am using mount.cifs to mount a remote Samba share (both client and server are Ubuntu Server 8.04) like this:

        mount.cifs //sambaserver/samba /mountpath -o credentials=/path/.credentials,uid=someuser,gid=1000

        $ cat .credentials
        username=user
        password=password

    I mounted the share as a local user via mount.cifs with a username and password, but the problem is that the user is part of multiple groups on the remote system, and with mount.cifs I can only specify one gid. Is there a way to specify all the gids that the remote user has? That is, is there a way to: 1) mount the remote Samba share with multiple groups on the local system, and 2) browse the mount from 1) in the terminal, since I want to pass some files from the share as arguments to local programs? Other solutions would be nautilus sftp://, which runs through GVFS, but newer GNOME no longer writes ~/.gvfs to disk, so I can't browse it in a terminal; and, as a last resort, NFS, but that means I have to synchronize the uids and gids on the local system with the ones from the server.

    Read the article

  • How to access Windows Registry from DOS

    - by SEARAS
    How can I access the Windows Registry from DOS? I need to access the registry while booted from a DOS boot disk. I've searched all over the internet and found only Offline NT Password and Registry Editor, which cannot be used from DOS, as I understand it. I've also found RegView (from many mirrors), which isn't working either (I've tried many sets of instructions). Is there any easy-to-use tool, like reg.exe, that is able to load registry hives so that I can change registry values? Or any working instructions? Note: I already have a bootable drive which can read/write NTFS drives. Thanks in advance!

    Read the article

  • How can I split a stereo audio track of a movie into two separate audio tracks?

    - by pesche
    I often record TV shows with a hard disk recorder/DVD writer, burn them as a VRO file, and convert them to MP4 with Handbrake. The shows are bilingual broadcasts with two mono audio channels instead of a stereo pair: the dubbed voice on the left, the original voice on the right. The TV set and VLC are both perfectly capable of playing only the left or the right channel, but other video players may only offer a choice between different stereo audio tracks (like those present on many DVDs). I'd like an easy process for creating MP4 or MKV files of these shows in which the two audio channels are split into two separate audio tracks. The only way I know of is to extract the audio track (e.g. using MPEG Streamclip), split it into two tracks using an audio tool like Audacity, and then merge the audio tracks back (using a DVD authoring tool; I don't remember all the details). Clearly not a process to repeat regularly. Preferably a solution should run on Mac OS X, but Linux or Windows solutions are very welcome too.
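
    If a command-line tool is acceptable, ffmpeg (available on all three platforms) can do the split in one pass. A minimal sketch, assuming an input in.mp4 with a single stereo track, dub on the left and original on the right (file names and track titles are placeholders):

        # copy the video untouched; split the stereo track into two mono
        # streams and store them as two separate audio tracks in an MKV
        ffmpeg -i in.mp4 \
            -filter_complex '[0:a]channelsplit=channel_layout=stereo[dub][orig]' \
            -map 0:v -c:v copy \
            -map '[dub]' -map '[orig]' -c:a aac \
            -metadata:s:a:0 title=Dubbed -metadata:s:a:1 title=Original \
            out.mkv
        # (older ffmpeg builds may need '-strict -2' for the built-in aac encoder)

    Players that understand multiple audio tracks then offer "Dubbed" and "Original" as ordinary track choices instead of left/right balance tricks.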

    Read the article

  • Windows 7, files reappear after deletion.

    - by HeavyWave
    I'm trying to delete some files from a folder. I've taken ownership of the files and the folder. When I delete these files, Windows doesn't report any errors and deletes them. BUT, after I press F5, the files reappear. There are no messages whatsoever; they are just undeletable. I know logging off will help, but how do I fix it without going through the pain of closing everything down? P.S. The files do disappear from the folder after approximately 5 minutes. Update: it turns out my version of Windows did not properly upgrade from a test version, so it had some weird disk drive issues.

    Read the article

  • Backup SQL server db issue: delete old backup files

    - by David.Chu.ca
    I tried to use the sqlmaint.exe tool to back up a database on a remote PC. Here is an example of the backup command:

        sqlmaint.exe -S remoteSQLServer\SQLInstance -U username -P pwdxxx -D myDB -BkUpMedia DISK -BkUpDB C:\MSSQL_Backups -DelBkUps 3days ...

    Here I specified deleting backups older than 3 days. However, the job does not seem to be deleting old .bak files on the remote PC (where the SQL Server sits). The remote PC runs Windows Server 2008. I also set C:\MSSQL_Backups as a shared network drive with Everyone as owner. My understanding is that the job should delete any .bak files older than 3 days. Not sure what I missed? By the way, the job runs on a box with SQL Server 2005 installed.

    Read the article

  • Is it possible to change the default scan folder on an Eye-Fi Wireless SD Card?

    - by MichaelPh
    By default, Eye-Fi cards scan the DCIM folder (and its subfolders), as used by most digital cameras, for new images to upload. Is there a way of changing this to a different folder? In my particular case, I'm using a Kodak photo scanner (P461) that stores the scanned images in a PHOTO(N) folder layout; as far as I know, the device has no configuration interface to alter this setting, so that doesn't seem to be an option. This topic on the Eye-Fi forums is the closest I've come to a solution, but a perfunctory investigation with Disk Probe doesn't make it obvious what needs to be modified on the card.

    Read the article

  • PC won't boot with more than one stick of RAM

    - by Aidan
    Hi guys, I've got the following computer, I've just put in a new CPU (a QX9650), and I've run into this problem since making that hardware change. Whenever I put more than one of my four sticks of RAM into the machine, it won't load an OS: it gets through the BIOS but BSODs on Windows load, and it also won't let me install an OS from disk or boot into Linux. I've run memtest with all four sticks in and I get 10k+ errors on test 5. Each stick of RAM on its own is fine and functions properly; I only have problems when all four sticks are in the machine at the same time. System specs: CPU: QX9650; Mobo: Asus P5B, 2104 BIOS; RAM: 2x PC2-5400 DDR2 and 2x PC2-6400, both OCZ. Is the problem on my end, or is the CPU faulty?

    Read the article

  • Permissionless external drive with NTFS

    - by user12889
    I have an external hard disk with one partition, formatted NTFS. I use this drive on multiple computers, with different logins on different machines, under Windows XP and Windows 7. All files are plain old files, not OS-encrypted or compressed. Every now and then, Windows 7 does not let me access some files, citing permission problems. I can work around this case by case by taking ownership and setting appropriate permissions; this, however, is tedious. Is there a simple way to tell Windows not to enforce or store any permissions on any file or directory on a partition?

    Read the article

  • Network backup for Macs and PCs - formatting question

    - by neilfein
    I'm trying to use a LaCie 2TB drive as an AirPort drive for backup on a home network. We have one Mac and two PC laptops. My plan is to create a Mac partition and a Windows partition. However, Disk Utility won't let me set the Windows partition to a Windows format; there's no option for it in the menu on the partition tab. Am I doing something wrong? Alternatively, is there a way to partition the drive with one partition that all three machines can see? We have a Mac G5 with 10.4 and two laptops with Windows 7. Thanks!

    Read the article

  • Outlook 2010: using signatures stored on network

    - by Gregory MOUSSAT
    With Outlook versions before 2010, it was possible to specify any path for the signatures. With Outlook 2010, the only way is to use those stored under C:\Documents and Settings\UserName\Local Settings\Application Data\Microsoft\Signatures\. I'd like to point the signatures at a network share, allowing us to modify them on the share instead of logging on to every computer each time we are asked to change them (and that is quite often, because the signatures contain logos for current events). We currently use a script that copies the signatures from the share to the local disk when users log in. Question: how do I set Outlook 2010 to use signatures outside of the default signature folder?

    Read the article

  • Switching to Linux, need backup help

    - by Stephen
    I have an existing Vista installation on my ThinkPad X200 and I want to install Linux on the machine. I've done this several times before, but I usually format the whole disk and dual-boot Windows and Linux, which means I have to reinstall and reconfigure everything I had on Windows. What I want to do instead is back up my Windows installation (into an image), start a clean Linux installation, and run the Windows image through VMware or VirtualBox. What's the easiest way to do this? Are any free tools available? I've seen Acronis, but I don't want to buy it for a one-time session.

    Read the article

  • FTP "PUT" fails from Virtual Machine, but not host PC: 504 Command not implemented for that paramete

    - by BrianH
    I have an FTP script I'm using to automate a file transfer. The transfer works fine on my PC (XP SP2), but when I try to run it in a VM on that same PC (also XP SP2), the put command fails with: 504 Command not implemented for that parameter. The FTP script:

        open [ftp site]
        [username]
        [password]
        cd [directory on FTP server]
        binary
        hash
        put ..\[subfolder1]\[Subfolder2]\[subfolder3]\[filename]
        bye

    The FTP site/server is on the other side of the world and not under my control. From what I understand of a 504, it means the command should never work, but since the same script does work on my PC (hosting the VM), that eliminates syntax, file naming, etc. When triggered from the VM, the put command actually creates a 0-length file on the target FTP server but doesn't populate it.

    Read the article

  • How do I tell if my firewire connection is running as 400 or 800?

    - by Tom
    I have a MacBook Pro with FireWire 800 and a Freecom external hard drive that has USB 3, FireWire 400, and FireWire 800 ports. I am using a Nikkai FireWire cable with an 800 connector on one end and a 400 connector on the other. The 800 connector is attached to the MacBook Pro and the 400 connector to the Freecom drive. Is there any way to tell what connection has been established? I looked at Disk Utility and it simply said 'FireWire'. Is there a command-line tool that would give more information? If it's running at 400, I plan to swap the cable for one with 800 connectors at both ends.
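
    On Mac OS X, system_profiler reports the negotiated speed of each FireWire device from the command line (the exact wording of the output varies a little between OS versions):

        # list the FireWire bus and attached devices; each device entry
        # includes the speed it connected at (e.g. "Up to 400 Mb/sec")
        system_profiler SPFireWireDataType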

    Read the article

  • Disable or sleep secondary HDD in Macbook

    - by cpak
    I've done some quick Googling but didn't find an answer. I've put an SSD in my MacBook and at the same time moved the original HDD to the optical drive bay. I'm running the OS and most of my daily apps off the SSD, so the HDD is really just for storing stuff I need now and then. I'd like to disable (as in power off, or "force sleep") the HDD when I don't need it. I tried unmounting the disk using diskutil unmountDisk, but it kept spinning for something like 10 minutes. Maybe that's to be expected, but I'd imagined it would stop instantly on unmount. Also, it would be nice to have it disabled by default and only mount it (= power it on) when I need it. Grateful for any input on this!
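
    Unmounting only detaches the filesystem; it does not by itself spin the platters down. A sketch using only tools that ship with OS X — note that pmset's disk-sleep timer is system-wide, which is harmless here since the SSD has no platters to stop (disk1 is an assumption; check diskutil list):

        # detach the storage volume so nothing keeps the drive busy
        diskutil unmountDisk disk1
        # spin down idle disks after 10 minutes (0 disables the timer)
        sudo pmset -a disksleep 10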

    Read the article

  • Proftpd user-auth with mod_sql/mod_sql_passwd

    - by Zae
    I'm reading up on how to interface ProFTPD with MySQL for an implementation I'm working on, and I noticed that seemingly all the example code and instructions have the user login field in MySQL defined as varchar(30). I don't see anything saying there's a limit on the field length for ProFTPD, but I wanted to check around anyway. The project this setup is going to be mixed into was planning to have its universal usernames support varchar(255). Can I use that safely, or is there an FTP limitation elsewhere that I'm missing? Running ProFTPD 1.3.4a (custom compiled) and MySQL 5.1.54 (Ubuntu repos).
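
    For what it's worth, the varchar(30) in the published examples appears to be just a sample schema rather than a mod_sql requirement (worth verifying against your build; mod_sql uses whatever the configured query returns). Widening the column is a one-liner; the table and column names below are the usual example names, so substitute your own:

        # widen the login column in the table mod_sql authenticates against
        mysql -u root -p ftpdb -e \
            'ALTER TABLE ftpuser MODIFY userid VARCHAR(255) NOT NULL;'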

    Read the article
