Search Results

Search found 62712 results on 2509 pages for 'memory error'.


  • Installing VMware ESXi 4.0 on Dell 1950: Cannot open vmkboot.gz

    - by rlandster
    I am trying to install VMware ESXi 4.0.0 U1 on a Dell 1950 server via a bootable CD-ROM. I keep getting this error right at the start: Cannot open vmkboot.gz I checked that the CD-ROM drive is not to blame by installing Debian Etch using that drive. I tried several different versions of ESXi (3.5, 3.5 Dell edition, 4.0 Dell edition) and they all give me an error at the same place. I also tried installing from a USB "thumb" drive but got the same error. I checked with the VMware HCL (Hardware Compatibility List) and the Dell 1950 is listed as being compatible. Here are some server details: Two 1.6 GHz Xeon 5110 CPUs (ID: 06-0F-B) BIOS version 2.2.6 Any ideas on what might be the issue?

    Read the article

  • FTP issue when connecting to a Debian machine from Windows

    - by erin c
    I have a .NET application which periodically copies a batch of files to a specific FTP folder on a Debian machine. The FTP folder has mode 755, and its owner is the FTP user that the .NET application authenticates as. So far I have tested this application against a number of Debian boxes, and the initial attempt against a machine I haven't used before generally fails with the following message: "remote server returned an error 550 File unavailable". When I see this error, I log onto another Debian machine on my network and FTP into the Debian box that returned the error, from the command line. I "put" a very small file into the folder in question, and right after that the Windows application starts copying files successfully via FTP. It is as if my command-line FTP operation fixes the problem and makes the Debian box compatible with my .NET application. I checked permissions before and after the problem, and it doesn't look like anything I did changed them, so I am at a loss to understand why this problem occurs and why it is fixed by my silly hack. Can anybody tell me where to look next to fix this extremely annoying issue?
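    For reference, the manual "put" test can also be scripted as a probe; a rough sketch using curl, where the host, directory, and credentials are placeholders rather than values from the original setup:
      # Upload a tiny probe file the same way the manual ftp "put" does
      echo probe > /tmp/probe.txt
      curl -T /tmp/probe.txt ftp://debian-box.example.com/upload/dir/ --user ftpuser:secret
      # List the target directory afterwards to compare ownership/permissions before and after
      curl ftp://debian-box.example.com/upload/dir/ --user ftpuser:secret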

    Read the article

  • Lightest Linux Desktop supporting Firefox/graphic browser

    - by Susan Mayer
    I am on Windows and I have a remote server running Ubuntu 10.10. I want to use Firefox or another graphical browser on that remote server. The problem is that the server only has 512MB of memory, so I can't install a larger desktop environment. I used to use Xfce and NoMachine NX, but they consume too much memory on that Ubuntu server. The only thing I want to run on the server is a graphical browser (for example, Firefox), nothing else. Do you have any good suggestions? Thanks a lot!

    Read the article

  • What kind of server hardware is roughly necessary to serve website to 10k users?

    - by jcmoney
    I've been looking at VPSes, and the specs they offer for entry-level setups seem somewhat surprising to me. I am new to this topic, but many VPS plans offer less than 512MB of memory while my laptop has 4GB, so I am curious what it actually takes in terms of hardware to serve, say, 10k users (say 5k daily active users). I figure a large number of factors can sway this a lot, but just for benchmarking, say the site is a social networking site written in PHP using MySQL + Apache that isn't doing anything unusual like serving lots of media. So essentially a very basic Facebook minus the absurd number of photos and videos. What about 100k users (50k daily active)? 1 million (500k daily active)? Thanks in advance.

    Read the article

  • Unable to delete a directory from NTFS drive: "Access is denied"

    - by Evgeny
    I'm running Windows XP Pro x64 SP2. I have a directory on an NTFS drive that was created by a Maven build. A subsequent build attempted to delete this directory and failed. I now get the error "Access is denied" whenever I try to do anything with that directory: change to it, delete it, rename it. This happens both in Windows Explorer and from a command prompt. The properties dialog in Windows Explorer doesn't even contain the Security tab. I created the directory, so I don't think this is truly a permissions issue. I've occasionally had this error happen in the past as well. I believe the error is misleading, but the question is: what is the real problem and how do I fix it?

    Read the article

  • Program can't start because GX6050R.dll is missing

    - by Robert P.
    I'm trying to install a program on a Windows 7 machine, but at the end of the installation process, when I press "Finish", I get an error message saying:
      Actrix.exe - System Error
      The program can't start because GX6050R.dll is missing from your computer. Try reinstalling the program to fix this problem.
    I have tried reinstalling the program, restarting the computer (before reinstalling), etc., but I can't get this to work. The sites I find when I Google this suggest I download some "Error repair tool" (not happening). Any clues as to how I can fix this?

    Read the article

  • Installing Plone on CentOS fails: Unable to find libssl or openssl/ssl.h.

    - by paskster
    My dedicated server runs CentOS 5.5. I tried to install Plone, so I basically did:
      wget launchpad.net/plone/4.0/4.0.2/+download/Plone-4.0.2-UnifiedInstaller.tgz
      tar xzf Plone-4.0.2-UnifiedInstaller.tgz
      cd Plone-4.0.2-UnifiedInstaller
      ./install.sh zeo
    I ran into the following error: "Unable to find libssl or openssl/ssl.h. If you wish to build without SSL support, run install.sh again with --without-ssl flag. Otherwise, install your platform's openssl-dev libraries and headers and try again." After this error I successfully installed openssl (yum install openssl) and tried to install Plone again, but I keep getting the same error: "Unable to find libssl or openssl/ssl.h". Does anybody have an idea what I'm missing?
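    For what it's worth, the installer's message asks for the OpenSSL development headers, which on CentOS are packaged separately from the runtime library; a minimal sketch, assuming the stock CentOS 5 repositories:
      # headers (openssl/ssl.h) and the link-time libssl live in the -devel package
      yum install openssl-devel
      # then re-run the installer from the unpacked directory
      cd Plone-4.0.2-UnifiedInstaller
      ./install.sh zeo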

    Read the article

  • SQL Server 7 Transaction Logs Issues

    - by nate
    Over the week, my database server's transaction log filled up. With our app, people could select from the database but could not update or insert. In the past we have just truncated the transaction logs, and after that everything was back to normal. This week I truncated the transaction logs and shrank the database. Now we can select, update, and insert into the database. The only issue is that when we do a big job and do a lot of inserting or updating, we get the following error:
      Database error: S1008:[Microsoft][ODBC]Operation canceled
    We never had this issue before; I am assuming it is the same as a timeout error. Has anyone else had this issue, or does anyone know how I can resolve it?

    Read the article

  • MacOS X 10.6 Portable Home Directory sync fails due to FileSync agent crashing

    - by tegbains
    On one of our cleanly installed Mac Pro machines running Mac OS X 10.6.6 and connected to our Mac OS X 10.6.6 Server, syncing data using Portable Home Directories fails. It seems to be due to the FileSync agent crashing during the home sync. We get -41 and -8062 errors, which we suspect indicate that there is too much data or that the FileSync agent can't read the files. The user is the owner of the files and can read/write all of them.
      < Logout 0:: [11/02/04 13:10:42.751] Error -41 copying /Volumes/RCAUsers/earlpeng/Library/Mail/Mailboxes/email from old imac./Attachments/12081/2.2. (source = NO)
      < Logout 0:: [11/02/04 13:10:42.758] Error -8062 copying /Volumes/RCAUsers/earlpeng/Library/Mail/Mailboxes/email from old imac./Attachments/12081/2.2/[email protected]. (source = NO)
      < Logout 1:: [11/02/04 13:10:42.758] -[DeepCopyContext deepCopyError:sourceError:sourceRef:]: error = -8062, wasSource = NO: return shouldContinue = NO

    Read the article

  • Makefile fails to install file correctly, installing HPL

    - by zarose
    I started installing HPL a while ago, and had a related question. I've been following along with this guide from Intel. I figure this warrants a whole new one. When I try to make the archive, the output seems fine until the end, where it gives an error.
      make[2]: Entering directory `/hpl-2.0/src/auxil/intel64'
      Makefile:47: Make.inc: No such file or directory
      make[2]: *** No rule to make target `Make.inc'. Stop.
      make[2]: Leaving directory `/hpl-2.0/src/auxil/intel64'
      make[1]: *** [build_src] Error 2
      make[1]: Leaving directory `/hpl-2.0'
      make: *** [build] Error 2
    Going to the directory /hpl-2.0/src/auxil/intel64 shows a file, "Make.inc", but it's highlighted red, and the white text blinks. Is there a way to manually make that file? What do I need to do to get the makefile to do this for me?
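    A name that ls shows in blinking red is usually a dangling symbolic link, so a first check (a diagnostic sketch only; the target name below is a guess based on the intel64 arch) is to see where Make.inc points and whether that target actually exists:
      cd /hpl-2.0/src/auxil/intel64
      ls -l Make.inc               # shows the link and its target
      readlink Make.inc            # prints the target path even if it is missing
      ls -l /hpl-2.0/Make.intel64  # hypothetical target: the top-level Make.<arch> file the guide has you create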

    Read the article

  • How to find which tab a particular Chrome process refers to

    - by George
    I often have 6 or 7 separate Chrome windows open, often with 5-10 tabs in each. When I look at Windows Task Manager, I see each chrome.exe process, with some using a large amount of memory. How can I find which particular tab the process refers to? I want to know which one uses the most memory and close that tab instead of having to close every Chrome window. Is there any way to get this information? This is on Windows Vista, but it is the same on other versions of Windows as well.

    Read the article

  • Problems serving SVN over HTTPS on Ubuntu 10.04

    - by odd parity
    We've been experiencing some problems with our Subversion server after upgrading to Ubuntu 10.04. When trying to access a repository, regardless of client (I've tried git-svn and svn on Windows as well as svn on Ubuntu 10.04, from different computers and network locations), I get a 400 Bad Request. Here's the output from svn:
      svn: Server sent unexpected return value (400 Bad Request) in response to OPTIONS request for 'https://svn.example.org/svn/programs'
    Here are the relevant entries from the Apache logs (I'm running Apache 2.2):
      error.log:
      [Mon Jun 14 11:29:31 2010] [error] [client x.x.x.x] request failed: error reading the headers
      ssl_access.log:
      x.x.x.x - - [14/Jun/2010:11:29:28 +0200] "OPTIONS /svn/programs HTTP/1.1" 401 2643 "-" "SVN/1.6.6 (r40053) neon/0.29.0"
      x.x.x.x - - [14/Jun/2010:11:29:31 +0200] "ction-set/></D:options>OPTIONS /svn/programs HTTP/1.1" 400 644 "-" "SVN/1.6.6 (r40053) neon/0.29.0"
    If anyone has run into similar problems or could give me a pointer to track down the cause of this I'd be very grateful - I'd really like to avoid having to downgrade the box again.

    Read the article

  • Zimbra ZCS 7.2.1 MTA Deferring e-mail

    - by user139181
    This is Zimbra 7.1.2, and the MTA seems to be deferring e-mail when it is received.
      Oct 1 09:35:42 www postfix/error[16614]: 5FB8C1A803EE: to=<[email protected]>, relay=none, delay=0.15, delays=0.08/0.01/0/0.06, dsn=4.4.1, status=deferred (delivery temporarily suspended: connect to thedigiologygroup.org[75.149.56.27]:7025: Connection timed out)
    I can telnet to both 25 and 7025. I do get a response:
      $ telnet thedigiologygroup.org 25
      Trying 75.149.56.27...
      Connected to thedigiologygroup.org.
      Escape character is '^]'.
      220 thedigiologygroup.org ESMTP Postfix
      500 5.5.2 Error: bad syntax
      500 5.5.2 Error: bad syntax
    I don't see the email in the inbox, obviously, and I am not sure how to troubleshoot what is going on. Nothing in DNS has changed. This box has been running for a year; Zimbra was removed and re-installed after trying to upgrade to ZCS 8, with no luck.
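    Since the deferral complains about the LMTP port (7025) rather than port 25, a few checks worth running on the Zimbra box itself (a diagnostic sketch, not a fix):
      # Is mailboxd actually listening on the LMTP port?
      netstat -ln | grep 7025
      # Are all Zimbra services running?
      su - zimbra -c 'zmcontrol status'
      # Does the MTA resolve the delivery host to the expected address?
      host thedigiologygroup.org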

    Read the article

  • Windows Malicious Software Removal Tool log says it can't do all required actions. Should I be concerned?

    - by Tom
    Here's what the log file c:/Windows/debug/mrt.log of my Windows 7 install says:
      WARNING: Security policy doesn't allow for all actions MSRT may require.
      ->Scan ERROR: resource process://pid:6080 (code 0x00000005 (5))
      ->Scan ERROR: resource process://pid:5300 (code 0x00000057 (87))
      ->Scan ERROR: resource process://pid:3512 (code 0x00000057 (87))
    I use the default setup; I didn't change anything. This is the first time I have checked the log file, and this warning has been in there from the start. Can I do something about it? Or should I not be concerned, because it can do everything that's necessary anyway? Do you have this warning in your log file?

    Read the article

  • How do I disable nginx sending messages to syslog?

    - by altman
    My nginx sends lots of messages to syslog, but I don't need them. In my nginx.conf:
      error_log /var/log/nginx-error.log notice;
      ......
      server {
          access_log off;
          location / {
              ....
          }
      }
    But in my /var/log/message I still see:
      Nov 22 23:25:09 cache3 nginx: 2011/11/22 23:25:09 [error] 3437#0: *32172530 kevent() reported about an closed connection (60: Operation timed out) while reading response header from upstream, client: , server: , request: "GET http://www.igoido012.com//vk HTTP/1.1", upstream: "http:////vk", host: "www.igoido012.com", referrer: "http://www.baidu.com/"
      Nov 22 23:25:09 cache3 nginx: 2011/11/22 23:25:09 [error] 3437#0: *32099531 upstream timed out (60: Operation timed out) while reading response header from upstream, client: , server: , request: "GET http://t.web2.qq.com/channel/poll?msg_id=0&clientid=431509&t=1321975433305 HTTP/1.1", upstream: "http://:80/channel/poll?msg_id=0&clientid=431509&t=1321975433305", host: "t.web2.qq.com", referrer: "http://t.web2.qq.com/proxy.html?v=20110331001"
    How can I prevent nginx from sending messages to syslog?

    Read the article

  • PHP Kohana CentOS 5

    - by Undefined
    I'm trying to deploy a Kohana-based project on CentOS 5. I installed PHP 5.3.1 but still get the following error:
      Warning: preg_match() [function.preg-match]: Compilation failed: this version of PCRE is not compiled with PCRE_UTF8 support at offset 0 in /usr/local/apache2/htdocs/icarus/system/core/utf8.php on line 30
      Fatal error: PCRE has not been compiled with UTF-8 support. See PCRE Pattern Modifiers for more information. This application cannot be run without UTF-8 support. in /usr/local/apache2/htdocs/icarus/system/core/utf8.php on line 38
    I have been trying for the last two days; I upgraded PHP from 5.1 to 5.3 but am still getting the same error. As far as I can tell, the problem is that the PCRE module reported by phpinfo() dates from September 2004. Below is the actual line:
      PCRE Library Version 5.0 13-Sep-2004
    Can anyone tell me how to upgrade it, or what the solution to the problem is? Thanks.
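    To see which PCRE the PHP binary is actually using, and whether the system library was built with UTF-8 support, a diagnostic sketch (assumes the PHP 5.3 CLI and the PCRE tools are on the PATH):
      # PCRE version and options as PHP sees them
      php -i | grep -i pcre
      # Compile-time options of the system PCRE library; look for "UTF-8 support"
      pcretest -C
      # Which pcre packages are installed
      rpm -qa | grep -i pcre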

    Read the article

  • Sending mail to local address crashes web server (sendmail)

    - by deceze
    When trying to send mail automatically from a script at example.com via PHP's mail() to [email protected], the Apache server throws an internal server error. I believe it is internally configured to use sendmail. The message gets dropped into ~/dead.letter and the general error log reads:
      [Wed May 12 11:26:45 2010] [error] [client xxx.xxx.xxx.xxx] malformed header from script. Bad header=/home/example/dead.letter... S: /home/example/www/test.php
    Trying any other address, not @example.com, works just fine. I have googled and serverfaulted for solutions, but they all require editing configuration files in /etc/mail and similar system places, which is not an option, since this problem occurs on a shared host where I only have access to ~/. Does anyone have a suggestion?

    Read the article

  • How to run Macromedia Projector executables on Windows 7?

    - by shinjin
    When I try to use an old app created with Macromedia Projector on Windows 7, it crashes after the first few screens. The same program works fine on XP. I receive this error message after a few screens:
      Error
      A Fatal Error has occurred. Click OK to Quit.
    Pressing OK brings up a fresh one:
      Microsoft Visual C++ Runtime Library
      This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information.
    And finally I get a "Macromedia Projector has stopped working" message. I have already tried adjusting compatibility mode and adding the program to the Data Execution Prevention exceptions, but neither helped.

    Read the article

  • Running Sonatype Nexus in Tomcat 7.0, Tomcat blocking PUT requests

    - by gdm
    I was previously running Nexus 1.8 on OS X and uploading jars for releases without any issues. The OS X box died, so I moved to a FreeBSD server. Since Nexus doesn't have binaries for FreeBSD, I decided to run it in my Tomcat container. Now I have set up Nexus 1.9 in Tomcat 7.0 on FreeBSD. Everything is working well, except that I can't upload jars to my release or snapshot repositories. If I try via Hudson, I get a 401 error (and no further details). If I try manually via curl, I get an error message back from Tomcat: "This request requires HTTP authentication." Why is Tomcat giving this error, and how do I stop it? If I look in the Nexus logs I can see that the PUT request doesn't even reach Nexus; Tomcat is intercepting it.
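    To see which side answers the request, the PUT can be reproduced by hand with verbose output; the repository path and credentials below are placeholders, not taken from the original setup:
      # -v shows whether the 401/400 comes back from Tomcat's realm or from Nexus itself
      curl -v -u deployer:secret \
           --upload-file mylib-1.0.jar \
           http://server:8080/nexus/content/repositories/releases/com/example/mylib/1.0/mylib-1.0.jar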

    Read the article

  • Why is the server performance so poor? What can be done to improve the speed of the server?

    - by fslsyed
    Very slow processing using Windows Server 2008 R2 Standard with Service Pack 1.
    Situation: Read a text file and use the text data to populate a series of MS SQL tables. The converted data is used to generate monthly PDF invoice files; the PDF files are saved directly to the hard drive. The application is multi-threaded, with one thread used for the text conversion and three threads for PDF invoice generation. The text conversion occurs concurrently with the invoice generation.
    Application software: C# using Microsoft Visual Studio 2010 Ultimate. Crystal Report Writer 2011 with runtime 13_0_3, 64-bit version. Targeted platform is x64; also tested as x86 and Any CPU with similar results. Microsoft .NET Framework 4.0. Microsoft SQL 2008.
    Issue: The software is running very slowly. The conversion of the text file is approximately six hundred fifty records per second, and generation of the PDF files is approximately twelve invoices per minute. The text file to be converted is six hundred meg, with seven thousand invoices to be generated. The software was installed on three different machines from the same distribution files. The same text file was converted on each machine. The user executing the application was an administrator on each machine. The only variances were the machine and operating system. The configurations are as follows:
      Server: Windows Server 2008 R2 Standard 64-bit (6.1, Build 7601) SP1; System Manufacturer: IBM; System Model: System x3550 M3-[7944AC1]-; BIOS: Default System BIOS; Processor: Intel Xeon CPU E5620 @ 2.4GHz (16 CPUs); Memory: 16384MB
      Notebook: Windows 7 Home Premium 64-bit (6.1, Build 7601); System Manufacturer: Hewlett-Packard; System Model: HP Pavilion dv7 Notebook PC; BIOS: Default System BIOS; Processor: AMD Phenom II N640 Dual-Core Processor 2.9GHz (2 CPUs); Memory: 6144MB
      Desktop: Windows 7 Professional 64-bit (6.1, Build 7601) SP1; System Manufacturer: Dell Inc.; System Model: OptiPlex 960; BIOS: Phoenix ROM BIOS PLUS Version 1.10 A11; Processor: Intel Core 2 Quad CPU Q9650 @ 3.00GHz (4 CPUs); Memory: 16384MB
    Processing results per machine (the application was executed seven times; the averages are shown below):
      Machine      Text records converted/minute   Invoices generated/minute
      Server (1)   650                             12
      Notebook     980                             17
      Desktop      2,100                           45
      (1) The server is dedicated to execution of this application; no additional applications are being executed.
    Question: Why is the server performance so poor? What can be done to improve the speed of the server?

    Read the article

  • Using a script that uses Duplicity + S3 excluding large files

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. If I run the script, duplicity gives an error; however, if I copy and paste the same command generated by the script, everything works. Here is the script:
      #!/bin/bash
      # Export some ENV variables so you don't have to type anything
      export AWS_ACCESS_KEY_ID="accesskey"
      export AWS_SECRET_ACCESS_KEY="secretaccesskey"
      export PASSPHRASE="password"
      SOURCE=/home/
      DEST=s3+http://s3bucket
      GPG_KEY="gpgkey"
      # exclude files over 100MB
      exclude () {
          find /home/jason -size +100M \
          | while read FILE; do
              echo -n " --exclude "
              echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'   # Replace whitespace with "\ "
          done
      }
      echo "Using Command"
      echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"
      duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST
      # Reset the ENV variables.
      export AWS_ACCESS_KEY_ID=
      export AWS_SECRET_ACCESS_KEY=
      export PASSPHRASE=
    When the script is run I get the error:
      Command line error: Expected 2 args, got 6
    Where am I going wrong?
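    A side note on the quoting: quote characters and backslashes that are echoed from inside a command substitution are passed to duplicity as literal text rather than interpreted by the shell, which is a common reason a pasted command works while the scripted one fails. A sketch of building the exclude list in a bash array instead, assuming the goal is simply to hand each pattern to duplicity as its own argument:
      # Collect one "--exclude <pattern>" pair per oversized file (pattern keeps only the filename)
      EXCLUDES=()
      while IFS= read -r FILE; do
          EXCLUDES+=( --exclude "**${FILE##*/}" )
      done < <(find /home/jason -size +100M)
      duplicity --encrypt-key="$GPG_KEY" --sign-key="$GPG_KEY" "${EXCLUDES[@]}" "$SOURCE" "$DEST"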

    Read the article

  • Issue with Exchange 2010 and Removing a Mailbox Database

    - by ThaKidd
    I did a 2003 to 2010 transition and everything is working well. During the 2010 install, a database was copied over with a random number at the end. I found it and moved three system mailboxes out of it into the database that holds all of the client accounts. I used the EMS to move those mailboxes to the other store, then used the EMC to remove the mailbox database. The problem is that I am now getting an error in Event Viewer every few hours complaining about this database. The error is:
      MSExchangeRepl - 4098
      The Microsoft Exchange Replication service couldn't find a valid configuration for database '5f012f40-3bad-4003-a373-dbc0ffb6736f' on server 'SERVER'. Error: (nothing reported after this)
    Does anyone know how to fix this issue? Thanks in advance; I appreciate your help and your valuable input!

    Read the article

  • Accessing C$ over LAN on Win2008R2 - cannot by hostname but can by IP and FQDN

    - by Idgoo
    Having an issue with one of our Win2k8 R2 file servers. Trying to access C$ or the Admin share gives us an error (see the error details at the bottom), but we are able to connect using the server's IP and FQDN.
      can access \\172.16.x.x\c$ with domain creds
      can access \\server.domain.local\c$ with domain creds
      cannot access \\servername\c$ with the same domain creds
    The server pings fine by hostname, IP, and FQDN, and the primary DNS suffix is also correct. The DNS, PTR, and WINS records for the server are all correct. I have checked that I am not trying to connect with cached credentials in the Windows vault, and the server is appending primary and connection-specific DNS suffixes to the hostname. Any ideas what might be causing this issue?
    Error details:
      c$ is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions.

    Read the article

  • How do I give MacPorts privileges?

    - by cojadate
    I tried to install the PostgreSQL server development libraries using MacPorts and got the following:
      Warning: MacPorts running without privileges. You may be unable to complete certain actions (e.g. install).
      ---> Computing dependencies for postgresql-server-devel
      ---> Dependencies to be installed: postgresql-devel
      ---> Building postgresql-devel
      Error: Target org.macports.build returned: shell command failed
      Error: The following dependencies failed to build: postgresql-devel
      Error: Status 1 encountered during processing.
      To report a bug, see <http://guide.macports.org/#project.tickets>
    So I guess that means I need to run MacPorts with privileges and try again. Unfortunately, I've no idea how to give MacPorts privileges. I'm running OS X 10.6.3.
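    With a stock MacPorts install (prefix /opt/local owned by root), installs are normally run through sudo from an administrator account; a minimal sketch:
      sudo port selfupdate
      sudo port install postgresql-server-devel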

    Read the article
