Search Results

Search found 53962 results on 2159 pages for 'error detection'.


  • PHP Causing Segmentation Fault & Apache Blank Response

    - by Joe
    I recently updated a Debian Lenny server to PHP 5.3.5 using the dotdeb source. Soon after doing so, certain (but not all) sites on the server stopped responding to requests. A blank response would be returned - no headers, no content, nothing. I found a related question on Stack Overflow which seems to describe something similar, and used the same code the user had in their answer to see if I could replicate the issue:

        <?php
        class A { public function __construct() { new B; } }
        class B { public function __construct() { new A; } }
        new A;
        print 'Loaded Class A';
        ?>

    This triggered the problem - the page returned absolutely nothing, despite the original question stating this was fixed in PHP 5.5.0. No pegged CPU as you'd expect, no delay, just an almost instant empty response. I then ran the same code from the CLI (php -f test.php) and the only output I got was 'Segmentation fault'. Tailing the kernel log I've spotted:

        Feb 16 07:04:06 creature kernel: [192203.269037] php[17710] general protection ip:76ef37 sp:7fff155e9bb0 error:0 in php5[400000+870000]
        Feb 16 08:57:31 creature kernel: [199639.699854] apache2[31136]: segfault at 7fff13a84fe0 ip 7f730514ea40 sp 7fff13a85008 error 6 in libphp5.so[7f7304ce8000+915000]

    All extremely odd, and I'm not sure what it's pointing to or what I should do to debug this further. As I said, some sites work, but code such as the above definitely triggers it. Not that the sites I want to serve have code like that - it's just an example. Any help is much appreciated!
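
    For illustration only (not from the original post): the same unbounded mutual-recursion pattern sketched in Python. Python's recursion guard turns it into a catchable error, whereas stock PHP 5.3 has no equivalent guard by default (xdebug's max nesting level can add one, as an assumption worth verifying), so the C stack is exhausted and the process segfaults.

        # Illustrative sketch: A and B construct each other forever. Python raises
        # RecursionError instead of overflowing the C stack like PHP does here.
        class A:
            def __init__(self):
                B()

        class B:
            def __init__(self):
                A()

        try:
            A()
        except RecursionError as exc:
            print("caught:", exc)   # "maximum recursion depth exceeded"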

    Read the article

  • Scripted forwarding for Outlook 2003

    - by John Gardeniers
    We have a staff member in sales who has gone onto a 4-day week (getting ready for retirement), so each Thursday afternoon her email needs to be forwarded to another user, and each Friday afternoon it needs to be set back. I'm using the VBScript below to do this, run via the Task Scheduler. Although the script appears to do its job, based on what I see when I view the user's Exchange settings, Exchange doesn't always recognise that the setting has changed. E.g. last Thursday the forwarding was enabled and worked correctly. On Friday the script did its thing to clear the forwarding, but Exchange continued to forward messages all weekend. I found that I can force Exchange to honour the changed setting by merely opening and closing the user's properties in ADUC. Of course I don't want to have to do that. Is there a non-manual way I can have Exchange read and honour the setting? The script (VBS):

        ' Call this script with the following parameters:
        '
        '   SrcUser - The logon ID of the user whose account is to be modified
        '   DstUser - The logon account of the person to whom mail is to be forwarded
        '             Use "reset" to clear the email forwarding

        SrcUser = WScript.Arguments.Item(0)
        DstUser = WScript.Arguments.Item(1)

        SourceUser = SearchDistinguishedName(SrcUser) ' The user login name
        Set objUser = GetObject("LDAP://" & SourceUser)

        If DstUser = "reset" Then
            objUser.PutEx 1, "altRecipient", ""
        Else
            ForwardTo = SearchDistinguishedName(DstUser) ' The contact common name
            objUser.Put "altRecipient", ForwardTo
        End If
        objUser.SetInfo

        Public Function SearchDistinguishedName(ByVal vSAN)
            Dim oRootDSE, oConnection, oCommand, oRecordSet
            Set oRootDSE = GetObject("LDAP://rootDSE")
            Set oConnection = CreateObject("ADODB.Connection")
            oConnection.Open "Provider=ADsDSOObject;"
            Set oCommand = CreateObject("ADODB.Command")
            oCommand.ActiveConnection = oConnection
            oCommand.CommandText = "<LDAP://" & oRootDSE.get("defaultNamingContext") & ">;(&(objectCategory=User)(samAccountName=" & vSAN & "));distinguishedName;subtree"
            Set oRecordSet = oCommand.Execute
            On Error Resume Next
            SearchDistinguishedName = oRecordSet.Fields("DistinguishedName")
            On Error GoTo 0
            oConnection.Close
            Set oRecordSet = Nothing
            Set oCommand = Nothing
            Set oConnection = Nothing
            Set oRootDSE = Nothing
        End Function

    Read the article

  • How to deploy website in IIS with a host name?

    - by Jayakumar
    I am trying to host my application in IIS. Below are the steps that I follow:

        1. Publish the code and place it in a path.
        2. Open IIS, right-click on "Sites" and select "Add Website".
        3. In that dialog, give the site name and select the app pool created for the application.
        4. Select the physical path of the published code.
        5. Leave the IP and port in the binding section unchanged.
        6. Finally, give the host name as fus.km.com.

    When I try to browse the application, the page does not load: "Internet Explorer cannot display the page". The machine domain is km.com.

    UPDATE: I tried adding the host name to the hosts file and flushed the DNS. The application then asked for user credentials (I use Windows authentication in the application), but it did not log in. On repeated tries it throws the error:

        HTTP Error 401.1 - Unauthorized
        You do not have permission to view this directory or page using the credentials that you supplied.

    I tried logging in with a different user, but I get the same result.

    Read the article

  • Varnish returning 503, FetchError (could not get storage)

    - by Archan
    On the current setup we're running into a problem with Varnish. We're running a CentOS 5.7 x86_64 xenpv VPS, with cPanel WHM, hosted at VPS.net. Sometimes we will receive a Guru Meditation from Varnish, and when we look in the varnishlog with the following command

        varnishlog -d -c -m TxStatus:503

    it returns output similar to the following:

        15 VCL_call     c recv
        15 VCL_acl      c NO_MATCH devs
        15 VCL_return   c pass
        15 VCL_call     c hash
        15 Hash         c ****
        15 Hash         c *************
        15 VCL_return   c hash
        15 VCL_call     c pass pass
        15 Backend      c 12 default default
        15 TTL          c 1835862523 RFC 0 -1 -1 1332454056 0 1332454055 375007920 0
        15 VCL_call     c fetch hit_for_pass
        15 ObjProtocol  c HTTP/1.1
        15 ObjResponse  c OK
        15 ObjHeader    c Date: Thu, 22 Mar 2012 22:07:35 GMT
        15 ObjHeader    c Server: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 mod_fcgid/2.3.6
        15 ObjHeader    c X-Powered-By: PHP/5.3.9
        15 ObjHeader    c Expires: Thu, 19 Nov 1981 08:52:00 GMT
        15 ObjHeader    c Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        15 ObjHeader    c Pragma: no-cache
        15 ObjHeader    c Content-Type: text/html; charset=utf-8
        15 ObjHeader    c X-Cacheable: NO:Cache-Control=private
        15 FetchError   c chunked read_error: 12 (Could not get storage)
        15 VCL_call     c error deliver
        15 VCL_call     c deliver deliver

    As far as I could gather, we could try increasing the nuke_limit, but currently we have a nuke_limit of 500, and when running

        varnishstat -1 -f n_lru_nuked

    we "only" get a total of 1031, even though we have seen the error happen on several pages. When we then run top to see how much memory Varnish is using, it only shows that it is using 763m, although we've set it to be allowed to use 1200m. Any ideas of what the problem can be?

    Read the article

  • Multiple rack apps on nginx + passenger, one as root, the other not...config help

    - by cannikin
    So I've got two apps I want to run on a server. One app I would like to be the "default" app - that is, all URLs should be sent to this app by default, except for a certain path, let's call it /foo:

        http://mydomain.com/        -> app1
        http://mydomain.com/apples  -> app1
        http://mydomain.com/foo     -> app2

    My two Rack apps are installed like so:

        /var
          /www
            /apps
              /app1
                app.rb
                config.ru
                /public
              /app2
                app.rb
                config.ru
                /public
            app1 -> apps/app1/public
            app2 -> apps/app2/public

    (app1 and app2 are symlinks to their respective apps' public directories). This is the Passenger setup for sub-URIs described here: http://www.modrails.com/documentation/Users%20guide%20Nginx.html#deploying_rack_to_sub_uri

    With the following config I've got /foo going to app2:

        server {
            listen 80;
            server_name mydomain.com;
            root /var/www;
            passenger_enabled on;
            passenger_base_uri /app1;
            passenger_base_uri /app2;

            location /foo {
                rewrite ^.*$ /app2 last;
            }
        }

    Now, how do I get app1 to pick up everything else? I've tried the following (placed after the location /foo directive), but I get a 500 with an infinite internal redirect in error.log:

        location / {
            rewrite ^(.*)$ /app1$1 last;
        }

    I hoped that the last directive would prevent that infinite redirect, but I guess not. /foo gets the same error. Any ideas? Thanks!

    Read the article

  • How to move your Windows User Profile to another drive in Windows 8

    - by Mark
    I like to have my user folder on a different drive (D:) than my OS (C:). After reading the following post I decided to give it a try. All went quite well, until I found out that my Windows 8 apps won't launch anymore (other than that I didn't notice any problems). The apps do work when I use an account that hasn't been moved. In the Event Viewer I've found error messages like these:

        App <Microsoft.MicrosoftSkyDrive> crashed with an unhandled Javascript exception. App details are as follows:
        Display Name: <SkyDrive>, AppUserModelId: <microsoft.microsoftskydrive_8wekyb3d8bbwe!Microsoft.MicrosoftSkyDrive>
        Package Identity: <microsoft.microsoftskydrive_16.4.4204.712_x64__8wekyb3d8bbwe> PID: <4452>.
        The details of the JavaScript exception are as follows:
        Exception Name: <WinRT error>, Description: <Loading the state store failed.>,
        HTML Document Path: </modernskydrive/product/skydrive/App.html>,
        Source File Name: <ms-appx://microsoft.microsoftskydrive/jx/jx.js>,
        Source Line Number: <1>, Source Column Number: <27246>, and Stack Trace:
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:27246 localSettings()
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:51544 _initSettings()
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:54710 getApplicationStatus(boolean)
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:48180 init(object)
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:45583 Application(number, boolean)
        ms-appx://microsoft.microsoftskydrive/modernskydrive/product/skydrive/App.html:216:13 Anonymous function(object)

    Using ProcMon, I see a lot of ACCESS DENIED messages, like these:

        Date & Time:     12-9-2012 9:32:20
        Event Class:     File System
        Operation:       CreateFile
        Result:          ACCESS DENIED
        Path:            D:\Users\John\AppData\Local\Packages\microsoft.microsoftskydrive_8wekyb3d8bbwe\Settings\settings.dat
        TID:             2520
        Duration:        0.0000149
        Desired Access:  Read Data/List Directory, Write Data/Add File, Read Control
        Disposition:     OpenIf
        Options:         Sequential Access, Synchronous IO Non-Alert, No Compression
        Attributes:      N
        ShareMode:       None
        AllocationSize:  0

    Any idea how to solve this? I noticed that the app folders, e.g. D:\Users\john\AppData\Local\Packages\microsoft.microsoftskydrive_8wekyb3d8bbwe, had a different owner than the old profile folder had: the old profile folder had john as owner, whereas my new profile folder had the Administrators group as owner. Changing this didn't help, unfortunately.
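
    Not from the original post - just a sketch of one direction the ProcMon output suggests (the app container cannot read its Settings folder on the moved profile): re-apply ownership and grant the built-in app-container group access with icacls. The path, the user name, and the locale-dependent group name "ALL APPLICATION PACKAGES" are assumptions; run from an elevated prompt and double-check the switches before using.

        # Hedged sketch: reset ownership/ACLs on a moved profile's Packages folder so
        # modern (Store) apps can reach their settings again. Paths, the user name,
        # and the "ALL APPLICATION PACKAGES" group name are assumptions.
        import subprocess

        packages_dir = r"D:\Users\john\AppData\Local\Packages"
        user = "john"

        commands = [
            # give ownership back to the profile's user, recursively
            ["icacls", packages_dir, "/setowner", user, "/T", "/C"],
            # Store apps run in an app container; this built-in group needs access
            ["icacls", packages_dir, "/grant", "ALL APPLICATION PACKAGES:(OI)(CI)F", "/T", "/C"],
            # keep full control for the user as well
            ["icacls", packages_dir, "/grant", user + ":(OI)(CI)F", "/T", "/C"],
        ]

        for cmd in commands:
            print("running:", " ".join(cmd))
            subprocess.run(cmd, check=False)  # continue even if individual files fail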

    Read the article

  • Solr startup script problem

    - by Camran
    I have installed Solr and it finally works... I am now having problems setting it up to start automatically with a start command. I have followed a tutorial and created a file called solr in the /etc/init.d directory. Here is that file:

        #!/bin/sh -e
        # SOLR auto-start
        #
        # description: auto-starts solr engine
        # processname: solr-production
        # pidfile: /var/run/solr-production.pid

        NAME="solr"
        PIDFILE="/var/run/solr-production.pid"
        LOG_FILE="/var/log/solr-production.log"
        SOLR_DIR="/etc/jetty"
        JAVA_OPTIONS="-Xmx1024m -DSTOP.PORT=8079 -DSTOP.KEY=stopkey -jar start.jar"
        JAVA="/usr/bin/java"

        start() {
            echo -n "Starting $NAME... "
            if [ -f $PIDFILE ]; then
                echo "is already running!"
            else
                cd $SOLR_DIR
                $JAVA $JAVA_OPTIONS 2> $LOG_FILE &
                sleep 2
                echo `ps -ef | grep -v grep | grep java | awk '{print $2}'` > $PIDFILE
                echo "(Done)"
            fi
            return 0
        }

        stop() {
            echo -n "Stopping $NAME... "
            if [ -f $PIDFILE ]; then
                cd $SOLR_DIR
                $JAVA $JAVA_OPTIONS --stop
                sleep 2
                rm $PIDFILE
                echo "(Done)"
            else
                echo "can not stop, it is not running!"
            fi
            return 0
        }

        case "$1" in
            start)
                start
                ;;
            stop)
                stop
                ;;
            restart)
                stop
                sleep 5
                start
                ;;
            *)
                echo "Usage: $0 (start | stop | restart)"
                exit 1
                ;;
        esac

    Whenever I do solr -start I get this error:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap

    I think this is because of the file above... Also, here is where I have Solr installed: /var/www/solr, and here is where the start.jar file is located: /var/www/start.jar. Help me out if you know what's causing this. Thanks. BTW: the OS is Ubuntu 9.10.

    Read the article

  • OpenSWAN KLIPS not working

    - by bonzi
    I am trying to set up IPsec between 2 VMs launched by OpenNebula, using Openswan. This is the ipsec.conf file:

        config setup
            oe=off
            interfaces=%defaultroute
            protostack=klips

        conn host-to-host
            left=10.141.0.135            # Local IP address
            connaddrfamily=ipv4
            leftrsasigkey=key
            right=10.141.0.132           # Remote IP address
            rightrsasigkey=key
            ike=aes128                   # IKE algorithms (AES cipher)
            esp=aes128                   # ESP algorithms (AES cipher)
            auto=add
            pfs=yes
            forceencaps=yes
            type=tunnel

    I'm able to establish the connection with netkey, but KLIPS doesn't work. ipsec barf shows:

        #71: ERROR: asynchronous network error report on eth0 (sport=500) for message to 10.141.0.132 port 500, complainant 10.141.0.135: No route to host [errno 113, origin ICMP type 3 code 1 (not authenticated)]

    tcpdump shows:

        22:50:20.592685 IP 10.141.0.132.isakmp > 10.141.0.135.isakmp: isakmp: phase 1 I ident
        22:50:25.602182 ARP, Request who-has 10.141.0.135 tell 10.141.0.132, length 46
        22:50:26.602082 ARP, Request who-has 10.141.0.135 tell 10.141.0.132, length 46
        22:50:27.601985 ARP, Request who-has 10.141.0.135 tell 10.141.0.132, length 46

    ipsec eroute shows:

        0 10.141.0.135/32 -> 10.141.0.132/32 => %trap

    What could be the problem?

    Read the article

  • Blank list of windows services

    - by Joe
    Recently when I open Windows Services (always as administrator) I get a blank list of services. When I try to click on one of the empty lines I get a "Script Error" message. This happens over and over again; after several times I restarted my computer. I can't pinpoint exactly when this started happening or whether I made any specific changes to my computer at that time. Someone told me to try running sfc /scannow as administrator, but when I try to do that the scan stops at 34% and I get the message: "Windows Resource Protection could not perform the requested operation." I am running Windows 7 Enterprise 64-bit, and I would really like to avoid reinstalling Windows. Does anyone know how to fix this?

    Edit - Here is another attempt I made and some more information that might help: following WhoIsRich's suggestion, I tried the command sfc /scannow /offbootdir=c:\ /offwindir=c:\windows. This gave the error message "The arguments passed to sfc are invalid. The offline windows directory specified points to the online system", and then I realized this command is meant to be run after booting from another system. Since I don't have my Windows installation disk right now, I used my own system to create a recovery disk, then restarted my computer and used the recovery disk to boot. I then ran the above command and got the following message: "Windows Resource Protection found corrupt files but was unable to fix some of them. Details are included in the CBS.log". I then restarted my computer and let it boot up normally. The problem with Windows Services persists, and CBS.log is a long log file with many entries; I don't know if there is useful information in it or, if there is, how to find it.

    Read the article

  • Password Authentication Fails - NTLMv2

    - by JMeterX
    Environment:

        - Windows 2000 SP4 (EDIT: domain controller, with no trust set up with the Win2008 server)
        - Windows XP machines
        - Windows 2008 Server
        - NetApp NAS

    Problem: we have a shared folder that resides on a NAS, using a Windows 2008 AD for the authentication, with the proper permissions set up. When the Windows 2000 machine tries to open the share residing on the Win2008 machine, it is prompted for a username and password. Upon entering the credentials it continuously re-asks for credentials.

    Important details:

        - The Windows 2000 machine can ping both the XP machines and the Windows 2008 server.
        - The Windows 2008 machine is mandated to only use NTLMv2.
        - The Windows 2000 machine was originally set to NTLM but was recently switched to "NTLMv2 if negotiated" for the purpose of trying to connect to the share.
        - As I am sure it will come up, we are using Windows 2000 because of contractual obligations.

    Questions:

        - Why is password authentication failing in this case?
        - After setting a GPO for the Win2000 machine for it to use NTLMv2, do we need to reboot the machine for the changes to take effect? We used SECEDIT to update the GPOs without rebooting.

    UPDATE: We checked both of the 2008 domain controllers to find an error code. We received:

        Microsoft_Auth_Package_V1_0
        0xc000006a
        Event ID: 4776

    I know this to be an authentication error via this article: "The value provided as the current password is not correct". We know this password to be correct, but since these two domains (Win2000 & Win2008) do not have a trust set up, what authentication account needs to be used? One that resides on the Win2000-hosted domain?

    Read the article

  • Apache2 Re-Routing from Domain Name to Internal IP Address

    - by Richard Grey
    The problem that I am having is that when someone goes to my domain name example.co.uk, for some reason Apache seems to be re-routing the request to the internal IP address of the server, i.e. 192.168.0.52. My Apache2 default sites-enabled file is as follows:

        ServerAdmin [email protected]
        ServerName trusteeguard.co.uk
        ServerAlias www.trusteeguard.co.uk
        DocumentRoot /var/www
        <Directory />
            Options FollowSymLinks
            AllowOverride All
        </Directory>
        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride All
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>
        ErrorLog /var/log/apache2/trusteeguard-error.log
        # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
        LogLevel warn
        CustomLog /var/log/apache2/trusteeguard-access.log combined
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>

    This is an Ubuntu box if that is any help ;)

    Read the article

  • Can't find gnutls library when executing rpmbuild as non-root

    - by Rilindo
    I am trying to build ntfsprogs from the latest source, using the .spec from RPMforge - as non-root, via rpmbuild. During the compile, it fails at this step:

        checking for GNUTLS... no
        configure: error: ntfsprogs crypto code requires the gnutls library.
        error: Bad exit status from /var/tmp/rpm-tmp.78913 (%build)

    However, I can compile it successfully outside of rpmbuild, so it sounds like it's just a matter of the library being seen during the build. I can confirm that rpmbuild can see the directory where gnutls resides:

        [foo@bar ~]$ rpmbuild -E '%{_libdir}' rpmbuild/SPECS/ntfsprogs.spec
        /usr/lib

    Library location:

        [foo@bar ntfs-3g_ntfsprogs-2012.1.15]$ /sbin/ldconfig -p | grep -i gnutls
        libgnutls.so.13 (libc6) => /usr/lib/libgnutls.so.13
        libgnutls.so (libc6) => /usr/lib/libgnutls.so
        libgnutls-openssl.so.13 (libc6) => /usr/lib/libgnutls-openssl.so.13
        libgnutls-openssl.so (libc6) => /usr/lib/libgnutls-openssl.so
        libgnutls-extra.so.13 (libc6) => /usr/lib/libgnutls-extra.so.13
        libgnutls-extra.so (libc6) => /usr/lib/libgnutls-extra.so

    What would cause the library not to be seen when you build an RPM? EDIT: Oh yeah, I am running CentOS 5.5.

    Read the article

  • ubuntu dmidecode is not functioning properly

    - by Alaa Alomari
    dmidecode is giving inconsistent and conflicting results: it shows that I have two memory slots, while the correct number is 8 (the board is a Tyan S5350).

        uname -a
        Linux synd01 3.0.0-16-server #29-Ubuntu SMP Tue Feb 14 13:08:12 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

        root@synd01:/home/badmin# dmidecode -t 16
        dmidecode 2.9
        SMBIOS 2.33 present.

        Handle 0x0011, DMI type 16, 15 bytes
        Physical Memory Array
            Location: System Board Or Motherboard
            Use: System Memory
            Error Correction Type: None
            Maximum Capacity: 4 GB
            Error Information Handle: Not Provided
            Number Of Devices: 2

    while:

        root@synd01:/home/badmin# dmidecode -t 17 | grep Size
            Size: No Module Installed
            Size: No Module Installed
            Size: 1024 MB
            Size: 1024 MB
            Size: No Module Installed
            Size: No Module Installed
            Size: 1024 MB
            Size: 1024 MB

    lshw also shows:

        *-memory
            description: System Memory
            physical id: 11
            slot: System board or motherboard
            size: 4GiB
          *-bank:0
            description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
            physical id: 0
            slot: J3B1
            clock: 166MHz (6.0ns)
          *-bank:1
            description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
            physical id: 1
            slot: J3B3
            clock: 166MHz (6.0ns)
          *-bank:2
            description: DIMM DDR Synchronous 166 MHz (6.0 ns)
            physical id: 2
            slot: J2B2
            size: 1GiB
            width: 64 bits
            clock: 166MHz (6.0ns)
          *-bank:3
            description: DIMM DDR Synchronous 166 MHz (6.0 ns)
            physical id: 3
            slot: J2B4
            size: 1GiB
            width: 64 bits
            clock: 166MHz (6.0ns)
          *-bank:4
            description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
            physical id: 4
            slot: J3B2
            clock: 166MHz (6.0ns)
          *-bank:5
            description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
            physical id: 5
            slot: J2B1
            clock: 166MHz (6.0ns)
          *-bank:6
            description: DIMM DDR Synchronous 166 MHz (6.0 ns)
            physical id: 6
            slot: J2B3
            size: 1GiB
            width: 64 bits
            clock: 166MHz (6.0ns)
          *-bank:7
            description: DIMM DDR Synchronous 166 MHz (6.0 ns)
            physical id: 7
            slot: J1B1
            size: 1GiB
            width: 64 bits
            clock: 166MHz (6.0ns)

    What might cause this conflict and how can I fix it? Thanks
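
    Not part of the original question: a small cross-check sketch (assuming dmidecode is on the PATH and the script runs as root) that compares the device count declared by the type 16 Physical Memory Array with the type 17 Memory Device entries, which is exactly where the output above disagrees with itself.

        # Hedged diagnostic sketch: compare SMBIOS type 16 vs type 17 device counts.
        # Assumes dmidecode is installed and the script runs as root.
        import re
        import subprocess

        def dmidecode(dmi_type):
            out = subprocess.run(["dmidecode", "-t", str(dmi_type)],
                                 capture_output=True, text=True, check=True)
            return out.stdout

        declared = sum(int(n) for n in
                       re.findall(r"Number Of Devices:\s*(\d+)", dmidecode(16)))
        slots = re.findall(r"^\s*Size:\s*(.+)$", dmidecode(17), flags=re.MULTILINE)
        populated = [s for s in slots if "No Module Installed" not in s]

        print(f"type 16 declares {declared} device(s); "
              f"type 17 lists {len(slots)} slot(s), {len(populated)} populated")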

    Read the article

  • Setting up SSL on JBoss 5

    - by socal_javaguy
    How can I enable SSL on JBoss 5 on a Linux (Red Hat - Fedora 8) box? What I've done so far is:

        1. Created a test keystore.
        2. Placed the newly generated server.keystore in $JBOSS_HOME/server/default/conf.
        3. Made the following change to server.xml in $JBOSS_HOME/server/default/deploy/jbossweb.sar to include this:

        <!-- SSL/TLS Connector configuration using the admin devl guide keystore -->
        <Connector protocol="HTTP/1.1" SSLEnabled="true"
            port="8443" address="${jboss.bind.address}"
            scheme="https" secure="true" clientAuth="false"
            keystoreFile="${jboss.server.home.dir}/conf/server.keystore"
            keystorePass="mypassword" sslProtocol="TLS" />

        4. The problem is that when JBoss starts it logs this exception (during start-up), although I am still able to view everything under http://localhost:8080/:

        03:59:54,780 ERROR [Http11Protocol] Error initializing endpoint
        java.io.IOException: Cannot recover key
            at org.apache.tomcat.util.net.jsse.JSSESocketFactory.init(JSSESocketFactory.java:456)
            at org.apache.tomcat.util.net.jsse.JSSESocketFactory.createSocket(JSSESocketFactory.java:139)
            at org.apache.tomcat.util.net.JIoEndpoint.init(JIoEndpoint.java:498)
            at org.apache.coyote.http11.Http11Protocol.init(Http11Protocol.java:175)
            at org.apache.catalina.connector.Connector.initialize(Connector.java:1029)
            at org.apache.catalina.core.StandardService.initialize(StandardService.java:683)
            at org.apache.catalina.core.StandardServer.initialize(StandardServer.java:821)
            at org.jboss.web.tomcat.service.deployers.TomcatService.startService(TomcatService.java:313)

    I do know that there's more to be done to enable full SSL client authentication...

    Read the article

  • python mysqldb - mysql server gone away - can't reconnect

    - by david.barkhuizen
    When attempting to import a bunch of data into MySQL tables using Python and MySQLdb, I run into the error '2006 - MySQL server has gone away', and then I am unable to reconnect within the script. I am initially re-using a connection object across transactions (delineated by conn.commit()); then, when I first encounter this exception, if I create a new connection by calling MySQLdb.connect(), this new connection also fails with the same exception. The error does not occur immediately - I can pump a fair amount of data into the db - but then it faithfully occurs after I have inserted a couple thousand records, so roughly once the db has committed a certain transaction volume it always falls over like this. If I rerun the script, WITHOUT restarting the db server, then it resumes where it left off, pumps in some data, then falls over again. Before recommendations to change time-out settings: does anyone know why I am not able to establish a new connection after the initial failure, even if I try a couple of times, waiting a couple of seconds between each? (btw, I'm running Windows 7, MySQL server 5.1.48, MySQLdb 1.2.3.gamma.1, Python 2.6)
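
    Not part of the original question: a minimal retry-and-batch sketch, assuming MySQLdb raises OperationalError for error 2006 and that committing in smaller batches is acceptable for this import; the table name and connection parameters are placeholders.

        # Hedged sketch: retry the connection a few times and commit in small batches,
        # so a single oversized transaction (or max_allowed_packet) is less likely to
        # drop the connection. Connection parameters and table name are placeholders.
        import time
        import MySQLdb
        from MySQLdb import OperationalError

        def connect_with_retry(attempts=5, delay=3, **conn_kwargs):
            last_exc = None
            for _ in range(attempts):
                try:
                    return MySQLdb.connect(**conn_kwargs)
                except OperationalError as exc:
                    last_exc = exc          # e.g. (2006, 'MySQL server has gone away')
                    time.sleep(delay)
            raise last_exc

        def insert_rows(rows, batch_size=500, **conn_kwargs):
            conn = connect_with_retry(**conn_kwargs)
            cur = conn.cursor()
            for i, row in enumerate(rows, 1):
                try:
                    cur.execute("INSERT INTO my_table (a, b) VALUES (%s, %s)", row)
                except OperationalError:
                    conn = connect_with_retry(**conn_kwargs)   # reconnect, retry once
                    cur = conn.cursor()
                    cur.execute("INSERT INTO my_table (a, b) VALUES (%s, %s)", row)
                if i % batch_size == 0:
                    conn.commit()
            conn.commit()
            conn.close()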

    Read the article

  • Apache httpd processes and PHP out of memory

    - by Ofri
    I have a VPS running Apache, PHP and MySQL on CentOS, with a single Drupal website installed. The VPS has 256MB of RAM (which could be the root cause of all my problems... maybe I just need more). Whenever I try to open my website from multiple browser tabs (about 8... not 800) all at once, Apache crashes! I have this in the log:

        [Wed Oct 24 11:26:31 2012] [error] [client xxx] PHP Fatal error: Out of memory (allocated 28049408) (tried to allocate 201335 bytes) in xxx on line 2139, referer: xxx

    I have read many, many posts here, but I think there is something fundamental that I'm missing. If I understand correctly, some PHP script tried to allocate 200K after having allocated 28MB, and failed to do so. First question: should this cause Apache to crash?

    Next, I looked at top while running my little test. Indeed I see 7 httpd processes, each reserving about 30MB - which explains why my RAM runs out. How do I prevent Apache from creating new processes until it's out of memory? I tried configuring /etc/httpd/conf/httpd.conf like this:

        <IfModule prefork.c>
            StartServers         1
            MinSpareServers      1
            MaxSpareServers      1
            ServerLimit          1
            MaxClients           1
            MaxRequestsPerChild  100
        </IfModule>

    But got the same exact result! What am I missing? Thanks a lot!
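
    Not part of the original question: a back-of-the-envelope sizing sketch using the figures above (256 MB of RAM, roughly 30 MB resident per httpd child); the amount reserved for MySQL and the OS is an assumption, not a measurement.

        # Rough prefork sizing from the numbers in the post: 256 MB of RAM, ~30 MB
        # resident per httpd child. The reservation for MySQL/OS is an assumption.
        total_ram_mb = 256          # VPS memory
        reserved_mb  = 130          # assumed MySQL + OS + caches
        per_child_mb = 30           # observed httpd RSS in top

        max_clients = max(1, (total_ram_mb - reserved_mb) // per_child_mb)
        print(f"roughly {max_clients} prefork children fit alongside MySQL")
        # -> about 4; extra simultaneous tabs would then queue instead of exhausting RAM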

    Read the article

  • mod_rewrite works fine except for missing directory index files

    - by j w
    I have a legacy web site hosted on Apache. It has a number of web pages sitting in the public web root and its subfolders:

        publicDocs/
            directorywith_no_defaultfile/
                some-legacy-flat-page.htm
            .htaccess
            index.php
            some-legacy-flat-page.htm

    I would like to start using Zend MVC for some of the newer pages. I have got a .htaccess mod_rewrite rule working so that any request for a non-existent file is sent to be handled by the MVC bootstrap file (/index.php). With my current set-up, the following types of requests are routed to /index.php, the MVC bootstrap:

        /index.php
        /blah
        /directorywith_no_defaultfile/bloo

    The following types of request are served by old legacy (flat) pages:

        /some-legacy-flat-page.htm
        /directorywith_no_defaultfile/some-legacy-flat-page.htm

    But when I request a non-existent file that is a directory, like these:

        /directorywith_no_defaultfile
        /directorywith_no_defaultfile/

    I get an error:

        Forbidden
        You don't have permission to access /directorywith_no_defaultfile/ on this server.
        Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.

    I suspect this may have something to do with the way Apache handles default files. Do you know which Apache directives could be causing this?

    Read the article

  • Office 2003 Service Pack 3- Not able to install

    - by kabirrao
    I am trying to install Office 2003 SP3 on a Windows 2003 EE server (used as a terminal server) which already has Office 2003 SP2. I am getting an error that says "Update can not be applied". Below are the Event Viewer entries for Application:

        Event Type: Warning
        Event Source: MsiInstaller
        Event Category: None
        Event ID: 1015
        Date: 1-2-2010
        Time: 5:51:22
        User: Domain\domainadmin
        Computer: TER01
        Description: Failed to connect to server. Error: 0x800401F0
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

        Event Type: Information
        Event Source: MsiInstaller
        Event Category: None
        Event ID: 11708
        Date: 1-2-2010
        Time: 5:52:23
        User: Domain\domainadmin
        Computer: TER01
        Description: Product: Microsoft Office Professional Edition 2003 -- Installation failed.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data:
        0000: 7b 39 30 31 31 30 34 30   {9011040
        0008: 39 2d 36 30 30 30 2d 31   9-6000-1
        0010: 31 44 33 2d 38 43 46 45   1D3-8CFE
        0018: 2d 30 31 35 30 30 34 38   -0150048
        0020: 33 38 33 43 39 7d         383C9}

        Event Type: Information
        Event Source: McLogEvent
        Event Category: None
        Event ID: 257
        Date: 1-2-2010
        Time: 5:52:23
        User: NT AUTHORITY\SYSTEM
        Computer: TER01
        Description: Would be blocked by access protection rule (rule is in warn-only mode) (Common Standard Protection: Prevent common programs from running files from the Temp folder).

    Read the article

  • Home-made HTTP proxy server [closed]

    - by Martin Dimitrov
    I wanted to help a friend who has some restrictions at work on visiting certain sites. Locally, on a Windows 7 machine, I run an Apache server and decided to make it a proxy just for my friend's IP. So I added the following to the configuration file:

        ProxyRequests On
        ProxyVia On
        <Proxy *>
            Order deny,allow
            Deny from all
            Allow from <his.ip>
        </Proxy>

    It worked fine. But shortly afterwards the proxy started to receive many requests of the form:

        66.249.66.242 - - [22/Sep/2012:11:01:12 +0300] "GET /search?hl=en&lr=lang_en&as_qdr=all&ie=UTF-8&q=related:www.aarp.org/aarp-foundation/+allinurl:+foundation&tbo=1&sa=X&ei=BSy2T9L_L8PitQapwtHtBw&ved=0COQBEB8wPw HTTP/1.1" 403 208
        66.249.71.36 - - [22/Sep/2012:11:01:49 +0300] "GET /search?hl=en&lr=lang_en&as_qdr=all&ie=UTF-8&q=related:www.aarp.org/aarp-foundation/+allinurl:+foundation&tbo=1&sa=X&ei=BOCzT-_WK8_0sgbki5XCDA&ved=0COABEB8wPg HTTP/1.1" 403 208

    These are Google IPs. The requests come every 30 seconds or so. My friend is not at work today. In apache_error.log I see:

        [Sat Sep 22 11:09:20 2012] [error] [client 66.249.66.242] client denied by server configuration: C:/wamp/www/aclk
        [Sat Sep 22 11:09:47 2012] [error] [client 66.249.71.36] client denied by server configuration: C:/wamp/www/search

    What the hell is going on? Please, help.
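
    Not part of the original question: a quick log-summary sketch that tallies client IPs and response codes, to confirm whether hits like the ones above are anything more than denied (403) probes. The access-log path assumes a default WAMP layout and the standard combined log format; both are assumptions.

        # Hedged sketch: count (client IP, status) pairs in the Apache access log.
        # The log path is an assumption (default WAMP install); adjust as needed.
        import re
        from collections import Counter

        log_path = r"C:\wamp\logs\access.log"
        pattern = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "([A-Z]+) (\S+)[^"]*" (\d{3})')

        hits = Counter()
        with open(log_path, encoding="utf-8", errors="replace") as log:
            for line in log:
                m = pattern.match(line)
                if m:
                    ip, method, path, status = m.groups()
                    hits[(ip, status)] += 1

        for (ip, status), count in hits.most_common(10):
            print(f"{ip:>16}  {status}  {count}")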

    Read the article

  • OS X can connect to Windows machine, but can't access shared folders

    - by Bonnie
    I can create new folders on my Windows XP machine and set them to "shared". On my Mac, I pick Finder → Go → Connect to Server → smb://192.168.1.4 → Connect → Name / Password. It even shows me all the names of the newly created shared folders on my PC, but when I try to actually connect to any of them I get:

        connection failed, there was an error connecting

    Any idea what would cause that? The fact that it successfully gets so far - to actually showing me my PC share names - must mean I have 99% of this working correctly, i.e. the physical connection, the IP address, the user name, the password, etc. Still, I can't seem to access the folders themselves. I've tried this with my Windows XP firewall on/off, and Norton AntiVirus on/off. Same problem. Everything worked fine 4 months ago. Were there any odd OS X or Windows updates released recently? I always apply them all. smbclient on the Mac does correctly find the XP machine and my XP user name, and accepts my XP password. I get the following from that smbclient command:

        Doing spnego session setup (blob length=16)
        server didn't supply a full spnego negprot
        Got challenge flags: ...
        Got NTLMSSP flags: ...
        Got NTLMSP flags: ...
        Domain=[XPMACHINE] OS=[Windows 5.1] Server=[Windows 2000 LAN Manager]
        tree connect failed: NT_STATUS_INSUFF_SERVER_RESOURCES

    I'm not sure why a standard XP box can't "supply a full spnego negprot", whatever that means. Using XP's RegEdit to change my IRPStackSize from 11... to 13, 15, 20, 22... still gives that "NT_STATUS_INSUFF_SERVER_RESOURCES" error on the Mac.

    Read the article

  • Symantec Backup Exec 12 Tape Alert.

    - by Adam
    Every day, I run 5 backups using 6 tapes, and each day, when I run the inventory, I get a Tape Alert error. This occurs every day, on the same job. The error is:

        Job 'Inventory Daily ********' has reported Multiple Tape Alerts on server '******'. Please refer to job log *****.xml for more information.

    When I look at the job log, the Utility Job Information says:

        The device has reported the following TapeAlert diagnostic information:
        Information - The library has been manually turned offline and is unavailable for use. Robotic library for device: PV132T 500.
        Warning - Library security has been compromised. Robotic library for device: PV132T 500.
        Critical - The library has detected an inconsistency in its inventory. 1. Redo the library inventory to correct the inconsistency. 2. Restart the operation. Check the application's user manual or hardware user manual for specific instructions on redoing the library inventory. Robotic library for device: PV132T 500.

    When I run the same inventory a second time, the job completes successfully. I am using Symantec Backup Exec 12 running on Windows Server 2008, with a Dell PowerVault 132T 500 tape library. If anyone can help me resolve this problem, it would be very much appreciated.

    Read the article

  • heavy load on mysql

    - by payal
    I have a dedicated server with a very good configuration (16 GB RAM, etc.), but I am facing heavy load from MySQL. I am running a music website; only one database is in use and only 5-10 pages are being served. When I click on WHM's "Show Processlist" it shows only 2-3 processes. The WHM load is always less than one, but when I click on WHM's load page it shows 20% CPU usage by MySQL, and after some time it starts saying "Can not connect to MySQL. MySQL server has gone away":

        1691 (Trace) (Kill) mysql 0 19.2 2.7 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --log-error=/var/lib/mysql/server.xyz.com.err --pid-file=/var/lib/mysql/server.xyz.com.pid

    I have tested static pages and they come up blazing fast, but all dynamic pages which use MySQL are terribly slow - they take ages to open. My my.cnf file is:

        [mysqld]
        key_buffer = 1536M
        max_allowed_packet = 1M
        max_connections = 250
        max_user_connections = 15
        wait_timeout=40
        connect_timeout=10
        table_cache = 512
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 8M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 8
        query_cache_size = 32M
        server-id = 14
        old-passwords = 1

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [mysql]
        no-auto-rehash

        [myisamchk]
        key_buffer = 256M
        sort_buffer_size = 256M
        read_buffer = 2M
        write_buffer = 2M

        [mysqlhotcopy]
        interactive-timeout

    I have checked the error log and it says nothing. I have increased the maximum connections to 1000, but the same problem remains. If I disconnect that one database (just by changing its name), I can see that within half an hour the load on the server and on MySQL drops to negligible. I have tested everything; if there are particular types of query which can cause heavy load on a server, please can you list them - though for 5-10 pages they should never cause this much load. I have seen servers with 500 websites that work just fine.
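
    Not part of the original question: a small monitoring sketch (credentials are placeholders) that polls SHOW FULL PROCESSLIST every few seconds and prints statements that have been running for more than a few seconds, so the queries that are alive when the CPU spikes can actually be seen.

        # Hedged sketch: poll SHOW FULL PROCESSLIST and log any statement that has
        # been running for more than a few seconds. Credentials are placeholders.
        import time
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="root", passwd="secret")

        try:
            while True:
                cur = conn.cursor()
                cur.execute("SHOW FULL PROCESSLIST")
                for row in cur.fetchall():
                    # columns: Id, User, Host, db, Command, Time, State, Info
                    pid, user, host, db, command, secs, state, info = row[:8]
                    if info and secs and secs > 3:
                        print(f"[{time.strftime('%H:%M:%S')}] {secs:>4}s {db}: {info[:120]}")
                cur.close()
                time.sleep(5)
        finally:
            conn.close()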

    Read the article

  • KeePass lost password and/or corruption due to Dropbox/KeePassX

    - by GummiV
    I started using KeePass about a month ago to hold my passwords and online account info. Everything was stored in a single .kdb file, protected only with a password. I'm using Windows 7. Now KeePass can't open my .kdb file, giving the error "Invalid/wrong key". I'm fairly confident I have the right password. Although I might have mixed up a few letters, I've tried about two dozen different combinations to minimize that possibility - but I can't rule it out. My guess, however, is that the .kdb file got corrupted, either due to Dropbox syncing (even though I'm only using it on one computer) or because I edited the file using KeePassX on Ubuntu (dual boot on the same computer, accessing a mounted Win7 NTFS partition), or possibly a combination of both.

    I have tried restoring older versions (even the original one) from Dropbox and trying out all possible passwords, without any luck (which does seem to rule out KeePassX as the culprit, since the oldest copies predate my editing the file from Ubuntu). I have tried opening the file with the "Repair KeePass Database file" option, which always gives "0xA Invalid/corrupt file structure" (the same error as for a wrong password).

    I was wondering if there is any way for me to salvage my hard-gathered data. I know that brute-force cracking is generally not feasible, but since I can remember probably more than half of the usernames/passwords, and since one of them comes up fairly often (my go-to password for trivial stuff), that might simplify the brute-force process to a doable time frame. Maybe the brute-force approach could incorporate the fact that I know the password length and what characters it's made from (if we assume corruption rather than a password blackout on my part). I could do some programming if there are any libraries or routines that I could use.

    Other people seem to have had a similar problem:

        http://forums.dropbox.com/topic.php?id=6199
        http://forums.dropbox.com/topic.php?id=9139
        http://www.keepassx.org/forum/viewtopic.php?t=1967&f=1

    So hopefully this question will become a suitable resource for people searching the web. Feel free to tell me if you think this should rather be a community wiki.
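
    Not part of the original question: a sketch of the guided brute force described above. It generates near-miss variants (adjacent swaps and single-character substitutions from the known character set) of the remembered master password; try_open is a hypothetical callback that would wrap whatever .kdb-reading library or command-line tool is available, and this only helps if the file itself is intact.

        # Hedged sketch of the guided brute force: enumerate "near-miss" variants of
        # the remembered master password and hand each to try_open(), a hypothetical
        # callback that must wrap whatever .kdb-reading library or CLI is available.
        def variants(password, alphabet):
            yield password
            # adjacent transpositions ("I might have mixed up a few letters")
            for i in range(len(password) - 1):
                swapped = list(password)
                swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
                yield "".join(swapped)
            # single-character substitutions drawn from the known character set
            for i, ch in enumerate(password):
                for repl in alphabet:
                    if repl != ch:
                        yield password[:i] + repl + password[i + 1:]

        def search(remembered, alphabet, try_open):
            tried = set()
            for candidate in variants(remembered, alphabet):
                if candidate in tried:
                    continue
                tried.add(candidate)
                if try_open(candidate):   # hypothetical: True if the .kdb opens
                    return candidate
            return None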

    Read the article

  • Solaris Mysql Failure and Unable to Restart

    - by Iscariot
    Environment: Solaris 10. This MySQL server has been up and running for 6 months. Today, all of a sudden, it crashed. When typing 'mysql' as a normal user it gives the error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock'

    When typing mysql as root it says "mysql: not found". The service tries to start MySQL; it stays up for 9-10 seconds and then restarts the process. Below are the application logs.

    Application-database-mysql_mysql-csk.log:

        [ May 30 22:37:52 Enabled. ]
        [ May 30 22:37:58 Rereading configuration. ]
        [ May 30 22:37:59 Executing start method ("/opt/coolstack/lib/svc/method/svc-cskmysql start") ]
        /opt/coolstack/mysql/bin/mysqld_safe --user=mysql --datadir=/dbpool1/data --pid-file=/dbpool1/data/database.soliaonline.com.pid
        [ May 30 22:37:59 Method "start" exited with status 0 ]
        [ May 30 22:38:13 Stopping because all processes in service exited. ]
        [ May 30 22:38:13 Executing stop method ("/opt/coolstack/lib/svc/method/svc-cskmysql stop") ]
        [ May 30 22:38:13 Method "stop" exited with status 0 ]
        [ May 30 22:38:13 Executing start method ("/opt/coolstack/lib/svc/method/svc-cskmysql start") ]
        /opt/coolstack/mysql/bin/mysqld_safe --user=mysql --datadir=/dbpool1/data --pid-file=/dbpool1/data/database.soliaonline.com.pid
        [ May 30 22:38:13 Method "start" exited with status 0 ]
        [ May 30 22:38:25 Stopping because all processes in service exited. ]
        [ May 30 22:38:25 Executing stop method ("/opt/coolstack/lib/svc/method/svc-cskmysql stop") ]
        [ May 30 22:38:25 Method "stop" exited with status 0 ]

    I am hoping someone might have run into this before and might know how to fix it.

    Read the article
