Search Results

Search found 53962 results on 2159 pages for 'error detection'.


  • Sun T5120: Memory issue with 8GB DIMMs, "Unsupported memory configuration"

    - by watain
    I'm trying to upgrade the RAM in a Sun T5120 server, replacing 2GB DIMMs (Sun P/N: 501-7953-01) with 8GB DIMMs (Sun P/N: 511-1262-01). When bringing up the host system, I get the following errors on the ILOM:
    -> show faulty
    Target                   | Property          | Value
    -------------------------+-------------------+---------------------------------------
    /SP/faultmgmt/0          | fru               | /SYS/MB
    /SP/faultmgmt/0/faults/0 | timestamp         | Dec 14 15:29:42
    /SP/faultmgmt/0/faults/0 | sp_detected_fault | /SYS/MB/CMP0/MCU3 Forced fail (IBIST)
    /SP/faultmgmt/0/faults/1 | timestamp         | Dec 14 15:29:28
    /SP/faultmgmt/0/faults/1 | sp_detected_fault | /SYS/MB/CMP0/MCU2 Forced fail (IBIST)
    /SP/faultmgmt/0/faults/2 | timestamp         | Dec 14 15:29:13
    /SP/faultmgmt/0/faults/2 | sp_detected_fault | /SYS/MB/CMP0/MCU1 Forced fail (IBIST)
    /SP/faultmgmt/0/faults/3 | timestamp         | Dec 14 15:28:59
    /SP/faultmgmt/0/faults/3 | sp_detected_fault | /SYS/MB/CMP0/MCU0 Forced fail (IBIST)
    /SP/faultmgmt/1          | fru               | /SYS
    /SP/faultmgmt/1/faults/0 | timestamp         | Dec 14 15:29:42
    /SP/faultmgmt/1/faults/0 | sp_detected_fault | Dec 14 15:29:42 ERROR: Unsupported memory configuration
    As you can see, the only error message I get is "Unsupported memory configuration". Note that I'm absolutely sure that I placed the DIMMs in the correct slots. Might this issue be related to the voltage of the DIMMs? Any suggestions on how to troubleshoot this issue? This issue seems to be similar to the one explained at "Inserted disabled" while upgrading Sun Sparc t5120 memory. However, the given link http://docs.sun.com/source/820-4445-10/chapter1.html seems to point to a nonexistent page...

    Read the article

  • VirtualBox - differencing disk based on another differencing disk

    - by Klinki
    I'm trying to create a differencing image based on another differencing image in VirtualBox 4.2.18. The official documentation says it should be possible: http://www.virtualbox.org/manual/ch05.html#diffimages Basically I want to achieve this drive hierarchy:
    + immutable image with Debian and all software installed
    +---- differencing image with specific configuration, autoreset=off, readonly
    +-------- differencing image with autoreset=on
    +---- another differencing image for a different virtual machine
    +-------- differencing image with autoreset=on
    I successfully created a differencing image based on a differencing image, but I'm not able to connect it to a virtual machine :( It always shows the error: Failed to open the hard disk .... cannot register hard disk ... because hard disk with UUID ... already exists Here is a screenshot of the Virtual Media Manager and the error dialog Virtual Media Manager Window screenshot What is very strange is that the new differencing image (tempdrive.vdi) doesn't have an Actual Size of 0. I wasn't able to connect it, but still, it has 36KB of data on it... This is very similar to this older question: How to create a chained differencing disk of another differencing disk in Virtual Box? but the suggested solution no longer works in VirtualBox 4.2.18, so I posted it as a new question. (The limit for posting links and screenshots is quite annoying..)
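
    A hedged sketch of one way past the "already exists" error, assuming the UUID clash is between tempdrive.vdi and an image already known to the media registry (file names, VM name and controller name below are placeholders, not taken from the question):

      # Give the new differencing image a fresh UUID so the registry will accept it
      VBoxManage internalcommands sethduuid /path/to/tempdrive.vdi
      # Attach it to the VM from the command line rather than the GUI
      VBoxManage storageattach "MyVM" --storagectl "SATA" --port 0 --device 0 --type hdd --medium /path/to/tempdrive.vdi

    The 36KB of "actual size" is likely just the image header and block map of a freshly created VDI, not user data.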

    Read the article

  • Apache ProxyPass with SSL

    - by BBonifield
    I have a QA setup that consists of multiple internal development servers and one world-accessible provisioning machine that is set up to proxy-pass the web traffic. Everything works fine for non-SSL requests, but I'm having a hard time getting the SSL logic working as well. Here are a few example vhost blocks. <VirtualHost 192.168.168.101:443> ProxyPreserveHost On SSLProxyEngine On ProxyPass / https://192.168.168.111/ ServerName dev1.site.com </VirtualHost> <VirtualHost 192.168.168.101:80> ProxyPreserveHost On ProxyPass / http://192.168.168.111/ ServerName dev1.site.com </VirtualHost> <VirtualHost 192.168.168.101:443> ProxyPreserveHost On SSLProxyEngine On ProxyPass / https://192.168.168.111/ ServerName dev2.site.com </VirtualHost> <VirtualHost 192.168.168.101:80> ProxyPreserveHost On ProxyPass / http://192.168.168.111/ ServerName dev2.site.com </VirtualHost> I end up seeing the following error in the provisioner's error log. [Fri Jan 28 12:50:59 2011] [warn] [client 1.2.3.4] proxy: no HTTP 0.9 request (with no host line) on incoming request and preserve host set forcing hostname to be dev1.site.com for uri / As well as the following entry in the destination QA machine's access log. 192.168.168.101 - - [22/Feb/2011:08:34:56 -0600] "\x16\x03\x01 / HTTP/1.1" 301 326 "-" "-"
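
    The "\x16\x03\x01" in the backend access log is a raw TLS ClientHello arriving on a plain-HTTP listener, which suggests the front-end :443 vhosts are not terminating SSL themselves before proxying. A hedged sketch of what one SSL vhost could look like, assuming name-based SSL vhosts are acceptable here (certificate paths are placeholders):

      NameVirtualHost 192.168.168.101:443
      <VirtualHost 192.168.168.101:443>
          ServerName dev1.site.com
          SSLEngine On
          SSLCertificateFile    /etc/apache2/ssl/dev1.site.com.crt
          SSLCertificateKeyFile /etc/apache2/ssl/dev1.site.com.key
          SSLProxyEngine On
          ProxyPreserveHost On
          ProxyPass / https://192.168.168.111/
          ProxyPassReverse / https://192.168.168.111/
      </VirtualHost>

    Note that serving dev1 and dev2 with different certificates on the same IP and port requires SNI-capable clients (or a wildcard/SAN certificate).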

    Read the article

  • SharePoint workflow "Failed on Start" only when PowerShell import script is called from Task Scheduler

    - by Matt Keller
    I created a simple PowerShell script that takes an XML file in a local directory on our SharePoint server and imports it into a specific SharePoint form library. (A content-management-enabled library, if that makes any difference.) This script works flawlessly if I run it from the PowerShell command line manually. I call it like such: ".\script_name.ps1". It completes without error and the item is imported into the form library successfully. The workflow begins on the item and everything is happy dandy. However, I run into issues when I set up a scheduled task using Windows Server 2008 R2's Task Scheduler. The task runs the script without error and it does actually import the XML into the form library. It looks perfectly normal, just as if I had run the script manually. However, after about 10 or 20 minutes the workflow status for that item changes from "In progress" to "Failed on Start (Retrying)". The scheduled task in question is a basic task and has only one action. (Start a program) The "Program/script" box is set to "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" and the "Add arguments" box is set to the path of the actual ps1 script. (C:\scripts\sharepoint_import.ps1) I've tried running the task as various users. I've also tried with and without the "Run with highest privileges" check box. Nothing seems to work. For reference, here is the script I am using to import items into the form library.
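
    Two things may be worth ruling out, sketched under the assumption that the script only adds the file and relies on the workflow's auto-start setting: declarative workflows do not auto-start when the item is created by the SharePoint system account, so the identity the task runs under matters; and the scheduler may launch PowerShell without the profile that loads the SharePoint snap-in. A hypothetical task action that makes both explicit:

      Program/script: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
      Add arguments:  -NoProfile -ExecutionPolicy Bypass -Command "Add-PSSnapin Microsoft.SharePoint.PowerShell; & 'C:\scripts\sharepoint_import.ps1'"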

    Read the article

  • How do I configure permissions for a cluster share using Powershell on 2008?

    - by Andrew J. Brehm
    I have a cluster resource of type "file share", but when I try to configure the "security" parameter I get the following error (excerpt): Set-ClusterParameter : Parameter 'security' does not exist on the cluster object Using cluster.exe I get a better result, namely the usual silence that indicates the command worked. But when I check in Failover Cluster Manager the permissions have not changed. In Server 2003 the cluster.exe method worked. Any ideas? Update: Entire command and error. PS C:\> $resource=get-clusterresource testshare PS C:\> $resource Name State Group ResourceType ---- ----- ----- ------------ testshare Offline Test File Share PS C:\> $resource|set-clusterparameter security "domain\account,grant,f" Set-ClusterParameter : Parameter 'security' does not exist on the cluster object 'testshare'. If you are trying to update an existing parameter, please make sure the parameter name is specified correctly. You can check for the current parameters by passing the .NET object received from the appropriate Get-Cluster* cmdlet to "| Get-ClusterParameter". If you are trying to update a common property on the cluster object, you should set the property directly on the .NET object received by the appropriate Get-Cluster* cmdlet. You can check for the current common properties by passing the .NET object received from the appropriate Get-Cluster* cmdlet to "| fl *". If you are trying to create a new unknown parameter, please use -Create with this Set-ClusterParameter cmdlet. At line:1 char:31 + $resource|set-clusterparameter <<<< security "domain\account,grant,f" + CategoryInfo : NotSpecified: (:) [Set-ClusterParameter], ClusterCmdletException + FullyQualifiedErrorId : Set-ClusterParameter,Microsoft.FailoverClusters.PowerShell.SetClusterParameterCommand
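
    Following the hint at the end of that error text, a hedged sketch: on 2008 the security parameter may have to be created before it can be set (the value format is copied from the question; whether the ACL then shows up in Failover Cluster Manager still needs to be verified):

      PS C:\> $resource = Get-ClusterResource testshare
      PS C:\> $resource | Set-ClusterParameter -Name security -Value "domain\account,grant,f" -Create
      PS C:\> $resource | Get-ClusterParameter security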

    Read the article

  • Exchange server intermittently not receiving or delivering emails to a few addresses?

    - by Gary Willoughby
    This is a strange problem. We are using an Exchange 2007 server to handle the emails to and from the company. There are two main problems which are probably related. None of our mails sent to one single customer are ever received. When we send any type of mail to one particular customer, they never get it. We have confirmed the address and tried to send more to other mail addresses on the same domain and they still don't receive it. No error (email or otherwise) is ever issued. (Domain related? Blacklisted?) Sometimes (intermittently) a mail sent to our company (can be any address on our domain) is never received. I tried this the other day from home and sent a mail to my work address. It was never received. But then a day later i sent another and it was received fine (so the mail address is fine). No error (email or otherwise) is ever issued. Any ideas where to start looking for causes?
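
    A hedged starting point for narrowing this down, assuming the Exchange 2007 Management Shell is available on the server (addresses and the time window are placeholders): message tracking shows whether the missing mail ever reached or left the transport service, and with which result.

      Get-MessageTrackingLog -Sender "me@ourcompany.com" -Recipients "contact@customer.com" -Start (Get-Date).AddDays(-7) -ResultSize Unlimited | Format-Table Timestamp,EventId,Source,RecipientStatus -AutoSize
      Get-MessageTrackingLog -Recipients "me@ourcompany.com" -Start (Get-Date).AddDays(-7) -ResultSize Unlimited | Format-Table Timestamp,EventId,Source,Sender -AutoSize

    If outbound mail to that one domain leaves with EventId SEND but never arrives, checking whether the sending IP is on a public blacklist (and what the customer's own gateway logs say) would be a reasonable next step.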

    Read the article

  • Nginx Not Passing URL Parameters

    - by jmccartie
    Messing around with Nginx ... for some reason, it looks like none of my URL parameters are being passed. My homepage loads fine, but a URL like "http://mysite.com/more.php?id=101" throws errors, saying that the ID is an undefined index. I'm assuming this is something basic I'm missing in a conf file. Some info: conf.d/virtual.conf server { listen 80; server_name dev.mysite.com; index index.php; root /var/www/dev.mysite.com_html; location / { root /var/www/dev.mysite.com_html; } location ~ \.php(.*)$ { fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME /var/www/wap/dev.mysite.com_html/$fastcgi_script_name; fastcgi_index index.php; include /etc/nginx/fastcgi_params; } } Error log: 2009/06/22 11:44:21 [notice] 16319#0: start worker process 16322 2009/06/22 11:44:28 [error] 16320#0: *1 FastCGI sent in stderr: "PHP Notice: Undefined index: id in /var/www/dev.mysite.com_html/more.php on line 10 Thanks in advance.
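
    If more.php itself executes but $_GET comes up empty, the query string is most likely not being handed to FastCGI. A hedged variant of the PHP location block, assuming the fastcgi_params file on this box does not already set QUERY_STRING (and using $document_root so the script path follows the declared root):

      location ~ \.php$ {
          include /etc/nginx/fastcgi_params;
          fastcgi_param QUERY_STRING    $query_string;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_index index.php;
          fastcgi_pass  127.0.0.1:9000;
      }

    It is also worth double-checking the original hard-coded SCRIPT_FILENAME: the question's config points at /var/www/wap/dev.mysite.com_html while root is /var/www/dev.mysite.com_html.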

    Read the article

  • Sprite animation in OpenGL - Some frames are being skipped

    - by Sid
    Earlier I was facing problems implementing sprite animation in OpenGL ES. That has now been sorted out. But the problem I am facing now is that some of my frames are skipped when a bullet (a circle) strikes the sprite. What I need: the sprite animation should stop at the last frame without skipping any frame. What I did: a collision detection function, which is working properly. PS: Everything is working fine, but I want to implement the animation in OpenGL ONLY. Canvas won't work in my case. EDIT: My sprite sheet. Consider the animation from left to right and then from top to bottom. (An image of the sprite sheet was included here for a better understanding.)
    class FragileSquare{
        FloatBuffer fVertexBuffer, mTextureBuffer;
        ByteBuffer mColorBuff;
        ByteBuffer mIndexBuff;
        int[] textures = new int[1];
        public boolean beingHitFromBall = false;
        int numberSprites = 20;
        int columnInt = 4;        //number of columns as int
        float columnFloat = 4.0f; //number of columns as float
        float rowFloat = 5.0f;
        int oldIdx;

        public FragileSquare() {
            // TODO Auto-generated constructor stub
            float vertices [] = {-1.0f, 1.0f,  //byte index 0
                                  1.0f, 1.0f,  //byte index 1
                                 //byte index 2
                                 -1.0f, -1.0f,
                                  1.0f, -1.0f}; //byte index 3
            float textureCoord[] = { 0.0f,0.0f, 0.25f,0.0f, 0.0f,0.20f, 0.25f,0.20f };
            byte indices[] = {0, 1, 2, 1, 2, 3 };
            ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4*2 * 4); // 4 vertices, 2 co-ordinates(x,y), 4 for converting in float
            byteBuffer.order(ByteOrder.nativeOrder());
            fVertexBuffer = byteBuffer.asFloatBuffer();
            fVertexBuffer.put(vertices);
            fVertexBuffer.position(0);
            ByteBuffer byteBuffer2 = ByteBuffer.allocateDirect(textureCoord.length * 4);
            byteBuffer2.order(ByteOrder.nativeOrder());
            mTextureBuffer = byteBuffer2.asFloatBuffer();
            mTextureBuffer.put(textureCoord);
            mTextureBuffer.position(0);
        }

        public void draw(GL10 gl){
            gl.glFrontFace(GL11.GL_CW);
            gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glVertexPointer(1,GL10.GL_FLOAT, 0, fVertexBuffer);
            gl.glEnable(GL10.GL_TEXTURE_2D);
            if(MyRender.flag2==1){
                /** Collision has taken place*/
                int idx = oldIdx==(numberSprites-1) ? (numberSprites-1)
                        : (int)((System.currentTimeMillis()%(200*numberSprites))/200);
                gl.glMatrixMode(GL10.GL_TEXTURE);
                gl.glTranslatef((idx%columnInt)/columnFloat, (idx/columnInt)/rowFloat, 0);
                gl.glMatrixMode(GL10.GL_MODELVIEW);
                oldIdx = idx;
            }
            gl.glEnable(GL10.GL_BLEND);
            gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
            gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);        //4
            gl.glTexCoordPointer(2, GL10.GL_FLOAT,0, mTextureBuffer); //5
            gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
            gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);            //7
            gl.glFrontFace(GL11.GL_CCW);
            gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
            gl.glMatrixMode(GL10.GL_TEXTURE);
            gl.glLoadIdentity();
            gl.glMatrixMode(GL10.GL_MODELVIEW);
        }

        public void loadFragileTexture(GL10 gl, Context context, int resource) {
            Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resource);
            gl.glGenTextures(1, textures, 0);
            gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
            bitmap.recycle();
        }
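
    For what it's worth, the frame index above is phase-locked to the wall clock (currentTimeMillis() % (200*numberSprites)), so the animation joins the cycle wherever the clock happens to be and appears to skip frames, and oldIdx only freezes it once that wandering index happens to land on the last frame. A hedged sketch of an alternative, assuming 200 ms per frame counted from the moment the collision is first seen (hitStartTime is a new, assumed field, set where flag2 becomes 1):

      // inside draw(), replacing the idx calculation; hitStartTime is assumed to be
      // set to System.currentTimeMillis() when the collision is first detected
      long elapsed = System.currentTimeMillis() - hitStartTime;
      int idx = (int) (elapsed / 200);   // 200 ms per frame, starting at frame 0
      if (idx > numberSprites - 1) {
          idx = numberSprites - 1;       // clamp: hold the last frame instead of wrapping
      }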

    Read the article

  • Failed to su after making a chroot jail

    - by arepo21
    On a 64-bit CentOS host I am using the script make_chroot_jail.sh to put a user in a jail, not permitting it to see anything except its home at /home/jail/home/user1. I did it by typing this: sudo ./make_chroot_jail.sh user1 Afterwards, when trying to connect to user1, at first I was getting an error like: /bin/su: user guest does not exist I fixed this by copying some missing libraries: sudo cp /lib64/libnss_compat.so.2 /lib64/libnss_files.so.2 /lib64/libnss_dns.so.2 /lib64/libxcrypt.so.2 /home/jail/lib64/ sudo cp -r /lib64/security/ /home/jail/lib64/ But now, when trying to connect to user1 by typing su user1 and then typing its password, I am getting this error: could not open session So the question is how to connect to user1 in this situation? P.S. Here are the permissions of some files, this might be helpful in order to provide a solution: -rwsr-xr-x 1 root root /home/jail/bin/su drwxr-xr-x 4 root root /home/jail/etc -rw-r--r-- 1 root root /home/jail/etc/pam.d/su -rw-r--r-- 1 root root /home/jail/etc/passwd -rw------- 1 root root /home/jail/etc/shadow UPDATE1 After some modifications I managed to connect to user1, but the session closes immediately! I guess this is a PAM issue, however I can't find a way to fix it. Here is the log entry for the close action from /var/log/secure: Oct 6 15:19:42 localhost su: pam_unix(su:session): session closed for user user1 What makes the session exit immediately after launching?
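
    "could not open session" from su usually means the PAM session stack inside the jail is missing something it needs. A hedged checklist, assuming the jail lives in /home/jail and that copying the host's PAM configuration into it is acceptable:

      # PAM configuration and core PAM libraries the su session stack expects
      sudo cp -r /etc/pam.d /home/jail/etc/
      sudo cp /etc/login.defs /home/jail/etc/
      sudo cp /lib64/libpam.so.0 /lib64/libpam_misc.so.0 /home/jail/lib64/
      # Minimal device nodes inside the jail
      sudo mkdir -p /home/jail/dev
      sudo mknod -m 666 /home/jail/dev/null c 1 3
      sudo mknod -m 666 /home/jail/dev/tty  c 5 0
      # Watch which file the PAM stack actually fails to open
      sudo strace -f -e trace=open chroot /home/jail /bin/su - user1 2>&1 | grep -i 'no such'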

    Read the article

  • Apache with PHP FastCGI keeps going down

    - by Josh Nankin
    I have an apache2 server configured with MPM worker and php fast cgi. Lately the apache logs have been telling me that MaxClients is being reached frequently, even though it's already pretty high. My server is now constantly going down, and I see a bunch of lines like this in the log: [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: comm with (dynamic) server "/var/local/fcgi/php-cgi-wrapper.fcgi" aborted: (first read) idle timeout (20 sec) [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: incomplete headers (0 bytes) received from server "/var/local/fcgi/php-cgi-wrapper.fcgi" I can see that my php-cgi processes are pretty large (about 70mb on average). Here's my apache configuration for MPM worker: KeepAlive ON KeepAliveTimeout 2 <IfModule mpm_worker_module> StartServers 5 MinSpareThreads 10 MaxSpareThreads 10 ThreadLimit 64 ThreadsPerChild 10 MaxClients 20 MaxRequestsPerChild 2000 </IfModule> Heres my fastcgi apache configuration: <IfModule mod_fastcgi.c> # One shared PHP-managed fastcgi for all sites Alias /fcgi /var/local/fcgi # IMPORTANT: without this we get more than one instance # of our wrapper, which itself spawns 20 PHP processes, so # that would be Bad (tm) FastCgiConfig -idle-timeout 20 -maxClassProcesses 1 <Directory /var/local/fcgi> # Use the + so we don't clobber other options that # may be needed. You might want FollowSymLinks here Options +ExecCGI </Directory> AddType application/x-httpd-php5 .php AddHandler fastcgi-script .fcgi Action application/x-httpd-php5 /fcgi/php-cgi-wrapper.fcgi </IfModule> Here's my fastcgi wrapper: #!/bin/sh PHPRC="/etc/php5/apache2" export PHPRC PHP_FCGI_CHILDREN=8 export PHP_FCGI_CHILDREN exec /usr/bin/php-cgi Any help would be very very much appreciated!
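
    Two capacity numbers in the configuration above may explain the symptoms: with worker MPM, MaxClients 20 and ThreadsPerChild 10 allow only two httpd processes, and PHP_FCGI_CHILDREN=8 caps PHP at eight concurrent requests, so anything slower than the 20-second idle timeout backs the whole server up. A hedged sketch of roomier values, assuming the box has the memory for them (roughly 70 MB per PHP child, so size PHP_FCGI_CHILDREN accordingly):

      <IfModule mpm_worker_module>
          StartServers          4
          ServerLimit          16
          ThreadsPerChild      25
          # MaxClients = ServerLimit x ThreadsPerChild
          MaxClients          400
          MinSpareThreads      25
          MaxSpareThreads      75
          MaxRequestsPerChild 2000
      </IfModule>
      # Give slow PHP requests more slack before FastCGI gives up on them
      FastCgiConfig -idle-timeout 120 -maxClassProcesses 1

    and, in the wrapper, something like PHP_FCGI_CHILDREN=16 if memory allows.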

    Read the article

  • How do you setup FTP with IIS Manager Users in an NLB environment with shared IIS configs?

    - by William Jens
    I've setup a 2 node NLB cluster and used the following to share IIS configs between them. http://blogs.technet.com/b/meamcs/archive/2012/05/30/configuring-iis-7-5-shared-configuration.aspx The IIS configs and content is located on a network share via a UNC path. This works - updating IIS settings on one node, is visible in another node and my website works on the individual nodes and the cluster as whole. I'm able to setup an FTP site and successfully connect with my Windows login. However, I want to use IIS Manager Authentication as defined in: http://www.iis.net/learn/publish/using-the-ftp-service/configure-ftp-with-iis-manager-authentication-in-iis-7 I've tried using "Network Service" with the FTP COM object as well as a dedicated user account that exists on all three hosts, but every time I try to login with an IIS user I get something like the following: IISWMSVC_AUTHENTICATION_UNABLE_TO_READ_CONFIG An unexpected error occurred while retrieving the authentication information. Exception:System.Runtime.InteropServices.COMException (0x8007052E): Filename: Error: at Microsoft.Web.Administration.Interop.AppHostWritableAdminManager.GetAdminSection(String bstrSectionName, String bstrSectionPath) at Microsoft.Web.Administration.Configuration.GetSectionInternal(ConfigurationSection section, String sectionPath, String locationPath) at Microsoft.Web.Management.Server.ConfigurationAuthenticationProvider.GetSection(ServerManager serverManager) Process:dllhost User=NT AUTHORITY\NETWORK SERVICE Can anyone point me in the right direction here?

    Read the article

  • Can't make Dovecot communicate with Postfix using SASL (warning: SASL: Connect to private/auth failed: No such file or directory)

    - by Fred Rocha
    Solved. I will leave this as a reference for other people, as I have seen this error reported often enough online. I had to change the path smtpd_sasl_path = private/auth in my /etc/postfix/main.cf to relative, instead of absolute. This is because in Debian Postfix runs chrooted (and how does this affect the path structure?! Anyone?) -- I am trying to get Dovecot to communicate with Postfix for SMTP support via SASL. The master plan is to be able to host multiple e-mail accounts on my (Debian Lenny, 64-bit) server, using virtual users. Whenever I test my current configuration, by running telnet server-IP smtp I get the following error in mail.log warning: SASL: Connect to /var/spool/postfix/private/auth failed: No such file or directory Now, Dovecot is supposed to create the auth socket file, yet it doesn't. I have given the right privileges to the directory private, and even tried creating an auth file manually. The output of postconf -a is cyrus dovecot Am I correct in assuming from this that the package was compiled with SASL support? My dovecot.conf also holds client { path = /var/spool/postfix/private/auth mode = 0660 user = postfix group = postfix } I have tried every solution out there, and am pretty much desperate after a full day of struggling with the issue. Can anybody help me, pretty please?
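
    For reference, a hedged sketch of the matching pair of settings described above, assuming Debian's chrooted Postfix (so the smtpd side uses a path relative to /var/spool/postfix) and Dovecot 1.x-style configuration:

      # /etc/postfix/main.cf
      smtpd_sasl_type = dovecot
      smtpd_sasl_path = private/auth
      smtpd_sasl_auth_enable = yes

      # /etc/dovecot/dovecot.conf, inside auth default { ... }
      socket listen {
        client {
          path = /var/spool/postfix/private/auth
          mode = 0660
          user = postfix
          group = postfix
        }
      }

    Dovecot only creates the socket when this client block sits inside an auth section and the dovecot service is restarted, which is one common reason the file never appears.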

    Read the article

  • Google-Bot fell in love with my 404-page

    - by 32bitfloat
    Every day my access log looks something like this: 66.249.78.140 - - [21/Oct/2013:14:37:00 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 66.249.78.140 - - [21/Oct/2013:14:37:01 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 66.249.78.140 - - [21/Oct/2013:14:37:01 +0200] "GET /vuqffxiyupdh.html HTTP/1.1" 404 1189 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" or this 66.249.78.140 - - [20/Oct/2013:09:25:29 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 66.249.75.62 - - [20/Oct/2013:09:25:30 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 66.249.78.140 - - [20/Oct/2013:09:25:30 +0200] "GET /zjtrtxnsh.html HTTP/1.1" 404 1186 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" The bot calls robots.txt twice and after that tries to access a file (zjtrtxnsh.html, vuqffxiyupdh.html, ...) which cannot exist and must return a 404 error. The same procedure every day, just the nonexistent HTML filename changes. The content of my robots.txt: User-agent: * Disallow: /backend Sitemap: http://mysitesname.de/sitemap.xml The sitemap.xml is readable and valid, so there seems to be no reason why the bot should want to force a 404 error. How should I interpret this behaviour? Does it point to a mistake I've made, or should I ignore it?

    Read the article

  • Why is Apache serving the default vhost?

    - by Matt
    I keep adding more vhosts and enabling them, but all the sites always go to the default vhost in sites-available. Here is roughly what the default looks like, with me only changing the IP for security reasons <VirtualHost 167.889.88.88:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> and here is my other one, which I named some-site.net <VirtualHost *:80> ServerName some-site.net DocumentRoot "/var/www/vhosts/somesite.com/http/" <Directory "/var/www/vhosts/somesite.com/http/"> AllowOverride all Options -MultiViews </Directory> </VirtualHost> and I enabled it with this command sudo a2ensite some-site.net Enabling site some-site.net. Run '/etc/init.d/apache2 reload' to activate new configuration! then I reloaded /etc/init.d/apache2 reload * Reloading web server config apache2 ...done. but when I visit the URL some-site.net I get the index page that is for the default vhost...what am I doing wrong?
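
    One hedged explanation that fits these two blocks: Apache only does name-based selection among vhosts that share the same address:port, and an exact-IP match (167.889.88.88:80) is chosen over a wildcard (*:80), so requests arriving on that IP never reach the some-site.net block. A sketch, assuming name-based hosting on all addresses is acceptable:

      # e.g. in /etc/apache2/ports.conf
      NameVirtualHost *:80

      # default vhost and some-site.net both declared on *:80
      <VirtualHost *:80>
          ServerName some-site.net
          DocumentRoot /var/www/vhosts/somesite.com/http
      </VirtualHost>

    (Alternatively, keep the literal IP but use it consistently in every vhost.) After reloading, apache2ctl -S shows which vhost answers for each name.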

    Read the article

  • MySQL problem with many concurrent connections

    - by user48303
    Hi, here's a six-core machine with 32 GB RAM. I've installed MySQL 5.1.47 (backport). The config is nearly standard, except max_connections, which is set to 2000. On the other hand there is PHP 5.3/FastCGI on nginx. There is a very simple PHP application which should be served. Nginx can handle thousands of requests in parallel on this machine. This application accesses MySQL via mysqli. When using non-persistent connections in mysqli there is a problem when reaching 100 concurrent connections. [error] 14074#0: *296 FastCGI sent in stderr: "PHP Warning: mysqli::mysqli(): [2002] Resource temporarily unavailable (trying to connect via unix:///tmp/mysqld.sock) in /var/www/libs/db.php on line 7 I've no idea how to solve this. Connecting to MySQL via TCP is terribly slow. The interesting thing is, when using persistent connections (add 'p:' to the hostname in mysqli) the first 5,000-10,000 requests fail with the same error as above until max connections (from the webserver, set to 1500) is reached. After the first requests MySQL keeps its 1500 open connections and all is fine, so that I can make my 1500 concurrent requests. Huh? Is it possible that this is a problem with PHP FastCGI?
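
    "Resource temporarily unavailable" on the UNIX socket at around a hundred simultaneous connects often points at the listen backlog rather than max_connections. A hedged pair of knobs to try, assuming MySQL 5.1 defaults and a stock Linux somaxconn of 128 (the values are illustrative):

      # /etc/mysql/my.cnf, [mysqld] section: queue for not-yet-accepted connections
      back_log = 512

      # kernel side: the backlog is also capped by net.core.somaxconn
      sysctl -w net.core.somaxconn=512

    That would also explain why persistent connections behave better once the initial burst is over: the 1500 connections are opened once and then reused, so the accept queue stops being the bottleneck.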

    Read the article

  • What permission(s) does an application pool identity require to manage other application pools?

    - by Mr Shoubs
    I have a web site (used to manage various parts of our software) that needs the permissions required to start/stop other application pools. I've created a user and set the app pool identity to custom, however the web app still can't start/stop the app pools. I get the following error: System.UnauthorizedAccessException: Filename: redirection.config Error: Cannot read configuration file due to insufficient permissions at Microsoft.Web.Administration.Interop.AppHostWritableAdminManager.GetAdminSection(String bstrSectionName, String bstrSectionPath) at Microsoft.Web.Administration.Configuration.GetSectionInternal(ConfigurationSection section, String sectionPath, String locationPath) at Microsoft.Web.Administration.ServerManager.get_ApplicationPoolsSection() at Microsoft.Web.Administration.ServerManager.get_ApplicationPools() Discussion here suggests setting the application pool to local system or administrator; this does work, but I don't want to do this for security reasons (external support will need access to this site). I did give the user higher permissions (as suggested here), starting by making it part of the local administrators group, but initially this didn't work, and giving the user read/write/mod permission on C:\Windows\System32\inetsrv\config also didn't work. I must have done something wrong, as local administrator now works, however this still isn't what I want. So can anyone suggest the permissions I need to add to this user, and how can I apply them? An answer to my problem (but a different question) is here, but to clarify, I think I need to give an individual user "IIS Runtime Operation Permissions"; does anyone know how to do this, if indeed these are the permissions I require?

    Read the article

  • VNC failure on Xen

    - by BCable
    The following config works and creates a good VM in Xen: # Kernel Setup kernel = "/boot/vmlinuz-2.6.18.8-xenU" # Memory memory = "256" # Disk disk = [ "file:/opt/xen/domains/110/sda1.img,sda1,w", "file:/opt/xen/domains/110/swap.img,sda2,w" ] # container name name = "110" hostname = "boo" # Networking vif = ["type=ieomu, bridge=xenbr0"] # VNC vnc = 1 #vfb = [ 'type=vnc,vncdisplay=2,vnclisten=0.0.0.0,vncpasswd=110' ] # Behavior Settings root = "/dev/sda1" extra = "fastboot" But when I uncomment the VFB line, I get the following error after it hangs for at least 30 seconds: [root@customer 110]# xm create boo.cfg Using config file "./boo.cfg". Error: Device 0 (vkbd) could not be connected. Hotplug scripts not working. Any ideas? Part two of this question: Sometimes it actually works, and a port is opened. When this happens, nmap shows the VNC ports open and I can connect via the VNC client, but it just hangs at "Connection established." and no VNC display shows up. I've tried multiple VNC clients (TightVNC, TightVNC Java Console, RealVNC), but they all fail to connect. Does VNC through Xen require X to be started in order to function? I was under the impression that it would show the console screen, so I'm confused as to why all these issues are occurring. Thanks!

    Read the article

  • Built local glibc, broke system, how do I ssh without parsing the .bashrc?

    - by Mikhail
    The cluster I am on had really old build tools and I needed to use CUDA5. I'm a pretty clever dude and I planned on building the necessary tools. So, I built a local copy of gcc, binutils, and glibc. Everything a CUDA5 could want. All builds finished without error, and I tested gcc and binutils. Everything was wonderful and I built and ran a few of the programs. I set up the LD_LIBRARY_PATHs in the .bashrc and logged back in, expecting a productive night ahead. To my horror I realized that everything is dynamically linked. Now I can't do simple commands like ls [ex@uid377 ~]$ ls ls: error while loading shared libraries: __vdso_time: invalid mode for dlopen(): Invalid argument and I can't do commands to fix the problem like rm or vim! Is there a way for me to ssh but also to ignore the .bashrc file? Any suggestions are much appreciated. This machine is obviously under-maintained and I don't know when I could have administrator support.
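
    On many distributions bash sources ~/.bashrc even for the non-interactive shell sshd starts, so the broken LD_LIBRARY_PATH is exported before any remote command runs; the saving grace is that a variable assignment placed in front of a command overrides it for that command only. A hedged sketch, assuming bash is the login shell and the paths are the usual ones:

      # Move the offending file aside with a clean LD_LIBRARY_PATH just for mv
      ssh user@cluster 'LD_LIBRARY_PATH= /bin/mv ~/.bashrc ~/.bashrc.broken'

      # Or get an interactive-ish shell that skips the startup files entirely
      ssh -t user@cluster 'LD_LIBRARY_PATH= /bin/bash --noprofile --norc'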

    Read the article

  • Windows XP corrupts registry every several hours

    - by Ilya Kazakevich
    There is a Dell XPS 400 with Windows Media Center installed. It is installed on RAID (Intel Matrix Storage), which is built into the chipset south bridge. The RAID has two 150 GB WDC drives connected as a mirror. All drivers and updates are installed (SP3 and so on). A week ago the PC changed its video mode to 256 colors (like a VESA mode) and after several moments I got a BSOD: c000021a: 0xc0000005 Dr. Watson did not create a dump although it is installed as the default debugger. After reboot it said that the config file is missing or corrupted. So, I booted to the recovery console and found that the registry file (config) was very small. I replaced it with one from a recovery point and Windows booted successfully. But after about 3 hrs it crashed again in the same way! I looked in Event Viewer: it said that Explorer.exe failed to open \global??\DLIAFS. I looked in WinObj, and found that it is a device. I set "deny from everyone" on this device's ACL, and after several hours my Windows crashed. I restored the registry, booted again, and there was no error about DLIAFS. I did a full chkdsk and it did not find anything bad. But I found an event about a paging error on \Harddrive1\D. I do not have a pagefile there, but I thought I should check my disk again. Unfortunately I can't use SMART tools for RAID, but I downloaded the latest software from Intel (it can do the same things the RAID BIOS can, but from Windows). It verified my disks, found some errors, fixed them, then I rebooted. And it crashed again. I am lost. What (except kernel debugging) could be done here? Thanks

    Read the article

  • Database problem with MS JET

    - by Zimmy-DUB-Zongy-Zong-DUBBY
    I am migrating a bunch of sites which each use an Access database (or whatever an MDB file is). If I try to load the site, I get the following error: Microsoft OLE DB Provider for SQL Server error '80004005' [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied If I rename the MDB file, I get a complaint that the file does not exist, which makes sense. If the file is named correctly, the site tries to load for about 30 seconds or so, and then just fails with the above message. During this waiting period, I can see a lock file being created (and then at some point removed). The MDB file and its parent dir have full permissions granted to all users. Given that the lock file is successfully created and removed, I don't think that this is a "real" permission issue. The OS is Windows Server 2003 SP2. I am not sure about much more detail on its config with regard to Access databases. I also don't know what version it is expected to be. VB code in question: set oConn=server.createobject("adodb.connection") DSNtemp="Provider=Microsoft.Jet.OLEDB.4.0;Data Source=D:\fullPathGoesHere\db\sitedb.mdb" oConn.Open DSNtemp

    Read the article

  • remove tasksel lamp-server

    - by RickyA
    I was tricked into running "sudo tasksel install lamp-server" on the wrong server (Ubuntu 10.04). Now I am stuck with a system where Apache won't start because of an "Address already in use: make_sock: could not bind to address 0.0.0.0:80" error. I now want to remove this task, but the documentation on that crappy tasksel says you can't use it to uninstall stuff (!!!???). My question is where can I see what packages it installed, and how can I get rid of a selection of them (apt-get?). I want to keep Apache, but MySQL, PHP and the other stuff can go... [edit] I managed to get rid of most of the LAMP stack. (/var/log/dpkg.log is useful for recently installed packages.) However it did something in a configuration somewhere, and now two Apache instances start at boot time. Killing the first one and starting a new one gets rid of the "could not bind to address..." error. Does anyone know where the startup of the first one is configured?
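
    A hedged sketch of the cleanup, assuming the LAMP task is still the most recent block in dpkg's log and that Apache should stay (package names vary between Ubuntu releases, so check the grep output before purging anything):

      # What did that tasksel run actually install?
      grep ' install ' /var/log/dpkg.log | awk '{print $4}' | sort -u

      # Purge the parts of the LAMP task that are not wanted, keeping apache2
      sudo apt-get purge mysql-server mysql-server-5.1 mysql-client-5.1 php5-mysql libapache2-mod-php5
      sudo apt-get autoremove --purge

      # Find out what is already sitting on port 80 at boot
      sudo netstat -ltnp | grep ':80'
      ls /etc/init.d/ /etc/rc2.d/ | grep -i apache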

    Read the article

  • Postfix does not work after setting server hostname from plesk

    - by Michael
    I have recently set my server hostname from localhost.localdomain to xx.mydomain.com from the plesk control panel. However after doing this change postfix has stopped working, I tried restarting and regenerating the config files but to no avail. I am not familiar with postfix but I believe there is a setting to be changed in main.cf. Here are the relevant errors I receive: postfix/postfix-script: starting the Postfix mail system postfix/master: daemon started -- version 2.8.4, configuration /etc/postfix postfix/cleanup: fatal: host/service localhost/12768 not found: Name or service not known postfix/pickup: warning: maildrop/E5559996219: error writing 4FA2C996217: queue file write error postfix/master: warning: process /usr/libexec/postfix/cleanup pid 15334 exit status 1 postfix/master: warning: /usr/libexec/postfix/cleanup: bad command startup -- throttling Any ideas? EDIT Setting it back to localhost.localdomain makes it work again. The only references to localhost/12768 I can find in main.cf are: smtpd_milters = inet:localhost:12768 non_smtpd_milters = inet:localhost:12768 Should something be changed here? These two lines stay the same when I change the hostname. EDIT If I comment out the two mail filter lines (the ones shown above) and restart, postfix works with the new hostname. However this is obviously not the ideal solution...
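
    The milter lines themselves look fine; "host/service localhost/12768 not found" suggests the machine can no longer resolve the name localhost after the hostname change (Plesk sometimes rewrites /etc/hosts when the hostname is set). A hedged sketch of two fixes, either of which should be enough (203.0.113.10 stands in for the server's real address):

      # /etc/hosts - keep the loopback line naming localhost, with the new hostname on its own line
      127.0.0.1      localhost localhost.localdomain
      203.0.113.10   xx.mydomain.com xx

      # or pin the milters to the loopback address instead of the name, in /etc/postfix/main.cf
      smtpd_milters = inet:127.0.0.1:12768
      non_smtpd_milters = inet:127.0.0.1:12768
      # then: postfix reload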

    Read the article

  • Linux Mounting Problem

    - by Sam
    I have an Iomega Network Attached Storage device on my Windows network. I am trying to use a clonezilla live USB flash drive to backup my netbook to my Iomega Network Attached Storage device. The clonezilla USB flash drive runs linux. I'm having trouble getting the Network Attached Storage unit to mount using the following command: mount -t cifs -o username="myUsername" //192.168.1.100/backup /home/partimg The response from linux is: [134.730738] CIFS VFS: cifs_mount failed w/return code = -6 retrying with upper case share name [134.788461] CIFS VFS: cifs_mount failed w/return code = -6 mount error(6): No such device or address Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) I also tried adding the following to my username: username="myUsername,domain=workgroup" but that did not change the error. I am able to ping the network attached storage unit from linux on my netbook. I also booted from a Slax Live USB Flash Drive and Slax auto-mounted my network attached storage unit via Samba. Unfortunately, I don't believe that I can run clonezilla from inside the Slax installation. Does anyone have any insight about what is wrong with my mount statement? Or is there something peculiar about Iomega drives which makes this impossible?
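
    mount error(6) ("No such device or address") usually means the share path was not found, rather than a credentials problem. A hedged way to see what the Iomega box actually exports and to retry with a couple of options older NAS firmware often needs (the password and share name are placeholders):

      # List the shares the NAS advertises
      smbclient -L 192.168.1.100 -U myUsername

      # Retry against the share name exactly as listed
      mount -t cifs //192.168.1.100/backup /home/partimg -o username=myUsername,password=secret,sec=ntlm,nounix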

    Read the article

  • VPN from Windows XP to OpenSwan: correct setup?

    - by Gnudiff
    Main question is what I am doing wrong in my OpenSwan or L2TP client setup? I am trying to create a Linux OpenSwan VPN connection from Windows XP machine, using preshared key and the builtin Windows XP L2TP IPsec option. I have followed the instructions in Linux Home networking Wiki for setting up OpenSwan and a guide to making it work with the Windows XP client, but am now stuck. The net setup is as follows: [my windows client, private IP A]<->[f/wall B]<-internet->[g/w X]<->[Linux OpenSwan server Y] A - private subnet /24 B - internet address X - internet address /24 Y - internet address on same subnet as X What I essentially want is for computer with A address to feel and work, as if it was in X subnet for purposes of outgoing and incoming TCP and UDP connections. My OpenSwan setup is as follows: /etc/ipsec.conf (AAA and YYY indicates ip address parts of A and Y addresses): conn net-to-net authby=secret left=B leftsubnet=AAA.AAA.AAA.0/24 leftnexthop=%defaultroute right=Y rightsubnet=YYY.YYY.YYY.0/24 rightnexthop=B auto=start the secret in /etc/ipsec.secrets is listed as: B Y : PSK "0xMysecretkey" where B & Y stand for respective IP adresses of gateway B and linux server Y My L2TP WinXP setup is: IP of destination: Y don't prompt for username security options: typical, require secured pass, don't require data encryption, IPSec PSK set to 0xMysecretkey networking options: VPN Type: L2TP IPSec VPN; TCPIP protocol (with automatic IP address assignment) and QOS packet schedulers enabled The error I get from Windows client is 789: "error during initial negotiation"
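
    For the built-in Windows XP client, the Openswan side normally has to accept L2TP/IPsec in transport mode (with an L2TP daemon such as xl2tpd behind it) rather than a net-to-net tunnel, which is one plausible reason for error 789 here. A hedged sketch of the IPsec half, with Y as in the question and everything else assumed:

      conn L2TP-PSK
          authby=secret
          type=transport
          pfs=no
          rekey=no
          keyingtries=3
          left=Y
          leftprotoport=17/1701
          right=%any
          rightprotoport=17/%any
          auto=add

      # /etc/ipsec.secrets - %any so any client address can use the PSK
      Y %any : PSK "0xMysecretkey"

    The L2TP/PPP layer (xl2tpd plus a ppp options file) would then hand the client an address from the X subnet, which is what makes machine A appear to live on that network.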

    Read the article
