Search Results

Search found 31421 results on 1257 pages for 'software performance'.


  • Why do some games randomly turn my screen a random solid color?

    - by Emlena.PhD
    When playing some games my computer will randomly hit an error that I cannot fix without turning it off and back on again. The screen changes to one solid color, which varies (off the top of my head I can remember seeing solid green, magenta, etc.), and the sound blares a single tone. The sound sometimes briefly restores, and I can still hear the game sounds and even hear and still be heard by people in my Mumble channel, but the screen doesn't right itself, so I'm still blind. What's more, this happens in some games but not in others, and only while the game is actually running, not while I'm still in the menu. However, it does happen if I'm AFK or idle while the game world is still rendering.

    Games where the error occurs: League of Legends, World of Warcraft, Trine, The Sims 2, Dungeon Defenders.

    Safe games (where it has never occurred): Tribes: Ascend, Star Wars: The Old Republic, Battlefield 3.

    So relatively older games cause the problem while newer games do not? I cannot predict when it will happen; it just seems random. However, if it happens and I keep playing the same game after a restart, it does appear to occur more frequently after the first time. But if I switch to a safe game it doesn't continue happening. Both of my RAM sticks appear fine: with their positions flipped, or with either one on its own, games still run and the computer still boots. I would suspect overheating, but then why not all games? Also, sometimes it happens immediately after I start playing, within seconds of the 3D world loading.

    I'm looking to upgrade very soon, so I want to figure out which component or software is fubar and replace/repair it. Any suggestions or recommendations of tools would be helpful. Below is some system information. DxDiag does not detect any problems.

        Operating System: Windows 7 Home Premium 64-bit (6.1, Build 7601) Service Pack 1 (7601.win7sp1_gdr.120305-1505)
        System Manufacturer: Gigabyte Technology Co., Ltd.
        System Model: EP45-UD3R
        BIOS: Award Modular BIOS v6.00PG
        Processor: Intel(R) Core(TM)2 Duo CPU E8500 @ 3.16GHz (2 CPUs), ~3.2GHz
        Memory: 4096MB RAM
        DirectX Version: DirectX 11
        DxDiag Version: 6.01.7601.17514 64bit Unicode
        Graphics card name: NVIDIA GeForce GTX 285
        Driver Version: 8.17.12.9610 (the error has occurred with several driver versions)
        Sound: no sound card; I've been using the motherboard's built-in sound
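    Since overheating is on the suspect list, logging temperatures right up to the moment of a freeze can confirm or rule it out. A minimal sketch, assuming OpenHardwareMonitor is running (it publishes its sensors through the root\OpenHardwareMonitor WMI namespace) and the third-party Python wmi package is installed; both are assumptions about tooling, not part of the original setup:

        import time
        import wmi  # third-party package: pip install wmi

        # OpenHardwareMonitor (assumed running) exposes its sensors via WMI.
        w = wmi.WMI(namespace=r"root\OpenHardwareMonitor")

        # Print all temperature sensors once a second; the last reading
        # before a freeze is then visible in the console or a log file.
        while True:
            for sensor in w.Sensor():
                if sensor.SensorType == "Temperature":
                    print(f"{sensor.Name}: {sensor.Value} C")
            print("-" * 30)
            time.sleep(1)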

    Read the article

  • Dangers of Running Computers w/o Air Conditioning

    - by Daniel Bingham
    I recently moved into an apartment without air conditioning. This is fine most of the time, as I am in upstate New York; it only gets above the high 70s during the hottest of the summer months, and when it does, I'm stubborn enough that I'll just deal with wearing minimal clothing around the house.

    However, I'm worried about my computers. I'm a software developer and gamer, so many of my machines are very high powered, and at least one of them is a server that must be left on 24/7 (not just a game server; it also serves multiple websites). I've never before had to worry about the heat too much, as I always lived in buildings with central air; the in-building temperature rarely got much above 70 F, and all of the machines I built had good enough air cooling that I never saw a problem. Now the temperature in the building is pushing 100 F, and I'm worried that the machines will not be able to keep themselves cool enough by simply blowing already-hot air over themselves.

    The hottest of them I've turned off. However, the server I cannot. It's an old Dell (not a custom build) that runs on a Pentium 4 (2.2GHz). It has only a single hard drive and integrated video, and it's not running any processor-intensive servers, just a basic LAMP stack. It used to run a MUD server, but that's off for now, so it should be idling most of the time. I haven't been able to find any built-in temperature sensors in the hardware, at least not any that the programs I've found in the Debian repository can read. And it's an inherited machine for which I do not have the full specs, so I don't know the tolerances anyway.

    How worried should I be about it melting down on me? How worried should I be about the hard drive melting or becoming corrupted? To generalize the question for other people: what are the safe temperature tolerances for most machines? How widely does that vary, and how does one go about determining when a machine is running too hot and needs to be shut down?
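    On the sensor question: even older ACPI-era hardware often exposes at least one thermal zone through sysfs, and drive temperature is usually readable from SMART attribute 194. A quick Python 3 sketch; it assumes smartmontools is installed and the drive is /dev/sda (both assumptions), and on very old kernels the same thermal data lives under /proc/acpi/thermal_zone instead:

        import glob
        import subprocess

        # ACPI thermal zones report millidegrees Celsius
        for zone in glob.glob("/sys/class/thermal/thermal_zone*/temp"):
            with open(zone) as f:
                print(zone, int(f.read()) / 1000.0, "C")

        # Drive temperature from SMART attribute 194 (run as root)
        out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "Temperature_Celsius" in line:
                print(line)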

    Read the article

  • SSL connection error during handshake on Windows Server 2008 R2

    - by Thomas
    I have a Windows 2008 R2 server that runs an HTTPS tunneling service. The software uses a certificate that is provided via the Windows certificate store; the certificate is located in the local computer's personal certificates. It supports server and client authentication with signing and key encipherment.

    Cert chain

    The certificate chain looks fine. It's a Thawte SSL123 certificate.

        Thawte Premium Server CA (SHA1) [e0 ab 05 94 20 72 54 93 05 60 62 02 36 70 f7 cd 2e fc 66 66]
        thawte Primary Root CA [1f a4 90 d1 d4 95 79 42 cd 23 54 5f 6e 82 3d 00 00 79 6e a2]
        Thawte DV SSL CA [3c a9 58 f3 e7 d6 83 7e 1c 1a cf 8b 0f 6a 2e 6d 48 7d 67 62]
        Server certificate

    Issues

    Most browsers accept the certificate without any warning, but IE 7 on Windows XP SP3 and Opera 12 on OS X just report a connection error. Opera complains:

        Secure connection: fatal error (552) https://www.example.com/
        Opera was not able to connect to the server, because the server does not communicate via any secure protocol known to Opera.

    A connection test using openssl s_client -connect www.example.com:443 -state says:

        CONNECTED(00000003)
        SSL_connect:before/connect initialization
        SSL_connect:SSLv2/v3 write client hello A
        52471:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:/SourceCache/OpenSSL098/OpenSSL098-35.1/src/ssl/s23_lib.c:182:

    ssldump -aAHd host www.example.com during curl https://www.example.com/ reports:

        New TCP connection #1: localhost(53302) <-> www.example.com(443)
        1 1  0.0235 (0.0235)  C>SV3.1(117)  Handshake ClientHello
          Version 3.1
          random[32]= 50 77 56 29 e8 23 82 3b 7f e0 ae 2d c1 31 cb ac 38 01 31 85 4f 91 39 c1 04 32 a6 68 25 cd a0 c1
          cipher suites
            Unknown value 0x39
            Unknown value 0x38
            Unknown value 0x35
            TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
            TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
            TLS_RSA_WITH_3DES_EDE_CBC_SHA
            Unknown value 0x33
            Unknown value 0x32
            Unknown value 0x2f
            Unknown value 0x9a
            Unknown value 0x99
            Unknown value 0x96
            TLS_RSA_WITH_RC4_128_SHA
            TLS_RSA_WITH_RC4_128_MD5
            TLS_DHE_RSA_WITH_DES_CBC_SHA
            TLS_DHE_DSS_WITH_DES_CBC_SHA
            TLS_RSA_WITH_DES_CBC_SHA
            TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
            TLS_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA
            TLS_RSA_EXPORT_WITH_DES40_CBC_SHA
            TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
            TLS_RSA_EXPORT_WITH_RC4_40_MD5
            Unknown value 0xff
          compression methods
            unknown value
            NULL
        1  0.0479 (0.0243)  S>C  TCP FIN
        1  0.0481 (0.0002)  C>S  TCP FIN

    Thawte provides two Java-based SSL checkers, the legacy Thawte SSL Certificate Installation Checker and the sslToolBox. Both validate the certificate under Windows XP but report connection errors under OS X and Windows 2008 R2.
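    The handshake dying immediately after the client hello suggests a protocol-version negotiation problem rather than a certificate problem. A diagnostic sketch that tries one TLS version at a time (www.example.com stands in for the real host, as above; requires Python 3.7+, and note that clients this old may only speak SSLv3/TLS 1.0, which modern OpenSSL builds often refuse to offer, in which case the attempt fails locally and is caught):

        import socket
        import ssl

        HOST = "www.example.com"  # placeholder, replace with the real host

        for name, ver in [("TLSv1", ssl.TLSVersion.TLSv1),
                          ("TLSv1.1", ssl.TLSVersion.TLSv1_1),
                          ("TLSv1.2", ssl.TLSVersion.TLSv1_2)]:
            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE     # we only care about the handshake
            ctx.minimum_version = ctx.maximum_version = ver
            try:
                with socket.create_connection((HOST, 443), timeout=5) as sock:
                    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                        print(name, "OK:", tls.version(), tls.cipher())
            except Exception as e:
                print(name, "failed:", e)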

    Read the article

  • "Error 1067: The process terminated unexpectedly" when trying to install MySQL on Win7 x64.

    - by Gravitas
    Hi, I've run into a brick wall trying to install MySQL v5.5 on my machine. My PC is Windows 7 x64, Enterprise edition. MySQL installs fine, but when I run the "MySQL Instance Configuration Wizard", it pauses forever on the step "Start Service" (I can let it run for 30 minutes with no response). If I go into Services, I see that the "MySQL" service hasn't started, and if I try to start it, it says "Windows could not start MySQL Service on Local Computer. Error 1067: The process terminated unexpectedly."

    I've tried the following:

      - Turning off the firewall.
      - Uninstalling all antivirus software.
      - Installing/reinstalling the 32-bit version of MySQL.
      - Installing/reinstalling the 64-bit version of MySQL.
      - Uninstalling, deleting the contents of "C:\program files\MySQL" and "C:\program files (x86)\MySQL", reinstalling.
      - Checking that there are no rogue services named MySQL???? (from a previous install).
      - Checking that port 3306 is not used by another program.
      - Changing the default port that MySQL uses.
      - Checking for "my.ini" and "my.ini.cnf" in "C:\windows" (nothing there, but that can cause a problem).
      - Running both the MySQL installer and the configuration wizard in "Administrator mode".
      - Turning off UAC.
      - Installing with defaults, not changing anything.
      - Rebooting my machine (about 6 reboots so far).
      - Opening up port 3306 in the firewall (both TCP and UDP, inbound and outbound).
      - Swearing at the klutz of a programmer who designed MySQL so you can't even install it (as if that would help!).

    My machine is working 100% in every other way. InfiniDB (a MySQL-compatible database) installs 100%, as do Visual Studio 2010, Microsoft SQL Server, etc. Your advice on how to work around this?

    p.s. Here is the screen it got stuck on for 15 minutes until I killed the process: [screenshot]

    Update 2010-12-20: Tried MySQL v5.1; it didn't work either. It's amazing: if you type "mysqld /?" or "mysqld -help", it doesn't give you any help. And if you try to restart the service manually, it doesn't display any error messages. Could it be any more unhelpful?

    Update 2010-12-21: Installed MySQL 6.0 alpha, and it worked. However, I'd rather not use an alpha release, given that the "stable" release is anything but :(

    Update 2010-12-21: Found http://dev.mysql.com/doc/refman/5.1/en/windows-troubleshooting.html, dealing with troubleshooting under Windows. Discovered that you can generate an error log if the service doesn't start; see http://dev.mysql.com/doc/refman/5.1/en/error-log.html
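    Following the troubleshooting page found in the last update, pointing the server at an explicit error log is the quickest way to see why it dies at startup. A sketch of the relevant my.ini lines; the log path is illustrative, not taken from the original setup:

        [mysqld]
        # Write startup errors somewhere easy to find; the path is an example.
        log-error="C:/mysql-error.log"

        # Alternatively, run the server once in a console window so errors
        # print directly instead of going through the service manager:
        #   mysqld --console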

    Read the article

  • Trying to connect phpMyAdmin to remote mySQL server ( 2002: can't connect )

    - by Malcolm Jones
    Trying to get phpMyAdmin to talk to a remote MySQL server. The config is below, and there is already a user set up in the MySQL DB able to log in from the specified host that PMA sits on. Hosting is provided by Rackspace (RightScale) and both cloud servers are behind the same firewall.

    config.inc.php:

        <?php
        $cfg['blowfish_secret'] = '';

        $i = 0;
        $i++;
        $cfg['Servers'][$i]['host']          = 'XX.XX.XX.XX'; // MySQL hostname or IP address
        $cfg['Servers'][$i]['port']          = '';       // MySQL port - leave blank for default port
        $cfg['Servers'][$i]['socket']        = '';       // Path to the socket - leave blank for default socket
        $cfg['Servers'][$i]['connect_type']  = 'tcp';    // How to connect to MySQL server ('tcp' or 'socket')
        $cfg['Servers'][$i]['extension']     = 'mysql';  // The php MySQL extension to use ('mysql' or 'mysqli')
        $cfg['Servers'][$i]['compress']      = FALSE;    // Use compressed protocol for the MySQL connection
                                                         // (requires PHP >= 4.3.0)
        $cfg['Servers'][$i]['controluser']   = '';       // MySQL control user settings (this user must have
        $cfg['Servers'][$i]['controlpass']   = '';       // read-only access to the "mysql/user" and "mysql/db"
                                                         // tables). The controluser is also used for all
                                                         // relational features (pmadb)
        $cfg['Servers'][$i]['auth_type']     = 'config'; // Authentication method (config, http or cookie based)?
        $cfg['Servers'][$i]['user']          = 'USERNAME'; // MySQL user
        $cfg['Servers'][$i]['password']      = 'PASSWORD'; // MySQL password (only needed with 'config' auth_type)
        $cfg['Servers'][$i]['only_db']       = '';       // If set to a db-name, only this db is displayed in the
                                                         // left frame. It may also be an array of db-names,
                                                         // where sorting order is relevant.
        $cfg['Servers'][$i]['hide_db']       = '';       // Database name to be hidden from listings
        $cfg['Servers'][$i]['verbose']       = '';       // Verbose name for this host - leave blank to show the hostname
        $cfg['Servers'][$i]['pmadb']         = '';       // Database used for Relation, Bookmark and PDF features
                                                         // (see scripts/create_tables.sql) - leave blank for no
                                                         // support. DEFAULT: 'phpmyadmin'
        $cfg['Servers'][$i]['bookmarktable'] = '';       // Bookmark table - leave blank for no bookmark support
                                                         // DEFAULT: 'pma_bookmark'
        $cfg['Servers'][$i]['relation']      = '';       // table to describe the relation between links (see doc)
                                                         // - leave blank for no relation-links support
                                                         // DEFAULT: 'pma_relation'
        $cfg['Servers'][$i]['table_info']    = '';       // table to describe the display fields - leave blank
                                                         // for no display fields support. DEFAULT: 'pma_table_info'
        $cfg['Servers'][$i]['table_coords']  = '';       // table to describe the tables' position for the PDF
                                                         // schema - leave blank for no PDF schema support
                                                         // DEFAULT: 'pma_table_coords'
        $cfg['Servers'][$i]['pdf_pages']     = '';       // table to describe pages of the relation pdf - leave
                                                         // blank if you don't want to use this
                                                         // DEFAULT: 'pma_pdf_pages'
        $cfg['Servers'][$i]['column_info']   = '';       // table to store column information - leave blank for
                                                         // no column comments/mime types. DEFAULT: 'pma_column_info'
        $cfg['Servers'][$i]['history']       = '';       // table to store SQL history - leave blank for no SQL
                                                         // query history. DEFAULT: 'pma_history'
        $cfg['Servers'][$i]['verbose_check'] = TRUE;     // set to FALSE if you know that your pma_* tables are
                                                         // up to date. This prevents compatibility checks and
                                                         // thereby increases performance.
        $cfg['Servers'][$i]['AllowRoot']     = TRUE;     // whether to allow root login
        $cfg['Servers'][$i]['AllowDeny']['order'] = '';      // Host authentication order, leave blank to not use
        $cfg['Servers'][$i]['AllowDeny']['rules'] = array(); // Host authentication rules, leave blank for defaults

    Please let me know if you need any more info. -- Malcolm
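    Error 2002 means the TCP connection itself is failing before any authentication happens, so it is worth testing reachability from the PMA host independently of phpMyAdmin. A minimal sketch (the address is the placeholder from the config above); if the connect fails, the usual suspects are the firewall between the nodes or MySQL's bind-address/skip-networking settings:

        import socket

        HOST, PORT = "XX.XX.XX.XX", 3306  # placeholders from the config above

        try:
            with socket.create_connection((HOST, PORT), timeout=5) as s:
                # A reachable MySQL server immediately sends a greeting
                # packet containing its version string.
                print("Connected, server greeting:", s.recv(128))
        except OSError as e:
            print("TCP connect failed:", e)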

    Read the article

  • VBA + Polymorphism: Override worksheet functions from 3rd party

    - by phi
    My company makes extensive use of a data provider via a (closed-source) VBA plugin. In principle, every query follows a certain structure: you fill one cell with a formula, where the arguments to the formula specify the query; the range of that formula is then extended (not an array formula!) and the cells below/right of it are filled with data. For this to work, however, a user has to have a terminal program installed on the machine, as well as a COM plugin referenced in VBA/Excel.

    My problem: these Excel sheets are used and extended by multiple users, and not all of them have access to the data provider. While they can open the sheet, it will recalculate and the data will be gone. However, frequent recalculation is required. I would like every user to be able to use the sheets, without executing a very specific set of formulas.

    Attempts so far:

      - Remove the reference on those computers where I do not have terminal access. This generates a NAME error in the cell containing the query (acceptable), but this query overrides parts of the data (not acceptable). If you allow the program to refresh, all data will be gone after a failed query.
      - Replace all formulas with their plain-text results in the respective cells (press a button and loop over every cell...). This obviously destroys any refresh capability the queries offer for all subsequent users, so that's pretty bad, too.
      - A theoretical idea, and I'm not sure how to implement it: replace the functions offered by the plugin with something that will be called either first (relaying the query through to the original function, if that's available) or instead of the original function (by only deploying the solution on non-terminal machines), and which just returns the original value.

    More specifically, if my query function is used like this:

        =GETALLDATA(Startdate, Enddate, Stockticker, etc)

    I would like to transparently swap the function behind the call. Do you see any hope, or am I lost? I appreciate your help.

    PS: Of course I'm talking about Bloomberg...

    Some additional points to clarify issues raised by Frank:

      - The formulas in the sheets may not be changed. This is mission-critical software, and it's way too complex for any sane person to try and touch it.
      - Only Excel and VBA may be used (which is the reason for the previous point...).
      - It would be sufficient to prevent execution of these few specific formulas/functions on a specific machine, for all Excel sheets to come.
      - This looks more and more like a problem for Stack Overflow ;-)

    Read the article

  • EMC/Legato/Networker Failed to recover files : Cross Platform Recovery not supported.

    - by marc.riera
    Software used to back up: EMC/Legato Networker. Legato server: Windows. Legato clients: same hardware (2 years ago Fedora-something, now Ubuntu).

    Trying to recover from an old client which is no longer available. So this is the thing: on 07/20/2008 we backed up a Samba server (Fedora-something) to tape, setting 1 year as both the browse policy and the retention policy. That tape is now recyclable. We took down the DNS name and deleted the Legato client configuration. That Legato client machine was reinstalled and is doing other stuff on Ubuntu 10.04, with a different name but the same IP.

    Now, 2 years and some months later, we need to recover a folder from the 2008 backup of the fedora-samba-server. First thing: Legato does not show the client name, because the config was deleted, so we create it again. We set the old DNS name back on track, pointing at the same IP where the old server was (same MAC address ;)), and created a new 'old client configuration' pointing to the new server (different Legato IP for the client, I suppose).

    The save set (ssid) containing the needed folder is on 2 tapes, 20 and 22; the index for that backup is on tape 21. We put these tapes in the jukebox (IBM T4000) (not important for the issue). All three tapes have passed their browse and retention times, so they are recyclable. So:

      - We get the clone ID from the ssid with the following command:

            mminfo -avot -q "ssid=<ssid>" -r cloneid

      - We set the tapes to not recyclable:

            nsrmm -S <ssid>/<cloneid> -o notrecyclable

      - We change the retention for the tapes to a future date:

            nsrmm -S <ssid> -e 01/20/2011

      - We check that the dates are correct:

            mminfo -avV -q "ssid=<ssid>" -r ssbrowse(26),ssretent(26),savetime

    So far it's OK. We close the terminal and restart the server, just to be sure. Finally, we recover the index for the ssid where the folder should be:

        nsrck -L7 -t "07/20/2008" oldservername.domain.org

    There, we open the Networker User, select the server, select the old client as source, select the new client as destination. And this is what I get (imgur image of the output): http://i.imgur.com/1nOr8.png

    Should I understand that I need to install whatever operating system was running on the old "linux server"/"networker client" to be able to restore 26MB of files? Thanks

    Read the article

  • Cisco ASA vs SonicWall NSA firewalls

    - by Lbaker101
    Currently I'm trying to structure our network to fully support and be redundant with BGP/multihoming. Our current company size is 40 employees, but the major part of that is our development department. We are a software company, and continued connection to the internet is a requirement, as 90% of work stops when the net goes down. The only thing hosted on site (that needs to remain up) is our Exchange server.

    Right now I'm faced with 2 different directions and was wondering if I could get your opinions. We will have 2 ISPs that are both 20 meg up/down on dedicated fiber (so 40 megs combined), handed off as an Ethernet cable into our server room: ISP #1 First Digital, ISP #2 CenturyLink.

    We currently have 2x ASA 5505s, but the 2nd one is not in use; it was there to be a failover, and it just needs the Security Plus license to be matched with the primary device. But this depends on the network structure. I have been looking into the hardware that would be required to be fully redundant, and I found that we will need either of the following:

      - 2x Cisco 2921+ series routers with failover licenses. They would go in front of the ASAs, either connected in a failover state, or with 1 ISP into each of the 2921 series routers and then 1 line into each of the ASAs (thus all 4 hardware components used actively). So: 2x Cisco 2921+ series routers, 2x Cisco ASA 5505 firewalls.
      - The other route: 2x SonicWall NSA 2400MX series, 1 primary and the secondary in a failover state. This removes the ASAs from the network and is about 2k cheaper than the Cisco route. It also brings down the points of failure, because it's just the 2 SonicWalls, and it would allow us to scale all the way up to 200-400 users (depending on their configuration).

    So the real question is: with the added functionality etc. of the SonicWall, is there a point in paying so much more to stay the Cisco route? Thanks!

    Read the article

  • Server 2008/Windows 7/Samba Unspecified error 80004005

    - by ancillary
    I have a Samba share on a LAN with a 2008 PDC/DNS. Samba authenticates with AD, and I have several Win7 machines that connect fine. I recently added a couple of new computers to the LAN which were imaged the same way (same software, etc.; different hardware, so different drivers) as the other machines, and they have the same policies set. I cannot get the new machines to connect to the Samba share no matter what: I am always met with either "Unspecified Error 0x80004005" or "Network Path not found".

    I've turned off the firewall; set LANMAN auth to respond to NTLM only / send LM & NTLM responses / use NTLM session security if negotiated, in Local Security Policy > Security Options; and tried both IP and hostname to connect. The SMB log shows that authentication succeeds, but then the connection is immediately killed by the client. tcpdump shows nothing remarkable, except that when trying to connect from the client via hostname there is an unknown-packet-type error. Here are a couple of lines from that error:

        11:18:37.964991 IP 001-client.domain.local.49372 > smb.domain.local.netbios-ssn: P 1670:2146(476) ack 201 win 255 NBT Session Packet: Unknown packet type 0xABData: (41 bytes)
        [000] AA 46 96 FA D5 99 33 75 0C C4 20 CE 26 42 F3 61   \252F\226\372\325\2313u \014\304 \316&B\363a
        [010] F0 8C FB 65 18 17 40 A5 DB 42 BB 94 37 53 92 EC   \360\214\373e\030\027@\245 \333B\273\2247S\222\354
        [020] 55 98 7F C4 AE 3D 6B 10 C4                        U\230\177\304\256=k\020 \304
        11:18:37.964998 IP smb.domain.local.netbios-ssn > 001-client.domain.local.49372: . ack 2146 win 100

    Here's smb.conf, just in case (though I don't see how it matters, if other machines are working fine):

        [global]
        workgroup = MYDOMAIN
        realm = MYDOMAIN.LOCAL
        server string = domain|smb share
        interfaces = eth1
        security = ADS
        password server = 192.168.1.3
        log level = 2
        log file = /var/log/samba/%m.log
        smb ports = 139
        strict locking = no
        load printers = No
        local master = No
        domain master = No
        wins server = 192.168.1.3
        wins support = Yes
        idmap uid = 500-10000000
        idmap gid = 500-10000000
        winbind separator = +
        winbind enum users = Yes
        winbind enum groups = Yes
        winbind use default domain = Yes

        [samba-share1]
        comment = SMB Share
        path = /home/share/smb/
        valid users = @"MYDOMAIN+Domain Users"
        admin users = @"MYDOMAIN+Domain Admins"
        guest ok = no
        read only = No
        create mask = 0765
        force directory mode = 0777

    Any ideas what else I could try or look for? Or what might be the problem? Thanks.
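    Since the working and failing clients were imaged the same way, comparing their effective NTLM/LSA settings directly may reveal a policy that did not apply. A small read-only sketch to run on one working and one failing machine (standard-library winreg; values that do not exist simply fall back to the OS default, so a missing value on one machine but not the other is itself a clue):

        import winreg

        LSA = r"SYSTEM\CurrentControlSet\Control\Lsa"

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, LSA) as key:
            for value in ("LmCompatibilityLevel",
                          "RestrictAnonymous",
                          "EveryoneIncludesAnonymous"):
                try:
                    data, _ = winreg.QueryValueEx(key, value)
                    print(f"{value} = {data}")
                except FileNotFoundError:
                    print(f"{value} not set (OS default applies)")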

    Read the article

  • Batch file to uninstall all Sun Java versions?

    - by Ricket
    I'm setting up a system to keep Java in our office up to date. Everyone has all different versions of Java, many of them old and insecure, and some dating back as far as 1.4. I have a System Center Essentials server which can push out and silently run a .msi file, and I've already tested that it can install the latest Java. But old versions (such as 1.4) aren't removed by the installer, so I need to uninstall them. Everyone is running Windows XP.

    The neat coincidence is that Sun just got bought by Oracle, and Oracle has now changed all the instances of "Sun" to "Oracle" in Java. So I conveniently don't have to worry about uninstalling the latest Java, because I can just search for and uninstall all Sun Java programs.

    I found the following batch script on a forum post which looked promising:

        @echo off & cls
        Rem List all Installation subkeys from uninstall key.
        echo Searching Registry for Java Installs
        for /f %%I in ('reg query HKLM\SOFTWARE\microsoft\windows\currentversion\uninstall') do echo %%I | find "{" > nul && call :All-Installations %%I
        echo Search Complete..
        goto :EOF

        :All-Installations
        Rem Filter out all but the Sun Installations
        for /f "tokens=2*" %%T in ('reg query %1 /v Publisher 2^> nul') do echo %%U | find "Sun" > nul && call :Sun-Installations %1
        goto :EOF

        :Sun-Installations
        Rem Filter out all but the Sun-Java Installations. Note the tilda + n, which drops all the subkeys from the path
        for /f "tokens=2*" %%T in ('reg query %1 /v DisplayName 2^> nul') do echo . Uninstalling - %%U: | find "Java" && call :Sun-Java-Installs %~n1
        goto :EOF

        :Sun-Java-Installs
        Rem Run Uninstaller for the installation
        MsiExec.exe /x%1 /qb
        echo . Uninstall Complete, Resuming Search..
        goto :EOF

    However, when I run the script, I get the following output:

        Searching Registry for Java Installs
        'DEV_24x6' is not recognized as an internal or external command, operable program or batch file.
        'SUBSYS_542214F1' is not recognized as an internal or external command, operable program or batch file.

    And then it appears to hang, and I Ctrl-C to stop it. Reading through the script, I don't understand everything, but I don't know why it is trying to run pieces of registry keys as programs.

    What is wrong with the batch script? How can I fix it, so that I can move on to somehow turning it into an MSI and deploying it to everyone to clean up this office? Or alternatively, can you suggest a better solution or an existing MSI file to do what I need? I just want to make sure to get all the old versions of Java off of everyone's computers, since I've heard of exploits that cause web pages to load using old versions of Java, and I want to avoid those.
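    For what it's worth, those error strings look like fragments of PCI hardware key names (e.g. ...&DEV_24x6&SUBSYS_542214F1...), which suggests the unquoted "echo %%I" is being split at the & characters some uninstall key names contain. As an alternative to debugging the batch quoting, here is a rough sketch of the same search-and-uninstall logic in Python; it is untested against this environment and reads the standard uninstall hive, so treat the details as assumptions:

        import subprocess
        import winreg

        UNINSTALL = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

        def sun_java_products():
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL) as root:
                for i in range(winreg.QueryInfoKey(root)[0]):
                    name = winreg.EnumKey(root, i)
                    if not name.startswith("{"):   # MSI product codes are GUIDs
                        continue
                    with winreg.OpenKey(root, name) as k:
                        try:
                            pub, _ = winreg.QueryValueEx(k, "Publisher")
                            disp, _ = winreg.QueryValueEx(k, "DisplayName")
                        except FileNotFoundError:
                            continue
                    if "Sun" in pub and "Java" in disp:
                        yield name, disp

        for code, disp in sun_java_products():
            print("Uninstalling", disp)
            subprocess.run(["MsiExec.exe", "/x" + code, "/qb"])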

    Read the article

  • Bypassing "Found New Hardware Wizard" / Setting Windows to Install Drivers Automatically

    - by Synetech inc.
    Hi, my motherboard finally died after the better part of a decade, so I bought a used system. I put my old hard drive and sound card in the new system, and connected my old keyboard and mouse; the rest of the components (CPU, RAM, mobo, video card) are from the new system.

    I knew beforehand that it would be a challenge to get Windows to boot and install drivers for the new hardware (particularly since the foundational components are new), but I am completely unable even to attempt the work of installing drivers for things like the video card, because the keyboard and mouse won't work. (They do work in the BIOS screen, in DOS mode, in Windows 7, in XP's boot menu, etc., just not in Windows XP itself.)

    Whenever I try to boot XP (in normal or safe mode), I get a bunch of balloons popping up for all the new hardware detected, and a Found New Hardware Wizard for "Processor" (obviously it has to install drivers for the lowest-level components on up). Unfortunately I cannot click Next, since the keyboard and mouse won't work yet, because the motherboard drivers (for the PS/2 or USB ports) are not yet installed. I even tried a serial mouse, but to no avail; again, it does work in DOS, 7, etc., but not in XP, because the serial port driver is not yet installed there either.

    I tried mounting the SOFTWARE and SYSTEM hives under Windows 7 in order to manually set the "unsigned drivers warning" to ignore (using both of the driver-signing policy settings that I found references to). That didn't work; I still get the wizard. These are not even fancy, proprietary, third-party, or unsigned drivers. They are drivers that come with Windows, as the drivers for CPU, RAM, IDE controller, etc. tend to be. And the keyboard and mouse drivers are the generic ones at that (but like I said, those are irrelevant, since the drivers for the ports they are connected to are not yet installed).

    Obviously at some point over the past several years, a setting got changed to make Windows always prompt me when it detects new hardware. (It was also configured to show the Shutdown Event Tracker on abnormal shutdowns, so I had to turn that off so that I could even see the desktop.) Oh, and I tried deleting all of the PNF files so that they get regenerated, but that too did not help.

    Does anyone know how I can reset Windows to at least try to automatically install drivers for new hardware before prompting me if it fails? Conversely, does anyone know how exactly one turns off automatic driver installation (and prompts with the wizard)? Thanks a lot.

    Read the article

  • Hot-swap drive got new name, can I change it on-the-fly?

    - by T.J. Crowder
    One of the HDDs in my server's RAID config failed, so I took it out of the array and had the data center hot-swap it. They've done that, but now the new drive is /dev/sdc rather than /dev/sda. I suspect (correct me if I'm wrong) that if I reboot the server, it will be /dev/sda again, so I'm hesitant to add it back to the array as /dev/sdc: I don't want to lay a trap for myself to fall into on the next reboot. I'd just as soon not reboot the server if I don't need to (if I do need to, well, too bad for me).

    Is there a way I can change the device name from /dev/sdc to /dev/sda without rebooting? This is on Ubuntu 10.04 LTS. It's an md array ("Linux Software RAID"), where currently one of the devices (there are a couple of them) looks like this ("degraded" because I've removed the old /dev/sda from it):

        # mdadm --detail /dev/md0
        /dev/md0:
                Version : 00.90.03
          Creation Time : Sun Oct 11 21:07:54 2009
             Raid Level : raid1
             Array Size : 97536 (95.27 MiB 99.88 MB)
          Used Dev Size : 97536 (95.27 MiB 99.88 MB)
           Raid Devices : 2
          Total Devices : 1
        Preferred Minor : 0
            Persistence : Superblock is persistent

            Update Time : Thu Jun 30 09:31:16 2011
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0

                   UUID : 496be7a5:ab9177ed:7792c71e:7dc17aa4
                 Events : 0.112

            Number   Major   Minor   RaidDevice   State
               0       8       17        0        active sync   /dev/sdb1
               1       0       0         1        removed

    Thanks.

    Update: Reading through the kernel md documentation, I suspect that if the name changes on reboot, it won't matter. (Good design, that.) Here's why:

        Boot time autodetection of RAID arrays

        When md is compiled into the kernel (not as module), partitions of type 0xfd
        are scanned and automatically assembled into RAID arrays. This autodetection
        may be suppressed with the kernel parameter "raid=noautodetect". As of kernel
        2.6.9, only drives with a type 0 superblock can be autodetected and run at
        boot time.

        The kernel parameter "raid=partitionable" (or "raid=part") means that all
        auto-detected arrays are assembled as partitionable.

    I do have md compiled into the kernel, so I'm rebuilding the array now and will do the reboot to see what happens. Even if it works, the above doesn't answer the question I actually asked, so unless someone comes along and answers that question in the meantime (I'd be interested, even if it's not necessary for what I'm doing this very moment), I'll just delete the question to keep the noise down.
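    For what it's worth, md identifies array members by the UUID in their superblocks rather than by the /dev/sdX letter, which is why the name changing across reboots shouldn't matter. To see the stable identities behind the shifting letters, the /dev/disk/by-id symlinks can be listed; a tiny read-only sketch:

        import os

        base = "/dev/disk/by-id"

        # Each stable ID (model + serial) is a symlink to whatever /dev/sdX
        # name the kernel happened to assign this boot.
        for name in sorted(os.listdir(base)):
            target = os.path.realpath(os.path.join(base, name))
            print(f"{name} -> {target}")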

    Read the article

  • DLINK WBR-1310B Wireless Router seems to hang...

    - by Ira Baxter
    I have a brand-new D-Link WBR-1310B wireless router (box never before opened, although I bought it at the neighborhood computer junk store). I am using it at home (and in fact am using it at this instant from a wireless laptop). When it's working, I can ping it at 192.168.0.1, and I can log into it at //192.168.0.1 from the PC attached to it by LAN and from the wireless PC.

    In the day since I've installed it, it seems to have locked up 3 times. Each time, the symptoms are that my web browser (or other IP service, e.g., POP3) stops with a "No internet connection" error. Attempts to contact the router via 192.168.0.1 get no reaction, from either the wireless laptop or the hardwired PC sitting next to it. It doesn't respond to pings to that address either. Power-cycling the router fixes it.

    I've seen discussion in other questions about aging cheap electronics; this unit is too new to be aged. Anybody else seen this behavior with a WBR-1310? Or do I just need to exchange it for another and try again? (I hate rolling dice. I bought the D-Link because a previous Linksys died of apparent heating problems; how many do I have to cycle through before I get something that works and is long-term stable?) Remarkably, nobody talks about how much software is in a router. Is the stuff just buggy?

    EDIT: Happened again while I was working on the wireless Vista laptop. (Seems like once an hour?) I was a little more careful this time. The wireless laptop can ping it, but it can't get the login screen. I visited the LAN-connected PC (takes me a minute to walk from the laptop to the PC at the other end of the house) and attempted to visit a random web page. Surprise, that worked! And now, after a minute walking back to the laptop, I can reconnect the wireless laptop and get to the login page from it. Strangely, the time/date had been reset back to 2002. (I'll swear I set it and saved the system configuration after updating the firmware; it made me redo every other bit of reconfiguration again.) Is there something funny about wireless leases expiring? The router says the leases it is handing out are good for 180 minutes, and the delay-to-inaccessible was only about an hour. The DSL connection seems to have a 10-minute lease.
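    To pin down whether the lockups line up with DHCP lease renewals, a timestamped reachability log helps. A rough sketch for the Windows laptop; the ping flags shown are the Windows ones (swap in -c 1 -W 2 on Linux), and the 30-second interval is an arbitrary choice:

        import subprocess
        import time
        from datetime import datetime

        ROUTER = "192.168.0.1"

        # Ping the router every 30 seconds and log state changes, so outage
        # times can be compared against the 180-minute lease period.
        last_ok = None
        while True:
            ok = subprocess.run(["ping", "-n", "1", "-w", "2000", ROUTER],
                                stdout=subprocess.DEVNULL).returncode == 0
            if ok != last_ok:
                print(datetime.now().isoformat(),
                      "reachable" if ok else "UNREACHABLE")
                last_ok = ok
            time.sleep(30)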

    Read the article

  • Hard freeze on new computer

    - by mphair
    Specs:

      - OCZ Gold 3x2GB 240-pin DDR3 SDRAM PC3-12800
      - Palit NE5T240SFHD01 GeForce GT240 1GB 128-bit GDDR5
      - ASUS P7P55D-E LGA 1156 P55 SATA 6Gb/s USB 3.0 ATX Intel motherboard
      - Intel Core i7-860 Lynnfield 2.8GHz 8MB L3 cache LGA 1156 95W quad-core processor
      - Samsung 22X optical drive (DVD+-R/RW)
      - Corsair CMPSU-620HX 620W ATX12V V2.2
      - Windows 7 Ultimate x64

    Brand-new system (got it from Newegg two days ago); it booted up, installed Windows, and ran for a day just fine. Yesterday I booted it up in the afternoon and ran various games at full graphics for most of the day. I turned on WoW, played for a few hours, and it hard-stalled: no numlock switching, no mouse feedback, but nothing going wrong on the screen. No BSOD. I waited a bit to see if the stall was just temporary, then force-shut-down the computer.

    Upon reboot, everything seemed fine; Windows saw that it didn't shut down properly, but I went into normal boot and restarted WoW. I was able to load it up and start running around when it froze again. This time, when I restarted, it didn't even get to the BIOS: it started (power went on) and just hung with no output to the monitors. I shut it off and went to bed.

    This morning I turned it on and went into BIOS setup. I'm not terribly experienced with messing with BIOS settings, but I checked them over the best I could. Everything seemed fine, so I booted into Windows and browsed the internet for a bit looking for a solution; hard freeze within 10 minutes. I restarted, went into the BIOS, and checked the CPU temperature: 40 C.

    I'm kind of stumped here. Some people say it might be a memory issue, but why would it take so long to show up? Could it have been slowly accessing one memory stick at a time until it just got to a bad one, and that's what is causing it to fail? It seems odd that I don't get a BSOD from a hardware failure. Having the screen just halt, with no input or output change, seems like a software thing to me. Any thoughts?

    Read the article

  • Seeking (somewhat) better explanations about supporting > 2.1 TB hard drives.

    - by irrational John
    Today while Googling about, I stumbled across posts claiming that Seagate plans to ship a 3TB drive sometime later in 2010. Unfortunately, the material I looked at seemed to contain tidbits of info which I didn't think fit together properly. (I would link to some examples, but I'm only allowed 1 link per post at the moment.)

    Now, I really don't have any "need" to better understand the underlying tedious details of this; I am just curious. And confused. So, some questions I'm hoping someone better informed than I might answer:

      1. The talk about a potential addressing problem in both the hardware and the software confused me. The assertion is that something called Long LBA addressing (LLBA) is needed in the Command Descriptor Block as a way to get around the current limits to access a hard drive bigger than ~2.1 (or ~2.2?) TB. OK, fine. But I thought the last time this problem came up, it was solved by extending the length of the LBA field from 28 to 48 bits (remember the website www.48bitlba.com?). A 6-byte LBA is clearly large enough, so what's up with this LLBA talk? I thought this was all fixed by Win XP SP2, if not sooner. And certainly all the hardware should be up to the task, shouldn't it?

      2. The real problem, as I understand it, with drives much bigger than 2 TB is the 4-byte LBA fields in the Master Boot Record (MBR) used to partition just about all hard drives at the moment. The most likely solution is to migrate to Intel's GUID Partition Table (GPT), which uses 8-byte fields for the LBA. What I don't understand in this context is what the problem is with booting, say, Windows from a 3TB drive that uses a GPT. Granted, the current PC BIOS wouldn't know how to recognize or work with a GPT, but every GPT comes with a so-called "safety" or "guarding" MBR in sector 0. Apple already uses a hybrid version of the MBR to allow them to boot Windows on their Intel Macs (aka Boot Camp). Couldn't something similar be done to allow the PC BIOS to recognize and boot from a partition in, say, the first 1 GB of a 3TB or larger drive?

    I've got more questions, such as where 4K sectors fit into all of this, but it's probably time I just shut up and posted this. ;-)

    -irrational john
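    The ~2.1/~2.2 TB figures mentioned above all fall out of the same arithmetic once the sector size is fixed at the traditional 512 bytes. A quick worked sketch of the capacity ceiling implied by each LBA field width discussed in the question:

        SECTOR = 512  # bytes; the sector size all of these limits assume

        limits = {
            "28-bit LBA (pre-ATA-6)":     2**28,
            "32-bit LBA (MBR field)":     2**32,
            "48-bit LBA (ATA-6, 6-byte)": 2**48,
            "64-bit LBA (GPT, 8-byte)":   2**64,
        }

        for name, sectors in limits.items():
            size = sectors * SECTOR
            print(f"{name}: {size / 10**12:,.3f} TB ({size / 2**40:,.2f} TiB)")

        # The 32-bit MBR field gives 2**32 * 512 = 2.199 TB, i.e. the
        # ~2.1-2.2 TB ceiling the posts talk about; the 48-bit ATA limit
        # is already ~144 PB, which is why the drive interface itself is fine.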

    Read the article

  • Unable to install anything on Ubuntu 9.10 with aptitude

    - by Srisa
    Hello. Earlier I could install software by using the 'sudo aptitude install' command. Today, when I tried to install rkhunter, I got errors. It is not just rkhunter; I am not able to install anything. Here is the text output:

        user@server:~$ sudo aptitude install rkhunter
        ................
        ................
        20% [3 rkhunter 947/271kB 0%]
        Get:4 http://archive.ubuntu.com karmic/universe unhide 20080519-4 [832kB]
        40% [4 unhide 2955/832kB 0%]
        100% [Working]
        Fetched 1394kB in 1s (825kB/s)
        Preconfiguring packages ...
        Selecting previously deselected package lsof.
        (Reading database ... 20076 files and directories currently installed.)
        Unpacking lsof (from .../lsof_4.81.dfsg.1-1_amd64.deb) ...
        dpkg: error processing /var/cache/apt/archives/lsof_4.81.dfsg.1-1_amd64.deb (--unpack):
         unable to create `/usr/bin/lsof.dpkg-new' (while processing `./usr/bin/lsof'): Permission denied
        dpkg-deb: subprocess paste killed by signal (Broken pipe)
        Selecting previously deselected package libmd5-perl.
        Unpacking libmd5-perl (from .../libmd5-perl_2.03-1_all.deb) ...
        Selecting previously deselected package rkhunter.
        Unpacking rkhunter (from .../rkhunter_1.3.4-5_all.deb) ...
        dpkg: error processing /var/cache/apt/archives/rkhunter_1.3.4-5_all.deb (--unpack):
         unable to create `/usr/bin/rkhunter.dpkg-new' (while processing `./usr/bin/rkhunter'): Permission denied
        dpkg-deb: subprocess paste killed by signal (Broken pipe)
        Selecting previously deselected package unhide.
        Unpacking unhide (from .../unhide_20080519-4_amd64.deb) ...
        dpkg: error processing /var/cache/apt/archives/unhide_20080519-4_amd64.deb (--unpack):
         unable to create `/usr/sbin/unhide-posix.dpkg-new' (while processing `./usr/sbin/unhide-posix'): Permission denied
        dpkg-deb: subprocess paste killed by signal (Broken pipe)
        Processing triggers for man-db ...
        Errors were encountered while processing:
         /var/cache/apt/archives/lsof_4.81.dfsg.1-1_amd64.deb
         /var/cache/apt/archives/rkhunter_1.3.4-5_all.deb
         /var/cache/apt/archives/unhide_20080519-4_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        A package failed to install.  Trying to recover:
        Setting up libmd5-perl (2.03-1) ...
        Building dependency tree...
        Reading state information...

    I have removed some lines to reduce the text; all the error messages are in here, though. My experience with Linux is limited, and I am not sure what the problem is or how it is to be resolved. Thanks.
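    The "Permission denied" errors while dpkg (running as root) tries to create files under /usr/bin usually point at the filesystem rather than at apt: /usr mounted read-only, or immutable attributes on the directories. A quick Python 3 check; lsattr comes from e2fsprogs and assumes an ext2/3-family filesystem (an assumption here):

        import os
        import subprocess

        # 1. Is the filesystem holding /usr/bin mounted read-only?
        st = os.statvfs("/usr/bin")
        print("mounted read-only:", bool(st.f_flag & os.ST_RDONLY))

        # 2. Are immutable (i) or append-only (a) attributes set on the
        #    directories dpkg writes into? Run as root.
        subprocess.run(["lsattr", "-d", "/usr/bin", "/usr/sbin"])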

    Read the article

  • Cannot change default application association of certain file types

    - by H.B.
    After associating my MP3 files with MPlayer, I can no longer change that association using the "Choose default program..." dialogue; the "Always use this [...]" checkbox is always greyed out (Control Panel > Default Programs > Associate a file type or protocol with a program does not let me change it either). That also happened for MP4s, but not for MKVs, for example, and if I associate my MP3s with other applications like VLC it does not get blocked. I would really like to know why that is, and whether I can avoid it beforehand (thankfully I already know ways to fix it afterwards).

    Edit: Another observation: the blocking programs (I managed to block it with an association to Visual Studio as well) do not appear in the Recommended Programs of the open-with dialogue. (And the Explorer said: "The current program is not recommended, but I won't let you change it, ha!")

    Edit: A screenshot as requested: as you can see on the top left (if you know the icon of MPlayer), the file is currently associated with MPlayer.

    Edit: Ways to fix it (note: this question is not about fixing it):

      - Using the Default Programs control panel: Control Panel > Default Programs > Set Default Programs, select WMP, "Choose defaults for this program", check .mp3. This should reassociate the files with WMP, and you can create a new association in the Explorer.
      - Using the registry (as always, keep your hands off it unless you know what you are doing, or if you are fine with accidentally breaking your system): HKEY_CURRENT_USER > Software > Microsoft > Windows > CurrentVersion > Explorer > FileExts > .mp3. Here you could, for example, clean up the open-with list. The current default program seems to be saved here as well, in the key UserChoice; there you can change the ProgId string to another application. You can associate it with WMP by entering WMP11.AssocFile.MP3, or just pick another application right away. You may need to mess with permissions on the key, though, if you cannot change the ProgId value.
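    To see what the registry route described above is looking at, here is a small read-only sketch that prints the current ProgId for .mp3 (standard-library winreg; reading is harmless, while writing runs into the permission issue mentioned):

        import winreg

        path = (r"Software\Microsoft\Windows\CurrentVersion"
                r"\Explorer\FileExts\.mp3\UserChoice")

        # UserChoice holds the per-user default; its ProgId names the handler.
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, path) as key:
            progid, _ = winreg.QueryValueEx(key, "ProgId")
            print(".mp3 is currently associated with:", progid)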

    Read the article

  • Windows 7 inbuilt and 3rd party (de)fragmentation related queries

    - by Karan
    I have a pretty good idea of how files end up getting fragmented. That said, I just copied ~3,200 files of varying sizes (from a few KB to ~20GB) from an external USB HDD to an internal, freshly formatted (under Windows 7 x64), NTFS, 2TB, 5400RPM, WD, SATA, non-system (i.e. secondary) drive, filling it up to 57%. Since it should have been very much possible for each file to be stored in one contiguous block, I expected the drive to be fragmented no more than 1-2% at most after this rather lengthy exercise (unfortunately this older machine doesn't support USB 3.0).

    Windows 7's inbuilt defrag utility told me after a quick analysis that the drive was fragmented only 1% or so, which dovetailed neatly with my expectations. However, just out of curiosity I downloaded and ran the latest portable x64 version of Piriform's Defraggler, and was shocked to see the drive reported as ~85% fragmented! The portable version of Auslogics Disk Defrag agreed with Defraggler, and both clearly expected to grind away for ~10 hours to completely defragment the drive.

    1. How in blazes could the inbuilt and 3rd-party defrag utils disagree so badly? I mean, 10-20% variance is probably understandable, but 1% and 85% are miles apart! This Engineering Windows 7 blog post states:

        In Windows XP, any file that is split into more than one piece is considered fragmented. Not so in Windows Vista if the fragments are large enough – the defragmentation algorithm was changed (from Windows XP) to ignore pieces of a file that are larger than 64MB. As a result, defrag in XP and defrag in Vista will report different amounts of fragmentation on a volume. ...

    [Please read the entire post so the quote is not taken out of context.] Could it simply be that the 3rd-party defrag utils ignore this post-XP change and continue to use analysis algorithms similar to those XP used?

    2. Assuming that the 3rd-party utils aren't lying about the real extent of fragmentation (which Windows is downplaying post-XP), how could the files have gotten fragmented so badly, given they were just copied over afresh to an empty drive?

    3. If vastly differing analysis algorithms explain the yawning gap, which do I believe? I'm no defrag fanatic for sure, but 85% is enough to make me seriously consider spending 10 hours defragging this drive. On the other hand, the 1% reported by Windows' own defragger clearly implies that there is no cause for concern and defragging would actually have negative consequences (as per the post). Is Windows' assumption valid, and should I just let it be, or will there be noticeable performance gains after running one of the 3rd-party utils for 10 hours straight?

    4. I see that out of the box, Windows 7 defrag is scheduled to run weekly. Does anyone know whether it defrags every single time, or only if its analysis reveals a fragmentation percentage over a set threshold? If the latter, what is this threshold, and can it be changed, maybe via a registry edit?

    Thanks for reading through (my first query on this wonderful site!) and for any helpful replies. Also, if you're answering question #3, please keep in mind that any speed increases post-defragging with 3rd-party utils vis-à-vis Windows' inbuilt program should not include pre-Vista (preferably pre-Win7) examples. Further, examples of programs that made your system boot faster won't help in this case, since this is a non-system drive (although one that'll still be used daily).
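    To see how both tools can be "right" at the same time under the two definitions quoted above, consider a deliberately artificial example; the numbers are hypothetical, chosen only to illustrate the 64MB rule:

        # Hypothetical: a 20 GB file stored in 320 contiguous pieces of 64 MB each.
        pieces = (20 * 1024) // 64        # 320 fragments
        piece_mb = 64

        # XP-era rule: any file in more than one piece is fragmented,
        # so this file counts fully toward the fragmentation figure.
        fragmented_xp = pieces > 1                       # True

        # Vista/7 rule: pieces of 64 MB or larger are ignored, so the
        # same file contributes nothing to the reported percentage.
        fragmented_win7 = pieces > 1 and piece_mb < 64   # False

        print(pieces, fragmented_xp, fragmented_win7)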

    Read the article

  • Wireless card not being detected in BackTrack 5

    - by Jesse Nelson
    I just installed BackTrack 5 and I am unable to detect my wireless card: iwconfig doesn't list my interface. I can see that the hardware is present in lspci -vnn (see below), but I can't get the interface detected. I have tried to reinstall the compat-wireless package, but I get errors during the build (see below).

    I have done a ton of research and I keep hitting a brick wall, mostly because the wiki for BackTrack is down and I can't find any good resources. Does anyone know how to fix the issue? Also, does anyone know how I can scan the hardware to determine which NIC is assigning my interface? If I can figure out the interface name, I think I can set it up manually by bringing up the link and using wireless-tools to configure the connection by hand; this is what I had to do in Arch on my Mac.

    As stated, the wiki for BackTrack is down and I can't find any help on the issue. I tried the full kernel upgrade suggested in my software update, but after the update was complete and I logged back in, I had a new login manager and the only thing I was able to log into was window managers. However, after this update my wireless was working fine. Please help; I am new to Linux, the wiki is down, and I have nowhere else to turn. I forgot to mention I am using the KDE version, not GNOME. Thanks in advance for any help or support.

    Attempt at make:

        root@bt:/usr/src/compat-wireless-3.3-rc1-2# make
        /usr/src/compat-wireless-3.3-rc1-2/config.mk:254: "WARNING: CONFIG_CFG80211_WEXT will be deactivated or not working because kernel was compiled with CONFIG_WIRELESS_EXT=n. Tools using wext interface like iwconfig will not work. To activate it build your kernel e.g. with CONFIG_LIBIPW=m."
        make -C /lib/modules/2.6.38/build M=/usr/src/compat-wireless-3.3-rc1-2 modules
        make: *** /lib/modules/2.6.38/build: No such file or directory.  Stop.
        make: *** [modules] Error 2

    lspci output:

        root@bt:/usr/src/compat-wireless-3.3-rc1-2# lspci -vnn -i net
        lspci: I/O error at net, line 0
        root@bt:/usr/src/compat-wireless-3.3-rc1-2# lspci -vnn
        02:00.0 Network controller [0280]: Atheros Communications Inc. Device [168c:0032] (rev ff) (prog-if ff)
        !!! Unknown header type 7f   (This is the problem, but I can't find the solution)
        Kernel modules: ath9k

    iwconfig output:

        root@bt:/usr/src/compat-wireless-3.3-rc1-2# iwconfig
        lo        no wireless extensions.
        eth0      no wireless extensions.
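    One concrete clue in the make output: the build stops because /lib/modules/2.6.38/build does not exist, i.e. the headers for the running kernel are not installed (or the sources expect a different kernel than the one running). A tiny check, assuming the running kernel is the intended build target:

        import os

        release = os.uname().release          # kernel the machine is running
        build_dir = f"/lib/modules/{release}/build"

        if os.path.isdir(build_dir):
            print(build_dir, "exists; out-of-tree module builds can find headers")
        else:
            print(build_dir, "missing; install the matching linux-headers package")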

    Read the article

  • Nginx - Redirect any Subdomain to File without Rewriting

    - by Waffle
    Recently I switched from Apache to Nginx to increase performance on a web server running Ubuntu 11.10. I have been having issues figuring out how certain things work in Nginx compared to Apache, but one issue has been stumping me and I have not been able to find the answer online.

    My problem is that I need to be able to redirect (not rewrite) any sub-domain to a file, but that file needs to be able to get the sub-domain part of the URL in order to do a database look-up of that sub-domain. So far, I have been able to get any sub-domain to rewrite to that file, but then it loses the text of the sub-domain I need. For example, I would like test.server.com to redirect to server.com/resolve.php, but still remain as test.server.com. If this is not possible, the thing I would need at the very least would be something like going to test.server.com taking you to server.com/resolve.php?=test. One of these options must be possible in Nginx. My config as it stands right now looks something like this:

        server {
            listen 80; ## listen for ipv4; this line is default and implied
            listen [::]:80 default ipv6only=on; ## listen for ipv6

            root /usr/share/nginx/www;
            index index.php index.html index.htm;

            # Make site accessible from http://localhost/
            server_name www.server.com server.com;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to index.html
                try_files $uri $uri/ /index.html;
            }

            location /doc {
                root /usr/share;
                autoindex on;
                allow 127.0.0.1;
            }

            location /images {
                root /usr/share;
                autoindex off;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #
            #error_page 500 502 503 504 /50x.html;
            #location = /50x.html {
            #    root /usr/share/nginx/www;
            #}

            # proxy the PHP scripts to Apache listening on 127.0.0.1:80
            #
            #location ~ \.php$ {
            #    proxy_pass http://127.0.0.1;
            #}

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            #
            location ~ \.php$ {
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            #location ~ /\.ht {
            #    deny all;
            #}
        }

        server {
            listen 80 default;
            server_name *.server.com;
            rewrite ^ http://www.server.com/resolve.php;
        }

    As I said before, I am very new to Nginx, so I have a feeling the answer is pretty simple, but no examples online seem to deal with just redirects without rewrites, or with rewriting with the sub-domain section included. Any help on what to do would be most appreciated, and if anyone has a better idea to accomplish what I need, I am also open to ideas. Thank you very much.
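    One approach that seems to fit the fallback option (passing the sub-domain along explicitly) is a regex server_name with a named capture, which nginx supports from 0.8.25 on when built against a PCRE with named-capture support. A sketch, not tested against this exact config:

        server {
            listen 80 default;
            # Capture the sub-domain part into $sub (needs PCRE named captures)
            server_name ~^(?<sub>.+)\.server\.com$;

            # With an absolute URL as the target, nginx issues an HTTP redirect,
            # so the browser ends up at e.g. /resolve.php?sub=test. The trailing
            # "?" stops nginx from appending the original query string.
            rewrite ^ http://www.server.com/resolve.php?sub=$sub? redirect;
        }

    Alternatively, if the user should stay at test.server.com, this catch-all server could serve resolve.php directly and the PHP side could read the sub-domain out of $_SERVER['HTTP_HOST'] instead of a query parameter.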

    Read the article

  • Computer freezes +/- 30 seconds, suspicion on SSD

    - by Robert vE
    My computer freezes for about 30 seconds; this happens occasionally. When it happens I can still move the mouse, and sometimes even alternate between tabs in Google Chrome, but if I try to open Windows Explorer nothing happens, and Chrome reports "waiting for cache". It also happens in StarCraft II, during which the sound loops. I have made a trace as the topic "How do I troubleshoot a Windows 7 freeze or slowness?" describes. Trace: https://docs.google.com/open?id=0B_VkKdh535p6NklhSDdBLURUMnc (I have looked at it, but I couldn't figure it out.)

    My system specs:

      - AMD Athlon X4 651
      - Asus ATI HD6670
      - ADATA SSD SP900
      - Asus F1A55 mainboard
      - 4 GB Crucial 1333 RAM
      - 500 watt ATX PSU

    I'm running Windows 7, fully updated. Any help is much appreciated.

    Update: I tried something before your reply that may have helped the problem; I don't know for sure if it has, it's too soon to tell. A bit of history first. I had problems installing Win7 on my SSD from the start. In IDE mode it worked, but I had the same problems as above; AHCI was a total failure, both with it on before install and when turning it on after install (including tweaking the registry). I didn't bother installing the AMD chipset/AHCI driver, as it was reported to have no TRIM function and would thus make problems worse. Eventually I did install the AMD SMBus driver, as the stability issues were driving me crazy. It worked (no more issues) until I installed some extra drivers and software: audio, LAN, the ASUS suite. I don't see the relation, but somehow that screwed up my system again.

    As a last effort I posted here on this site, after which the thought occurred to me to turn AHCI back on, since by now I had all the necessary drivers installed anyway. I did, and stability didn't seem great for the first few reboots, but eventually everything seemed to work. I tried to play StarCraft II (an almost guaranteed freeze before) and had no problems. I'm basically crossing my fingers and hoping the problem is gone for good.

    I still think it has something to do with my SSD. In my research into the problem I noticed a lot of these issues with SandForce 2281 firmware, the exact same firmware I have. People reported the same freezes I had; additionally, they reported that during a freeze the HDD light stayed on, and after reading this I noticed that it happened on my computer as well. None of this is conclusive evidence that my SSD is really at fault, but it is suspicious. And why turning on AHCI would fix it, I don't know.

    Thank you Tom for taking a look; if the problem returns I will certainly do what you advised.

    Read the article

  • PHP hits 100% CPU and eats RAM at the same time Monday to Friday

    - by Daniel Samuels
    We run a learning platform for primary schools here in the UK, and it's all been running extremely well. However, at around 4PM Monday to Friday we see the same issue arise: 1-2 PHP threads will spike to 100% CPU and gradually start eating up RAM until the server(s) fall over.

    98%+ of our requests are HTTPS. These come into our layer-7 load balancer, which decrypts the SSL data, adds the X-HTTP-Forwarded-For header, and forwards the data to an application server (we have 2 of those at the moment) on port 80. Our application servers run Varnish on port 80, which takes in the request from the load balancer and passes it through to Nginx on port 81. Nginx then works out which 'vhost' it needs to use and passes any PHP processing through to PHP-CGI, which is listening on a socket (managed through spawn-fcgi). There's an instance of Memcached running too; MySQL runs on a separate server/slave setup.

    Throughout the day the load typically goes no higher than 0.8 on either of the application servers, but at around 4PM our problem arises. I've managed to run strace on a few of the actual threads when they cause the problem, and I always see the same thing:

        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0
        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0

    This is repeated infinitely and never stops until you SIGKILL the process or the OOM killer kills it. There are no cron jobs scheduled to run at that time, and I don't have any way of seeing exactly which Nginx request is associated with the PHP process that is running.

    We are running PHP 5.3.14, which we upgraded to from 5.3.8 last week to rule out the older version being the problem. This issue has been going on for a few months now and we have no idea what is causing it. We deploy our software very frequently, so it's difficult to track down a specific release which may have started the problem, especially as we do not know the date of the first occurrence of this issue.

    Varnish is version 3.0.1, Nginx is 1.0.6 (which I understand is about a year old now), and our servers are running CentOS release 5.7 (Final); they have Intel i3 540s at 3.07GHz and 8GB of RAM. There's a discussion on the Debian mailing list about something very similar; you can find that here.

    Has anyone seen anything like this in the past? Does anyone have any ideas or suggestions? Is there a way of linking an Nginx request directly to a PHP thread? Is there a better way of seeing what the PHP process is doing? (I've seen GDB mentioned, though I'll have to recompile PHP.) Thanks!
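    One low-risk thing to rule out, given that the spinning syscall is a stat of the zoneinfo file: when date.timezone is not set in php.ini, PHP's date functions fall back to guessing the timezone each time it is needed, which shows up as repeated zoneinfo lookups. This is only a hypothesis about the loop here, but pinning the value is cheap; a sketch of the ini lines (the zone matches the one in the strace):

        ; php.ini - pin the default timezone so date functions stop guessing
        ; check the active value afterwards with: php -i | grep date.timezone
        [Date]
        date.timezone = "Europe/London"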

    Read the article

  • Out of nowhere, ssh_exchange_identification: Connection closed by remote host (me too)

    - by dgerman
    See similar: Out of nowhere, ssh_exchange_identification: Connection closed by remote host

    Today, 6/19/12, attempting to ssh to the same host as usual, ssh replied:

        ssh_exchange_identification: Connection closed by remote host

    Two additional attempts failed:

        ssh -v $RWS
        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: Applying options for *
        debug1: Connecting to Real-World-Systems.com [174.127.119.33] port 22.
        debug1: Connection established.
        debug1: identity file /Users/dgerman/.ssh/id_rsa type 1
        debug1: identity file /Users/dgerman/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/dgerman/.ssh/id_dsa type -1
        debug1: identity file /Users/dgerman/.ssh/id_dsa-cert type -1
        ssh_exchange_identification: Connection closed by remote host

    ping to the host was successful, ftp to the host was successful, and ssh is now successful:

        ssh -v $RWS
        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: Applying options for *
        debug1: Connecting to Real-World-Systems.com [174.127.119.33] port 22.
        debug1: Connection established.
        debug1: identity file /Users/dgerman/.ssh/id_rsa type 1
        debug1: identity file /Users/dgerman/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/dgerman/.ssh/id_dsa type -1
        debug1: identity file /Users/dgerman/.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_4.3
        debug1: match: OpenSSH_4.3 pat OpenSSH_4*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'real-world-systems.com' is known and matches the RSA host key.
        debug1: Found key in /Users/dgerman/.ssh/known_hosts:5
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/dgerman/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Trying private key: /Users/dgerman/.ssh/id_dsa
        debug1: Next authentication method: password

    What gives?

    Client environment: Mac OS X 10.4.7, OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011

        /Users/dgerman/.ssh > ls -la
        total 24
        drwx------    7 dgerman staff   238 Jun 19 15:46 .
        drwxr-xr-x  389 dgerman staff 13226 Jun 19 15:46 ..
        -rw-------    1 dgerman staff  1766 Feb 26 18:25 id_rsa
        -rw-r--r--    1 dgerman staff   400 Feb 26 18:25 id_rsa.pub
        -rw-r--r--    1 dgerman staff    67 Feb 26 18:27 keyfingerprint
        -rw-r--r--    1 dgerman staff  6215 May  1 08:11 known_hosts
        -rw-r--r--    1 dgerman staff   220 Feb 26 18:26 randomart
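    Since the failure cleared on its own, the cause was most likely server-side; intermittent ssh_exchange_identification errors are commonly produced by TCP wrappers or sshd connection throttling rather than by the client. A sketch of the usual server-side checks, assuming shell access to the remote host and a stock sshd (file locations vary by distribution):

        grep -i sshd /etc/hosts.deny              # TCP wrappers; DenyHosts/fail2ban add entries here
        grep -i MaxStartups /etc/ssh/sshd_config  # throttling of concurrent unauthenticated connections
        grep sshd /var/log/secure | tail -50      # sshd's own log (auth.log on Debian-family systems)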

    Read the article

  • Cannot Send Item error in Outlook - permissions to registry?

    - by Tim Alexander
    The issue I am trying to solve is users getting a "Cannot Send Item" error in Outlook 2007 connecting to Exchange 2007. Basically, if there is an image in the email (either one they have pasted in or one from another email in the chain) they get the error. We initially thought it was a Citrix issue, but users get it when they RDP to a server as well. Changing the message to Rich Text works 80% of the time, but I do not think this is a solution, more of a temporary workaround.

    After some troubleshooting we found that the error can be fixed by making the user a member of the local Power Users group; of course this is not really a fix. My thought was that a power user's ability to add/remove software may give them more access to the registry, which might let them get around a restriction that applies to a normal user. I have tried going through a Process Monitor capture, but the wealth of information is confusing. It initially looked like it might be an Outlook 2007 email security setting, but this does not change between power user and normal user (set to 1 in the registry, "Use the security setting from Outlook Security Settings Public Folders"). I am struggling to fine-tune my troubleshooting to work out exactly what is blocking it. Has anyone had experience with an error like this? Or are there any tips for tracking down issues via procmon? I must admit my approach seems somewhat lacking :)

    EDIT: I have trawled through the two Process Monitor logs we have (one as a power user and one as a normal user). Annoyingly, I can find no obvious difference where something is denied access. There are more ACCESS DENIED events in the normal-user log, but these are quickly followed by successful entries to the same path fractions of a second later. The one thing that does stand out is an ACCESS DENIED on HKCR\.html, which does not even appear in the power-user version of the log. From what I understand, this key helps determine the default browser, which ties in nicely with the fact that 9 out of 10 times you can send the message as Rich Text.

    EDIT: Looks like KB2509470 was causing the issue. I am not really sure why, but when I can work out what it does and why it causes the problem I will post here, unless anyone beats me to it!
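    For anyone chasing the same HKCR\.html lead before removing the update: the association can be inspected with reg.exe (a real command; its relevance here is only inferred from the procmon trace above). A hypothetical spot check from the affected user's session:

        reg query HKCR\.html /ve    :: prints the ProgID (e.g. htmlfile) behind .html; an access
                                    :: error here would match the ACCESS DENIED seen in procmon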

    Read the article

  • Force an unattended apt-get install from PHP on Debian Squeeze

    - by user1258619
    I am trying to do an unattended install of several packages via PHP, but every time the dependency prompt comes up it aborts instead of forcing the answer to be yes (I have broken apt a few times...). Each time I start off by re-imaging my VPS (testing server), so there isn't an issue of something still being hung or crashed. Can someone tell me what I am doing wrong? Keep in mind this is the 12th version of this script, and I have gotten nowhere.

        fwrite(STDOUT, "Root Password:\n");
        $root_pass = chop(fgets(STDIN));
        $file_apt = '/etc/apt/apt.conf.d/70debconf';
        // Read the existing apt configuration
        $current_apt = file_get_contents($file_apt);
        // Append the dpkg option that keeps old config files on upgrade
        $current_apt .= "Dpkg::Options {\"--force-confold\";};\n";
        // Write the contents back to the file
        file_put_contents($file_apt, $current_apt);

        $update = shell_exec('echo '.$root_pass.' | DEBIAN_FRONTEND=noninteractive sudo -S apt-get update');
        echo $update;
        $update_upgrade = shell_exec('echo '.$root_pass.' | DEBIAN_FRONTEND=noninteractive sudo -s apt-get upgrade');
        echo $update_upgrade;
        $install_unattended_mysql = shell_exec('echo '.$root_pass.' | DEBIAN_FRONTEND=noninteractive apt-get install --yes --force-yes mysql-server');
        echo $install_unattended_mysql;
        $install_mysql_set_password = shell_exec('mysql -u root -e "UPDATE mysql.user SET password=PASSWORD("'.$root_pass.'") WHERE user="root"; FLUSH PRIVILEGES;');
        echo $install_mysql_set_password;

    I have read in a few places that I needed to edit the apt.conf file, so I am doing that here, then running an update and an upgrade. The upgrade also aborts when it actually has to install something:

        The following packages will be upgraded:
          apache2 apache2-doc apache2-mpm-prefork apache2-utils apache2.2-bin
          apache2.2-common base-files bind9 bind9-host bind9utils
          debian-archive-keyring dpkg dselect libbind9-60 libc-bin libc6 libdns69
          libisc62 libisccc60 libisccfg62 liblwres60 locales
        22 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 18.4 MB of archives.
        After this operation, 8192 B of additional disk space will be used.
        Do you want to continue [Y/n]? Abort.

    I should also note that only a few pieces of software will be installed from the apt repos, as I will include some binaries to go along with them.
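    A hedged reading of why the prompt still appears, based only on the script above: the upgrade line uses sudo -s (start a shell) where sudo -S (read the password from stdin) was presumably intended; DEBIAN_FRONTEND is set in the caller's environment and gets stripped by sudo unless passed through; and apt-get upgrade has no -y, so apt asks "Do you want to continue [Y/n]?", hits EOF on the already-consumed pipe, and aborts. A minimal corrected sketch of that one call (shown as shell for clarity; wrap it in shell_exec as above):

        # preserve the noninteractive frontend across sudo and auto-answer yes
        echo "$root_pass" | sudo -S env DEBIAN_FRONTEND=noninteractive \
            apt-get -y -o Dpkg::Options::="--force-confold" upgrade

    The MySQL line has a separate quoting bug: the inner double quotes around the PASSWORD() argument and "root" terminate the -e string early. Single-quoting the SQL values (escaped for the shell) or using mysqladmin -u root password avoids that.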

    Read the article

< Previous Page | 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209  | Next Page >