Search Results

Search found 5564 results on 223 pages for 'rollover effect'.


  • How can something relevant to graphics completely kill a motherboard?

    - by leladax
    I was coding something in OpenGL, and after a bug there was an 'OS slowed down' situation. After a few seconds the screen went blank and the laptop shut down. Now not a single LED turns on, whether on battery or AC. It doesn't appear to be the AC adapter or the battery, since there was some charge left when it died, and when it's connected to the AC the laptop produces, near the AC connector, a very slight 'clicking' noise (very faint; one has to listen carefully to notice it, and I don't know whether it was always there, to be honest). I suspect the motherboard died, somewhere between the point where it takes in AC or battery power and the point where it actually feeds itself. But I can't figure out how that could have been caused by the OpenGL bug or by graphics overheating. If the graphics chip alone had died, there should at least be some indication that the laptop is barely alive: an LED, a sound, anything. Instead the laptop is completely dead (other than the faint 'clicking' I mentioned). Does anyone have expert advice on this? I'm especially interested in any ideas connecting "graphics overheated/bugged" to "killed the motherboard". I have lengthy experience with this stuff as a hobbyist and it really puzzles me. It's not just an "AC adapter died" situation I can easily google.

    Read the article

  • Spotlight has stopped indexing/returning anything in /Applications

    - by pra
    After a recent kernel panic and restart, Spotlight no longer seems to know anything about the files under my /Applications folder. I used to launch Safari.app, Opera.app, TextEdit.app, etc. via Spotlight as a matter of routine. Now I get "No results found" for all of them (except TextEdit.app, which launches a demo text editor from a Qt installation). The programs are still there and still launch directly from Finder. I've already run Disk Utility and verified the disk; no issues. I repaired disk permissions, which made several changes, but to no effect. Is there anything else I can do, short of re-installing Mac OS X? Update: I already verified that "Applications" was still checked in my Spotlight preferences. Spotlight was still returning applications located elsewhere (the Qt TextEdit sample app), so that shouldn't have been the issue. A few hours later the problem resolved itself; I guess there's a background process running on some interval.
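
    A hedged suggestion for cases like this that don't fix themselves: Spotlight's index can be checked and rebuilt from the command line with mdutil, which ships with OS X. The volume path below is an assumption; point it at the affected disk.

        # check indexing status of the boot volume
        mdutil -s /
        # erase the index and force a full rebuild (may take a while)
        sudo mdutil -E /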

    Read the article

  • ssh many users to one home

    - by filippo
    Hiya, I want to allow some trusted users to scp files into my server (to a specific user's account), but I do not want to give these users a home directory of their own, nor an ssh login. I'm having trouble understanding the correct user/group settings needed to allow this. An example. Given: MyUser@MyServer, where MyUser belongs to the group MyGroup and MyUser's home is, let's say, /home/MyUser; plus SFTPGuy1@OtherBox1 and SFTPGuy2@OtherBox2. They give me their id_dsa.pub files and I add them to my authorized_keys. I reckon I'd then do on my server something like useradd -d /home/MyUser -s /bin/false SFTPGuy1 (and the same for the other), and lastly useradd -G MyGroup SFTPGuy1 (then again for the other guy). I'd expect the SFTP guys to then be able to sftp -o IdentityFile=id_dsa MyServer and be taken to MyUser's home... Well, this is not the case: sftp just keeps asking me for a password. Could someone point out what I am missing? Thanks a mil, f. [EDIT: Messa on Stack Overflow asked me whether the authorized_keys file was readable to the other users (members of MyGroup). It's an interesting point, and this was my answer: it wasn't (it was 700), but I then changed the permissions of the .ssh dir and the auth file to 750, still with no effect. It's probably worth mentioning that my home dir (/home/MyUser) is also readable by the group; most dirs are 750 and the specific folder where they'd drop files is 770. Nevertheless, regarding the auth file, I reckon the authentication would be performed by the local user on MyServer, wouldn't it? If so, I don't understand the need for other users to read it... well, just wondering.]
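
    A hedged note on the likely failure, plus a sketch of one common setup: sshd looks for authorized_keys under each login account's own home directory, and with StrictModes (the default) it silently rejects keys when home/.ssh isn't owned by that account, so pointing SFTPGuy1's home at MyUser's home tends to disable key auth entirely, which matches the password prompting. One pattern that avoids this keeps per-user accounts but confines them to SFTP in a shared directory via sshd_config; everything below is a sketch reusing the names from the example.

        # /etc/ssh/sshd_config (sketch)
        Subsystem sftp internal-sftp

        Match Group MyGroup
            ForceCommand internal-sftp
            # ChrootDirectory must be root-owned and not group-writable;
            # the writable drop folder lives underneath it
            ChrootDirectory /home/MyUser
            AllowTcpForwarding no

    Each SFTPGuyN then gets a minimal home of his own, just to hold ~/.ssh/authorized_keys with his key, or AuthorizedKeysFile can be pointed at a central per-user path.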

    Read the article

  • Motherboard Dying? AHCI Drive Init and boot loop intermittent failure

    - by Adam Heath
    My computer is now intermittently failing to boot. For the last couple of days, when I turned it on it hung on "AHCI Drive Init...", and when powered off and on again it booted up fine. Today it did the same, but also failed in a few other ways, seemingly at random:

      - Hangs on "AHCI Drive Init..."
      - Boot loop ("AHCI Drive Init..." appears for a split second, with no drives listed, then the machine resets)
      - Black screen ("AHCI Drive Init..." appears for a split second, then a black screen with all fans still running)

    The interesting part is that none of the above is affected by which drives are connected, or to which ports. I have tried both disks, each disk individually, and no disks (along with trying both the primary and secondary SATA controllers); none of this has any effect on what happens. After about 20+ attempts with different combinations, it suddenly decided to boot into Windows, and I hadn't touched anything for about 2 cycles.

      Motherboard: Gigabyte GA-870A-USB3
      Processor: AMD Phenom II X6 1090T
      RAM: 8GB Corsair 1600
      Primary Disk: Plextor 128GB SSD
      Secondary Disk: Western Digital Black 1TB
      OS: Windows 8.1

    Is this my motherboard dying? Or could something else be the cause? Thanks!

    Read the article

  • Want to send my neighbors to a certain website via DNS, but don't have a clue how. [closed]

    - by Akku
    My neighbors have an unsecured Wi-Fi router, and I could log in to its administration web UI because no password was set. I don't know which of my neighbors these are, and I'd like to configure their router so that they land on my website instead of Google and Facebook; I've set up a warning page in German there: http://www.abelssoft.de/liebenachbarn/ Basically, I just want to see if and how this is possible. I'm aware that I could just set a Wi-Fi password and make them call their network provider to reset the thing, but I really want to see if this could work, because it would be a way cooler effect :-). Now, this router interface doesn't allow custom redirects, only filters. BUT I can set the DNS servers it uses, so I thought it might be possible to set up a custom DNS server somewhere, set it as the router's main DNS, and have it resolve Google to the URL above. Is this possible? If so, please try to detail the steps I'd have to go through to achieve this. Note that I'm not a super-Linux-skilled person; I have a DynDNS account and a Windows machine it points to, as well as Apache+Tomcat, if that helps. I could also set up virtual machines on the Windows server and redirect to those using a different port. Or is there maybe a web service that provides such custom DNS?
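
    For what it's worth, a hedged sketch of the DNS side: a small resolver such as dnsmasq can answer selected names (or every name) with a fixed address, which is the usual mechanism behind this kind of redirect. The IP below is a placeholder for the machine serving the warning page, and HTTPS sites will of course throw certificate errors.

        # /etc/dnsmasq.conf (sketch)
        # answer these names with the warning server's address
        address=/google.com/203.0.113.10
        address=/facebook.com/203.0.113.10
        # or, in recent dnsmasq versions, send every name there:
        # address=/#/203.0.113.10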

    Read the article

  • How can I install iTunes in such a way that it can't put any "hooks" or helper programs on my computer?

    - by Joshua Carmody
    I'm buying a new iPad, which means I must once again install iTunes. I've not used iTunes in more than 6 months, since I bought a new computer. I don't like iTunes, but I can live with using it to buy/manage media and sync my Apple devices while the program is open. What I would like, though, is a way to install iTunes such that it has absolutely no effect on my system while it is closed. iTunes normally installs several helper programs, such as iTunesHelper.exe and the Bonjour service, which run in the background when iTunes is closed. You can force-close them or remove them from your startup items, but iTunes will often put them right back when you run it. I know these programs are mostly harmless, but they have at times caused issues, such as iTunes spending system resources trying to catalog media files on drives connected over VPN; at best they're just one more small background process eating up a small piece of my CPU time and RAM. How can I run iTunes without letting it get its "hooks" into my system? One thought I had is that I could create a Windows user account just for iTunes and deny it admin privileges; then, if I installed iTunes under that account, maybe anything it installed wouldn't affect the "main" account on my PC? But I'm not sure whether that would work... Failing that, maybe some kind of virtualization software or sandbox I could install it in? I'm open to any suggestions. My system is an Intel-based PC running Windows 7 Professional 64-bit. Thanks!
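
    One commonly suggested middle ground, offered here as a hedged sketch: let iTunes install its helpers, then disable the associated services and startup entries so nothing runs while iTunes is closed. The exact service and registry value names vary between iTunes versions, so treat the ones below as assumptions to verify in services.msc, and note that "Apple Mobile Device" usually has to be re-enabled before syncing an iPad.

        :: from an elevated command prompt
        sc config "Bonjour Service" start= disabled
        sc stop "Bonjour Service"
        sc config "Apple Mobile Device" start= disabled
        :: drop iTunesHelper from startup (verify the value name first with reg query)
        reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run" /v iTunesHelper /f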

    Read the article

  • How can I fix the "Display driver has stopped responding and has recovered" error?

    - by Vitor Rangel
    I'm using a GeForce GTX 580 with Windows 7 64-bit; the driver version is 301.42. The problem happens after a few minutes when I'm playing specific games, and I have no idea why it's these games in particular. The games that don't work: Battlefield 3, Civilization V, Sniper Elite V2. The games that work: Mass Effect 3, Crysis 2, Team Fortress 2, Left 4 Dead 2, Skyrim, L.A. Noire. As you can see, it's not a case of "the more demanding games stop working". I've tried updating the graphics card driver and the motherboard BIOS, and even formatted my computer (it needed it) and installed every driver at the latest available version. This problem has been happening since I bought the graphics card, 6 months ago. After 10 to 20 minutes, the pixels on the monitor turn strange, with random colors and artifacts, as if the image were broken. Then everything goes black, and the message appears: "Display driver has stopped responding and has recovered". After that I need to close the game and start it again. I am not overclocking, and my temperature never goes above 70°C.
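
    As a hedged aside: that message comes from Windows' Timeout Detection and Recovery (TDR), which resets the GPU when it fails to respond within roughly two seconds. A workaround often suggested while debugging is to lengthen that timeout via the registry; the 8-second value below is an arbitrary example, and this masks the symptom rather than fixing whatever makes the driver stall.

        :: elevated command prompt; reboot for the change to take effect
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers" /v TdrDelay /t REG_DWORD /d 8 /f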

    Read the article

  • Logitech Performance MX Mouse Jumps on OS X Lion (10.7.4)

    - by Adam Thompson
    I have a Logitech MX Revolution wireless mouse that I am trying to use with OS X Lion. Everything works except for one problem: there is a small, but quite noticeable, jump when the mouse cursor is moved. The problem is most prevalent when dragging and dropping files or trying to highlight items, and it makes performing any task with the mouse accurately next to impossible. I did quite a bit of looking and found that all kinds of people have had mouse issues with OS X. I've tried all of the following with absolutely no success:

      - Using the official drivers from Logitech (these performed worse than the default mouse drivers in OS X)
      - Using SteerMouse as a third-party mouse driver (this worked ever so slightly better than the default driver, but still suffered quite frequently from the skipping problem)
      - Cleaning the sensor on the mouse and making sure the problem isn't caused by the surface it's used on
      - Testing the mouse on a Windows machine, where it worked absolutely flawlessly
      - Changing the channel my wireless router operates on, on the off chance my problems were the result of interference (this also had no effect)

    I can't think of anything else that could possibly interfere with the mouse, and I am out of ideas on what to try, so I would really appreciate any suggestions. I should also mention that an old wired mouse I had lying around worked just fine when I plugged it in. That isn't really a solution, however, as I much prefer the MX Revolution.

    Read the article

  • Conditionally set an Apache environment variable

    - by Tom McCarthy
    I would like to conditionally set the value of an Apache2 environment variable and assign a default value if none of the conditions is met. This example is a simplification of what I'm trying to do, but in effect: if the subdomain portion of the host name is hr, finance or marketing, I want to set an environment variable named REQUEST_TYPE to 2, 3 or 4 respectively; otherwise it should be 1. I tried the following configuration in httpd.conf:

        <VirtualHost *:80>
            ServerName foo.com
            ServerAlias *.foo.com
            DocumentRoot /var/www/html
            SetEnv REQUEST_TYPE 1
            SetEnvIfNoCase Host ^hr\. REQUEST_TYPE=2
            SetEnvIfNoCase Host ^finance\. REQUEST_TYPE=3
            SetEnvIfNoCase Host ^marketing\. REQUEST_TYPE=4
        </VirtualHost>

    However, the variable is always assigned a value of 1. The only way I have so far been able to get it to work is to replace "SetEnv REQUEST_TYPE 1" with a regular expression containing a negative lookahead:

        SetEnvIfNoCase Host ^(?!hr\.|finance\.|marketing\.) REQUEST_TYPE=1

    Is there a better way to assign the default value of 1? As I add more subdomain conditions, the regular expression could get ugly. Also, if I want to allow another request attribute to affect REQUEST_TYPE (e.g. if Remote_Addr is 192.168.1.[100-150], then REQUEST_TYPE = 5), then my current method of assigning a default value (the regular expression with a negative lookahead) probably won't work.
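
    A hedged explanation for the "always 1" behaviour: mod_env's SetEnv is applied late in request processing, after mod_setenvif has already run, so the unconditional SetEnv overwrites whatever the SetEnvIf lines set. Since SetEnvIf directives are evaluated in the order they appear, a cleaner default is an initial SetEnvIf that matches every Host value; later conditions then keep overriding it:

        SetEnvIfNoCase Host ^ REQUEST_TYPE=1
        SetEnvIfNoCase Host ^hr\. REQUEST_TYPE=2
        SetEnvIfNoCase Host ^finance\. REQUEST_TYPE=3
        SetEnvIfNoCase Host ^marketing\. REQUEST_TYPE=4
        # sketch of the address case mentioned above (matches 192.168.1.100-150)
        SetEnvIf Remote_Addr ^192\.168\.1\.(1[0-4][0-9]|150)$ REQUEST_TYPE=5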

    Read the article

  • Error code 1005 (errno: 121) upon create table while restoring MySQL database from a dump

    - by Jonathan
    I have a Linux prod machine and a Win7 64-bit dev machine. My workflow includes dumping the production MySQL database on the Linux machine and restoring it into my local MySQL database on the Windows machine (using SQLyog). This worked fine for a long time. Following some trouble, I formatted and reinstalled my Windows dev machine, and since then I'm unable to restore the db on it. I keep receiving the following error:

        Query: CREATE TABLE `auth_group` (
          `id` int(11) NOT NULL auto_increment,
          `name` varchar(80) collate utf8_unicode_ci NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `name` (`name`)
        ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

        Error occured at: 2010-06-26 17:16:14
        Line no.: 30
        Error Code: 1005 - Can't create table 'ap_site.auth_group' (errno: 121)

    Notice that this is the first CREATE TABLE statement in the SQL dump file. The error occurs both on MySQL Community Server 5.1.41 and 5.1.48, and with SQLyog Community 8.0.4 and 8.5.1. I really don't know what's different in my configuration between before the reinstall and now, or why it has this effect. Restoring from an SQL dump is something I need to keep doing, so I need a permanent fix and not a tailored workaround.
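
    A hedged diagnostic note: errno 121 is "duplicate key on write or update", and on an InnoDB CREATE TABLE it classically means an object with the same name already exists in InnoDB's internal data dictionary (often a leftover foreign-key constraint, or an orphaned table in the shared tablespace). A few standard checks, assuming the MySQL client utilities are on the path:

        # decode the error number
        perror 121
        # the LATEST FOREIGN KEY ERROR section usually names the real culprit
        mysql -u root -p -e "SHOW ENGINE INNODB STATUS\G"
        # check for leftovers in the target schema before restoring
        mysql -u root -p -e "SHOW TABLES FROM ap_site"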

    Read the article

  • Sending e-mail on behalf of our customer(s), with Postfix

    - by NathanE
    We send e-mail on behalf of our customers, via our own SMTP services. It has always been a problem for us, because our "spoofing" of their source addresses usually results in the mails being caught in spam traps. That didn't matter in the past, given the small volume and low importance of the mails we sent, but the requirement has recently changed and we need to fix this issue. We realise that fundamentally our application is sending e-mail incorrectly, as per this post: Send email on behalf of clients. However, we would like to resolve the problem at the SMTP server level. We have deployed a server running Postfix. Is it possible to have Postfix automatically adjust the mail headers so that we get this "sent on behalf of" behaviour? I figure it should just be a case of Postfix noticing that the From address is spoofed (i.e. a domain that is not mentioned anywhere in its config) and injecting/replacing the appropriate headers to get the desired effect. Thanks.
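
    A hedged sketch of one way this is sometimes approached in Postfix: mail clients render "on behalf of" when a Sender: header differs from From:, and header_checks can PREPEND such a header for messages whose From: is not one of our own domains (Postfix regexp tables accept a leading ! for negation). The domains and file names below are assumptions, and the real deliverability fix is usually SPF/DKIM alignment rather than header adjustments.

        # main.cf
        smtp_header_checks = regexp:/etc/postfix/on_behalf_checks

        # /etc/postfix/on_behalf_checks
        # add a Sender: header to anything not From: our own domain
        !/^From:.*@ourdomain\.example/ PREPEND Sender: bounces@ourdomain.example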

    Read the article

  • How can I "filter" postfix-generated bounce messages?

    - by Flimzy
    We are using Postfix 2.7 and a custom SMTPD (based on qpsmtpd) in a highly customized spam-filtering configuration. We have a new requirement to filter Postfix-generated bounces through our custom qpsmtpd process (not so much for content filtering as to process these bounces accordingly). Our current configuration looks (in part) like this in master.cf (only customizations shown):

        2526      inet  n  -  -  -   0  cleanup
        pickup    fifo  n  -  -  60  1  pickup
            -o content_filter=smtp:127.0.0.2

    Our smtpd injects messages into Postfix on port 2526 by speaking directly to the cleanup daemon, and the custom pickup entry instructs Postfix to hand all locally generated mail (from cron, nagios, or other custom scripts) to our custom smtpd. The problem is that this configuration does not affect Postfix-generated bounce messages, since they do not go through the pickup daemon. I have tried adding the same content_filter option to the bounce daemon entries, but it does not seem to have any effect:

        bounce    unix  -  -  -  -  0  bounce
            -o content_filter=smtp:127.0.0.2
        defer     unix  -  -  -  -  0  bounce
            -o content_filter=smtp:127.0.0.2
        trace     unix  -  -  -  -  0  bounce
            -o content_filter=smtp:127.0.0.2

    For reference, here is my main.cf as well:

        biff = no
        # TLS parameters
        smtpd_tls_loglevel = 0
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls = yes
        smtpd_tls_session_cache_database = btree:${queue_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${queue_directory}/smtp_scache
        smtp_tls_security_level = may
        mydestination = $myhostname
        alias_maps = proxy:pgsql:/etc/postfix/dc-aliases.cf
        transport_maps = proxy:pgsql:/etc/postfix/dc-transport.cf
        # This is enforced on incoming mail by QPSMTPD, so this is simply
        # the upper possible bound (also enforced in defaults.pl)
        message_size_limit = 262144000
        mailbox_size_limit = 0
        # We do our own message expiration, but if we set this to 0, then postfix
        # will try each mail delivery only once, so instead we set it to 100 days
        # (which is the max postfix seems to support)
        maximal_queue_lifetime = 100d
        hash_queue_depth = 1
        hash_queue_names = deferred, defer, hold

    I also tried adding the internal_mail_filter_classes option to main.cf, but that also had no effect:

        internal_mail_filter_classes = bounce,notify

    I am open to any suggestions, including handling our current content-filtering loop in a different way. If it's not clear what I'm asking, please let me know and I will try to clarify.
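
    A hedged observation on why those attempts may be no-ops: postconf(5) describes internal_mail_filter_classes as controlling which categories of Postfix-generated mail are subject to header_checks/body_checks and the non-smtpd milters, not to content_filter. If that reading is right, one sketch of a workaround is to let a header_checks FILTER action (which that parameter does govern) hand bounces to the custom smtpd. This is untested, and the pattern below is an assumption:

        # main.cf (sketch)
        internal_mail_filter_classes = bounce
        header_checks = regexp:/etc/postfix/bounce_checks

        # /etc/postfix/bounce_checks
        /^From: MAILER-DAEMON/ FILTER smtp:127.0.0.2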

    Read the article

  • Effects of internet connection speeds on server queries

    - by SephMerah
    Can my internet connection significantly affect queries run in phpMyAdmin? I am currently on 18 down and 30 up. I switched internet connections today and noticed a sharp drop in query performance. The query that I am running is SELECT * FROM table; simple, and the table has one row of data. The MySQL server is on the same box as everything else, a VPS hosted by GoDaddy. I don't have any other information: CentOS 6.3, MySQL 5.1, phpMyAdmin 3.4. Okay, I used Google's developer tools to inspect the XHR going out and coming in, and this is what it reported:

        {"success":true,"message":"<div class=\"success\">Your SQL query has been executed successfully ( Query took 0.0033 sec )<\/div>","sql_query":"<div id=\"result_query\" align=\"\">\n<div class=\"success\">Your SQL query has been executed successfully ( Query took 0.0033 sec ) SNIP..................."}

    So apparently my server is fine. The strange thing, though, is that the returned XHR comes back as soon as I execute the query on the page, within less than a second, yet phpMyAdmin does not report the result immediately. I am going to try a re-install.
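
    A hedged way to separate server time from transport and rendering time: run the same query through the mysql CLI on the VPS itself and time it. If that is fast (as the 0.0033 sec above already suggests), the slowness lives in the connection or in phpMyAdmin's page rendering, not in MySQL. Names and URLs below are placeholders.

        # on the VPS, bypassing the network entirely
        time mysql -u dbuser -p -e "SELECT * FROM table" dbname
        # from the workstation, to measure network + HTTP overhead
        time curl -s -o /dev/null http://vps.example.com/phpmyadmin/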

    Read the article

  • External HDD incorrectly detected as internal - how do I change this to enable hot swap/eject?

    - by Sam
    Hi all, I have Windows 7 x64 Home Premium. The HDD is a Seagate Barracuda 7200.7 ST3120827AS, 3.5", serial 3ms006n6, firmware 3.42 (no further updates), in a NexStar CX external case (drivers installed). I have three drives:

      - WD320 with the OS installed (internal)
      - WD750 for data storage (internal)
      - Seagate 120 (external), connected via an eSATA bracket wired to a SATA port on the motherboard (MSI P43 Neo)

    I tried uninstalling the HDD in Device Manager, to no effect. Also, the internal WD750 is detected as an external drive, and the Windows taskbar icon allows it to be ejected (unlike the Seagate). All drives are configured Online, Simple, Basic, NTFS, Active, Primary Partition (except the C: drive). The Seagate was previously used as a primary disk with an XP operating system, so I deleted the volume and created/reformatted it (not quick); the HDD is no longer "Active", but that did not fix the problem.

    Background: originally I installed Windows 7 with the BIOS set to IDE and forgot to install the chipset drivers. Then I changed Windows 7 to install the AHCI drivers, changed the BIOS to AHCI and rebooted. Windows 7 loaded drivers, but the WD HDD gave problems/crashed. I installed the chipset drivers and the latest Intel storage matrix software (in safe mode). Everything worked fine after that, except for the problem of not correctly detecting the external drive.

    I have noticed that in the driver properties (and similarly in the registry) the two drives are configured differently: e.g. in the driver details, the capabilities value for the WD is 0000006 (CM_DEVCAP_REMOVABLE & CM_DEVCAP_EJECTSUPPORTED), whereas the Seagate shows 0000080 (CM_DEVCAP_SURPRISEREMOVALOK). Any easy way to configure things? I tried physically swapping the SATA connections on the motherboard, without success. So far I have found that a solution to my problem might be to perform some registry changes: http://superuser.com/questions/12955/how-do-i-remove-the-option-to-eject-sata-drives-from-the-windows-7-tray-icon

    Read the article

  • Why does my microwave kill the Wi-Fi?

    - by Ohlin
    Every time I start the microwave in the kitchen, our home Wi-Fi stops working and all devices lose connection with our router! The kitchen and the Wi-Fi router are in opposite ends of the apartment, but devices are being used a little here and there. We had been annoyed by the instability of the Wi-Fi for some time, and it wasn't until recently that we realized it was correlated with microwave usage. After some testing with the microwave on and off, we could narrow the problem down to occurring only when the router is in b/g/n mode and uses a fixed channel. If I change to b/g mode, or set the channel to Auto, there is no problem any more... but still! The router is a ZyXEL P-661HNU ("802.11n Wireless ADSL2+ 4-port Security Gateway", with the latest firmware) and the microwave is made by Neff, rated at 1000 W (if this information is useful to anyone). There is an "internet connection" light on the router, and it doesn't go out when the interruption occurs, so I think this is purely an internal Wi-Fi issue. Now to my questions:

      1. What parts of the Wi-Fi can possibly be affected by microwave usage? The radio frequency? Disturbances in the electrical system?
      2. How can setting the channel to Auto make a difference? I thought the different channels were just some kind of separation system within the same frequency spectrum?
      3. Could this be a sign that the microwave is malfunctioning and slowly roasting us all at home? Is there any need to be worried?

    Since we were able to find router settings that cooperate well with our microwave's demand for attention, this question is mainly out of curiosity. But as most people out there... I just can't help the fact that I need to know how it's possible :-)

    Read the article

  • Flickering dual screens in Virtual Box Ubuntu 13.10 Guest

    - by alexleonard
    I have Ubuntu 13.10 x64 installed as a guest in VirtualBox (under a Windows 8.1 host), with the virtual machine set up with a monitor count of 2, 128MB video memory and 3D acceleration enabled. In the guest I have the VirtualBox guest additions installed (which allowed me to have two 1920x1080 screens). Here's a screenshot of my VM settings. My laptop is an Asus N550JV, which has both Intel's HD Graphics 4600 GPU and NVidia's GeForce GT 750M; by default I believe the Intel graphics card is being used to render the VM. When I boot up the VM it loads perfectly on dual screens, but whenever I move the mouse from one screen to the other (the second screen is a Dell S2340L connected over HDMI), the screen flickers. I've tried a variety of settings changes in both Ubuntu and the VM configuration, but cannot seem to stop this flicker. I also used the NVidia control panel in Windows to force the dedicated graphics card to always be used, but found that the display driver sometimes crashed while I was working in the VM, destroying my VM session, so I figured it's better to stick with the Intel graphics, which appear to be more stable. I also tried without 3D acceleration, but that was much worse, and if I ran the VM with a low amount of graphics memory it really struggled. Here's my dmesg output: http://pastebin.com/1LJuYWMj (not sure if this is helpful in this situation). I read some posts suggesting changes to /etc/X11/xorg.conf, but I don't appear to have an xorg.conf file. There were also a few posts (though related to Synergy) suggesting running xset -dpms, but this command doesn't appear to have had any effect for me. As an additional note, I'm finding that window drawing in the guest is a little laggy/glitchy: quickly scrolling through a web page may leave parts of the viewport showing stale content, and other software is affected too, with parts of the window not being redrawn when, say, switching between accounts in Thunderbird. Any suggestions greatly appreciated!
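
    For anyone reproducing the setup, a hedged sketch of the same guest display settings applied from the command line with VBoxManage, which makes it quick to flip options on and off between tests; the VM name is a placeholder, and the VM must be powered off first.

        VBoxManage modifyvm "Ubuntu 13.10" --monitorcount 2 --vram 128 --accelerate3d on
        # some guests behave better with 3D acceleration off; worth testing both ways
        VBoxManage modifyvm "Ubuntu 13.10" --accelerate3d off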

    Read the article

  • Mapped network drive missing from My Computer and Explorer

    - by matt wilkie
    On a Windows XP Pro SP3 machine, one network drive refuses to show up in My Computer or Explorer. The missing drive letter is G:, if that matters. Other mappings work fine, and other profiles on the same machine have no problem mapping G:. I can access G: just fine by typing it into the address bar or in a CMD shell. I've used TweakUI to toggle hide/show G: with no difference; TweakUI says G: should be visible. I've logged off and on between toggles to make sure the settings take effect. I've looked at the reg key [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer] and made sure it's zeroed. [insert ref link here] We've limped along with this broken setup for some time, just working around it, but some applications do not allow typing a path when choosing where to save files, and it's reached the point where it's intolerable. So, does anyone have any idea why XP won't show this drive letter, or how to fix it?
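
    A hedged check along the same lines: the value under that key which hides drive letters is the NoDrives bitmask (A: = 1, B: = 2, C: = 4, ... so G: is 64), and it can live under HKCU or HKLM. Querying both only takes a moment:

        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoDrives
        reg query "HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoDrives
        :: if a value includes bit 64, deleting it (and logging off) should restore G:
        :: reg delete "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoDrives /f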

    Read the article

  • Samba access works with IP address only

    - by Sebastian Rittau
    I added a Debian etch host (hostname: webserver, IP address: 192.168.101.2) running Samba to a Windows network with a Windows 2003 PDC (IP address 192.168.101.3). The Samba server exports a public guest share called "Intranet". The server shows up fine in the network, but clicking on it produces an error dialog stating I don't have the necessary permissions, as does entering \\webserver manually; \\webserver\intranet states that the path does not exist. Interestingly, accessing the share by IP address (\\192.168.101.2 or \\192.168.101.2\intranet) works fine. DNS is configured correctly, and "smbclient //webserver/intranet" on another Linux client works fine. One complicating issue is that the webserver is a VMware virtual machine running on the PDC server. Here is our smb.conf:

        [global]
        workgroup = Foobar
        server string = Webserver
        wins support = yes           ; commenting out these
        wins server = 192.168.101.3  ; two lines has no effect
        dns proxy = no
        guest account = nobody
        [... snipped some unrelated bits, like logging ...]
        security = share
        [... snipped some password-related things ...]
        domain master = no

        [intranet]
        comment = Intranet
        path = /srv/webserver/contents
        browseable = yes
        guest ok = yes
        guest only = yes
        read only = yes
        create mask = 0775
        directory mask = 0775
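
    A couple of hedged diagnostics for the name-versus-IP split: when IP access works but the name doesn't, the failure is almost always NetBIOS name resolution, which Samba's own tools can probe. (As an aside, the Samba documentation says wins support = yes and wins server = ... must not be set together, which may be worth untangling first.) These commands assume another Linux box with the Samba client utilities installed:

        # does broadcast NetBIOS resolution find the server?
        nmblookup webserver
        # ask the Windows PDC's WINS service directly
        nmblookup -U 192.168.101.3 -R webserver
        # confirm the NetBIOS name Samba advertises (defaults to the hostname)
        testparm -s 2>/dev/null | grep -i netbios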

    Read the article

  • Ping with explicit next-hop selection (aka Monitoring multiple default gateways)

    - by Michuelnik
    I have a Linux (Debian) router with two internet connections, (A) and (B). (A) is preferred; (B) is the fallback. I want to monitor the internet connection itself (not just the availability of the gateways!) and change the default route appropriately:

      1. If (A) is not providing internet, switch to (B).
      2. If (A) is providing internet again, switch back to (A).

    The only problem I have is with case (2): my routing table then points at a working internet link, so I cannot easily detect whether internet is working over link (A) again. I am searching for a ping or traceroute (or other diagnostic tool) which can select the next hop explicitly. ping -r looks promising, but can only ping a host on the LAN (it only has to write another destination address in the packet, damnit!). traceroute -g gateway looks even more promising and nearly does what I want, but it sets source-routing options which my next hops deny (not within my administrative boundary...). I just want a ping that can:

      - select a source interface (and address)
      - select a next hop on that interface
      - ping any arbitrary IP address

    I could do evil trickery with policy-based routing, but that would have production impact for all users. I would like to see a side-effect-free solution...
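
    For the record, a hedged sketch of the policy-routing variant that comes close to side-effect-free: a rule matching only packets sourced from link (A)'s own address touches nothing else on the box, and ping -I then forces that source address, and hence that next hop. Addresses, device and table number below are placeholders.

        # one-time setup: a dedicated table whose default route is link A's gateway
        ip route add default via 198.51.100.1 dev eth0 table 100
        # only packets already carrying link A's source address use that table
        ip rule add from 198.51.100.2 table 100
        # the probe itself: the source address selects the rule, hence the next hop
        ping -I 198.51.100.2 -c 3 8.8.8.8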

    Read the article

  • How to keep subtree removal (`rm -rf`) from starving other processes for Disk I/O?

    - by David Eyk
    We have a very large (multi-GB) Nginx cache directory for a busy site, which we occasionally need to clear all at once. I've solved this in the past by moving the cache folder to a new path, making a new cache folder at the old path, and then rm -rf'ing the old cache folder. Lately, however, when I need to clear the cache on a busy morning, the I/O from rm -rf starves my server processes of disk access, as both Nginx and the server it fronts are read-intensive. I can watch the load average climb while the CPUs sit idle, and rm -rf takes 98-99% of disk I/O in iotop. I've tried ionice -c 3 when invoking rm, but it seems to have no appreciable effect on the observed behavior. Is there any way to tame rm -rf to share the disk more, or do I need to use a different technique that will take its cues from ionice? Update: the filesystem in question is an AWS EC2 instance store (the primary disk is EBS). The /etc/fstab entry looks like this:

        /dev/xvdb  /mnt  auto  defaults,nobootwait,comment=cloudconfig  0  2
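
    A hedged explanation for ionice doing nothing: per the ionice man page, the idle and best-effort classes are only honoured by the CFQ I/O scheduler, and instance-store volumes often default to deadline or noop, where ionice is a no-op. Worth checking before anything else (the device name is taken from the fstab above):

        # the scheduler in [brackets] is the active one
        cat /sys/block/xvdb/queue/scheduler
        # either switch to cfq so ionice -c 3 means something...
        echo cfq | sudo tee /sys/block/xvdb/queue/scheduler
        # ...or pace the deletion itself, e.g. sleep between batches of unlinks
        find /mnt/old-cache -type f -print0 |
            while IFS= read -r -d '' f; do
                rm -- "$f"
                (( ++n % 1000 == 0 )) && sleep 1
            done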

    Read the article

  • How do you set max execution time of PHP's CLI component?

    - by cwd
    How do you set the max execution time of PHP's CLI component? I have a CLI script that has gone into an infinite loop and I'm not sure how to kill it without restarting. I used Quicksilver to launch it, so I can't press Ctrl+C at the command line. I tried running ps -A (show all processes), but php is not showing up in that list, so perhaps it has timed out on its own. But how do you manually set the time limit? I'm used to setting max_execution_time for the version of PHP that runs with Apache, but I have no idea where to set it for the version of PHP that lives in /usr/bin. I did see the following quote, which does seem to be accurate (see screenshot below), but having an unlimited execution time doesn't seem like a good idea:

        Keep in mind that for CLI SAPI max_execution_time is hardcoded to 0. So it seems to be changed by ini_set or set_time_limit but it isn't, actually. The only references I've found to this strange decision are deep in bugtracker (http://bugs.php.net/37306) and in php.ini (comments for 'max_execution_time' directive).

    (via http://php.net/manual/en/function.set-time-limit.php) ini_set('max_execution_time') has no effect. I tried the same thing and got the same result with set_time_limit(7).
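
    Two hedged stopgaps for the immediate problem: a runaway CLI interpreter can be found and killed by name, and a hard wall-clock cap can be imposed from outside PHP at launch time. The script name is a placeholder; timeout is GNU coreutils, so on OS X it may need installing first (it then typically appears as gtimeout).

        # find and kill the runaway interpreter
        pgrep -lf php
        pkill -f myscript.php
        # impose an external wall-clock limit when launching
        timeout 30 php myscript.php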

    Read the article

  • In APC+PHP, how much RAM is too much? Is it okay to set apc.shm_size to many GB?

    - by Jeremy Clarke
    On our server we have a LOT of RAM for our traffic levels (16GB). The HTTP processes regularly eat up all the CPU and need to be restarted, without ever getting close to using swap, so I'm looking for ways to spend RAM to ease the load on Apache (and/or to help the separate MySQL server, which may be what's breaking Apache). I have many WordPress installs on the httpd instance, so APC sometimes uses as much as 900MB of RAM (according to the apc.php charts). Just in case, I have apc.shm_size set to 1600MB, which is more than it needs but not more than I can spare. This means there is usually lots of extra RAM available to APC, but also very little turnover, and fragmentation is never more than 1%. Is this dangerous? Should I be slimming APC down to less than 1GB just on principle? Should I expect some turnover within APC in the name of bringing its overall footprint down? Having so much memory devoted to APC means that in top/htop every single httpd process shows ~1.9GB in the VIRT memory column. Obviously this is shared memory and not used per-process, but could it be hurting our server? NOTE: the underlying problem with the server remains unclear, but the effect is that about 60 times a day all 8 CPUs fill up to 100% and everything stops working until Monit sees that Apache is broken and restarts it (Monit also saves the MySQL server). I'm not sure if APC is even part of the problem, but I'm trying to optimize everything just in case.
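
    A hedged way to see whether 1600MB is genuinely oversized, using APC's own API: apc_sma_info() reports shared-memory segment size versus free memory, the same numbers apc.php charts. The snippet must run under Apache/mod_php (dropped in a restricted corner of the docroot), because the CLI gets its own separate shared-memory segment and would report meaningless numbers; the file name is a placeholder.

        <?php
        // apc-usage.php: shared-memory segment size vs. free space
        $i = apc_sma_info(true);
        printf("segment: %d MB, free: %d MB\n",
               $i['seg_size'] / 1048576, $i['avail_mem'] / 1048576);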

    Read the article

  • Windows 7 sharing folder from command line, selecting users and triggering the "Apply" of changes

    - by clintp
    I have a drive that doesn't get mounted until after I log in (a TrueCrypt thumb-drive volume, and no, I'm not making it a "System Favorite" to get around this). I'd like to build a batch file to share it once I've mounted it, because the sharing info doesn't seem to persist through a reboot. From the GUI, I'd go into the folder's Properties > Sharing, pick the share name under Advanced Sharing, and then under the "Share..." button pick the users and the permissions I want to grant them. After "Apply" there's a pause (I'm not sure what's happening here, but the dialog says "Sharing items...") and then everything is okay. From the command line, I've done:

        net share MyFolder=F:\MyFolder
        cacls F:\MyFolder /G FirstUser:F
        cacls F:\MyFolder /G OtherUser:F

    And this almost works: I can see the share on the network, but nobody has permission to do anything. If I go into the GUI, change anything (I can see my command-line changes in there already) and press "Apply", I get the "Sharing items... This may take a few minutes" dialog, and then voila! It works. I get the "Your folder is shared" dialog with the command-line changes I made, along with the GUI change I made to trigger the "Sharing items..." dialog. Everything's peachy. Is a service being restarted? Which one? What's triggering the sharing to take effect? And, more importantly, how do I do it from the command line?
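
    A hedged guess at the difference: the GUI sets both the share-level permissions and the NTFS ACLs, whereas the commands above create the share with default share permissions and then replace the NTFS ACL twice (cacls /G without /E wipes the previous grant, so only OtherUser's entry survives). On Windows 7 the same end state can usually be reached in one pass with net share's /GRANT switch plus icacls:

        net share MyFolder=F:\MyFolder /GRANT:FirstUser,FULL /GRANT:OtherUser,FULL
        icacls F:\MyFolder /grant FirstUser:(OI)(CI)F OtherUser:(OI)(CI)F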

    Read the article

  • Windows 7 Update freezes - what to do?

    - by Tom Tom
    Hi, yesterday I shut down my notebook and Windows 7 Ultimate started to install automatic updates. After one hour I noticed that the update was still running. I thought, OK, I shall go to sleep and let it run. In the morning it was still running. I therefore assumed it had crashed, forced a shutdown of the notebook and restarted it, with the same effect: the notebook "freezes" at "Installing update 1 of 5". It does not look like it has crashed (the progress wheel is still moving), but it makes no progress... I would appreciate any help! Edit: OK, I was able to log in in safe mode, and this way I got past the install-update screen. I do not want to disable updates in general; what can I do to skip only the last update, which is causing the trouble? And how can I find out what the problem with that update is?
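
    A hedged sketch of the usual recovery steps: each attempt is logged in C:\Windows\WindowsUpdate.log, which names the failing update's KB number, and clearing the update download cache makes Windows re-detect all pending updates so the problem one can be hidden individually in the Windows Update UI. From an elevated command prompt (in safe mode if need be):

        net stop wuauserv
        ren C:\Windows\SoftwareDistribution SoftwareDistribution.old
        net start wuauserv
        :: then look up the failing KB number
        notepad C:\Windows\WindowsUpdate.log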

    Read the article

  • To what extent is size a factor in SSD performance?

    - by artif
    To what extent is the size of an SSD a factor in its performance? In my mind (correct me if I'm wrong), a bigger SSD should, everything else being equal, be faster than a smaller one. A bigger SSD has more erase blocks, and thus more leeway for the FTL (flash translation layer) to do garbage-collection optimization; there would also be more time before TRIM became necessary. I see that Wikipedia remarks that "the performance of the SSD can scale with the number of parallel NAND flash chips used in the device", so throughput also seems to increase significantly. Many SSDs also contain internal caches of some sort, and presumably those caches are larger in correspondingly larger SSDs. But supposing this effect exists, I would like a quantitative analysis: does throughput increase linearly? How much is garbage collection impacted, if at all? Does latency stay the same? And so on. Would the performance of an 8 GB SSD be significantly different from, for example, an 80 GB SSD, assuming both used high-quality chips, controllers, etc.? Are there any resources (web pages, research papers, presentations, books, etc.) that discuss correlations between SSD performance (4 KB random write speed, latency, maximum sequential throughput, etc.) and size? I realize this does not really sound like a programming question, but it is relevant to what I'm working on (using flash for caching hard-drive data), which does involve programming. If there is a better place to ask this question, e.g. a more hardware-oriented site, what would that be? Something like the equivalent of Stack Overflow (or perhaps a forum) for in-depth questions on hardware interfaces, internals, etc. would be appreciated.
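
    For measuring the metrics listed above on actual hardware, a hedged sketch using fio, the standard flexible I/O tester; paths and sizes are placeholders, and --direct=1 plus a dedicated test file keep the OS page cache out of the numbers.

        # 4 KB random writes at queue depth 32: the figure that varies most with drive size
        fio --name=randwrite --filename=/mnt/ssd/testfile --size=4g \
            --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
            --direct=1 --runtime=60 --time_based --group_reporting
        # sequential read throughput for comparison
        fio --name=seqread --filename=/mnt/ssd/testfile --size=4g \
            --rw=read --bs=1m --direct=1 --runtime=60 --time_based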

    Read the article
