Search Results

Search found 8782 results on 352 pages for 'restart processes'.

Page 170/352

  • Shut Down took way too long because of "Background Programs"

    - by Christopher Chipps
    I tried shutting down my desktop PC (Windows 7), but after several attempts (4 or 5) at Start -- Shut Down, the GUI was still there and it was not shutting down. I didn't think any programs were running when I pressed Shut Down, so I opened Task Manager (Ctrl + Alt + Del) to check the processes. Once I did that, a screen appeared with a message stating that there were "background programs" still running, and it gave me a "Force Shut Down" option, which I pressed, and it shut down normally. Does anyone know why this would happen?

    Read the article

  • mysql my.cnf ignored

    - by mr12086
    [issue] I'm trying to modify a my.cnf value on my production server, but the changes aren't taking effect after a sudo service mysql restart. Using an exact copy of the my.cnf (downloaded and replaced the original) on my development server, the changes are visible in SHOW VARIABLES from the mysql command line. my.cnf is located at /etc/mysql/my.cnf:

        sudo find / -name my.cnf
        /etc/mysql/my.cnf

    So only one file exists on the entire system. Production is Ubuntu 10.04 LTS 64-bit, development is Ubuntu 11.10 32-bit; the MySQL versions are 5.1.61 and 5.1.62 respectively.

    [my.cnf] Yes, it seems to have had all the comments removed and replaced with whitespace.

        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        bind-address = 127.0.0.1
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 8
        myisam-recover = BACKUP
        query_cache_limit = 1M
        query_cache_size = 16M
        log_error = /var/log/mysql/error.log
        expire_logs_days = 10
        max_binlog_size = 100M
        innodb_file_per_table = 1

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]

        [isamchk]
        key_buffer = 16M

        !includedir /etc/mysql/conf.d/
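
    A quick sanity check for this situation (a sketch; nothing here is specific to the question beyond the Ubuntu paths): ask mysqld itself which files it reads, and compare an edited value against the running server.

        # Show the config files mysqld reads, in order
        mysqld --verbose --help 2>/dev/null | grep -A1 'Default options'
        # Compare the edited setting with what the server actually uses
        mysql -e "SHOW VARIABLES LIKE 'key_buffer_size';"
        # Make sure nothing under conf.d silently overrides my.cnf
        ls -l /etc/mysql/conf.d/

    If a file under conf.d sets the same variable, it wins, because the !includedir line is processed after the main body of my.cnf.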

    Read the article

  • maemo - n900 - SIP call quality

    - by Walter White
    Hi all, I have been using SIP/VoIP on my N900 to make calls, and my problem is that after about 15 minutes of talk time (more recently exactly 18 minutes), the connection dies and I can no longer hear them, nor they me. I have tested this with various VoIP providers to confirm that it is not specific to any one provider but to my phone, and I have also tested this on my laptop. I sent the phone to a place that tests hardware, and no problems were found. What can I do to get past the 15-minute call barrier with SIP on my phone? The other problem is that for the wireless broadband to start working again, I need to restart the phone; it appears the network driver gets overloaded. The one thing that works fine is cellular calls: I have yet to have call quality drop off after 15 minutes over a cellular connection. Thanks, Walter
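
    A failure at a fixed interval usually points at a NAT binding or firewall state timing out rather than at the phone or the provider. One way to confirm this is to capture traffic during a call and see whether inbound RTP simply stops; a sketch, where the interface name and port range are assumptions to adjust for your network:

        # Capture SIP/RTP traffic for the duration of a call
        tcpdump -i wlan0 -n udp and portrange 10000-20000 -w sip-call.pcap

    If inbound packets stop at the 15-18 minute mark while outbound packets continue, look at keep-alive and session-timer settings on the client or router rather than at the phone hardware.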

    Read the article

  • Is Flash typically slow on Linux?

    - by CSarnia
    Specifically, I'm running Mint 8 (Helena). I'm extremely new to Linux and was looking for a solution that is user-friendly and GUI-oriented. The box won't be used for much other than web browsing and word processing. Anyway, it runs relatively smoothly, except for YouTube videos, especially full-screen, which runs at about 1 FPS and, even after closing, slows Firefox to a crawl until I restart it. I'd seen an xkcd comic on the matter but regarded it as a joke until now. Is this actually a common problem? Are there any remedies I can try to smooth out playback?
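
    Full-screen Flash on Linux of that era often ran without GPU acceleration. A commonly suggested tweak, offered as a sketch rather than a guaranteed fix (it requires Flash 10.2+ and a supported video driver), is to enable hardware decoding in Adobe's global config:

        sudo mkdir -p /etc/adobe
        printf 'EnableLinuxHWVideoDecode=1\nOverrideGPUValidation=true\n' | sudo tee /etc/adobe/mms.cfg
        # Restart Firefox so the plugin re-reads the file

    If this makes things worse, delete /etc/adobe/mms.cfg to return to the defaults.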

    Read the article

  • SBS 2003: no network connection, acting strangely, and a bunch of Event ID 13568 errors

    - by JMan78
    I've got an SBS 2003 Standard server that was running fine until earlier today, when it was rebooted. After the reboot it has no network connection, I can't right-click on a lot of things or get dialog boxes, and I can't launch IE; it's acting extremely strange. We are dead in the water at this point. I checked the event logs and noticed we're getting a ton of Event ID 13568 entries. I thought it was a journal wrap error, and I was going to try to fix it using this article: http://support.microsoft.com/kb/290762. But I can't even do that, because after I set the D4 value and went to restart NTFRS from the command prompt, I got the following: System Error 1059 has occurred. Circular service dependency was specified. That is where I'm at; I haven't been able to figure anything else out. Also, I've posted this on EE, where there are some screenshots of the event logs: http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/SBS_Small_Business_Server/Q_27969593.html
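
    System Error 1059 means the NtFrs service's dependency list itself is broken, which is a separate problem from the journal wrap. A sketch of how to inspect it from a command prompt before retrying the KB 290762 procedure:

        rem Show the service's current dependency chain
        sc qc ntfrs
        reg query HKLM\SYSTEM\CurrentControlSet\Services\NtFrs /v DependOnService
        rem Once any circular entry is removed, restart FRS
        net stop ntfrs
        net start ntfrs

    A dependency that points back at NtFrs, directly or through another service, is what triggers the error.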

    Read the article

  • Change Gnome-Panel profile at startup according to number of displays

    - by ifischer
    I'm running Ubuntu 10.04 on a laptop. I have a startup script that enables external displays if they are connected; it runs at GDM startup, configured in /etc/gdm/Init/Default. When I'm running without external displays, GNOME should use 2 panels. When I'm using 2 external displays, GNOME should add an additional panel on the second display. This should of course be removed again if I detach the external displays (and restart). Can I implement this with GNOME panel profiles? I read that gnome-panel has a startup option --profile, but I don't know how or where I could switch the profile, especially because it has to happen after detecting the number of displays. Or can I add a general GNOME profile and switch between profiles somehow to achieve this behavior?
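
    Detecting the display count from the same startup script is straightforward with xrandr. A minimal sketch; the profile names are hypothetical and assume profiles have already been defined for gnome-panel, whose --profile handling varies between versions:

        #!/bin/sh
        # Count connected outputs; the leading space avoids matching "disconnected"
        COUNT=$(xrandr --query | grep -c ' connected')
        if [ "$COUNT" -ge 2 ]; then
            gnome-panel --profile=dual-display &
        else
            gnome-panel --profile=single-display &
        fi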

    Read the article

  • CentOS Failover Cluster - SIOCADDRT: No such process (when adding a loopback)

    - by Steve Rolfe
    I'm trying to configure two web servers behind a load balancer. The load-balancing aspect works fine (it sees both servers, kills them if it needs to, and seems to direct traffic fine). The only issue is with the servers' loopback alias:

        /etc/sysconfig/network-scripts/ifcfg-lo:0
        DEVICE=lo:0
        IPADDR=<Virtual IP>
        NETMASK=255.255.255.255
        ONBOOT=yes
        NAME=loopback

    Every time I try a "service network restart" I get SIOCADDRT: No such process when the loopback interface loads. Anyone have an idea what's causing this?
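
    SIOCADDRT: No such process is the kernel rejecting the host route that the init script adds for the alias. Assuming this is an LVS direct-routing real server (an assumption; the question doesn't say which balancer is in use), a sketch for testing the steps by hand, keeping the VIP placeholder from the question:

        # Bring the alias up manually to see which step fails
        ifconfig lo:0 <Virtual IP> netmask 255.255.255.255 up
        route add -host <Virtual IP> dev lo:0
        # Usual companions for direct routing, so the real server
        # does not answer ARP for the VIP
        sysctl -w net.ipv4.conf.lo.arp_ignore=1
        sysctl -w net.ipv4.conf.lo.arp_announce=2
        sysctl -w net.ipv4.conf.all.arp_ignore=1
        sysctl -w net.ipv4.conf.all.arp_announce=2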

    Read the article

  • Drop in solution for logging to DB

    - by Jake
    I'm considering setting up our servers to log to a MongoDB database rather than to log files. All logs would then be on one server, queryable, and overall easier to manage. I'd love to find a solution that lets all the different processes I have running write to the DB rather than to files (or perhaps something that reads the files, passes the logs on, and truncates the files). I don't want to have to find a different solution for every process if I can avoid it. So, does anyone know of an existing solution to this problem?
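
    One existing drop-in along these lines is rsyslog with its MongoDB output module: every process logs to syslog as usual, and a single daemon does the DB writes. A minimal sketch; the module is real, but whether your distribution packages it, and the exact parameter names, depend on the rsyslog version, so treat the details as assumptions:

        # /etc/rsyslog.d/10-mongo.conf
        module(load="ommongodb")
        *.* action(type="ommongodb" server="127.0.0.1" db="logs" collection="syslog")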

    Read the article

  • Windows 7 NFS with a Linux server

    - by Vitaly
    Hi. I have an Ubuntu server and want to access its web folder (/var/www). What I did: installed nfs-kernel-server, nfs-common and portmap (as in the FAQ), then set up /etc/exports:

        /var/www 192.168.1.0/255.255.255.0(rw,no_root_squash,async,subtree_check)

    Then:

        sudo exportfs -ra
        sudo /etc/init.d/nfs-kernel-server restart

    I checked whether it all works on the same machine:

        sudo mount 192.168.1.101:/var/www /mnt/test

    I accessed /mnt/test and saw that all the data was present and everything was OK. Next, I tried to connect this folder to Windows 7 using the NFS client. First, I checked that the Linux path was exported successfully:

        showmount -e 192.168.1.101
        /var/www 192.168.1.0/255.255.255.0

    All OK, on to mounting:

        mount -o anon 192.168.1.101:/var/www z:

    The console said that everything succeeded... but I can't access drive Z (the drive exists in the system and points to the right folder). When I try to access drive Z, Explorer just hangs and then says the timeout expired. Help me please.
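
    A mount that succeeds but then hangs in Explorer is often a permissions problem: the Windows NFS client accesses the share as the anonymous user (uid/gid -2), which may not be allowed to read /var/www. A commonly cited workaround is to map the anonymous user to the owner of the exported tree; a sketch, where the registry path is the built-in Client for NFS and uid/gid 33 (www-data on Ubuntu) is an assumption to match your server:

        rem Run in an elevated command prompt, then reboot and re-mount
        reg add HKLM\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default /v AnonymousUid /t REG_DWORD /d 33
        reg add HKLM\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default /v AnonymousGid /t REG_DWORD /d 33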

    Read the article

  • Using Squid on Debian, Cannot Connect Error

    - by Zed Said
    I am trying to set up Squid on Debian and am getting a connection refused error:

        squidclient http://www.apple.com/ > test
        client: ERROR: Cannot connect to 127.0.0.1:3128: Connection refused

    Here is my config:

        visible_hostname none
        cache_effective_user proxy
        cache_effective_group proxy
        cache_dir ufs /var/spool/squid 2048 16 256
        cache_mem 512 MB
        cache_access_log /var/log/squid/access.log
        emulate_httpd_log on
        strip_query_terms off
        read_ahead_gap 128 Kb
        collapsed_forwarding on
        refresh_stale_hit 30 seconds
        retry_on_error on
        maximum_object_size_in_memory 1 MB
        acl all src 0.0.0.0/0.0.0.0
        acl purgehosts src 127.0.0.1/255.255.255.255
        # Caching static objects in __data is important.
        # Without that, apache processes sit around spooling static objects.
        acl QUERY urlpath_regex /cgi-bin/ /_edit /_admin /_login /_nocache /_recache /__lib /__fudge
        acl PURGE method PURGE
        acl POST method POST
        cache deny QUERY
        cache deny POST
        http_access allow PURGE purgehosts
        http_access deny PURGE
        http_access allow all
        http_port 127.0.0.1:80
        http_port 50.56.206.139:80
        cache_peer 127.0.0.1 parent 80 0 originserver no-query no-digest default
        redirect_rewrites_host_header off
        read_ahead_gap 128 Kb
        shutdown_lifetime 5 seconds

    Any ideas why this is happening? What have I missed?
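
    One thing worth checking first: squidclient talks to 127.0.0.1:3128 by default, but this config only binds port 80 (http_port 127.0.0.1:80 and the public IP), so the refusal may be a port mismatch rather than a broken Squid. A quick sketch:

        # Is squid listening, and on which ports?
        netstat -tlnp | grep squid
        # Test against the port the config actually binds
        squidclient -p 80 http://www.apple.com/ > test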

    Read the article

  • tomato firmware unstable when I am connected to Chromecast

    - by Graviton
    I have an Asus RT-N12D1, and I installed Shibby's Tomato firmware (version tomato-K26-1.28.RT-N5x-MIPSR2-112-Max.trx). However, I've found the connection can be really unstable. Whenever I stream YouTube videos on the TV via Chromecast (I tried casting from both an iPad and Windows 7; both reproduce the problem), the router frequently restarts itself, causing a loss of connection and a subsequent reconnection. I want to get this fixed, but I don't know how. Is there any way I can troubleshoot the real problem, or collect enough log information to present to the original developer for a (possible) fix?
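
    To gather something presentable for the developer, the router's own logs across a crash are the first step. Tomato ships BusyBox, so over SSH or telnet something like the following should work; the paths are the usual Tomato defaults and should be treated as assumptions:

        # Watch the log live while casting, until the router dies
        tail -f /var/log/messages
        # After it comes back up, grab the tail of the kernel ring buffer
        dmesg | tail -n 50

    Tomato's web UI can also forward syslog to another machine, which preserves the log across the reboot itself.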

    Read the article

  • Running Webapp on Mac in UTC (either changing MacBook timezone or tomcat timezone)

    - by Andy A
    To run my web app, I need to set my timezone to UTC on my MacBook. I can do this temporarily by opening a terminal and entering:

        sudo ln -sf /usr/share/zoneinfo/UTC /etc/localtime

    However, my timezone returns to normal when I restart my machine! Any advice?

    Edit: the response to this question by 'Celada' implies that I can just make my server run in UTC. I am using Apache Tomcat 7; adding to Celada's response, how can I make it UTC?

    Update, 3rd April: following Celada's response, I tried adding SetEnv TZ UTC at the top of startup.sh. This didn't seem to make a difference. After some research, I tried adding export JAVA_OPTS="-Duser.timezone=UTC" to startup.sh, but this too had no effect. Am I adding the correct command to the correct file?
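
    For Tomcat specifically, the conventional place for JVM options is bin/setenv.sh (create the file if it does not exist); catalina.sh reads it on every start, so the setting survives reboots, unlike a variable exported in a shell. A sketch:

        # $CATALINA_HOME/bin/setenv.sh
        CATALINA_OPTS="$CATALINA_OPTS -Duser.timezone=UTC"

    SetEnv, by contrast, is an Apache httpd directive, which is why it has no effect in Tomcat's startup.sh.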

    Read the article

  • Have a service start on startup with Ubuntu

    - by Joseph Silvashy
    I'm not clear on how to start a service when the server boots. I read in some of the other questions about adding a script to /etc/init.d, but it's just one line that I need to execute on the command line:

        sudo /etc/init.d/avahi-daemon restart

    I have a few issues with this. Firstly, I apparently need to use sudo, and it gives me the following:

        ngl-server-01:~% sudo /etc/init.d/avahi-daemon start
        Rather than invoking init scripts through /etc/init.d, use the service(8)
        utility, e.g. service avahi-daemon start
        Since the script you are attempting to invoke has been converted to an
        Upstart job, you may also use the start(8) utility, e.g. start avahi-daemon

    But when I try just avahi-daemon start, I get: Too many arguments. Why is this, and how would you start this service?
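
    The Too many arguments error appears because avahi-daemon start runs the daemon binary itself, which does not understand a start argument; the init wrappers are separate programs. Since the service is already an Upstart job on this release, it starts at boot by default, and the supported manual commands are the ones the boot message itself suggests:

        # Upstart-native
        sudo start avahi-daemon
        sudo status avahi-daemon
        # or the generic wrapper
        sudo service avahi-daemon restart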

    Read the article

  • IIS SmtpSVC - Adding remote domains on the fly

    - by Andrej Pintar
    Since I am using the IIS SMTP service (SMTPSVC) to send all outbound mail, I have noticed some domains rejecting our mail over bare-LF and similar day-to-day SMTP problems, so I mostly reroute those domains through smart hosts. I have read that on IIS 7, and on most versions, you must restart SMTPSVC after adding a remote domain for it to take effect. I also enabled metabase editing, hoping that would let me add remote domains on the fly, but it's not working. Should I use another SMTP server (hMailServer or similar) to route domains via smart host? We used a smart-host configuration for everything before, but the ISP's smart host keeps landing on RBL blacklists, so mail comes back. Sending directly via DNS/MX is more work because of the troublesome domains, so now I spend more time monitoring SMTP logs. Thank you in advance.
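
    Editing the metabase directly is scriptable with adsutil.vbs, though whether the SMTP service picks up a new IIsSmtpDomain without a restart is exactly the open question here, so treat this as an experiment rather than a fix. The domain, smart-host address, and especially the RouteAction value are assumptions; check the metabase documentation for the correct "forward to smart host" bitmask before using it:

        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs CREATE SmtpSvc/1/Domain/example.com IIsSmtpDomain
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs SET SmtpSvc/1/Domain/example.com/RouteActionString "[192.168.1.50]"
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs SET SmtpSvc/1/Domain/example.com/RouteAction 268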

    Read the article

  • RHEL 5/CentOS 5 - sshd becomes unresponsive

    - by ewwhite
    I have a number of CentOS 5.x and RHEL 5.x systems whose SSH daemons become unresponsive, preventing remote logins. The typical error on the connecting side is:

        $ ssh db1
        db1: ssh_exchange_identification: Connection closed by remote host

    Examining /var/log/messages after a forced reboot shows the following leading up to the restart:

        Dec 10 10:45:51 db1 sshd[14593]: fatal: Privilege separation user sshd does not exist
        Dec 10 10:46:02 db1 sshd[14595]: fatal: Privilege separation user sshd does not exist
        Dec 10 10:46:54 db1 sshd[14711]: fatal: Privilege separation user sshd does not exist
        Dec 10 10:47:38 db1 sshd[14730]: fatal: Privilege separation user sshd does not exist

    These systems use LDAP authentication, and the nsswitch.conf file is configured to look at local "files" first:

        [root@db1 ~]# cat /etc/nsswitch.conf
        #
        # /etc/nsswitch.conf
        #
        passwd: files ldap
        shadow: files ldap
        group:  files ldap
        hosts:  files dns

    The privilege-separated SSH user exists in the local password file:

        [root@db1 ~]# grep ssh /etc/passwd
        sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin

    Any ideas on what the root cause is? I did not see any Red Hat errata that covers this.
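
    Since the sshd entry is clearly in /etc/passwd, the failing lookup is more likely in the NSS plumbing than in the file itself; a wedged nscd or a hanging LDAP library call can break even "files" lookups. A few checks worth running when the hang occurs (a sketch):

        # Does NSS resolve the user right now, through the full files+ldap stack?
        getent passwd sshd
        # Restart nscd in case its cache or socket is wedged
        service nscd restart
        # Then see whether sshd recovers
        service sshd restart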

    Read the article

  • MaxClients in Apache: how to know the size of my processes?

    - by Larry
    From http://httpd.apache.org/docs/2.2/misc/perf-tuning.html: "The single biggest hardware issue affecting webserver performance is RAM. A webserver should never ever have to swap, as swapping increases the latency of each request beyond a point that users consider 'fast enough'. This causes users to hit stop and reload, further increasing the load. You can, and should, control the MaxClients setting so that your server does not spawn so many children it starts swapping. The procedure for doing this is simple: determine the size of your average Apache process, by looking at your process list via a tool such as top, and divide this into your total available memory, leaving some room for other processes." The main issue is that I can't work out how to determine that size: top shows httpd at no more than 3888 (KB, presumably). But if I use that figure to determine MaxClients with 4 GB of RAM, I get 972, so should I set MaxClients to about 900?
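
    A common way to measure the "average Apache process size" the docs mention is to average the resident set size (RSS) over all httpd children; this overcounts shared pages, which errs on the safe side. A sketch (column 8 is RSS, in KB, in ps -ly output; the header row is skipped by the numeric test):

        ps -ylC httpd | awk '$8 ~ /^[0-9]+$/ {sum += $8; n++} END {if (n) printf "%.1f MB avg over %d procs\n", sum/n/1024, n}'

    With PHP or other modules loaded the real figure is often tens of MB rather than a few thousand KB, so recompute MaxClients from this average instead of a single top reading.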

    Read the article

  • Windows Server 2008 (sp2) stops responding on network share requests from Windows Vista and 7 client

    - by Peter LaComb Jr.
    I have two Windows Server 2008 SP2 machines (TFS and TFSBUILD). Periodically, the TFSBUILD server's shares (\\TFSBUILD\ShareName or \\TFSBUILD\C$) become unresponsive to requests from Windows Vista / Server 2008 and Windows 7 clients. Windows XP machines are still able to connect. No events in the server log indicate any problem. A simple restart corrects the issue temporarily, but it always returns. No, it is not http://support.microsoft.com/kb/976266 (we aren't using that software). All anti-virus software has been disabled, and the firewall is disabled by policy. No other network activity is affected. Any help would be greatly appreciated.
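
    Hangs that affect only Vista/7/2008 clients while XP still connects are a classic SMBv2 symptom, because XP speaks SMB1 only. A frequently used isolation step, offered as a diagnostic toggle rather than a permanent fix, is to disable SMB2 on the server and see whether the problem disappears:

        reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters /v Smb2 /t REG_DWORD /d 0
        net stop server /y
        net start server

    If the shares then stay responsive, chase SMB2-specific causes (NIC offload features and outdated NIC drivers are common culprits) and re-enable SMB2 by setting the value back to 1.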

    Read the article

  • SOHO NETGEAR wireless router disconnects when downloading torrents

    - by Lirik
    I have a NETGEAR WGT624 router at home which dies when there is a heavy torrent load. I open up my torrent client and it downloads for about 5 to 10 minutes and it continues to increase the number of seeds (goes up to 70-80 seeds), but after that the router simply fails and I have to restart it in order to get an internet connection again. Is there any way that I can fix this? New router firmware? Change some router options? Feed it a cookie? Anything?

    Read the article

  • Find which php scripts cause high CPU with php-cgi

    - by Oli
    Background: I maintain a server for a client who has half a dozen WordPress sites on it. They all have the W3 Total Cache plugin installed, and eAccelerator is installed (it might be APC). All the PHP sites run through a single pool of FastCGI php-cgi processes (it's actually php-fpm, but I'm not sure whether that makes a difference). Problem: php-cgi's CPU usage is quite high. Not terminally high, but high enough to raise an eyebrow. The client wants to add more sites in the future, and I want to avoid becoming CPU-limited if I can help it. Question: Is there any way I can find the scripts, or even just the requests, that are causing the high CPU? I realise I might not be able to do anything with the results, but it would give me a chance.
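
    Since the pool is php-fpm, its built-in slow log is the most direct tool: any request running longer than the threshold gets a stack trace naming the script and function. A sketch of the pool directives, where the path and threshold are assumptions:

        ; in the pool config, e.g. /etc/php5/fpm/pool.d/www.conf
        slowlog = /var/log/php-fpm-slow.log
        request_slowlog_timeout = 5s

    Cross-referencing the slow log with the web server's access log then ties CPU time back to specific URLs and sites.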

    Read the article

  • Receiving POST data in ASP.NET

    - by grast
    Hi, I want to use ASP for code generation in a C# desktop application. To achieve this, I set up a simple host (derived from System.MarshalByRefObject) that processes a System.Web.Hosting.SimpleWorkerRequest via HttpRuntime.ProcessRequest. This processes the ASPX script specified by the incoming request (using System.Net.HttpListener to wait for requests). The client part is a System.ComponentModel.BackgroundWorker that builds the System.Net.HttpWebRequest and receives the response from the server. A simplified version of my client-side code looks like this:

        private void SendRequest(object sender, DoWorkEventArgs e)
        {
            // create request with GET parameter
            var uri = "http://localhost:9876/test.aspx?getTest=321";
            var request = (HttpWebRequest)WebRequest.Create(uri);

            // append POST parameter
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            var postData = Encoding.Default.GetBytes("postTest=654");
            var postDataStream = request.GetRequestStream();
            postDataStream.Write(postData, 0, postData.Length);

            // send request, wait for response and store/print content
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                using (var reader = new StreamReader(response.GetResponseStream(), Encoding.UTF8))
                {
                    _processsedContent = reader.ReadToEnd();
                    Debug.Print(_processsedContent);
                }
            }
        }

    My server-side code looks like this (without exception handling etc.):

        public void ProcessRequests()
        {
            // HttpListener at http://localhost:9876/
            var listener = SetupListener();

            // SimpleHost created by ApplicationHost.CreateApplicationHost
            var host = SetupHost();

            while (_running)
            {
                var context = listener.GetContext();
                using (var writer = new StreamWriter(context.Response.OutputStream))
                {
                    // process ASP script and send response back to client
                    host.ProcessRequest(GetPage(context), GetQuery(context), writer);
                }
                context.Response.Close();
            }
        }

    So far all this works fine, as long as I just use GET parameters. But when it comes to receiving POST data in my ASPX script, I run into trouble. For testing I use the following script:

        // GET parameters are working:
        var getTest = Request.QueryString["getTest"];
        Response.Write("getTest: " + getTest);        // prints "getTest: 321"

        // don't know how to access POST parameters:
        var postTest1 = Request.Form["postTest"];     // Request.Form is empty?!
        Response.Write("postTest1: " + postTest1);    // so this prints "postTest1: "
        var postTest2 = Request.Params["postTest"];   // Request.Params is empty?!
        Response.Write("postTest2: " + postTest2);    // so this prints "postTest2: "

    It seems that the System.Web.HttpRequest object I'm dealing with in ASP does not contain any information about my POST parameter "postTest". I inspected it in debug mode, and none of the members contained either the parameter name "postTest" or the parameter value "654". I also tried the BinaryRead method of Request, but unfortunately it is empty; this corresponds to Request.InputStream==null and Request.ContentLength==0. And to make things really confusing, the Request.HttpMethod member is set to "GET"?!

    To isolate the problem, I tested the code using a PHP script instead of the ASPX script. It is very simple:

        print_r($_GET);  // prints all GET variables
        print_r($_POST); // prints all POST variables

    And the result is:

        Array ( [getTest] = 321 )
        Array ( [postTest] = 654 )

    So with the PHP script it works; I can access the POST data. Why doesn't the ASPX script work? What am I doing wrong? Is there a special accessor or method on the Request object? Can anyone give a hint, or does anyone know how to solve this? Thanks in advance.

    Read the article

  • Is Windows XP Pro not a good Hyper-V guest citizen?

    - by Magnus
    On my Windows Server 2008 R2 host with the Hyper-V role, I have these guest VMs: 3 x Windows Server 2008 R2, 2 x Windows Server 2003 x86, 2 x Windows 7 x64, and 1 x Windows XP Pro x86. In general, all machines are very fast and responsive. However, the Windows XP Pro guest is very sluggish: it can take up to 2 minutes to connect to the console or an RD session, and sometimes it "goes to sleep" for several minutes. I have tried adding a second CPU and more memory, but it doesn't help. When the issue happens, it's more or less impossible to get a responsive Task Manager up to analyze which process is hogging the CPU, but I have noticed that it can be various processes: lsass.exe, csrss.exe, etc. Integration Services are installed. Microsoft Security Essentials is installed, but I have tried without it, with no difference. Any ideas?

    Read the article

  • Weirdness After Reinstalling the Windows Operating System

    - by Eka Anggraini
    I reinstalled my OS successfully, and shutting down and restarting a few times afterwards was fine. A few hours later, when I turned the machine on, there was suddenly an error:

        Windows could not start because the following file is missing or corrupt:
        \WINDOWS\SYSTEM32\CONFIG\SYSTEM
        You can attempt to repair this file by starting Windows Setup using the
        original Setup CD-ROM. Select 'R' at the first screen to start repair.

    So I decided to repair by reinstalling again. I booted from the Windows CD ("press any key...") and, on the blue setup background, it showed:

        Setup is loading files (Windows Executive)
        Setup is loading files (Hardware Abstraction Layer)

    Then I waited for half an hour with no change, and repeating the process several times got me nowhere. Where is the problem likely to be: the hardware or the Windows CD?
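
    The standard recovery for a corrupt SYSTEM hive is Microsoft KB 307545: boot the Recovery Console from the CD (press R at the first screen) and restore the backup hive that Setup left in \windows\repair. The core steps, sketched from that article (back up the damaged hive first):

        md c:\windows\tmp
        copy c:\windows\system32\config\system c:\windows\tmp\system.bak
        delete c:\windows\system32\config\system
        copy c:\windows\repair\system c:\windows\system32\config\system

    If Setup hangs while loading files, as described here, that points toward failing hardware (disk or RAM) rather than the CD; running chkdsk /r from the Recovery Console is a reasonable next test.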

    Read the article

  • Ubuntu not showing disk

    - by ojek
    I have a laptop with a broken Windows 7 install on it. I created an Ubuntu live USB and tried installing Ubuntu over that Win7. After a few minutes I got an error message, so I had to restart the computer. Now the laptop says there is no bootable device, which is a reasonable message given that there was an error during the Linux installation. But: the BIOS can see my hard drive, yet when I start Ubuntu in live mode and try either sudo fdisk -l or GParted, no hard disk drives show up. I am 90% sure the HDD is broken, but it is weird that the BIOS can see it and Ubuntu can't. How can I be 100% sure about the HDD? Is there any additional way of detecting my HDD from Ubuntu?
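
    From the live session, the kernel log usually settles whether the drive is present but failing, which would explain the BIOS seeing it while the installer gives up. A sketch:

        # Did the kernel register the drive, and did it log I/O errors?
        dmesg | grep -iE 'ata|sd[a-z]|error'
        cat /proc/partitions
        # If smartmontools is available on (or installable into) the live session:
        sudo smartctl -a /dev/sda

    A drive that appears in dmesg with repeated reset or error lines is failing; one that never appears at all suggests a cabling or controller problem instead.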

    Read the article

  • pecl_http extension not loading

    - by Tegan Snyder
    For some reason the pecl_http extension is not showing up in my test.php file, which contains:

        <?php phpinfo(); ?>

    I just installed pecl_http using pecl install pecl_http. The install was successful, and I verified it by running pecl list:

        Installed packages, channel pecl.php.net:
        =========================================
        Package    Version  State
        mongo      1.2.10   stable
        pecl_http  1.7.4    stable

    I then located my php.ini file:

        php -i | grep 'Configuration File'
        Configuration File (php.ini) Path => /etc/php5/cli
        Loaded Configuration File => /etc/php5/cli/php.ini

    I edited it in vim and added extension=http.so. Finally, I restarted Nginx and PHP-FastCGI:

        /etc/init.d/nginx restart
        /etc/init.d/php-fastcgi stop
        /etc/init.d/php-fastcgi start

    My PHP extension_dir is /usr/lib/php5/20090626, and I verified that http.so is located in that directory. Any ideas why it's not loading? The machine is running a Ubuntu 10.04 LTS 64-bit profile on Linode. The only other extensions I have installed are New Relic and Mongo. Thanks!
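
    A likely explanation: php -i reports the CLI configuration (/etc/php5/cli/php.ini), but the site runs under the CGI/FastCGI SAPI, which on Ubuntu reads /etc/php5/cgi/php.ini, so the extension= line may never reach the web SAPI. A sketch of confirming and fixing that, with paths following the stock Ubuntu 10.04 layout:

        # Which ini does the CGI binary actually read?
        php-cgi -i | grep 'Loaded Configuration File'
        # Register the extension for all SAPIs via the shared conf.d
        echo 'extension=http.so' | sudo tee /etc/php5/conf.d/http.ini
        /etc/init.d/php-fastcgi stop && /etc/init.d/php-fastcgi start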

    Read the article

  • Terminal services and memory limits

    - by Mark Wassell
    Is there a way in Terminal Services to set limits on memory-related parameters for a process, for example working set size and, possibly, if it makes sense, total virtual memory allocation for the session? To turn the question around: we have an application that cannot allocate as much virtual memory when running on a terminal server as it can on a desktop PC (both of which I would expect to give a 2 GB limit on user-mode address space), and I was wondering whether there is another limit for processes or users on a terminal server. Perhaps even 2 GB per user rather than per process.

    Read the article
