Search Results

Search found 15099 results on 604 pages for 'stop loading'.


  • Strange issue with 74.125.79.118

    - by Domenic
    I'm facing a strange issue on a Linux server. After frequent crashes, analysis showed that the server is driven to collapse by a huge number of connections to the IP 74.125.79.118, originating from PHP scripts of the hosted web sites. After an in-depth analysis of the files I found no malware infections. IP 74.125.79.118 belongs to Google. After a Google search I realized that the connections to this IP are generated by YouTube videos embedded on the web sites, among other Google features like safe search. But I don't understand how this type of behavior can lead to the collapse of the server, and the uniqueness of the situation leads me to think that it is far from being attributable only to Google and YouTube. Also, I've found that blocking connections from eth0 to 74.125.79.118:80 doesn't solve the issue, but if I stop DNS traffic from eth0 to the internet, the connections to 74.125.79.118 stop. I'm really confused about this. Any suggestions? Cheers.
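
    One way to see which scripts or processes are actually opening those connections (a sketch using standard Linux tools; it assumes eth0 as in the post and that iptables, lsof and tcpdump are installed):

        # which processes currently hold connections to that IP?
        netstat -ntp | grep 74.125.79.118
        lsof -nP -i @74.125.79.118

        # log every new outbound connection attempt to it (entries land in the kernel log)
        iptables -A OUTPUT -d 74.125.79.118 -p tcp --dport 80 -m state --state NEW -j LOG --log-prefix "to-google: "

        # watch the DNS lookups that precede those connections
        tcpdump -ni eth0 udp port 53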

    Read the article

  • Web based KVM management for Ubuntu

    - by Tim
    We've got a single Ubuntu 9.10 root server on which we want to run multiple KVM virtual machines. To administer these virtual machines I'd like a web-based KVM management tool, but I don't know which one to choose from the list of tools mentioned on linux-kvm.org. I've used virsh & virt-manager on my desktop, but would like a web interface for the server. I tested ConVirt on my desktop, but it failed to pick up KVM machines from virsh / virt-manager, and I could not get KVM virtual machine import to work (only Xen). oVirt looks good, but I can't find out if and how I can install it on Ubuntu 9.10. (And I'd really rather not waste another few days on testing stuff that might not work in the end.) Can anyone recommend any good web-based KVM management tools that are easy to install on Ubuntu 9.10? I'm looking for something that will also allow me to run other services like apache and postgresql besides hosting virtual machines, so preferably fairly lightweight & no dedicated OS installs. We don't need any professional clustering / migration or anything, just something that will let us create, start, inspect, administer & stop virtual machines from a web page. Best regards, Tim Update: Anyone have any suggestions? It's awfully quiet here...
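
    For reference, a minimal sketch of the virsh commands that most of these web front-ends wrap (assuming libvirt manages the KVM guests; guest1 is a hypothetical domain name):

        virsh list --all                             # all defined guests and their state
        virsh start guest1                           # boot a guest
        virsh dominfo guest1                         # inspect CPU / memory / state
        virsh shutdown guest1                        # graceful ACPI shutdown
        virsh destroy guest1                         # hard power-off
        virsh define /etc/libvirt/qemu/guest1.xml    # (re)register a guest from its XML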

    Read the article

  • Can a non-redundant RAID5 cause any serious problems (compared to RAID0)?

    - by leemes
    I used to have a three-disc RAID5 (mdadm) in my computer for personal media storage (music, videos, photos, programs, games, ...). It had three discs with 750 GB each, resulting in an array capacity of 1.5 TB. One day (one year ago), I needed one of those discs to install another operating system. I thought I didn't need the redundancy anymore since I back up the most important stuff (e.g. personal photos) on an external disc anyway. So I decided to remove one of the three discs without converting the RAID to RAID0 or even two separate discs, because I had no temporary storage (since one cannot simply convert RAID5 to RAID0, AFAIK). So now, for about one year, I have a non-redundant RAID5 with 2 of 3 discs running. Sometimes, one of the discs has a defective contact at the power cable or something similar, causing the drive to stop working temporarily (I don't know exactly what it is). Since it still works when rebooting the computer, and in most cases by calling some mdadm commands, it wasn't that problematic. Note that the data is not very critical, since I still have a backup of the most important stuff. But in the last few weeks, one of the drives fails very frequently (every few hours), so it gets really annoying to manage this. My questions are: Is there any disadvantage (apart from the annoying management) of a non-redundant RAID5 (with one drive fewer than usual) over a RAID0? If I understand it correctly, both have no redundancy and the same capacity. On a temporary drive failure, I can restart the array in both cases, assuming that the drive itself still works after the failure. Can it happen that the drive contents alter on a drive failure, making the array inconsistent? If so, can I tell mdadm to check the array for failures (without a file-system-level checking tool)? Since the drive most probably only has a defective contact causing it to fail for a second, can I tell mdadm to automatically restart the array, so I will not even notice the failure if no application wanted to access the file system during the failure?
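
    A sketch of the relevant mdadm/md interfaces, assuming the array is /dev/md0 and the flaky member is /dev/sdb1 (substitute your own devices):

        # ask md to verify the array; progress shows up in /proc/mdstat
        # (on a degraded RAID5 there is no redundancy left, so there is little parity to verify)
        echo check > /sys/block/md0/md/sync_action
        cat /proc/mdstat
        cat /sys/block/md0/md/mismatch_cnt      # non-zero means inconsistencies were found

        # after a transient failure, try to put the kicked member back
        mdadm /dev/md0 --re-add /dev/sdb1

        # if the array refuses to start degraded, force-assemble it
        mdadm --assemble --run --force /dev/md0 /dev/sda1 /dev/sdb1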

    Read the article

  • Win 7 crashes, PC reboots and says "Hard drive 0 not found" until I turn it off and on again

    - by Danny T.
    I recently made the move from Windows XP to Windows 7. Since then, when my computer has been on for a few hours it always ends up rebooting without warning. Then the BIOS won't recognize my hard drive (hard drive 0 not found). If I turn my computer off and then on again, it boots normally. Some details: Dell Dimension 9150, Windows 7. I updated the BIOS. I updated all system drivers with the latest versions from Dell (SATA, chipset, etc.). Other drivers updated too (graphics card, sound, etc.). There is one thing that I tried after some Googling: I turned off DMA access to the drives, but it's still rebooting after a few hours. Any clue?

    UPDATE 2010/12/13: Here are the events from the Event Log for today, from when I turned the computer on until it crashed:

      19:17 - Error - ID 10016 - DistributedCom
      20:06 - Error - ID 1008 - Customer Improvement Program (could not send data to Microsoft)
      21:48 - Critical - ID 41 - Kernel-Power (System was restarted without proper shutdown)
      21:48 - Error - ID 6008 - EventLog (Previous system down was not planned)
      21:48 - Error - ID 1101 - EventLog (Audit Event ignored)
      21:49 - Error - ID 10016 - DistributedCom

    Both DistributedCom events have a description along these lines (translated from French): The authorisation parameters specific to the application are not allowing Local Execution for the COM server application with the CLSID {C97FCC79-E628-407D-AE68-A06AD6D8B4D1} and the APPID {344ED43D-D086-4961-86A6-1106F4ACAD9B} to the SID AUTHORITY NT\User System (S-1-5-18) from the address LocalHost (LRPC usage). This security authorisation can be changed with the Component Services administration tool.

    UPDATE 2010/12/31: Here are the error messages I get on blue screens:

      STOP C000007xA - Kernel_Data_Inpage_Error
      "Unknown hard error" C00000135 - Can't start because &hs is missing

    Read the article

  • Dell laptop keyboard doesn't work

    - by Tam
    I'm trying to fix my in-laws' laptop, a Dell Studio 1745 running Windows 7 64-bit. The problem is that most of the keys on the keyboard do not work. The function keys work and the caps lock and numpad keys work, but no other keys do. If I hit the F2 key enough times when starting up, I can get to the BIOS, but after that even the function keys stop working. If I let it go all the way to the Windows login screen, I can see that caps lock and num lock work - little images actually appear on screen - but they don't toggle the state of the key, i.e., caps lock is always off and num lock is always off. Using the fn+function combo works, so changing the brightness, etc. works fine. I'm stumped. I've tried disconnecting the power and battery and leaving it for an hour or so before starting up, but that hasn't helped either. Also - this might be a red herring - the touchpad is failing as well; the MS Device Manager says it's failing with status 10, "unable to start device".

    Read the article

  • File system that allows specifying a different RAID level per directory and changing it afterwards

    - by Adam Ryczkowski
    I have 5 hard drives where I want to keep my data. Some of my files are more important, and some of them are less. So some of them I wish to put on RAID-6, and for some RAID-5 is sufficient. It is difficult to predict at the moment of creation of the arrays how much space of each type to declare. What I would do if I hadn't heard about ZFS is partition the hard drives into identical 100GB partitions, and as my needs grow, assemble those partitions into md devices using linux-raid. Then I'd combine those devices using lvm into logical volumes where I'd put my data. So when I'd need more space of e.g. RAID-6, I'd take a 100GB partition from each hard drive, assemble them into another RAID-6 md device and use it as physical storage for the logical volume group dedicated to RAID-6 data. Then I could grow the file system on this logical volume. On top of the RAID-6 and RAID-5 volume groups (managed by lvm) would reside completely independent file systems, which I'd later merge with multiple mount --bind into a single directory structure that would reflect the logical structure of the data rather than that of the storage. But now that I have heard about ZFS, with all its performance, data-healing and compression capabilities, I cannot stop wondering whether it can help me. If so, what do you think would be the best setup?
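
    For reference, a minimal sketch of the mdadm + LVM layering described above, with hypothetical device names and sizes (not a recommendation over ZFS, just the plan spelled out as commands):

        # one RAID-6 set and one RAID-5 set built from 100GB partitions (hypothetical devices)
        mdadm --create /dev/md10 --level=6 --raid-devices=5 /dev/sd[abcde]1
        mdadm --create /dev/md11 --level=5 --raid-devices=5 /dev/sd[abcde]2

        # each md device becomes its own volume group with its own logical volumes
        pvcreate /dev/md10 /dev/md11
        vgcreate vg_raid6 /dev/md10
        vgcreate vg_raid5 /dev/md11
        lvcreate -L 200G -n important vg_raid6
        lvcreate -L 200G -n bulk vg_raid5
        mkfs.ext4 /dev/vg_raid6/important
        mkfs.ext4 /dev/vg_raid5/bulk

        # later growth: assemble another md set, add it as a new PV, grow the LV and filesystem
        vgextend vg_raid6 /dev/md12
        lvextend -L +100G /dev/vg_raid6/important
        resize2fs /dev/vg_raid6/important

        # present both trees as a single directory structure
        mount --bind /mnt/raid6/photos /srv/data/photos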

    Read the article

  • Windows Explorer Keeps On Crashing

    - by Josefvz
    Hey folks, I'm lost... I'm using Windows 7 Ultimate 64-bit. My PC is up to date (Windows updates) and I've used WinUtilities to scan my registry. My explorer.exe keeps on crashing, just randomly it seems; I don't even need to be doing anything in particular. I do have experience with PCs in general as I'm a software developer. I know you will require additional info, but I don't know what, so just leave a comment and I'll update. Additional info: I think I should also mention that Explorer is the only app that crashes on my PC. The crash report I got just now:

      Description: A problem caused this program to stop interacting with Windows.
      Problem signature:
        Problem Event Name: AppHangB1
        Application Name: explorer.exe
        Application Version: 6.1.7600.16450
        Application Timestamp: 4aebab8d
        Hang Signature: 0a1b
        Hang Type: 16897
        OS Version: 6.1.7600.2.0.0.256.1
        Locale ID: 7177
        Additional Hang Signature 1: 0a1bdae38ae7300761c516c4416d992c
        Additional Hang Signature 2: 1c51
        Additional Hang Signature 3: 1c518a49cc7d37652d26c521e96f66c2
        Additional Hang Signature 4: 521e
        Additional Hang Signature 5: 521e607ec26a72aab4ae5a7126916ef3
        Additional Hang Signature 6: e5e3
        Additional Hang Signature 7: e5e3ca31dad607fa7b858ff5ea5c0fa9

    Read the article

  • VMware Player: change DHCP server settings

    - by Tathagata
    I have a Windows Server 2003 guest running in VMware Player on a Win 7 box. The idea is to test Windows Deployment Services in the virtual network. Is it possible to configure the VMware DHCP server with WDS-related settings (options 66 and 67)? I found a few references where people were using vnetlib.exe to start and stop the DHCP server, change the subnet mask, etc., but there's no info on how to set the DHCP server options. DHCP config from the Virtual Network Editor: I do have Workstation, but without a license for it. In the Virtual Network Editor, the DHCP settings for the network I'm using only allow me to set the subnet mask, IP ranges and things like that, but not the DHCP options. DHCP server on the WDS server: authorizing the DHCP server in the guest WDS server fails. VMware Player can run its own DHCP server for the virtual network without any authorization from Active Directory; can I do the same with the Windows DHCP server in the guest Windows Server? The question "Can I authorize W2K8 DHCP server for private network, even when prohibited in enterprise network?" says we have to run a third-party DHCP server... :/
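
    If VMware Player's bundled DHCP server reads an ISC-dhcpd-style config file (on a Windows 7 host this is commonly C:\ProgramData\VMware\vmnetdhcp.conf; treat both the path and the exact syntax as assumptions), the PXE options might be added per subnet roughly like this:

        subnet 192.168.100.0 netmask 255.255.255.0 {
            range 192.168.100.128 192.168.100.254;
            option routers 192.168.100.2;
            # option 66: boot/TFTP server, i.e. the WDS server's address
            option tftp-server-name "192.168.100.10";
            # option 67: boot file name handed to PXE clients
            option bootfile-name "boot\\x86\\wdsnbp.com";
            default-lease-time 1800;
            max-lease-time 7200;
        }

    After editing, the VMware DHCP service would need a restart (net stop vmnetdhcp / net start vmnetdhcp, assuming that service name).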

    Read the article

  • W7 Pro: indexing my documents on a disk partition does not work

    - by Yvan Thery
    I am working on an HP 7100 mini tower running W7 Pro 64-bit. My local HD includes C:/ plus 2 disk partitions: all my documents are located on disk partition L:/ and all my media files are on disk partition M:/. The indexing process works well on C:/ and M:/ but no longer indexes L:/, although all of them are allowed to be indexed and SYSTEM is present on the security tab of every drive. I have tried rebuilding the index with a new setting that includes a few directories present on drives C/M/L, but L: still does not work! One more thing I can tell you is that even after rebuilding the index, I can find some residual directories or files which are outside the test selection; it is as if unerased components remain in the indexing database. As I do not know precisely how the indexing process works, it is hard to know what to do... Recently I had a bad time after using a system restore procedure... maybe it corrupted the index? If I start indexing the whole L:/ disk partition, the system stops at 39 indexed items although many more exist. Could anyone advise on the process to create a new indexing database? Any idea how to get out of this mess? Many thanks for assistance. Yvan
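
    One hedged way to force a completely fresh index database, rather than a rebuild on top of the existing one, is to stop the search service and delete Windows.edb; the path below assumes the default Windows 7 location:

        net stop wsearch
        del "%ProgramData%\Microsoft\Search\Data\Applications\Windows\Windows.edb"
        net start wsearch
        rem Windows Search now recreates the database and re-crawls the configured locations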

    Read the article

  • How to disable Utility Manager (Windows Key + U)

    - by Skizz
    How do I disable the Windows + U hotkey in Windows XP? Alternatively, how do I stop the Utility Manager from being active? The two are related. The Utility Manager is currently providing a potential security hole and I need to remove it[1]. The system I'm developing uses a custom GINA to log in and start a custom shell. This removes most Windows key hotkeys, but Win + U still pops up the manager app. Update: Things I've tried that don't work: the NoWinKeys registry setting - this only affects Explorer hotkeys; renaming utilman.exe - the program reappears at next login; third-party software - not really an option, these machines are audited by the clients and additional third-party software would be unlikely to be accepted. Also, the procedure needs to be reasonably straightforward - it has to be done by field service engineers on existing machines (machines currently in Russia, Holland, France, Spain, Ireland and the USA). [1] The hole is via the Internet options in the help viewer that the utility app links to.
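
    Two field-serviceable options worth evaluating, both sketched as single commands from an administrator prompt and both resting on the assumption that Windows File Protection restores renamed files but leaves ACLs and registry entries alone (note that denying Everyone also blocks legitimate accessibility use, and rundll32.exe is just a harmless stub choice here):

        rem Option 1: deny execute on utilman.exe
        cacls %windir%\system32\utilman.exe /E /D Everyone

        rem Option 2: redirect launches of utilman.exe to a harmless stub via an IFEO debugger entry
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\utilman.exe" /v Debugger /t REG_SZ /d "%windir%\system32\rundll32.exe" /f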

    Read the article

  • How to tell Linux to explicitly swap out main memory of a suspended process?

    - by Vi
    I run a memory-hungry process (mkcromfs) which consumes more memory than I have physical memory on my laptop, so it is paging and swapping and thrashing all the time and the load average is about 2 (compcache is already in use, with a usual swap partition as well), but it is slowly moving forward (although I'm afraid it will eventually try to allocate 2GB and crash, draining 2 days of thrashing). When I want to use the laptop for something else, I stop the process, then start the X server, Firefox and other programs. The problem is that when I start Firefox the load average jumps to 10 and the system becomes almost completely unresponsive (a long time to toggle caps lock, slow mouse cursor position updates, slow switching from the X server to the Linux console, slow login). The stopped mkcromfs still holds a lot of memory (464.8 MiB and slowly falling) and moves it to swap only when more memory is needed for some other program, which results in a great slowdown. How do I tell Linux to swap out this process entirely (e.g. I'm not intending to resume it in the short term), possibly waking other data from swap? It would also be useful to be able to specify the exact swap device to swap the given process out to.
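
    Linux has no "swap this PID out now" call, but one workaround is to move the stopped process into a memory cgroup with a tiny limit so the kernel reclaims its pages to swap. A sketch, assuming a kernel with the cgroup v1 memory controller; the mount point and the 32M figure are assumptions to adjust:

        mkdir -p /cgroup/memory
        mount -t cgroup -o memory none /cgroup/memory                     # if not already mounted
        mkdir /cgroup/memory/parked
        echo 3 > /cgroup/memory/parked/memory.move_charge_at_immigrate    # pull the process's existing pages along
        echo $(pidof mkcromfs) > /cgroup/memory/parked/tasks
        echo 32M > /cgroup/memory/parked/memory.limit_in_bytes            # shrinking the limit pushes the excess to swap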

    Read the article

  • Firefox won't start

    - by Daniel R Hicks
    OK, I've got this problem again, only this time the problem only seems to affect Firefox and Thunderbird. Rebooted several times. Tried resetting to the last restore point, but that didn't work. Tried setting a new Firefox profile, and that didn't work either. The symptom is that you click on the Firefox or Thunderbird icon, the process appears in the Process Explorer list, but the window never opens. Curiously, if Firefox has been "started" this way, Internet Explorer hangs starting until I kill the Firefox process. Any ideas? I suppose the next thing to try is uninstalling and reinstalling Firefox/Thunderbird, but this whole thing is getting old. The box is a Sony Vaio running Windows Vista. It was completely restored from scratch less than two weeks ago, after the last fiasco. (I'm suspecting that my aborted install of Acronis True Image may have mucked things up this time.) Sigh! Another symptom: It occurred to me to try printing something, but if I open "Printers" it just sits there "searching". So something is rotten in the bowels of Windows. Minor update: It occurred to me to kill Internet Explorer (where I'd attempted printing). Then Printers comes up fairly quickly -- with no printers defined. Clicking "Add a printer" does nothing. Update: Well, following this suggestion to stop and restart the print spooler brought the printers back. And, wonder of wonders, Firefox now starts OK. Stopping and restarting the print spooler!!
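
    For reference, the spooler bounce that fixed it can be scripted from an elevated prompt (a minimal sketch):

        net stop spooler && net start spooler
        sc query spooler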

    Read the article

  • Quickly close all Word and Excel instances?

    - by dyenatha
    Suppose I open 10 Word files and 10 Excel files and make no changes; how do I quickly taskkill all of them at once? Because I must repeat several attempts to replicate a race, I'm hoping for a command-line solution. I'm willing to try PowerShell and Cygwin (1.5) if necessary. The OS is Windows XP SP3 with current patches (still IE7). I tried "taskkill /pid 1 /pid 2 /t" where 1 is the PID of EXCEL.EXE and 2 is the PID of WINWORD.EXE, but it closed only 1 window of each program. I'm trying to replicate a race where an add-in for Microsoft Office 2007 fails to exclusive-lock one of its own files, which caused the 2nd Office program to stop exiting with a warning:

      System.IO.IOException: The process cannot access the file 'C:\Documents and Settings\me\Application Data\ExpensiveProduct\Add-InForMicrosoftOffice\4.2\egcred' because it is being used by another process.
        at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
        at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy)
        at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options)
        at System.IO.StreamWriter.CreateFile(String path, Boolean append)
        at System.IO.StreamWriter..ctor(String path, Boolean append, Encoding encoding, Int32 bufferSize)
        at System.IO.StreamWriter..ctor(String path, Boolean append, Encoding encoding)
        at System.IO.File.WriteAllText(String path, String contents, Encoding encoding)
        at ExpensiveProduct.EG.DataAccess.Credentials.CredentialManager.SaveUserTable()
        at ExpensiveProduct.OfficeAddin.OfficeAddinBase.Dispose(Boolean disposing)
        at ExpensiveProduct.OfficeAddin.WordAddin.Dispose(Boolean disposing)
        at ExpensiveProduct.OfficeAddin.OfficeAddinBase.OnHostShutdown()
        at ExpensiveProduct.OfficeAddin.OfficeAddinBase.Unload(ext_DisconnectMode mode)
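
    A sketch of closing every instance at once: /IM matches all processes with that image name, so one command per application is enough (add /F only if a forced kill is acceptable for the repro; the PowerShell line is an alternative since PowerShell is an option here):

        taskkill /IM winword.exe /T
        taskkill /IM excel.exe /T

        rem force-kill without waiting for the apps to respond:
        taskkill /F /T /IM winword.exe
        taskkill /F /T /IM excel.exe

        rem PowerShell alternative:
        powershell -command "Get-Process winword,excel -ErrorAction SilentlyContinue | Stop-Process -Force"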

    Read the article

  • Setup ejabberd with SQL Server 2008

    - by wonster
    Here's what I have got so far. Windows 2008 Server 64-bit. Installed the latest version of ejabberd, ejabberd-2.1.8-windows-installer.exe. The Windows service starts up fine but seems ineffective; however, the start & stop scripts work. I am able to log in to the admin page, which so far doesn't seem that versatile. Opened up ports 5222, 5226 and 5280 for my workstation to talk to the server. I've got the Spark and Jabbear Windows clients to register, log in and instant message with multiple accounts using the server. After confirming that I've got the very basics working, I've decided to make use of SQL Server 2008 as the database. Reason? Mainly, I am very comfortable with SQL Server. I can deal with redundancy, failover and data analysis easily. Not sure if ejabberd's built-in DB provides all that. Following the instructions from ejabberd's documentation, I set up a system DSN that points to another physical database. The DSN checks out fine (tried both Named Pipes and TCP/IP). Modified ejabberd.cfg: commented the line %%{auth_method, internal} and uncommented the line {auth_method, odbc}. Uncommented and modified {odbc_server, "DSN=ejabberd;UID=somelogin;PWD=somepassword"}. After making these changes, I restarted. No errors are found in the log files, but the Jabber clients are no longer able to register new accounts. I'm not sure where to look for errors besides the /logs/ folder as I'm new to all this. I am basically stuck here on step 5. Has anyone got this setup to work recently? Some of the posts I've found around are years old and of no help. I can't be the only one setting up ejabberd with MS SQL. Any help would be appreciated!
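
    To see why the ODBC-backed registration fails, one hedged step is to raise ejabberd's log level and watch the files in logs/ after a restart; the relevant ejabberd.cfg lines look roughly like this (the DSN values are the placeholders from the question):

        %% ejabberd.cfg -- each term ends with a dot
        {loglevel, 5}.          %% 5 = debug; check logs/ejabberd.log and logs/sasl.log after a restart
        {auth_method, odbc}.
        {odbc_server, "DSN=ejabberd;UID=somelogin;PWD=somepassword"}.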

    Read the article

  • Apache2.2 not responding or logging anything on Win 7

    - by Adam
    I'm having some trouble with Apache 2.2 on Windows 7. For over a year it's been running no problem, but all of a sudden requests have just stopped responding. They don't time out as such; the browser just keeps on waiting forever. Nothing is recorded in either the error log (set to debug level), the access log, or Windows' Event Log. The problem showed up when I added a new VHost and restarted; however, a syntax check has shown there's no problem with the config (from the little I changed), and the service does actually start error-free. I've also disabled VHosts and tried with just localhost. I've tried to telnet to the web server, and it connects, but nothing happens. The prompt just goes blank, I can't type anything, and I effectively become stuck. I've ensured there's a rule within Windows Firewall for Apache, and I've even disabled the entire thing just to check it wasn't the cause. Still the same. If I stop Apache, however, the request fails immediately. I've uninstalled and reinstalled Apache in the hope it might magically fix something using the default config, but still no joy. I've tried using a different port but nothing different. Does anybody have any suggestions to fix this? Or to figure out whether it's Apache itself not responding or something sitting between the two that's holding things up? I'm not too savvy on debugging Windows issues like this and I've been searching for hours but not found anything of use to me. Cheers, Adam
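
    Two quick checks from an elevated command prompt, sketched under the assumption that this Apache 2.2 build supports the -e/-E startup flags: confirm which process actually owns port 80, then ask the binary itself to validate the config and log startup problems verbosely:

        rem who is actually listening on port 80? (-b shows the owning executable; needs admin)
        netstat -abno | findstr :80

        rem ask Apache itself to validate the config and to log startup errors verbosely
        httpd.exe -t
        httpd.exe -e debug -E startup-errors.log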

    Read the article

  • Games consoles won't connect through the TP-Link TL-WA500G Access Point

    - by Manfred Wolff
    I hope that someone can help me. I have several laptops and other devices, all using my wireless router (Sky Digital Netgear). To extend the range to the back of the house, I purchased a TP-Link TL-WA500G range extender, configured just as a pure repeater; it picks up the signal from the Netgear router. The Netgear router does the DHCP, handing out the IP addresses. This all works a treat with several different laptops and my iPhone 4S, but when my son tries to use his Xbox 360, Sony PlayStation 3 or Nintendo Wii, those devices fail to acquire an IP address. They just sit there waiting for the IP config. This also happens with my wife's HTC Desire ONE Android phone. My son says that when his HTC Desire C won't get an IP address, he just unplugs the AP briefly; the phone connects and he puts the AP back on. Once he is connected to the router, the AP doesn't disturb it. The games consoles don't seem to work like that; they stop working when the AP is reconnected. I had my son try to configure permanent IP addresses, and he said that did not work either, though I have to confirm that, as I did not see it for myself. Has anybody seen this before? I have searched the net and have not found any similar problems anywhere. I wonder if there is a setting somewhere that would fix this. Many thanks to anyone reading this and trying to help. M

    Read the article

  • LightTPD and PHP not working if outside of the LightTPD folder

    - by Marco83
    I need to set up a simple web server with PHP on Windows XP that a number of different people will use for local testing. I'm using LightTPD 1.4.30-4-IPv6-Win32-SSL and PHP 5.2. So far I've created this folder structure:

      tools/
        LightTPD/
          htdocs/
        PHP/

    I set up PHP as CGI and the document root as server_root + "/htdocs". It works fine (well, it's slow but I don't want to bother with FastCGI for now :) ). My problem is when I try to put the htdocs outside of the LightTPD folder, like this:

      htdocs/
      tools/
        LightTPD/
        PHP/

    I update the document root to server_root + "/../../htdocs" and while static HTML pages work fine, PHP pages stop working (they return "No input file specified"). I literally just change the document root; I didn't change anything in php.ini or anywhere else. Please also note that I left doc_root, user_dir and cgi.force_redirect at their default values in php.ini, and it works when htdocs is inside LightTPD, but not when I move it outside. Any idea why it's breaking? Here's my lightTPD.conf:

      server.modules = (
        "mod_access",
        "mod_accesslog",
        "mod_alias",
        "mod_cgi",
        "mod_status",
      )

      include "variables.conf"
      include "mimetype.conf"

      # THIS WORKS
      server.document-root = server_root + "/htdocs"
      # THIS DOESN'T
      #server.document-root = server_root + "/../../htdocs"

      server.upload-dirs = ( temp_dir )
      index-file.names = ( "index.php", "index.pl", "index.cgi", "index.cml", "index.html", "index.htm", "default.htm" )
      server.event-handler = "libev"
      url.access-deny = ( "~", ".inc" )

      $HTTP["url"] =~ "\.pdf$" {
        server.range-requests = "disable"
      }

      static-file.exclude-extensions = ( ".php", ".pl", ".cgi" )
      server.errorlog = server_root + "/logs/error.log"

      ######### Options that are good to be but not neccesary to be changed #######
      dir-listing.activate = "enable"

      #### CGI module
      cgi.assign = ( ".php" => server_root + "/../PHP/php-cgi.exe" )

      status.status-url = "/server-status"
      status.config-url = "/server-config"
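
    "No input file specified" from php-cgi usually means the script path lighttpd hands to PHP does not resolve. A hedged sketch of two things worth trying, on the assumption that the relative ".." document root is what breaks the path translation (the absolute path below is hypothetical):

        # lighttpd.conf: try an absolute document root instead of a relative ".." path
        server.document-root = "C:/path/to/htdocs"

        ; php.ini: make sure PHP does not restrict or re-translate the script path
        doc_root =
        cgi.fix_pathinfo = 1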

    Read the article

  • Symantec Antivirus Corporate -- two problems

    - by Alex C.
    We have a Windows network with a domain and about 50 clients. A few months ago, we installed Symantec Antivirus, Corporate Edition ver. 10.1.8.8000. There are two problems. The larger problem is that the software isn't very good at stopping viruses. In the last month, four different machines have become infected with those viruses that masquerade as antivirus software. Two machines I was able to clean with MalWareBytes. The other two were hopeless, and I had to reinstall Windows. Is there something I can do to make the Symantec product more effective? As far as I can tell, it successfully updates definitions nightly and pushes the definitions to the clients. The smaller problem is that the Symantec client applications sometimes initiate scans at random (and inappropriate) times. One of my co-workers complained to me yesterday that her computer was running very slow. I looked at the scan history and found that Symantec had scanned the computer three times during the past two days, and each time during the workday. No threats were found. Not sure why it's doing this, but I'd like it to stop. Any help would be appreciated. Thanks.

    Read the article

  • What's the difference between pulling from a branch into master and pushing that branch onto master?

    - by Justin808
    In TortoiseGit, on the repository, I right-click and select Sync. At the top of the dialog there are options for Local Branch and Remote Branch. If the local branch is named DeveloperA and the remote branch is master and I do a push, what happens? If the local branch is master and the remote branch is DeveloperA and I pull, what happens? If I am on the master branch, right-click, select Merge and change the From to be my DeveloperA branch, what happens? If I try to push from master to the remote master and the remote has been updated, git stops and tells me to pull. It seems that if I push from DeveloperA to master it doesn't stop, it just clobbers; is that correct? We're having an issue using git where the remote master branch gets clobbered at times and we are trying to figure out why. For example, there is a developer working on his DeveloperA branch. He'll pull from master to get any updates, then push to master to push out his changes. But there are times when the push lists more files in the Out Commit list than he's edited. The odd thing is he can't revert those files, as git says they are up to date and have not been modified. Yet when he pushes, git pushes the files out. The problem is that if there are changes between his pull and push, the changes get clobbered.
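
    For reference, a rough sketch of the plain git commands that the TortoiseGit sync dialog appears to map to (the mapping is an assumption; branch names are taken from the question):

        # push local DeveloperA onto the remote master branch
        git push origin DeveloperA:master   # refused as non-fast-forward unless master's history is contained in DeveloperA

        # pull the remote DeveloperA branch into the local master
        git checkout master
        git pull origin DeveloperA          # fetch + merge

        # merge a local branch into master without touching the remote
        git checkout master
        git merge DeveloperA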

    Read the article

  • How to reduce celeryd memory consumption?

    - by Gringo Suave
    I'm using celery 2.5.1 with django on a micro ec2 instance with 613mb memory and as such have to keep memory consumption down. Currently I'm using it only for the scheduler "celery beat" as a web interface to cron, though I hope to use it for more in the future. I've noticed it is the biggest consumer of memory on my micro machine even though I have configured the number of workers to one. I don't have many other options set in settings.py:

      import djcelery
      djcelery.setup_loader()
      BROKER_BACKEND = 'djkombu.transport.DatabaseTransport'
      CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
      CELERY_RESULT_BACKEND = 'database'
      BROKER_POOL_LIMIT = 2
      CELERYD_CONCURRENCY = 1
      CELERY_DISABLE_RATE_LIMITS = True
      CELERYD_MAX_TASKS_PER_CHILD = 20
      CELERYD_SOFT_TASK_TIME_LIMIT = 5 * 60
      CELERYD_TASK_TIME_LIMIT = 6 * 60

    Here's the details via top:

      PID   USER   NI  CPU%  VIRT  SHR   RES  MEM%  Command
      1065  wuser  10  0.0   283M  4548  85m  14.3  python manage_prod.py celeryd --beat
      1025  wuser  10  1.0   577M  6368  67m  11.2  python manage_prod.py celeryd --beat
      1071  wuser  10  0.0   578M  2384  62m  10.6  python manage_prod.py celeryd --beat

    That's about 214mb of memory (and not much shared) to run a cron job occasionally. Have I done anything wrong, or can this be reduced about ten-fold somehow? ;) Update: here's my upstart config:

      description "Celery Daemon"
      start on (net-device-up and local-filesystems)
      stop on runlevel [016]
      nice 10
      respawn
      respawn limit 5 10
      chdir /home/wuser/wuser/
      env CELERYD_OPTS=--concurrency=1
      exec sudo -u wuser -H /usr/bin/python manage_prod.py celeryd --beat --concurrency=1 --loglevel info --logfile /var/tmp/celeryd.log

    Update 2: I notice there is one root process, one user child process, and two grandchildren from that. So I think it isn't a matter of duplicate startup.

      root   34580  1556   sudo -u wuser -H /usr/bin/python manage_prod.py celeryd
      wuser  577M   67548  +- python manage_prod.py celeryd --beat --concurrency=1
      wuser  578M   63784  +- python manage_prod.py celeryd --beat --concurrency=1
      wuser  271M   76260  +- python manage_prod.py celeryd --beat --concurrency=1
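
    If the instance really only needs the scheduler, one hedged option (assuming django-celery 2.5 still ships the separate celerybeat management command) is to run the beat process on its own and start a worker only when tasks actually have to execute, for example:

        # scheduler only: no worker pool resident in memory
        python manage_prod.py celerybeat --loglevel=info --logfile=/var/tmp/celerybeat.log

        # a single-process worker, started only when tasks actually need to run
        python manage_prod.py celeryd --concurrency=1 --loglevel=info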

    Read the article

  • Using nginx to rewrite urls inside outgoing responses

    - by Kev
    We have a customer with a site running on Apache. Recently the site has been seeing increased load and as a stop gap we want to shift all the static content on the site to a cookieless domains, e.g. http://static.thedomain.com. The application is not well understood. So to give the developers time to amend the code to point their links to the static content server (http://static.thedomain.com) I thought about proxying the site through nginx and rewriting the outgoing responses such that links to /images/... are rewritten as http://static.thedomain.com/images/.... So for example, in the response from Apache to nginx there is a blob of Headers + HTML. In the HTML returned from Apache we have <img> tags that look like: <img src="/images/someimage.png" /> I want to transform this to: <img src="http://static.thedomain.com/images/someimage.png" /> So that the browser upon receiving the HTML page then requests the images directly from the static content server. Is this possible with nginx (or HAProxy)? I have had a cursory glance through the docs but nothing jumped out at me except rewriting inbound urls.
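
    nginx can rewrite response bodies with ngx_http_sub_module (it has to be compiled in via --with-http_sub_module). A sketch of the proxy location, noting that older nginx versions accept only one sub_filter per location and that the upstream response must arrive uncompressed; the upstream name is hypothetical:

        location / {
            proxy_pass http://apache_backend;
            proxy_set_header Accept-Encoding "";         # ensure the body arrives uncompressed

            sub_filter 'src="/images/' 'src="http://static.thedomain.com/images/';
            sub_filter_once off;                         # replace every occurrence, not just the first
        }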

    Read the article

  • How to distribute multiple executions of an app across many machines

    - by Salec
    I've got a simulation app (64-bit windows) that runs without any user interaction. This app gathers information and pushes it to a remote MS SQL Server. What I'd like to do is execute this simulation as many times as I can on multiple machines after our nightly build has finished and it has passed the test suite. If possible I'd love to have the ability to configure it to stop after x total runs or if the entire batch has taken over y hours. I've tried using Visual Studio's built in test framework since we already have a test lab set up with multiple agents. I created a single unit test that simply runs the simulation then I created an ordered test and added that single test multiple times (from what I gather, this is the only way to execute the same unit test more than once). I found that ordered tests are only run on a single agent and not distributed which is very limiting. We use TeamCity to perform our nightly builds and I suspect it's possible to implement this on top of that, but I'm fairly new to TeamCity. We also have Jenkins and Bamboo available and I'm open to any other software that would get the job done presuming it runs on a 64-bit Windows OS. Any suggestions?

    Read the article

  • What diagnostics are safe to run on an SSD drive?

    - by Peter Mounce
    I have a MacBook Pro (late 2010) with a Crucial RealSSD 256GB in it; 60GB is given to the Windows 7 x64 Boot Camp partition. I have a USB-attached 500GB drive for (most) data. In the last day or so, I've had a BSOD and several OS freezes (both Mac OS X 10.6.6 and Win7). The system in both cases will boot fine (at the moment!) and then run things fine; then some time later a program will stop responding, followed shortly thereafter by the system as a whole, forcing a reboot. This smacks to me of a storage problem. Given that I have an SSD and not a regular magnetic HDD, what are my next steps, in both OSes? I haven't seen anything pertinent in Windows' event log. I'm not sure of the equivalent place to look in OS X; I've never had reason to find out. What are my options for attempting to save my data from the SSD to another drive, given that after some small amount of time (e.g. half an hour) the OS stops responding? What are the recommended next steps?
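
    Reading SMART attributes and running the drive's built-in self-tests is non-destructive, so it is a safe first diagnostic on an SSD. With smartmontools installed on each side, the checks look roughly like this (the device names are assumptions for this machine):

        rem Windows 7 (Boot Camp) side: full SMART dump, then the drive's built-in short self-test
        smartctl -a /dev/sda
        smartctl -t short /dev/sda

        # OS X side (smartmontools from MacPorts or similar)
        smartctl -a /dev/disk0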

    Read the article

  • Active Directory FRS problems. 13508 error and other problems

    - by user59232
    I have 3 domain controllers. We will call them DC1, DC2 and DC3. DC3 and DC2 show Event ID 13508 in their FRS logs with no follow-up event (13509, I think) to say the error has been fixed. DC1's FRS log, no matter what you do, never shows any events besides FRS service stopped and started. DC1 holds the SYSVOL that needs to be replicated to the other DCs. The other DCs' SYSVOL folders are empty. I have tried the BurFlags method of fixing this but I haven't had any luck. My procedure for that was to stop the FRS service on all DCs, then set BurFlags on DC1 to D4 and BurFlags on the other two DCs to D2. Started FRS on DC1, and the only events I see in DC1's FRS event log are service stopped and service started messages. This fact is leading me to believe that something is wrong with FRS on DC1. I believe there should be events 13553 and 13516 in the FRS event logs after an authoritative SYSVOL restore. The other two DCs do not have anything in their SYSVOL, otherwise I would have made one of them the authoritative SYSVOL. DC1 is MS Server 2003 Enterprise Edition SP2, DC2 is MS Server 2003 Standard Edition SP1, DC3 is MS Server 2003 R2 Standard Edition SP2. I did not set up this domain originally but I am now the administrator of it, so I don't have a lot of background on why certain things may have been done in the past. My main goal is to fix these issues so I am better prepared to decommission DC1 and add a DC running Server 2008 to my domain. Thanks.
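
    For reference, a sketch of the BurFlags procedure driven from the registry (the key is the one documented in KB290762; run with FRS stopped, D4 on DC1 only, D2 on the other two):

        net stop ntfrs

        rem authoritative copy (DC1 only)
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup" /v BurFlags /t REG_DWORD /d 0xD4 /f

        rem non-authoritative copy (DC2 and DC3)
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup" /v BurFlags /t REG_DWORD /d 0xD2 /f

        net start ntfrs
        rem then watch the FRS event log for 13553/13554 followed by 13516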

    Read the article

  • How to force the start and end date of a task in Microsoft Project to be on the same day?

    - by Hauke P.
    I have a task called "Interview person A about topic X". The task's duration is set to 2 hours. The start date of the task should automatically be calculated, taking dependencies and resource availabilities into account. My question boils down to: how can I force this task to start and end on the same date? Background: in my case, Microsoft Project sets the start date to a Friday at 5pm. As my working hours are set to 8am to 12pm and 1pm to 6pm (Mon-Fri), Microsoft Project "splits up" the task at 6pm on Friday and plans to continue it at 8am on the following Monday. However, it does not make any sense to stop the interview on a Friday and restart it on Monday, so the automatic suggestion is not helpful in this case. That's why I'm looking for a way to force the task to start and end on the very same day. (In my example, I'd like Microsoft Project to delay the start date of the task until Monday 8am, as this is the first time slot in which the task "fits in completely".) By the way: I have lots of such cases... for that reason it would be really great if there was a solution that doesn't just deal with this single special case.

    Read the article
