Search Results

Search found 10023 results on 401 pages for 'manage processes'.

Page 177/401 | < Previous Page | 173 174 175 176 177 178 179 180 181 182 183 184  | Next Page >

  • Best way to replicate / mirror 100s of databases in SQL 2005

    - by mrwayne
    Hi, I currently host around 400-500 SQL 2005 databases of varying sizes (1-10 GB each). I am aware of most of the available methods and the general pros/cons of mirroring, log shipping, replication and clustering, but I am not aware of how well they tend to perform when employed at the scale I have specified (400-500 unique databases). Does anyone have any good advice on what is likely the best method for having the ability to fail over to another server with this sort of setup? Failover does not need to be immediate; I'm just looking for something better than taking backups every day and moving them to storage. I'm preferably looking for something that would also make it easy to manage the databases in bulk (as opposed to one at a time). Thanks for your input!

    Read the article

  • SCCM 2007: managing hosts in a non-trusted forest

    - by BoxerBucks
    I have an implementation of SCCM 2007 in forest "A" that manages hosts in that Windows 2008 forest. There is another forest/domain, "B", with which I have no trust, and I need to manage hosts in it as well. I don't need to push out clients from the SCCM console; I am going to install them manually. I just need the hosts in domain "B" to connect back to forest/domain "A" for management purposes. To date, I have not added any AD objects to domain "B" for hosts to query for site, SLP or management point info. I am installing the hosts with the command line:

        ccmsetup.exe /mp:SCCM_Server /site:mysite

    where SCCM_Server is the FQDN of my SCCM server (which is resolvable by the client). There are no ACLs between the two servers. From the logs, I can see the install complete, and the client tries to query the local AD for the site info for "mysite", but it can't find it, so it stops and never connects. Can anyone give me some direction as to how this should be set up?

    Read the article

  • Which user account should be used for WSGIDaemonProcess?

    - by Nathan S
    I have some Django sites deployed using Apache2 and mod_wsgi. When configuring the WSGIDaemonProcess directive, most tutorials (including the official documentation) suggest running the WSGI process as the user in whose home directory the code resides. For example:

        WSGIScriptAlias / /home/joe/sites/example.com/mod_wsgi-handler.wsgi
        WSGIDaemonProcess example.com user=joe group=joe processes=2 threads=25

    However, I wonder if it is really wise to run the WSGI daemon process as the same user (with its attendant privileges) who develops the code. Should I set up a service account whose only privilege is read-only access to the code in order to have better security? Or are my concerns overblown?
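
    For reference, one possible arrangement (a minimal sketch, not from the question; the wsgi-svc account name is an assumption chosen for illustration) is to create a dedicated, unprivileged service account, give it read-only access to the code, and point WSGIDaemonProcess at it while the code stays owned by the developer account:

        # Hypothetical service account; any unprivileged name works.
        sudo useradd --system --no-create-home --shell /usr/sbin/nologin wsgi-svc

        # Let the service account read (but not write) the code.
        sudo chgrp -R wsgi-svc /home/joe/sites/example.com
        sudo chmod -R g+rX,o-rwx /home/joe/sites/example.com

        # Apache virtual host configuration:
        # WSGIScriptAlias / /home/joe/sites/example.com/mod_wsgi-handler.wsgi
        # WSGIDaemonProcess example.com user=wsgi-svc group=wsgi-svc processes=2 threads=25
        # WSGIProcessGroup example.com

    The daemon then runs with no write access to the source tree, which limits what a compromised application can do to the deployed code.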

    Read the article

  • MySQL is killing the server IO.

    - by OneOfOne
    I manage a fairly large/busy vBulletin forum (~2-3k requests per second, running on the gigenet cloud). The database is ~10 GB (~9 million posts, ~60 queries per second). Lately MySQL has been grinding the disk like there's no tomorrow according to iotop, and it is slowing the site. The last idea I can think of is using replication, but I'm not sure how much that would help, and I'm worried about database sync. I'm out of ideas; any tips on how to improve the situation would be highly appreciated.

    Specs: Debian Lenny 64bit, ~12Ghz (6 cores) CPU, 7520gb RAM, 160gb disk. Kernel: 2.6.32-4-amd64. mysqld Ver 5.1.54-0.dotdeb.0 for debian-linux-gnu on x86_64 ((Debian)).

    Other software: vBulletin 3.8.4, memcached 1.2.2, PHP 5.3.5-0.dotdeb.0 (fpm-fcgi) (built: Jan 7 2011 00:07:27), lighttpd/1.4.28 (ssl). PHP and vBulletin are configured to use memcached.

    MySQL settings:

        [mysqld]
        key_buffer = 128M
        max_allowed_packet = 16M
        thread_cache_size = 8
        myisam-recover = BACKUP
        max_connections = 1024
        query_cache_limit = 2M
        query_cache_size = 128M
        expire_logs_days = 10
        max_binlog_size = 100M
        key_buffer_size = 128M
        join_buffer_size = 8M
        tmp_table_size = 16M
        max_heap_table_size = 16M
        table_cache = 96
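
    Not from the question, but one quick, low-risk check (a sketch, assuming the vBulletin 3.x tables are MyISAM, which is typical for that version): see whether the 128M key buffer is thrashing for a ~10 GB dataset by comparing key-cache misses to requests using standard status counters.

        -- Run in the mysql client.
        SHOW GLOBAL STATUS LIKE 'Key_read%';
        -- Key_reads / Key_read_requests is the key-cache miss rate; a ratio
        -- well above ~1% under load usually means the key buffer is too
        -- small for the index working set and MySQL is going to disk.
        SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';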

    Read the article

  • Apache hanging when MaxClients is reached

    - by Ash White
    My Apache 2.2 (prefork MPM) is hanging when MaxClients is reached, rather than queueing up requests and serving them when child processes become free. When this happens, the web server is totally unresponsive until it is manually restarted. The server stack is Ubuntu 8, MySQL 5, PHP 5. Hardware is dual Xeons (2.8 GHz) with 2GB of RAM. It serves 30,000-50,000 pageviews per day. Static images, CSS, and JS are offloaded to a separate server, and PHP is cached using eAccelerator. The HTML output of many pages is cached to the filesystem. Relevant Apache directives:

        KeepAlive On
        MaxKeepAliveRequests 50
        KeepAliveTimeout 2
        StartServers 2
        MaxClients 150
        MinSpareThreads 25
        MaxSpareThreads 75
        ThreadsPerChild 25
        MaxRequestsPerChild 2000
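
    Worth noting (not part of the original post): MinSpareThreads, MaxSpareThreads and ThreadsPerChild are worker-MPM directives and are ignored by prefork, so the prefork pool above is effectively tuned only by StartServers and MaxClients. A minimal prefork-oriented sketch, with values chosen purely for illustration, would look more like this:

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      20
            ServerLimit         150
            MaxClients          150
            MaxRequestsPerChild 2000
        </IfModule>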

    Read the article

  • How to interpret output from the Linux 'top' command?

    - by Ali
    Following a discussion made HERE about how PHP-FPM consumes memory, I have just run into a problem reading the memory figures in the top command. Here is a screenshot of my top just after restarting PHP-FPM. Everything is normal: about 20 PHP-FPM processes, each consuming 5.5MB of memory (0.3% of total). Here is the aged server right before a restart of PHP-FPM (one day after the previous restart). Here, we still have about 25 PHP-FPM processes, each with double the memory usage (10MB, indicating 0.5% of total). Thus, the total memory used should be 600-700 MB. Then why has 1.6GB of memory been used?
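
    A quick way to cross-check top's per-process figures (a sketch; note that RSS double-counts memory shared between FPM workers, which is often why summing the per-process numbers does not match the system-wide total):

        # Sum the resident set size of all php-fpm workers, in MB.
        ps -C php-fpm -o rss= | awk '{ sum += $1 } END { printf "%.1f MB\n", sum/1024 }'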

    Read the article

  • Set up VLAN-agnostic ports on HP ProCurve 1810G (Ingress Filter, Trunking)

    - by Thomas
    I am wondering if it is possible to configure some ports of the web-managed ProCurve Switch 1810G to participate in all VLAN traffic, even if no VLAN with that ID has been set up inside the switch. The issue is that I have two virtualization servers that will use as-yet-unknown VLANs from a certain range to communicate with each other, but the range is much larger than the 64 VLANs this switch can manage. The switch also offers static and LACP link trunks, but I guess the ingress filter that drops packets with unconfigured VLAN IDs will apply there as well? A separate unmanaged switch connecting the two hosts and one ProCurve port would work, but maybe I don't have to do that? Thanks

    Read the article

  • Windows activation on a Virtual Machine (Physical->VM)

    - by Daisetsu
    I backed up a number of laptops to virtual machines before they were re-purposed, in case I need the data at some later time. While the physical-to-virtual conversions worked fine, I am encountering issues on some of the VMs. When I boot them, I get an error message saying I must activate Windows in order to log in. This is expected, because the hardware changed (from physical to virtualized hardware). I click the OK button and expect to be prompted with ways to activate; Windows sits there for quite a while, then tells me that "Windows has already been activated". I click OK at that message and get taken back to the beginning, where I am asked to activate Windows again. I have done some fairly intensive googling but haven't been able to find a real solution. EDIT: The laptops with the issue are two Sony Vaios; I believe they have the OEM version of the OS originally installed at the factory.

    Read the article

  • Windows 2003 Enterprise Server becomes unresponsive occasionally

    - by Derek Ivey
    We're experiencing issues with our Windows 2003 server, which runs SQL Server 2005 SP1. We notice that sometimes the entire server becomes unresponsive, and I captured a screenshot of Task Manager when this happened. I noticed that the processes are not displayed during this time, and all of the memory information and handle counts disappear (as shown in the screenshot). Does anyone have any ideas as to what could be wrong with this system? I'm planning to take it down over the weekend to run Memtest86. Screenshot: http://dl.dropbox.com/u/2058/windows_screenshot.png The issue is resolved by a reboot, but I'd like to figure out the cause and get it fixed. I also tried to run a ping when this occurred and got the following error in the event log: "Application popup: ping.exe - Application Error: The application failed to initialize properly (0xc0000142). Click on OK to terminate the application." Thanks, Derek

    Read the article

  • Barracuda spam filter alternative - virtualization/appliance friendly?

    - by ewwhite
    I've sold and deployed Barracuda spam and web filters for years. I've always thought that the functionality was good (Barracuda Central, easy interface, effective filtering), but the hardware on the entry- to mid-range units is a weak point: they have single power supplies, no RAID and limited monitoring support. Personally, I think Barracuda would make a killing selling their software as a VMware appliance, but I'm looking for something similar that I can deploy as a consultant yet will be easy for customers to manage. It should have support for server-grade hardware or the ability to be deployed as a virtual machine. Is there anything out there that's close?

    Read the article

  • systemd-initiated uwsgi process shuts down after a while

    - by Calvin Cheng
    So I wrote this simple systemd service script:

        [Unit]
        Description=uwsgi server script

        [Service]
        User=web
        Group=web
        WorkingDirectory=/var/www/prod/myproject/releases/current
        ExecStart=/bin/bash -c 'source ~/.bash_profile; workon myproject; uwsgi --ini /var/www/prod/myproject/releases/current/myproject/uwsgi_prod.ini'

        [Install]
        WantedBy=multi-user.target

    which works fine - it starts up and I can see my uwsgi processes in htop. However, it inexplicably shuts down after being idle for 5 minutes. If I start this process manually in a bash console by executing, as the web user:

        source ~/.bash_profile
        workon myproject
        uwsgi --ini /var/www/prod/myproject/releases/current/myproject/uwsgi_prod.ini

    my process does not die after being idle. What could the problem be?
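
    Not from the original post, but for comparison, a minimal variant of the unit that invokes the virtualenv's uwsgi binary directly (the /var/www/.virtualenvs path below is an assumption - adjust to wherever workon actually puts the environment) and asks systemd to restart the service if it ever exits:

        [Unit]
        Description=uwsgi server script

        [Service]
        User=web
        Group=web
        WorkingDirectory=/var/www/prod/myproject/releases/current
        # Hypothetical virtualenv location; calling the venv's uwsgi directly
        # avoids depending on ~/.bash_profile and virtualenvwrapper at boot.
        ExecStart=/var/www/.virtualenvs/myproject/bin/uwsgi --ini /var/www/prod/myproject/releases/current/myproject/uwsgi_prod.ini
        Restart=on-failure

        [Install]
        WantedBy=multi-user.target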

    Read the article

  • What's the difference between keepalived and corosync, others?

    - by Matt
    I'm building a failover firewall for a server cluster and have started looking at the various options. I'm more familiar with CARP on FreeBSD, but I need to use Linux for this project. Searching Google has produced several different projects, but no clear information about the features they provide. CARP gave virtual interfaces that fail over; I am not really clear on whether that's what corosync does, or whether that's what pacemaker does. On the other hand, I did manage to get keepalived working. However, I noted that corosync provides native support for InfiniBand, which would be useful for me. Perhaps someone could shed some light on the differences between: corosync, keepalived, pacemaker, heartbeat. Which product would be the best fit for router failover?
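
    For reference (not from the question): the CARP-style "virtual interface that fails over" role is what keepalived's VRRP instances provide, while corosync and heartbeat are cluster messaging layers and pacemaker is the resource manager that runs on top of them and can move a virtual IP as one of many managed resources. A minimal keepalived sketch for a floating gateway address - interface name, router ID and addresses are made up for illustration:

        vrrp_instance GATEWAY {
            state MASTER            # the peer router uses state BACKUP
            interface eth0
            virtual_router_id 51
            priority 150            # use a lower priority on the backup node
            advert_int 1
            virtual_ipaddress {
                192.168.1.1/24
            }
        }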

    Read the article

  • How to unlock files using handle.exe and process name?

    - by Radek
    I tried Unlocker 1.9.1 but it doesn't work correctly for me on Windows 7 (it worked OK on Windows XP), and I also tried LockHunter 2.0.2.103 x64 and reported a bug, but... LockHunter actually unlocks the file from the GUI, not from the command line. So I want to use handle.exe by Sysinternals to unlock one file, "TestPro.log". I know the absolute path if it helps. I can list all processes that have locked the file by executing:

        C:\Windows\system32>c:\edutester\progs\handle testpro.log
        java.exe  pid: 2120  type: File  338: C:\Users\Public\TestPro\TestPro Automation Framework\Logs\TestPro.log
        java.exe  pid: 1004  type: File  934: C:\Users\Public\TestPro\TestPro Automation Framework\Logs\TestPro.log

    What I need to know is how to unlock the file using the above info from the command line, automatically. No user intervention is possible. Windows 7 64bit, Microsoft Windows [Version 6.1.7601]
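
    For what it's worth, a sketch using handle.exe's documented switches and the PIDs and handle values from the output above: handle.exe can close a specific handle with -c, given the hexadecimal handle value and the owning PID, and -y suppresses the confirmation prompt so it can run unattended. The handle values (338, 934) are hex and change between runs, so a fully automated script would have to re-parse the handle output each time before closing.

        REM Close handle 338 in PID 2120 and handle 934 in PID 1004 without prompting.
        c:\edutester\progs\handle.exe -c 338 -p 2120 -y
        c:\edutester\progs\handle.exe -c 934 -p 1004 -y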

    Read the article

  • error: cannot fork() for status: Resource temporarily unavailable (git)

    - by Elnaz Shahmehr
    Whenever I try to do anything - add, remove, pull, push - in my git repository (hosted on GitHub), I just get this error in my terminal. Thanks in advance!

        selnaz:iOS-Tidinfo Lnaz$ git add .
        error: cannot fork() for status: Resource temporarily unavailable
        fatal: Could not run git status --porcelain
        fatal: git status --porcelain failed
        fatal: git status --porcelain failed
        fatal: git status --porcelain failed
        fatal: git status --porcelain failed
        fatal: git status --porcelain failed
        fatal: git status --porcelain failed

    Edit:

        selnaz:iOS-Tidinfo Lnaz$ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 256
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 709
        virtual memory          (kbytes, -v) unlimited

    Edit 2:

        selnaz:iOS-Tidinfo Lnaz$ ps xfu | wc -l
        ps: illegal option -- f
        usage: ps [-AaCcEefhjlMmrSTvwXx] [-O fmt | -o fmt] [-G gid[,gid...]]
                  [-u] [-p pid[,pid...]] [-t tty[,tty...]] [-U user[,user...]]
               ps [-L]
               0
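
    Not part of the question, but a quick way to see whether the "max user processes" limit of 709 shown above is actually being hit, and to raise the soft limit for the current shell (a sketch for macOS, where ps has no -f flag; the 1000 value is an arbitrary example that must stay at or below the hard limit):

        # Count all running processes (BSD/macOS ps syntax).
        ps -ax | wc -l

        # Show the hard ceiling for user processes, then raise the soft
        # limit for this shell session only and retry the failing command.
        ulimit -Hu
        ulimit -u 1000
        git status --porcelain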

    Read the article

  • Copying files locally on a network folder

    - by Altay_H
    I noticed that copying a large file from one location on a network drive to another location on the same network drive takes much longer than copying it locally. Instead of copying the file locally, the network computer sends the file to my remote computer, which sends it back to the same network computer. This means the files are being transferred over the network completely unnecessarily. Is there a way to fix this issue? It's becoming a real hassle to manage the video files on my network drive. Note: This is the case with both Windows and Linux (using Samba) network folders.

    Read the article

  • Hyper-V Manager: right-clicking on remote VM crashes MMC snap-in

    - by Greg Bray
    I have a Windows Server 2008 R2 Enterprise SP1 machine that I log into and use to manage virtual machines running on multiple Hyper-V servers on our domain. Sometimes, when I right-click on a remote VM, Hyper-V Manager will crash and display an error message (full details below). If I use the Actions menu on the lower right instead, it works just fine, but for some reason right-clicking causes MMC to stop working. Is there any way to fix this issue? Here are the full details of the error message:

        Description: Stopped working

        Problem signature:
          Problem Event Name:    CLR20r3
          Problem Signature 01:  mmc.exe
          Problem Signature 02:  6.1.7600.16385
          Problem Signature 03:  4a5bc808
          Problem Signature 04:  Microsoft.Virtualization.Client
          Problem Signature 05:  6.1.0.0
          Problem Signature 06:  4ce7c9e3
          Problem Signature 07:  342
          Problem Signature 08:  1f
          Problem Signature 09:  System.OverflowException
          OS Version:            6.1.7601.2.1.0.274.10
          Locale ID:             1033

        Read our privacy statement online: http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409
        If the online privacy statement is not available, please read our privacy statement offline: C:\Windows\system32\en-US\erofflps.txt

    Read the article

  • Need script to redirect STDIN & STDOUT to named pipes

    - by user54903
    I have an app that launches an authentication helper (my script) and uses STDIN/STDOUT to communicate. I want to redirect STDIN and STDOUT of this script to two named pipes for interaction with another program, e.g. SCRIPT_STDIN > pipe1 and SCRIPT_STDOUT < pipe2. Here is the flow I'm trying to accomplish:

    [Application] - launches the helper script, writes to the helper's STDIN, reads from the helper's STDOUT (example: STDIN: username,password; STDOUT: LOGIN_OK)
    [Helper Script] - reads STDIN (data from the app) and forwards it to PIPE1; reads from PIPE2 and writes that back to the app on STDOUT
    [Other Process] - reads from PIPE1, processes the input, and returns results to PIPE2

    The cat command can almost do what I want. If there were an option to copy STDIN to STDERR, I could make cat do this with a single command (assuming a fictitious option -e that echoes to STDERR rather than STDOUT):

        cat -e <PIPE2 2>PIPE1

    (read from PIPE2 and write it to STDOUT; copy the input, which would normally go to STDERR, to PIPE1)
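
    A minimal sketch of a helper implementing the flow described above with two ordinary cat processes instead of one (the pipe paths are placeholders, and it assumes both FIFOs already exist, e.g. created with mkfifo):

        #!/bin/sh
        # Placeholder FIFO paths - adjust to the real pipe locations.
        PIPE1=/tmp/pipe1
        PIPE2=/tmp/pipe2

        # Copy everything the application writes to our STDIN into PIPE1.
        cat > "$PIPE1" &

        # Copy everything the other process writes to PIPE2 back to the
        # application via our STDOUT.
        cat < "$PIPE2"

        # Wait for the background copy to finish before exiting.
        wait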

    Read the article

  • Push Trunk or Push Branch to Production

    - by coffeeaddict
    I'd like to get an idea of some processes around builds with TortoiseSVN. Primarily I'm wondering whether you push (1) the mainline trunk, or (2) a branch, after QA has grabbed it into a local working copy and tested it, with some build process then pushing that branch.

    The problem I have is that I work at a craphole (hey, it is what it is, and I'm venting on Stack Overflow, you better believe it... it's a good way to relieve stress due to complete utter chaos) and we have no formal process for pushing anything. In fact, even worse, my boss codes directly against production. When I have changes, he pushes the mainline trunk.

    The problem comes when I make database changes on our dev database for, let's say, Branch A. Well... that breaks Branches B and C. I have like 4 projects going on at once! Why? Well, I will not get into that (chaos). So consequently I rename a table field, or add a field, or whatever in SQL Server, and voila, now my other branches have stale code pointing to the previous field names. So what happens? I have to merge certain changes to this branch, to that branch, etc. It feels like a war zone.

    Finally, what happens is I try to merge only the minimum. Let's say I made DB changes for Branch A's code but now I have to jump back onto Branch B's project. Well, I need to merge "some" of A's changes over for those database changes so that B's code is not going to bomb out and is able to work with the new table changes.

    Then the boss pushes the mainline trunk to production. Now I get an email: "you forgot to remove the hyperlink for this". That hyperlink was actually a feature I added in Branch A. But what he's talking about is that he just pushed the mainline trunk to production, which now has my merged changes from Branch B and any database scripts for Branch A - because remember, I had some DB changes, and if he pushes code, it has to reflect those changes, so some partial database changes must also be pushed even if they're not related to this project. Well... I missed the hyperlink, so kill me. Maybe that's why we need a build process, boss? (Sorry, it's been a nightmare working here, which is why this thread is getting so detailed.)

    Anyway, obviously this is a nightmare. And he dictates almost everything. The only reason we have source control is because I've worked on hard-core teams and that's the first thing you set up. Well, there was none here. The problem is I can't dictate the structure... he does, but he's never really used source control! My God. So we have no QA. This is an e-commerce website. That's another huge issue. So consequently I'm expected to be perfect. That means the mainline trunk needs to be perfect for whatever we're pushing, whatever branch feature. Is this ludicrous? wtf do I do?

    I could go off on him after trying so many times, tactfully, to explain that we need a freakin' build process (not just copying the local mainline trunk to production!), but I've tried to push for that before and got yelled at. So I gave up on that. So it would help me tremendously to know how others are pushing their source from Tortoise to production. I was not the person doing the pushing on previous teams, so really I'm not too versed in build processes. We are a fairly good-sized e-commerce site and get a couple million hits a month.

    Read the article

  • Open source alternative for Canonical Landscape?

    - by netvope
    From Canonical: Landscape is an easy-to-use systems management and monitoring service that enables you to manage multiple Ubuntu machines as easily as one through a simple Web-based interface. However, Landscape is not free. The Red Hat counterpart, Satellite, has a free version called Spacewalk, but it doesn't work on Ubuntu. (There is an attempt to port Spacewalk to Debian, but it doesn't look stable yet.) Is there any open-source alternative to Landscape? Better yet, is there any Spacewalk-like software that works on both RedHat-based and Debian-based systems?

    Read the article

  • IE8 Crash On Startup

    - by Wonko the Sane
    Hello All, I have a problem - IE8 has suddenly stopped working. I was doing some development work, and suddenly IE8 crashed every time I tried to use it. I've rebooted, but it still crashes every time I try to open it. There is no real indication as to what has gone wrong; it just says "Internet Explorer has stopped working". I've tried almost everything that I've seen when I've Googled it: I disabled all add-ons via Manage Add-ons, but it still crashes on startup, and I re-registered IEPROXY.DLL. The thing I don't understand is that if I start IE from the Run box with the -extoff option, it starts up. Isn't that the same as disabling all add-ons? Any ideas? Thanks, wTs

    Read the article

  • Is there an unintrusive antivirus program that I can ask to scan objects on demand only? [closed]

    - by Faken
    Possible Duplicate: Recommended offline on-demand virus scanners

    I'm looking for an unintrusive antivirus program that I can have run scans on suspicious objects on demand, and only on demand. Most other antivirus programs install many layers of protection with things running in the background, and perform regular updates and system scans at inconvenient times. I want an antivirus program where I can simply right-click an object and select "scan for viruses", and nothing more. Is there a reliable antivirus program out there that offers this, and only this, without the automatic updates, background processes, and intrusive automatic system scans? Note: this is for Windows.

    Read the article

  • Linux halts every few seconds

    - by Zeppomedio
    We're having an issue where one of our Linux boxes (Ubuntu 10.04 LTS, running on EC2 with a quadruple-large size, 68GB of RAM and 8 virtual cores at 3.25GHz each) freezes up every few seconds. Typing in an ssh session freezes, and running strace on one of the PostgreSQL processes usually shows:

        02:37:41.567990 semop(7831581, {{3, -1, 0}}, 1

    for a few seconds before it proceeds (it always gets stuck at that semop). OProfile shows that most of the time is spent in the kernel (60%) versus 37% in PostgreSQL. The result of these halts (which began suddenly a day ago) is that load on the box has gone from 0.7 to 10+, and it causes our entire stack to slow down. Any ideas on how to track down what's going on? iostat doesn't show the disks being particularly slow or overloaded, and top shows user CPU % spiking from 8% to about 40% whenever these stalls happen.

    Read the article

  • Amazon EC2: Not able to open web application even though the port is opened

    - by learner
    I have a t1.micro instance with a public DNS name that looks similar to ec2-184-72-67-202.compute-1.amazonaws.com (some numbers changed). On this machine, I am running a Django app:

        $ sudo python manage.py runserver --settings=vlists.settings.dev
        Validating models...
        0 errors found
        Django version 1.4.1, using settings 'vlists.settings.dev'
        Development server is running at http://127.0.0.1:8000/

    I have opened port 8000 through the AWS console. Now when I hit http://ec2-184-72-67-202.compute-1.amazonaws.com:8000 in Chrome, I get "Oops! Google Chrome could not connect to ...". What is it that I am doing wrong?
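
    One thing that stands out in the output above (an observation, not from the question): the development server is listening on 127.0.0.1, which is only reachable from the instance itself. A sketch of binding it to all interfaces so the security-group rule for port 8000 can actually be exercised:

        # Bind the Django development server to all interfaces instead of loopback.
        sudo python manage.py runserver 0.0.0.0:8000 --settings=vlists.settings.dev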

    Read the article

  • Low-cost, Flexible Log Aggregation [closed]

    - by Dan McClain
    I'm starting to have quite the collection of Ubuntu VMs that I must manage. I'm starting to investigate Puppet for managing the configuration of all of them, and apticron to let me know what's out of date. But the issue I feel I should deal with sooner rather than later is log aggregation. I'd like to stay in the free/open-source realm for now, seeing that we don't have much budget for something like Splunk yet. In addition to syslog, I would like to collect application-specific logs (we are running different apps on different machines, from nginx+Passenger for Rails, to Apache+Tomcat for Java, to PHP for ExpressionEngine, plus MySQL/PostgreSQL database servers), so that we can analyze the relevant data. For now, I'm just looking to get all the logs in one place.
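
    Not from the question, but as an illustration of the simplest "one place" setup: since Ubuntu ships rsyslog, each VM can forward everything to a central log host with one line of configuration, and application logs that don't already go through syslog can be fed in with the logger command. The hostnames and log paths below are placeholders:

        # /etc/rsyslog.d/50-forward.conf on each VM
        # '@@' forwards over TCP; a single '@' would use UDP instead.
        *.* @@loghost.example.com:514

        # Example of pushing an application log file into syslog by hand:
        # tail -F /var/log/nginx/error.log | logger -t nginx-error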

    Read the article

  • page allocation failure - am I running out of memory?

    - by mfriedman
    Lately I've been noticing entries like this one in the kern.log of one of my servers:

        Feb 16 00:24:05 aramis kernel: swapper: page allocation failure. order:0, mode:0x20

    This is what I'd like to know: What exactly does that message mean? Is my server running out of memory? Swap usage is quite low (less than 10%), and so far I haven't noticed any processes being killed because of lack of memory.

    Additional information:
    The server is a Xen instance (DomU) running Debian 6.0.
    It has 512 MB of RAM and a 512 MB swap partition.
    CPU load inside the virtual machine shows an average of 0.25.
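
    Not part of the original post, but a few read-only commands that help interpret messages like this one. As a rough reading (hedged, since exact flag values vary by kernel version): order:0 means a single page was requested, and mode 0x20 typically corresponds to an atomic allocation that cannot wait for memory to be reclaimed, so these failures can show up under momentary pressure even when swap is barely used.

        # Overall memory and swap picture.
        free -m

        # How much memory is immediately free vs. reclaimable page cache.
        grep -E 'MemFree|Buffers|Cached' /proc/meminfo

        # Per-order free-page counts; low numbers across the board suggest
        # the box is simply tight on memory rather than fragmented.
        cat /proc/buddyinfo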

    Read the article
