Search Results

Search found 21395 results on 856 pages for 'process accounting'.

Page 212/856 | < Previous Page | 208 209 210 211 212 213 214 215 216 217 218 219  | Next Page >

  • PHP hits 100% CPU and eats RAM at the same time Monday to Friday

    - by Daniel Samuels
    We run a learning platform for primary schools here in the UK and it's all been running extremely well. However, at around 4 PM Monday to Friday we see the same issue arise: 1-2 PHP threads spike to 100% CPU and gradually eat up RAM until the server(s) fall over.

    98%+ of our requests are HTTPS. These come into our Layer 7 load balancer, which decrypts the SSL data, adds the X-HTTP-Forwarded-For header and forwards the data on to an application server (we have two of those at the moment) on port 80. Our application servers run Varnish on port 80, which takes in the request from the load balancer and passes it through to Nginx on port 81. Nginx then works out which 'vhost' it needs to use and passes any PHP processing through to PHP-CGI, which is listening on a socket (managed through spawn-fcgi). There's an instance of Memcached running too; MySQL runs on a separate server / slave setup.

    Throughout the day the load will typically go no higher than 0.8 on either of the application servers, but at around 4 PM our problem arises. I've managed to run strace on a few of the offending threads when they cause the problem and I always see the same thing:

        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0
        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0

    This repeats indefinitely and never stops until you SIGKILL the process or the OOM killer kills it. There are no cron jobs scheduled to run at that time, and I don't have any way of seeing exactly which Nginx request is associated with the PHP process that is running.

    We are running PHP 5.3.14, which we upgraded to from 5.3.8 last week to rule out the older version being the problem. This issue has been going on for a few months now and we have no idea what is causing it. We deploy our software very frequently, so it's difficult to track down a specific release which may have started the problem, especially as we do not know the date of the first occurrence of this issue. Varnish is version 3.0.1, Nginx is 1.0.6 (which I understand is about a year old now), and our servers run CentOS release 5.7 (Final) on Intel i3 540s at 3.07 GHz with 8 GB of RAM. There's a discussion on the Debian mailing list about something very similar; you can find that here.

    Has anyone seen anything like this in the past? Does anyone have any ideas or suggestions? Is there a way of linking an Nginx request directly to a PHP thread? Is there a better way of seeing what the PHP process is doing? (I've seen GDB mentioned, though I'll have to recompile PHP.) Thanks!
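    One way to tie a spinning worker back to its request is to interrogate the process directly. A minimal diagnostic sketch, assuming the runaway PHP-CGI worker has PID 12345 (hypothetical):

        sudo strace -c -f -p 12345 &         # summarise syscall counts instead of streaming them
        sleep 10; sudo kill %1               # sample for ten seconds, then detach
        sudo lsof -nP -p 12345               # the FastCGI socket's peer is the Nginx worker that dispatched the request
        sudo gdb --batch -p 12345 -ex 'bt'   # with debug symbols, the backtrace names the looping function

    The repeating stat() on the zoneinfo file suggests the loop is inside a date/timezone code path, so a backtrace is likely the fastest way to narrow down which script is responsible.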

    Read the article

  • Office 2010 always reconfiguring itself on startup

    - by Rhys Gibson
    I've just installed Office 2010 Professional Plus (upgrading from Office 2007). It works fine under my admin account, but when I log in with my wife's non-admin account, every time I open a document or start an app (Word, Excel, Publisher ...) Office 2010 goes through its configuration process (starting with the standard install dialog and then running the bootstrap process) before it loads the app, which wastes 2-3 minutes. Once it's done this, the app runs fine, and I can make setting changes that are remembered when it restarts, but I can't work out why it thinks it needs to configure the app each time. Any thoughts?

    Read the article

  • File copying software to do this kind of work... in Windows 7 32 bit

    - by Senthil
    I need software (Windows 7, 32-bit) to help me with this process: I keep my documents, music, video clips, movies and pictures on my hard disk. They are not scattered around the system; everything lives inside C:\Senthil\. At the end of every week, I want to plug in an external hard disk and run a program that makes sure whatever is inside C:\Senthil\ is also present on the external disk: files deleted from C:\Senthil\ should be deleted there, new files should be copied, and so on. At the end of the process, every bit inside the source folder on my internal disk should be on my external disk.

    A couple of important requirements and points:

    - I do NOT need multiple or historic versions of my files. I only want the latest copy to be present in my "backup".
    - Incremental backup makes sense. If files were not touched since the last backup, they need not be copied.
    - The size of my folder runs into GBs and in a year or two will go into TBs, but I will make sure the external HDD is at least as big as my source folder.
    - I do not want it to run automatically, because if I accidentally delete a file in my source, an automatic run would delete the one in the backup (I know this is why we have versioning facilities). I just want to run it manually so that I am in control of when the backup is made and what is backed up, and I should be able to pick something from the backup and restore it to the source folder in the above situation.

    Is there any software that will let me do exactly this? I don't want any other "smart" facility of the software to interfere with this process. I know what I want and the software can keep its smartness to itself :D

    The main reason I am asking is that I am a software developer and could write this myself, but I am a little constrained by time at the moment and want to know if an existing program can do it. Kindly don't worry about earthquakes, fires or snowstorms and bring up the "in case of a natural disaster your backup will also be in the damage zone and will be lost" argument, because: I will have bigger things to worry about than my holiday memories; I don't think I will digitally store any life-ruining documents; and this backup is only to avoid the inconvenience of obtaining a new copy of stuff that I have, not to protect it against the end of the world. I am more worried about power surges in my area frying my system, hard disk failure, children who merrily hit Delete, teens who hit Shift + Delete, or myself getting a little careless at times!

    In short: is there a file/folder syncing software that listens to what I say and doesn't try to act smart? Please forgive me if I sound arrogant :)
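    For what it's worth, Windows 7 already ships with robocopy, which does exactly this kind of manual one-way mirror. A minimal sketch, assuming the external disk mounts as E: (a hypothetical drive letter):

        robocopy C:\Senthil E:\Senthil /MIR /R:2 /W:5

    /MIR mirrors the tree, copying new and changed files and deleting destination files that no longer exist in the source, so it should only be run when the source is known to be good; /R and /W merely shorten the default retry behaviour.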

    Read the article

  • Empty sshd_config file

    - by Thomas
    I run a CentOS 5 server with a LAMP stack. I was told this morning that the server was down and not serving web content. I tried to restart httpd, but it failed because another process was listening on port 443. I checked what was running on 443 using netstat, and it was sshd. I then looked at the sshd_config file to check the ports sshd was running on, but the file was completely blank. I then ran chkrootkit and it flagged nothing suspicious. What could have caused the sshd_config file to be blanked and the sshd service to be restarted? I would really value your thoughts. All the best.
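    A few first-response checks, sketched below; on CentOS, rpm -V compares installed files against the package database, which helps distinguish corruption from tampering:

        rpm -V openssh-server                 # flags size/checksum/mtime changes to packaged files
        netstat -tlnp | grep ':443'           # confirm which binary and PID hold port 443
        ls -l /etc/ssh/sshd_config            # when was the file truncated?
        grep sshd /var/log/secure | tail -50  # recent sshd restarts and logins

    An sshd rebound to 443 alongside a blanked config is a pattern worth treating as a possible compromise, even though chkrootkit came back clean.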

    Read the article

  • Hidden Gems: Accelerating Oracle Data Integrator with SOA, Groovy, SDK, and XML

    - by Alex Kotopoulis
    On the last day of Oracle OpenWorld, we had a final advanced session on getting the most out of Oracle Data Integrator through the use of various advanced techniques.

    The primary way to improve your ODI processes is to choose the optimal knowledge modules for your load and take advantage of the optimized tools of your database, such as Oracle Data Pump and similar mechanisms in other databases. Knowledge modules also allow you to customize tasks, letting you codify best practices that are consistently applied by all integration developers.

    The ODI SDK is another very powerful means to automate and speed up your integration development process. It lets you automate lifecycle management, code comparison, repetitive code generation and changes to your integration projects. The SDK is easily accessible through Java or scripting languages such as Groovy and Jython.

    Finally, all Oracle Data Integration products provide services that can be integrated into a larger Service-Oriented Architecture. This moves data integration from an isolated environment into an agile part of a larger business process environment. All Oracle data integration products can play a part in this: Oracle GoldenGate can integrate into business event streams by processing JMS queues or publishing new events based on database transactions. Oracle Data Integrator allows full control of its runtime sessions through web services, so that integration jobs can become part of business processes. Oracle Data Service Integrator provides a data virtualization layer over your distributed sources, allowing unified reading and updating of heterogeneous data without replicating and moving it. Oracle Enterprise Data Quality provides data quality services to cleanse and deduplicate your records through web services.

    Read the article

  • Ubuntu GRUB boot hangs on external USB drive

    - by schoetbi
    Hi, I just gave Xubuntu another try and installed it normally on an external USB hard disk. I have another hard disk inside the laptop with Windows XP on it. Now the problem: when I boot from the external drive, the GRUB 2 boot menu shows up and I see all installed bootable partitions, including Windows. I select Xubuntu and wait. When the Xubuntu logo shows up, the boot process hangs. Now the funny thing: when the logo shows up, I can unplug the USB drive and reconnect it real fast, and then the boot process continues! Since I am a Linux newbie I would appreciate every hint that can solve this, so that I can enjoy a smooth Linux boot :-)

    EDIT: GRUB version and kernel:

        tobias@ubuntu:~$ grub-install -v
        grub-install (GRUB) 1.98+20100804-5ubuntu3
        tobias@ubuntu:~$ uname -r
        2.6.35-23-generic

    Xubuntu is 9.10. Thanks
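    The symptom (the boot continuing once the drive is re-plugged) usually means the kernel stops waiting for the root device before the USB disk has finished enumerating. A hedged sketch of the common workaround, adding a root-device wait to the kernel command line; the 90-second value is only a starting point to tune:

        sudo nano /etc/default/grub
        # change the line to: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash rootdelay=90"
        sudo update-grub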

    Read the article

  • Error while removing the new kernel 2.6.37

    - by Tarek
    Hi! I tried to install the new kernel, but something went wrong and I'm trying to remove it now. The error message is:

        mhd@Tarek-Laptop:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          linux-image-2.6.37-020637-generic
        0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded.
        1 not fully installed or removed.
        After this operation, 111MB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 188780 files and directories currently installed.)
        Removing linux-image-2.6.37-020637-generic ...
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
        /etc/default/grub: 33: Syntax error: EOF in backquote substitution
        run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 2
        Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.37-020637-generic.postrm line 328.
        dpkg: error processing linux-image-2.6.37-020637-generic (--remove):
         subprocess installed post-removal script returned error exit status 1
        Errors were encountered while processing:
         linux-image-2.6.37-020637-generic
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The previous unsolved error is covered in this bug.
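    The line that actually aborts the removal is the shell error from /etc/default/grub ("EOF in backquote substitution"), i.e. an unclosed backquote around line 33 of that file, presumably left behind by an earlier edit. A minimal repair sketch:

        sudo sh -n /etc/default/grub   # parse-check only; reports the syntax error and line number
        sudo nano /etc/default/grub    # balance or delete the stray ` on line 33
        sudo apt-get install -f        # re-run the interrupted removal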

    Read the article

  • SQL Server Migration Assistant for Oracle problem

    - by Paul
    I've recently installed SSMA on my computer, and after connecting to both the Oracle instance (which holds the database to be converted) and the SQL Server, I've mapped the needed schemas from Oracle to MSSQL. The problem is that when I click on the report button for the assessment report, there's an error popping up:

        Assesment Error : Nothing to Process

    The output window states:

        Starting conversion...
        Analyzing metadata...
        Conversion finished with 0 errors, 0 warnings, and 0 informational messages.
        There is nothing to process.

    Has anyone got experience with SSMA? I can't figure out what I am doing wrong. Thank you.

    Read the article

  • Distributing application updates using SCCM 2007

    - by theraneman
    Hi all, if there are any System Center Configuration Manager (SCCM) users out there, please clarify my doubt here. I have used the ConfigMgr console to distribute a custom application to a client machine. Now I need to distribute some updated files for that application. Isn't it possible to add those files to the same package source used earlier and advertise again? Or should I use the SCCM software updates section for this? Not sure if it's only me, but the software distribution process looks much easier than the software updates process in SCCM 2007. Please let me know if there are any online tutorials which explain how to update a custom application. Any help much appreciated.

    Read the article

  • Is chroot the right choice for my use case?

    - by Anthony
    Backstory: I am working on setting up a Minecraft server and want to allow admins to have SSH access to the Minecraft server console and the appropriate server files, but not the whole system. The console provided by the Minecraft server is only available to the user that launched the process. In addition, the admins will need terminal access to some basic CLI tools such as wget, cp, mv, rm, and a text editor.

    Plan:

    - I have already set up the SSH side of things, requiring pre-shared keys and whatnot.
    - Set up a jailed environment in which all user activity will be contained.
    - Set up user accounts. The first account will be the minecraft user, which starts the MC server in a multiuser screen session and lets the other admins attach to it. Subsequent users get their own /home directory for normal usage.
    - Set up ACLs on the appropriate files so each user can edit the MC server files.
    - No one will be doing system updates or installing programs, so I'll be the only user with sudo.

    The issues: I don't want the SSH users to have access to the whole system, but they will still need wget or curl to update the MC server files. Is chroot the right tool for this use case, or is there something more appropriate for the job (see the sketch below for one alternative)? I have no experience setting up a chroot environment and have found several tools to aid in this process. Jailkit seems to be the most robust, but it's not in the standard repos.
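    Since the admins arrive over SSH anyway, OpenSSH's built-in ChrootDirectory (available since OpenSSH 4.9) may be a lighter alternative to a hand-built jail. A sketch of the relevant sshd_config fragment, with the group and path names hypothetical:

        Match Group mcadmins
            ChrootDirectory /srv/mcjail

    The jail path must be owned by root and not group-writable, and the tools the admins need (a shell, wget, an editor) have to exist inside the jail along with their shared libraries; populating that tree is exactly the part Jailkit's helpers automate.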

    Read the article

  • How do I throttle a command in a terminal window?

    - by To Do
    I needed to run convert on a lot of images at the same time. The command took quite a while, but that doesn't bother me. The issue is that it rendered my computer unusable while it was running (for about 15 minutes). So is it possible to throttle the command by limiting resources (processor and memory) directly from the command line? This can only work if I can add something to the same line before pressing Enter, because once I start the process the computer slows so much that it is impossible, for example, to switch to System Monitor and reduce the priority.

    Edit: top and iotop results. I managed to run top and sudo iotop while doing one of these convert operations (the raw iotop capture was full of terminal control codes and hard to read).

    Results of top:

        PID   USER     PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
        14275 username 20 0  4043m 3.0g 1448 D 7.0  80.4 0:16.45 convert

    Results of iotop (cleaned up):

        Total DISK READ: 1269.04 K/s | Total DISK WRITE: 0.00 B/s
        TID   PRIO USER     DISK READ  DISK WRITE SWAPIN  IO     COMMAND
        2516  be/4 username 350.08 K/s 0.00 B/s   0.00 %  0.00 % zeitgeist-datahub
        7394  be/4 username 568.88 K/s 0.00 B/s   77.41 % 0.00 % --rendere~.530483991
        14275 idle username 350.08 K/s 0.00 B/s   37.49 % 0.00 % convert S~f test.pdf
        2048  be/4 root     0.00 B/s   0.00 B/s   0.00 %  0.00 % [kworker/3:2]
        1     be/4 root     0.00 B/s   0.00 B/s   0.00 %  0.00 % init

    Furthermore, even after the process ends, the computer does not return to its previous performance. I found a way around this by running sudo swapoff -a followed by sudo swapon -a.
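    Both knobs can indeed go on the same line before pressing Enter: scheduler priority via nice/ionice, and ImageMagick's own memory caps, which attack the swap thrashing directly. A sketch with hypothetical file names and limits:

        nice -n 19 ionice -c3 convert -limit memory 512MiB -limit map 1GiB in.tif out.pdf

    nice -n 19 and ionice -c3 put the job in the lowest CPU and I/O classes, while the -limit options stop convert's pixel cache from pushing everything else into swap; cpulimit (packaged separately) can additionally cap a named process, e.g. cpulimit -e convert -l 50.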

    Read the article

  • Is LSB's init script function "start_daemon" really used for real daemons or should I stick to start-stop-daemon?

    - by Fred
    In the context of init scripts, according to the LSB specification, "each conforming init script shall execute the commands in the file /lib/lsb/init-functions", which defines a couple of functions to be used with daemons. One of those functions is start_daemon, which "runs the specified program as a daemon" while checking whether the daemon is already running.

    I'm in the process of daemonizing a service app of mine, and I'm looking at how other daemons are run to try to "fit in". While doing so, I noticed that not a single daemon on my Ubuntu 10.04 machine uses start_daemon; they all call start-stop-daemon directly. Same goes for my Fedora 14 machine. Should I try to play nice and be the first one to use start_daemon, or is there really no point, and start-stop-daemon is the way to go since everybody is already using it? Why are there no daemons using LSB's functions?
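    For reference, a minimal LSB-styled script using the function in question; the daemon path is hypothetical. On Debian and Ubuntu, the lsb-base implementation of start_daemon is itself a thin wrapper around start-stop-daemon, which may be part of why distribution scripts simply call the latter directly:

        #!/bin/sh
        # Minimal sketch of an LSB init script; /usr/sbin/mydaemon is a placeholder.
        . /lib/lsb/init-functions

        case "$1" in
          start) start_daemon /usr/sbin/mydaemon ;;
          stop)  killproc /usr/sbin/mydaemon ;;
          *)     echo "Usage: $0 {start|stop}"; exit 1 ;;
        esac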

    Read the article

  • Is there a way to make scp run faster on a Mac OS X?

    - by paul_sns
    I'm trying to upload a Flex-generated SWF file from my MacBook (running Snow Leopard) using the command scp main.swf server.com:/ (I had set up key authentication to avoid typing the user/pass every time). This normally takes up to two minutes on my connection at home (768 kbps down / 300+ kbps up). The interesting part is that when I use WinSCP on my Windows XP machine, it takes 30 seconds at most. Both machines use the same internet connection; the MacBook is connected to the router via cable (which should be faster, right?) while the Windows XP machine connects over Wi-Fi. Let me know if you need additional information to diagnose the problem. Thanks!
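    Before blaming the platform, it may be worth seeing where the two minutes actually go. scp -v logs each phase of the connection, which separates DNS and key-exchange stalls from slow data transfer, and -C enables compression (though a compiled SWF rarely compresses much):

        scp -v -C main.swf server.com:/

    If the verbose output shows the delay before the transfer starts, the problem is in connection setup rather than throughput.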

    Read the article

  • How to check if a server runs in pressing mode

    - by Ice
    Hi there. Layout: I have at a customer site a server (Windows 2003 R2 SP2 Standard Edition, 32-bit) with SQL Server 2005 and some databases. The system starts with the /3GB switch. It reports 3.25 GB RAM, and Task Manager shows sqlserver.exe at 2,758,255 K as the process with the highest consumption. The OS normally splits RAM between applications and itself 50:50, but here the /3GB switch is activated, so I think the applications' share is more than 50% of RAM.

    Knowledge (or rather, lack of knowledge): somebody told me that if the OS runs out of memory within its part of RAM, the server runs into "pressing mode".

    Questions: What is this pressing mode? Is pressing mode possible at all in this scenario? What can be done to get more performance out of this SQL Server, besides optimizing the database and all that?

    Read the article

  • Building a complete program?

    - by Bob
    Reading books, watching videos, and reviewing tutorials is all very easy. Taking notes and actually learning the material may be slightly harder, but even then, for anyone with a decent brain and a fair amount of interest, it's easy enough (not to mention fun). The thing is, it doesn't really prepare you to write a full program or website.

    Let's say you're one of those teens (only in high school, no true college-level computer science or programming courses, and no real-world experience), and you come out with Groupon. Or even Mark Zuckerberg; sure, he was a genius and a very capable programmer, but how? How do you recommend that people who are not necessarily new to programming, but new to writing real applications, go about developing one? What is the "development process", especially for single programmers (or maybe 2-3 teens)?

    Also, as far as web development goes, what is the process? Was something like Facebook or Groupon written with a framework (like CodeIgniter or Zend for PHP)? Or do they develop their own frameworks? I'm not asking how to come up with a great idea, but how to implement great ideas in an effective way. Does anyone have advice?

    I've read a couple of books on both C and C++ (primarily The C Programming Language and The C++ Programming Language) and taken AP Computer Science (as well as read a few additional books on Java and OOP). I've also read a few tutorials on PHP (and CodeIgniter) and Python. But I'm still in high school, and I'm technically not even old enough to work at an internship for a few more months.

    Read the article

  • Do large folder sizes slow down IO performance?

    - by Aaron
    We have a Linux server process that writes a few thousand files to a directory, deletes the files, and then writes a few thousand more files to the same directory without deleting it. What I'm starting to see is that the process doing the writing is getting slower and slower.

    My question is this: the directory size of the folder has grown from 4096 to over 200,000, as seen in this output of ls -l:

        root@ad57rs0b# ls -l 15000PN5AIA3I6_B
        total 232
        drwxr-xr-x 2 chef chef 233472 May 30 21:35 barcodes

    On ext3, can these large directory sizes slow down performance? Thanks. Aaron
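    On ext3 the answer is generally yes: without the dir_index feature, directory lookups are linear scans, and a directory inode that has grown to ~230 KB never shrinks, even after its files are deleted. A hedged sketch (the device name is hypothetical, and e2fsck -D must run on an unmounted filesystem):

        tune2fs -l /dev/sda1 | grep dir_index   # is hashed indexing already enabled?
        sudo tune2fs -O dir_index /dev/sda1     # enable it for directories created from now on
        sudo e2fsck -fD /dev/sda1               # offline: rebuild and compact existing directories

    The pragmatic alternative is to have the process recreate the directory after each cycle, since a fresh directory starts back at 4096 bytes.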

    Read the article

  • Seeking a solution to automatically copy files from a CD-ROM to a USB drive once it's connected

    - by Ray Nathan
    I plan to distribute a free CD that automatically copies files to a connected USB device; this will happen on the computers of the users who obtain the CD. The CD will contain an autorun.inf file that instructs the computer to copy a set of files located on the CD to a specific directory on the connected USB device. The USB drive letter is not the same on all systems, so Windows XP needs to work out the drive letter of the USB device automatically before the copy operation begins. What would be the best way to create a short batch file or script that I can place on the CD to execute this process? Also, please note that it is NOT feasible or recommended to include a batch file on the USB devices to sync this operation, for the reason explained at the beginning of this paragraph. :) Thank You All
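    A batch sketch of the copy step; the payload folder name is hypothetical, and fsutil typically needs administrator rights, so treat this only as a starting point:

        @echo off
        rem Probe candidate letters and copy only to removable drives.
        for %%d in (D E F G H I J K) do (
          fsutil fsinfo drivetype %%d: | findstr "Removable" >nul && (
            xcopy /e /i /y "%~dp0payload" "%%d:\payload"
          )
        )

    fsutil fsinfo drivetype reports each letter as fixed, removable, or CD-ROM, so the loop skips hard disks and the CD itself.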

    Read the article

  • Xorg becomes unkillable at 3AM

    - by chew socks
    Most nights, some time during the 3 AM hour, my Xorg process will climb to 100% CPU and GPU load will also hit 100%. The process also becomes unkillable: I cannot sudo kill -9 it, get control back with sudo service lightdm restart, or switch to a TTY with Ctrl + Alt + F1. To reboot I have to log in over SSH, but that is not ideal, because if I reboot while this is happening my ZFS pool fails to mount when the machine comes back up (and that is where my /home is). Does anyone have any ideas as to why I can't stop and restart Xorg, or even better, know why this is happening? Thanks.

    NOTE: For anyone who comes looking for the same problem: I disabled Catalyst AI and made it through the night. I've been up for 1 day 3 hours now. My record for this month is 2 days and 19 hours without a problem; my all-time record is 6 days without a crash. I'll post here if it crashes again or I set a new record.

    Read the article

  • lighttpd silently stops logging

    - by Max Cantor
    I'm on a Slicehost 256MB VPS with Ubuntu 9.04 (Jaunty). lighttpd is the only web server process running; it listens on port 80. My lighttpd.conf can be found here. I'm using Ubuntu's default logrotate setup for lighty. At seemingly random times, lighttpd will stop logging. It is not correlated with log rotation--that is, the errors do not occur when logrotate kicks in. What happens is, I will verify that the server is serving files by hitting a URL with my browser, and I will verify that it is not logging by checking access.log and seeing that the GET request I just made is not there. Using init.d to restart the process starts logging again, without truncating or rotating the log file. That is, new requests will be logged at the end of the existing access.log file. There are no cron jobs running on this box. Any ideas?
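    One quick check when logging silently stops: see whether the running lighttpd still holds a file descriptor for the log, and whether that descriptor points at a deleted inode. A sketch, assuming a single lighttpd instance:

        ls -l /proc/$(pgrep -o lighttpd)/fd | grep access.log

    If the target shows up as "(deleted)", something removed or renamed the file underneath the daemon, which would explain why restarting (and thus reopening the path) brings logging back.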

    Read the article

  • How to find spyware dll launched using svchost.exe

    - by Sheen
    This weekend I found my PC was possibly infected by some virus or spyware. There is one "svchost.exe -k netsvcs" in my Task Manager running under my user name rather than a SYSTEM account; another instance with the same command-line options is already running under the SYSTEM account. The user-account svchost.exe consistently consumes 50% CPU (one of my two cores). In Process Explorer, I can see it was started by explorer.exe instead of services.exe. However, I have failed to find the actual service DLL it loads, either in the registry or on disk. Does anyone know how to find this malicious program?
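    The stock tools can narrow this down. tasklist maps each svchost instance to the services it hosts, and the ServiceDll registry value names the DLL behind a given service (the service name below is a placeholder):

        tasklist /svc /fi "imagename eq svchost.exe"
        reg query HKLM\SYSTEM\CurrentControlSet\Services\<ServiceName>\Parameters /v ServiceDll

    Also note that a genuine svchost.exe is launched by services.exe from C:\Windows\System32; one started by explorer.exe under a user account is almost certainly a different binary that merely borrowed the name, so the image path shown in Process Explorer's properties dialog is the first thing to check.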

    Read the article

  • How to pass parameters to a function?

    - by sbi
    I need to process an SVN working copy in a PS script, but I have trouble passing arguments to functions. Here's what I have:

        function foo($arg1, $arg2) {
            echo $arg1
            echo $arg2.FullName
        }

        echo "0: $($args[0])"
        echo "1: $($args[1])"

        $items = get-childitem $args[1]
        $items | foreach-object -process { foo $args[0] $_ }

    I want to pass $args[0] as $arg1 to foo, and $args[1] as $arg2. However, it doesn't work; for some reason $arg1 is always empty:

        PS C:\Users\sbi> .\test.ps1 blah .\Dropbox
        0: blah
        1: .\Dropbox

        C:\Users\sbi\Dropbox\Photos
        C:\Users\sbi\Dropbox\Public
        C:\Users\sbi\Dropbox\sbi
        PS C:\Users\sbi>

    Note: the "blah" parameter isn't passed as $arg1. I am absolutely sure this is something hilariously simple (I only just started with PS and still feel very clumsy), but I have banged my head against this for more than an hour now and I can't find anything.
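    For what it's worth, the likely culprit is that every PowerShell script block gets its own automatic $args, so inside the -Process block the script-level $args is shadowed and $args[0] is empty there. A sketch of the usual workaround, capturing the value in a named variable before entering the pipeline:

        $first = $args[0]               # captured while $args still refers to the script's arguments
        $items = Get-ChildItem $args[1]
        $items | ForEach-Object -Process { foo $first $_ }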

    Read the article

  • CHROOT for shell script testing

    - by Josh
    I am looking at setting up a shell script to properly document and automate the process I use to set up a few servers we have. In order to test the shell script through its different stages, I was thinking a chroot would be ideal, since I can wipe out the "virtual root" and recreate it on the fly. I have never used chroot before, however. I'm just curious: what are the exact steps I would need to follow to create a chroot (with the basic core functions needed to install apache/php/etc.) and then destroy it?
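    On Ubuntu and Debian hosts, debootstrap can stamp out a throwaway root filesystem in one command, which fits the create-test-destroy loop described here. A hedged sketch; the release name and target path are examples:

        sudo debootstrap --variant=minbase jaunty /srv/testroot http://archive.ubuntu.com/ubuntu
        sudo chroot /srv/testroot /bin/sh   # run the setup script's stages inside the jail
        exit                                # leave the chroot when done
        sudo rm -rf /srv/testroot          # wipe it and recreate for the next test run

    Packages that expect /proc may also need a bind mount (mount -t proc proc /srv/testroot/proc) before they will install cleanly.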

    Read the article

  • Extract first page from multiple pdfs

    - by Tim Alexander
    I have about 500 PDFs to go through and extract the first page of. They then need to go through a time-consuming conversion process, so I was hoping to save some time with a batch job that extracts just the first page from each of the 500 PDFs and places it in a new PDF. I've had a poke around Acrobat but can find no real method of doing this for multiple files. Does anyone know any other programs or methods that could achieve this? Free and open source are obviously more favourable :)

    EDIT: I have actually had some success using Ghostscript to extract just one page. I'm now looking at how to batch that, taking the list of files and processing each one.
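    Since Ghostscript already handles a single file, the batch is just a shell loop around its page-range flags. A sketch, assuming the 500 PDFs sit in the current directory:

        mkdir -p firstpages
        for f in *.pdf; do
          gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite \
             -dFirstPage=1 -dLastPage=1 \
             -sOutputFile="firstpages/${f%.pdf}-p1.pdf" "$f"
        done

    pdftk gives the same result per file (pdftk in.pdf cat 1 output out.pdf) if Ghostscript's pdfwrite re-processing changes the files in unwanted ways.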

    Read the article

  • The balance between client and server functionality

    - by Eugen Martynov
    I want to bring up a discussion that started in our teams and get your opinion about it. Assume we have a user account which can have different credentials for authentication and an associated email for recovery. A user can complete signup with an email address or with a social profile. The REST API from the backend to the client looks like this:

    - Create account
    - Authorise
    - Update user data
    - Link social account
    - Register email
    - Verify email

    In addition, our BE is distributed and divided between several services/servers/clusters, so different calls go to different endpoints.

    In the case of social signup, some steps can be skipped or simplified. For example, with Facebook signup we can skip email registration and verification (we ask for email permission from the user), skip linking the social account, and pre-fill the user's display name. So we proposed another endpoint that hides/combines the different calls on the BE and returns the whole process result to the clients.

    The pros of this approach:

    - No more duplication of functionality between clients
    - Faster networking and better user experience

    The cons:

    - Additional work for the backend
    - Probably more complex scenarios in future updates

    I would like to get your opinion on, or experience with, this situation, especially if you have already run into the second con.

    Read the article

  • How to check use of userva boot option on Win 2K3 server

    - by Tim Sylvester
    I have some 32-bit Win2K3 servers running an application that fails now and then, apparently due to heap fragmentation (process virtual bytes grows, private bytes does not). I do not have access to the source code or build process of this application. I have modified the boot.ini file on one of these servers to include /userva=2560, halfway between the normal mode of operation and the /3GB option. Normally it takes weeks to reach the point of failure, but I'd like to see right away whether this has actually had any effect. As I understand it, this option limits the kernel to the remaining address space (1536 MB instead of 2048 MB), but does not necessarily give an application the extra address space, depending on the flags in the application's PE header. How can I determine whether the OS is allowing a particular application, running in production, to access address space above 2 GB? Additionally, what's the best way to monitor the system to ensure that the kernel is not starved for address space, and more generally how should I go about finding the optimal value for this setting?
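    Whether a given executable may use the space above 2 GB is a link-time flag (IMAGE_FILE_LARGE_ADDRESS_AWARE) in its PE header, which can be read without source access; dumpbin ships with Visual Studio, and the executable name below is a placeholder:

        dumpbin /headers YourApp.exe | findstr /i "large"

    If the flag is set, the header dump includes the line "Application can handle large (>2GB) addresses". For the kernel side, the classic counters to watch under /3GB or /userva are Memory\Free System Page Table Entries and the paged/nonpaged pool bytes, since PTE exhaustion is the usual failure mode when the kernel's share is squeezed.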

    Read the article
