Search Results

Search found 26093 results on 1044 pages for 'process monitor'.


  • Xorg becomes unkillable at 3AM

    - by chew socks
    Most nights, some time around 3 AM, my Xorg process climbs to 100% CPU and GPU load also climbs to 100%. The process also becomes unkillable. I cannot sudo kill -9 it or get back control with sudo service lightdm restart. I also cannot switch to a tty with ctrl + alt + f1. To reboot I have to log in with ssh, but this is not ideal, because if I reboot while it is doing this my ZFS pool will fail to mount when it comes back up (that is where my /home is). Does anyone have any ideas as to why I can't stop and restart Xorg, or even better, why this is happening? Thanks.

    NOTE: For anyone who comes looking for the same problem: I disabled Catalyst AI and made it through the night. I've been up for 1 day 3 hours now. My record for this month is 2 days and 19 hours without a problem. My all-time record is 6 days without a crash. I'll post here if it crashes again or if I'm able to set a new record.
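
    A kill -9 that has no effect usually means the process is stuck in uninterruptible sleep (state D), blocked inside a kernel or driver call rather than in user space. A minimal check over ssh, assuming the standard procps tools, would be:

        # Show the scheduler state of the Xorg process; "D" means
        # uninterruptible sleep, which no signal (not even -9) can end
        ps -o pid,stat,wchan:30,cmd -C Xorg

        # Rough system-wide view of anything stuck in D state
        ps -eo pid,stat,cmd | awk '$2 ~ /D/'

    If Xorg really is in D state, the hang is happening inside the kernel or the GPU driver, which would be consistent with disabling Catalyst AI helping.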

    Read the article

  • CHROOT for shell script testing

    - by Josh
    I am looking at setting up a shell script in order to properly document and automate the process I am using to set up a few servers we have. In order to test the shell script through its different stages, I was thinking a chroot would be ideal, since I can wipe out the "virtual root" and create it on the fly. I have never used chroot before, however. What are the exact steps I would need to follow to create a chroot (with the basic core functions needed to install apache/php/etc.) and then to destroy it?
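
    On a Debian/Ubuntu host, one common way to build a throwaway root is debootstrap. A rough sketch, assuming the release codename, the /srv/testroot path, and the setup.sh script name are all placeholders to adjust:

        # Build a minimal Ubuntu root filesystem
        sudo debootstrap --variant=minbase precise /srv/testroot

        # Bind-mount the pseudo-filesystems that package installs expect
        sudo mount --bind /proc /srv/testroot/proc
        sudo mount --bind /dev  /srv/testroot/dev

        # Copy the script under test into the chroot and run it there
        sudo cp setup.sh /srv/testroot/root/
        sudo chroot /srv/testroot /bin/bash /root/setup.sh

        # Tear everything down and start over
        sudo umount /srv/testroot/proc /srv/testroot/dev
        sudo rm -rf /srv/testroot

    Because the whole root lives under one directory, the destroy step is just the unmounts plus rm -rf, which matches the wipe-and-recreate workflow you describe.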

    Read the article

  • The balance between client and server functionality

    - by Eugen Martynov
    I want to bring up a discussion that started in our team and get your opinion on it. Assume we have a user account which can have different credentials for authentication and an associated email for recovery. A user can sign up with an email address or use a social profile to complete the signup process. The REST API from the backend to the client looks like this:

    - Create account
    - Authorise
    - Update user data
    - Link social account
    - Register email
    - Verify email

    In addition, our backend is distributed and divided between several services/servers/clusters, so different calls go to different endpoints. In the case of social signup, some of these steps can be skipped or simplified. For example, with Facebook signup we can skip the email registration and verification steps (we ask the user for email permission), skip linking the social account, and pre-fill the user's display name. So we proposed to have another endpoint which hides/combines the different calls on the backend and returns the whole process result to the clients.

    The pros of this approach:
    - No more duplication of functionality between clients
    - Faster networking and better user experience

    The cons of this approach:
    - Additional work for the backend
    - Probably more complex scenarios in future updates

    I would like to get your opinion or experience with this situation, especially if you have already run into point #2 from the cons.

    Read the article

  • Do large folder sizes slow down IO performance?

    - by Aaron
    We have a Linux server process that writes a few thousand files to a directory, deletes the files, and then writes a few thousand more files to the same directory without deleting the directory. What I'm starting to see is that the process doing the writing is getting slower and slower. My question is this: the directory size of the folder has grown from 4096 to over 200000, as seen in this output of ls -l:

        root@ad57rs0b# ls -l 15000PN5AIA3I6_B
        total 232
        drwxr-xr-x 2 chef chef 233472 May 30 21:35 barcodes

    On ext3, can these large directory sizes slow down performance? Thanks. Aaron
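
    On ext3 a directory's on-disk size never shrinks by itself, and lookups in a large directory are linear unless the filesystem has the dir_index (hashed directory) feature enabled, so a bloated directory can indeed slow the writer down. Two things worth checking, as a sketch (the device name is a placeholder):

        # Is the dir_index feature enabled on this filesystem?
        sudo tune2fs -l /dev/sda1 | grep -i 'features'

        # With the filesystem unmounted, -D re-packs and re-indexes directories
        sudo e2fsck -fD /dev/sda1

    For the running system, simply recreating the directory (make a new one, move the current files over, remove the old one) also resets its size.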

    Read the article

  • Extract first page from multiple pdfs

    - by Tim Alexander
    Have got about 500 PDFs to go through and extract the first page of. They then need to go through a time-consuming conversion process, so I was hoping to save some time by having a batch process extract just the first page from each of the 500 PDFs and place it in a new PDF. Have had a poke around Acrobat but can find no real method of doing this for multiple files. Does anyone know any other programs or methods by which this could be achieved? Free and open source are obviously more favourable :) EDIT: Have actually had some success using Ghostscript to extract just one page. Am now looking at how to batch that, taking the list of files and working through them.
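
    A minimal batch sketch around Ghostscript, assuming gs is on the PATH, the PDFs sit in the current directory, and the outputs are just the original names with a prefix:

        for f in *.pdf; do
            gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite \
               -dFirstPage=1 -dLastPage=1 \
               -sOutputFile="firstpage-$f" "$f"
        done

    If Ghostscript turns out to be slow, pdftk can do the same per file (pdftk in.pdf cat 1 output out.pdf) and is also free.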

    Read the article

  • lighttpd silently stops logging

    - by Max Cantor
    I'm on a Slicehost 256MB VPS with Ubuntu 9.04 (Jaunty). lighttpd is the only web server process running; it listens on port 80. My lighttpd.conf can be found here. I'm using Ubuntu's default logrotate setup for lighty. At seemingly random times, lighttpd will stop logging. It is not correlated with log rotation--that is, the errors do not occur when logrotate kicks in. What happens is, I will verify that the server is serving files by hitting a URL with my browser, and I will verify that it is not logging by checking access.log and seeing that the GET request I just made is not there. Using init.d to restart the process starts logging again, without truncating or rotating the log file. That is, new requests will be logged at the end of the existing access.log file. There are no cron jobs running on this box. Any ideas?
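
    One thing worth checking while it is in the broken state is whether the lighttpd process still holds the log file open, and whether that is the same file you are reading. A quick sketch with lsof (the log path is the Ubuntu default; adjust it to your accesslog.filename setting):

        # Which log files does the running lighttpd hold open?
        sudo lsof -p "$(pgrep -o lighttpd)" | grep -i log

        # Compare the inode lighttpd is writing to with the file on disk
        ls -li /var/log/lighttpd/access.log

    If the inode lighttpd holds differs from the one on disk, something replaced the file without telling lighttpd to reopen it, which would explain logging silently stopping while requests are still served.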

    Read the article

  • Automatic document generation

    - by Bowler
    I have some data in an Excel file from which I have to generate a report. I repeat this task fairly regularly and am looking to automate it. I have a LaTeX project into which I usually just copy data by hand, export the necessary worksheets as PDFs, add them to my LaTeX project, and compile with pdflatex. It has occurred to me that there must be a way to automate this process. Is there an efficient way to export the data from Excel into a LaTeX project? Could a VBA script in Excel run the process? Also, it doesn't have to be LaTeX; I'm not all that experienced with MS Office's more advanced features. Is there some way, akin to a mail merge, that I could achieve this with? In some ways that might be better in case I have to pass the work on to someone who doesn't know LaTeX. Thanks.
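
    One hands-off route on the LaTeX side is to keep the spreadsheet data in CSV and let the build pull it in. A rough sketch, assuming Gnumeric's ssconvert is installed for the Excel-to-CSV step and that report.tex reads report-data.csv (for example via the csvsimple or pgfplotstable package); all file names here are placeholders:

        # Convert the workbook to CSV, then rebuild the report
        ssconvert report-data.xlsx report-data.csv
        pdflatex report.tex

    Wrapping those two lines in a script (or a Makefile) gives a one-command rebuild whenever the spreadsheet changes, without any copy-and-paste.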

    Read the article

  • mongod fork vs nohup

    - by Daniel Kitachewsky
    I'm currently writing process management software. One package we use is mongo. Is there any difference between these two ways of launching mongod?

        mongod --fork --logpath=/my/path/mongo.log

        nohup mongod >> /my/path/mongo.log 2>&1 < /dev/null &

    My first thought was that --fork could spawn more processes and/or threads, and it was suggested to me that --fork could be useful for changing the effective user (downgrading privileges). But we run everything under the same user (process manager and mongod), so is there any other difference? Thank you
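
    The practical difference is mostly about process supervision: --fork tells mongod to daemonize itself (detach from the terminal and manage its own log file), whereas the nohup form leaves it as an ordinary background child of whatever launched it. A quick way to compare what each form leaves behind, as a sketch:

        # After starting mongod each way, inspect its process attributes
        ps -o pid,ppid,pgid,sid,tty,cmd -C mongod

    With --fork the parent is no longer your shell and there is no controlling terminal; with the nohup form the parent is the launching shell or process manager, which matters if that manager wants to supervise the child directly.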

    Read the article

  • Why does my CPU Usage reach 100% too often?

    - by deathlock
    I'm using a dual-core processor and often see my CPU usage reach 100%. I realize this may happen if I'm running too many applications, so when the computer starts to run slowly I start to close my applications. I usually run 4-5 applications simultaneously. Usually those are: a web browser (Google Chrome), Adobe Photoshop, Notepad++, XAMPP, and Windows Task Manager. Usually I close tabs in Chrome first, because I often browse the net with about 20 tabs across 4 windows, so I presume that takes a lot of memory (bad habit, I know). But even after closing Chrome's tabs or closing other applications, my CPU usage often stays at a high percentage - 72% at best, 100% at worst. I check the Processes tab in Windows Task Manager and usually find System, System Idle Process, or services.exe taking the most CPU (it can reach 60). Why is this happening? And is there any solution? EDIT: I have a T2250 @ 1.73 GHz and 2.5 GB RAM.

    Read the article

  • rdiff-backup failed due to target machine being down, but is unkillable

    - by Markus
    My backup script was invoked by cron, using rdiff-backup to backup the user files onto a target system in the network. That target computer went down at some point, yet still appeared as mounted on the server. rdiff-backup didn't do anything, but still appears as a process. kill-ing doesn't stop it. Similarly, running rdiff-backup for other directories works but doesn't exit properly and remains in the process list. Is there anything short of rebooting the server that I can try?
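
    If the backup target was mounted over the network (NFS/CIFS), the stuck rdiff-backup processes are most likely in uninterruptible sleep waiting on I/O to the dead mount, which is why kill has no effect. Short of a reboot, forcing or lazily detaching the stale mount is worth trying; a sketch, with the mount point as a placeholder:

        # Confirm the processes are blocked in D state (uninterruptible sleep)
        ps -eo pid,stat,cmd | grep '[r]diff-backup'

        # Force, then lazily, unmount the dead target; this may let the
        # blocked I/O return with an error so the processes can exit
        sudo umount -f /mnt/backup-target
        sudo umount -l /mnt/backup-target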

    Read the article

  • Windows 2003 Server - Logon Failure error message in Event Viewer

    - by user45192
    Hi guys, I received a lot of events logged in the Event Viewer with this message. I notice it is always the same user ID that encounters this error. The user ID is used by an application to access the database. However, this account does not exist on this server. How do I trace the service/program using this user ID that causes these error messages?

        Reason=Unknown user name or bad password
        User Name=
        Domain=
        Logon Type=3
        Logon Process=NtLmSsp
        Authentication Package=NTLM
        Workstation Name=
        Caller User Name=-
        Caller Domain=-
        Caller Logon ID=-
        Caller Process ID=-
        Transited Services=-
        Source Network Address=-
        Source Port=-
        User=SYSTEM
        ComputerName=

    Read the article

  • Banshee doesn't like opening websites

    - by Allan
    I have come across two bugs (which will be added to Launchpad if they're not resolved here):

    1. When I open any of the websites in Banshee (Amazon or the Miro Guide), Banshee crashes as soon as the site finishes loading.
    2. If I play any video, local or remote, it shows one frame, maybe 0.5 seconds of video, then I get a black screen and the audio continues in the background.

    Specs & details: I have a Fujitsu Amilo 1718 laptop with 2 GB of RAM (originally 1 GB); graphics is provided by an ATI Radeon Xpress 200M (don't laugh, it works with compiz... just). I have a link to the output of banshee --debug here. Don't have time to read? Here are the highlights:

        [2 Warn 11:52:34.814] Caught an exception - System.ArgumentNullException: Argument cannot be null.

    then a bit later:

        Debug info from gdb:
        Could not attach to process. If your uid matches the uid of the target
        process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
        again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
        ptrace: Operation not permitted.
        =================================================================
        Got a SIGSEGV while executing native code. This usually indicates
        a fatal error in the mono runtime or one of the native libraries
        used by your application.
        =================================================================
        Aborted

    Not music to my ears, as you can expect. The version I am using is 1.9.4 from the daily PPA, but these bugs happen in any version of Banshee from 1.8.1 and up. So if anyone has come across a fix for this problem, please share!! Additional info: both VLC and Miro work on my system, so there isn't a system-wide problem with video, and I haven't mentioned Mono, so no trolling; it will get voted down.
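
    The "Could not attach to process" part just means gdb was blocked by Ubuntu's Yama ptrace restriction, so the crash output above has no native backtrace. If you want a useful trace to attach to the Launchpad bug, one option is to temporarily relax that restriction and run Banshee under gdb while reproducing the crash; a sketch:

        # Allow a non-root gdb to attach to running processes (resets on reboot)
        echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope

        # Run banshee under gdb so the SIGSEGV produces a backtrace
        gdb -ex run --args banshee --debug

    When it segfaults, typing "bt" at the gdb prompt should show which native library (very possibly the graphics driver stack) is involved.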

    Read the article

  • How to check use of userva boot option on Win 2K3 server

    - by Tim Sylvester
    I have some 32-bit Win2K3 servers running an application that fails now and then apparently due to heap fragmentation. (Process virtual bytes grows, private bytes does not) I do not have access to the source code or build process of this application. I have modified the boot.ini file on one of these servers to include /userva=2560, half way between the normal mode of operation and the /3GB option. Normally it takes weeks to reach the point of failure, but I'd like to see right away whether this has actually had any effect. As I understand it, this option limits the kernel to the remaining address space (1536MB instead of 2048), but does not necessarily give an application the extra address space, depending on the flags in the application's PE header. How can I determine whether the O/S is allowing a particular application, running in production, to access address space above 2GB? Additionally, what's the best way to monitor the system to ensure that the kernel is not starved for address space, and more generally how should I go about finding the optimal value for this setting?

    Read the article

  • Running a Screen instance of a program as non-root

    - by user288467
    I've got a dedicated server (Ubuntu 12.04, no GUI) set up to launch an instance of McMyAdmin and attach it to a screen instance every time I reboot the hardware. I have the command saved in root's crontab as:

        @reboot cd /var/MC_SVR && screen -dmS McMyAdmin ./MCMA2_Linux_x86_64

    The problem is that I have a user set up specifically for FTP access to the server files so I don't always have to SSH into the machine. Since the server is being started as a root process, all the files it makes are, obviously, owned by root. So I chown'd all the files and set them to ftpuser. Now I'm stuck trying to get the process to start as ftpuser. I've tried the following, to no avail:

        cd /var/MC_SVR && su ftpuser - -c 'screen -dmS McMyAdmin ./MCMA2_Linux_x86_64'

    I try this in a terminal and get no errors or anything (in fact I never get anything unless it's a syntax error from su), but there's no screen instance to access, so I assume the server never starts. So, what am I doing wrong? Or am I just not accessing the screen instance correctly, since it's (supposed) to be launched by another user?
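
    Two things are worth separating here. Screen sessions are per-user, so a session started as ftpuser will never show up in root's screen -ls; you have to list it as that user. Also, su's login-shell dash goes before the username, not after it. A sketch of both, assuming the same paths as above:

        # List ftpuser's screen sessions (they will not appear under root)
        sudo -u ftpuser screen -ls

        # Crontab entry that starts the server as ftpuser at boot
        @reboot su - ftpuser -c 'cd /var/MC_SVR && screen -dmS McMyAdmin ./MCMA2_Linux_x86_64'

    Reattaching would then also be done as ftpuser (for example after su - ftpuser, run screen -r McMyAdmin).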

    Read the article

  • Why is a software development life-cycle so inefficient?

    - by user87166
    Currently, the software development lifecycle followed at the IT company I work at is:

    - The "business" works with a solution manager to build a Business Requirement document.
    - The solution manager works with the program manager to build a Functional Spec.
    - The PM works with the engineering lead to develop a release plan, and with the engineering team to develop technical specifications.
    - If any clarifications are required, developers contact the PM, who contacts the solution manager, who contacts the business, and all the way back, introducing a latency of nearly 24 hours and massive email chains for any clarification.
    - By the time the tech spec is made, nearly 1 month has passed in back and forth.
    - Now 2 weeks go to development while test writes test cases.
    - Code is dropped formally to test, and test starts raising bugs. Even if there is 1 root cause for 10 different issues, and it's an easily fixed one, developers are not allowed to give fresh code to test for the next week.
    - After 2-3 such drops to test, the code is given to the ops team as a "golden drop" (2 months have passed from the beginning).
    - The ops team now deploys the code to a staging environment. If it runs stable for a week, it is promoted to UAT, and after 2 weeks of that it is promoted to prod. If any bugs are found there, well, applying for a visa requires less paperwork.

    This entire process is followed even if a single SSRS report is to be released. How do other companies handle such requirements? I'm wondering why the business cannot just hand the requirements to the developers, the developers build and deploy to UAT themselves, expose it to the business who raise functional bugs, and after fixing those promote to prod (even for more complex stuff).

    Read the article

  • IIS6 + PHP + FastCGI 500 Errors - Where to start looking?

    - by Bertvan
    I've set up IIS6 with FastCGI to use php-cgi.exe. I have some PHP websites by external parties that I'm trying to run in a test environment. One of the websites just plain gives me a FastCGI error page. Question: is there some way to enable logging somewhere so that I can get a bit more information on this problem? I have looked in:

    - the event log
    - the IIS website log (c:\windows\system32\Logfiles)
    - the PHP log

    but found no results, except that the IIS website log mentions a 500 being returned. Question: is there any other way to debug or check where things might be going wrong? Here is what the page looks like:

        FastCGI Error
        The FastCGI Handler was unable to process the request.
        Error Details:
        The FastCGI process exited unexpectedly
        Error Number: -1073741571 (0xc00000fd).
        Error Description: Unknown Error
        HTTP Error 500 - Server Error. Internet Information Services (IIS)

    Read the article

  • Video works with 'Try me' but not after install. What is the difference? (Ubuntu 12.04 LTS)

    - by HarveyP
    My hard drive got corrupted so I did a reinstall. I tested YouTube in Firefox during 'Try me' and it worked - jerky, but it worked. I installed without all the updates (576 outstanding now) in order to get Firefox installed as per the demo - to no avail. In 'Try me' mode Firefox NEVER crashed! After install, Firefox crashed while I was typing 'youtube' in the address field. When I finally got to YouTube - no video. What is the difference between Firefox in 'Try me' and Firefox after install? Off to try some selected updates now to see if I can see it for myself.

    In the previous installation I had several profiles and aliased Firefox with the -safe-mode switch to simplify startup of the most stable Firefox. I also found that Firefox startup in graphic mode worked better (but still without video) with all of the extensions disabled and all of the plugins set to "ask" and always denied. I have a SiS graphics card in a SiS motherboard for XP and an ancient Hyundai ImageQuest QV770 monitor. I have Ubuntu 12.04.01 LTS, 1 day after install, with only the immediate upgrades requested to the language pack (English UK), using the FR alternative keyboard, connected to a domestic wifi network from Orange (FT). I really want to use Skype, but won't bother installing it (again) without video, as I can do my SMS on FB - while Firefox is not crashed...

    Update: Is something overflowing? I have just had to reboot in order to get Firefox to restart in any way, shape or form - the restart-on-crash form generates a new crash form, etc. It was however a good half hour before it crashed, so some improvement over conditions before the disk corruption. I have now installed all of the critical updates (332 recommended updates still outstanding), which included some relating to Firefox. Still no video. Still crashing - especially when on the Grepolis website. Since the reinstall I have had a lovely 1024x768 screen, but after the last Firefox crash and reboot I got a message about 'low graphics mode' and 'setting things myself'. I was not sufficiently tuned in at the time to take proper note - I have no doubt I shall see it again and shall report accordingly. I still have only laptop options for my screen and do not know how to rectify this.

    I spent a few days with Ubuntu on a different, newer machine which has now suffered a graphics breakdown, and returned to this old one again, but with a new flat-screen monitor. I found SiS drivers for my graphics, BUT they are intended for Red Hat 7.2. I chose that over the version for 7.0 because I thought what the hell, I might not be able to do anything with either of them, but this is the later one... The file will not open with the software manager - I found a similar problem on Overclock but it has not helped me to install this driver. The file name is sis_drv.o-410 and it is currently idling away in my Downloads folder. I have tried the solution offered for another SiS problem, but it shows that my xserver-xorg-video-sis driver is up to date. I am now at a loss as to how to proceed if I can't install the latest SiS driver from SiS. Does nobody know how Firefox changes from "try me" to "installed"? Any time I MUST have video I reboot from the disk again, but this is tedious! Also, one of the things I mock most about MS is the constant rebooting...

    UPDATE 10/6/2014: I have installed chromium-browser - worse, it crashes even more often than Firefox. I have installed Epiphany - better; video works but not the associated soundtrack. Firefox is version 14.01 in 'Try me' and version 29.0 from my install. Would it be useful to try to downgrade Firefox in order to get video?

    Read the article

  • Worker processes not starting in IIS 7.5. What should I check?

    - by locster
    I have a Windows 7 machine (Windows version 6.1.7601 SP1 Build 7601) with IIS installed. At some point the installation appears to have become 'corrupted' in some way, as any request is now met with the message:

        Service Unavailable
        HTTP Error 503. The service is unavailable.

    In IIS Manager, IIS is started and the app pool I am using reports itself as 'Started', yet there is no w3wp.exe process listed in the process list in Task Manager (I am a local admin and have clicked the 'Show processes from all users' button). I have enabled logging for the web site (at the default location of %SystemDrive%\inetpub\logs\LogFiles), but this folder is empty. I am assuming that this log output is written by w3wp.exe as it handles requests (no w3wp.exe, no log file?). Presumably there is another layer of request handling that is responsible for starting the worker processes. Does this layer have log files I can check, and/or can I uninstall/reinstall that layer? Thanks.

    Read the article

  • Microsoft Changes Developer Account Registration Requirements

    - by Tim Murphy
    Over the last couple of weeks I have noticed that Microsoft seems to have changed the requirements for corporate accounts. These requirements were not in effect when I originally set up the account for the company that I work for. We also recently had our corporate account canceled without explanation and are in the process of working to get it reinstated. This all seems to revolve around rules to increase confidence in the producers of content. They are now having Symantec validate a company based on legal documents. In the past there have been problems with getting credit cards accepted. We have had to set up new Live IDs to satisfy whatever glitch the system had or some unexplainable requirement. I am hoping that in the time that has elapsed these problems have been resolved. In truth I can't say that these new requirements weren't always in place, but it is getting frustrating to help clients set up accounts. I am encouraged that they have taken steps to safeguard the consumer from Joe-fly-by-night, but they also need to make sure that the process doesn't become so complex that it drives companies away from participating in the store. We will have to keep an eye on this as things evolve.

    Read the article

  • How do I read multiple lines from STDIN into a variable?

    - by The Wicked Flea
    I've been googling this question to no avail. I'm automating a build process here at work, and all I'm trying to do is get version numbers and a tiny description of the build, which may be multi-line. The system this runs on is OS X 10.6.8. I've seen everything from using cat to processing each line as necessary. I can't figure out what I should use and why. Attempts so far:

        read -d '' versionNotes

    Results in garbled input if the user has to use the backspace key. Also there's no good way to terminate the input, as ^D doesn't terminate it and ^C just exits the process.

        read -d 'END' versionNotes

    Works... but still garbles the input if the backspace key is needed.

        while read versionNotes
        do
            echo " $versionNotes" >> "source/application.yml"
        done

    Doesn't properly end the input (because I'm too late to look up matching against an empty string).
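
    A minimal sketch of the last approach, assuming bash and that a blank line is an acceptable terminator; because each line goes through a normal line-oriented read, backspace editing keeps working:

        versionNotes=""
        echo "Enter version notes, end with a blank line (or ^D):"
        while IFS= read -r line; do
            [ -z "$line" ] && break        # blank line ends the input
            versionNotes+="$line"$'\n'     # keep the newline between lines
        done
        printf '%s' "$versionNotes" >> "source/application.yml"

    IFS= and -r keep leading whitespace and backslashes intact, and the loop also stops cleanly on ^D because read returns non-zero at end of input.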

    Read the article

  • Areas of support needed when attempting to roll out a new software system

    In general, I think most people tend to be resistant to new systems, or to change in general, because they fear the unknown. Change means that their normal routine will be interrupted until the new routine has, through practice, become the old routine. In addition, the fear of failure also generates resistance to change. Why would workers want to move away from a process that has worked successfully for them in the past? Their fears overshadow any benefits a change to a new system or business process will bring to their work life. Areas of support needed when attempting to roll out a new software system:

    - Executive/Upper Management Support: If there is no support from the top of an organization, how will employees be supportive of the new system?
    - Proper Training: Employees need to train on a new system prior to its rollout. The more training employees receive on a new system, the more comfortable they will be with it and the more accepting of the change, because they can see how the changes will benefit them.
    - Employee Incentives: One way to reinforce the need for employees to use a new system is to offer incentives to ensure that the system will be used.
    - Employee Discipline/Termination: If employees adamantly refuse to use the new system after several warnings, they need to be formally reprimanded. If this does not work, the employer is forced to replace them.

    Read the article

  • Incomplete Ubuntu 12.04 install dual-booting XP

    - by Mike
    This weekend was the first time I've tried to install Ubuntu. On the initial install (I am using a USB stick), the installation went all the way through and asked to restart when completed. I was not able to get GRUB to boot and kept booting into Windows. After some research I found some articles on updating/reinstalling GRUB, so I followed those. I finally got GRUB to load after a day, but there was no Windows option, only Ubuntu 12.04, which when I selected it only gave me a fatal error 17. I booted from the USB again, deleted the partitions, and installed again. This time I got an error 15. I then booted into XP, downloaded WUBI.exe, uninstalled Ubuntu, and reinstalled again. The installation went to the very end and then gave an error message (I don't remember exactly what it said), something along the lines of checking my logs on my C drive. I then uninstalled Ubuntu, removed the wubi.exe file, wiped my USB, and downloaded to the USB again. I booted from the USB and ran the install process again. It again went through the install process, but after creating my username and password and hitting continue, the installation dialogue box disappears and the spinning mouse wheel is displayed, but I do not receive the prompt to restart. I can still access the side menu for Ubuntu, but the wheel keeps spinning. How do I get Ubuntu to install properly?

    Read the article

  • Passenger 'premature end of script headers' error

    - by fatnic
    Hi. I really need help debugging an error I'm getting with Passenger on Apache. I've just made a fresh install of Ubuntu 10.04 and have Apache, Ruby and Passenger installed. I'm trying to run a simple Rack app but keep getting this error in my Apache error.log:

        [Tue Sep 28 05:54:41 2010] [error] [client 86.171.2.82] Premature end of script headers:

    The error then continues with:

        The backend application (process 25574) did not send a valid HTTP response; instead, it sent nothing at all. It is possible that it has crashed; please check whether there are crashing bugs in this application.
        *** Exception NoMethodError in PhusionPassenger::Rack::ApplicationSpawner (undefined method `call' for nil:NilClass) (process 25574):

    I've tried older versions of Passenger also but get the same error.

        Ubuntu 10.04
        Apache 2.2.14
        Ruby 1.9.2-p0
        Passenger 2.2.15
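
    Since Passenger reports that the application sent nothing at all, it helps to take Passenger out of the picture and confirm the Rack app can boot on its own under the same Ruby. A quick sketch, assuming the app lives at /var/www/myapp (a placeholder) and has a config.ru:

        cd /var/www/myapp
        # Boot the app directly with rackup
        rackup config.ru -p 9292
        # In another shell, see whether it answers at all
        curl -i http://localhost:9292/

    If the standalone test works, it is also worth double-checking that your Passenger version actually supports Ruby 1.9.2; the 2.2.x series predates that Ruby release, so a newer Passenger may be part of the fix.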

    Read the article

  • What kinds of issues can one expect when changing a domain name's registrar? (3 questions)

    - by anonymous-one
    Assuming that there are no 'unusual' items that come up, what kind of disruptions can one expect when moving a domain between registrars? I understand some of the below may vary between registrars, but assuming both ends are large, proficient registrars:

    a) Will the NS settings be mirrored? We use a dedicated DNS service provider, so we are not using the originating registrar's name servers. All we are concerned about is that the existing NS values are mirrored at the target registrar.

    b) Are incoming domain transfers automated on the target registrar's end? E.g., if we begin the transfer process during business hours at the source registrar, will someone have to manually approve the inbound transfer (most likely during their business hours) at the target registrar?

    c) Is the domain ever 'in limbo'? Is there ever a time during the process when the NS values for the domain are not populated (as they were prior to initiating the transfer), OR when one does not have access to populate them (at the target registrar)?

    Thank you kindly for the help.

    Read the article

  • Creating a Scheduled Task that runs forever on Windows XP

    - by Mike Fiedler
    When I create a scheduled task, I do so via the command line:

        schtasks.exe /Create /TN "startup-script" /TR "C:\startup.bat" /RU taskuser /RP taskpasswd /SC ONLOGON

    The idea is that this task runs forever. The batch file starts a Java process that is never meant to end. I've used ONLOGON, as the machine auto-logs in as taskuser. All this works fine for about 72 hours, after which the duration limit kicks in and ends the process. Windows XP doesn't have the /DU flag on the command line - is there an alternative method of creating a task that runs from system startup (without even requiring a logon) and runs forever, without touching a GUI?

    Read the article
