Search Results

Search found 5597 results on 224 pages for 'worker processes'.

  • In a multithreaded app, would a multi-core or multiprocessor arrangement be better?

    - by Michael
    I've read a lot on this topic already both here (e.g., stackoverflow.com/questions/1713554/threads-processes-vs-multithreading-multi-core-multiprocessor-how-they-are or http://stackoverflow.com/questions/680684/multi-cpu-multi-core-and-hyper-thread) and elsewhere (e.g., ixbtlabs.com/articles2/cpu/rmmt-l2-cache.html or software.intel.com/en-us/articles/multi-core-introduction/), but I still am not sure about a couple things that seem very straightforward. So I thought I'd just ask. (1) Is a multi-core processor in which each core has dedicated cache effectively the same as a multiprocessor system (balanced of course for processor speed, cache size, and so on)? (2) Let's say I have some images to analyze (i.e., computer vision), and I have these images loaded into RAM. My app spawns a thread for each image that needs to be analyzed. Will this app on a shared cache multi-core processor run slower than on a dedicated cache multi-core processor, and would the latter run at the same speed as on an equivalent single-core multiprocessor machine? Thank you for the help!

  • Monitor RAM usage on CentOS and restart Apache at a certain usage

    - by Chris
    Hi, I'm running a CentOS 5.3 server with a basic LAMP stack. I've optimized LAMP and my code to run as efficiently as possible, but Apache has a memory leak somewhere that kills my server every hour or so. What is the best way to write a script that will monitor the memory usage and, if it peaks over, say, 450 MB, kill all the Apache processes and restart Apache? I know C++/PHP and basic Linux server administration, but I'm not familiar with Perl or bash scripting. I'd be open to learning either, though, as a temporary fix while I track down the issue.
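
    As a stopgap, the watchdog the question describes can be quite small. Below is a minimal sketch in Python, assuming Apache's workers are named httpd (the CentOS default), that /sbin/service is available, and using the 450 MB threshold from the question; treat it as a starting point rather than a finished tool.

        #!/usr/bin/env python
        # Watchdog sketch: restart Apache when the combined resident memory of all
        # httpd workers crosses a threshold. Run it from cron or leave it running.
        import subprocess
        import time

        THRESHOLD_KB = 450 * 1024   # 450 MB; ps reports RSS in kilobytes
        POLL_SECONDS = 60

        def apache_rss_kb():
            """Total RSS of every process named httpd, in kilobytes."""
            ps = subprocess.Popen(["ps", "-C", "httpd", "-o", "rss="],
                                  stdout=subprocess.PIPE, universal_newlines=True)
            out, _ = ps.communicate()
            return sum(int(field) for field in out.split())

        while True:
            if apache_rss_kb() > THRESHOLD_KB:
                subprocess.call(["/sbin/service", "httpd", "restart"])
            time.sleep(POLL_SECONDS)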

  • android content provider robustness on provider crash

    - by user1298992
    On Android platforms (confirmed on ICS), if a content provider dies while a client is in the middle of a query (i.e. has an open cursor), the framework decides to kill the client processes holding an open cursor. Here is the logcat output from when I tried this with a download manager query that sleeps after doing the query (the "sleep" was just to reproduce the problem; you can imagine it happening in a regular use case when the provider dies at the right/wrong time) and then killed com.android.media (which hosts the DownloadProvider): "Killing com.example (pid 12234) because provider com.android.providers.downloads.DownloadProvider is in dying process android.process.media" I tracked the code for this to ActivityManagerService::removeDyingProviderLocked. Is this a policy decision, or is the cursor access unsafe after the provider has died? It looks like the client cursor is holding an fd for an ashmem region populated by the CP. Is this the reason the clients are killed, instead of an exception being thrown the way Binders do when the server (provider) dies?

  • Enumerating all open file handles and/or registry handles in Windows Mobile / Windows CE 5.x

    - by jdstroy
    Hi all, Is there a way to enumerate all open file handles and/or registry handles in Windows Mobile 5 / Windows CE 5.x? In particular, I'd like to get the handles for all processes in the system, not just the ones for my application. This would be similar to the list of handles in Sysinternals' Process Explorer for Win32, or Sysinternals' handle.exe. I anticipate that someone will ask, "Is this absolutely necessary for your application?" My answer to that would be, "I think so, unless there's a better way to get a list of all open file names and registry key names." The goal is to provide diagnostic information about an application that crashes and fails to uninstall properly, but that worked properly at one time on the same device. (I do not have debugging information for the buggy application.)

  • How does 'lazy' work?

    - by Matt Fenwick
    What is the difference between these two functions? I see that lazy is intended to be lazy, but I don't understand how that is accomplished.

        -- | Identity function.
        id :: a -> a
        id x = x

        -- | The call '(lazy e)' means the same as 'e', but 'lazy' has a
        -- magical strictness property: it is lazy in its first argument,
        -- even though its semantics is strict.
        lazy :: a -> a
        lazy x = x
        -- Implementation note: its strictness and unfolding are over-ridden
        -- by the definition in MkId.lhs; in both cases to nothing at all.
        -- That way, 'lazy' does not get inlined, and the strictness analyser
        -- sees it as lazy. Then the worker/wrapper phase inlines it.
        -- Result: happiness

    Tracking down the note in MkId.lhs (hopefully this is the right note and version, sorry if it's not):

        Note [lazyId magic]
        ~~~~~~~~~~~~~~~~~~~
        lazy :: forall a?. a? -> a? (i.e. works for unboxed types too)

        Used to lazify pseq: pseq a b = a `seq` lazy b

        Also, no strictness: by being a built-in Id, all the info about lazyId
        comes from here, not from GHC.Base.hi. This is important, because the
        strictness analyser will spot it as strict!

        Also no unfolding in lazyId: it gets "inlined" by a HACK in CorePrep.
        It's very important to do this inlining after unfoldings are exposed in
        the interface file. Otherwise, the unfolding for (say) pseq in the
        interface file will not mention 'lazy', so if we inline 'pseq' we'll
        totally miss the very thing that 'lazy' was there for in the first place.
        See Trac #3259 for a real world example.

        lazyId is defined in GHC.Base, so we don't have to inline it. If it
        appears un-applied, we'll end up just calling it.

    I don't understand that because it refers to lazyId instead of lazy. How does lazy work?

  • Named pipe is not flushing in Python

    - by BrainCore
    I have a named pipe created via the os.mkfifo() command. I have two different Python processes accessing this named pipe, process A is reading, and process B is writing. Process A uses the select function to determine when there is data available in the fifo/pipe. Despite the fact that process B flushes after each write call, process A's select function does not always return (it keeps blocking as if there is no new data). After looking into this issue extensively, I finally just programmed process B to add 5KB of garbage writes before and after my real call, and likewise process A is programmed to ignore those 5KB. Now everything works fine, and select is always returning appropriately. I came to this hack-ish solution by noticing that process A's select would return if process B were to be killed (after it was writing and flushing, it would sleep on a read pipe). Is there a problem with flush in Python for named pipes?
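
    One thing worth ruling out is the file object's internal buffer sitting between select (which watches the raw descriptor) and the subsequent reads. Here is a reader-side sketch that stays on the raw descriptor, with /tmp/test_fifo as a placeholder path; it is offered as something to test rather than a confirmed fix for the behaviour described above.

        # Reader-side sketch: work on the raw file descriptor so select() and the
        # reads see exactly the same stream, with no stdio buffer in between.
        # Assumes the writer created /tmp/test_fifo with os.mkfifo() beforehand.
        import os
        import select

        FIFO = "/tmp/test_fifo"  # placeholder path

        fd = os.open(FIFO, os.O_RDONLY)  # blocks until a writer opens the fifo
        while True:
            readable, _, _ = select.select([fd], [], [], 5.0)
            if not readable:
                print("select timed out with no data")
                continue
            chunk = os.read(fd, 4096)
            if not chunk:
                break  # empty read means the writer closed its end
            print("read %d bytes" % len(chunk))
        os.close(fd)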

  • Background processing in rails

    - by hashpipe
    Hi, This might seem like a FAQ on stackoverflow, but my requirements are a little different. While I have previously used BackgroundRB and DJ for running background processes in Ruby, my requirement this time is to run some heavy analytics and mathematical computations on a huge set of data, and I need to do this only during roughly the first 15 days of the month. Given that, I am tempted to use cron and run a Ruby script to accomplish this goal. What I would like to know / understand is: 1 - Is using cron a good idea? (I'm not a system admin, so while I have a basic idea of cron, I'm not overly confident of doing it perfectly.) 2 - Can we somehow modify DJ to run only on the first 15 days of the month (with / without using cron), and then just stop and exit once all the jobs in the queue for the day are over? (I don't want it to ping the DB every time for a new job; whatever jobs are in the queue when DJ starts, that will be all.) I'm not sure if I have put the question in the right manner, but any help in this direction will be much appreciated. Thanks
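
    If the cron route wins out, the schedule itself is the easy part. Here is a sketch of a crontab entry that fires at 2 a.m. only on days 1 through 15 of each month; the paths, the time of day, and the analytics:run task name are all placeholders for however the job is actually invoked.

        # m  h   dom   mon  dow  command
        0    2   1-15  *    *    cd /path/to/app && RAILS_ENV=production rake analytics:run >> log/analytics_cron.log 2>&1

    Keep in mind that cron runs jobs with a minimal environment, so the ruby/rake binaries usually need to be referenced by full path or wrapped in a small shell script.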

  • Wordpress form handling

    - by Ron
    I need to add a basic form page to a website that runs on the WordPress framework. I have the following raw materials ready: Client side: HTML form layout, CSS, and jQuery validation code. Server side: a form-handler PHP function that processes the $_POST[] data. My problem is integrating this code into the WordPress framework. I have looked at some plugins, but they do much more than I would like, and they also have their own validation, which is cumbersome to change. Could anyone suggest a good form plugin that provides just the framework hooks? Or is it worthwhile to write the plugin myself? Thanks.

  • Communicating with the PC via USB connection in Android

    - by Danny Singh
    I'm fairly new to Android programming and need some information for a 4th-year forensics course project. Basically I am trying to create a suite of tools for live analysis of an Android phone. I know how to get the information I need on the phone, but I was wondering if there is a way to communicate that information back to the PC? I want to be able to run a program from a PC which, when the phone is docked, will allow the user to access information about the phone (i.e. currently running services/processes, Bluetooth/Wi-Fi connections, etc.). I have a bunch of methods that will run on the phone and get all the information, but I want to be able to call those methods from the PC, execute them on the phone, then have the information sent back to the PC to display to the user instead of just displaying it on the phone. This is to leave as small a footprint on the phone as possible. Thanks a lot.
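
    One common pattern for the PC-to-phone link is to run a small listening service on the phone and forward its TCP port over the USB cable with adb forward; the sketch below covers only the PC side. The port number, the phone-side service, and its LIST_PROCESSES command are assumptions invented for the example, not part of the Android SDK.

        # PC side only: talk to a hypothetical phone-side service over a forwarded
        # TCP port. Run this on the PC first:
        #     adb forward tcp:38300 tcp:38300
        import socket

        HOST, PORT = "127.0.0.1", 38300  # adb forward exposes the phone on localhost

        with socket.create_connection((HOST, PORT), timeout=10) as conn:
            conn.sendall(b"LIST_PROCESSES\n")  # hypothetical command the phone app understands
            chunks = []
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                chunks.append(data)

        print(b"".join(chunks).decode("utf-8", "replace"))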

  • Transitioning to Branching with TFS

    - by Rob
    Our team is currently using plain old TFS 2005: no branching, shared checkouts, etc. I would like to introduce a DEV/MAIN/PROD branching system similar to the basic flavor in the TFS Guidance document so that we can do some parallel dev, get isolation, and firm up review and deployment processes. I have read most of the whitepapers, etc. Do you guys have any practical advice, suggested tools, gotchas, or recommendations? Also, we plan to migrate to 2010 once it comes out; not sure if that would affect anything. I appreciate all the suggestions and help I can get, as I am a branching neophyte. Thanks in advance.

  • pyinstaller: 2 instances of my cherrypy app exe get executed.

    - by d.c
    I have a CherryPy app that I've made into an exe with PyInstaller. Now when I run the exe it loads itself twice into memory. Watching the Task Manager shows the first instance load at about 1K, then a second later a second instance of the exe loads at about 3K RAM. If I close the bigger one, both processes die. If I close the smaller one, only that one dies. Loading the exe with subprocess, if I try proc.kill(), it only kills the small one, leaving the other running in memory. Is this a side effect of using CherryPy and PyInstaller together?

  • How to make sure Solr/Lucene won't die with java.lang.OutOfMemoryError?

    - by taw
    I'm really puzzled why it keeps dying with java.lang.OutOfMemoryError during indexing even though it has a few GBs of memory. Is there a fundamental reason why it needs manual tweaking of config files / JVM parameters instead of just figuring out how much memory is available and limiting itself to that? No other programs except Solr ever have this kind of problem. Yes, I can keep tweaking the JVM heap size every time such crashes happen, but this is all so backwards. Here's the stack trace of the latest such crash, in case it is relevant:

        SEVERE: java.lang.OutOfMemoryError: Java heap space
            at java.util.Arrays.copyOfRange(Arrays.java:3209)
            at java.lang.String.<init>(String.java:216)
            at org.apache.lucene.index.TermBuffer.toTerm(TermBuffer.java:122)
            at org.apache.lucene.index.SegmentTermEnum.term(SegmentTermEnum.java:169)
            at org.apache.lucene.search.FieldCacheImpl$StringIndexCache.createValue(FieldCacheImpl.java:701)
            at org.apache.lucene.search.FieldCacheImpl$Cache.get(FieldCacheImpl.java:208)
            at org.apache.lucene.search.FieldCacheImpl.getStringIndex(FieldCacheImpl.java:676)
            at org.apache.lucene.search.FieldComparator$StringOrdValComparator.setNextReader(FieldComparator.java:667)
            at org.apache.lucene.search.TopFieldCollector$OneComparatorNonScoringCollector.setNextReader(TopFieldCollector.java:94)
            at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:245)
            at org.apache.lucene.search.Searcher.search(Searcher.java:171)
            at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:988)
            at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:884)
            at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:341)
            at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:182)
            at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:195)
            at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
            at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
            at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
            at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
            at java.lang.Thread.run(Thread.java:619)

  • MPI signal handling

    - by Seth Johnson
    When using mpirun, is it possible to catch signals (for example, the SIGINT generated by ^C) in the code being run? For example, I'm running a parallelized Python code. I can catch KeyboardInterrupt to handle those errors when running python blah.py by itself, but I can't when doing mpirun -np 1 python blah.py. Does anyone have a suggestion? Even finding out how to catch signals in a compiled C or C++ program would be a helpful start. If I send a signal to the spawned Python processes, they can handle the signals properly; however, signals sent to the parent orterun process (i.e. from exceeding wall time on a cluster, or pressing control-C in a terminal) will kill everything immediately.
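
    For the Python side, installing a handler is the easy half; whether it ever runs under mpirun depends on how orterun forwards signals to its children, which is exactly the behaviour in question. A minimal sketch:

        # Install SIGINT/SIGTERM handlers in the script that mpirun launches.
        import signal
        import sys
        import time

        def shutdown(signum, frame):
            sys.stderr.write("got signal %d, cleaning up\n" % signum)
            # flush output / write a checkpoint here before exiting
            sys.exit(1)

        signal.signal(signal.SIGINT, shutdown)
        signal.signal(signal.SIGTERM, shutdown)

        while True:
            time.sleep(1)  # stand-in for the real computation

    Running it as in the question, e.g. mpirun -np 1 python blah.py, makes it easy to compare what happens when the signal is delivered to the Python rank directly versus to orterun itself.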

  • Missing line numbers in stack trace even though the PDB files are included

    - by Farzad
    This is driving me nuts. I have a web service implemented with C# using VS 2008. I publish it on IIS. I have modified the release build so the PDB files are copied along with the DLLs into the target directory on inetpub. The web.config file also has debug=true. Then I call a web service method that throws an exception. The stack trace does not contain the line numbers. I have no idea what I am missing here; any ideas? Additional Info: If I run the web app using the VS built-in web server, it works and I get line numbers in the stack trace. But if I copy the same files (PDB and DLL) that the VS built-in web server is using to IIS, the line numbers are still missing from the stack trace. It seems that there is something related to IIS that ignores the PDB files! Update: When I publish to IIS, all the PDB files are published under the bin directory and everything looks fine. But when I go to "C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files", under the specific directory related to my project, I can see that the assembly (.dll) files are all there, but there are no PDB files. This does not happen if I run the project using the VS built-in web server, and if I copy the PDB files manually to the temp folder, I can see the line numbers. Any idea why the PDB files are not copied to the temp folder? BTW, when I attach to the worker process I can see that it says "Symbols loaded"!

  • Terminal services and memory limits

    - by Mark Wassell
    Is there a way in Terminal Services to set limits on memory-related parameters for a process, for example the working set size and, possibly, if it makes sense, the total virtual memory allocation for the session? To turn the question around: we have an application which cannot allocate as much virtual memory running on a terminal server as it can when running on a desktop PC (both of which I would expect to have a 2 GB limit on user-mode address space), and I was wondering if there is another limit for processes or users on a terminal server. Perhaps it is even 2 GB per user rather than per process.

  • Is there a way to make sure a background process spawned by my program is killed when my process terminates?

    - by Davy8
    Basically the child process runs indefinitely in the background until killed, and I want to clean it up when my program terminates for any reason, e.g. via the Task Manager. Currently I have a while (Process.GetProcessesByName("ParentProcess").Count() > 0) loop and exit if the parent process isn't running, but it seems pretty brittle, and if I wanted it to work under the debugger in Visual Studio I'd have to add "ParentProcess.vshost" or something. Is there any way to make sure that the child process ends without requiring the child process to know about the parent process? I'd prefer a solution in managed code, but if there isn't one I can PInvoke. Edit: Passing the PID seems like a more robust solution, but for curiosity's sake, what if the child process were not my code but some exe that I have no control over? Is there a way to safeguard against possibly creating orphaned child processes?
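
    The question is about .NET, but the "pass the parent's PID to the child and have the child watch it" fallback mentioned in the edit is language-agnostic. Purely as an illustration, here is a Python sketch of the child side; it assumes the third-party psutil package for the portable liveness check and ignores PID reuse for brevity.

        # Child-side watchdog for the "pass the parent PID on the command line" approach.
        import os
        import sys
        import threading
        import time

        import psutil  # assumption: psutil is installed

        def watch_parent(parent_pid, poll_seconds=2):
            while psutil.pid_exists(parent_pid):
                time.sleep(poll_seconds)
            os._exit(1)  # parent is gone: hard-exit so nothing lingers

        if __name__ == "__main__":
            parent_pid = int(sys.argv[1])  # the parent passes its own PID when spawning
            threading.Thread(target=watch_parent, args=(parent_pid,), daemon=True).start()
            while True:
                time.sleep(1)  # the child's real work goes here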

  • Handling database failover for Rails applications on FreeBSD

    - by bianster
    I'm working on implementing database (PostgreSQL) failover for a Rails app that runs with Passenger on FreeBSD. Due to certain constraints regarding the server OS, it's necessary to continue using FreeBSD (as opposed to, say, Ubuntu). I'm finding it quite a challenge to handle failover within the Rails application by way of a customised database adapter, because this application will be load-balanced between several webservers, each running the multiple Rails processes that Passenger spawns. I previously looked at setting up Pacemaker/Corosync to manage database server failover on a common IP, but unfortunately I wasn't able to get past building the packages on FreeBSD. It does work rather well on Ubuntu 10.04, but I'm not likely to be able to use Ubuntu due to the OS constraints. I'm considering a custom witness daemon that simply pings the primary DB server and switches all the webservers to the standby DB server when the primary becomes uncontactable (permanently/temporarily), to avoid split-brain. Though I would really like to know if there is a way to get Pacemaker (or something similar) to do the switch on FreeBSD.
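
    A toy version of the witness daemon idea described above might look like the following in Python; the host name, port, failure threshold, and the promote_standby.sh switchover hook are all placeholders, and a real deployment would need fencing and alerting around it.

        # Poll the primary's Postgres port and run a switchover hook after several
        # consecutive failures.
        import socket
        import subprocess
        import time

        PRIMARY = ("db-primary.example.com", 5432)   # hypothetical primary
        FAILURES_BEFORE_SWITCH = 3
        POLL_SECONDS = 5

        def primary_reachable(timeout=3):
            try:
                sock = socket.create_connection(PRIMARY, timeout=timeout)
                sock.close()
                return True
            except OSError:
                return False

        failures = 0
        while True:
            if primary_reachable():
                failures = 0
            else:
                failures += 1
                if failures >= FAILURES_BEFORE_SWITCH:
                    # hypothetical hook that repoints the webservers at the standby
                    subprocess.call(["/usr/local/bin/promote_standby.sh"])
                    break
            time.sleep(POLL_SECONDS)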

  • General Drools Question

    - by El Guapo
    For the last few months my company has been using a product from a company called Informatica (previously AgentLogic) called RulePoint. This product has proven itself very easy to use, with a well-developed and easy-to-use SDK for customization. The way we use the product for CEP is fairly trivial: we have 2 sources which we monitor for our rule data, the first being a JMS queue, the second being a Jabber IM account. The product runs on any Java-based application server (WebLogic, Tomcat, etc.) and runs just about flawlessly. Last week my boss says, "Hey, I've heard that we may be able to do the same thing we are doing with RulePoint with an open-source product called Drools. Check it out and let me know what you think." I've heard of people using Drools for flow-based operations (validation, etc.); however, I've never heard of anyone using their CEP product (Fusion) in practice. So, being the diligent worker, I have undertaken this task. I've downloaded all the files (version 5.0) and accompanying documentation and have started to read. I've read through just about all the docs and run most of the examples, but I still don't really see HOW Drools works for CEP. While there are examples for using data (or Facts, I guess) from JMS, I don't see how this thing stays "running", continuously monitoring a queue until the application is actually stopped. RulePoint pretty much just sits and listens; however, Drools seems not to. I could probably write a full-blown command-line application for our needs, but I was hoping to leverage some of the benefits an application server provides. I guess I'm looking for some good tutorials or an example of how someone is using Drools and CEP in production. Thanks in advance for any information or advice you may be able to provide.

  • Running another ruby script from a ruby script

    - by Andrew Grimm
    In Ruby, is it possible to call another Ruby script using the same Ruby interpreter that the original script is being run by? For example, if a.rb runs b.rb a couple of times, is it possible to replace system("ruby", "b.rb", "foo", "bar") with something like run_ruby("b.rb", "foo", "bar"), so that if you used ruby1.9.1 a.rb on the original, ruby1.9.1 would be used on b.rb, but if you just used ruby a.rb on the original, ruby would be used on b.rb? I'd prefer not to use shebangs, as I'd like it to be able to run on different computers, some of which don't have /usr/bin/env. Edit: I didn't mean load or require and the like, but spawning new processes (so I can use multiple CPUs).

  • ETW tracking from .net, user mode and driver

    - by Jack Juiceson
    Hi everyone, We have an application, parts of which are in .NET, C++ user mode, and C++ drivers. The application is divided into several executables that run on demand and communicate with each other using LPC (the processes run in different sessions (winlogon)). Currently we have a home-written logging service to which the .NET and C++ user-mode parts communicate by sending LPC messages. The driver uses DbgPrint and is not always enabled, as it causes the code to run 30% slower (we have lots of logging). I want to have all the logs written in one place, and preferably not write the logger myself (I love log4cpp and log4net). The requirement is to write from all the executables and drivers into one place with minimal overhead. I have read that ETW is the way to go; however, I wasn't able to find an already written logger that uses it the way log4cpp or log4net do. So basically my question is: do you know of an already implemented ETW appender for log4cpp and log4net that I can use?

  • How can I make PowerShell run the original Start-Process cmdlet rather than the PSCX Start-Process cmdlet?

    - by urig
    I have PowerShell v2.0 installed and, on top of that, PSCX is installed. PSCX is the PowerShell Community Extensions (http://pscx.codeplex.com/Wikipage). It seems that I have two cmdlets called Start-Process installed, and I'm guessing one is the original and the other is from PSCX. When I invoke Start-Process, the PSCX cmdlet is the one that runs. How can I make PowerShell run the original version instead? Helpful evidence: When I run get-help start-process I get:

        Name            Category   Synopsis
        ----            --------   --------
        Start-Process   Cmdlet     PSCX Cmdlet: Starts a new process.
        Start-Process   Cmdlet     Starts one or more processes on the local computer.

    When I run get-command start-process I get:

        CommandType   Name            Definition
        -----------   ----            ----------
        Cmdlet        Start-Process   Start-Process [[-Path] <String>] [[-Arguments] <String>] [...

  • Entity Framework lazy loading doesn't work from other thread

    - by Thomas Levesque
    Hi, I just found out that lazy loading in Entity Framework only works from the thread that created the ObjectContext. To illustrate the problem, I did a simple test with a simple model containing just 2 entities: Person and Address. Here's the code:

        private static void TestSingleThread()
        {
            using (var context = new TestDBContext())
            {
                foreach (var p in context.Person)
                {
                    Console.WriteLine("{0} lives in {1}.", p.Name, p.Address.City);
                }
            }
        }

        private static void TestMultiThread()
        {
            using (var context = new TestDBContext())
            {
                foreach (var p in context.Person)
                {
                    Person p2 = p; // to avoid capturing the loop variable
                    ThreadPool.QueueUserWorkItem(
                        arg =>
                        {
                            Console.WriteLine("{0} lives in {1}.", p2.Name, p2.Address.City);
                        });
                }
            }
        }

    The TestSingleThread method works fine; the Address property is lazily loaded. But in TestMultiThread, I get a NullReferenceException on p2.Address.City, because p2.Address is null. Is that a bug? Is this the way it's supposed to work? If so, is there any documentation mentioning it? I couldn't find anything on the subject on MSDN or Google... And more importantly, is there a workaround? (Other than explicitly calling LoadProperty from the worker thread...) Any help would be very appreciated. PS: I'm using VS2010, so it's EF 4.0. I don't know if it was the same in the previous version of EF...

  • process thread scheduling

    - by arvind
    I have the following questions regarding the scheduling of process threads. a) If my process A has 3 threads, can these threads be scheduled concurrently on different CPUs in an SMP machine, or will they be given time slices on the same CPU? b) Suppose I have two processes, A with 3 threads and B with 2 threads (all threads of the same priority); is the CPU time allocated to each thread (the time slice) dependent on the number of threads in its process or not? Correct me if I am wrong, but is it the case that CPU time is allocated to the process and then shared among its threads, i.e. that the time slice given to process A's threads is smaller than that given to process B's threads?

  • Managing Cisco programatically; Telnet vs SNMP?

    - by MikeHerrera
    I was recently approached by a network engineer co-worker who would like to offload his minor network admin duties to a junior-level helpdesk tech. The specific location in need of management acts as an ISP for tenants on its single-site property, so there are a lot of small adjustments being made on a daily basis. I am thinking it would be helpful to write him a WinForms app to manage the 32 Cisco devices on site. I'd like to initially provide functionality to modify access control lists, port VLAN assignments, and bandwidth limitations per VLAN, adding more to the list as it's deemed valuable. My initial thought was to emulate a telnet session with the network device, utilizing my network engineer's familiarity with the command-line / IOS interaction; minimal time would be required for me to learn Cisco IOS conventions. While searching for solutions, though, it appears that most people favor SNMP, or at least their specific circumstances pushed them in the direction of SNMP. I wanted to know if I've overlooked an obvious benefit of SNMP. Should I be using SNMP? Why or why not?

  • How to transform phrases and words into MD5 hash?

    - by brilliant
    Can anyone please explain to me how to transform a phrase like "I want to buy some milk" into an MD5 hash? I read the Wikipedia article on MD5, but the explanation given there is beyond my comprehension: "MD5 processes a variable-length message into a fixed-length output of 128 bits. The input message is broken up into chunks of 512-bit blocks (sixteen 32-bit little endian integers)". "Sixteen 32-bit little endian integers" is already hard for me; I checked the article on little endians and didn't understand a bit. However, the examples of some phrases and their MD5 hashes are very nice:

        MD5("The quick brown fox jumps over the lazy dog") = 9e107d9d372bb6826bd81d3542a419d6
        MD5("The quick brown fox jumps over the lazy dog.") = e4d909c290d0fb1ca068ffaddf22cbd0

    Can anyone please explain to me how this MD5 algorithm works on some very simple example? And also, perhaps you know some software or code that would transform phrases into their MD5 hashes. If yes, please let me know.
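
    For the last part of the question, producing the hash itself takes one call to Python's standard library; the snippet below reproduces the first example hash quoted above (the inner workings of the compression function are a separate topic).

        # MD5 of a phrase via the standard library.
        import hashlib

        print(hashlib.md5(b"The quick brown fox jumps over the lazy dog").hexdigest())
        # -> 9e107d9d372bb6826bd81d3542a419d6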
