Search Results

Search found 22447 results on 898 pages for 'cpu load'.


  • Android thread management in onPause

    - by Kwan Cheng
    I have a class that extends the Thread class and implements its run method like so:

        public void run() {
            while (!terminate) {
                if (paused) {
                    Thread.yield();
                } else {
                    accumulator++;
                }
            }
        }

    This thread is spawned from the onCreate method. When my UI is hidden (when the Home key is pressed), my onPause method sets the paused flag to true and yields the thread. However, in DDMS I still see the thread's utime accumulating and its state shown as "running". So my question is: what is the proper way to pause the thread so that it does not use up CPU time? (A hedged sketch of one approach follows this entry.)

    Read the article
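
    A common way to avoid the busy-wait described above is to block the worker on a monitor with wait() while it is paused and wake it with notifyAll() from onResume, so the paused thread consumes no CPU time at all. This is a minimal sketch under that assumption, not the original poster's code; the class and field names are illustrative.

        // Pausable worker: blocks while paused instead of spinning on Thread.yield().
        public class PausableCounter extends Thread {
            private final Object pauseLock = new Object();
            private volatile boolean paused = false;
            private volatile boolean terminate = false;
            private long accumulator = 0;

            @Override
            public void run() {
                while (!terminate) {
                    synchronized (pauseLock) {
                        while (paused && !terminate) {
                            try {
                                pauseLock.wait();   // sleeps until notified; uses no CPU time
                            } catch (InterruptedException e) {
                                Thread.currentThread().interrupt();
                                return;
                            }
                        }
                    }
                    accumulator++;
                }
            }

            public void pauseWork() {          // call from onPause()
                synchronized (pauseLock) { paused = true; }
            }

            public void resumeWork() {         // call from onResume()
                synchronized (pauseLock) {
                    paused = false;
                    pauseLock.notifyAll();
                }
            }

            public void terminateWork() {      // call when the thread should exit for good
                synchronized (pauseLock) {
                    terminate = true;
                    pauseLock.notifyAll();
                }
            }
        }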

  • Why exactly is server side HTML rendering faster than client side?

    - by mvbl fst
    I am working on a large web site, and we're moving a lot of functionality to the client side (a Require.js, Backbone and Handlebars stack). There are even discussions about possibly moving all rendering to the client side. But reading some articles, especially ones about Twitter moving away from client-side rendering, which mention that server side is faster / more reliable, I begin to have questions. I don't understand how rendering fairly simple HTML widgets in JS from JSON and templates, in a contemporary browser on a dual-core CPU with 4-8 GB of RAM, is any slower than making dozens of includes in your server-side app. Are there any actual real-life benchmarking figures on this? Also, it seems like parsing HTML templates in a server-side templating engine can't be any faster than rendering the same HTML from a Handlebars template, especially if the template is a precompiled JS function?

    Read the article

  • Faking a Single Address Space

    - by dsimcha
    I have a large scientific computing task that parallelizes very well with SMP, but at too fine grained a level to be easily parallelized via explicit message passing. I'd like to parallelize it across address spaces and physical machines. Is it feasible to create a scheduler that would parallelize already multithreaded code across multiple physical computers under the following conditions: The code is already multithreaded and can scale pretty well on SMP configurations. The fact that not all of the threads are running in the same address space or on the same physical machine must be transparent to the program, even if this comes at a significant performance penalty in some use cases. You may assume that all of the physical machines involved are running operating systems and CPU architectures that are binary compatible. Things like locks and atomic operations may be slow (having network latency to deal with and all) but must "just work".

    Read the article

  • Take advantage of multiple cores executing SQL statements

    - by willvv
    I have a small application that reads XML files and inserts the information into a SQL DB. There are ~300,000 files to import, each one with ~1,000 records. I started the application on 20% of the files and it has been running for 18 hours now; I hope I can improve this time for the rest of the files. I'm not using a multi-threaded approach, but since the computer I'm running the process on has 4 cores I was thinking of doing so to get some performance improvement (although I guess the main problem is the I/O and not only the processing). I was thinking of using the BeginExecuteNonQuery() method on the SqlCommand object I create for each insertion, but I don't know if I should limit the maximum number of simultaneous threads (nor do I know how to do it). What's your advice for getting the best CPU utilization? Thanks. (A sketch of the general pattern, in Java rather than C#, follows this entry.)

    Read the article
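
    The usual shape of the fix for the question above is to bound the parallelism to roughly the core count and to batch the inserts so each file costs one round-trip instead of ~1,000. Below is a sketch of that pattern in Java/JDBC rather than the asker's C#/SqlCommand, purely as an illustration; the connection string, table layout and parseRecords helper are hypothetical.

        import java.nio.file.*;
        import java.sql.*;
        import java.util.List;
        import java.util.concurrent.*;
        import java.util.stream.Collectors;
        import java.util.stream.Stream;

        public class ParallelImporter {
            // Hypothetical connection string and table, for illustration only.
            private static final String JDBC_URL =
                "jdbc:sqlserver://localhost;databaseName=import_db;integratedSecurity=true";

            public static void main(String[] args) throws Exception {
                List<Path> files;
                try (Stream<Path> stream = Files.list(Paths.get("xml-input"))) {
                    files = stream.filter(p -> p.toString().endsWith(".xml"))
                                  .collect(Collectors.toList());
                }

                // Bound the worker count to the core count; more threads mostly add
                // contention when the bottleneck is I/O and the database itself.
                ExecutorService pool =
                    Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
                for (Path file : files) {
                    pool.submit(() -> importFile(file));
                }
                pool.shutdown();
                pool.awaitTermination(7, TimeUnit.DAYS);
            }

            private static void importFile(Path file) {
                // One connection per task, one batch per file: ~1,000 rows per round-trip.
                try (Connection conn = DriverManager.getConnection(JDBC_URL);
                     PreparedStatement ps = conn.prepareStatement(
                             "INSERT INTO records (file_name, payload) VALUES (?, ?)")) {
                    conn.setAutoCommit(false);
                    for (String record : parseRecords(file)) {
                        ps.setString(1, file.getFileName().toString());
                        ps.setString(2, record);
                        ps.addBatch();
                    }
                    ps.executeBatch();
                    conn.commit();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            private static List<String> parseRecords(Path file) {
                // Placeholder: real code would parse the XML into row values.
                return List.of();
            }
        }

    In C#, the analogous pieces would be Parallel.ForEach with MaxDegreeOfParallelism for the bounded workers and SqlBulkCopy (or batched commands) for the writes.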

  • How can I write faster JavaScript?

    - by a paid nerd
    I'm writing an HTML5 canvas visualization. According to the Chrome Developer Tools profiler, 90% of the work is being done in (program), which I assume is the V8 interpreter at work calling functions and switching contexts and whatnot. Other than logic optimizations (e.g., only redrawing parts of the visualization that have changed), what can I do to optimize the CPU usage of my JavaScript? I'm willing to sacrifice some amount of readability and extensibility for performance. Is there a big list I'm missing because my Google skills suck? I have some ideas but I'm not sure if they're worth it:
    - Limit function calls
    - When possible, use arrays instead of objects and properties
    - Use variables for math operation results as much as possible
    - Cache common math operations such as Math.PI / 180
    - Use sin and cos approximation functions instead of Math.sin() and Math.cos()
    - Reuse objects when passing around data instead of creating new ones
    - Replace Math.abs() with ~~
    - Study jsperf.com until my eyes bleed
    - Use a preprocessor on my JavaScript to do some of the above operations

    Read the article

  • How to read system information in C++ on Windows and Linux?

    - by f4
    I need to read system information like CPU/RAM/disk usage in C++. Maybe swap, network and process information too, but that's less important. It has probably been done thousands of times before, so I first tried to search for a library. Someone here suggested SIGAR, which seems to fit my needs, but it has a GPL license and this is for inclusion in a proprietary product, so it's not an option here. I feel like it's not that easy to implement, as it'll need testing on several platforms, so a library would be welcome. If you don't know of any library, could you point me in the right direction for both platforms?

    Read the article

  • How do I declare a C# Web User Control but stop it from initializing?

    - by Scott Stafford
    I have a C#/ASP.NET .aspx page that declares two controls that each represent the content of one tab. I want a query string argument (e.g., ?tab=1) to determine which of the two controls is activated. My problem is, they both go through the initialization events and populate their child controls, wasting CPU resources and slowing the response time. Is it possible to deactivate them somehow so they don't go through any initialization? My .aspx page looks like this:

        <% if (TabId == 0) { %>
            <my:usercontroltabone id="ctrl1" runat="server" />
        <% } else if (TabId == 1) { %>
            <my:usercontroltabtwo id="ctrl2" runat="server" />
        <% } %>

    And that part works fine. I assumed that the <% %> blocks would have meant the control wouldn't actually be declared and so wouldn't initialize, but that isn't so...

    Read the article

  • How can two programs talk to each other in Java?

    - by Arnon
    I want to reduce the CPU usage/ROM usage/RAM usage - generally, all the system resources that my app uses - who doesn't? :) For this reason I want to split the preferences window off from the rest of the application and let the preferences window run as an independent program. The preferences program should write to a properties file (not a problem at all) and send an "update signal" to the main program - which means it should call the update method (that I wrote) found in the Main class. How can I call the update method in the main program from the preferences program? To put it another way, is there a way to build a preferences window that takes system resources only while the window is showing? Is this approach - of separating programs and letting them talk to each other (somehow) - the right approach for speeding up my programs? (A small socket-based sketch follows this entry.)

    Read the article
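
    One common way for two local Java programs to talk, as asked above, is a loopback socket: the main program listens and calls its update method when it receives a message, and the preferences program connects and sends "update" after saving the properties file. This is a minimal sketch; the class names, port and message are illustrative, not from the original post.

        import java.io.*;
        import java.net.*;

        class MainProgram {
            public static void main(String[] args) throws IOException {
                try (ServerSocket server =
                         new ServerSocket(54321, 0, InetAddress.getLoopbackAddress())) {
                    while (true) {
                        try (Socket client = server.accept();
                             BufferedReader in = new BufferedReader(
                                     new InputStreamReader(client.getInputStream()))) {
                            if ("update".equals(in.readLine())) {
                                update();   // re-read the properties file here
                            }
                        }
                    }
                }
            }

            static void update() {
                System.out.println("Preferences changed, reloading...");
            }
        }

        class PreferencesProgram {
            public static void main(String[] args) throws IOException {
                // ...write the properties file first, then notify the main program:
                try (Socket socket = new Socket(InetAddress.getLoopbackAddress(), 54321);
                     PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                    out.println("update");
                }
            }
        }

    In a real application the accept loop would run on a background thread so it does not block the UI; RMI or watching the properties file for changes are alternative mechanisms.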

  • Endless saving of CoreData Context

    - by Robert
    Sometimes I notice that a save: operation on a ManagedObjectContext never returns and consumes 100% CPU. I'm using an SQL store in a garbage-collected environment (Mac OS X 10.6.3). The disk activity shows about 700 KB/s of writing. Watching the folder that contains the sqlite database file, the "-journal" file appears and disappears, appears and disappears, ... This is part of the call graph from the process analysis:

        2203 -[NSManagedObjectContext save:]
        1899 -[NSPersistentStoreCoordinator(_NSInternalMethods) executeRequest:withContext:]
        1836 -[NSSQLCore executeRequest:withContext:]
        1836 -[NSSQLCore saveChanges:]
        1479 -[NSSQLCore performChanges] ...
        335 -[NSSQLCore recordChangesInContext:] ...
        20 -[NSSQLCore rollbackChanges] ...
        2 -[NSSQLCore prepareForSave:] ...
        62 -[NSPersistentStoreCoordinator(_NSInternalMethods) _checkRequestForStore:originalRequest:andOptimisticLocking:] ...
        1 -[NSPersistentStore(_NSInternalMethods) _preflightCrossCheck] ...
        184 -[NSMergePolicy resolveConflicts:] ...
        120 -[NSManagedObjectContext(_NSInternalChangeProcessing) _prepareForPushChanges:] ...

    Everything is happening in the main GUI thread. Any ideas what I can do to resolve the problem?

    Read the article

  • How to optimize indexing of large number of DB records using Zend_Lucene and Zend_Paginator

    - by jdichev
    So I have this cron script that is deployed and run using cron on a host and indexes all the records in a database table - the index is later used both for the front end of the site and for back-end operations as well. After the operation, the index is about 3-4 MB. The problem is that it takes a lot of resources (CPU: 30+ and a good chunk of memory) and slows the machine down. My question is about how to optimize the operation described below. First there is a select query built using the Zend Framework API; this query is then passed to a Paginator factory that returns a paginator, which I am using to bound the number of items being indexed at a time and not iterate over too many items. The script iterates over the items on the current page of the paginator object using a foreach loop until reaching the end, and then continues after fetching the items for the next page. I suspect this overhead is caused by Zend_Lucene, but I have no idea how it could be improved.

    Read the article

  • C# struct design: why is 16 bytes the recommended size?

    - by maxima120
    I read the Cwalina book (recommendations on the development and design of .NET apps). He says that a well-designed struct has to be less than 16 bytes in size (for performance purposes). My question is: why exactly is this? And (more important) can I have a larger struct with the same efficiency if I run my .NET 3.5 (soon to be .NET 4.0) 64-bit application on an i7 under Win7 x64 (is this limitation CPU/OS based)? Just to stress again - I need the most efficient struct possible. I try to keep it on the stack all the time, the application is heavily multi-threaded and runs at sub-millisecond intervals, and the current size of the struct is 64 bytes.

    Read the article

  • OpenOffice document (odt) to PDF with command line on Linux?

    - by Data-Base
    Hi, we are building a PHP script that we need at work to create reports as PDFs. The reports will be created using templates and data from PostgreSQL. So far I have found that this can be done with PHP and odt (OpenOffice) files [http://www.odtphp.com/] (do you have any other suggestions?). Now, how can I convert the results to PDF so teachers will get the final reports as PDFs? Any tips? The server has no GUI and I want to make it as simple as possible. We tried generating PDFs directly from PHP with FPDF [http://www.fpdf.org/], but it is really a CPU killer! Cheers

    Read the article

  • CUDA Global Memory, Where is it?

    - by gamerx
    I understand that in CUDA's memory hierarchy we have things like shared memory, texture memory, constant memory, registers and, of course, the global memory which we allocate using cudaMalloc(). I've been searching through whatever documentation I can find, but I have yet to come across any that explicitly explains what the global memory is. I believe that the global memory allocated is on the GDDR of the graphics card itself and not the RAM that is shared with the CPU, since one of the documents did state that the pointer cannot be dereferenced on the host side. Am I right?

    Read the article

  • What is the cost of memory access?

    - by Jurily
    We like to think that a memory access is fast and constant, but on modern architectures/OSes, that's not necessarily true. Consider the following C code:

        int i = 34;
        int *p = &i;
        // do something that may or may not involve i and p {...}
        // 3 days later:
        *p = 643;

    What is the estimated cost of this last assignment in CPU instructions, if
    - i is in L1 cache,
    - i is in L2 cache,
    - i is in L3 cache,
    - i is in RAM proper,
    - i is paged out to an SSD disk,
    - i is paged out to a traditional disk?
    Where else can i be? Of course the numbers are not absolute, but I'm only interested in orders of magnitude. I tried searching the webs, but Google did not bless me this time.

    Read the article

  • Disable update on battery percentage

    - by Kris B
    I have a service that performs background updates. I want to give the user the option to disable the updates when their battery percentage reaches a certain level. From my research, I'm going to use a receiver in the onCreate method of my Service class, e.g.:

        public class MainService extends Service {
            @Override
            public void onCreate() {
                this.registerReceiver(this.BatInfoReceiver,
                        new IntentFilter(Intent.ACTION_BATTERY_CHANGED));
            }

            private BroadcastReceiver BatInfoReceiver = new BroadcastReceiver() {
                @Override
                public void onReceive(Context arg0, Intent intent) {
                    int level = intent.getIntExtra("level", 0);
                }
            };
        }

    I'm assuming the best practice is to leave the service running, check the battery level in the service, and simply not perform the CPU-intensive code depending on the percentage? I shouldn't actually stop the service itself and start it up again based on the battery percentage? (A hedged sketch follows this entry.)

    Read the article
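
    Building on the receiver above, one way to gate the work is to compute the percentage from the level and scale extras and skip the expensive update below a threshold, while leaving the service itself running. The threshold value and doBackgroundUpdate() are illustrative names, not from the original post.

        import android.app.Service;
        import android.content.BroadcastReceiver;
        import android.content.Context;
        import android.content.Intent;
        import android.content.IntentFilter;
        import android.os.BatteryManager;
        import android.os.IBinder;

        public class MainService extends Service {
            // Hypothetical user-configurable threshold; in practice read it from SharedPreferences.
            private int userThresholdPercent = 20;

            private final BroadcastReceiver batInfoReceiver = new BroadcastReceiver() {
                @Override
                public void onReceive(Context context, Intent intent) {
                    int level = intent.getIntExtra(BatteryManager.EXTRA_LEVEL, -1);
                    int scale = intent.getIntExtra(BatteryManager.EXTRA_SCALE, -1);
                    if (level < 0 || scale <= 0) {
                        return;   // extras missing; leave behaviour unchanged
                    }
                    int percent = (100 * level) / scale;
                    // Keep the service alive; just skip the expensive work below the threshold.
                    if (percent >= userThresholdPercent) {
                        doBackgroundUpdate();
                    }
                }
            };

            @Override
            public void onCreate() {
                super.onCreate();
                registerReceiver(batInfoReceiver, new IntentFilter(Intent.ACTION_BATTERY_CHANGED));
            }

            @Override
            public void onDestroy() {
                unregisterReceiver(batInfoReceiver);
                super.onDestroy();
            }

            @Override
            public IBinder onBind(Intent intent) {
                return null;   // started service, not bound
            }

            private void doBackgroundUpdate() {
                // Placeholder for the CPU-intensive update work.
            }
        }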

  • Why does a data race occur with threading but not with gevent?

    - by onlytiancai
    My test code is below. Using threading, count is not 5,000,000, so there has been a data race; but using gevent, count is 5,000,000 and there was no data race. Does gevent execute "count += 1" atomically within a coroutine, rather than splitting it into more than one CPU instruction?

        # -*- coding: utf-8 -*-
        import threading

        use_gevent = True
        use_debug = False
        cycles_count = 100*10000

        if use_gevent:
            from gevent import monkey
            monkey.patch_thread()

        count = 0

        class Counter(threading.Thread):
            def __init__(self, name):
                self.thread_name = name
                super(Counter, self).__init__(name=name)

            def run(self):
                global count
                for i in xrange(cycles_count):
                    if use_debug:
                        print '%s:%s' % (self.thread_name, count)
                    count = count + 1

        counters = [Counter('thread:%s' % i) for i in range(5)]
        for counter in counters:
            counter.start()
        for counter in counters:
            counter.join()
        print 'count=%s' % count

    Read the article

  • Through Java, make a call to Javascript functions on networked device?

    - by stjowa
    I am doing device monitoring on a networked system. I need to know how to make JavaScript calls on that device via its IP address to get certain status information (this device's status is only available through JavaScript APIs, not SNMP, etc.). I am working in Java. ADDED: The specific device is an Amino set-top box. It has what it calls JMACX: a JavaScript Media Access Control Extensions API specification. Within an HTML document, that API lets you get a lot of information about the device (CPU usage, channel info, remote-control options, etc.). I need to get this information from within a Java program for specific monitoring purposes. Perhaps this is possible with HTTP requests? Any input would be greatly appreciated. Thanks, Steve (A hedged Java sketch follows this entry.)

    Read the article
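
    If the box does serve its status over plain HTTP, the Java side is just an HTTP GET and some parsing; a minimal sketch follows. The URL and the assumption that such an endpoint exists are illustrative - the real path would have to come from the Amino/JMACX documentation - and if the values are only produced by client-side JavaScript, the page would need to be rendered with a headless browser library such as HtmlUnit instead.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class DeviceStatusPoller {
            public static void main(String[] args) throws Exception {
                // Hypothetical status endpoint on the set-top box.
                URL url = new URL("http://192.168.1.50/status.html");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("GET");
                conn.setConnectTimeout(5000);
                conn.setReadTimeout(5000);

                StringBuilder body = new StringBuilder();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        body.append(line).append('\n');
                    }
                } finally {
                    conn.disconnect();
                }

                // Parse the returned page here to extract CPU usage, channel info, etc.
                System.out.println(body);
            }
        }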

  • Is there a way to determine the current Outlook activity level?

    - by dlittau
    I am working on an Outlook 2007 add-in in C# (VS2008), and we want to send some background email items only when Outlook is not busy doing something else. Is there a .NET way to see if Outlook is busy doing other things (perhaps due to other add-ins or the like taking up CPU cycles)? Alternatively, is there a way to send emails such that they never appear in the Outbox? We need a method that does not require any additional software to be installed. Thanks for any input.

    Read the article

  • Loop function works first time, not second time

    - by user1483101
    I'm creating a parsing program to look for certain strings in a text file and count them. However, I'm having some trouble with one spot.

        def callbrowse():
            filename = tkFileDialog.askopenfilename(filetypes = (("Text files", "*.txt"), ("HTML files", ".html;*.htm"), ("All files", "*.*")))
            print filename
            try:
                global filex
                global writefile
                filex = open(filename, 'r')
                print "Success!!"
                print filename
            except:
                print "Failed to open file"

        ######This returns the correct count only the first time it is run. The next time it
        ######returns 0. If the browse button is clicked again, then this function returns the
        ######correct count again.
        def count_errors(error_name):
            count = 0
            for line in filex:
                if error_name == "CPU > 79%":
                    stringparse = "Utilization is above"
                elif error_name == "Stuck touchscreen":
                    stringparse = "Stuck touchscreen"
                if re.match("(.*)" + "Utilization is above" + "(.*)", line):
                    count = count + 1
            return count

    Thanks for any help. I can't seem to get this to work right.

    Read the article

  • SqlAlchemy hangs after adding record in MS SQL

    - by Patrick
    I'm running SQLAlchemy on Jython and trying to connect to a MS SQL database using jTDS with Windows authentication. I can query and delete just fine, but when I try to insert new values it hangs when I commit:

        print 'before add'
        session.add(newVal)
        print 'after add'
        session.commit()
        print 'after commit'

    I see the first two print statements but not the last. My CPU maxes out and I can't even query the table directly using MS SQL Management Studio. When I kill the Jython Java process I can query again, but the new values haven't been added. Strangely enough, I can insert values directly using an SQL command:

        insert_sql = "INSERT INTO my_table (my_value) VALUES ('test_value')"
        session.execute(insert_sql)
        session.commit()

    Any ideas what I'm doing wrong?

    Read the article

  • C/C++ function definitions without assembly

    - by Jack
    Hi, I always thought that functions like printf() are, at the last step, defined using inline assembly - that deep inside stdio.h is buried some asm code that actually tells the CPU what to do. Something like in DOS: first mov the beginning of the string to some memory location or register, and then call some interrupt. But since the x64 version of Visual Studio doesn't support inline assembler at all, it made me think that there are really no assembler-defined functions in C/C++. So, please, how is, for example, printf() defined in C/C++ without using assembler code? What actually executes the right software interrupt? Thanks.

    Read the article

  • Optimizing an Oracle query

    - by deming
    I'm having a hard time wrapping my head around this query. It takes over 200 seconds to execute. I've pasted the execution plan as well.

        SELECT user_id, ROLE_ID, effective_from_date, effective_to_date, participant_code, ACTIVE
        FROM CMP_USER_ROLE E
        WHERE ACTIVE = 0
          AND (SYSDATE BETWEEN effective_from_date AND effective_to_date
               OR TO_CHAR(effective_to_date,'YYYY-Q') = '2010-2')
          AND participant_code = 'NY005'
          AND NOT EXISTS (
                SELECT 1
                FROM CMP_USER_ROLE r
                WHERE r.USER_ID = E.USER_ID
                  AND r.role_id = E.role_id
                  AND r.ACTIVE = 4
                  AND E.effective_to_date <= (SELECT MAX(last_update_date)
                                              FROM CMP_USER_ROLE S
                                              WHERE S.role_id = r.role_id
                                                AND S.role_id = r.role_id
                                                AND S.ACTIVE = 4))

    Explain plan:

        | Id | Operation                         | Name             | Rows | Bytes | Cost (%CPU) | Time     |
        |  0 | SELECT STATEMENT                  |                  |    1 |    37 |   154   (2) | 00:00:02 |
        |* 1 |  FILTER                           |                  |      |       |             |          |
        |* 2 |   TABLE ACCESS BY INDEX ROWID     | USER_ROLE        |    1 |    37 |    30   (0) | 00:00:01 |
        |* 3 |    INDEX RANGE SCAN               | N_USER_ROLE_IDX6 |   27 |       |     3   (0) | 00:00:01 |
        |* 4 |   FILTER                          |                  |      |       |             |          |
        |  5 |    HASH GROUP BY                  |                  |    1 |    47 |   124   (2) | 00:00:02 |
        |* 6 |     TABLE ACCESS BY INDEX ROWID   | USER_ROLE        |  159 |  3339 |   119   (1) | 00:00:02 |
        |  7 |      NESTED LOOPS                 |                  |   11 |   517 |   123   (1) | 00:00:02 |
        |* 8 |       TABLE ACCESS BY INDEX ROWID | USER_ROLE        |    1 |    26 |     4   (0) | 00:00:01 |
        |* 9 |        INDEX RANGE SCAN           | N_USER_ROLE_IDX5 |    1 |       |     3   (0) | 00:00:01 |
        |*10 |       INDEX RANGE SCAN            | N_USER_ROLE_IDX2 |  957 |       |    74   (2) | 00:00:01 |

    Read the article

  • 32/64 Bit Question

    - by user48408
    Here's my question: what is the best way to determine what bit architecture your app is running on? What I am looking to do: on a 64-bit server I want my app to read 64-bit datasources (stored in the registry key Software\Wow6432Node\ODBC\ODBC.INI\ODBC Data Sources), and if it's 32-bit I want to read 32-bit datasources (i.e. read from Software\ODBC\ODBC.INI\ODBC Data Sources). I might be missing the point, but I don't want to care what mode my app is running in. I simply want to know if the OS is 32- or 64-bit. [System.Environment.OSVersion.Platform doesn't seem to be cutting it for me. It's returning Win32NT on my local XP machine and on a Win2k8 64-bit server (even when all my projects are set to target 'Any CPU').]

    Read the article

  • Can JPA do batch update | put | write | insert as pm.makePersistentAll() does in GAE/J

    - by Kenyth
    I searched through multiple discussions here. Can someone just give me a quick and direct answer? And if you can't do a batch update with JPA, what if I don't use a transaction and just use the following flow:

        em = emf.getEntityManager
        // do some query
        // make some data modification
        em.persist(..)
        // do some query
        // make some data modification
        em.persist(..)
        // do some query
        // make some data modification
        em.persist(..)
        ...
        em.close()

    How does this compare to a batch update with regard to performance, and how does it compare to a single transaction commit, measured in RPC calls to the datastore server, CPU cycles per request, or so? Does every call to em.persist(..) before em.close() trigger an RPC call to the datastore server? Thanks very much for any response! (A hedged JPA sketch follows this entry.)

    Read the article
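
    For reference, the usual plain-JPA batching pattern is to persist many entities inside a single transaction and flush/clear the persistence context periodically, as sketched below. Whether the provider (DataNucleus on GAE/J) turns this into actual datastore batch puts like pm.makePersistentAll() is not asserted here; the entity class, persistence unit name and batch size are illustrative.

        import java.util.List;
        import javax.persistence.Entity;
        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.Persistence;

        @Entity
        class MyEntity {
            @Id
            @GeneratedValue
            Long id;
            String value;
        }

        public class BatchWriter {
            private static final int BATCH_SIZE = 50;   // illustrative

            public static void saveAll(List<MyEntity> entities) {
                EntityManagerFactory emf =
                        Persistence.createEntityManagerFactory("transactions-optional");
                EntityManager em = emf.createEntityManager();
                try {
                    em.getTransaction().begin();
                    int i = 0;
                    for (MyEntity e : entities) {
                        em.persist(e);
                        if (++i % BATCH_SIZE == 0) {
                            em.flush();   // push pending inserts
                            em.clear();   // detach entities to keep memory bounded
                        }
                    }
                    em.getTransaction().commit();
                } finally {
                    if (em.getTransaction().isActive()) {
                        em.getTransaction().rollback();
                    }
                    em.close();
                    emf.close();
                }
            }
        }

    Note that the GAE datastore of that era limited a transaction to a single entity group, so for unrelated entities a transaction-per-chunk, or no explicit transaction at all (as in the question's flow), may be required there.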

  • Mesos slave not 'Running' multiple executors simultaneously

    - by user3084164
    I am using Mesos to distribute a bunch of tasks to different machines (mesos-slaves). Here is what happens:
    1. My scheduler gets resource offers and accepts them.
    2. Mesos stages multiple executors on the same mesos-slaves (each slave has 4 CPUs).
    3. Only ONE executor enters the 'Running' state on each of the slaves, while the others are shown in the 'Staging' state.
    4. Only after the current executor finishes execution does the next executor start running.
    Given that I have 4 CPUs on each machine, shouldn't each slave be running 4 executors simultaneously? Each executor requires 1 CPU.

    Read the article
