Search Results

Search found 6387 results on 256 pages for 'cpu allocation'.


  • extension file "curl" is must be loaded

    - by Sharvan
    Using XAMPP 1.6.7 I installed the community version of Magento, but there seems to be a problem: I am getting the error message 'extension file "curl" is must be loaded'. On another computer everything works fine. The other computer is an Intel(R) Pentium(R) Dual CPU E2140 @ 1.60 GHz with 504 MB of RAM, running XP Professional 2002 SP2. My computer is less powerful (Intel Pentium 4 1.6 GHz, also with SP2). Please help me, thanks.

    Read the article

  • why it is up to the compiler to decide what value to assign when assigning an out-of-range value to

    - by Allopen
    In C++ Primer, 4th edition, section 2.1.1, it says "when assigning an out-of-range value to a signed type, it is up to the compiler to decide what value to assign". I can't understand it. I mean, if you have code like "char sc = 299", certainly the compiler will generate asm code like "mov BYTE PTR _sc$[ebp], 43" (VC) or "movb $43, -2(%ebp)" (gcc+mingw); that IS decided by the compiler. But what if we assign a value that is given by user input, say via the command line? Then the generated asm is "movb %al, -1(%ebp)" (gcc+mingw) or "mov cl, BYTE PTR _i$[ebp] / mov BYTE PTR _sc$[ebp], cl" (VC), so now how can the compiler decide what will happen? I think now it is decided by the CPU. Can you give me a clear explanation? (A small sketch of both cases follows below.)

    Read the article
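
    Not part of the original question; a minimal C++ sketch of the two cases, assuming a typical two's-complement implementation. The book's point is that the result of the narrowing conversion is implementation-defined; most compilers simply keep the low 8 bits (299 mod 256 = 43), and for a runtime value they emit instructions that apply the same rule, so the "decision" is still the implementation's even though the CPU executes it.

        #include <iostream>

        int main() {
            // Constant case: the compiler applies its conversion rule at compile time.
            signed char sc = 299;          // implementation-defined; commonly 299 - 256 = 43

            // Runtime case: the instructions the compiler emitted apply the same rule,
            // whatever value arrives (imagine it came from the command line).
            int user_input = 299;
            signed char sc2 = user_input;  // same implementation-defined conversion

            std::cout << static_cast<int>(sc) << ' ' << static_cast<int>(sc2) << '\n';
            return 0;
        }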

  • Why exactly is server side HTML rendering faster than client side?

    - by mvbl fst
    I am working on a large web site, and we're moving a lot of functionality to the client side (Require.js, Backbone and Handlebars stack). There are even discussions about possibly moving all rendering to the client side. But reading some articles, especially ones about Twitter moving away from client-side rendering, which mention that server side is faster / more reliable, I begin to have questions. I don't understand how rendering fairly simple HTML widgets in JS from JSON and templates, in a contemporary browser on a dual-core CPU with 4-8 GB RAM, is any slower than making dozens of includes in your server-side app. Are there any actual real-life benchmarking figures regarding this? Also, it seems like parsing HTML templates with server-side templating engines can't be any faster than rendering the same HTML from a Handlebars template, especially if it is a precompiled JS function?

    Read the article

  • How to read system information in C++ on Windows and Linux?

    - by f4
    I need to read system information like CPU/RAM/disk usage in C++. Maybe swap, network and processes too, but that's less important. It has probably been done thousands of times before, so I first tried to search for a library. Someone here suggested SIGAR, which seems to fit my needs, but it has a GPL license and this is for inclusion in a proprietary product, so it's not an option here. I feel like it's not that easy to implement, as it will need testing on several platforms, so a library would be welcome. If you don't know of any library, could you point me in the right direction for both platforms? (A rough sketch of the kind of per-platform code involved follows below.)

    Read the article
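
    Not from the original thread; a minimal sketch (assuming Linux/glibc and the Win32 API) of the kind of per-platform code a library would wrap, here just for total/free RAM. CPU and disk usage need more of the same: /proc/stat deltas and statvfs() on Linux, PDH counters and GetDiskFreeSpaceEx() on Windows.

        #ifdef _WIN32
          #include <windows.h>
        #else
          #include <sys/sysinfo.h>   // Linux-specific
        #endif
        #include <cstdint>
        #include <iostream>

        int main() {
        #ifdef _WIN32
            MEMORYSTATUSEX ms{};
            ms.dwLength = sizeof(ms);
            GlobalMemoryStatusEx(&ms);
            std::cout << "RAM total: " << ms.ullTotalPhys
                      << " free: "    << ms.ullAvailPhys << '\n';
        #else
            struct sysinfo si{};
            sysinfo(&si);
            std::cout << "RAM total: " << static_cast<std::uint64_t>(si.totalram) * si.mem_unit
                      << " free: "    << static_cast<std::uint64_t>(si.freeram)  * si.mem_unit << '\n';
        #endif
            return 0;
        }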

  • How to optimize indexing of large number of DB records using Zend_Lucene and Zend_Paginator

    - by jdichev
    So I have this cron script that is deployed and run using cron on a host and indexes all the records in a database table - the index is later used both for the front end of the site and for back-end operations. After the operation, the index is about 3-4 MB. The problem is it takes a lot of resources (a CPU load of 30+ and a good chunk of memory) and slows the machine down. My question is about how to optimize the operation described below: first, a select query is built using the Zend Framework API; this query is then passed to a Paginator factory that returns a paginator, which I am using to bound the number of items being indexed at a time and not iterate over too many items. The script iterates over the current items in the paginator object using a foreach loop until reaching the end, then starts again after getting the items for the next page. I suspect this overhead is caused by Zend_Lucene, but I have no idea how this could be improved.

    Read the article

  • Faking a Single Address Space

    - by dsimcha
    I have a large scientific computing task that parallelizes very well with SMP, but at too fine grained a level to be easily parallelized via explicit message passing. I'd like to parallelize it across address spaces and physical machines. Is it feasible to create a scheduler that would parallelize already multithreaded code across multiple physical computers under the following conditions: The code is already multithreaded and can scale pretty well on SMP configurations. The fact that not all of the threads are running in the same address space or on the same physical machine must be transparent to the program, even if this comes at a significant performance penalty in some use cases. You may assume that all of the physical machines involved are running operating systems and CPU architectures that are binary compatible. Things like locks and atomic operations may be slow (having network latency to deal with and all) but must "just work".

    Read the article

  • MS DOS function like ipconfig to get system performance specs?

    - by JustADude
    I am aware of MSINFO32, but I'm wondering if there is an MS-DOS-style command, similar to ipconfig, for getting system specifications. I would like the system specifications to be displayed at the MS-DOS prompt, and I would like to see at least: CPU, RAM, bus speed. Thanks for any insights. Edit: I am unable to install any other software, so I have to use existing built-in commands to extract this information. Thank you again. 2nd Edit: Whoops. Using Windows XP and Windows Vista. (A couple of candidate commands are listed below.)

    Read the article
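
    Not from the original thread, but on XP Professional and Vista the built-in console tools below should cover it (wmic is not included in XP Home, and the exact WMI fields available vary by machine):

        systeminfo
        wmic cpu get Name,MaxClockSpeed,ExtClock
        wmic memorychip get Capacity,Speed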

  • Endless saving of CoreData Context

    - by Robert
    Sometimes I notice that a 'save:' operation on a ManagedObjectContext never returns and consumes 100% CPU. I'm using an SQL store in a garbage-collected environment (Mac OS X 10.6.3). Disk activity shows about 700 KB/s of writing. While watching the folder that contains the sqlite database file, the "-journal" file appears and disappears, appears and disappears, ... This is part of the call graph from the process analysis:

        2203 -[NSManagedObjectContext save:]
        1899 -[NSPersistentStoreCoordinator(_NSInternalMethods) executeRequest:withContext:]
        1836 -[NSSQLCore executeRequest:withContext:]
        1836 -[NSSQLCore saveChanges:]
        1479 -[NSSQLCore performChanges] ...
        335 -[NSSQLCore recordChangesInContext:] ...
        20 -[NSSQLCore rollbackChanges] ...
        2 -[NSSQLCore prepareForSave:] ...
        62 -[NSPersistentStoreCoordinator(_NSInternalMethods) _checkRequestForStore:originalRequest:andOptimisticLocking:] ...
        1 -[NSPersistentStore(_NSInternalMethods) _preflightCrossCheck] ...
        184 -[NSMergePolicy resolveConflicts:] ...
        120 -[NSManagedObjectContext(_NSInternalChangeProcessing) _prepareForPushChanges:] ...

    Everything is happening in the main GUI thread. Any ideas what I can do to resolve the problem?

    Read the article

  • How can two programs talk to each other in Java?

    - by Arnon
    I want to reduce the CPU usage/ROM usage/RAM usage - generally, all system resources that my app uses - who doesn't? :) For this reason I want to split the preferences window off from the rest of the application, and let the preferences window run as an independent program. The preferences program should write to a Property file (not a problem at all) and send an "update signal" to the main program - which means it should call the update method (that I wrote) found in the Main class. How can I call the update method in the Main program from the preferences program? To put it another way, is there a way to build a preferences window that takes system resources only while the window is shown? Is this approach - separating the programs and letting them talk to each other (somehow) - the right approach for speeding up my programs?

    Read the article

  • android thread management onPause

    - by Kwan Cheng
    I have a class that extends the Thread class and has its run method implemented as so:

        public void run() {
            while (!terminate) {
                if (paused) {
                    Thread.yield();
                } else {
                    accummulator++;
                }
            }
        }

    This thread is spawned from the onCreate method. When my UI is hidden (when the Home key is pressed), my onPause method sets the paused flag to true and yields the thread. However, in DDMS I still see the utime of the thread accumulate and its state as "running". So my question is: what is the proper way to stop the thread so that it does not use up CPU time?

    Read the article

  • How do I get MSBuild Task to generate XML Documents when building a solution?

    - by toba303
    I have a solution with lots of projects. Each project is configured to generate the XML documentation file when building in Debug mode (which is the default). That works when I build in Visual Studio 2008. In my build script on my integration server I tell MSBuild to build the whole solution, but it won't generate the documentation files. What can I do? I already tried to explicitly pass the Debug configuration to the build process, but it makes no difference:

        <Target Name="BuilSolution">
          <MSBuild Projects="C:\Path\To\MySolution.sln"
                   targets="Build"
                   Properties="SolutionConfigurationPlatforms='Debug|Any CPU'" />
        </Target>

    There seem to be some ideas for solving this when building single projects, but I can't afford to do that, so I need a hint for doing it this way. Thanks in advance! (A possible fix is sketched below.)

    Read the article
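
    An editorial guess, not from the original thread: MSBuild has no SolutionConfigurationPlatforms property, so the snippet above most likely builds the solution's default configuration, whose project settings may not have DocumentationFile enabled. Solution builds are normally driven by the Configuration and Platform properties, so something like the following should pick the Debug|Any CPU solution configuration (assuming each project's Debug settings have XML documentation turned on):

        <MSBuild Projects="C:\Path\To\MySolution.sln"
                 Targets="Build"
                 Properties="Configuration=Debug;Platform=Any CPU" />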

  • Darwin kernel architecture and OS X, 64bit on 32bit kernel, how does this work?

    - by overscore
    OS X Lion (10.7) runs mostly 64-bit binaries, as reported by Activity Monitor. Given this, and the fact that my laptop runs a 32-bit version of the EFI and thus also a 32-bit kernel, how does the mixing of architectures work in general?

        Darwin Kernel Version 11.3.0: Thu Jan 12 18:48:32 PST 2012; root:xnu-1699.24.23~1/RELEASE_I386

    Normally one would run 32-bit binaries on x86_64, but the other way around would require pushing the CPU into 64-bit mode, which AFAIK cannot be undone. Hope this question is clear enough.

    Read the article

  • openoffice document (odt) to PDF with command line on Linux?

    - by Data-Base
    Hi, we are building a PHP script that we need at work to create reports as PDFs. The reports will be created by using templates and data from PostgreSQL. So far I have found that it can be done with PHP and odt (OpenOffice) files [http://www.odtphp.com/] (do you have any other suggestions?). Now, how can I convert the results to PDF, so teachers will get the final reports as PDF? Any tips? The server has no GUI and I want to make it as simple as possible. We tried going from PHP to PDF directly with FPDF [http://www.fpdf.org/] but it is really a CPU killer! Cheers. (A couple of command-line options are sketched below.)

    Read the article
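
    Not from the original thread; two common command-line routes (report.odt and the output directory are placeholder names). unoconv drives a headless OpenOffice/LibreOffice instance over UNO, and newer LibreOffice builds can also convert directly:

        unoconv -f pdf report.odt
        libreoffice --headless --convert-to pdf --outdir /path/to/reports report.odt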

  • What is the cost of memory access?

    - by Jurily
    We like to think that a memory access is fast and constant, but on modern architectures/OSes, that's not necessarily true. Consider the following C code:

        int i = 34;
        int *p = &i;

        // do something that may or may not involve i and p
        {...}

        // 3 days later:
        *p = 643;

    What is the estimated cost of this last assignment in CPU instructions, if i is in L1 cache; if i is in L2 cache; if i is in L3 cache; if i is in RAM proper; if i is paged out to an SSD; if i is paged out to a traditional disk? Where else can i be? Of course the numbers are not absolute, but I'm only interested in orders of magnitude. I tried searching the webs, but Google did not bless me this time. (Some rough figures are listed below.)

    Read the article
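
    Not from the original thread; rough orders of magnitude for that single store (they vary a lot by hardware, so treat them as ballpark figures only):

        L1 hit:                a few cycles (~1 ns)
        L2 hit:                ~10 cycles
        L3 hit:                ~30-50 cycles
        RAM proper:            ~100-300 cycles (roughly 60-100 ns)
        paged out to an SSD:   page fault plus I/O, on the order of 10^5 cycles (tens of microseconds or more)
        paged out to a disk:   dominated by the seek, on the order of 10^7 cycles (several milliseconds)

    Other places i can end up include another NUMA node's RAM or swap that lives across the network.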

  • Take advantage of multiple cores executing SQL statements

    - by willvv
    I have a small application that reads XML files and inserts the information into a SQL DB. There are ~300,000 files to import, each one with ~1,000 records. I started the application on 20% of the files and it has been running for 18 hours now; I hope I can improve this time for the rest of the files. I'm not using a multi-threaded approach, but since the computer I'm running the process on has 4 cores, I was thinking of doing so to get some improvement in performance (although I guess the main problem is the I/O and not only the processing). I was thinking of using the BeginExecuteNonQuery() method on the SqlCommand object I create for each insertion, but I don't know if I should limit the maximum number of simultaneous threads (nor do I know how to do it). What's your advice for getting the best CPU utilization? Thanks

    Read the article

  • How do I declare a C# Web User Control but stop it from initializing?

    - by Scott Stafford
    I have a C#/ASP.NET .aspx page that declares two controls, each of which represents the content of one tab. I want a query string argument (e.g., ?tab=1) to determine which of the two controls is activated. My problem is, they both go through the initialization events and populate their child controls, wasting CPU resources and slowing the response time. Is it possible to deactivate them somehow so they don't go through any initialization? My .aspx page looks like this:

        <% if (TabId == 0) { %>
            <my:usercontroltabone id="ctrl1" runat="server" />
        <% } else if (TabId == 1) { %>
            <my:usercontroltabtwo id="ctrl2" runat="server" />
        <% } %>

    And that part works fine. I assumed that the <% %> blocks would mean the inactive control isn't actually declared and so wouldn't initialize, but that isn't so...

    Read the article

  • Disable update on battery percentage

    - by Kris B
    I have a service that performs background updates. I want to give the user the option to disable the updates when the battery percentage reaches a certain level. From my research, I'm going to register a receiver in the onCreate method of my Service class, e.g.:

        public class MainService extends Service {
            @Override
            public void onCreate() {
                this.registerReceiver(this.BatInfoReceiver,
                        new IntentFilter(Intent.ACTION_BATTERY_CHANGED));
            }

            private BroadcastReceiver BatInfoReceiver = new BroadcastReceiver() {
                @Override
                public void onReceive(Context arg0, Intent intent) {
                    int level = intent.getIntExtra("level", 0);
                }
            };
        }

    I'm assuming the best practice is to leave the service running, check the battery level in the service, and skip the CPU-intensive code based on the percentage? I shouldn't actually stop the service itself and start it up again based on the battery percentage?

    Read the article

  • Is there a way to determine the current Outlook activity level?

    - by dlittau
    I am working on an Outlook 2007 Add-in in C# (VS2008) and we want to send some background email items only when Outlook is not busy doing something else. Is there a .net way to see if Outlook is busy doing other things (perhaps due to other add-ins or the like taking up CPU cycles)? Alternately, is there a way to send emails such that they never appear in the Outbox? We need a method that would not require any additional software be installed. Thanks for any input.

    Read the article

  • C/C++ function definitions without assembly

    - by Jack
    Hi, I always thought that functions like printf() are, at the lowest level, defined using inline assembly; that buried deep in stdio.h is some asm code that actually tells the CPU what to do - something like in DOS, first mov the beginning of the string to some memory location or register and then call some int. But since the x64 version of Visual Studio doesn't support the inline assembler at all, it made me think that there are really no assembler-defined functions in C/C++. So, please, how is, for example, printf() defined in C/C++ without using assembler code? What actually executes the right software interrupt? Thanks. (A small POSIX-flavoured sketch follows below.)

    Read the article
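
    Not from the original thread; a minimal sketch assuming a POSIX system. printf's formatting is ordinary C code inside the C library; the only low-level piece is the system-call wrapper (write() here, WriteFile/NtWriteFile on Windows), and the few instructions that actually trap into the kernel live inside libc, not in stdio.h.

        // Formatting is plain code; the "magic" is just the write() syscall wrapper.
        #include <unistd.h>   // write, STDOUT_FILENO
        #include <cstring>
        #include <cstdio>

        int my_puts(const char* s) {
            // printf ultimately funnels its formatted buffer into something like this:
            return static_cast<int>(write(STDOUT_FILENO, s, std::strlen(s)));
        }

        int main() {
            my_puts("hello from a plain syscall wrapper\n");
            std::printf("and hello from printf itself\n");
            return 0;
        }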

  • MPI_Bsend and MPI_Isend. How do they work ?

    - by GBBL
    Hi, I was wondering how buffered send and non-blocking send work, and whether they introduce a new level of parallelism into my application, possibly by spawning a thread. Imagine that a slave process generates a large amount of data and wants to send it to the master. My idea was to start a buffered or non-blocking send and then immediately begin to compute the next result; only when I have to send the new data would I check whether I can reuse the buffer. This would introduce a new level of parallelism in my application, between CPU and communication. Does anybody know how this is done in MPI? Does MPI spawn a new thread to handle the Bsend or Isend? Thanks. (A sketch of the Isend/compute overlap follows below.)

    Read the article
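
    Not from the original thread; a sketch of the Isend-plus-compute overlap described above, using the standard MPI C API from C++. Whether the transfer really progresses in the background (helper thread, hardware offload, or only inside later MPI calls) is up to the MPI implementation; MPI_Bsend instead copies the message into a buffer you attach with MPI_Buffer_attach and returns immediately. Run with something like "mpirun -np 2 ./overlap".

        #include <mpi.h>
        #include <cstddef>
        #include <vector>

        // Stand-in for the real computation that produces the next chunk of results.
        static void produce(std::vector<double>& buf) {
            for (std::size_t i = 0; i < buf.size(); ++i) buf[i] = static_cast<double>(i);
        }

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank = 0;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            const int steps = 10;
            const std::size_t n = 1 << 20;

            if (rank == 0) {                        // master: just drain the results
                std::vector<double> incoming(n);
                for (int s = 0; s < steps; ++s)
                    MPI_Recv(incoming.data(), static_cast<int>(n), MPI_DOUBLE,
                             1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {                 // slave: overlap compute with the send
                std::vector<double> sending(n), computing(n);
                MPI_Request req = MPI_REQUEST_NULL;
                for (int s = 0; s < steps; ++s) {
                    produce(computing);             // compute while the previous Isend may still be in flight
                    MPI_Wait(&req, MPI_STATUS_IGNORE);  // make sure the send buffer is reusable
                    sending.swap(computing);
                    MPI_Isend(sending.data(), static_cast<int>(n), MPI_DOUBLE,
                              0, 0, MPI_COMM_WORLD, &req);
                }
                MPI_Wait(&req, MPI_STATUS_IGNORE);
            }
            MPI_Finalize();
            return 0;
        }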

  • CUDA Global Memory, Where is it?

    - by gamerx
    I understand that in CUDA's memory hierarchy we have things like shared memory, texture memory, constant memory, registers and, of course, the global memory which we allocate using cudaMalloc(). I've been searching through whatever documentation I can find, but I have yet to come across any that explicitly explains what the global memory is. I believe that the global memory allocated is in the GDDR on the graphics card itself, and not in the RAM that is shared with the CPU, since one of the documents did state that the pointer cannot be dereferenced on the host side. Am I right? (A small host-side sketch follows below.)

    Read the article
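
    Not from the original thread; a small host-side sketch (plain C++ against the CUDA runtime API) of the usual answer for discrete cards: cudaMalloc() returns a pointer into the device's own DRAM ("global memory"), which the host may only pass to cudaMemcpy() and kernel launches, never dereference (mapped/pinned host memory is the separate zero-copy case).

        #include <cuda_runtime.h>
        #include <cstddef>
        #include <cstdio>
        #include <vector>

        int main() {
            const std::size_t n = 1 << 20;
            std::vector<float> h(n, 1.0f);

            float* d_buf = nullptr;
            cudaMalloc((void**)&d_buf, n * sizeof(float));   // allocated in device DRAM
            cudaMemcpy(d_buf, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);
            // *d_buf here would be invalid: the address is only meaningful on the device.
            cudaMemcpy(h.data(), d_buf, n * sizeof(float), cudaMemcpyDeviceToHost);
            cudaFree(d_buf);

            std::printf("round trip ok: %f\n", h[0]);
            return 0;
        }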

  • Through Java, make a call to Javascript functions on networked device?

    - by stjowa
    I am doing device monitoring on a networked system. I need to know how to make JavaScript calls on that device, via its IP address, to get certain status information (this device's status is only available through JavaScript APIs, not SNMP, etc.). I am working in Java. ADDED: The specific device is an Amino set-top box. It has what it calls JMACX: JavaScript Media Access Control Extensions API specification. Within an HTML document, that API gives you a lot of information about the device (CPU usage, channel info, remote-control options, etc.). I need to get this information within a Java program for monitoring purposes. Perhaps this is possible with HTTP requests? Any input would be greatly appreciated. Thanks, Steve

    Read the article

  • Why does a data race occur when using threading, but not when using gevent?

    - by onlytiancai
    My test code is as follows. Using threading, count is not 5,000,000, so there has been a data race; but using gevent, count is 5,000,000 and there was no data race. Does gevent's coroutine execution make "count += 1" atomic, rather than splitting it into more than one CPU instruction?

        # -*- coding: utf-8 -*-
        import threading

        use_gevent = True
        use_debug = False
        cycles_count = 100 * 10000

        if use_gevent:
            from gevent import monkey
            monkey.patch_thread()

        count = 0

        class Counter(threading.Thread):
            def __init__(self, name):
                self.thread_name = name
                super(Counter, self).__init__(name=name)

            def run(self):
                global count
                for i in xrange(cycles_count):
                    if use_debug:
                        print '%s:%s' % (self.thread_name, count)
                    count = count + 1

        counters = [Counter('thread:%s' % i) for i in range(5)]
        for counter in counters:
            counter.start()
        for counter in counters:
            counter.join()
        print 'count=%s' % count

    Read the article

  • optimizing oracle query

    - by deming
    I'm having a hard time wrapping my head around this query. It is taking almost 200+ seconds to execute. I've pasted the execution plan as well.

        SELECT user_id, ROLE_ID, effective_from_date, effective_to_date, participant_code, ACTIVE
        FROM   CMP_USER_ROLE E
        WHERE  ACTIVE = 0
        AND    (SYSDATE BETWEEN effective_from_date AND effective_to_date
                OR TO_CHAR(effective_to_date, 'YYYY-Q') = '2010-2')
        AND    participant_code = 'NY005'
        AND    NOT EXISTS (
                   SELECT 1
                   FROM   CMP_USER_ROLE r
                   WHERE  r.USER_ID = E.USER_ID
                   AND    r.role_id = E.role_id
                   AND    r.ACTIVE = 4
                   AND    E.effective_to_date <= (SELECT MAX(last_update_date)
                                                  FROM   CMP_USER_ROLE S
                                                  WHERE  S.role_id = r.role_id
                                                  AND    S.role_id = r.role_id
                                                  AND    S.ACTIVE = 4))

    Explain plan:

        ---------------------------------------------------------------------------------------------
        | Id | Operation                    | Name             | Rows | Bytes | Cost (%CPU)| Time     |
        ---------------------------------------------------------------------------------------------
        |  0 | SELECT STATEMENT             |                  |    1 |    37 |    154 (2)| 00:00:02 |
        |* 1 | FILTER                       |                  |      |       |           |          |
        |* 2 | TABLE ACCESS BY INDEX ROWID  | USER_ROLE        |    1 |    37 |     30 (0)| 00:00:01 |
        |* 3 | INDEX RANGE SCAN             | N_USER_ROLE_IDX6 |   27 |       |      3 (0)| 00:00:01 |
        |* 4 | FILTER                       |                  |      |       |           |          |
        |  5 | HASH GROUP BY                |                  |    1 |    47 |    124 (2)| 00:00:02 |
        |* 6 | TABLE ACCESS BY INDEX ROWID  | USER_ROLE        |  159 |  3339 |    119 (1)| 00:00:02 |
        |  7 | NESTED LOOPS                 |                  |   11 |   517 |    123 (1)| 00:00:02 |
        |* 8 | TABLE ACCESS BY INDEX ROWID  | USER_ROLE        |    1 |    26 |      4 (0)| 00:00:01 |
        |* 9 | INDEX RANGE SCAN             | N_USER_ROLE_IDX5 |    1 |       |      3 (0)| 00:00:01 |
        |*10 | INDEX RANGE SCAN             | N_USER_ROLE_IDX2 |  957 |       |     74 (2)| 00:00:01 |
        ---------------------------------------------------------------------------------------------

    Read the article

  • SqlAlchemy hangs after adding record in MS SQL

    - by Patrick
    I'm running SQLAlchemy on Jython and trying to connect to a MS SQL database using jTDS with Windows authentication. I can query and delete just fine, but when I try to insert new values it hangs when I commit:

        print 'before add'
        session.add(newVal)
        print 'after add'
        session.commit()
        print 'after commit'

    I see the first two print statements but not the last. My CPU maxes out and I can't even query the table directly using MS SQL Management Studio. When I kill the Jython java process I can query again, but the new values haven't been added. Strangely enough, I can insert values directly using an SQL command:

        insert_sql = "INSERT INTO my_table (my_value) VALUES ('test_value')"
        session.execute(insert_sql)
        session.commit()

    Any ideas what I'm doing wrong?

    Read the article
