Search Results

Search found 7065 results on 283 pages for 'cpu sockets'.

Page 224/283 | < Previous Page | 220 221 222 223 224 225 226 227 228 229 230 231  | Next Page >

  • List of phones that will work with Eclipse?

    - by user1058647
    I need an Android phone to test my apps with that will work with Eclipse. It has to be low cost, run Gingerbread, and have modest memory and CPU. Thinking that any Android phone would work, I recently purchased a Virgin Mobile Chaser, but as it turns out, it cannot be seen by either Eclipse or adb (though Device Manager does see the phone). Another developer has had the identical problem with the Chaser. I could keep buying phones and see if they work, but that could be long and frustrating. I hope to find a "no contract" phone. Is there any list of phones that work with Eclipse? Does anyone know of any other Virgin Mobile phones that will work? Thanks, Gary

    Read the article

  • Storing a huge number of records in the classic ASP cache object is SLOW

    - by aspm
    We have some nasty legacy ASP that is performing like a dog, and I narrowed it down to the fact that we are trying to store 15K+ records in the application cache object. But that's not the killer: before storing the record set, the code converts the ADO stream to XML, and this conversion of the huge record set spikes the CPU and causes all kinds of havoc for users while it's happening. Unfortunately we also do this XML conversion every time we read the cache, causing site-wide performance problems. I don't have the resources to convert everything to .NET, so that's out. I obviously still need caching, but in this case the caching is hurting instead of helping. Is there a more efficient way to store this data instead of doing this XML conversion to/from the cache on every read/update?
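
    One possibility (a hedged sketch, not tested against the legacy app; the connection string, query, and field indices are placeholders): skip the XML round-trip entirely by calling GetRows on the ADO recordset and caching the resulting 2D variant array, which the Application object can hold directly. Reading the cache then becomes a plain array walk with no parsing at all.

    ```asp
    <%
    ' Minimal sketch: cache the recordset as a plain 2D array via GetRows,
    ' avoiding XML serialisation entirely. Names below are placeholders.
    Dim conn, rs, rows
    Set conn = Server.CreateObject("ADODB.Connection")
    conn.Open Application("connString")        ' assumed connection string
    Set rs = conn.Execute("SELECT id, name FROM tblRecords")

    rows = rs.GetRows()                        ' variant array: (field, row)
    rs.Close : conn.Close

    Application.Lock
    Application("cachedRows") = rows           ' arrays store fine here
    Application.Unlock

    ' Reading it back later is just an array walk - no parsing:
    Dim cached, i
    cached = Application("cachedRows")
    For i = 0 To UBound(cached, 2)
        Response.Write cached(1, i) & "<br>"   ' field 1 = name
    Next
    %>
    ```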

    Read the article

  • Very Different Execution Times of SQL Query in C# and SQL Server Management Studio

    - by Paul
    I have a simple SQL query that, when run from C#, takes over 30 seconds and then times out every time, whereas when run in SQL Server Management Studio it completes instantly. In the latter case, the query execution plan reveals nothing troubling, and the execution time is spread nicely through a few simple operations. I've run 'EXEC sp_who2' while the query is running from C#; it is listed as taking 29,000 milliseconds of CPU time and is not blocked by anything. I have no idea how to begin solving this. Does anyone have some insight? The query is: SELECT c.lngId, ... FROM tblCase c INNER JOIN tblCaseStatus s ON s.lngId = c.lngId INNER JOIN tblCaseStatusType t ON t.lngId = s.lngId INNER JOIN [Another Database]..tblCompany cm ON cm.lngId = cs.lngCompanyId WHERE t.lngId = 25 AND c.IsDeleted = 0 AND s.lngStatus = 1
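
    The classic culprit for fast-in-SSMS-slow-from-ADO.NET is a session SET option difference: SSMS runs with SET ARITHABORT ON while SqlClient defaults to OFF, so the two connections compile and cache separate plans, and the application's plan can be parameter-sniffed badly. A hedged experiment (the query fragment is abbreviated from the question):

    ```sql
    -- In SSMS: mimic the ADO.NET connection settings; if the query now
    -- crawls here too, it's a cached-plan issue, not the C# code.
    SET ARITHABORT OFF;
    -- ... run the original query ...

    -- One hedged workaround: request a fresh plan on every execution.
    SELECT c.lngId -- , ... rest of the original column list ...
    FROM tblCase c
    INNER JOIN tblCaseStatus s ON s.lngId = c.lngId
    INNER JOIN tblCaseStatusType t ON t.lngId = s.lngId
    WHERE t.lngId = 25 AND c.IsDeleted = 0 AND s.lngStatus = 1
    OPTION (RECOMPILE);
    ```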

    Read the article

  • Launch x64 Windows application in C# while the project is set to x86

    - by daydr3am3r
    Hi all. I'm trying to launch osk.exe and I keep getting a "Could not start osk" message. The problem is that my project is set to x86 (I'm using an MS Access database). If I switch to x64 or Any CPU everything works fine, but then the database will no longer work. I tried this: using System.Diagnostics; private void btnOSK_Click(object sender, EventArgs e) { Process.Start("osk.exe"); Process.Start(@"C:\windows\system32\osk.exe"); } I also tried to run SysWOW64\osk, but that didn't work either. Besides, my application should run on both x86 and x64 machines. Is there any way to bypass this? It's really frustrating.
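
    A hedged workaround sketch: on 64-bit Windows, a 32-bit process asking for System32 is silently redirected to SysWOW64, so the real 64-bit osk.exe has to be reached through the virtual Sysnative folder. The Is64Bit* properties below require .NET 4 (on older frameworks, check the PROCESSOR_ARCHITEW6432 environment variable instead); if Sysnative alone still fails for osk.exe, the heavier known alternative is P/Invoking Wow64DisableWow64FsRedirection around the launch.

    ```csharp
    using System;
    using System.Diagnostics;
    using System.IO;

    static class OskLauncher
    {
        // Sketch: from a 32-bit process on 64-bit Windows, System32 is
        // redirected to SysWOW64; the virtual "Sysnative" folder bypasses
        // that redirection and reaches the real 64-bit osk.exe.
        public static void StartOnScreenKeyboard()
        {
            string windir = Environment.GetEnvironmentVariable("windir");
            bool wow64 = Environment.Is64BitOperatingSystem
                         && !Environment.Is64BitProcess;
            string oskPath = Path.Combine(
                windir, wow64 ? "Sysnative" : "System32", "osk.exe");
            Process.Start(oskPath);
        }
    }
    ```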

    Read the article

  • pyOpenSSL and the WantReadError

    - by directedition
    I have a socket server that I am trying to move over to SSL on Python 2.5, but I've run into a snag with pyOpenSSL. I can't find any good tutorials on using it, so I'm operating largely on guesses. Here is how my server sets up the socket: ctx = SSL.Context(SSL.SSLv23_METHOD) ctx.use_privatekey_file("mykey.pem") ctx.use_certificate_file("mycert.pem") sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM)) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) addr = ('', int(8081)) sock.bind(addr) sock.listen(5) Here is how it accepts clients: sock.setblocking(0) while True: if len(select([sock], [], [], 0.25)[0]): client_sock, client_addr = sock.accept() client = ClientGen(client_sock) And here is how it sends/receives on the connected sockets: while True: (r, w, e) = select.select([sock], [sock], [], 0.25) if len(r): bytes = sock.recv(1024) if len(w): n_bytes = sock.send(self.message) It's compacted, but you get the general idea. The problem is, once the send/receive loop starts, it dies right away, before anything has been sent or received (that I can see, anyway): Traceback (most recent call last): File "ClientGen.py", line 50, in networkLoop n_bytes = sock.send(self.message) WantReadError The manual's description of WantReadError is very vague, saying it can come from just about anywhere. What am I doing wrong?
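
    For context: WantReadError is pyOpenSSL signalling that the non-blocking TLS layer needs more raw bytes from the wire before the operation can finish; it is a retry condition, not a failure, and it can surface on send() too (e.g. during a handshake or renegotiation). A hedged sketch of a retry wrapper:

    ```python
    from OpenSSL import SSL
    import select

    # Sketch: wrap every non-blocking SSL send/recv in a retry that treats
    # WantReadError/WantWriteError as "not ready yet", not as an error.
    def ssl_send(sock, data):
        while True:
            try:
                return sock.send(data)
            except SSL.WantReadError:
                select.select([sock], [], [])   # wait until readable, retry
            except SSL.WantWriteError:
                select.select([], [sock], [])   # wait until writable, retry

    def ssl_recv(sock, n):
        while True:
            try:
                return sock.recv(n)
            except SSL.WantReadError:
                select.select([sock], [], [])
            except SSL.WantWriteError:
                select.select([], [sock], [])
    ```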

    Read the article

  • Boost program not working on Linux

    - by Martin Lauridsen
    Hi SOF, I have this program which uses Boost.Asio for sockets. I pretty much adapted some code from the Boost examples. The program compiles and runs just as it should on Windows in VS. However, when I compile it on Linux and run it, I get a segmentation fault. I posted the code here. The command I use to compile it is: c++ -I/appl/htopopt/Linux_x86_64/NTL-5.4.2/include -I/appl/htopopt/Linux_x86_64/boost_1_43_0/include mpqs.cpp mpqs_polynomial.cpp mpqs_host.cpp -o mpqs_host -L/appl/htopopt/Linux_x86_64/NTL-5.4.2/lib -lntl -L/appl/htopopt/Linux_x86_64/gmp-4.2.1/lib -lgmp -lm -L/appl/htopopt/Linux_x86_64/boost_1_43_0/lib -lboost_system -lboost_thread -static -lpthread By commenting out code, I found that the segmentation fault comes from the following line: boost::asio::io_service io_service; Can anyone offer any assistance as to what the problem (and the solution) may be? Thanks! Edit: I tried reducing the program to a minimal example using no other libraries or headers, just boost/asio.hpp: #define DEBUG 0 #include <boost/asio.hpp> int main(int argc, char* argv[]) { boost::asio::io_service io_service; return 0; } I also removed the other library inclusions and linking from the compile command, but this minimal example still gives me a segmentation fault.
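
    A hedged guess from the flags shown: fully static linking (-static) combined with pthreads is a known cause of crashes in the boost::asio::io_service constructor, because the static link drops parts of libpthread that Boost.Asio initialisation relies on. Two experiments worth trying (include/library paths abbreviated to the Boost ones):

    ```sh
    # Experiment 1: drop -static and let everything link dynamically
    c++ mpqs.cpp mpqs_polynomial.cpp mpqs_host.cpp -o mpqs_host \
        -I/appl/htopopt/Linux_x86_64/boost_1_43_0/include \
        -L/appl/htopopt/Linux_x86_64/boost_1_43_0/lib \
        -lboost_system -lboost_thread -pthread

    # Experiment 2: if static linking is a hard requirement, force the
    # whole pthread archive in so no needed symbols are discarded
    c++ ... -static -pthread \
        -Wl,--whole-archive -lpthread -Wl,--no-whole-archive
    ```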

    Read the article

  • Simple "Hello World!" console application crashes when run by windows TaskScheduler (1.0)

    - by user326627
    I have a batch file that starts multiple instances of a simple console application (Hello World!). I work on Windows Server 2008 64-bit. I configured it to run in Task Scheduler, at startup, whether a user is logged in or not. The latter configuration means the instances run without a GUI (i.e., no window). When I run this task, some of the instances just fail after consuming 100% CPU. The application event log shows the following error: "Faulting module KERNEL32.dll, version 6.0.6002.18005, time stamp 0x49e0421d, exception code 0xc0000142, fault offset 0x00000000000b8fb8, process id 0x29bc, application start time 0x01cae17d94a61895." Running the batch file directly works just fine. It seems to me that the OS has a problem loading too many instances of the application when no window is displayed. However, I can't figure out why... Any ideas?
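
    Exception code 0xc0000142 ("DLL initialization failed") across many windowless processes is the classic signature of desktop heap exhaustion: the non-interactive session that Task Scheduler uses for "whether user is logged on or not" tasks gets a much smaller desktop heap than the interactive desktop, so instances that start fine from an interactive batch file run out of heap under the scheduler. A hedged mitigation is to enlarge the third SharedSection value (the non-interactive desktop heap, in KB) and reboot:

    ```text
    Key:   HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems\Windows
    Value: ...SharedSection=1024,20480,768...
                             |    |     |
                             |    |     +-- non-interactive desktop heap (KB);
                             |    |         try raising, e.g. to 2048
                             |    +-------- interactive desktop heap (KB)
                             +------------- system-wide shared heap (KB)
    ```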

    Read the article

  • How to implement HeightMap smoothing using Thrust

    - by igal k
    I'm trying to implement height-map smoothing using Thrust. Say I have a big array (around 1 million floats). A standard graphics algorithm for this is to replace each cell with the average over its neighbourhood. If, for example, I need to calculate the value of a given cell[i,j], I would basically do: cell[i,j] = (cell[i-1,j-1] + cell[i-1,j] + cell[i-1,j+1] + cell[i,j-1] + cell[i,j+1] + cell[i+1,j-1] + cell[i+1,j] + cell[i+1,j+1]) / 9 That's the CPU code. Is there a way to implement it using Thrust? I know that I could use the transform algorithm, but I'm not sure it's correct to access cells that are being touched by different threads (bank conflicts and so on).
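
    A hedged sketch of the transform approach: concurrent reads of the same cells are safe; the hazard is writing into the buffer you are still reading, which separate input and output vectors avoid (bank conflicts are a shared-memory concern and don't arise here). A counting_iterator supplies the linear index and a functor gathers the 3x3 neighbourhood:

    ```cpp
    #include <thrust/device_vector.h>
    #include <thrust/iterator/counting_iterator.h>
    #include <thrust/transform.h>

    // 3x3 box-filter functor: reads the input grid through a raw pointer,
    // writes each smoothed cell to a distinct output slot, so no thread
    // ever writes a cell that another thread reads.
    struct smooth3x3
    {
        const float* in;
        int w, h;
        smooth3x3(const float* in_, int w_, int h_) : in(in_), w(w_), h(h_) {}

        __host__ __device__
        float operator()(int idx) const
        {
            int i = idx / w, j = idx % w;
            float sum = 0.0f; int n = 0;
            for (int di = -1; di <= 1; ++di)
                for (int dj = -1; dj <= 1; ++dj) {
                    int ii = i + di, jj = j + dj;
                    if (ii >= 0 && ii < h && jj >= 0 && jj < w) {
                        sum += in[ii * w + jj]; ++n;
                    }
                }
            return sum / n;   // average over the valid neighbourhood
        }
    };

    int main()
    {
        const int w = 1024, h = 1024;                 // ~1M cells
        thrust::device_vector<float> in(w * h, 1.0f), out(w * h);
        thrust::counting_iterator<int> first(0), last(w * h);
        thrust::transform(first, last, out.begin(),
                          smooth3x3(thrust::raw_pointer_cast(in.data()), w, h));
        return 0;
    }
    ```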

    Read the article

  • Global variables not stable after platform change

    - by user350555
    Our embedded system is built on a hardware/software platform made by Enea. After the platform was updated recently, we found that some operations on global variables keep crashing the system. For example, we have a global map structure holding some data. We can insert into/iterate over the map once or twice, then the addresses of the elements in the map suddenly change to forbidden addresses like 0x0 or 0x1d, and the system just crashes. The only differences before/after the platform update are: 1) Software: it's C++, and we changed the compiler from Diab C++ to GCC. 2) Hardware: we have a new board, but the CPU is still a PowerPC 405. I've tried every possible way but still can't figure out the reason. Any thoughts?

    Read the article

  • OS monitoring using Java

    - by Puneri
    I'm planning to implement a framework for monitoring OS-level resources: processes, network stats, CPU info, etc., using Java. I see there is the SIGAR API (from Hyperic), which is implemented natively with a Java API provided on top. But I would prefer not to have native stuff in my framework; rather, for each OS I will write a Java class that fetches the required OS info by running system commands via the Java Runtime. So I would like inputs/suggestions on pitfalls others may have seen with doing this in pure Java instead of a native app/API/JNI. Any example will help, for sure. I agree each OS has different commands for these stats, but I'd still prefer one Java class per OS to loading native code.
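
    A minimal sketch of the command-per-OS approach (the probe commands are illustrative assumptions): select the command from os.name, run it with ProcessBuilder, and drain its output promptly, since an undrained stream is the classic way such probes deadlock. The other known pitfall is that output formats differ across OS versions and locales, so each per-OS class needs its own parser.

    ```java
    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Minimal sketch: shell out to an OS-specific probe and print its
    // output. The exact commands are illustrative assumptions; each OS
    // class would own its command and its parser.
    public class CpuProbe {
        public static void main(String[] args) throws Exception {
            String os = System.getProperty("os.name").toLowerCase();
            String[] cmd = os.contains("win")
                    ? new String[] {"cmd", "/c", "wmic cpu get loadpercentage"}
                    : new String[] {"sh", "-c", "cat /proc/loadavg"};
            Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    System.out.println(line);   // drain promptly: avoids deadlock
                }
            }
            p.waitFor();
        }
    }
    ```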

    Read the article

  • Amazon EC2 prices for a Windows instance?

    - by Abhishek Gupta
    Hello guys, I want to ask some Amazon cloud experts: is it cost-effective to deploy our web application on the Amazon cloud compared to a normal server? Currently there are micro, small, large, and other instance types available. If we start with a micro instance and then realize our app needs more CPU cycles and RAM, how can we move to the next, more powerful instance automatically at runtime? What is the approximate minimum yearly cost for a single EC2 Windows small instance? I want to deploy a simple online quiz application (ASP.NET-based) on the Amazon cloud, which will have a maximum of 500 users at a time. Please advise, as I'm very new to the cloud. Should I go for Azure or Amazon?

    Read the article

  • Why is there a difference between assembly languages on Windows and Linux?

    - by mcaaltuntas
    I am relatively new to all this low-level stuff and assembly language, and I want to learn in more detail. Why is there a difference between Linux and Windows assembly code? As I understand it, when I compile C code, the toolchain does not really produce pure machine or assembly code; it produces OS-dependent binary code. But why? For example, when I use an x86 system, the CPU only understands x86 assembly, am I right? So why don't we write pure x86 assembly code, and why are there different assembly variations based on the operating system? If we wrote pure assembly, or the OS produced pure assembly, there wouldn't be binary compatibility issues between operating systems, would there? I am really wondering about all the reasons behind this. Any detailed answer, article, or book would be great. Thanks.

    Read the article

  • What strategies are efficient to handle concurrent reads on heterogeneous multi-core architectures?

    - by fabrizioM
    I am tackling the challenge of using both the capabilities of an 8-core machine and a high-end GPU (Tesla 10). I have one big input file, one thread for each core, and one for the GPU. To be efficient, the GPU thread needs a big batch of lines from the input, while a CPU thread needs only one line to proceed (storing multiple lines in a temp buffer was slower). The file doesn't need to be read sequentially. I am using Boost. My strategy is to have a mutex on the input stream that each thread locks and unlocks. This is not optimal, because the GPU thread should have higher precedence when locking the mutex, being the fastest and most demanding consumer. I can come up with different solutions, but before rushing into an implementation I would like some guidelines. What approach do you use/recommend?
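
    One hedged alternative to a priority scheme (std::mutex is used for brevity; boost::mutex is interchangeable): let each consumer take as many lines as it wants per lock acquisition, so the GPU feeder amortises its locking by pulling big batches while CPU threads take one line at a time:

    ```cpp
    #include <fstream>
    #include <mutex>
    #include <string>
    #include <vector>

    // Sketch (an assumption, not the asker's code): one mutex guards the
    // shared input stream; each consumer grabs as many lines as it wants
    // per lock, so big consumers hold the lock rarely.
    class LineSource {
    public:
        explicit LineSource(const std::string& path) : in_(path) {}

        // Fetch up to n lines under one lock; returns fewer at EOF.
        std::vector<std::string> take(std::size_t n) {
            std::lock_guard<std::mutex> lock(mtx_);
            std::vector<std::string> batch;
            std::string line;
            while (batch.size() < n && std::getline(in_, line))
                batch.push_back(line);
            return batch;
        }

    private:
        std::mutex mtx_;
        std::ifstream in_;
    };
    // CPU threads call take(1); the GPU feeder calls take(4096), so it
    // contends for the lock rarely even without an explicit priority.
    ```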

    Read the article

  • Modifying an image with OpenGL?

    - by chmike
    I have a device to acquire X-ray images. Due to some technical constraints, the detector is made of heterogeneous pixel sizes and multiple tilted, partially overlapping tiles. The image is thus distorted. The detector geometry is known precisely. I need a function that converts these distorted images into a flat image with homogeneous pixel size. I have already done this on the CPU, but I would like to try OpenGL so as to use the GPU in a portable way. I have no experience with OpenGL programming, and most of the information I could find on the web was useless for this use case. How should I proceed? Image size is 560x860 pixels, and we have batches of 720 images to process. I'm on Ubuntu.
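
    One plausible approach, stated as an assumption rather than a recipe: bake the known detector geometry into a floating-point lookup texture that stores, for every corrected output pixel, the coordinate to sample in the raw image; then each image is undistorted by drawing one full-screen quad. A sketch of the fragment shader:

    ```glsl
    // Fragment shader sketch (assumed approach, not from the question):
    // undistort by sampling the raw acquisition through a precomputed
    // lookup texture holding, per output pixel, the source coordinate.
    uniform sampler2D rawImage;   // distorted 560x860 acquisition
    uniform sampler2D lookup;     // RG float texture: source UV per pixel
    varying vec2 uv;              // output coordinate from the vertex shader

    void main()
    {
        vec2 src = texture2D(lookup, uv).rg;   // where to read the raw image
        gl_FragColor = texture2D(rawImage, src);
    }
    ```

    The lookup texture is built once on the CPU from the known geometry; after that, each of the 720 images costs one texture upload and one quad draw.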

    Read the article

  • "Single NSMutableArray" vs. "Multiple C-arrays" --Which is more Efficient/Practical?

    - by RexOnRoids
    Situation: I have a DAY structure. The DAY structure has three attributes: a date (NSString*), a temperature (float), and a rainfall (float). Problem: I will be iterating through an array of about 5000 DAY structures and graphing a portion of them onto the screen using OpenGL. Question: as far as drawing performance goes, which is better? I could simply create an NSMutableArray of DAY objects (NSObjects) and iterate over the array on each draw call, which I think would be hard on the CPU. Or I could instead manually manage three C arrays: one for the date strings (2-dimensional), one for the temperatures (1-dimensional), and one for the rainfall (1-dimensional), keeping track of the current day by the shared index into the C arrays.
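
    For what it's worth, the three-C-arrays option is the classic structure-of-arrays layout; a minimal sketch (counts and date length are assumptions) showing how the parallel arrays can still travel as one unit, with the floats contiguous and ready to hand to OpenGL directly:

    ```c
    /* Minimal sketch (counts and date length are assumptions): the
     * structure-of-arrays layout keeps each attribute contiguous, so the
     * draw loop streams floats straight from memory. */
    #define DAY_COUNT 5000
    #define DATE_LEN  11                 /* "YYYY-MM-DD" + NUL */

    typedef struct {
        char  date[DAY_COUNT][DATE_LEN]; /* the "2-dimensional" array */
        float temperature[DAY_COUNT];
        float rainfall[DAY_COUNT];
    } DayData;
    ```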

    Read the article

  • write() in sys/uio.h returns -1

    - by fredrik
    I'm using Ubuntu Server 9.10, an AMD Phenom II CPU, and g++ (Ubuntu 4.4.1-4ubuntu9) 4.4.1, trying to run the application pftp-shit v1.11. The following code in tcp.cc executes successfully: int outfile_fd = open(name, O_CREAT | O_TRUNC | O_RDWR | O_BINARY) which returns a file descriptor int (in my case 6); name is a char array containing a valid path to my file, which is created successfully. Also running successfully: fchmod(outfile_fd, S_IRUSR | S_IWUSR); and access(name, W_OK). The issue occurs when running write(outfile_fd, this->control_buffer, read_length) (write() is declared in unistd.h; sys/uio.h declares the vectored variants), which returns -1. -1 is returned if nothing was written; otherwise a non-negative integer equal to the number of bytes written is returned. Anyone have a clue how I can get the write call to work?
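
    A hedged first step: write() sets errno on failure, and that value usually names the culprit outright (EBADF for a bad descriptor, EFAULT for a bad buffer pointer, ENOSPC for a full disk). A small wrapper, assuming nothing about the surrounding pftp-shit code:

    ```c
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Minimal sketch: when write() returns -1, errno says why; printing
     * it is the first debugging step before anything else. */
    ssize_t checked_write(int fd, const void *buf, size_t len)
    {
        ssize_t n = write(fd, buf, len);
        if (n == -1)
            fprintf(stderr, "write failed: %s (errno %d)\n",
                    strerror(errno), errno);
        return n;
    }
    ```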

    Read the article

  • Efficiently remove points with the same slope

    - by Ram
    Hi, in one of my applications I am dealing with graphics objects. I am using the open-source GPC library to clip/merge two shapes. To improve accuracy I am sampling (adding multiple points between two edges of) the existing shapes. But before displaying the merged shape back, I need to remove all the inserted points between two edges. I have not been able to find an efficient algorithm that removes all points lying between two edges with the same slope using minimal CPU. Currently all points are of type PointF. Any pointer on this would be a great help.
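
    One hedged approach: instead of comparing slopes (which needs division and breaks on vertical edges), test each point against its two neighbours with a cross product and keep only genuine corners; this is a single O(n) pass. A sketch assuming a closed polygon stored as PointF:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Drawing;

    static class PolygonThinner
    {
        // Sketch: keep a point only if it is a real corner. The cross
        // product of the two neighbouring edge vectors is ~zero exactly
        // when the three points are collinear - no slopes, no division,
        // no vertical-edge special case.
        public static List<PointF> RemoveCollinear(IList<PointF> pts)
        {
            const float eps = 1e-6f;
            var result = new List<PointF>();
            int n = pts.Count;
            for (int i = 0; i < n; i++)
            {
                PointF a = pts[(i + n - 1) % n];   // previous vertex
                PointF b = pts[i];                 // candidate
                PointF c = pts[(i + 1) % n];       // next vertex
                float cross = (b.X - a.X) * (c.Y - a.Y)
                            - (b.Y - a.Y) * (c.X - a.X);
                if (Math.Abs(cross) > eps)         // a real corner: keep it
                    result.Add(b);
            }
            return result;
        }
    }
    ```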

    Read the article

  • How to track IIS server performance

    - by Chris Brandsma
    I have a recurring issue where a customer calls up and complains that the web site is too slow. Specifically, if they are inactive for a short period of time and then go back to the site, there is a one-to-two-minute delay before the user sees a response (the standard browser is Firefox in this case). I have Perfmon up and running; CPU utilization is usually below 20% (single proc... don't ask). The database is humming along. And I'm pulling my hair out. So, what metrics/tools do you find useful when evaluating IIS performance?
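
    That delay-after-idle pattern is worth checking against the application pool's idle timeout: by default IIS 7 shuts a worker process down after 20 minutes without requests, so the next visitor pays the full application warm-up cost. A hedged check and fix with appcmd (the pool name is a placeholder):

    ```bat
    rem Show the pool's current idle timeout (default is 20 minutes)
    %windir%\system32\inetsrv\appcmd list apppool "MyAppPool" /text:processModel.idleTimeout

    rem Disable idle shutdown entirely (00:00:00 = never time out)
    %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /processModel.idleTimeout:00:00:00
    ```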

    Read the article

  • Darwin kernel architecture and OS X: 64-bit on a 32-bit kernel, how does this work?

    - by overscore
    OS X Lion (10.7) runs mostly 64-bit binaries, as reported by Activity Monitor. Given this, and the fact that my laptop runs a 32-bit version of EFI and thus also a 32-bit kernel, how does the architecture mixing work in general? Darwin Kernel Version 11.3.0: Thu Jan 12 18:48:32 PST 2012; root:xnu-1699.24.23~1/RELEASE_I386 Normally one would run 32-bit binaries on x86_64, but the other way around would require pushing the CPU into 64-bit mode, which AFAIK cannot be undone. Hope this question is clear enough.

    Read the article

  • Which would be better? Storing/access data in a local text file, or in a database?

    - by TerranRich
    Basically, I'm still working on a puzzle-related website (micro-site, really), and I'm making a tool that lets you input a word pattern (e.g. "r??n") and get all the matching words (in this case: rain, rein, ruin, etc.). Should I store the words in local text files (such as words5.txt, which would hold a newline-delimited list of 5-letter words), or in a database (such as a table Words5, again storing 5-letter words)? I'm looking at the problem in terms of data retrieval speed and server CPU load. I could definitely try it both ways and record the times taken for several runs with both methods, but I'd rather hear from people who have experience with this. Which method is generally better overall?
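
    For what it's worth, the ? pattern maps directly onto SQL's single-character wildcard, which makes the database option nearly free to prototype (table and column names below are assumptions):

    ```sql
    -- Assumed schema: one table per word length, one row per word
    CREATE TABLE Words4 (word CHAR(4) NOT NULL PRIMARY KEY);

    -- "r??n" becomes 'r__n': each ? maps to LIKE's single-char wildcard _
    SELECT word FROM Words4 WHERE word LIKE 'r__n';
    -- -> rain, rein, ruin, ...
    ```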

    Read the article

  • Controlling read and write access width to memory mapped registers in C

    - by srking
    I'm using an x86-based core to manipulate a 32-bit memory-mapped register. My hardware behaves correctly only if the CPU generates 32-bit-wide reads and writes to this register. The register is aligned on a 32-bit address and is not addressable at byte granularity. What can I do to guarantee that my C (or C99) compiler will generate only full 32-bit-wide reads and writes in all cases? For example, if I do a read-modify-write operation like this: volatile uint32_t* p_reg = (volatile uint32_t*)0xCAFE0000; *p_reg |= 0x01; I don't want the compiler to get smart about the fact that only the bottom byte changes and generate 8-bit-wide reads/writes. Since the machine code is often denser for 8-bit operations on x86, I'm afraid of unwanted optimizations. Disabling optimization in general is not an option.
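
    One common pattern, offered as a sketch rather than a guarantee (the C standard does not promise an access width, so the generated code should be checked, e.g. with objdump): funnel every register access through helpers that traffic in whole uint32_t values, so the |= never appears as a narrow memory operation in the source:

    ```c
    #include <stdint.h>

    /* Read and write helpers that take and return a full uint32_t, so
     * each access is a single 32-bit load or store at the C level. */
    static inline uint32_t reg_read32(volatile uint32_t *reg)
    {
        return *reg;
    }

    static inline void reg_write32(volatile uint32_t *reg, uint32_t val)
    {
        *reg = val;
    }

    /* Read-modify-write without inviting the compiler to narrow it. */
    void set_bit0(volatile uint32_t *reg)
    {
        reg_write32(reg, reg_read32(reg) | UINT32_C(0x01));
    }
    ```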

    Read the article

  • Varnish: invalidate URLs by regex from the backend

    - by ooouuiii
    Say I have a highly visited front page that displays the number of items per category. When an item is added or deleted, I need to invalidate this front page/URL and two others. What is the best practice for invalidating those URLs from the backend in Varnish (4.x)? From what I gathered, I can: implement an HTTP PURGE handler in the VCL configuration file that "bans" URLs matching a received regex, and send 3 HTTP PURGE requests from the backend for those 3 URLs. But is this approach safe for automatic use? Basically I need to invalidate some views every time a related entity is inserted/updated/deleted. Can it lead to ban-list accumulation and increasing CPU consumption? Is there any other approach? Thanks.
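
    A hedged VCL 4.0 sketch of the ban-from-backend pattern (the ACL and header name are assumptions). On the accumulation worry: bans whose test expression uses only obj.* attributes can be evaluated in the background by the ban lurker and cleaned out over time, whereas req.*-based bans like the one below stay on the list until a matching request tests them, so high churn does favour the lurker-friendly form.

    ```vcl
    vcl 4.0;

    acl invalidators { "127.0.0.1"; }   # assumed: the backend host(s)

    sub vcl_recv {
        if (req.method == "BAN") {
            if (!client.ip ~ invalidators) {
                return (synth(405, "Not allowed"));
            }
            # the regex arrives in an assumed header, e.g. X-Ban-Url: ^/(front|cat)/
            ban("req.http.host == " + req.http.host +
                " && req.url ~ " + req.http.X-Ban-Url);
            return (synth(200, "Ban added"));
        }
    }
    ```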

    Read the article

  • VS 2008 and F# Feb CTP - Can't debug

    - by Steve
    I have downloaded the VS 2008 integrated shell and the F# Feb CTP, and the F# environment works just fine. The problem comes when I try to debug: nothing happens at all. The output window says ------ Build started: Project: ConsoleApplication1, Configuration: Debug Any CPU ------ ========== Build: 1 succeeded or up-to-date, 0 failed, 0 skipped ========== and none of my breakpoints are hit. My "program" is as simple as can be: #light open System printfn "Hello World" Console.ReadKey(true) with breakpoints on the printfn and Console lines. The things I've read suggest that debugging should work with this setup, and there is a Debugger folder under Common7/Packages with files in it. Thanks for any help. EDIT: I'm on Win7 64-bit.

    Read the article

  • All things equal, what is the fastest way to output data to disk in C++?

    - by user260197
    I am running simulation code that is largely bound by CPU speed. I am not interested in pushing data out to a user interface, simply in saving it to disk as it is computed. What would be the fastest solution with the least overhead? iostreams? printf? I have previously read that printf is faster. Will this depend on my code, and is it impossible to get an answer without profiling? Edit: output data needs to be in text format, whether tab- or comma-separated, which requires formatting, precision, etc. Running on Windows.
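
    A hedged baseline worth profiling: whichever formatting API wins on a given compiler, giving stdio a large user-space buffer usually matters more, since it turns many small formatted writes into a few large sequential ones:

    ```cpp
    #include <cstdio>
    #include <vector>

    // Sketch: stdio with a 1 MiB buffer, so formatted rows reach the disk
    // in big sequential chunks. Profile against an equivalent ofstream.
    int main()
    {
        std::FILE* f = std::fopen("out.tsv", "w");
        if (!f) return 1;
        std::vector<char> buf(1 << 20);
        std::setvbuf(f, buf.data(), _IOFBF, buf.size());

        for (int i = 0; i < 1000000; ++i)
            std::fprintf(f, "%d\t%.6f\n", i, i * 0.001);

        std::fclose(f);   // flushes; must happen before buf is destroyed
        return 0;
    }
    ```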

    Read the article

  • What's your development setup? (Talking right now to my boss)

    - by Flinkman
    How do I tell my boss that I need endless CPU power to automate my daily job? By the way, what's your setup now, in September 2008? How fast are your disks? How much memory? How many cores? How big a screen? (OK, "what the hell are you doing?", you may ask. I'm working in multiple environments under VMware and have a couple of build systems running for compatibility tests. These build systems are automated, as is their setup. Is there another way?) Thanks!

    Read the article
