Search Results

Search found 8647 results on 346 pages for 'intel cpu'.

Page 273/346

  • Cocoa Screensaver Framework error message

    - by Veljko Skarich
    Hi, I'm trying to make a screen saver using the Cocoa ScreenSaver framework. The project builds fine and generates the .saver file, but when I try to run it in the System Preferences test window, it displays the error message: "You cannot use the screen saver with this version of Mac OS X. Please contact the vendor to get a newer version of the screen saver." I have the Xcode build settings at Release | x86_64, and I am running OS X 10.6.6 on a 2.4 GHz Intel Core i5 MacBook Pro. I've searched around online, and most of the solutions to this error message seem to come down to making sure the build is 64-bit, which the x86_64 setting should indeed take care of. I am trying to play a QuickTime movie in the screensaver, if that makes any difference. I am at a loss; any help would be appreciated. Thank you.

  • running Hadoop software on office computers (when they are idle)

    - by Shahbaz
    Is there a project that helps set up a Hadoop cluster on office desktops while they are idle? I'd like to experiment with Hadoop/MapReduce/HBase but don't have access to 5-10 computers. The computers at work are idle after hours and are connected to each other through a very high-speed network. What's more, data on these computers stays within our network, so there is no privacy issue. For this to work I need a fairly lightweight monitor running on each machine: when the computer has been idle for X hours, it joins the cluster; if the user logs on, it has to drop out of the cluster and return all CPU and memory. Does something like this exist?
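
    A rough sketch of what such a per-machine monitor could look like, assuming a Linux desktop and a classic Hadoop 1.x layout; the idle-time probe (xprintidle), the install path and the daemon names are placeholders, not anything from the question:

        import subprocess
        import time

        IDLE_THRESHOLD = 2 * 60 * 60                        # "idle for X hours"; X = 2 here
        HADOOP_DAEMON = "/opt/hadoop/bin/hadoop-daemon.sh"  # hypothetical install path

        def seconds_idle():
            # Placeholder probe: xprintidle prints the X11 idle time in milliseconds.
            return int(subprocess.check_output(["xprintidle"])) // 1000

        def set_cluster_membership(join):
            # Start the worker daemons when joining; stop them (returning CPU/RAM) when a user is back.
            action = "start" if join else "stop"
            subprocess.call([HADOOP_DAEMON, action, "tasktracker"])
            subprocess.call([HADOOP_DAEMON, action, "datanode"])

        joined = False
        while True:
            idle = seconds_idle() >= IDLE_THRESHOLD
            if idle != joined:
                set_cluster_membership(idle)
                joined = idle
            time.sleep(60)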

  • Simple "Hello World!" console application crashes when run by windows TaskScheduler (1.0)

    - by user326627
    I have a batch file that starts multiple instances of a simple console application (Hello World!). I work on Windows Server 2008 64-bit. I configured the batch file to run via Task Scheduler, at startup, whether a user is logged in or not. The latter setting means the instances run without a GUI (i.e., no window). When I run this task, some of the instances just fail after consuming 100% CPU. The application event log shows the following error: "Faulting module KERNEL32.dll, version 6.0.6002.18005, time stamp 0x49e0421d, exception code 0xc0000142, fault offset 0x00000000000b8fb8, process id 0x29bc, application start time 0x01cae17d94a61895." Running the batch file directly works just fine. It seems to me that the OS has a problem loading this many instances of the application when no window is displayed, but I can't figure out why. Any ideas?

  • Progressbar: Force element.innerHTML update before javascript sort call

    - by maras
    Hi, what is the best practice for this scenario?

    1) The user clicks "Sort huge javascript array".
    2) The browser shows "Sorting..." through element.innerHTML = "Sorting...".
    3) The browser sorts the huge javascript array (100% CPU for several seconds) while displaying the "Sorting..." message.
    4) The browser shows the result.

    Pseudo code:

        <a href="#" onclick="sortHugeArray(); return false">Sort huge array</a>

        function sortHugeArray() {
            document.getElementById("progress").innerHTML = "Sorting...";
            // ...do huge sort...
            // ...render result...
            document.getElementById("progress").innerHTML = result;
        }

    When I do it this way, the browser never shows "Sorting..."; it freezes for several seconds and then shows the result, without the user ever seeing the progress message. Thank you for any advice.

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets covered a lot, but I just couldn't find what I (think) I want. A lot of tutorials, and even SO questions, have similar tips, generally covering:

      - Use GL face culling (the OpenGL function, not the scene logic).
      - Send only one matrix to the GPU (the combined projection-model-view matrix), reducing the MVP calculation from per vertex to once per model (as it should be).
      - Use interleaved vertices.
      - Minimise GL calls where possible, batching where appropriate.

    And possibly a few/many others. I am (for curiosity's sake) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge) and received almost no performance change. While I am getting around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use. My CPU idles around 20-50% during rendering, so I assume I am GPU-bound. Note: I am looking into gDEBugger at the moment. Cross-posted at Game Development.

  • Book resources for x86/x64 assembly programming on Win platform

    - by Scott Davies
    Hello, I ran a search for assembly language resources on stackoverflow.com and found some interesting results, but they seemed to boil down to two groups: 1) assembly references to the old IA-32 architecture, such as the 80386 through the Pentium; 2) Windows-agnostic books. Most of the commenters make the point that assembler is CPU-dependent and that the OS is irrelevant, but it seems pointless to me to pick a book whose assembly examples refer to MS-DOS interrupts and memory layouts. Likewise, learning assembler on Linux would seem to produce Linux executables. Are there any book resources that are 1) modern, 2) x86/x64, and 3) aimed at the Windows platform? The reason I am targeting the Windows platform is that I would like to do low-level, OS-internals programming to supplement my Windows C/C++ work. Thanks

  • Launch x64 Windows application in C# while the project is set to x86

    - by daydr3am3r
    Hi all. I'm trying to launch osk.exe and I keep getting a "Could not start osk" message. The problem is that my project is set to x86 (I'm using an MS Access database). If I switch to x64 or Any CPU everything works fine, but then the database no longer works. I tried this:

        using System.Diagnostics;

        private void btnOSK_Click(object sender, EventArgs e)
        {
            Process.Start("osk.exe");
            Process.Start(@"C:\windows\system32\osk.exe");
        }

    I also tried to run SysWOW64\osk, but that didn't work either. Besides, my application should run on both x86 and x64 machines. Is there any way to bypass this? It's really frustrating.

  • How to read system information in C++ on Windows and Linux?

    - by f4
    I need to read system information like CPU/RAM/disk usage in C++; maybe swap, network and processes too, but that's less important. It has probably been done thousands of times before, so I first tried to search for a library. Someone here suggested SIGAR, which seems to fit my needs, but it is GPL and this is for inclusion in a proprietary product, so it's not an option here. I feel like it's something that's not that easy to implement, as it will need testing on several platforms, so a library would be welcome. If you don't know of any library, could you point me in the right direction for both platforms?

  • Running my web site in a 32-bit application pool on a 64-bit OS.

    - by Jeremy H
    Here is my setup:

      Dev:
      - Windows Server 2008 64-bit
      - Visual Studio 2008
      - Solution with 3 class libraries, 1 web application

      Staging web server:
      - Windows Server 2008 R2 64-bit
      - IIS 7.5 integrated application pool with 32-bit applications enabled

    In Visual Studio I have set all 4 of my projects to compile for 'Any CPU', but when I run this web application on the web server with the 32-bit application pool, it times out and crashes. When I run the application pool in 64-bit mode, it works fine. The production web server requires me to run a 32-bit application pool on a 64-bit OS, which is why the staging web server is configured this way. (I considered posting on Server Fault, but the server part seems to be working fine; it is my code specifically that doesn't seem to want to run in a 32-bit application pool, which is why I am posting here.)

  • Single static HTML file: how can I serve it efficiently?

    - by Stevo M.
    Good afternoon. I have a domain and server that I've procured for non-HTML services. Should anyone stumble across port 80 of this server, I'd like to serve them a page explaining what the domain and server are for, instead of just 404'ing them. How can I serve one small, static, self-contained (i.e. no images or CSS) HTML file with a minimum of effort (meaning both a smooth setup for me and minimal CPU expense)? The server contains an untouched installation of Arch Linux, and I'm open to solutions in any language. (Note: I am a slight newbie when it comes to this, so forgive me if this question seems trivial or obvious.) Thank you.
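
    One low-effort possibility, sketched here with Python's standard-library HTTP server (a tiny nginx or lighttpd configuration would do the same job); the directory path is hypothetical, and binding port 80 requires root or a port redirect:

        import os
        from http.server import HTTPServer, SimpleHTTPRequestHandler

        # SimpleHTTPRequestHandler serves files from the current directory; "/" maps to index.html.
        os.chdir("/srv/landing")   # hypothetical directory containing only the one index.html
        HTTPServer(("", 80), SimpleHTTPRequestHandler).serve_forever()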

  • Parallel Haskell in order to find the divisors of a huge number

    - by Dragno
    I have written the following program using Parallel Haskell to find the divisors of 1 billion:

        import Control.Parallel

        parfindDivisors :: Integer -> [Integer]
        parfindDivisors n = f1 `par` (f2 `par` (f1 ++ f2))
          where f1 = filter g [1..(quot n 4)]
                f2 = filter g [(quot n 4)+1..(quot n 2)]
                g z = n `rem` z == 0

        main = print (parfindDivisors 1000000000)

    I've compiled the program with ghc -rtsopts -threaded findDivisors.hs and I run it with findDivisors.exe +RTS -s -N2 -RTS. I see a 50% speedup compared to the simple version, which is this:

        findDivisors :: Integer -> [Integer]
        findDivisors n = filter g [1..(quot n 2)]
          where g z = n `rem` z == 0

    My processor is a dual-core Core 2 Duo from Intel. I was wondering if there is any improvement to be had in the code above, because the statistics the program prints say:

        Parallel GC work balance: 1.01 (16940708 / 16772868, ideal 2)

        SPARKS: 2 (1 converted, 0 overflowed, 0 dud, 0 GC'd, 1 fizzled)

    What are converted, overflowed, dud, GC'd and fizzled, and how can they help me improve the run time?

  • How do I view how many concurrent long polling requests there are on my server?

    - by Pascal
    My host is Joyent. My host says I have a 15-process limit and prstat -J shows those processes, but that doesn't tell me how many long-polling requests are currently being served. I could record it myself, but that would add a lot of performance overhead. I need to know when the server is at its long-polling limit. I know this limit is reached well before the memory or CPU is used up. From experimentation, I've already verified that the number of open long polls is NOT equivalent to the number of processes running, probably because each process has multiple threads, each serving a request. Thanks.
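
    Without instrumenting the application, one approximation is to count established TCP connections on the port the long-poll handler listens on; the sketch below assumes psutil is available in the zone (netstat -an with a grep for ESTABLISHED gives roughly the same number) and the port number is hypothetical:

        import psutil

        LONG_POLL_PORT = 80   # hypothetical; use the port the app actually listens on

        def open_long_polls():
            # Each held-open long-poll request shows up as one ESTABLISHED TCP connection.
            return sum(1 for c in psutil.net_connections(kind="tcp")
                       if c.status == psutil.CONN_ESTABLISHED
                       and c.laddr and c.laddr[1] == LONG_POLL_PORT)

        print(open_long_polls())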

  • Can EPD Python and MacPorts Python coexist on OS X (matplotlib)?

    - by bjoern
    I've been using MacPorts Python 2.6 on OS X 10.6. I am considering also installing the Enthought Python Distribution (EPD) on the same machine because it comes preconfigured with matplotlib and other nice data analysis and visualization packages. Can the two Python distributions coexist peacefully on the same machine? What potential problems will I have to look out for (e.g., environment variables)? I know that building matplotlib through MacPorts is an option, but the process is lengthy (on the order of a full day) and there are open questions about compiling some dependencies on 64-bit Intel. I would like to know about the tradeoffs before committing to one of the two approaches.
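
    Separate Python installs can usually live side by side; the practical question is which interpreter and which site-packages a given shell or script picks up (PATH, PYTHONPATH and #! lines). A quick check from whichever python was just launched:

        import sys

        print(sys.executable)   # path of the running interpreter (EPD vs. MacPorts)
        print(sys.prefix)       # its install prefix
        print(sys.path)         # the import search path actually in effect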

  • How to create a glib.Source from Python?

    - by Matt Joiner
    I want to integrate some asyncore.dispatcher instances into GLib's default main context. I figure I can create a custom GSource that's able to detect event readiness on the various sockets in asyncore.socket_map. From C I believe this is done by creating the necessary GSourceFuncs, which could involve cheap, non-blocking calls to select, and then handling them using asyncore.read, asyncore.write and friends. How do I actually create a GSource from Python? The glib.Source class is undocumented, and attempts to use it interactively have been in vain. Is there some other method that allows me to handle socket events in the asyncore module without resorting to timeouts (or anything else that endangers throughput or CPU usage)?
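
    Leaving the undocumented glib.Source class aside, one route that needs neither a custom GSource nor timeouts is an I/O watch per asyncore socket; this sketch assumes the old static glib bindings and only watches read/hang-up conditions (a write watch would also have to honour each dispatcher's writable()):

        import asyncore
        import glib   # static PyGObject bindings; gobject.io_add_watch behaves the same way

        def _on_io(fd, condition, dispatcher):
            # Translate main-loop readiness back into asyncore's handler calls.
            if condition & (glib.IO_HUP | glib.IO_ERR):
                dispatcher.handle_close()
                return False           # drop the watch for a dead socket
            asyncore.read(dispatcher)  # dispatches to handle_read()/handle_accept()/...
            return True                # keep the watch installed

        def watch_asyncore_sockets():
            for fd, dispatcher in asyncore.socket_map.items():
                glib.io_add_watch(fd, glib.IO_IN | glib.IO_HUP | glib.IO_ERR, _on_io, dispatcher)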

  • Compilation hangs for a class with field double d = 2.2250738585072012e-308

    - by 01es
    I have come across an interesting situation. A coworker committed some changes which would not compile on my machine, either from the IDE (Eclipse) or from the command line (Maven). The problem manifested as the compilation process taking 100% CPU; only killing the process would stop it. After some analysis the cause was located and resolved: it turned out to be the line "double d = 2.2250738585072012e-308" (without a semicolon at the end) in one of the interfaces. The following snippet reproduces it:

        public class WeirdCompilationIssue {
            double d = 2.2250738585072012e-308
        }

    Why would the compiler hang? Is this a language edge case?

  • Why doesn't infinite recursion hit a stack overflow exception in F#?

    - by Amazingant
    I know this is somewhat the reverse of the issue people usually have when they ask about a stack overflow, but if I create a function and call it as follows, I never receive any errors, and the application simply grinds away at a core of my CPU until I force-quit it:

        let rec recursionTest x = recursionTest x
        recursionTest 1

    Of course I can change it so it actually does something, like this:

        let rec recursionTest (x: uint64) = recursionTest (x + 1UL)
        recursionTest 0UL

    This way I can occasionally put a breakpoint in my code and see that the value of x is going up rather quickly, but it still doesn't complain. Does F# not mind infinite recursion?

  • Monitoring .NET ASP.NET Applications

    - by James Hollingworth
    I have a number of applications running on top of ASP.NET that I want to monitor. The main things I care about are:

      - Exceptions: We currently have some custom code that emails us when an exception occurs; if an application is failing hard, it floods our Outlook. I know (and use) ELMAH, which partly solves the problem, but it is still just a big table of exceptions with a pretty(ish) UI. I want something that makes sense of all of these exceptions (e.g. groups them, alerts when new ones occur, and tells me which common ones I should fix).
      - Logging: We currently log to files that are accessible via a shared folder, which devs grep and tail. Does anyone know of better ways of presenting this information? In an ideal world I want to associate it with exceptions.
      - Performance: Request times, memory usage, CPU, and whatever other stats I can get.

    I'm guessing this is probably going to be solved by a number of tools. Has anyone got any suggestions?

  • How Can Two Programs Talk to Each Other in Java?

    - by Arnon
    My first time here... I want to reduce the CPU usage, ROM usage and RAM usage of my app (generally speaking, all the system resources it uses; who doesn't? :). For this reason I want to split the preferences window from the rest of the application and let the preferences window run as an independent program. The preferences program should write to a property file (not a problem at all) and send an "update signal" to the main program, which means calling the update method (that I wrote) found in the Main class. How can I call the update method in the main program from the preferences program? Or, on the other hand, is there a way to build a preferences window that takes system resources only while it is shown? Is this approach, of separating programs and letting them talk to each other (somehow), the right one for speeding up my programs? Thanks

  • Load balancing and scheduling algorithms.

    - by Lukas Šalkauskas
    Hello there, so here is my problem: I have several servers with different configurations. I have different calculations (jobs), and I can predict approximately how long each job will take to be calculated. I also have priorities. My question is how to keep all machines loaded at 99-100% and schedule the jobs in the best way. Each machine can do several calculations at a time. Jobs are pushed to the machines, and the central machine knows the current load of each machine. I would also like to apply some kind of machine learning here, because I will have statistics for each job (started, finished, CPU load, etc.). How can I distribute the jobs (calculations) in the best possible way, keeping the priorities in mind? Any suggestions, ideas, or algorithms? FYI: my platform is .NET.
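
    Since the question's platform is .NET, the following is only a language-agnostic sketch (written in Python, with invented names) of one common heuristic: order jobs by priority and then by predicted duration, longest first, and always hand the next job to the machine expected to free up soonest:

        import heapq

        def schedule(jobs, machines):
            """jobs: iterable of (priority, predicted_seconds, job_id); machines: iterable of ids."""
            # Highest priority first; within a priority, longest jobs first (the classic LPT rule).
            ordered = sorted(jobs, key=lambda j: (-j[0], -j[1]))
            # Min-heap of (predicted_finish_time, machine_id): the machine that frees up soonest pops first.
            load = [(0.0, m) for m in machines]
            heapq.heapify(load)
            assignment = {}
            for priority, seconds, job_id in ordered:
                finish, machine = heapq.heappop(load)
                assignment[job_id] = machine
                heapq.heappush(load, (finish + seconds, machine))
            return assignment

        # Example: three machines, five jobs given as (priority, predicted seconds, id).
        print(schedule([(1, 30, "a"), (2, 10, "b"), (1, 50, "c"), (3, 5, "d"), (2, 20, "e")],
                       ["m1", "m2", "m3"]))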

  • Unwanted SQLite inserted in \bin

    - by wilk
    I am using Visual Studio 2010 and web deployment to promote the .NET MVC site to specific environments. I installed ELMAH, and it worked great in my DEV environment, but when I pushed to TEST I got exceptions because the SQLite assembly was not in a valid format. I am not using SQLite in ELMAH or anywhere else that I know of. I have removed all visible references to SQLite, and I have removed the .dll from all configuration bin directories, but it still gets inserted with each build. I realize the format problem is that SQLite cannot be built for Any CPU and my environments vary from x86 to x64, but I would prefer that SQLite not even be present. I have since uninstalled ELMAH, and SQLite is still inserted into the \bin directory. I have now re-installed ELMAH, and I manually delete SQLite.dll from \bin after each build. How can I determine what is causing SQLite to be inserted into my \bin after each build?

  • Linux binary built for 2.0 kernel wouldn't execute on 2.6.x kernel.

    - by lorin
    I was installing a binary Linux application on Ubuntu 9.10 x86_64. The app shipped with an old version of gzip (1.2.4) that was compiled for a much older kernel:

        $ file gzip
        gzip: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.0.0, stripped

    I wasn't able to execute this program. If I tried, this happened:

        $ ./gzip
        -bash: ./gzip: No such file or directory

    ldd was similarly unhappy with this binary:

        $ ldd gzip
        not a dynamic executable

    This isn't a showstopper for me, since my installation has a working version of gzip I can use. But I'm curious: what is the most likely source of this problem? A corrupted file? Or a binary incompatibility due to being built for a much older {kernel, libc, ...}?

  • Parallelize Bash Script

    - by thelsdj
    Let's say I have a loop in bash:

        for foo in `some-command`
        do
            do-something $foo
        done

    do-something is CPU-bound and I have a nice shiny 4-core processor. I'd like to be able to run up to 4 do-something's at once. The naive approach seems to be:

        for foo in `some-command`
        do
            do-something $foo &
        done

    This will run all the do-somethings at once, but there are a couple of downsides, mainly that do-something may also have some significant I/O which, performed all at once, might slow things down a bit. The other problem is that this code block returns immediately, so there is no way to do other work while all the do-somethings finish. How would you write this loop so there are always X do-somethings running at once?

  • Inheriting System.Windows.Controls.Panel

    - by modosansreves
    I use the Panel to provide a custom layout of UIElements. For this, I override MeasureOverride and ArrangeOverride. In ArrangeOverride, proper locations are given to the children. In my application there is a bunch of rather heavy (in number of visuals) children inside the panel, but only 2 of them are located inside the visible region at a time. Working with my application, I feel like all the visuals are drawn. I need to introduce a kind of 'virtualization' and make the children render (or take CPU cycles) selectively. How do I do that?

  • OpenGL - lighting of vertices outside clip range

    - by hmp
    I have a problem with lighting in my OpenGL application. When one of the vertices of a drawn polygon goes outside the front clip plane (or has z < 0, I'm not sure which), the polygon stops being lit properly. This, however, happens on only one of the machines I tested, one with an Intel GMA 950 card; on NVIDIA and ATI cards everything looks fine. I guess I am breaking some OpenGL rule here? How should I deal with it? I'd try dividing the scene into smaller polygons, but I'm not sure that guarantees the case is eliminated (that all polygons stepping outside the clipping range are off-screen).

  • Parse raw HTTP Headers

    - by Cev
    I have a string of raw HTTP and I would like to represent the fields in an object. Is there any way to parse the individual headers from an HTTP string? 'GET /search?sourceid=chrome&ie=UTF-8&q=ergterst HTTP/1.1\r\nHost: www.google.com\r\nConnection: keep-alive\r\nAccept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5\r\nUser-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_6; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.45 Safari/534.13\r\nAccept-Encoding: gzip,deflate,sdch\r\nAvail-Dictionary: GeNLY2f-\r\nAccept-Language: en-US,en;q=0.8\r\n [...]'
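
    Assuming Python (which the string literal above looks like), the standard library can already do this: split off the request line, then hand the header block to email.parser, which yields a dict-like object with case-insensitive field access:

        from email.parser import Parser

        def parse_request(raw):
            # Separate the request line ("GET /search?... HTTP/1.1") from the header block.
            request_line, _, rest = raw.partition("\r\n")
            method, path, version = request_line.split(" ", 2)
            headers = Parser().parsestr(rest)   # email.message.Message: case-insensitive header lookup
            return method, path, version, headers

        method, path, version, headers = parse_request(
            "GET /search?q=ergterst HTTP/1.1\r\nHost: www.google.com\r\nConnection: keep-alive\r\n\r\n")
        print(method, path, headers["host"], headers["Connection"])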
