Search Results

Search found 5628 results on 226 pages for 'cpu hogging'.

Page 169 of 226

  • HTML Audio performance

    - by user1888309
    I'm working on an HTML drum machine and I've run into some performance issues: the rhythm starts to break up once the BPM goes above 110, but I need it to work at BPM over 180. I suspect it may be related to the format or codec of the audio files, though it may also be that my code is not very optimised (JS CPU profiling suggests it isn't). So I'm hoping for some code review or some hints on optimisation. All the similar projects I've found on the internet didn't work well either, so maybe it's just a limitation of the Audio API. By the way, the project is very raw and sound currently works only in Chrome under Mac OS, so any advice on audio encoding for the web would also be great. Project on GitHub Pages. Screenshot of the groove which breaks. UPDATE: I've found that I was encoding the audio files incorrectly; after fixing that, the rhythm stopped breaking and it also started working in Mozilla. But there are still issues on Windows.

    Read the article

  • Eclipse hangs when rebuilding after the addition of an external JAR file.

    - by celestialorb
    I'm fairly new to Eclipse, so if this is something simple I apologize. When I attempt to add an external JAR file to my build path (specifically the "rt.jar" file, which contains certain tools that I require) and then rebuild my project, Eclipse hangs at the end of the build process. It gets to 100% and then just sits there, using 100% of one of my CPU cores. At first I thought it might be due to the relatively large size of rt.jar, but I tried smaller JAR files and it still hung at 100%. Any help would be greatly appreciated! If there is something wrong with using rt.jar, does anyone know of another JAR file that contains tools for dealing with both SOAP requests and XML/DOM manipulation? Thanks again!

    Read the article

  • How to force programs back out of the swap file when a resource-intensive batch job finishes?

    - by sharptooth
    We use employees' desktops for CPU-intensive simulation during the night. The desktops run Windows, usually Windows XP. Employees don't log off; they just lock their desktops, switch off their monitors and go home. Every employee has a configuration file in which he can specify when he is most likely to be out of the office. When that time comes, a background program grabs simulation data from the server, spawns worker processes, watches them, collects the results and sends them back to the server. When the time specified by the employee elapses, the simulation stops so that normal desktop usage is not interfered with. The problem is that the simulation consumes a lot of memory, so while the worker processes run they force the other programs into the swap file. When the employee comes back, all the programs he left open are sluggish and slow until he touches them one by one so that they are paged back in. Is there a way the program can force the other programs back out of the swap file when it stops the simulation, so that they run smoothly again?

    Read the article

  • Uploadify and Image Compression

    - by Ilya Biryukov
    Hi, I am using Uploadify on one of my client's web sites to allow them to upload a large number of pictures at once to their photo gallery. I have been seeing issues lately: they tend to upload large photographs (3 MB and above). I am wondering, is it possible to compress them (reduce their size) on the client side instead of doing it on the server (just like Facebook does)? I know I could easily do it on the server, but I am working on another project right now where I am expecting a large flow of photo uploads, and it would require a significant amount of CPU time to process them all. So I thought I'd ask about client-side processing. Thanks.

    Read the article

  • Considering getting into reverse engineering/disassembly

    - by Zombies
    Assuming a decent understanding of assembly on common CPU architectures (e.g. x86), how can one explore a potential path (career, fun and profit, etc.) into the field of reverse engineering? There are so few educational guides out there that it is difficult to understand what potential uses this has today (e.g. is searching for buffer overflow exploits still common, or do stack-monitoring programs make this obsolete?). I am not looking for a step-by-step program, just some relevant information: tips on how to efficiently find a specific area of a program, basic things in the trade, as well as what it is currently being used for today. So to recap: what current uses does reverse engineering have today? And how can one find some basic information on how to learn the trade (again, it doesn't have to be step-by-step; anything that could throw out a clue would be helpful)?

    Read the article

  • C#: Efficiently search a large string for occurrences of other strings

    - by Jon
    Hi, I'm using C# to continuously search for multiple string "keywords" within large strings, which are >= 4 KB. This code loops constantly, and sleeps aren't cutting down CPU usage enough while maintaining a reasonable speed. The bottleneck is the keyword-matching method. I've found a few possibilities, and all of them give similar efficiency:

    1) Aho-Corasick (http://tomasp.net/articles/ahocorasick.aspx) - I do not have enough keywords for this to be the most efficient algorithm.
    2) Regex, using an instance-level, compiled regex - provides more functionality than I require, and not quite enough efficiency.
    3) String.IndexOf - I would need to do a "smart" version of this for it to provide enough efficiency; looping through each keyword and calling IndexOf doesn't cut it.

    Does anyone know of any algorithms or methods that I can use to attain my goal?
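
    For context, a rough sketch of what a "smart" single-pass version of option 3 could look like. It is written in Python purely to keep it short (the structure carries straight over to C#), and the function name and grouping scheme are illustrative, not taken from the question:

        from collections import defaultdict

        def find_keywords(text, keywords):
            """Yield (start_index, keyword) for every occurrence of any keyword."""
            # Group keywords by first character so each text position only
            # tests the keywords that could possibly start there.
            by_first = defaultdict(list)
            for kw in keywords:
                if kw:
                    by_first[kw[0]].append(kw)
            for i, ch in enumerate(text):
                for kw in by_first.get(ch, ()):
                    if text.startswith(kw, i):
                        yield i, kw

    This keeps the search to a single pass over the text and avoids re-walking the whole string once per keyword, which is where the naive IndexOf loop burns its CPU time.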

    Read the article

  • Any ideas for developing a RISC-processor-friendly string allocator?

    - by Richard Fabian
    I'm working on some tools to enable high-throughput data-oriented development, and one thing I don't have an immediate answer for is how you go about allocating strings quickly. On RISC processors you have the additional implementation problem that the CPU doesn't like branching, which is what I'm trying to minimise or avoid. Also, cache coherence is important on most CPUs, so that has to be influential in the design too. So, how would you go about reducing the overhead of a generic string allocator? Sometimes it's easier to solve a more explicit problem, so: any ideas for string sizes of 5-30?

    Read the article

  • Why doesn't infinite recursion hit a stack overflow exception in F#?

    - by Amazingant
    I know this is somewhat the reverse of the issue people are having when they ask about a stack overflow issue, but if I create a function and call it as follows, I never receive any errors, and the application simply grinds up a core of my CPU until I force-quit it:

        let rec recursionTest x = recursionTest x
        recursionTest 1

    Of course I can change this out so it actually does something, like this:

        let rec recursionTest (x: uint64) = recursionTest (x + 1UL)
        recursionTest 0UL

    This way I can occasionally put a breakpoint in my code and see the value of x is going up rather quickly, but it still doesn't complain. Does F# not mind infinite recursion?

    Read the article

  • Trying to right-click on code in VS2008 causes a lockup.

    - by Adam Haile
    Working on a Win32 DLL using Visual Studio 2008 SP1 and, since yesterday, whenever I try to right-click on code, to go to a variable definition for example, VS completely locks up and I have to manually kill the process. To make it even weirder, whenever this happens the devenv.exe process uses exactly 25% of the CPU. And I mean exactly: never 24%, never 26%, always 25%. Also, I've run ProcMon to see if devenv is actually doing something, but it's doing absolutely nothing external to the process. No disk, network, or registry access. Nothing. This is getting really aggravating because I have a large code base to deal with, and the only other way of jumping to a definition is to search for it first. Has anyone run into a similar issue? And, better yet, does anyone know a fix?

    Read the article

  • Does the number of busy worker threads in the CLR ThreadPool affect performance of I/O threads?

    - by andrej351
    We have a Windows service which hosts a number of WCF services and, in an unrelated part of the app, makes extensive use of the TPL Task class to asynchronously do relatively short bits of work. It is my understanding that WCF uses managed I/O threads from the ThreadPool to execute requests. I noticed that after deploying a feature which significantly raised the application's use of Tasks, and as such its use of ThreadPool worker threads as well, the performance of a couple of web services became very slow. We're talking minutes instead of less than a second. The number of Tasks actually trying to run at any one time can range between 20 and 1000, which makes me think that any new (last-in) work needing some CPU time could be forced to wait for quite some time. Does the (in my case extremely large) number of busy ThreadPool worker threads affect the ThreadPool's managed I/O threads? Or could these two be connected in any way? Thanks!

    Read the article

  • Gapless (looping) audio playback with DirectX in C#

    - by horsedrowner
    I'm currently using the following code (C#):

        private static void PlayLoop(string filename)
        {
            // Load the file and start playback.
            Audio player = new Audio(filename);
            player.Play();
            while (player.Playing)
            {
                // Jump back to the start once the end of the file is reached.
                if (player.CurrentPosition >= player.Duration)
                {
                    player.SeekCurrentPosition(0, SeekPositionFlags.AbsolutePositioning);
                }
                System.Threading.Thread.Sleep(100);
            }
        }

    This code works, and the file I'm playing loops. But, obviously, there is a small gap between each repetition. I tried reducing the Thread.Sleep interval to 10 or 5, but the gap remains. I also tried removing it completely, but then the CPU usage rises to 100% and there's still a small gap. Is there any (simple) way to make playback in DirectX gapless? It's not a big deal since it's only a personal project, but if I'm doing something foolish or otherwise completely wrong, I'd love to know. Thanks in advance.

    Read the article

  • Notifying when screen is off

    - by Al
    I'm trying to generate a notification which vibrates the phone and plays a sound when the screen is off (CPU turned off). According to the log messages the notification is being sent, but the phone doesn't vibrate or play the sound until I turn the screen on again. I tried holding a 2-second temporary wake lock (PowerManager.PARTIAL_WAKE_LOCK), which I thought would be ample time for the notification to be played, but alas, it still doesn't work. Any pointers on getting the notification to run reliably? I'm testing this on a G1 running Android 1.6. The code I'm using:

        notif.vibrate = new long[] {100, 1000};
        notif.defaults |= Notification.DEFAULT_SOUND;
        notif.ledARGB = Color.RED;
        notif.ledOnMS = 1;
        notif.ledOffMS = 0;
        notif.flags = Notification.FLAG_SHOW_LIGHTS;
        notif.flags |= NOTIF_FLAGS; // static var
        if (!screenOn) { // var which updates when the screen turns off/on
            mWakeLock.acquire(2000);
        }
        manager.notify(NOTIF_ID, notif);

    Read the article

  • Untrusted GPGPU code (OpenCL etc) - is it safe? What risks?

    - by Grzegorz Wierzowiecki
    There are many approaches when it comes to running untrusted code on a typical CPU: sandboxes, fake roots, virtualization... What about untrusted code for GPGPU (OpenCL, CUDA, or already-compiled binaries)? Assuming that the memory on the graphics card is cleared before running such third-party untrusted code, are there any security risks? What kinds of risks? Is there any way to prevent them? (Possible sandboxing on the GPGPU, or some other technique?) P.S. I am more interested in GPU binary-code-level security than in high-level GPGPU programming-language security (but those solutions are welcome as well). What I mean is that references to GPU opcodes (a.k.a. machine code) are welcome.

    Read the article

  • How to profile a Silverlight application?

    - by rudigrobler
    Are there any profilers that support Silverlight? I have tried ANTS (version 3.1) without any success. Does version 4 support it? Are there any other products I can try? Update: since the release of Silverlight 4 it is now possible to do full profiling of SL applications; check out this article on the topic: "At PDC, I announced that Silverlight 4 came with the new CoreCLR capability of being profile-able by the VS2010 profilers: this means that for the first time, we give you the power to profile the managed and native code (user or platform) used by a Silverlight application. woohoo. kudos to the CLR team. Sidenote: From Silverlight 1-3, one could only use things like xperf (see XPerf: A CPU Sampler for Silverlight), which is very powerful for seeing the layout/text/media/gfx/etc pipelines, but only gives the native call stack." From SilverLite (PDC video, TechEd Iceland, VS2010, profiling, Silverlight 4)

    Read the article

  • Single static HTML file: how can I serve it efficiently?

    - by Stevo M.
    Good afternoon. I have a domain and server that I've procured for non-HTML services. Should anyone stumble across port 80 of this server, I'd like to serve them a page explaining what the domain and server are for, instead of just 404'ing them. How can I serve one small, static, self-contained (i.e. no images or CSS) HTML file with a minimum of effort, meaning both a smooth setup for me and minimal CPU expense? The server contains an untouched installation of Arch Linux, and I'm open to solutions in any language. (Note: I am a bit of a newbie when it comes to this, so forgive me if this question seems trivial or obvious.) Thank you.
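
    One low-effort option, sketched here with nothing but Python's standard library (assuming Python is acceptable on the Arch box; the page text is a placeholder):

        #!/usr/bin/env python3
        from http.server import BaseHTTPRequestHandler, HTTPServer

        PAGE = b"<html><body><p>This domain hosts non-web services.</p></body></html>"

        class OnePageHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # Every path gets the same small explanatory page.
                self.send_response(200)
                self.send_header("Content-Type", "text/html; charset=utf-8")
                self.send_header("Content-Length", str(len(PAGE)))
                self.end_headers()
                self.wfile.write(PAGE)

        if __name__ == "__main__":
            # Binding to port 80 needs root (or CAP_NET_BIND_SERVICE).
            HTTPServer(("", 80), OnePageHandler).serve_forever()

    CPU cost is essentially zero while idle, and there is nothing to configure beyond the one file.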

    Read the article

  • Python: Taking an array and breaking it into subarrays based on some criteria

    - by randombits
    I have an array of files. I'd like to be able to break that array down into one array with multiple subarrays, where each subarray contains files that were created on the same day. So if the array contains files from March 1 - March 31, I'd like to have an array with 31 subarrays (assuming there is at least one file for each day). In the long run, I'm trying to find the file from each day with the latest creation/modification time. If there is a way to bundle that into the iterations required above to save some CPU cycles, that would be even more ideal. Then I'd have one flat array with 31 files, one for each day: the latest file created on each individual day.
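
    For the end goal described (the newest file per day), a single pass is enough; a minimal sketch, assuming `files` is a list of paths and that modification time is an acceptable stand-in for creation time:

        import os
        from datetime import date

        def newest_file_per_day(files):
            """Return {day: path}, keeping only the most recent file for each day."""
            newest = {}  # day -> (mtime, path)
            for path in files:
                mtime = os.path.getmtime(path)   # use st_ctime if creation time matters
                day = date.fromtimestamp(mtime)
                if day not in newest or mtime > newest[day][0]:
                    newest[day] = (mtime, path)
            return {day: path for day, (mtime, path) in newest.items()}

    If the intermediate per-day grouping is still wanted, the same loop can append each path to a per-day list instead of keeping only the maximum.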

    Read the article

  • I want to run the QtOpenCL examples (i.e. Qt with OpenCL). Installation and setup help?

    - by Skkard
    The thing is, I have to run the QtOpenCL examples, as given here: http://labs.trolltech.com/blogs/2010/04/07/using-opencl-with-qt/. The problem is that I have no clue where to start. I downloaded the source for QtOpenCL, but it needs a valid OpenCL installation; I have Qt installed already. How do I install OpenCL? Unfortunately I don't have a GPU at home, and need to run it on my CPU for now. Later I have to give a presentation, where I will be supplied a system with a GPU. How do I go about installing OpenCL? Thank you.

    Read the article

  • (Newbie) Amazon Web Services Apache Server

    - by Samnsparky
    Hello! I am trying to get a feel for the costs incurred by running Apache on AWS continuously. Assuming that the service is scarcely used, does anyone know how many CPU hours it would eat up in a month just by sitting there and running? I understand that this is slightly impractical to estimate, but I am trying to figure out what the cost of entry is to deploy an application on this platform (as compared to GAE). I suspect it is small, but I would like to know. Thank you for your help, Sam

    Read the article

  • Performance Overhead of Perf Event Subsystem in Linux Kernel

    - by Bo Xiao
    Performance counters for Linux are a new kernel-based subsystem that provides a framework for all things performance analysis. It covers hardware-level features (CPU/PMU, Performance Monitoring Unit) as well as software features (software counters, tracepoints). Since 2.6.33, the kernel has provided the 'perf_event_create_kernel_counter' API for developers to create kernel counters to collect system runtime information. What concerns me most is the performance impact on the overall system when tracepoints/ftrace are enabled. There are no docs I can find about this. I was once told that ftrace was implemented by dynamically patching code; will it slow the system dramatically?

    Read the article

  • How to create real-life robots?

    - by Click Upvote
    Even before I learnt programming I've been fascinated by how robots could work. Now I know how the underlying programming instructions would be written, but what I don't understand is how those instructions are followed by the robot. For example, if I wrote this code:

        object = Robot.ScanSurroundings(300, 400);
        if (Objects.isEatable(object)) {
            Robot.moveLeftArm(300, 400);
            Robot.pickObject(object);
        }

    How would this program be followed by the CPU in a way that would make the robot perform the physical actions of looking to the left, moving its arm, and so on? Is it done primarily in binary language/ASM? Lastly, where would I go if I wanted to learn how to create a robot?

    Read the article

  • What is the optimal number of threads for performing IO operations in java?

    - by marc
    In Goetz's "Java Concurrency in Practice", in a footnote on page 101, he writes "For computational problems like this that do not I/O and access no shared data, Ncpu or Ncpu+1 threads yield optimal throughput; more threads do not help, and may in fact degrade performance..." My question is, when performing I/O operations such as file writing, file reading, file deleting, etc, are there guidelines for the number of threads to use to achieve maximum performance? I understand this will be just a guide number, since disk speeds and a host of other factors play into this. Still, I'm wondering: can 20 threads write 1000 separate files to disk faster than 4 threads can on a 4-cpu machine?

    Read the article

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think I) want. A lot of tutorials, and even SO questions, have similar tips, generally covering:

    - Use GL face culling (the OpenGL function, not the scene logic)
    - Only send one matrix to the GPU (the projection-model-view combination), thereby reducing the MVP calculation from once per vertex to once per model (as it should be)
    - Use interleaved vertices
    - Minimise GL calls as much as possible, batching where appropriate

    And possibly a few/many others. I am (for curiosity's sake) rendering 28 million triangles in my application using several vertex buffers. I have tried all of the above techniques (to the best of my knowledge) and received almost no performance change. While I am getting around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use. My CPU is idling at around 20-50% during rendering, so I assume I am GPU-bound. Note: I am looking into gDEBugger at the moment. Cross-posted at Game Development.

    Read the article

  • Do Django tests run slower on the Mac compared to Linux?

    - by Thierry Lam
    I'm currently developing my Django projects on both:

    - Mac OS X 10.5, 32-bit
    - Ubuntu Server 9.10, 64-bit (1 CPU, 512 MB RAM)

    Both of the above OSes are using:

    - Python 2.6.4
    - Django 1.1.1
    - MySQL 5.1

    Running 12 tests for one of my applications takes:

    - Mac: 57.513s
    - Linux: 30.935s

    EDIT: Mac hardware spec: MacBook Pro, 2.2 GHz Intel Core 2 Duo, 3 GB RAM. I'm running the Ubuntu OS on the same Mac above through VMware Fusion 2.0.6. You might argue that Ubuntu Server 64-bit is faster, but I have observed a similar speed difference on the Ubuntu 8.10 32-bit desktop edition. Even if I turn off my Linux VM and other Mac applications, I still experience the slowness. Has anyone else experienced this Django test speed difference across those two OSes?

    Read the article

  • How to create a glib.Source from Python?

    - by Matt Joiner
    I want to integrate some asyncore.dispatcher instances into GLib's default main context. I figure I can create a custom GSource that's able to detect event readiness on the various sockets in asyncore.socket_map. From C, I believe this is done by creating the necessary GSourceFuncs, which could involve cheap, non-blocking calls to select, and then handling them using asyncore.read, .write and friends. How do I actually create a GSource from Python? The class glib.Source is undocumented, and attempts to use the class interactively have been in vain. Is there some other method that allows me to handle socket events in the asyncore module without resorting to timeouts (or anything else that endangers potential throughput and CPU usage)?
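
    For what it's worth, a sketch of a simpler route that avoids writing a custom GSource at all: register each asyncore file descriptor with the default main context through glib.io_add_watch (the same PyGTK-era glib module the question refers to). The helper names are made up, and writability really ought to be re-checked as dispatchers change state:

        import asyncore
        import glib

        def _on_io(fd, condition, dispatcher):
            # Translate GLib I/O conditions into asyncore handler calls.
            if condition & (glib.IO_IN | glib.IO_PRI):
                dispatcher.handle_read_event()
            if condition & glib.IO_OUT:
                dispatcher.handle_write_event()
            if condition & (glib.IO_ERR | glib.IO_HUP):
                dispatcher.handle_close()
                return False          # drop the watch for a closed socket
            return True               # keep watching

        def attach_asyncore_to_glib():
            for fd, dispatcher in asyncore.socket_map.items():
                cond = glib.IO_IN | glib.IO_PRI | glib.IO_ERR | glib.IO_HUP
                if dispatcher.writable():
                    cond |= glib.IO_OUT
                glib.io_add_watch(fd, cond, _on_io, dispatcher)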

    Read the article

  • Have threads run indefinitely in a Java application

    - by TP
    I am trying to program a game in which I have a Table class, and each person sitting at the table is a separate thread. The game involves the people passing tokens around and then stopping when the party chime sounds. How do I program the run() method so that once I start the person threads, they do not die and stay alive until the end of the game? One solution I tried was having a while (true) {} loop in the run() method, but that increases my CPU utilization to around 60-70 percent. Is there a better method?
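
    The usual cure for a spinning run() is to block on something instead of looping. A minimal sketch of the idea (in Python purely for brevity; Java's BlockingQueue.take() behaves the same way, and the class and queue names are illustrative only):

        import queue
        import threading

        class Person(threading.Thread):
            def __init__(self, name, inbox, outbox):
                super().__init__(daemon=True)
                self.name, self.inbox, self.outbox = name, inbox, outbox

            def run(self):
                while True:
                    token = self.inbox.get()   # blocks, so an idle thread uses no CPU
                    if token is None:          # sentinel: the party chime sounded
                        self.outbox.put(None)  # pass the shutdown signal along
                        return
                    self.outbox.put(token)     # pass the token to the next person

    Wiring the queues into a ring (each person's outbox is the next person's inbox) keeps the token moving, and the threads stay alive without burning a core.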

    Read the article
