Search Results

Search found 6355 results on 255 pages for 'slow downs'.


  • How can two threads access a common array of buffers with minimal blocking? (C#)

    - by Jelly Amma
    Hello, I'm working on an image-processing application where I have two threads on top of my main thread:

    1 - CameraThread, which captures images from the webcam and writes them into a buffer
    2 - ImageProcessingThread, which takes the latest image from that buffer for filtering

    The reason this is multithreaded is that speed is critical: CameraThread needs to keep grabbing pictures and have the latest capture ready for ImageProcessingThread to pick up while the latter is still processing the previous image. My problem is finding a fast and thread-safe way to access that common buffer. I've figured that, ideally, it should be a triple buffer (image[3]) so that if ImageProcessingThread is slow, CameraThread can keep writing to the two other images, and vice versa. What sort of locking mechanism would be most appropriate to make this thread-safe? I looked at the lock statement, but it seems it would make one thread block while waiting for the other to finish, which would defeat the point of triple buffering. Thanks in advance for any idea or advice. J.
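
    A common shape for this (a minimal sketch below, in Python, since the index-swap idea is language-neutral; the C# version with lock is analogous, and the TripleBuffer name is made up) is to hold a lock only for the O(1) buffer-index swaps, never while a frame is being written or processed:

        import threading

        class TripleBuffer:
            """Camera thread writes into 'back'; processing thread reads 'front'.
            The lock guards only the index swaps, so neither thread ever waits
            while an image is being produced or consumed."""

            def __init__(self):
                self.slots = [None, None, None]
                self.back, self.middle, self.front = 0, 1, 2
                self.fresh = False            # True if 'middle' holds an unread frame
                self.lock = threading.Lock()

            def publish(self, frame):         # called by the camera thread
                self.slots[self.back] = frame
                with self.lock:
                    self.back, self.middle = self.middle, self.back
                    self.fresh = True

            def latest(self):                 # called by the processing thread
                with self.lock:
                    if not self.fresh:
                        return None           # no new frame since the last call
                    self.front, self.middle = self.middle, self.front
                    self.fresh = False
                return self.slots[self.front]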

    Read the article

  • classic asp & .net 2 site not working on windows 7

    - by alexander2116
    I am receiving the following error message: "An error occurred on the server when processing the URL. Please contact the system administrator. If you are the system administrator please click here to find out more about this error." I have my site in the inetpub directory, in a subfolder called website. I have also gone to Add/Remove Windows Components and had ASP installed. In IIS Manager I have ASP listed with default settings. The initial website page is a classic ASP page. Has anyone else encountered this issue? Please help! I'm having to develop through a VPN/Remote Desktop combo, which is painfully slow!! Thanks so much to anyone who can help!

    Read the article

  • High level audio crossfading library for python

    - by tcoopman
    I am looking for a high-level audio library that supports crossfading for Python (and that works on Linux). In fact, crossfading songs and saving the result is about the only thing I need. I tried pyechonest, but I find it really slow, and working with multiple songs at the same time is hard on memory too (I tried to crossfade about 10 songs into one, but I got out-of-memory errors and my script was using 1.4 GB of memory). So now I'm looking for something else that works with Python. I have no idea if anything like that exists; if not, are there good command-line tools for this that I could write a wrapper for?
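
    One candidate worth evaluating is pydub (an assumption on my part, not something the question mentions: it needs ffmpeg or libav installed for MP3 work, and it decodes whole songs into memory, so crossfading pairwise rather than all ten at once is kinder on RAM):

        from pydub import AudioSegment

        a = AudioSegment.from_file("song1.mp3")
        b = AudioSegment.from_file("song2.mp3")

        # Append the second song with a 5000 ms crossfade and save the result.
        mixed = a.append(b, crossfade=5000)
        mixed.export("mix.mp3", format="mp3")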

    Read the article

  • java: speed up reading foreign characters

    - by Yang
    My current code needs to read foreign characters from the web. It works, but it is very slow, since it reads char by char through an InputStreamReader. Is there any way to speed it up and still get the job done?

        // Pull content stream from response
        HttpEntity entity = response.getEntity();
        InputStream inputStream = entity.getContent();
        StringBuilder contents = new StringBuilder();
        int ch;
        InputStreamReader isr = new InputStreamReader(inputStream, "gb2312");
        // FileInputStream file = new InputStream(is);
        while ((ch = isr.read()) != -1)
            contents.append((char) ch);
        String encode = isr.getEncoding();
        return contents.toString();

    Read the article

  • Ruby: would using Fibers increase my DB insert throughput?

    - by Zombies
    Currently I am using Ruby 1.9.1 and the 'ruby-mysql' gem, which, unlike the 'mysql' gem, is written in pure Ruby. This is pretty slow, actually, as it seems to insert at a rate of almost 1 row per second (SLOOOOOWWWWWW). And I have a lot of inserts to make too; it's pretty much all this script ultimately does. I am using just 1 connection (since I am using just one thread). I am hoping to speed things up by creating a fiber that will: create a new DB connection, insert 1-3 records, close the DB connection. I would imagine launching 20-50 of these would greatly increase DB throughput. Am I correct to go along this route? I feel that this is the best option, as opposed to refactoring all of my DB code :(
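
    One caution before reaching for fibers: they are cooperatively scheduled, so 20-50 of them on one thread still execute one at a time. The usual first fix for ~1 insert/second is batching many rows per round trip and per commit; a sketch of the idea in Python's DB-API for neutrality (the driver choice and the table/column names are invented):

        import pymysql  # assumed driver; any DB-API module works the same way

        conn = pymysql.connect(host="localhost", user="app",
                               password="secret", database="mydb")
        cur = conn.cursor()

        records = [("alpha", 1), ("beta", 2), ("gamma", 3)]  # placeholder rows
        cur.executemany("INSERT INTO items (name, qty) VALUES (%s, %s)", records)
        conn.commit()   # one commit per batch instead of one per row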

    Read the article

  • More efficient way to find & tar millions of files

    - by Stu Thompson
    I've got a job running at the command-line prompt on my server for two days now:

        find data/ -name 'filepattern-*2009*' -exec tar uf 2008.tar {} \;

    It is taking forever, and then some. Yes, there are millions of files in the target directory. But just running...

        find data/ -name 'filepattern-*2009*' -print > filesOfInterest.txt

    ...takes only two hours or so. At the rate my job is running, it won't be finished for a couple of weeks. That seems unreasonable. Is there a more efficient way to do this? Maybe with a more complicated bash script? A secondary question is: why is my current approach so slow?
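
    For comparison, here is the single-pass idea sketched with Python's tarfile module: the tree is walked once and the archive is opened once, instead of tar re-opening and re-scanning 2008.tar on every -exec invocation (a plausible reason the original command crawls):

        import fnmatch, os, tarfile

        # Walk the tree once, open the archive once, add matches as found.
        with tarfile.open("2008.tar", "w") as tar:
            for dirpath, _dirnames, filenames in os.walk("data"):
                for name in fnmatch.filter(filenames, "filepattern-*2009*"):
                    tar.add(os.path.join(dirpath, name))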

    Read the article

  • Is it possible to use Sphinx search with dynamic conditions?

    - by Fedyashev Nikita
    In my web app I need to perform 3 types of searching on an items table, with the following conditions:

    1. items.is_public = 1 (use the title field for indexing) - a lot of results can be retrieved (cardinality is much higher than in the other cases)
    2. items.category_id = {X} (use title + private_notes fields for indexing) - usually fewer than 100 results
    3. items.user_id = {X} (use title + private_notes fields for indexing) - usually fewer than 100 results

    I can't find a way to make Sphinx work in all these cases, but it works well in the 1st case. Should I use Sphinx just for the 1st case and plain old "slow" FULLTEXT searching in MySQL for the others (at least because of the lower cardinality in cases 2 and 3)? Or is it just me, and Sphinx can do pretty much everything?

    Read the article

  • jQuery FadeIn, FadeOut Div - IE7 bug

    - by user1058223
    I have a div that will fade in and out on hover in FF, but in IE7 it just hides and shows with no animation. Here is my code:

        #nav-buttons {
            display: none;
            width: 894px;
            position: relative;
            z-index: 1000;
        }

    ----------

        <div id="contents">
            <div id="nav-buttons">
                <a href="javascript:void(0)" id="left-button"></a>
                <a href="javascript:void(0)" id="right-button"></a>
            </div>
            other html....
        </div>

    ----------

        $(document).ready(function() {
            $("#contents").hover(function() {
                $("#nav-buttons").fadeToggle("slow");
            });
        });

    Read the article

  • How to detect Out Of Memory condition?

    - by Jaromir Hamala
    I have an application running on WebSphere Application Server 6.0, and it crashes nearly every day because of an Out-Of-Memory error. From verbose GC it is certain there are memory leaks (many of them). Unfortunately the application is provided by an external vendor, and getting things fixed is a slow and painful process. As part of the process I need to gather the logs and heapdumps each time the OOM occurs. Now I'm looking for some way to automate this. The fundamental problem is how to detect the OOM condition. One way would be to create a shell script which periodically searches for new heapdumps. This approach seems kind of dirty to me. Another approach might be to leverage JMX somehow, but I have little or no experience in this area and don't have much idea how to do it. Or does WAS have some kind of trigger/hook for this? Thank you very much for every piece of advice!
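
    For what the "watch for new heapdumps" approach would look like, here is a minimal polling sketch in Python (the dump directory, file pattern, and archive path are all assumptions; IBM JDK heapdumps typically land in the profile directory, but check your configuration):

        import glob, shutil, time

        PATTERN = "/opt/WebSphere/profiles/*/heapdump*.phd"   # assumed location
        seen = set(glob.glob(PATTERN))

        while True:
            for dump in glob.glob(PATTERN):
                if dump not in seen:
                    seen.add(dump)
                    shutil.copy(dump, "/var/oom-archive/")    # gather logs here too
            time.sleep(60)   # poll once a minute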

    Read the article

  • Sql Server 2000 Stored Procedure Prevent Parallelism or something?

    - by user187305
    I have a huge, disgusting stored procedure that wasn't slow a couple of months ago, but now is. I barely know what this thing does and I am in no way interested in rewriting it. I do know that if I take the body of the stored procedure, declare/set the values of the parameters, and run it in Query Analyzer, it runs more than 20x faster. From the internet, I've read that this is probably due to a bad cached query plan. So I've tried running the sp WITH RECOMPILE after the EXEC, and I've also tried putting WITH RECOMPILE inside the sp, but neither of those helped even a little bit. When I look at the execution plan of the sp vs. the query, the biggest difference is that the sp has Parallelism operations all over the place and the query doesn't have any. Can this be the cause of the difference in speeds? Thank you, any ideas would be great... I'm stuck.

    Read the article

  • BULK SMS, Long Codes (VMN MSISDN), T-Mobile?

    - by John
    Does any US wireless carrier offer individuals or companies a direct connection to the SMSC? The number is 747-772-3101 (replace the 7's with 6's). This number is registered to T-Mobile, and T-Mobile verified it to be a valid subscriber sending 160,000+ text messages monthly, with nothing but an unlimited text messaging plan on top of the cheapest voice plan. The company behind the number verified to me that they don't use GSM modems, as those are too slow. So I know it's possible, but who would I contact? Sales, or anyone else reachable through a 1-800 number, is ignorant of these services, and developer.t-mobile is worthless and doesn't reply to emails. Any info??

    Read the article

  • Has Object in VB 2010 received the same optimization as dynamic in C# 4.0?

    - by Abel
    Some people have argued that the C# 4.0 feature introduced with the dynamic keyword is the same as the "everything is an Object" feature of VB. However, a call on a dynamic variable is translated into a delegate once, and from then on the delegate is called. In VB, when using Object, no caching is applied, and each call on an untyped method involves a whole lot of under-the-hood reflection, sometimes totaling a whopping 400-fold performance penalty. Have the dynamic type's delegate optimization and caching also been added to VB's untyped method calls, or is VB's untyped Object still so slow?

    Read the article

  • Find gap between start and end dates for multiple date ranges with overlaps

    - by sqlint
    I need to find gaps of more than 20 days between start and end dates, for multiple date ranges with overlaps. One id has multiple start dates and end dates. The following id 1 has two gaps of less than 20 days; it should be considered one range from 10/01/2012 to 10/30/2014 without any gap:

        1  10/01/2012  02/01/2013
        1  01/01/2013  01/31/2013
        1  02/10/2013  03/31/2013
        1  04/15/2013  10/30/2014

    Id 2 has a gap of more than 20 days between end date 01/30/2013 and start date 05/01/2013. It has to be captured:

        2  01/01/2013  01/30/2013
        2  05/01/2013  06/30/2014
        2  07/01/2013  02/01/2014

    Id 3 should be considered one range from 01/01/2012 to 06/01/2014 without any gap. The gap between end date 02/28/2013 and start date 07/01/2013 should be ignored, because the range from 01/01/2012 to 01/01/2014 covers it:

        3  01/01/2012  01/01/2014
        3  01/01/2013  02/28/2013
        3  07/01/2013  06/01/2014

    A cursor can do it, but it works extremely slowly and is not acceptable. SQL fiddle: http://sqlfiddle.com/#!3/27e3f/2/0
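
    The set-based rule, sketched in Python to pin it down before translating to SQL: sort each id's ranges by start date, extend a running range while the next start is within 20 days of the running end, and report a gap otherwise. This reproduces the expected output for all three sample ids:

        def gaps_over(ranges, days=20):
            """ranges: list of (id, start, end) with start/end as datetime.date.
            Returns (id, gap_start, gap_end) for every gap longer than `days`."""
            by_id, gaps = {}, []
            for rid, start, end in ranges:
                by_id.setdefault(rid, []).append((start, end))
            for rid, spans in by_id.items():
                spans.sort()
                _, cur_end = spans[0]
                for start, end in spans[1:]:
                    if (start - cur_end).days > days:
                        gaps.append((rid, cur_end, start))   # reportable gap
                        cur_end = end
                    else:
                        cur_end = max(cur_end, end)          # overlap or near touch
            return gaps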

    Read the article

  • C# - Inserting multiple rows using a stored procedure

    - by user177883
    I have a list of about 4 million objects. There is a stored proc that takes an object's attributes as params, makes some lookups, and inserts them into tables. What's the most efficient way to insert these 4 million objects into the db? How I do it now:

        // connect to sql - SQLConnection ...
        foreach (var item in listofobjects)
        {
            SQLCommand sc = ...
            // assign params
            sc.ExecuteQuery();
        }

    This has been really slow. Is there a better way to do this?
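
    Whatever the language, the big levers are fewer round trips and fewer commits; in C# specifically, SqlBulkCopy into a staging table is often recommended for millions of rows. The batching shape itself looks like this Python sketch (pyodbc, the DSN, and the proc signature are all assumptions):

        import pyodbc  # assumed driver

        conn = pyodbc.connect("DSN=mydb")                  # hypothetical DSN
        cur = conn.cursor()
        rows = [("a", 1, 2.0), ("b", 2, 3.5)]              # placeholder attribute tuples
        BATCH = 1000

        for i in range(0, len(rows), BATCH):
            cur.executemany("{CALL dbo.insert_item (?, ?, ?)}", rows[i:i + BATCH])
            conn.commit()                                   # commit per batch, not per row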

    Read the article

  • iPhone webapp: my resources don't get cached

    - by Savageman
    Hello, first of all, I'd like to say I'm not using any offline feature from HTML5. I have a web application which runs on the iPhone. When viewing it from Safari, everything works quite well. But when I launch the application from the home screen (to remove the navigation bar), it can be really slow. I checked the logs in Apache, and it appears that Safari does a good job of caching the resources (CSS / JS / images), with Apache answering "304 Not Modified" when needed. However, when the web app runs as a "real" application (navigation bar hidden), those resources don't get cached and the content has to be transferred over and over again (response code 200 OK + content), resulting in significantly slower page loads. How can I prevent this behavior? Do I need to always run my webapp inside Safari, even when it's launched from the home screen? Thank you!
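
    One thing worth ruling out is reliance on revalidation: with explicit freshness lifetimes, the full-screen app never needs to ask at all. A sketch via Apache's mod_expires (the content types and the one-week lifetime are placeholders, not a recommendation from the original post):

        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType text/css "access plus 1 week"
            ExpiresByType application/javascript "access plus 1 week"
            ExpiresByType image/png "access plus 1 week"
        </IfModule>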

    Read the article

  • git: can I speed up committing?

    - by AndreasT
    I have a big repository in a shared folder. I use git from within a VM on that folder. Everything works nicely, but the repository is big, and git's searching through all directories and files when committing is slow. I cannot move this repository out of the shared folder. I tried to git add specific files and directories, but when I do git commit -m "something" it still goes off on its odyssey through the directory tree. Can I do commits that ignore the rest of the tree?

    Read the article

  • low latency data link pc to android

    - by steveh
    Can anyone recommend a method for a low-latency, bi-directional com link between my PC app and an Android slave app? The app I have works now via WiFi, but the latency is too high (about 300 ms); I'm looking to get it down to 10 ms or so. The Android device is acting like a glorified remote control for the game on the PC: the apk displays a low-res image and sends button presses back to the game, and the round trip needs to be quick. I'm thinking the only option besides the network is to connect a USB cable, but I don't see a lot of support for that path, and I'm not even sure it would be lower latency than WiFi. Any ideas, please?

    Read the article

  • Resetting AUTO_INCREMENT on myISAM without rebuilding the table

    - by Artem
    Please help, I am in major trouble with our production database. I accidentally inserted a key with a very large value into an auto-increment column, and now I can't seem to change this value without a huge rebuild time.

        ALTER TABLE tracks_copy AUTO_INCREMENT = 661482981

    is super slow. How can I fix this in production? I can't get this to work either (it has no effect):

        myisamchk tracks.MYI --set-auto-increment=661482982

    Any ideas? Basically, no matter what I do I get an overflow:

        SHOW CREATE TABLE tracks
        CREATE TABLE tracks (
            ...
        ) ENGINE=MYISAM AUTO_INCREMENT=2147483648 DEFAULT CHARSET=latin1

    Read the article

  • Retrieving data from database. Retrieve only when needed or get everything?

    - by RHaguiuda
    I have a simple application to store contacts. This application uses a simple relational database to store contact information, like name, address, and other data fields. While designing it, a question came to my mind: when designing programs that use databases, should I retrieve all database records and store them in objects in my program, so I get very fast performance, or should I always fetch data only when required? Of course, retrieving all data is only feasible if there isn't too much of it, but do you use this approach when you are sure the database will be small (< 300 records, for example)? I once designed a similar application that fetches data only when needed, but it was slow (using an Access database). Thanks for all help.
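
    For a table this small, a common middle ground is a read-through cache: fetch everything on first access, serve from memory afterwards, and invalidate on writes. A minimal sketch (the fetch_all callable stands in for whatever data-access layer the app actually uses):

        class ContactCache:
            def __init__(self, fetch_all):
                self._fetch_all = fetch_all   # callable that queries the database
                self._contacts = None

            def all(self):
                if self._contacts is None:    # first access: one round trip
                    self._contacts = list(self._fetch_all())
                return self._contacts

            def invalidate(self):             # call after inserts/updates
                self._contacts = None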

    Read the article

  • How do I control script execution time in PHP

    - by mathew
    For example, I have 5 PHP functions on a page which execute when it loads. Each function has its own processing time, and some of them sometimes take more time to complete their task; hence, the total loading time of the page is slow. My question is: how do I control the execution time of each script and set a time limit for each? I am aware that there is a built-in function in PHP called set_time_limit(), but it gives a fatal error if the time goes beyond the maximum limit...
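
    The per-step accounting itself is simple; a language-neutral sketch of measuring each function and stopping gracefully once a budget is spent, instead of hitting a fatal timeout (shown in Python; in PHP the timer would be microtime(true)):

        import time

        def run_with_budget(steps, budget_seconds):
            """Run callables in order, report each one's elapsed time, and
            stop once the overall budget is spent."""
            started = time.monotonic()
            for step in steps:
                t0 = time.monotonic()
                step()
                print(f"{step.__name__}: {time.monotonic() - t0:.2f}s")
                if time.monotonic() - started > budget_seconds:
                    print("budget exhausted; skipping the remaining steps")
                    break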

    Read the article

  • TextMate/Macfusion combo for mounting projects over SSH

    - by Sam Lee
    Here is my workflow: I use Macfusion to mount a server over SSH, and then edit the root directory of the project in TextMate (using mate /Volumes/server/projectdir). I have a plug-in installed that disables refreshing on refresh. This works ALMOST perfectly; the only thing I have problems with is "Find in Project": it's REALLY slow. Has anyone run into this problem before and been able to find any solutions? Currently I go to the terminal when I have to do a search, but it would be great to be able to do it in TextMate. Thanks!

    Read the article

  • Alternative to 'where col in (list)' for MySQL

    - by user210481
    Hi, I have the following table T:

        id   col
        1    a
        2    b
        3    a
        4    c

    I want to do a select that returns id, col for the values of col that occur more than once (GROUP BY col HAVING COUNT(col) > 1). One way of doing it is:

        SELECT id, col FROM T
        WHERE col IN (SELECT col FROM T GROUP BY col HAVING COUNT(col) > 1);

    The inner select (on the right) returns 'a', and the main one (on the left) returns 1,a and 3,a. The problem is that the WHERE ... IN statement seems to be extremely slow. In my real case, the result of the inner select has many cols, around 70,000, and the query takes hours. Right now it's much faster to run the inner select and the main select separately, getting all the ids and upcs, and do the intersection locally. MySQL should be able to handle this kind of query efficiently. Can I substitute the WHERE ... IN for a join or something faster? Thanks
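
    The join substitution the question asks about would look something like the derived-table form below, sketched through Python's DB-API for a runnable shape (pymysql and the connection details are assumptions; the SQL string is the part that matters):

        import pymysql  # assumed driver

        conn = pymysql.connect(host="localhost", user="app",
                               password="secret", database="mydb")
        cur = conn.cursor()
        cur.execute("""
            SELECT t.id, t.col
            FROM T AS t
            JOIN (SELECT col FROM T GROUP BY col HAVING COUNT(col) > 1) AS dup
              ON t.col = dup.col
        """)
        print(cur.fetchall())   # expected: ((1, 'a'), (3, 'a')) for the sample data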

    Read the article

  • Programming language for fast calculations with big integers

    - by sub
    I'm doing Project Euler problems at the moment, and I can solve most of them using my own programming language, which uses native C++ integers (so they are bound to 2^32 on my machine). However, at times there are problems which require me to work with very high numbers, and I can't do that with native integers. So I implemented a BigInt library in my language, which unfortunately gets extremely slow at times. Is there a programming language suitable for very efficient handling of big numbers? I mean that I want to do the things I could do in other programming languages with it (variables, loops, etc.), but in a faster way. If you have tips for workarounds of the 2^32 limit in my language/C++/other languages, please tell me too!
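
    For what it's worth, several languages ship arbitrary-precision integers natively (and in C++, the GMP library is the usual workaround). Python is one concrete example, where the ordinary loops-and-variables style just works past 2^32:

        # 100! has 158 digits; no overflow and no explicit BigInt type needed.
        factorial = 1
        for n in range(2, 101):
            factorial *= n
        print(len(str(factorial)))   # -> 158

        print(pow(2, 1000, 97))      # fast modular exponentiation on big ints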

    Read the article

  • Optimising (My)SQL Query

    - by Simon
    I usually use an ORM instead of SQL, and I am slightly out of touch on the different JOINs...

        SELECT `order_invoice`.*, `client`.*, `order_product`.*, SUM(product.cost) AS net
        FROM `order_invoice`
        LEFT JOIN `client` ON order_invoice.client_id = client.client_id
        LEFT JOIN `order_product` ON order_invoice.invoice_id = order_product.invoice_id
        LEFT JOIN `product` ON order_product.product_id = product.product_id
        WHERE (order_invoice.date_created >= '2009-01-01')
          AND (order_invoice.date_created <= '2009-02-01')
        GROUP BY `order_invoice`.`invoice_id`

    The tables/columns are logically named... it's a shop-type application... the query works... it's just very, very slow. I use the Zend Framework and would usually use Zend_Db_Table_Row::find(Parent|Dependent)Row(set)('TableClass'), but I have to make lots of joins, and I thought I'd improve performance by doing it all in one query instead of hundreds... Can I improve the above query by using more appropriate JOINs or a different implementation? Many thanks.

    Read the article

  • One big call vs. multiple smaller TSQL calls

    - by BrokeMyLegBiking
    I have an ADO.NET/T-SQL performance question. We have two options in our application:

    1) One big database call with multiple result sets, then in code step through each result set and populate my objects. This results in one round trip to the database.
    2) Multiple small database calls.

    There is much more code reuse with option 2, which is an advantage of that option, but I would like to get some input on the performance cost. Are two small round trips twice as slow as one big round trip to the database, or is it just a small, say 10%, performance loss? We are using C# 3.5 and SQL Server 2008 with stored procedures and ADO.NET.

    Read the article
