Search Results

Search found 6612 results on 265 pages for 'seconds'.


  • Python threading and performance?

    - by kumar
    I have a heavy I/O-bound operation: parsing large files and converting them from one format to another. Initially I did it serially, parsing one file after another, and performance was very poor (it used to take 90+ seconds). So I decided to use threading to improve performance, creating one thread for each file (4 threads):

        for file in file_list:
            t = threading.Thread(target = self.convertfile, args = file)
            t.start()
            ts.append(t)
        for t in ts:
            t.join()

    But to my astonishment there is no performance improvement whatsoever; it still takes around 90+ seconds to complete the task. As this is an I/O-bound operation, I had expected the threads to help. What am I doing wrong?
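    One likely culprit, not stated in the question: parsing and format conversion are largely CPU-bound, and CPython's GIL keeps threads serialized for CPU work, so extra threads don't help. A minimal, hypothetical sketch of a process-pool alternative (the file names and the body of convertfile below are placeholders, not the asker's code):

        # Sketch: convert files in worker processes instead of threads.
        # convertfile() is a stand-in for the asker's self.convertfile.
        from multiprocessing import Pool

        def convertfile(path):
            # Placeholder conversion: read the file, write an uppercased copy.
            with open(path) as src, open(path + ".out", "w") as dst:
                dst.write(src.read().upper())

        if __name__ == "__main__":
            file_list = ["a.txt", "b.txt", "c.txt", "d.txt"]  # placeholder paths
            pool = Pool(processes=4)          # one worker per file, as in the question
            pool.map(convertfile, file_list)  # blocks until every conversion finishes
            pool.close()
            pool.join()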

    Read the article

  • How do I decrease first load time in ASP.net MVC 2?

    - by Bill
    I have an ASP.NET MVC 2 application that runs well locally. However, when I move the files to my production server, I get a first-time lag of about 30 seconds, which I assume is the first compile. After that the application works fine. Then, after about 20-30 minutes of non-use, the application takes another 30 seconds or so to load. I did try to precompile the code, but there is still a lag during the first load. Are there any tricks to getting the application to load faster the first time? I am using ASP.NET 3.5, IIS 6, Visual Studio 2010, MVC 2. Thanks

    Read the article

  • Back button causes iFrame to delay window.onLoad event

    - by JoJo
    I serve ads through an iFrame. The ad network's servers are much slower than mine, so I asynchronously load the iFrame after the window.onLoad event:

        Event.observe(window, 'load', function() {
            $('ad').writeAttribute('ad.html');
        });

    A problem occurs when you enter the site via the browser's back button. Unexpectedly, the ad iFrame attempts to load immediately, delaying window.onLoad for a few seconds. During these few seconds the site is unusable, because I do a bunch of initialization after window.onLoad. As far as I know, this only happens in Firefox. How do I prevent this blocking load?

    Read the article

  • AlarmManager doesn't start the alarm on time

    - by user988635
    I am trying to use AlarmManager to set up an alarm that fires a given number of seconds from now. Here is my code:

        AlarmManager am = (AlarmManager)context.getSystemService(Context.ALARM_SERVICE);
        Intent intent = new Intent(ALARM_ACTION);
        PendingIntent alarmIntent = PendingIntent.getBroadcast(getBaseContext(), 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
        Calendar rightNow = Calendar.getInstance();
        rightNow.add(Calendar.SECOND, NumOfSecond);
        am.set(AlarmManager.RTC, rightNow.getTimeInMillis(), alarmIntent);

    For example, if rightNow is 8:00 AM and I want the alarm to fire after 14400 seconds, that is at 12:00 PM, NumOfSecond is set to 14400. But when the code runs, the alarm does not always fire exactly at 12:00 PM; sometimes it is delayed by 1 or 2 minutes, or even 5 minutes. Does anyone know what is happening here?

    Read the article

  • Objective-C: method to take a number and count down by 1 every second

    - by Sami
    Hi, I need a little help. I have a method that gets a value such as 50 and assigns it to trackDuration, so NSNumber *trackDuration = 50. Every second I want the method to subtract 1 from the value of trackDuration and update a label, the label being called duration. Here's what I have so far:

        - (void) countDown {
            iTunesApplication *iTunes = [SBApplication applicationWithBundleIdentifier:@"com.apple.iTunes"];
            NSNumber *trackDuration = [NSNumber numberWithDouble:[[iTunes currentTrack] duration]];
            while (trackDuration > 0) {
                trackDuration - 1;
                int inputSeconds = [trackDuration intValue];
                int hours = inputSeconds / 3600;
                int minutes = ( inputSeconds - hours * 3600 ) / 60;
                int seconds = inputSeconds - hours * 3600 - minutes * 60;
                NSString *trackDurationString = [NSString stringWithFormat:@"%.2d:%.2d:%.2d", hours, minutes, seconds];
                [duration setStringValue:trackDurationString];
                sleep(1);
            }
        }

    Any help would be much appreciated. Thanks in advance, Sami.

    Read the article

  • Python - question regarding the concurrent use of `multiprocessing`

    - by orokusaki
    I want to use Python's multiprocessing to do concurrent processing without using locks (locks, to me, are the opposite of multiprocessing), because I want to build up multiple reports from different resources at the exact same time during a web request (which normally takes about 3 seconds, but with multiprocessing I can do it in 0.5 seconds). My problem is that if I expose such a feature to the web and get 10 users pulling the same report at the same time, I suddenly have 60 interpreters open at once (which would crash the system). Is this just the common-sense result of using multiprocessing, or is there a trick to get around this potential nightmare? Thanks
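    One common way to keep this bounded, sketched here under assumptions (the report builder and resource names are hypothetical): share one fixed-size Pool across the whole web process and have every request submit its jobs to it, so each request still runs its parts concurrently but the total number of worker interpreters never grows with the number of users.

        from multiprocessing import Pool

        def build_report_part(source):
            # Hypothetical stand-in for one per-resource report builder.
            return "report for %s" % source

        def build_full_report(pool, sources):
            # Concurrent within a request, but globally capped by the pool size.
            return pool.map(build_report_part, sources)

        if __name__ == "__main__":
            shared_pool = Pool(processes=6)  # hard cap on extra interpreters
            print(build_full_report(shared_pool, ["sales", "inventory", "billing"]))
            shared_pool.close()
            shared_pool.join()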

    Read the article

  • Keep an ASP.NET IIS website responsive when time between visits is long

    - by Abel
    After years of ASP.NET development I'm actually quite surprised that I can't seem to find a satisfying solution for this. Why does an IIS ASP.NET site always seem to fall asleep (for 2-6 seconds) after a certain period of inactivity (several hours), during which no HTTP response is sent from server to client? This happens on any type of site, one page or many, db or not, regardless of the settings. How can I fix this? During the wait time the server is not busy and there are no high peaks or (.NET) memory shortages. My guess is that it has to do with Windows moving the IIS process to the background and its memory to the page file, but I'm not sure. Anybody have any idea? EDIT: one solution is to send some HTTP request once an hour or so, but I'm hoping for something more constructive. EDIT: what I meant is: after hours of inactivity, it pauses for several seconds on any new HTTP request.
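    For what it's worth, the hourly-request workaround mentioned in the first edit can be as small as a scheduled ping script; a rough sketch (the URL and interval below are placeholders, and this is the workaround, not a fix for the underlying recycling):

        import time
        import urllib.request

        SITE = "http://www.example.com/"   # placeholder URL
        INTERVAL_SECONDS = 10 * 60         # comfortably under a typical idle timeout

        def keep_alive():
            # Request the site periodically so the worker process never idles
            # long enough to be paged out or recycled.
            while True:
                try:
                    urllib.request.urlopen(SITE, timeout=30).read()
                except Exception as exc:
                    print("keep-alive request failed:", exc)
                time.sleep(INTERVAL_SECONDS)

        if __name__ == "__main__":
            keep_alive()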

    Read the article

  • Can I prevent Flash's input events from stacking up when my framerate is low?

    - by Matt W
    My Flash game targets 24 fps but slows to 10 on slower machines. This is fine, except that Flash decides to throttle the queue of incoming MouseEvents and KeyboardEvents, and they stack up and fall behind. Way behind. It's so bad that, at 10 fps, if I spam the mouse and keyboard for a few seconds, not much happens; then, after I stop, the game seems to play itself for the next 5 seconds as the events trickle in. Spooky, I know. Does anyone know a way around this? I basically need to say to Flash, "I know you think we're falling behind, but throttling the input events won't help. Give them to me as soon as you get them, please."

    Read the article

  • jQuery timer, ajax, and "nice time"

    - by Mil
    So far this is what I've got:

        $(document).ready(function () {
            $("#div p").load("/update/temp.php");
            function addOne() {
                var number = parseInt($("#div p").html());
                return number + 1;
            }
            setInterval(function () { $("#div p").text(addOne()); }, 1000);
            setInterval(function () { $("#geupdate p").load("/update/temp.php"); }, 10000);
        });

    This grabs a UNIX timestamp from temp.php and puts it into #div p, then adds 1 to it every second, and every 10 seconds it re-checks the original file to keep it up to speed. My problem is that I need to format this UNIX timestamp into something like "1 day 3 hours 56 minutes and 3 seconds ago", while also doing all the incrementing and ajax calls. I'm not very experienced with jQuery/JavaScript, so I might be missing something basic.
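    The "nice time" breakdown itself is plain integer division on the elapsed seconds; a sketch of the arithmetic in Python (a JavaScript port for the page above would follow the same steps; pluralisation is left out for brevity):

        def nice_time(elapsed_seconds):
            # Split an elapsed-seconds count into days/hours/minutes/seconds.
            days, rest = divmod(int(elapsed_seconds), 86400)
            hours, rest = divmod(rest, 3600)
            minutes, seconds = divmod(rest, 60)
            return "%d days %d hours %d minutes and %d seconds ago" % (
                days, hours, minutes, seconds)

        if __name__ == "__main__":
            print(nice_time(100563))  # "1 days 3 hours 56 minutes and 3 seconds ago"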

    Read the article

  • Visual Studio detaches from application as soon as debugging starts

    - by rwmnau
    I have a web application that I've always been able to run and debug in Visual Studio just fine (breakpoints work, I can pause execution, etc.). Recently the behavior changed suddenly, and a few things happen: I start debugging, it launches IE and loads the application, but after a few seconds (sometimes before the page has even displayed), Visual Studio acts as if debugging has stopped - I can edit code in VS again and the "Play" button on the toolbar is enabled. The application continues to run in the IE window just spawned, but I'm not attached to it. During the few seconds that VS is "debugging" before it detaches, my breakpoints show as hollow - as if I'm set to "Release" mode and they won't be hit. In fact, I have a breakpoint set in Page_Load and it skips right by. I've checked, and I'm set to Debug mode, though the compile-mode dropdown is missing from my toolbar (I checked in the build properties to ensure I was in Debug mode). Can anybody shed some light here?

    Read the article

  • Performance problem loading a DataSet inside a virtual PC

    - by ShaneH
    Host configuration: HP EliteBook 8530w, 4GB RAM, Win7 Ultimate 64-bit RC, SQL Server 2005 64-bit Developer Edition.
    Virtual machine: Windows Virtual PC, 1GB RAM allocated, Integration Services installed, Windows XP 64-bit, up-to-date service packs and .NET Framework through 3.5 SP1, sharing the gigabit network adapter of the host.

    I have a simple .NET console application which loads a dataset of approximately 37K rows. Running the application on the host takes approximately 4 seconds. Running inside the virtual machine takes 729 seconds. The size of the application grows to about 65MB when the dataset is finished loading; no calculated columns or event handlers are attached. [edit] I changed the virtual machine to use a loopback adapter to communicate with the host and performance is now on par with running on hardware. Any ideas as to why going over the network adapter would take almost 200x longer? TraceRt shows that the connection is only one hop. Thanks, Shane Holder

    Read the article

  • MATLAB Magical Mystery timing behavior

    - by Jacob Lyles
    I am experiencing some very odd timing behavior from a function I wrote. If I wrap my function inside another, empty container function, it gets a 3x speedup:

        >> tic; foo(args); toc
        time elapsed: ~140 seconds
        >> tic; bar(args); toc
        time elapsed: ~35 seconds

    Here's the kicker - the definition of bar():

        function bar(args)
            foo(args)
        end

    Is there some sort of optimization that gets triggered in MATLAB for nested function calls? Should I be adding a dummy wrapper to every function that I write?

    Read the article

  • Caching result of setUp() using Python unittest

    - by dbr
    I currently have a unittest.TestCase that looks like:

        class test_appletrailer(unittest.TestCase):
            def setUp(self):
                self.all_trailers = Trailers(res = "720", verbose = True)

            def test_has_trailers(self):
                self.failUnless(len(self.all_trailers) > 1)

            # ..more tests..

    This works fine, but the Trailers() call takes about 2 seconds to run. Given that setUp() is called before each test, the tests now take almost 10 seconds to run (with only 3 test functions). What is the correct way of caching the self.all_trailers variable between tests? Removing the setUp function and doing

        class test_appletrailer(unittest.TestCase):
            all_trailers = Trailers(res = "720", verbose = True)

    works, but then it claims "Ran 3 tests in 0.000s", which is incorrect. The only other way I could think of is to have a cache_trailers global variable (which works correctly, but is rather horrible):

        cache_trailers = None

        class test_appletrailer(unittest.TestCase):
            def setUp(self):
                global cache_trailers
                if cache_trailers is None:
                    cache_trailers = self.all_trailers = all_trailers = Trailers(res = "720", verbose = True)
                else:
                    self.all_trailers = cache_trailers
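    For what it's worth, on Python 2.7+ (or with the unittest2 backport) setUpClass() runs once per TestCase class, which avoids both the per-test two-second hit and the module-level global; a self-contained sketch in which FakeTrailers merely stands in for the real Trailers class:

        import unittest

        class FakeTrailers(list):
            # Stand-in for the slow Trailers(res=..., verbose=...) constructor.
            def __init__(self, res, verbose):
                super(FakeTrailers, self).__init__(["trailer-a", "trailer-b"])

        class TestAppleTrailer(unittest.TestCase):
            @classmethod
            def setUpClass(cls):
                # Expensive construction happens once for the whole class.
                cls.all_trailers = FakeTrailers(res="720", verbose=True)

            def test_has_trailers(self):
                self.assertTrue(len(self.all_trailers) > 1)

        if __name__ == "__main__":
            unittest.main()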

    Read the article

  • Indexing large DB's with Lucene/PHP

    - by thebluefox
    Afternoon chaps. I'm trying to index a 1.7-million-row table with the Zend port of Lucene. On small tests of a few thousand rows it worked perfectly, but as soon as I try to up the rows to a few tens of thousands, it times out. Obviously I could increase the time PHP allows the script to run, but seeing as 360 seconds gets me ~10,000 rows, I'd hate to think how many seconds it'd take to do 1.7 million. I've also tried making the script run a few thousand, refresh, and then run the next few thousand, but doing this clears the index each time. Any ideas guys? Thanks :)

    Read the article

  • Flash upload problems (FileReferenceList, timeouts, #2038 )

    - by binaryLV
    Hello! I'm having problems with timeouts while trying to upload multiple files using FileReferenceList. upload() is called in a loop for all selected files on the Event.SELECT event of FileReferenceList, but only 2 files are uploaded simultaneously (I also see 2 open sockets used for uploading by running netstat -aon | find "127.0.0.1:80"). If the upload of a file has not started within 60 seconds, I get a #2038 error. E.g., if I try to upload three large files (say 500MB each), the first two uploads start immediately, the third one is not started (because the limit is 2 simultaneous uploads), and it fails after 60 seconds with a #2038 error (I'm fairly sure this is because of the timeout - I tested it). This could be solved by calling upload() only when the previous file's upload has completed, but I don't want to hard-code the number of possible simultaneous uploads (2 on my PC). Is there any way to get/set this number at runtime?

    Read the article

  • Generating PDFs via Delayed Job while maintaining a RESTful pattern

    - by Jeff
    Hi, I'm currently running a Rails app on Heroku, and everything is working great with the exception of generating PDF documents that sometimes contain thousands of records. Heroku has a built-in timeout of 30 seconds, so if the request takes more than 30 seconds it's abandoned. That's fine, since they offer delayed_job support built in. However, all of the PDFs I generate follow a typical RESTful pattern. For instance, a request to "/posts.pdf" generates a PDF (using Prawn and prawnto) and delivers it to the browser. So my basic question is: how do I create dynamically generated PDFs with delayed_job while maintaining the basic RESTful patterns Rails so conveniently provides? Thanks.

    Read the article

  • Killing a script launched in a Process via os.system()

    - by L.J.
    I have a Python script which launches several processes. Each process basically just calls a shell script:

        from multiprocessing import Process
        import os
        import logging

        def thread_method(n = 4):
            global logger
            command = "~/Scripts/run.sh " + str(n) + " >> /var/log/mylog.log"
            if (debug):
                logger.debug(command)
            os.system(command)

    I launch several of these, which are meant to run in the background, and I want them to have a timeout so that they are killed if they exceed it:

        t = []
        for x in range(10):
            try:
                t.append(Process(target=thread_method, args=(x,)))
                t[-1].start()
            except Exception as e:
                logger.error("Error: unable to start thread")
                logger.error("Error message: " + str(e))
        logger.info("Waiting up to 60 seconds to allow threads to finish")
        t[0].join(60)
        for n in range(len(t)):
            if t[n].is_alive():
                logger.info(str(n) + " is still alive after 60 seconds, forcibly terminating")
                t[n].terminate()

    The problem is that calling terminate() on the process doesn't kill the launched run.sh script - it continues running in the background until I either force-kill it from the command line or it finishes on its own. Is there a way to have terminate() also kill the subshell created by os.system()?
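    A possible direction, assuming a POSIX system and Python 3's subprocess timeout support: os.system() never exposes the shell's PID, so terminate() on the multiprocessing.Process cannot reach it. Launching the script with subprocess.Popen in its own process group lets a timeout kill the shell, run.sh, and any children together (~/Scripts/run.sh is the asker's path, reused as-is):

        import os
        import signal
        import subprocess

        def run_with_timeout(n, timeout=60):
            command = os.path.expanduser("~/Scripts/run.sh") + " %d >> /var/log/mylog.log" % n
            # setsid puts the shell (and everything it spawns) in its own process group.
            proc = subprocess.Popen(command, shell=True, preexec_fn=os.setsid)
            try:
                proc.wait(timeout=timeout)
            except subprocess.TimeoutExpired:
                # Kill the whole group: the shell, run.sh, and its children.
                os.killpg(os.getpgid(proc.pid), signal.SIGTERM)

        if __name__ == "__main__":
            run_with_timeout(4)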

    Read the article

  • CGContextShowTextAtPoint: invalid context

    - by coure06
    I want to call a method responsible for drawing text on screen every 5 seconds. Here is my code:

        -(void) handleTimer: (NSTimer *)timer {
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextSetLineWidth(context, 2.0);
            CGContextSetStrokeColorWithColor(context, currentColor.CGColor);
            CGContextTranslateCTM(context, 145.0, 240.0);
            CGContextScaleCTM(context, 1.0, -1.0);
            CGContextSelectFont(context, "Arial", 18, kCGEncodingMacRoman);
            CGContextSetCharacterSpacing(context, 1);
            CGContextSetTextDrawingMode(context, kCGTextFillStroke);
            CGContextSetRGBStrokeColor(context, 0.5, 0.5, 1, 1);
            CGContextShowTextAtPoint(context, 100, 100, "01", 2);
        }

    But after 5 seconds, when this method is called, I get this error: CGContextShowTextAtPoint: invalid context. Another thing: how do I show a thinner font?

    Read the article

  • How to deal with Rounding-off TimeSpan?

    - by infant programmer
    I take the difference between two DateTime fields and store it in a TimeSpan variable. Now I have to round off the TimeSpan by the following rules: if the minutes in the TimeSpan are less than 30, the minutes and seconds must be set to zero; if the minutes are equal to or greater than 30, the hours must be incremented by 1 and the minutes and seconds set to zero. The TimeSpan can also be negative, in which case I need to preserve the sign. I was able to achieve the requirement when the TimeSpan isn't negative, but the code I have written is bulky and inefficient. Please suggest a simpler and more efficient method. Thanks, regards.
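    The rounding rule is easiest to state on the absolute value and then re-apply the sign; a sketch of the arithmetic using Python's timedelta, intended only to illustrate the rule (a C# TimeSpan version would follow the same shape):

        from datetime import timedelta

        def round_to_hour(span):
            # Round to the nearest whole hour, preserving the sign.
            sign = -1 if span < timedelta(0) else 1
            total_seconds = abs(span).days * 86400 + abs(span).seconds
            hours, remainder = divmod(total_seconds, 3600)
            if remainder >= 30 * 60:  # 30 minutes or more rounds the hour up
                hours += 1
            return sign * timedelta(hours=hours)

        if __name__ == "__main__":
            assert round_to_hour(timedelta(hours=2, minutes=29)) == timedelta(hours=2)
            assert round_to_hour(timedelta(hours=-2, minutes=-31)) == timedelta(hours=-3)
            print("rounding rule holds")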

    Read the article

  • SQL query duration is longer for smaller dataset?

    - by entens
    I received reports that my report-generating application was not working. After my initial investigation, I found that the SQL transaction was timing out. I'm mystified as to why the query for a smaller selection of items takes so much longer to return results.

    Quick query (averages 4 seconds to return):

        SELECT * FROM Payroll
        WHERE LINEDATE >= '04-17-2010' AND LINEDATE <= '04-24-2010'
        ORDER BY 'EMPLYEE_NUM' ASC, 'OP_CODE' ASC, 'LINEDATE' ASC

    Long query (averages 1 minute 20 seconds to return):

        SELECT * FROM Payroll
        WHERE LINEDATE >= '04-18-2010' AND LINEDATE <= '04-24-2010'
        ORDER BY 'EMPLYEE_NUM' ASC, 'OP_CODE' ASC, 'LINEDATE' ASC

    I could simply increase the timeout on the SqlCommand, but that doesn't change the fact that the query is taking longer than it should. Why would requesting a subset of the items take longer than the query that returns more data? How can I optimize this query?

    Read the article

  • Python - calendar.timegm() vs. time.mktime()

    - by ibz
    I seem to have a hard time getting my head around this. What's the difference between calendar.timegm() and time.mktime()? Say I have a datetime.datetime with no tzinfo attached; shouldn't the two give the same output? Don't they both give the number of seconds between the epoch and the date passed as a parameter? And since the date passed has no tzinfo, isn't that number of seconds the same?

        >>> import calendar
        >>> import time
        >>> import datetime
        >>> d = datetime.datetime(2010, 10, 10)
        >>> calendar.timegm(d.timetuple())
        1286668800
        >>> time.mktime(d.timetuple())
        1286640000.0
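    The short answer suggested by those numbers: time.mktime() interprets the tuple as local time while calendar.timegm() interprets it as UTC, so for a naive datetime the two differ by exactly the local UTC offset (28800 seconds, i.e. apparently UTC+8, for the output quoted above). A small sketch:

        import calendar
        import datetime
        import time

        d = datetime.datetime(2010, 10, 10)      # naive, no tzinfo
        as_utc = calendar.timegm(d.timetuple())  # tuple treated as UTC
        as_local = time.mktime(d.timetuple())    # tuple treated as local time
        # The gap is this machine's offset east of UTC, in seconds.
        print(as_utc, as_local, as_utc - as_local)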

    Read the article

  • Magento cache wrong read permissions?

    - by Lucasmus
    There seems to be a problem in Magento's reading of the var/cache directory. I've disabled full-page caching for testing. When I execute chmod -R 777 var/cache/ before loading the page, it loads ~3 seconds quicker (the time it takes before 'mage::dispatch::routers_match' is reached in the Profiler drops from ~4 seconds to ~1 second). This speed-up lasts a while but is then lost until the chmod is run again. I'm guessing this has to do with write permissions somehow? The odd thing is that, as far as I know, the cache contents are owned by the process that is executing Magento (the web user). Does anyone have any clue what the problem could be, or what could be changed to prevent this?

    Read the article

  • JFrame does not refresh after deleting an image

    - by dajackal
    Hi! I'm working with images in a JFrame for the first time, and I have some problems. I succeeded in putting an image on my JFrame, and now I want to remove the image from the JFrame after 2 seconds. But after 2 seconds the image does not disappear, unless I resize the frame or minimize and then maximize it. Help me if you can. Thanks. Here is the code:

        File f = new File("2.jpg");
        System.out.println("Picture " + f.getAbsolutePath());
        BufferedImage image = ImageIO.read(f);
        MyBufferedImage img = new MyBufferedImage(image);
        img.resize(400, 300);
        img.setSize(400, 300);
        img.setLocation(50, 50);
        getContentPane().add(img);
        this.setSize(600, 400);
        this.setLocationRelativeTo(null);
        this.setVisible(true);
        Thread.sleep(2000);
        System.out.println("2 seconds over");
        getContentPane().remove(img);

    Here is the MyBufferedImage class:

        public class MyBufferedImage extends JComponent {
            private BufferedImage image;
            private int nPaint;
            private int avgTime;
            private long previousSecondsTime;

            public MyBufferedImage(BufferedImage b) {
                super();
                this.image = b;
                this.nPaint = 0;
                this.avgTime = 0;
                this.previousSecondsTime = System.currentTimeMillis();
            }

            @Override
            public void paintComponent(Graphics g) {
                Graphics2D g2D = (Graphics2D) g;
                g2D.setColor(Color.BLACK);
                g2D.fillRect(0, 0, this.getWidth(), this.getHeight());
                long currentTimeA = System.currentTimeMillis();
                //g2D.drawImage(this.image, 320, 0, 0, 240, 0, 0, 640, 480, null);
                g2D.drawImage(image, 0, 0, null);
                long currentTimeB = System.currentTimeMillis();
                this.avgTime += currentTimeB - currentTimeA;
                this.nPaint++;
                if (currentTimeB - this.previousSecondsTime > 1000) {
                    System.out.format("Drawn FPS: %d\n", nPaint++);
                    System.out.format("Average time of drawings in the last sec.: %.1f ms\n", (double) this.avgTime / this.nPaint++);
                    this.previousSecondsTime = currentTimeB;
                    this.avgTime = 0;
                    this.nPaint = 0;
                }
            }
        }

    Read the article

  • How to use sleep in an iPhone application

    - by Pugal Devan
    Hi, I load a default image in my application, so I have put Sleep(3); in my delegate.m method. But sometimes it takes more than 6 to 7 minutes. I want to display the image for only 3 seconds and then go on to my application, based on my requirements. Which is the best way to do that: Sleep(3), [NSThread sleepForTimeInterval:3.0], or something else? And I must display the image for 3 seconds only. (Note: I declared only setter and getter methods in my delegate class.) Please explain.

    Read the article
