Search Results

Search found 61615 results on 2465 pages for 'execution time'.


  • Run a Java thread at specific times

    - by rmarimon
    I have a web application that synchronizes with a central database four times per hour. The process usually takes 2 minutes. I would like to run this process as a thread at X:55, X:10, X:25, and X:40, so that users know that at X:00, X:15, X:30, and X:45 they have a clean copy of the database. It is just about managing expectations. I have gone through the executors in java.util.concurrent, but scheduling is done with scheduleAtFixedRate, which I believe provides no guarantee about when the task actually runs relative to the hour. I could use an initial delay so that the first run lands close to the intended time and then schedule it every 15 minutes, but it seems that this would probably drift over time. Is there an easier way to schedule the thread to run 5 minutes before every quarter hour?
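
    A minimal sketch of one approach, not taken from the question: compute the delay to the next X:10 / X:25 / X:40 / X:55 mark and reschedule after every run, so the schedule stays anchored to the wall clock instead of drifting. The class and method names are illustrative.

        import java.util.Calendar;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class QuarterHourSync {
            private final ScheduledExecutorService scheduler =
                    Executors.newSingleThreadScheduledExecutor();

            public void start(final Runnable syncTask) {
                scheduler.schedule(new Runnable() {
                    public void run() {
                        try {
                            syncTask.run();
                        } finally {
                            start(syncTask);   // re-anchor to the clock for the next run
                        }
                    }
                }, millisToNextMark(), TimeUnit.MILLISECONDS);
            }

            // Delay until the next minute mark in {10, 25, 40, 55}.
            private long millisToNextMark() {
                Calendar now = Calendar.getInstance();
                Calendar next = (Calendar) now.clone();
                next.set(Calendar.SECOND, 0);
                next.set(Calendar.MILLISECOND, 0);
                int minute = now.get(Calendar.MINUTE);
                int target = -1;
                for (int mark : new int[] {10, 25, 40, 55}) {
                    if (mark > minute) { target = mark; break; }
                }
                if (target == -1) {                // past X:55, roll over to the next hour
                    next.add(Calendar.HOUR_OF_DAY, 1);
                    target = 10;
                }
                next.set(Calendar.MINUTE, target);
                return next.getTimeInMillis() - now.getTimeInMillis();
            }
        }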

    Read the article

  • .NET JIT Code Cache leaking?

    - by pitchfork
    We have a server component written in .NET 3.5. It runs as a service on Windows Server 2008 Standard Edition. It works great, but after some time (days) we notice massive slowdowns and an increased working set. We suspected some kind of memory leak and used WinDBG/SOS to analyze dumps of the process. Unfortunately the GC heap doesn't show any leak, but we noticed that the JIT code heap has grown from 8 MB after the start to more than 1 GB after a few days. We don't use any dynamic code generation techniques of our own. We use Linq2SQL, which is known for dynamic code generation, but we don't know if it can cause such a problem. The main question is whether there is any technique to analyze the dump and check where all these Host Code Heap blocks shown in the WinDBG dumps come from.

    [Update] In the meantime we did some more analysis and identified Linq2SQL as the probable suspect, especially since we do not use precompiled queries. The following example program creates exactly the same behaviour, where more and more Host Code Heap blocks are created over time:

        using System;
        using System.Linq;
        using System.Threading;

        namespace LinqStressTest
        {
            class Program
            {
                static void Main(string[] args)
                {
                    for (int i = 0; i < 100; ++i)
                        ThreadPool.QueueUserWorkItem(Worker);

                    while (runs < 1000000)
                    {
                        Thread.Sleep(5000);
                    }
                }

                static void Worker(object state)
                {
                    for (int i = 0; i < 50; ++i)
                    {
                        using (var ctx = new DataClasses1DataContext())
                        {
                            long id = rnd.Next();
                            var x = ctx.AccountNucleusInfos.Where(an => an.Account.SimPlayers.First().Id == id).SingleOrDefault();
                        }
                    }
                    var localruns = Interlocked.Add(ref runs, 1);
                    System.Console.WriteLine("Action: " + localruns);
                    ThreadPool.QueueUserWorkItem(Worker);
                }

                static Random rnd = new Random();
                static long runs = 0;
            }
        }

    When we replace the Linq query with a precompiled one, the problem seems to disappear.

    Read the article

  • Creating a database desktop application with data manipulation in NetBeans using Java Persistence

    - by Lulu
    It's my first time using persistence to develop a Java program, because I usually connect via JDBC. I read that for large amounts of data it is best to use persistence. I tried playing with the CRUD example in NetBeans. It's not very helpful though, because it only connects to the DB and allows addition and deletion of records. I need something that will allow me to manipulate the data: for example, if the value in column C1 of table T1 is such-and-such, it should retrieve data from table T2. In short, I need to apply conditions before knowing what exactly to retrieve. The CRUD example already has a specific table to retrieve and only acts like a database manager. How is it possible to retrieve a specific item first and, based on that, determine the next steps to take? I'm also using embedded JavaDB/Derby as my database (also my first time using it, because I usually use remote MySQL).
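
    A minimal sketch of this kind of conditional retrieval with plain JPA; the persistence-unit name, the entities T1Entity and T2Entity and their getters are hypothetical placeholders, not taken from the CRUD example.

        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class ConditionalLookup {
            public static void main(String[] args) {
                // "MyDerbyPU" is whatever persistence unit persistence.xml defines (hypothetical name)
                EntityManagerFactory emf = Persistence.createEntityManagerFactory("MyDerbyPU");
                EntityManager em = emf.createEntityManager();
                try {
                    // Step 1: fetch the row of interest from T1 and inspect column C1 (id 42 is illustrative).
                    T1Entity t1 = em.find(T1Entity.class, 42L);

                    // Step 2: decide what to load from T2 based on that value
                    // (JPA 2.0 typed queries; with JPA 1.0 use the untyped createQuery(String)).
                    List<T2Entity> related;
                    if ("SPECIAL".equals(t1.getC1())) {
                        related = em.createQuery(
                                "SELECT t2 FROM T2Entity t2 WHERE t2.category = :cat", T2Entity.class)
                                .setParameter("cat", t1.getC1())
                                .getResultList();
                    } else {
                        related = em.createQuery(
                                "SELECT t2 FROM T2Entity t2 WHERE t2.owner = :owner", T2Entity.class)
                                .setParameter("owner", t1.getOwner())
                                .getResultList();
                    }
                    System.out.println("Loaded " + related.size() + " rows from T2");
                } finally {
                    em.close();
                    emf.close();
                }
            }
        }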

    Read the article

  • Multiple calls to AlarmManager.setRepeating deliver the same Intent/PendingIntent extra values, but

    - by Chris Boyle
    Solved while writing this question, but posting in case it helps anyone. I'm setting multiple alarms like this, with different values of id:

        AlarmManager alarms = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        Intent i = new Intent(MyReceiver.ACTION_ALARM);   // "com.example.ALARM"
        i.putExtra(MyReceiver.EXTRA_ID, id);              // "com.example.ID", 2
        PendingIntent p = PendingIntent.getBroadcast(context, 0, i, 0);
        alarms.setRepeating(AlarmManager.RTC_WAKEUP, nextMillis, 300000, p);  // 5 mins

    ...and receiving them like this:

        public void onReceive(Context context, Intent intent) {
            if (intent.getAction().equals(ACTION_ALARM)) {
                // It's time to sound/show an alarm
                final long id = intent.getLongExtra(EXTRA_ID, -1);
                ...
            }
        }

    The alarm is delivered to my receiver at the right times, but often with EXTRA_ID set to the wrong value: it's a value that I have used at some point, just not the one that I wanted delivered at that particular time.
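
    The usual fix, and likely what the asker found: PendingIntents whose Intents differ only in their extras are treated as the same PendingIntent, so the extras of the first one registered get reused. Giving each alarm a distinct requestCode (and optionally FLAG_UPDATE_CURRENT) keeps them separate. A minimal sketch using the names from the question:

        Intent i = new Intent(MyReceiver.ACTION_ALARM);
        i.putExtra(MyReceiver.EXTRA_ID, id);
        PendingIntent p = PendingIntent.getBroadcast(
                context,
                (int) id,                            // unique requestCode per alarm
                i,
                PendingIntent.FLAG_UPDATE_CURRENT);  // refresh the extras if re-registered
        alarms.setRepeating(AlarmManager.RTC_WAKEUP, nextMillis, 300000, p);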

    Read the article

  • Java: NetBeans debugging session works faster than normal run

    - by Martijn Courteaux
    Hello, I'm making Braid in NetBeans 6.7.1.

    Computer spec: Windows 7, 46 running processes, +/- 650 running threads, NVidia GeForce 9200M GS, Intel Core 2 Duo CPU P8400 @ 2.26 GHz.

    Game spec with a normal run: memory between 80 MB and 110 MB, CPU between 9% and 20%, CPU when rewinding time: 90%. The debugging session shows the same values, except when I rewind time: CPU 20%. Is there any reason for this? Is there a way to reach the same performance with a normal run?

    This is my repaint code:

        @Override
        public void repaint() {
            BufferStrategy bs = getBufferStrategy();   // numBuffers: 4
            Graphics g = bs.getDrawGraphics();
            g.setColor(Color.BLACK);
            g.fillRect(-1, -1, 2000, 2000);
            gamePanel.paint(g.create(x, y, gameDim.width, gameDim.height));
            bs.show();
            g.dispose();
            Toolkit.getDefaultToolkit().sync();
            update(g);
        }

    The game runs in fullscreen (undecorated + frame.size = screensize).

    Martijn

    Read the article

  • Using Qt Creator (with Qwt), really basic stuff

    - by Dago
    I'm trying to make a Qt program using Qt Creator and Qwt for plotting. I've never used Qt Creator before. I've created a main window and added a QwtPlot widget to it (object name: qwtPlot). The widget shows up in the program when I compile and run it, but there's no mention of the qwtPlot object anywhere in the (autogenerated) code, so I assume it is being added at compile time from the .ui XML file (or something). My question is: how do I modify/change the qwtPlot object, and where should I place that code? I'm having a hard time articulating my question, but it basically is "How do I do anything with the qwtPlot widget which is created (graphically) with Qt Creator?" I've checked some tutorials, but in the tutorials they add the widgets manually in the code, whereas I'd like to use Qt Creator (because my UI will be fairly complicated). This whole Qt Creator thing is pretty confusing...

    Read the article

  • Playing sound while a UIImageView animates crashes the app?

    - by Rahul Vyas
    Hello all, I am having a strange problem in my app. I am creating an app in which I have 9 buttons on the view, and I want to play a sound and an animation at the same time. I am using UIImageView for this, but I am seeing strange behaviour: sometimes the sound plays early and then the animation runs. So how do I play them at the same time? I am also getting a crash on the device on a specific animation in which I have 60 JPG files. What's the best way to do this?

    Read the article

  • HTTP vs FTP upload

    - by Richard Knop
    I am building a large website where members will be allowed to upload content (images, videos) up to 20 MB in size (maybe a little less, like 15 MB; we haven't settled on a final upload limit yet, but it will be somewhere between 10-25 MB). My question is: should I go with HTTP or FTP upload in this case? Bear in mind that 80-90% of uploads will be smaller, around 1-3 MB, but from time to time some members will also want to upload large files (10 MB+). Is HTTP uploading reliable enough for such large files, or should I go with FTP? Is there a noticeable speed difference between HTTP and FTP while uploading files? I am asking because I'm using Zend Framework, which already has an HTTP adapter for file uploads; if I choose FTP I would have to write my own adapter for it. Thanks!

    Read the article

  • C# ValueMember property repopulates the control?

    - by karthik
    Hi, I just wanted to confirm a couple of things.

    I) Code snippet:

        cmb1.DataSource = dt;
        cmb1.ValueMember = "value";

    Here, does data population happen twice for the control (one extra time, because ValueMember changes after the data source is assigned)? Is that true?

    II) How can I trace this repopulation in C#? I just want to debug, observe it and confirm. An example, please?

    Thanks, Karthik

    Read the article

  • Multiple lines of text to a single map

    - by steven
    I've been trying to use Hadoop to send N lines at a time to a single map call; I don't require the lines to be split already. I've tried to use NLineInputFormat, however that sends N lines of text from the data to each mapper one line at a time [giving up after the Nth line]. I have tried to set the option, and it still only takes N lines of input, sending them one line at a time to each map:

        job.setInt("mapred.line.input.format.linespermap", 10);

    I've found a mailing list recommending that I override LineRecordReader::next, however that is not that simple, as the internal data members are all private. I've just checked the source for NLineInputFormat and it hard-codes LineReader, so overriding will not help. Also, by the way, I'm using Hadoop 0.18 for compatibility with Amazon's EC2 MapReduce.
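
    A sketch of one workaround, not from the thread, assuming the grouping can happen inside the mapper rather than in the input format: buffer N lines in the mapper and process them as a batch, flushing the remainder in close(). It only relies on the old (0.18-era) mapred API; the output key/value choice here is illustrative.

        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapred.MapReduceBase;
        import org.apache.hadoop.mapred.Mapper;
        import org.apache.hadoop.mapred.OutputCollector;
        import org.apache.hadoop.mapred.Reporter;

        public class BufferingMapper extends MapReduceBase
                implements Mapper<LongWritable, Text, Text, Text> {

            private static final int LINES_PER_GROUP = 10;            // "N"
            private final List<String> buffer = new ArrayList<String>();
            private OutputCollector<Text, Text> out;

            public void map(LongWritable key, Text value,
                            OutputCollector<Text, Text> output, Reporter reporter)
                    throws IOException {
                out = output;
                buffer.add(value.toString());
                if (buffer.size() == LINES_PER_GROUP) {
                    emitGroup();                                       // process N lines together
                }
            }

            @Override
            public void close() throws IOException {
                if (!buffer.isEmpty()) {
                    emitGroup();                                       // flush the last partial group
                }
            }

            private void emitGroup() throws IOException {
                StringBuilder joined = new StringBuilder();
                for (String line : buffer) {
                    joined.append(line).append('\n');
                }
                out.collect(new Text("group"), new Text(joined.toString()));
                buffer.clear();
            }
        }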

    Read the article

  • GDB hardware watchpoint very slow - why?

    - by Laurynas Biveinis
    On a large C application, I have set a hardware watchpoint on a memory address as follows:

        (gdb) watch *((int*)0x12F5D58)
        Hardware watchpoint 3: *((int*)0x12F5D58)

    As you can see, it's a hardware watchpoint, not a software one (a software watchpoint would explain the slowness). Yet the application's running time under the debugger has gone from less than ten seconds to one hour and counting. The watchpoint has triggered three times so far, the first time after 15 minutes, when the memory page containing the address was made readable by sbrk. Surely during those 15 minutes the watchpoint should have been efficient, since the memory page was inaccessible? And that still does not explain why it's so slow afterwards. The GDB version is:

        $ gdb --version
        GNU gdb (GDB) 7.0-ubuntu
        [...]

    Thanks in advance for any ideas as to what might be the cause or how to fix/work around it.

    Read the article

  • Why do I get rows of zeros in my 2D FFT?

    - by Nicholas Pringle
    I am trying to replicate the results from a paper: "Two-dimensional Fourier Transform (2D-FT) in space and time along sections of constant latitude (east-west) and longitude (north-south) were used to characterize the spectrum of the simulated flux variability south of 40degS." - Lenton et al. (2006). The figures published show "the log of the variance of the 2D-FT". I have tried to create an array consisting of the seasonal cycle of similar data as well as the noise. I have defined the noise as the original array minus the signal array. Here is the code that I used to plot the 2D-FT of the signal array averaged in latitude:

        import numpy as np
        from numpy import ma
        from matplotlib import pyplot as plt
        from Scientific.IO.NetCDF import NetCDFFile

        ### input directory
        indir = '/home/nicholas/data/'

        ### get the flux data, which is in [time (5-day averages for 10 years), latitude, longitude]
        nc = NetCDFFile(indir + 'CFLX_2000_2009.nc', 'r')
        cflux_southern_ocean = nc.variables['Cflx'][:, 10:50, :]
        cflux_southern_ocean = ma.masked_values(cflux_southern_ocean, 1e+20)  # mask land
        nc.close()
        cflux = cflux_southern_ocean * 1e08  # change units of data from mmol/m^2/s

        ### create an array that consists of the seasonal signal for each pixel
        year_stack = np.split(cflux, 10, axis=0)
        year_stack = np.array(year_stack)
        signal_array = np.tile(np.mean(year_stack, axis=0), (10, 1, 1))
        signal_array = ma.masked_where(signal_array > 1e20, signal_array)  # need to mask

        ### average the array over latitude (or longitude)
        signal_time_lon = ma.mean(signal_array, axis=1)

        ### do a 2D Fourier Transform of the time/space image
        ft = np.fft.fft2(signal_time_lon)
        mgft = np.abs(ft)
        ps = mgft**2
        log_ps = np.log(mgft)
        log_mgft = np.log(mgft)

    Every second row of the ft consists completely of zeros. Why is this? Would it be acceptable to add a small random number to the signal to avoid this?

        signal_time_lon = signal_time_lon + np.random.randint(0, 9, size=(730, 182)) * 1e-05

    EDIT: Adding images and clarifying meaning. The output of rfft2 still appears to be a complex array. Using fftshift shifts the edges of the image to the centre; I still have a power spectrum regardless. I expect that the reason I get rows of zeros is that I have re-created the time series for each pixel. The ft[0, 0] pixel contains the mean of the signal, so ft[1, 0] corresponds to a sinusoid with one cycle over the entire signal in the rows of the starting image. Here is the starting image, plotted with the following code:

        plt.pcolormesh(signal_time_lon); plt.colorbar(); plt.axis('tight')

    Here is the result, using the following code:

        ft = np.fft.rfft2(signal_time_lon)
        mgft = np.abs(ft)
        ps = mgft**2
        log_ps = np.log1p(mgft)
        plt.pcolormesh(log_ps); plt.colorbar(); plt.axis('tight')

    It may not be clear in the image, but it is only every second row that consists completely of zeros. Every tenth pixel (log_ps[10, 0]) is a high value; the other pixels (log_ps[2, 0], log_ps[4, 0], etc.) have very low values.

    Read the article

  • Setting the Identity/Principal from a MessageInspector in WCF

    - by Robert Wagner
    I am developing a WCF service that receives the user's credentials in the SOAP header. These credentials are read on the server side using a MessageInspector. So far so good. I want to set the Thread.CurrentPrincipal to a custom principal (CustomPrincipal), but when I do this from the MessageInspector, it gets overridden by the time the service is invoked. When is the best time to set the principal? Also what is the best way to pass the principal, identity or credentials from the inspector to that location?

    Read the article

  • How to prevent traffic to/from a slow Cassandra node using Python

    - by Sergio Ayestarán
    Intro: I have a Python application using a Cassandra 1.2.4 cluster with a replication factor of 3; all reads and writes are done with a consistency level of 2. To access the cluster I use the CQL library. The Cassandra cluster is running on Rackspace's virtual servers. The problem: from time to time one of the nodes can become slower than usual. In this case I want to be able to detect the situation, prevent requests from going to the slow node and, if possible, stop using it entirely (this should theoretically be possible since the RF is 3 and the CL is 2 for every single request). The questions: What's the best way of detecting the slow node from a Python application? Is there a way to stop using one of the Cassandra nodes from Python in this scenario without human intervention? Thanks in advance!

    Read the article

  • Java: updating a text area

    - by n0ob
    For my application I have to build a little customized time ticker which ticks over after whatever delay I tell it to and writes the new value into my text area. The problem is that the ticker runs all the way to the termination time and only then prints all the values. How can I make the text area change while the code is running?

        while (tick < terminationTime) {
            if ((System.currentTimeMillis()) > (msNow + delay)) {
                msNow = System.currentTimeMillis();
                tick = tick + 1;
                currentTime.setText("" + tick);
                sourceTextArea.append("" + tick + " " + System.currentTimeMillis() + " \n");
            }
        }

    currentTime and sourceTextArea are both text areas, and both only get updated after the while loop ends.
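
    A minimal sketch of the usual fix, assuming this is a Swing UI and the loop above runs on the Event Dispatch Thread (the typical cause of this symptom): while the loop spins, the EDT cannot repaint, so nothing appears until it ends. Driving the tick from a javax.swing.Timer keeps the EDT free. This assumes tick, terminationTime, msNow, currentTime and sourceTextArea are fields of the enclosing class and that delay is in milliseconds.

        javax.swing.Timer ticker = new javax.swing.Timer((int) delay, new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent e) {
                tick = tick + 1;                                   // same counter as before
                currentTime.setText("" + tick);
                sourceTextArea.append(tick + " " + System.currentTimeMillis() + " \n");
                if (tick >= terminationTime) {
                    ((javax.swing.Timer) e.getSource()).stop();    // done: stop ticking
                }
            }
        });
        ticker.start();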

    Read the article

  • What might cause SQL error 'Cannot find the object "dbo.InspectionEvents" because it does not exist or you do not have permissions'?

    - by mrtrombone
    Hi, my ASP.NET app periodically gets the error 'Cannot find the object dbo."XXXX" because it does not exist or you do not have permissions' when it tries to execute a specific stored procedure that writes to the database. I have seen a few forum posts about this issue, but the strange thing is that the method works fine almost all of the time; just every now and then I see the error in my logs. Can anyone tell me why this might work OK most of the time but occasionally fire the error? The application is C# using Enterprise Library 4.1 Data Access; the database is SQL Server 2005. Cheers

    Read the article

  • How do I quickly search through a .csv file in Python?

    - by Baldur
    I'm reading a 6 million entry .csv file with Python, and I want to be able to search through this file for a particular entry. Are there any tricks to search the entire file? Should you read the whole thing into a dictionary, or should you perform a search every time? I tried loading it into a dictionary, but that took ages, so I'm currently searching through the whole file every time, which seems wasteful. Could I possibly exploit the fact that the list is alphabetically ordered? (e.g. if the search word starts with "b", I only search from the line that includes the first word beginning with "b" to the line that includes the last word beginning with "b") I'm using import csv. (A side question: is it possible to make csv go to a specific line in the file? I want to make the program start at a random line.) Edit: I already have a copy of the list as an .sql file as well; how could I use that from Python?

    Read the article

  • To Wrap or Not to Wrap: Wrapping Data Access in a Service Facade

    - by PureCognition
    For a while now, my team and I have been wrapping our data access layer in a web service facade (using WCF) and calling it from the business logic layer. Meanwhile, we could simply use the repository pattern where the business logic layer consumes the data access layer locally through an interface, and at any point in time, we can switch things out for it to hit a service instead (if necessary). The question is: When is it a good time to wrap the data access layer in a service facade and when isn't it? Right now, it seems like the main advantage is that other applications can consume the service, but if they are internal applications written in .NET then they can just consume the .NET assembly instead. Are there other advantages of having the DAL be wrapped in a service that I am unaware of?

    Read the article

  • What do you do when every possible business idea is already taken?

    - by jonathanconway
    I have a bit of free time and lots of enthusiasm for software and the web. I want to make a start-up, to sell kind of product or online service, but I'm having a hard time coming up with business ideas that haven't already been implemented. For example, I thought of making an e-ordering website for ordering food from restaurants online. Good thing I typed it into Google, because the market is already full of hundreds of websites doing the same thing and competing heavily. The same thing has happened with so many other business ideas I've become excited and passionate about - they're all taken. What's your response to this? Do you agree that all the good ideas seem to be taken? Or do you think there is room for new businesses, and that I'm just not thinking (or looking) hard enough? Have you ever tried idea after idea, only to find that it was already being done, and you had to move onto something else?

    Read the article

  • Is it too early to start designing for Task Parallel Library?

    - by Joe Erickson
    I have been following the development of the .NET Task Parallel Library (TPL) with great interest since Microsoft first announced it. There is no doubt in my mind that we will eventually take advantage of TPL. What I am questioning is whether it makes sense to start taking advantage of TPL when Visual Studio 2010 and .NET 4.0 are released, or whether it makes sense to wait a while longer.

    Why start now? The .NET 4.0 Task Parallel Library appears to be well designed, and some relatively simple tests demonstrate that it works well on today's multi-core CPUs. I have been very interested in the potential advantages of using multiple lightweight threads to speed up our software since buying my first quad-processor Dell Poweredge 6400 about seven years ago. Experiments at that time indicated that it was not worth the effort, which I attributed largely to the overhead of moving data between each CPU's cache (there was no shared cache back then) and RAM. Competitive advantage: some of our customers can never get enough performance, and there is no doubt that we can build a faster product using TPL today. It sounds fun: yes, I realize that some developers would rather poke themselves in the eye with a sharp stick, but we really enjoy maximizing performance.

    Why wait? Are today's Intel Nehalem CPUs representative of where we are going as multi-core support matures? You can purchase a Nehalem CPU with 4 cores which share a single level 3 cache today, and most likely a 6-core CPU sharing a single level 3 cache by the time Visual Studio 2010 / .NET 4.0 are released. Obviously, the number of cores will go up over time, but what about the architecture? As the number of cores goes up, will they still share a cache? One issue with Nehalem is the fact that, even though there is a very fast interconnect between the cores, they have non-uniform memory access (NUMA), which can lead to lower performance and less predictable results. Will future multi-core architectures be able to do away with NUMA? Similarly, will the .NET Task Parallel Library change as it matures, requiring modifications to code to fully take advantage of it?

    Limitations: our core engine is 100% C# and has to run without full trust, so we are limited to using .NET APIs.

    Read the article

  • Ruby - Is it possible to pass a block received as a param to another function as an actual block?

    - by Markus O'Reilly
    This is what I'm trying to do:

        def call_block(in_class = "String", &block)
          instance = eval("#{in_class}.new")
          puts "instance class: #{instance.class}"
          instance.instance_eval { block.call }
        end

        # --- TEST EXAMPLE ---
        # This outputs "class: String" every time
        "sdlkfj".instance_eval { puts "class: #{self.class}" }

        # This will only output "class: Object" every time
        # I'm trying to get this to output "class: String" though
        call_block("String") { puts "class: #{self.class}" }

    On the line where it says "instance.instance_eval { block.call }", I'm trying to find another way to make the new instance variable run instance_eval on the block. The only way I can think of to get it to do that is to pass instance_eval the original block, not as a variable or anything, but as a real block like in the test example. Any tips?

    Read the article

  • Concurrency problem with isolation level read-committed

    - by Ratn Deo--Dev
    I have to write a simple demo of withdrawing money from a joint bank account. Andy and Jen hold a joint bank account with number 123. Suppose they have $100 in their account. Jen and Andy are operating the account at the same time and both are trying to withdraw $90 at the same moment. My transaction isolation is set to read-committed, and both are able to withdraw money, leaving the balance at -$80, although I have a constraint that the balance should never be less than 0. I am using Hibernate. Is versioning the only way to solve this problem, or should I go for another isolation level?
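
    One way to apply the versioning idea, sketched with JPA/Hibernate optimistic locking (the entity and field names are hypothetical, not from the question): with a @Version column, the second of two concurrent withdrawals based on the same starting balance fails with a stale-state/optimistic-lock exception instead of silently overdrawing, and the application can retry or reject it.

        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.Version;

        @Entity
        public class Account {
            @Id
            private long number;

            private double balance;

            @Version              // incremented on every update; an update made against a stale
            private long version; // version fails instead of being applied over the other one

            public void withdraw(double amount) {
                if (balance - amount < 0) {
                    throw new IllegalStateException("insufficient funds");
                }
                balance -= amount;
            }
        }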

    Read the article

  • Flex 3: format a date without the timezone

    - by Maurits de Boer
    I'm receiving a date from a server as milliseconds since 1-1-1970. I then use the DateFormatter to print the date to the screen. However, Flex adds the time difference and thus displays a different time than what I got from the server. I've fixed this by changing the date before printing it to the screen, but I think that's a bad solution because the date object then doesn't hold the correct date. Does anyone know how to use the DateFormatter to print the date, ignoring the timezone? This is how I did it:

        function getDateString(value:Date):String {
            var millisecondsPerMinute:int = 1000 * 60;
            var newDate:Date = new Date(value.time - (millisecondsPerMinute * value.timezoneOffset));
            var dateFormatter:DateFormatter = new DateFormatter();
            dateFormatter.formatString = "EEEE DD-MM-YYYY LL:MM AA";
            return dateFormatter.format(newDate);
        }

    Read the article

  • Best practices for using memcached in Rails?

    - by Matt
    Hello everybody, as database transactions in our app are getting more and more time-consuming, we have started to use memcached to reduce the number of queries sent to MySQL. All in all, it works fine and really saves a lot of time. But as caching "silently appeared" as a workaround to give the app more juice, a lot of our models now contain code like this:

        def self.all_cached
          Rails.cache.fetch('object_name') {
            find(:all, :include => [associations])
          }
        end

    This is getting more and more of a pain, as filling and flushing the cache happens in several classes across the application. Now, I was wondering if there is a better way to abstract the memcached logic to make it more powerful and easy to use across all needed models. I was thinking about having some kind of memcached module which is included in all needed models. But before playing around, I thought: let's ask the experts first :-) Thanks, Matt

    Read the article

  • Pause and Resume AsyncTasks? (Android)

    - by Matt Swanson
    I have an AsyncTask that acts as a countdown timer for my game. When it completes the countdown it displays the out of time end screen and it also updates the timer displayed on the screen. Everything works fine, except I need to be able to pause and resume this when the pause button in the game is pressed. If I cancel it and try to re-execute it, it crashes with an IllegalStateException. If I cancel it and instantiate a new AsyncTask in its place the old one begins to run again and the new one runs at the same time. Is there a way to cancel/pause the timer and restart it using AsyncTasks or is there a different way I should be going about doing this?
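
    A minimal sketch of one common alternative, not the asker's code: drive the countdown with android.os.CountDownTimer instead of an AsyncTask, remember the remaining time on every tick, and resume by creating a fresh timer from that value. The class name and callbacks to the game UI are illustrative.

        import android.os.CountDownTimer;

        public class GameCountdown {
            private CountDownTimer timer;
            private long remainingMs;

            public GameCountdown(long totalMs) {
                remainingMs = totalMs;
            }

            public void start() {
                timer = new CountDownTimer(remainingMs, 1000) {
                    @Override public void onTick(long millisUntilFinished) {
                        remainingMs = millisUntilFinished;   // remember where we are
                        // update the on-screen timer here
                    }
                    @Override public void onFinish() {
                        // show the out-of-time end screen here
                    }
                };
                timer.start();
            }

            public void pause() {
                if (timer != null) {
                    timer.cancel();      // stops ticking; remainingMs keeps the last value
                }
            }

            public void resume() {
                start();                 // a fresh CountDownTimer picks up from remainingMs
            }
        }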

    Read the article
