Search Results

Search found 2342 results on 94 pages for 'valter minute'.


  • Interesting questions related to lighttpd on Amazon EC2

    - by terence410
    This problem appeared today and I have no idea what is going on. Please share your ideas. I have one EC2 DB server (MySQL + NFS file sharing + Memcached) and three EC2 web servers (lighttpd) that mount the NFS folders from the DB server. Everything ran smoothly for months, but suddenly there is an interesting phenomenon: every 8 to 10 minutes, PHP files become unreachable. This lasts about 1 minute and then everything returns to normal. Normal files like .html are unaffected, and all servers show the problem at exactly the same time. I have spent one whole day analysing the reason. Finally, I found out that when the problem appears, the file-descriptor count of lighttpd suddenly increases a lot. I used

        ls /proc/1234/fd | wc -l

    to check the number of fds. The number is around 250 in normal times; however, when the problem appears, it rises to about 1500 and then drops back to normal. It sounds funny, right? Do you have any idea what's going on? [CPU graph of one of the web servers]
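
    To correlate the descriptor spikes with the one-minute outages, a simple logging loop can run alongside the servers (a minimal sketch; the PID 1234 and the log path are placeholders):

        #!/bin/sh
        # Log lighttpd's open-fd count every 5 seconds, with a timestamp.
        PID=1234
        while true; do
            echo "$(date '+%H:%M:%S') $(ls /proc/$PID/fd | wc -l)" >> /tmp/fd.log
            sleep 5
        done

    Lining this log up with the lighttpd error log should show whether the fd spike precedes or follows the moment PHP requests start failing.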


  • Querying datetime.datetime on App Engine acts differently than on the dev server - help!

    - by Alon Carmel
    Hey, I'm having some trouble with code that works locally but doesn't work in the App Engine Python environment. Basically, I want to get a program from an EPG between a range of date and time. I know I cannot use two inequality filters on different properties, so I followed a suggestion to save the dates as a list of datetime.datetime values, which I did:

        [datetime.datetime(2010, 5, 10, 14, 25), datetime.datetime(2010, 5, 10, 15, 0)]

    This is OK, but when I try to compare to it:

        progranon = get_object(Programs2Channel,
                               'channel_id =', channelobj.key(),
                               'endstartdate >', programstart_minex,
                               'endstartdate <', programstart_minex)

    this for some reason works locally, but fails to retrieve the data on App Engine. (I'm using the Google App Engine Django patch, which uses get_object to retrieve data in transactions.) Please help. Here are more details:

        # this is the list:
        [datetime.datetime(2010, 5, 13, 10, 45), datetime.datetime(2010, 5, 13, 11, 30)]

        # this is the query:
        programstart = ""+year+"-"+month+"-"+day+" "+hour+":"+minute
        programstart_minex = datetime.strptime(programstart, "%Y-%m-%d %H:%M")
        progranon = Programs2Channel.gql(
            'WHERE channel_id = :channelid AND endstartdate > :programstartx '
            'AND endstartdate < :programstartx',
            channelid=channelobj.key(),
            programstartx=programstart_minex).get()
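
    One documented gotcha may explain the dev/production difference: in the production datastore, an entity matches multiple inequality filters on a multi-valued property only if a single list element satisfies all of them, so endstartdate > x AND endstartdate < x can never match there, while older dev servers checked each filter against the whole list. A sketch of a workaround under that assumption (query one bound, check the other in memory; endstartdate is assumed to hold [start, end]):

        # a sketch: query the upper bound only, filter the lower bound in memory
        candidates = Programs2Channel.gql(
            'WHERE channel_id = :ch AND endstartdate >= :t',
            ch=channelobj.key(), t=programstart_minex).fetch(20)
        progranon = None
        for p in candidates:
            if min(p.endstartdate) <= programstart_minex:
                progranon = p
                break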


  • Timer Service in EJB 3.1 - @Schedule calling @Timeout problem

    - by Greg
    Hi guys, I have created a simple example with the @Singleton, @Schedule and @Timeout annotations to see if they would solve my problem. The scenario is this: the EJB calls a 'check' function every 5 seconds, and if certain conditions are met it creates a single-action timer that invokes some long-running process asynchronously (it's a sort of queue-implementation type of thing). It then continues to check, but as long as the long-running process is there it won't start another one. Below is the code I came up with, but this solution does not work, because it looks like the asynchronous call I'm making is in fact blocking my @Schedule method.

        @Singleton
        @Startup
        public class GenerationQueue {

            private Logger logger = Logger.getLogger(GenerationQueue.class.getName());
            private List<String> queue = new ArrayList<String>();
            private boolean available = true;

            @Resource
            TimerService timerService;

            @Schedule(persistent=true, minute="*", second="*/5", hour="*")
            public void checkQueueState() {
                logger.log(Level.INFO, "Queue state check: " + available +
                        " size: " + queue.size() + ", " + new Date());
                if (available) {
                    timerService.createSingleActionTimer(new Date(), new TimerConfig(null, false));
                }
            }

            @Timeout
            private void generateReport(Timer timer) {
                logger.info("!!--timeout invoked here " + new Date());
                available = false;
                try {
                    Thread.sleep(1000 * 60 * 2); // something that lasts for a bit
                } catch (Exception e) {}
                available = true;
                logger.info("New report generation complete");
            }
        }

    What am I missing here, or should I try a different approach? Any ideas most welcome :) Testing with GlassFish 3.0.1 latest build - forgot to mention.
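
    The likely blocker: a @Singleton defaults to container-managed concurrency with @Lock(WRITE) on every business and timeout method, so while generateReport() sleeps for two minutes it holds the write lock and every @Schedule invocation queues behind it. A sketch of one way around that (assuming EJB 3.1's @Asynchronous; the two classes would live in separate files):

        import java.util.concurrent.Future;
        import javax.ejb.*;

        @Singleton
        @Startup
        @Lock(LockType.READ)  // the checker can run while work is in flight
        public class GenerationQueue {

            @EJB
            private ReportWorker worker;  // hypothetical helper bean
            private Future<Void> running;

            @Schedule(persistent = false, minute = "*", second = "*/5", hour = "*")
            public void checkQueueState() {
                if (running == null || running.isDone()) {
                    running = worker.generate();  // @Asynchronous: returns immediately
                }
            }
        }

        @Stateless
        public class ReportWorker {
            @Asynchronous
            public Future<Void> generate() {
                try {
                    Thread.sleep(1000L * 60 * 2);  // stand-in for the long-running work
                } catch (InterruptedException ignored) {}
                return new AsyncResult<Void>(null);
            }
        }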


  • Python GUI does not update until entire process is finished

    - by ccwhite1
    I have a process that gets files from a directory and puts them in a list, then iterates that list in a loop. The last line of the loop is where it should update my GUI display, and then it begins the loop again with the next item in the list. My problem is that it does not actually update the GUI until the entire process is complete, which, depending on the size of the list, could take 30 seconds to over a minute. This gives the feeling of the program being 'hung'. What I wanted it to do was process one line in the list, update the GUI and then continue. Where did I go wrong? The line that updates the list is the one under "# Populate listview with drive contents"; the print statements are just for debugging.

        def populateList(self):
            print "populateList"
            sSource = self.txSource.Value
            sDest = self.txDest.Value
            # Re-initialize listview and validated list
            self.listView1.DeleteAllItems()
            self.validatedMove = None
            self.validatedMove = []
            # Create list of files
            listOfFiles = getList(sSource)
            # Prompt if no files detected
            if listOfFiles == []:
                self.lvActions.Append([datetime.datetime.now(),
                                       "Parse Source for .MP3 files",
                                       "No .MP3 files in source directory"])
            # Populate list after both Source and Dest are chosen
            if len(sSource) > 1 and len(sDest) > 1:
                print "-iterate listOfFiles"
                for file in listOfFiles:
                    sFilename = os.path.basename(file)
                    sTitle = getTitle(file)
                    sArtist = getArtist(file)
                    sAlbum = getAblum(file)
                    # Make path = sDest + Artist + Album
                    sDestDir = os.path.join(sDest, sArtist)
                    sDestDir = os.path.join(sDestDir, sAlbum)
                    # If file exists, change destination to *.copyX.mp3
                    sDestDir = self.defineDestFilename(os.path.join(sDestDir, sFilename))
                    # Populate listview with drive contents
                    self.listView1.Append([sFilename, sTitle, sArtist, sAlbum, sDestDir])
                    # Populate list to later use in move command
                    self.validatedMove.append([file, sDestDir])
                    print "-item added to SourceDest list"
            else:
                print "-list not iterated"
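
    The loop runs on the GUI thread, so paint events queue up until populateList returns. Assuming this is wxPython (the .Value and Append calls suggest it), one quick fix is to let pending events run each iteration; a sketch:

        # a sketch, assuming wxPython: flush pending UI events inside the loop
        import wx

        for file in listOfFiles:
            # ... build sFilename/sTitle/sArtist/sAlbum/sDestDir as before ...
            self.listView1.Append([sFilename, sTitle, sArtist, sAlbum, sDestDir])
            wx.Yield()  # process pending paint events so the row shows up immediately

    For longer jobs, the more robust shape is a worker thread that posts rows back with wx.CallAfter(self.listView1.Append, row), which keeps the GUI responsive without re-entrancy surprises.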


  • iPod library song path access

    - by Narendra Kumar
    I have studied a lot but did not find any good answer. My problem is that I am calculating the beats per minute of a song. I used the BASS API for that; now the problem is that I am able to get the BPM of a file in my resource folder, but I have to get the BPM of all songs in the iPod library. I am getting the path of a song from the MPMediaItemPropertyAssetURL property of MPMediaItem, but when passing it to the API, BASS_StreamCreateFile() says it can't load the stream. In my view, I am not getting the right path to the song. How can I access a valid path? Has anyone accessed an iPod-library song with an external API? Please help me. Thanks. The code is this:

        NSURL *assetURL = [song valueForProperty:MPMediaItemPropertyAssetURL];
        NSString *respath = [NSString stringWithFormat:@"%@", [assetURL absoluteString]];

        BASS_SetConfig(BASS_CONFIG_IOS_MIXAUDIO, 0); // Disable mixing. To be called before BASS_Init.
        if (HIWORD(BASS_GetVersion()) != BASSVERSION) {
            NSLog(@"An incorrect version of BASS was loaded");
        }
        // Initialize default device.
        if (!BASS_Init(-1, 44100, 0, NULL, NULL)) {
            //textView.text = [NSString stringWithFormat:@"%@ CAN'T Load Stream", textView.text];
        }
        DWORD chan1;
        if (!(chan1 = BASS_StreamCreateFile(FALSE, [respath UTF8String], 0, 0, BASS_SAMPLE_LOOP))) {
            NSLog(@"Can't load stream!");
            textView.text = [NSString stringWithFormat:@"%@ not loading...", textView.text];
        }
        mainStream = BASS_StreamCreateFile(FALSE, [respath cStringUsingEncoding:NSUTF8StringEncoding], 0, 0,
                         BASS_SAMPLE_FLOAT | BASS_STREAM_PRESCAN | BASS_STREAM_DECODE);
        float playBackDuration = BASS_ChannelBytes2Seconds(mainStream,
                                     BASS_ChannelGetLength(mainStream, BASS_POS_BYTE));
        NSLog(@"Play back duration is %f", playBackDuration);
        HSTREAM bpmStream = BASS_StreamCreateFile(FALSE, [respath UTF8String], 0, 0,
                                BASS_STREAM_PRESCAN | BASS_SAMPLE_FLOAT | BASS_STREAM_DECODE);
        //BASS_ChannelPlay(bpmStream, FALSE);
        BpmValue = BASS_FX_BPM_DecodeGet(bpmStream, 0.0, playBackDuration, MAKELONG(45, 256),
                       BASS_FX_BPM_MULT2 | BASS_FX_BPM_MULT2 | BASS_FX_FREESOURCE,
                       (BPMPROCESSPROC*)proc);
        textView.text = [NSString stringWithFormat:@"%@ %f", textView.text, BpmValue];
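
    A likely culprit: MPMediaItemPropertyAssetURL returns an ipod-library:// URL, not a filesystem path, and BASS_StreamCreateFile only opens real files. One common workaround (a sketch, assuming an OS version on which library items are exportable) is to export the item to a local file with AVFoundation first and hand that path to BASS:

        // a sketch: export the ipod-library:// item to a real file first
        // (the export path and error handling here are illustrative)
        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
        AVAssetExportSession *exporter =
            [[AVAssetExportSession alloc] initWithAsset:asset
                                             presetName:AVAssetExportPresetPassthrough];
        exporter.outputFileType = @"com.apple.quicktime-movie";
        NSString *exportPath = [NSTemporaryDirectory()
                                   stringByAppendingPathComponent:@"export.mov"];
        exporter.outputURL = [NSURL fileURLWithPath:exportPath];
        [exporter exportAsynchronouslyWithCompletionHandler:^{
            if (exporter.status == AVAssetExportSessionStatusCompleted) {
                // exportPath is now a plain file that BASS_StreamCreateFile can open
            }
        }];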


  • Zend_Date and Zend_Locale; can't get it to work ><

    - by Rick de Graaf
    Loving Zend Framework, hating Zend_Date... I'm building an app where one of the functions is to track the time one spends on a certain task. This works great. My previous question was about my inability to get the sum of all the timestamps (time spent on each task). That works like a charm now, but when I echo this value, it adds one hour. Using gmdate() (just for checking), the value turns out exactly as it's supposed to. I thought to solve this problem easily with Zend_Locale, but I can't get the damn thing to work! This is my bootstrap code:

        protected function _initLocale()
        {
            $locale = new Zend_Locale('nl_NL');
            $locale->setLocale($locale);
            Zend_Registry::set('Zend_Locale', $locale);
        }

    This is the code in my view file:

        $date = new Zend_Date($this->timequery, null, $locale);
        echo $date->toString(Zend_Date::HOUR . ':' . Zend_Date::MINUTE . ':' . Zend_Date::SECOND);

    The timestamp I'm processing is 2632, which equals 00:43:52. The output I get is 01:43:52. I know the extra hour comes from the time difference, but I can't seem to solve this with Zend_Locale and Zend_Date.
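
    One thing worth noting: the summed value is a duration, not a point in time, so formatting it as a date in the nl_NL timezone (UTC+1) shifts it by an hour, which matches gmdate() coming out right. A sketch of formatting the duration in UTC instead (assuming the sum stays under 24 hours):

        // a sketch: treat the sum as a duration and format it in UTC
        $date = new Zend_Date($this->timequery, Zend_Date::TIMESTAMP);
        $date->setTimezone('UTC');  // avoid the locale's +1h offset
        echo $date->toString('HH:mm:ss');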


  • UITouch Events and Table Views

    - by Andy
    I'm working on a navigation-based iPhone-only app that serves two main purposes: one, to present data in a hierarchical view, allowing users to drill down and eventually edit said data; and two, to allow users to perform a default action when a table-view cell is tapped. I now need to offer a small set of options tied to the same data; however, both the didSelectRowAtIndexPath: and accessoryButtonTappedForRowAtIndexPath: methods are obviously taken. So my options seem to be to implement a double-tap method, wherein the small list of additional options would be presented after (you guessed it) a double-tap on said table row; or, preferably, a tap-and-hold method. From what I can tell, tap-and-hold seems like the way to go in SDK 4.0 - which does me no good right this red-hot minute. I decided to go with the double-tap option, but I'm having a little trouble. First and foremost, the touchesBegan:withEvent: method does not seem to be getting called at all: a breakpoint placed within the method is never hit while the application runs, and the table view responds exactly as it did before I inserted the method (which is to say, it performs the default action):

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *aTouch = [touches anyObject];
            if (aTouch.tapCount == 2) {
                [NSObject cancelPreviousPerformRequestsWithTarget:self];
            }
        }

    Second, I don't really need to handle a single tap - the didSelectRowAtIndexPath: method handles that just fine. The double-tap is the funky one I want to handle. I suspect the answer is going to contain the phrase, "You can't have the table view handle the single tap and the touchesBegan: method handle the double tap. The touch-handling methods have to handle all of them." I would really appreciate some guidance from some of you who've dealt with this issue. Thanks in advance.
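
    The table view intercepts the touches itself, which is why touchesBegan:withEvent: on the view controller never fires. One pre-4.0 workaround (a sketch; the delegate method name here is made up) is to subclass UITableView, watch the tap count there, and forward to super so single-tap selection keeps working:

        // a sketch: a UITableView subclass that reports double-taps
        @interface DoubleTapTableView : UITableView
        @property (nonatomic, assign) id doubleTapTarget; // hypothetical callback target
        @end

        @implementation DoubleTapTableView
        @synthesize doubleTapTarget;

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            if (touch.tapCount == 2) {
                NSIndexPath *path = [self indexPathForRowAtPoint:[touch locationInView:self]];
                [self.doubleTapTarget performSelector:@selector(tableView:didDoubleTapRowAtIndexPath:)
                                           withObject:self
                                           withObject:path];
            }
            [super touchesEnded:touches withEvent:event]; // keep default selection behaviour
        }
        @end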


  • Ideas on frameworks in .NET that can be used for job processing and notifications

    - by Rajat Mehta
    Scenario: we have one instance of a WCF Windows service which exposes contracts like AddNewJob(Job job), GetJobs(JobQuery query), etc. This service is consumed by 70-100 instances of a Windows Forms-based .NET client. Typically the service receives 50-100 inbound calls/minute to add or query jobs, which are stored in a table on SQL Server. The same service is also responsible for processing these jobs in real time: it queries the database every 5 seconds, picks up the queued jobs and starts processing them. A job has seven states: Queued, Pre-processing, Processing, Post-processing, Completed, Failed, Locked. Another responsibility of this service is to update all clients on every state change of every job. This means 200+ callbacks to clients per second. Question: this whole implementation is done using WCF duplex bindings and works perfectly fine on a small number of parallel jobs. The problem arises when we scale up to 1000 jobs at a time: the notifications don't work as expected, it leads to memory overflow, etc. Is there any standard framework that can provide a clean infrastructure for handling this scenario? Apologies for the long explanation!


  • Running mysql query using node blocks the whole process and then times out

    - by lobengula3rd
    I have a Node.js script that uses the mysql npm module (Felix's). I have a stored procedure in my DB which I call when the user selects an option to create, in effect, his own instance of the program. The user chooses how long that data should be initialized for, between 1 and 2 years. If he chooses 1 year, the query inserts around 20,000 rows into one table. Run against a local DB this takes around 30 seconds (which I suppose is reasonable: it's a big query that should run only once every year or two, so that's OK). For some reason my Node script freezes, as if it can't handle any more calls from other users. Even worse, after about 2 minutes my client UI gets an error from the server, and at that point not all the data that was supposed to enter the DB has been inserted. After waiting about another minute, all the data finally reaches the DB, and only then does the server accept new requests. This is my connection:

        this.connection = mysql.createConnection({
            host     : '********rds.amazonaws.com',
            user     : 'admin',
            password : '******',
            database : '*****'
        });

    and this is my query function:

        this.createCourts = function (req, res, next) {
            connection.query('CALL filldates("' + req.body['startDate'] + '","'
                + req.body['endDate'] + '","' + req.body['numOfCourts'] + '","'
                + req.body['duration'] + '","' + req.body['sundayOpen'] + '","'
                + req.body['mondayOpen'] + '","' + req.body['tuesdayOpen'] + '","'
                + req.body['wednesdayOpen'] + '","' + req.body['thursdayOpen'] + '","'
                + req.body['fridayOpen'] + '","' + req.body['saturdayOpen'] + '","'
                + req.body['sundayClose'] + '","' + req.body['mondayClose'] + '","'
                + req.body['tuesdayClose'] + '","' + req.body['wednesdayClose'] + '","'
                + req.body['thursdayClose'] + '","' + req.body['fridayClose'] + '","'
                + req.body['saturdayClose'] + '");',
                function (err) {
                    if (err) {
                        console.log(err);
                    } else return res.send(200);
                });
        };

    What am I missing here? As I understand it, connection.query should be async, so why is it actually blocking my Node script? Thanks.
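
    The callback is indeed async, but a single connection processes commands serially, so every other request queues behind the 30-second CALL. A sketch of one way out (assuming a version of the mysql module that provides createPool; placeholders also replace the hand-built SQL string):

        // a sketch: a pool so concurrent requests get their own connections
        var mysql = require('mysql');
        var pool = mysql.createPool({
            host: '********rds.amazonaws.com',
            user: 'admin',
            password: '******',
            database: '*****',
            connectionLimit: 10  // other requests are not stuck behind the long CALL
        });

        this.createCourts = function (req, res, next) {
            var b = req.body;
            var params = [b.startDate, b.endDate, b.numOfCourts, b.duration,
                          b.sundayOpen, b.mondayOpen, b.tuesdayOpen, b.wednesdayOpen,
                          b.thursdayOpen, b.fridayOpen, b.saturdayOpen,
                          b.sundayClose, b.mondayClose, b.tuesdayClose,
                          b.wednesdayClose, b.thursdayClose, b.fridayClose, b.saturdayClose];
            // 18 placeholders, escaped by the driver instead of string concatenation
            pool.query('CALL filldates(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)', params,
                function (err) {
                    if (err) return next(err);
                    res.send(200);
                });
        };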


  • PHP Missing Function In Older Version

    - by Umair Ashraf
    This PHP function of mine converts a datetime string into a more readable representation of how much date and time has passed. It works perfectly in PHP 5.3.0, but the server runs PHP 5.2.17, which lacks DateTime::diff(). Is there a way I can fix this efficiently? This is not the only function that needs diff(); there are many more.

        public function ago($dt1)
        {
            $interval = date_create('now')->diff(date_create($dt1));
            $suffix = ($interval->invert ? ' ago' : '-');

            if ($v = $interval->y >= 1) return $this->pluralize($interval->y, 'year') . $suffix;
            if ($v = $interval->m >= 1) return $this->pluralize($interval->m, 'month') . $suffix;
            if ($v = $interval->d >= 1) return $this->pluralize($interval->d, 'day') . $suffix;
            if ($v = $interval->h >= 1) return $this->pluralize($interval->h, 'hour') . $suffix;
            if ($v = $interval->i >= 1) return $this->pluralize($interval->i, 'minute') . $suffix;
            return $this->pluralize($interval->s, 'second') . $suffix;
        }
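
    On 5.2 the same buckets can be derived from plain Unix timestamps; a sketch of a drop-in ago() that avoids DateTime::diff() (months are approximated as 30 days, years as 365):

        // a sketch for PHP 5.2: timestamp arithmetic instead of DateTime::diff()
        public function ago($dt1)
        {
            $delta  = time() - strtotime($dt1);
            $suffix = ($delta >= 0 ? ' ago' : '-');
            $delta  = abs($delta);

            $units = array('year' => 31536000, 'month' => 2592000, 'day' => 86400,
                           'hour' => 3600, 'minute' => 60);
            foreach ($units as $name => $secs) {
                if ($delta >= $secs) {
                    return $this->pluralize((int)($delta / $secs), $name) . $suffix;
                }
            }
            return $this->pluralize($delta, 'second') . $suffix;
        }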


  • Correct Time Display

    - by Matthew
    Guys, I'm looking to get this correct and I'm getting a bit frustrated with it. What I want to do is get hours and days and weeks right. Example of the output I'm after (lapsedTime is in seconds):

        under a minute old:  "Posted less than 1 minute ago"
        about an hour old:   "Posted 1 hour ago"
        a few hours old:     "Posted 3 hours ago"
        about a day old:     "Posted 1 day ago"
        two days old:        "Posted 2 days ago"

    Is that right?? This is what I have so far:

        if (lapsedTime < 60) {
            return '< 1 minute';
        } else if (lapsedTime < (60 * 60)) {
            return Math.round(lapsedTime / 60) + ' minutes';
        } else if (lapsedTime < (12 * 60 * 60)) {
            return Math.round(lapsedTime / 3600) + ' hr';
        } else if (lapsedTime < (24 * 60 * 60)) {
            return Math.round(lapsedTime / 3600) + ' hrs';
        } else if (lapsedTime < (7 * 24 * 60 * 60)) {
            return Math.round(lapsedTime / 86400) + ' days';
        } else {
            return Math.round(lapsedTime / 604800) + ' weeks';
        }


  • Linux time-based sampling profiler

    - by Caspin
    Short version: is there a good time-based sampling profiler for Linux? Long version: I generally use OProfile to optimize my applications. I recently found a shortcoming that has me wondering. The problem was a tight loop spawning c++filt to demangle a C++ name. I only stumbled upon the code by accident while chasing down another bottleneck. OProfile didn't show anything unusual about the code, so I almost ignored it, but my code sense told me to optimize the call and see what happened. I changed the popen of c++filt to abi::__cxa_demangle. The runtime went from more than a minute to a little over a second - about a 60x speed-up. Is there a way I could have configured OProfile to flag the popen call? As the profile data sits now, OProfile thinks the bottleneck was the heap and std::string calls (which, BTW, once optimized dropped the runtime to less than a second - more than a 2x speed-up). Here is my OProfile configuration:

        $ sudo opcontrol --status
        Daemon not running
        Event 0: CPU_CLK_UNHALTED:90000:0:1:1
        Separate options: library
        vmlinux file: none
        Image filter: /path/to/excutable
        Call-graph depth: 7
        Buffer size: 65536

    Is there another profiler for Linux that could have found the bottleneck? I suspect the issue is that OProfile only attributes samples to the process while it is actually running on the CPU. I'd like it to always attribute samples to the process I'm profiling, so that if the process is switched out (blocking on IO or a popen call), OProfile would place the sample at the blocked call. If I can't fix this, OProfile will only be useful when the executable is pushing near 100% CPU; it can't help with executables that have inefficient blocking calls.
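
    Since the lost time here was spent in child processes and blocking waits rather than on-CPU cycles, a syscall-time summary would have flagged it directly; a sketch with strace:

        # a sketch: per-syscall wall-time summary, following child processes
        $ strace -c -f ./myapp
        # a popen-heavy loop shows up as large totals for clone/execve/wait4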


  • Switch gettext translated language with original language

    - by Ruben
    Hi everyone, I started my PHP application with all text in German, then used gettext to extract all strings and translate them to English. So now I have a .po file with all msgids in German and msgstrs in English. I want to switch them, so that my source code contains English msgids. There are numerous reasons for this:

    - More translators will know English, so it is only appropriate to serve them a file with msgids in English. I could always switch the file before I give it out and after I receive it.
    - It would help me to write English object and function names and comments if the content text were also English. I'd like to do that so the project is more open to other Open Source collaborators (who are more likely to know English than German).

    I could do this manually, and this is the sort of task where I anticipate it will take me more time to write an automated routine for it (because I'm very bad with shell scripts) than to do it by hand. But I also anticipate despising every minute of manual computer labour (feels like an oxymoron, right?) like I always do. Has someone done this before? I figured this would be a common problem, but couldn't find anything. Many thanks ahead. Sample problem:

        <title><?=_('Routinen')?></title>

        #: /users/ruben/sites/v/routinen.php:43
        msgid "Routinen"
        msgstr "Routines"
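
    The .po side of the swap is mechanical; a sketch in Python using the third-party polib package (the PHP sources would still need their _('...') literals updated separately, e.g. from the same mapping):

        # a sketch using polib: swap msgid and msgstr throughout a catalogue
        import polib

        po = polib.pofile('de.po')
        for entry in po:
            if entry.msgstr:  # leave untranslated entries alone
                entry.msgid, entry.msgstr = entry.msgstr, entry.msgid
        po.save('en.po')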


  • jQuery: appendTo parent

    - by Kenneth B
    Hi guys, I can't seem to get appendTo to work. What am I doing wrong?

        $('div:nth-child(2n) img').appendTo(parent);

    Current markup:

        <div class="container">
            <img src="123.jpg" />
            <p>Hey</p>
        </div>
        <div class="container">
            <img src="123.jpg" />
            <p>Hey</p>
        </div>

    I want this output:

        <div class="container">
            <p>Hey</p>
            <img src="123.jpg" />
        </div>
        <div class="container">
            <p>Hey</p>
            <img src="123.jpg" />
        </div>

    Please help me, guys... I'm tearing my hair out every minute.. :-S
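
    Two problems stand out: parent is an undefined JavaScript variable (appendTo expects a selector, element or jQuery object), and div:nth-child(2n) selects every second div rather than every container. One way to get the desired output (a sketch): re-append each image to its own parent, which moves it after its siblings:

        // a sketch: appending an element to its own parent moves it to the end
        $('div.container img').each(function () {
            $(this).appendTo($(this).parent());
        });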


  • How can I convert seconds to minutes in jQuery while updating an element with the current time?

    - by pghtech
    So I see a number of ways to display a lot of seconds as a (static) hr/min/sec value. However, I am trying to produce a visual countdown timer:

        $('#someelement').html(minCounter + ' minutes ' +
            ((secCounter == 0) ? '' : (secCounter + ' seconds')));

    My counter is reduced inside a setInterval that fires every second:

        // .......
        var counter = redirectTimer;
        jQuery('#WarningDialogMsg').html(minCounter + ' minutes ' +
            ((secCounter == 0) ? '' : (secCounter + ' seconds')));
        // ........
        setInterval(function () {
            counter -= 1;
            secCounter = Math.floor(counter % 60);
            minCounter = Math.floor(counter / 60);
            // .......
            $('#someelement').html(minCounter + ' minutes ' +
                ((secCounter == 0) ? '' : (secCounter + ' seconds')));
        }, 1000);

    It is a two-minute counter, but I don't want to display 120 seconds; I want to display 1:59 (and counting down). I have managed to get it to work using the above, but my main question is: is there a more elegant way to accomplish this? (Note: I am redirecting once counter == 0.)
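
    One tidier shape (a sketch): compute the pieces once per tick, zero-pad the seconds, and clear the interval at zero:

        // a sketch: m:ss formatting with zero-padding, stopping at zero
        var counter = 120; // two minutes, in seconds
        var timer = setInterval(function () {
            counter -= 1;
            var m = Math.floor(counter / 60);
            var s = counter % 60;
            $('#someelement').html(m + ':' + (s < 10 ? '0' + s : s));
            if (counter <= 0) {
                clearInterval(timer);
                // redirect here
            }
        }, 1000);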


  • Is there a faster way to parse through a large file with regex?

    - by Ray Eatmon
    Problem: a very, very large file that I need to parse line by line to get 3 values from each line. Everything works, but it takes a long time to parse the whole file. Is it possible to do this within seconds? The typical time it takes is between 1 and 2 minutes. Example file size is 148,208 KB. I am using regex to parse every line. Here is my C# code:

        private static void ReadTheLines(int max, Responder rp, string inputFile)
        {
            List<int> rate = new List<int>();
            double counter = 1;
            try
            {
                using (var sr = new StreamReader(inputFile, Encoding.UTF8, true, 1024))
                {
                    string line;
                    Console.WriteLine("Reading....");
                    while ((line = sr.ReadLine()) != null)
                    {
                        if (counter <= max)
                        {
                            counter++;
                            rate = rp.GetRateLine(line);
                        }
                        else if (max == 0)
                        {
                            counter++;
                            rate = rp.GetRateLine(line);
                        }
                    }
                    rp.GetRate(rate);
                    Console.ReadLine();
                }
            }
            catch (Exception e)
            {
                Console.WriteLine("The file could not be read:");
                Console.WriteLine(e.Message);
            }
        }

    Here is my regex:

        public List<int> GetRateLine(string justALine)
        {
            const string reg = @"^\d{1,}.+\[(.*)\s[\-]\d{1,}].+GET.*HTTP.*\d{3}[\s](\d{1,})[\s](\d{1,})$";
            Match match = Regex.Match(justALine, reg, RegexOptions.IgnoreCase);

            // Here we check the Match instance.
            if (match.Success)
            {
                // Finally, we get the Group value and display it.
                string theRate = match.Groups[3].Value;
                Ratestorage.Add(Convert.ToInt32(theRate));
            }
            else
            {
                Ratestorage.Add(0);
            }
            return Ratestorage;
        }

    Here is an example line to parse, usually around 200,000 lines:

        10.10.10.10 - - [27/Nov/2002:16:46:20 -0500] "GET /solr/ HTTP/1.1" 200 4926 789
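
    The static Regex.Match call re-resolves a cached pattern on every line; a precompiled instance avoids that and lets the runtime JIT the matcher. A sketch (same groups as the original pattern):

        // a sketch: compile the pattern once instead of looking it up per call
        private static readonly Regex RateRegex = new Regex(
            @"^\d+.+\[(.*)\s-\d+].+GET.*HTTP.*\d{3}\s(\d+)\s(\d+)$",
            RegexOptions.IgnoreCase | RegexOptions.Compiled);

        public List<int> GetRateLine(string justALine)
        {
            Match match = RateRegex.Match(justALine);
            Ratestorage.Add(match.Success ? Convert.ToInt32(match.Groups[3].Value) : 0);
            return Ratestorage;
        }

    A larger StreamReader buffer than 1024 bytes (e.g. 64 KB) also tends to help on files this size.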


  • Java GC periodically enters into several full GC cycles

    - by Peter
    Environment: Sun JDK 1.6.0_16. VM settings:

        -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -Xms1024M -Xmx1024M
        -XX:MaxNewSize=448m -XX:NewSize=448m -XX:SurvivorRatio=4 (6 also checked)
        -XX:MaxPermSize=128M

    OS: Windows Server 2003. Processor: 4 cores of Intel Xeon 5130, 2000 MHz. My application: a high intensity of concurrent operations (Java 5 concurrency is used), each completed by a commit to Oracle; about 20-30 threads run non-stop, doing tasks. The application runs in the JBoss web container. My GC starts out working normally: I see a lot of small GCs, the CPU shows a good load with all 4 cores at 40-50%, and the CPU graph is stable. Then, after 1 minute of good work, the CPU drops to 0% on 2 of the 4 cores and its graph becomes unstable, going up and down ("teeth"). I can see that my threads work slower (I have monitoring), and I see that the GC produces a lot of FULL GCs during that time; for the next 4-5 minutes the situation remains as is, then for a short period, like 1 minute, it gets back to normal, but shortly after that the whole bad thing repeats. Question: why do I have such frequent full GCs, and how do I prevent them? I played with SurvivorRatio - it does not help. I noticed that the application behaves normally until the first FULL GC occurs, while I have enough memory; then it runs badly. My GC log starts out good, then shows a long period with many FULL GCs:

        1027.861: [GC 942200K->623526K(991232K), 0.0887588 secs]
        1029.333: [GC 803279K(991232K), 0.0927470 secs]
        1030.551: [GC 967485K->625549K(991232K), 0.0823024 secs]
        1030.634: [GC 625957K(991232K), 0.0763656 secs]
        1033.126: [GC 969613K->632963K(991232K), 0.0850611 secs]
        1033.281: [GC 649899K(991232K), 0.0378358 secs]
        1035.910: [GC 813948K(991232K), 0.3540375 secs]
        1037.994: [GC 967729K->637198K(991232K), 0.0826042 secs]
        1038.435: [GC 710309K(991232K), 0.1370703 secs]
        1039.665: [GC 980494K->972462K(991232K), 0.6398589 secs]
        1040.306: [Full GC 972462K->619643K(991232K), 3.7780597 secs]
        1044.093: [GC 620103K(991232K), 0.0695221 secs]
        1047.870: [Full GC 991231K->626514K(991232K), 3.8732457 secs]
        1053.739: [GC 942140K(991232K), 0.5410483 secs]
        1056.343: [Full GC 991232K->634157K(991232K), 3.9071443 secs]
        1061.257: [GC 786274K(991232K), 0.3106603 secs]
        1065.229: [Full GC 991232K->641617K(991232K), 3.9565638 secs]
        1071.192: [GC 945999K(991232K), 0.5401515 secs]
        1073.793: [Full GC 991231K->648045K(991232K), 3.9627814 secs]
        1079.754: [GC 936641K(991232K), 0.5321197 secs]
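
    The pattern in the log - the heap at or near its 991232K capacity every time a full GC fires - is what CMS concurrent-mode failure looks like: the old generation fills up before the concurrent cycle can finish, forcing a stop-the-world full collection. A commonly tried adjustment (a sketch; the exact threshold needs tuning against the log) is to start CMS cycles earlier and turn on detailed logging to confirm the diagnosis:

        # a sketch: start CMS earlier so it finishes before the old gen fills
        -XX:CMSInitiatingOccupancyFraction=60
        -XX:+UseCMSInitiatingOccupancyOnly
        # and confirm whether "concurrent mode failure" appears in the log
        -XX:+PrintGCDetails -XX:+PrintGCTimeStamps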


  • Android's DateFormat replacement - missing the format() with FieldPosition

    - by user331244
    Hi, I need to split a date string into pieces, and I'm doing it using

        public final StringBuffer format(Object object, StringBuffer buffer, FieldPosition field)

    from the java.text.DateFormat class. However, the implementation of this function is really slow, hence Android has its own implementation in android.text.format.DateFormat. BUT, in my case, I want to extract the different pieces of the date string (year, minute and so on). Since I need to be locale-independent, I cannot use SimpleDateFormat and custom strings. I do it as follows:

        Calendar c = ... // find out what field to extract
        int field = getField();

        // Create a date string
        Field calendarField = DateFormat.Field.ofCalendarField(field);
        FieldPosition fieldPosition = new FieldPosition(calendarField);
        StringBuffer label = new StringBuffer();
        label = getDateFormat().format(c.getTime(), label, fieldPosition);

        // Find the piece that we are looking for
        int beginIndex = fieldPosition.getBeginIndex();
        int endIndex = fieldPosition.getEndIndex();
        String asString = label.substring(beginIndex, endIndex);

    For some reason, the format() overload with the FieldPosition argument is not included in the Android platform. Any ideas on how to do this another way? Is there an easy way to tokenize the pattern string? Any other ideas?
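
    If the field values themselves (rather than their localized rendering) are enough, Calendar can supply them directly, and android.text.format.DateFormat at least exposes the locale's field order cheaply; a sketch (date fields only, numeric rather than locale-spelled):

        // a sketch: locale-aware field order without java.text.DateFormat
        char[] order = android.text.format.DateFormat.getDateFormatOrder(context);
        Calendar c = Calendar.getInstance();
        StringBuilder sb = new StringBuilder();
        for (char f : order) {
            if (f == 'y') sb.append(c.get(Calendar.YEAR));
            else if (f == 'M') sb.append(c.get(Calendar.MONTH) + 1);
            else if (f == 'd') sb.append(c.get(Calendar.DAY_OF_MONTH));
            sb.append(' ');
        }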


  • Building asynchronous cache pattern with JSP

    - by merweirdo
    I have a JSP that takes some 8 minutes to render. The code logic itself cannot be made more efficient (it will be updated often, and by, basically, a pointy-haired boss). I tried wrapping it in a caching layer like:

        <%@ taglib uri="/WEB-INF/classes/oscache.tld" prefix="oscache" %>
        <oscache:cache time="60">
            <div class="pagecontent">
                ..... my logic
            </div>
        </oscache:cache>

    This is nice until the 60 seconds are over. The next request after that blocks until the 8 minutes of rendering are done all over again. What I would need is a pattern like this:

    - If there is no version of the dynamic content in the cache, run the actual logic (and populate the cache for subsequent requests).
    - If there is a non-expired version of the dynamic content in the cache, serve the output of the JSP logic from the cache.
    - If there is an expired version of the dynamic content in the cache, serve the output still from the cache AND re-run the JSP logic in the background so the cache gets updated transparently, sparing the user the 8-minute wait.

    I found out that at least Ehcache might be able to do some asynchronous cache updating, but sadly it did not seem to apply to the JSP tags. Also, I take 10-20 parameters into the actual logic of the JSP, and some of them should be used as the cache key. Code examples and/or pointers would be greatly appreciated, and I frankly don't care if the solution provided is extremely ugly. I just want simple 5-minute caching with asynchronous cache updates, taking some parameters into account as a key.
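
    One deliberately simple stale-while-revalidate shape (a sketch, independent of OSCache/Ehcache; renderSlowPart() stands in for the 8-minute logic, and the key would be built by concatenating the relevant request parameters):

        // a sketch of stale-while-revalidate for an expensive render
        import java.util.concurrent.*;

        public class StaleCache {
            private static class Entry {
                final String html; final long at;
                Entry(String h, long a) { html = h; at = a; }
            }

            private final ConcurrentHashMap<String, Entry> cache =
                new ConcurrentHashMap<String, Entry>();
            private final ConcurrentHashMap<String, Boolean> refreshing =
                new ConcurrentHashMap<String, Boolean>();
            private final ExecutorService pool = Executors.newFixedThreadPool(2);
            private final long ttlMs = 5 * 60 * 1000L;

            public String get(final String key) {
                Entry e = cache.get(key);
                if (e == null) {  // first request pays the full cost once
                    e = new Entry(renderSlowPart(key), System.currentTimeMillis());
                    cache.put(key, e);
                } else if (System.currentTimeMillis() - e.at > ttlMs
                        && refreshing.putIfAbsent(key, Boolean.TRUE) == null) {
                    pool.submit(new Runnable() {  // serve stale now, refresh in background
                        public void run() {
                            try {
                                cache.put(key, new Entry(renderSlowPart(key),
                                                         System.currentTimeMillis()));
                            } finally {
                                refreshing.remove(key);
                            }
                        }
                    });
                }
                return e.html;
            }

            private String renderSlowPart(String key) {
                return "";  // placeholder for the real 8-minute rendering
            }
        }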


  • SQL Server Long Query

    - by thormj
    OK... I don't understand why this query is taking so long (SQL Server 2005). Typical output is 3K rows, with a 5.5-minute execution time.

        SELECT dbo.Point.PointDriverID,
               dbo.Point.AssetID,
               dbo.Point.PointID,
               dbo.Point.PointTypeID,
               dbo.Point.PointName,
               dbo.Point.ForeignID,
               dbo.PointType.TrendInterval,
               coalesce(dbo.Point.trendpts, 5) AS TrendPts,
               LastTimeStamp = PointDTTM,
               LastValue = PointValue,
               Timezone
        FROM dbo.Point
        LEFT JOIN dbo.PointType ON dbo.PointType.PointTypeID = dbo.Point.PointTypeID
        LEFT JOIN dbo.PointData ON dbo.Point.PointID = dbo.PointData.PointID
             AND PointDTTM = (SELECT Max(PointDTTM) FROM dbo.PointData
                              WHERE PointData.PointID = Point.PointID)
        LEFT JOIN dbo.SiteAsset ON dbo.SiteAsset.AssetID = dbo.Point.AssetID
        LEFT JOIN dbo.Site ON dbo.Site.SiteID = dbo.SiteAsset.SiteID
        WHERE onlinetrended = 1 AND WantTrend = 1

    PointData is the big one, but I thought its definition should allow me to pick up what I want easily enough:

        CREATE TABLE [dbo].[PointData](
            [PointID] [int] NOT NULL,
            [PointDTTM] [datetime] NOT NULL,
            [PointValue] [real] NULL,
            [DataQuality] [tinyint] NULL,
            CONSTRAINT [PK_PointData_1] PRIMARY KEY CLUSTERED
            (
                [PointID] ASC,
                [PointDTTM] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO

        CREATE NONCLUSTERED INDEX [IX_PointDataDesc] ON [dbo].[PointData]
        (
            [PointID] ASC,
            [PointDTTM] DESC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
                IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    PointData is 550M rows, and Point (the source of PointID) is only 28K rows. I tried making an indexed view, but I can't figure out how to get the last timestamp/value out of it in a compatible way (no MAX, no subquery, no CTE). This runs twice an hour, and after it runs I put more data into the 3K PointIDs I selected. I thought about adding LastTime/LastValue columns directly to Point, but that seems like the wrong approach. Am I missing something, or should I rebuild something? (I'm also the DBA, but I know very little about A'ing a DB!)
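
    On SQL Server 2005, a correlated TOP 1 via OUTER APPLY matches the descending index exactly and typically turns this into ~28K index seeks instead of a huge aggregate; a sketch (extra joins/columns omitted for brevity):

        -- a sketch: per-point "latest row" via OUTER APPLY + TOP 1,
        -- which can seek on IX_PointDataDesc (PointID ASC, PointDTTM DESC)
        SELECT p.PointID, p.PointName,
               pd.PointDTTM AS LastTimeStamp, pd.PointValue AS LastValue
        FROM dbo.Point AS p
        OUTER APPLY (SELECT TOP 1 d.PointDTTM, d.PointValue
                     FROM dbo.PointData AS d
                     WHERE d.PointID = p.PointID
                     ORDER BY d.PointDTTM DESC) AS pd
        WHERE p.onlinetrended = 1 AND p.WantTrend = 1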


  • How does my PHP function for converting a date to a Facebook-style timestamp look? New to PHP.

    - by First2Drown
    Any suggestions to make it better?

        function convertToFBTimestamp($date) {
            $this_date = date('Y-m-d-H-i-s', strtotime($date));
            $cur_date  = date('Y-m-d-H-i-s');

            list($this_year, $this_month, $this_day, $this_hour, $this_min, $this_sec) = explode('-', $this_date);
            list($cur_year, $cur_month, $cur_day, $cur_hour, $cur_min, $cur_sec) = explode('-', $cur_date);

            $this_unix_time = mktime($this_hour, $this_min, $this_sec, $this_month, $this_day, $this_year);
            $cur_unix_time  = mktime($cur_hour, $cur_min, $cur_sec, $cur_month, $cur_day, $cur_year);
            $cur_unix_date  = mktime(0, 0, 0, $cur_month, $cur_day, $cur_year);

            $dif_in_sec   = $cur_unix_time - $this_unix_time;
            $dif_in_min   = (int)($dif_in_sec / 60);
            $dif_in_hours = (int)($dif_in_min / 60);

            if (date('Y-m-d', strtotime($date)) == date('Y-m-d')) {
                if ($dif_in_sec < 60) {
                    return $dif_in_sec . " seconds ago";
                } elseif ($dif_in_sec < 120) {
                    return "about a minute ago";
                } elseif ($dif_in_min < 60) {
                    return $dif_in_min . " minutes ago";
                } elseif ($dif_in_hours == 1) {
                    return $dif_in_hours . " hour ago";
                } else {
                    return $dif_in_hours . " hours ago";
                }
            } elseif ($cur_unix_date - $this_unix_time < 86400) {
                return strftime("Yesterday at %l:%M%P", $this_unix_time);
            } elseif ($cur_unix_date - $this_unix_time < 259200) {
                return strftime("%A at %l:%M%P", $this_unix_time);
            } elseif ($this_year == $cur_year) {
                return strftime("%B, %e at %l:%M%P", $this_unix_time);
            } else {
                return strftime("%B, %e %Y at %l:%M%P", $this_unix_time);
            }
        }
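
    One small simplification (a sketch): strtotime() already returns a Unix timestamp, so the string/explode/mktime round-trip can collapse to plain arithmetic:

        // a sketch: the same timestamps without re-parsing through date()/explode()/mktime()
        $this_unix_time = strtotime($date);
        $cur_unix_time  = time();
        $dif_in_sec     = $cur_unix_time - $this_unix_time;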


  • Accessing global variable in multithreaded Tomcat server

    - by jwegan
    I have a singleton object that I construct like this:

        private static volatile KeyMapper mapper = null;

        public static KeyMapper getMapper() {
            if (mapper == null) {
                synchronized (Utils.class) {
                    if (mapper == null) {
                        mapper = new LocalMemoryMapper();
                    }
                }
            }
            return mapper;
        }

    The class KeyMapper is basically a synchronized wrapper around a HashMap, with only two functions: one to add a mapping and one to remove a mapping. When running in Tomcat 6.24 on my 32-bit Windows machine, everything works fine. However, when running on a 64-bit Linux machine (CentOS 5.4 with OpenJDK 1.6.0-b09), I add one mapping and print out the size of the HashMap used by KeyMapper to verify that the mapping got added (i.e. verify size == 1). Then I try to retrieve the mapping with another request, and I keep getting null; when I check the size of the HashMap, it is 0. I'm confident the mapping isn't accidentally being removed, since I've commented out all calls to remove (and I don't use clear or any other mutators, just get and put). The requests go through Tomcat 6.24 (configured to use 200 threads with a minimum of 4) and I passed -Xnoclassgc to the JVM to ensure the class isn't inadvertently getting garbage collected (the JVM also runs in -server mode). I also added a finalize method to KeyMapper to print to stderr if it ever gets garbage collected, and verified that it doesn't. I'm at my wits' end, and I can't figure out why one minute the entry is in the HashMap and the next it isn't :(
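
    A static singleton is per classloader, so if KeyMapper is visible both from a shared library directory and from WEB-INF/lib (or the webapp is deployed twice), two independent instances exist and each request may see a different one. A sketch that rules out the double-checked-locking angle entirely and logs the classloader identity to test that hypothesis:

        // a sketch: initialization-on-demand holder (no DCL needed) plus
        // classloader logging, to see whether two loaders each have a singleton
        public final class Utils {
            private static final class Holder {
                static final KeyMapper MAPPER = new LocalMemoryMapper();
            }
            public static KeyMapper getMapper() {
                System.err.println("KeyMapper from loader: "
                        + Utils.class.getClassLoader() + " @ "
                        + System.identityHashCode(Holder.MAPPER));
                return Holder.MAPPER;
            }
        }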


  • Alternatives to LINQ to SQL on highly loaded pages

    - by Alex
    To begin with, I LOVE LINQ TO SQL. It's so much easier to use than direct querying. But there's one great problem: it doesn't work well on highly loaded requests. I have some actions in my ASP.NET MVC project that are called hundreds of times every minute. I used to have LINQ to SQL there, but since the number of requests is gigantic, LINQ to SQL almost always returned "Row not found or changed" or "X of X updates failed". And it's understandable. For instance, I have to increase some value by one with every request:

        var stat = DB.Stats.First();
        stat.Visits++;
        // ....
        DB.SubmitChanges();

    But while ASP.NET was working on those //... instructions, the stat.Visits value stored in the table got changed. I found a solution; I created a stored procedure:

        UPDATE Stats SET Visits = Visits + 1

    It works well. Unfortunately, now I'm running into more and more moments like that, and it sucks to create stored procedures for all cases. So my question is: how do I solve this problem? Are there any alternatives that can work here? I hear that Stack Overflow works with LINQ to SQL, and it's more loaded than my site.
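
    For counter-style updates, the atomic statement doesn't have to live in a stored procedure: DataContext.ExecuteCommand sends SQL directly from the same DataContext, so each case stays a one-liner (a sketch):

        // a sketch: the increment happens server-side in one atomic statement
        DB.ExecuteCommand("UPDATE Stats SET Visits = Visits + 1");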


  • The CLR has been unable to transition from COM context [...] for 60 seconds

    - by BlueRaja The Green Unicorn
    I am getting this error on code that used to work. I have not changed the code. Here is the full error:

        The CLR has been unable to transition from COM context 0x3322d98 to COM context
        0x3322f08 for 60 seconds. The thread that owns the destination context/apartment
        is most likely either doing a non pumping wait or processing a very long running
        operation without pumping Windows messages. This situation generally has a
        negative performance impact and may even lead to the application becoming non
        responsive or memory usage accumulating continually over time. To avoid this
        problem, all single threaded apartment (STA) threads should use pumping wait
        primitives (such as CoWaitForMultipleHandles) and routinely pump messages during
        long running operations.

    And here is the code that caused it:

        var openFileDialog1 = new System.Windows.Forms.OpenFileDialog();
        openFileDialog1.DefaultExt = "mdb";
        openFileDialog1.Filter = "Management Database (manage.mdb)|manage.mdb";

        // Stalls indefinitely on the following line, then gives the CLR error
        // one minute later. The dialog never opens.
        if (openFileDialog1.ShowDialog() == DialogResult.OK)
        {
            ....
        }

    Yes, I am sure the dialog is not open in the background, and no, I don't have any explicit COM code, unmanaged marshalling or multithreading. I have no idea why the OpenFileDialog won't open - any ideas?
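
    The common-dialog call requires a single-threaded-apartment (STA) thread with a pumping message loop; if this code runs on a worker thread, or the entry point lost its [STAThread] attribute, ShowDialog can hang exactly like this. A sketch of the two usual checks (assuming WinForms):

        // a sketch: OpenFileDialog needs an STA thread
        [STAThread]  // 1) on the application's entry point
        static void Main() { Application.Run(new MainForm()); }

        // 2) or, if the dialog must be shown from a worker thread:
        var t = new Thread(() => new OpenFileDialog().ShowDialog());
        t.SetApartmentState(ApartmentState.STA);
        t.Start();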


  • Rate Limit Calls To API Using Cache

    - by namtax
    Hi, I am using ColdFusion to call the Last.fm API, using a CFC bundle sourced from here. I am concerned about going over the request limit, which is 5 requests per originating IP address per second, averaged over a 5-minute period. The CFC bundle has a central component which calls all the other components, which are split up into sections like "artist", "track", etc. This central component, lastFmApi.cfc, is initiated in my application and persisted for the lifespan of the application:

        // Application.cfc example
        <cffunction name="onApplicationStart">
            <cfset var apiKey = '[your api key here]' />
            <cfset var apiSecret = '[your api secret here]' />
            <cfset application.lastFm = CreateObject('component', 'org.FrankFusion.lastFm.lastFmApi').init(apiKey, apiSecret) />
        </cffunction>

    Now, if I want to call the API through a handler/controller, for example my artist handler, I can do this:

        <cffunction name="artistPage" cache="5 mins">
            <cfset qAlbums = application.lastFm.user.getArtist(url.artistName) />
        </cffunction>

    I am a bit confused about caching. I am caching each call to the API in this handler for 5 minutes, but does this make any difference? Each time someone hits a new artist page, won't this still count as a fresh hit against the API? Wondering how best to tackle this. Thanks.
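
    Caching per handler only helps repeated artists; a new artist is always a fresh API hit, so the cache bounds repeat traffic but not the overall rate. One approach (a sketch with made-up names; application.apiCache would be created in onApplicationStart) is to cache API responses by artist in the application scope with a timestamp, so only cache misses reach Last.fm:

        <!--- a sketch: application-scope response cache keyed by artist name,
              so only cache misses count against the Last.fm limit --->
        <cffunction name="getArtistCached" returntype="any" output="false">
            <cfargument name="artistName" type="string" required="true" />
            <cfset var key = "artist_" & arguments.artistName />
            <cfset var entry = structNew() />
            <cfif NOT structKeyExists(application.apiCache, key)
                  OR dateDiff("n", application.apiCache[key].at, now()) GTE 5>
                <cfset entry.at = now() />
                <cfset entry.data = application.lastFm.user.getArtist(arguments.artistName) />
                <cfset application.apiCache[key] = entry />
            </cfif>
            <cfreturn application.apiCache[key].data />
        </cffunction>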

