Search Results

Search found 6612 results on 265 pages for 'seconds'.

  • Dynamically change ViewPagerIndicator titles

    - by msal
    My current project uses some ListFragments to show rows of data. The rows are updated dynamically every few seconds, and the number of rows varies with every update and in every ListFragment. I would like to show the row count to the user, and I think the perfect place for it would be next to the fragment's title in the ViewPagerIndicator. I provided a sample image for better comprehension. Sadly, I am pretty clueless about how to achieve this. I tried the following: public class PagerAdapter extends FragmentPagerAdapter { private int numOne = 0; private int numTwo = 0; // ... @Override public CharSequence getPageTitle(int position) { switch (position) { case 0: return "List 1 (" + numOne + ")"; case 1: return "List 2 (" + numTwo + ")"; default: return ""; } } public void setNumOne(int num) { this.numOne = num; } public void setNumTwo(int num) { this.numTwo = num; } } When I call a setNumXXX() method, nothing happens until I move between fragments, which seems to trigger getPageTitle(). My question is: how can I force an update of the title(s) every time a num value changes?
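
    A minimal sketch of one common fix, assuming the indicator is Jake Wharton's ViewPagerIndicator library (its notifyDataSetChanged() re-queries getPageTitle(); verify against the version in use): have each setter invalidate the adapter, then nudge the indicator from wherever the update arrives. The names pagerAdapter, titleIndicator, and newRowCount are assumed:

        public void setNumOne(int num) {
            this.numOne = num;
            notifyDataSetChanged();               // invalidate the adapter's data
        }

        // wherever the row-count update arrives:
        pagerAdapter.setNumOne(newRowCount);
        titleIndicator.notifyDataSetChanged();    // makes the indicator re-read the titles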

  • Asp.Net Login Control very slow initial connection to Non-Trusted AD Domain

    - by Eric Brown - Cal
    The ASP.NET Login control is very slow making its initial connection to AD when authenticating against a different domain than the one the web server is a member of. The problem occurs both on the IIS server and with Visual Studio's built-in web server. The first attempt to use the control against another domain takes about 30 seconds. There is no trust relationship between the web server's domain and the other domains (we attempted connecting to several different domains). Subsequent connections execute quickly until the connection times out. Using Sysinternals Process Monitor to troubleshoot, there are two OpenQuery operations right before the delay to "C:\WINDOWS\assembly\GAC_MSIL\System.DirectoryServices\2.0.0.0_b03f5f7f11d50a3a\Netapi32.dll" with a result of NAME NOT FOUND, and right after the 30-second delay the TCP Send and TCP Receive events indicate communication begins with the AD server. Things we have tried: impersonating an administrator on the web server in the web.config; granting permissions on the CryptoKeys to NetworkService and ASPNET; specifying by IP instead of DNS name; multiple variations of specifying the name and LDAP server with domains and OUs; local hosts entries; looking for blocked ports (SYN_SENT) with netstat -an. Nslookup resolves all the domains and systems involved correctly, and tracert shows the correct routes. Any ideas or hints are greatly appreciated.
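
    One way to isolate the delay, sketched below under the assumption that the slow part is the DNS/NetBIOS domain-locator lookup rather than the bind itself: time a raw LDAP bind against an explicit host, outside the Login control and membership provider. Server name, user, and password here are placeholders:

        using System;
        using System.Diagnostics;
        using System.DirectoryServices;

        class AdBindTest
        {
            static void Main()
            {
                Stopwatch sw = Stopwatch.StartNew();
                // Binding to an explicit host (or IP) skips the domain-discovery
                // step that can stall for ~30s when no trust relationship exists.
                using (DirectoryEntry entry = new DirectoryEntry(
                    "LDAP://dc1.otherdomain.example/DC=otherdomain,DC=example",
                    @"otherdomain\testuser", "password", AuthenticationTypes.Secure))
                {
                    object native = entry.NativeObject;   // forces the bind
                }
                Console.WriteLine("Bind took {0} ms", sw.ElapsedMilliseconds);
            }
        }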

  • OpenGL particle system

    - by allan
    I'm really new to OpenGL, so bear with me. I'm trying to simulate a particle system using OpenGL but I can't get it to work; this is what I have so far: #include <GL/glut.h> int main (int argc, char **argv){ // data allocation, various non opengl stuff ............ glutInit(&argc, argv); glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE ); glutInitWindowPosition(100,100); glutInitWindowSize(size, size); glPointSize (4); glutCreateWindow("test gl"); ............ // initial state, not opengl ............ glViewport(0,0,size,size); glutDisplayFunc(display); glutIdleFunc(compute); glutMainLoop(); } void compute (void) { // change state not opengl glutPostRedisplay(); } void display (void) { glClear(GL_COLOR_BUFFER_BIT); glBegin(GL_POINTS); for(i = 0; i<nparticles; i++) { // two types of particles if (TYPE(particle[i]) == 1) glColor3f(1,0,0); else glColor3f(0,0,1); glVertex2f(X(particle[i]),Y(particle[i])); } glEnd(); glFlush(); glutSwapBuffers(); } I get a black window after a couple of seconds (the window has just the title bar before that). Where do I go wrong? Any help would be very much appreciated. Thanks. LE: the x and y coordinates of each particle are within the interval (0, size)
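
    Two likely culprits in the code above, sketched here: glPointSize() is called before glutCreateWindow(), so there is no GL context yet, and no projection is ever set, so coordinates in (0, size) land outside the default -1..1 clip volume, leaving the window blank. A minimal init to call once after glutCreateWindow():

        #include <GL/glut.h>

        void initGL(int size)
        {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluOrtho2D(0.0, (double)size, 0.0, (double)size); /* match the particle range */
            glMatrixMode(GL_MODELVIEW);
            glPointSize(4.0f);   /* needs a current context, hence after glutCreateWindow() */
            glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        }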

  • Where can I find a professional image gallery built on a javascript framework?

    - by user278457
    I'm looking for a Galleria replacement, hopefully using jQuery, though other JavaScript frameworks such as Prototype or MooTools are fine too. I used Galleria a while back, and I need a similar product now. Unfortunately, the devkick.com domain seems to have disappeared in the meantime and I'm wary of using products that aren't actively maintained. I'm willing to pay up to $50 per site in licensing costs if the product meets my needs. I'm specifically looking for a gallery with the following features:
    - Every image in the gallery preloads as soon as possible, not as the user clicks "next"
    - Minimalist default CSS to keep my subsequent styling headaches down, preferably a "darkroom" style by default, much as Galleria looks
    - Each element that constructs the image gallery should be simple and logical to reference with CSS
    - As easy to install as adding a CSS class to a single unordered list
    - No dependencies other than the core jQuery/other library; "easing" and other effects must be optional
    - Works on browsers back to IE6, Firefox 3, Safari (and iPhone), Chrome, Opera
    - Has a JavaScript API that lets me trigger callback functions on common events such as "user clicks next" or "image loads"
    - Degrades gracefully without JavaScript: either displays the images as a list, or just displays the first image in the list
    - Bonus: the gallery can display other content, such as video or external sites, like the modal boxes at shadowbox-js.com
    - Well documented
    - Minimal bandwidth requirement: the .js file should be ~10 KB minified
    - Bonus: the gallery source is hosted on a reliable CDN like Google's
    - Bonus: thumbnails for images do not appear until the main image has loaded
    - Bonus: includes the ability to set parameters with JSON to change common behaviours, such as slide/fade transitions or automatic image switching every X seconds

  • Displaying a progress bar using threading in a Win32 application

    - by kiddo
    In my application I have a simple module where I read files for some process that takes a few seconds, so I thought of displaying a progress bar (on a worker thread) while the files are being processed. I created a thread (code shown below) and also designed a dialog window with a progress control. I used the function MyThreadFunction below to display the progress bar, but it shows only once and disappears, and I am not sure how to make it work. I tried my best, in spite of the fact that I am new to threading. Reading the files: void ReadMyFiles() { for(int i = 0; i < fileCount ; i++) { CWinThread* myThread = AfxBeginThread((AFX_THREADPROC)MyThreadFunction,NULL); tempState = *(checkState + index); if(tempState == NOCHECKBOX) { //my operations } else//CHECKED or UNCHECKED { //myoperation } myThread->PostThreadMessage(WM_QUIT,NULL,NULL); } } Thread function: UINT MyThreadFunction(LPARAM lparam) { HWND dialogWnd = CreateWindowEx(0,WC_DIALOG,L"Processing...",WS_OVERLAPPEDWINDOW|WS_VISIBLE, 600,300,280,120,NULL,NULL,NULL,NULL); HWND pBarWnd = CreateWindowEx(NULL,PROGRESS_CLASS,NULL,WS_CHILD|WS_VISIBLE|PBS_MARQUEE,40,20,200,20, dialogWnd,(HMENU)IDD_PROGRESS,NULL,NULL); MSG msg; PostMessage( pBarWnd, PBM_SETRANGE, 0, MAKELPARAM( 0, 100 ) ); PostMessage(pBarWnd,PBM_SETPOS,0,0); while(PeekMessage(&msg,NULL,NULL,NULL,PM_NOREMOVE)) { if(msg.message == WM_QUIT) { DestroyWindow(dialogWnd); return 1; } AfxGetThread()->PumpMessage(); Sleep(40); } return 1; }
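
    The dialog flashes because a UI thread is created and told to quit on every loop iteration. A hedged sketch of the usual structure, reusing MyThreadFunction from the snippet above (other names assumed): start the thread once, do all the file work, then post WM_QUIT once at the end:

        void ReadMyFiles()
        {
            // One progress-UI thread for the whole operation.
            CWinThread* myThread = AfxBeginThread((AFX_THREADPROC)MyThreadFunction, NULL);

            for (int i = 0; i < fileCount; i++)   // i++, not fileCount++
            {
                // ... per-file work ...
            }

            // One quit message after all files are done, so the dialog
            // stays visible for the whole operation instead of flashing.
            myThread->PostThreadMessage(WM_QUIT, 0, 0);
        }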

  • Oracle output: cursor, file, or very long string?

    - by Klay
    First, the setup: I have a table in an Oracle10g database with spatial columns. I need to be able to pass in a spatial reference so that I can reproject the geometry into an arbitrary coordinate system. Ultimately, I need to compress the results of this projection to a zip file and make it available for download through a Silverlight project. I would really appreciate ideas as to the best way to accomplish this. In the examples below, the SRID is the Spatial reference ID integer used to convert the geometric points into a new coordinate system. In particular, I can see a couple of possibilities. There are many more, but this is an idea of how I'm thinking: a) Pass SRID to a dynamic view -- perform projection, output a cursor -- send cursor to UTL_COMPRESS -- write output to a file (somehow) -- send URL to Silverlight app b) Use SRID to call Oracle function from Silverlight app -- perform projection, output a string -- build strings into a file -- compress file using SharpZipLib library in .NET -- send bytestream back to Silverlight app I've done the first two steps of b), and the conversion of 100 points took about 7 seconds, which is unacceptably slow. I'm hoping it would be faster doing the processing totally in Oracle. If anyone can see potential problems with either way of doing this, or can suggest a better way, it would be very helpful. Thanks!
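
    If the 7 seconds came from projecting points one call at a time, a set-based reprojection inside Oracle is usually much faster. A hedged sketch (table and column names are placeholders): SDO_CS.TRANSFORM does the reprojection in one statement, and its output could then feed the cursor or UTL_COMPRESS step from option a):

        SELECT SDO_CS.TRANSFORM(t.geom, :target_srid) AS projected_geom
        FROM   my_spatial_table t
        WHERE  t.region_id = :region_id;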

  • Ajax long polling (Comet) + PHP on lighttpd v1.4.22: multiple instances problem

    - by fibonacci
    Hi, I am new to this site, so I really hope I will provide all the necessary information regarding my question. I've been trying to create a "new message arrived" notification using long polling. Currently I am initiating the polling request from the window.onLoad event of each page in my site. On the server side I have an infinite loop: while(1){ if(NewMessageArrived($current_user))break; sleep(10); } echo $newMessageCount; On the client side I have the following (simplified) Ajax functions: poll_new_messages(){ xmlhttp=GetXmlHttpObject(); //... xmlhttp.onreadystatechange=got_new_message_count; //... xmlhttp.send(); } got_new_message_count(){ if (xmlhttp.readyState==4){ updateMessageCount(xmlhttp.responseText); //... poll_new_messages(); } } The problem is that with each page load, the above loop starts again. The result is multiple infinite loops for each user, which eventually make my server hang. *The NewMessageArrived() function queries the MySQL DB for new unread messages. *At the beginning of the PHP script I run session_start() in order to obtain the $current_user value. I am currently the only user of this site, so it is easy for me to debug this behavior by writing time() to a file inside the loop. What I see is that the file is written more often than once every 10 seconds, but this starts only when I go from page to page. Please let me know if any additional information might help. Thank you.
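
    A hedged sketch of a bounded server loop: release the session file lock before sleeping (PHP serializes requests that share a session for as long as session_start() holds the lock), and give the loop a hard timeout so loops abandoned by navigation die on their own. NewMessageArrived() and $newMessageCount are from the snippet above; the session key is assumed:

        <?php
        session_start();
        $current_user = $_SESSION['user_id'];   // assumed session key
        session_write_close();                  // crucial: drop the session lock

        $deadline = time() + 30;                // hard cap instead of while(1)
        while (time() < $deadline) {
            if (NewMessageArrived($current_user)) break;
            sleep(2);
        }
        echo $newMessageCount;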

  • Detecting whether user stayed after prompting onBeforeUnload

    - by Daniel Magliola
    In a web app I'm working on, I'm capturing onBeforeUnload to ask the user whether he really wants to exit. Now, if he decides to stay, there are a number of things I'd like to do; what I'm trying to figure out is how to detect that he actually chose to stay. I can of course declare a setTimeout for "x" seconds, and if that fires, it would mean the user is still there (because we didn't get unloaded). The problem is that the user can take any amount of time to decide whether to stay or not... I was first hoping that while the dialog was showing, setTimeout calls would not fire, so I could set a timeout for a short time and it'd only fire if the user chose to stay. However, timeouts do fire while the dialog is shown, so that doesn't work. Another idea I tried was capturing mouseMove on the window/document. While the dialog is shown, mouseMove events indeed don't fire, except for one weird exception that really applies to my case, so that won't work either. Can anyone think of another way to do this? Thanks! (In case you're curious, the reason capturing mouseMove doesn't work is that I have an IFrame in my page containing a site from another domain. If at the time of unloading the page the focus is within the IFrame, then while the dialog shows, I get the mouseMove event firing ONCE when the mouse moves from inside the IFrame to the outside (at least in Firefox). That's probably a bug, but still, it's very likely to happen in our case, so I can't use this method.)
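
    One approach that sidesteps timing altogether, sketched under the assumption that any further event from the page can only happen if the user stayed: set a flag in onbeforeunload and treat the next focus/scroll/key/click as confirmation. If the user stays but never touches the page again, detection is simply delayed until he does:

        var pendingStayCheck = false;

        window.onbeforeunload = function () {
            pendingStayCheck = true;
            return "Are you sure you want to leave?";
        };

        function userStayed() {
            if (!pendingStayCheck) return;
            pendingStayCheck = false;
            // ... "user chose to stay" logic goes here ...
        }

        // Any of these firing after the dialog means the page survived it.
        ["focus", "scroll", "keydown", "click"].forEach(function (evt) {
            window.addEventListener(evt, userStayed, true);
        });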

  • Strange Xcode debugger behavior with UITableView datasource

    - by Tarfa
    Hey guys. I've got a perplexing issue. In my subclassed UITableViewController, my datasource methods lose their tableView reference depending on the lines of code I put inside the method. For example, in this code block: - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { // Return the number of sections. return 3; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { // Return the number of rows in the section. return 5; } // Customize the appearance of table view cells. - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { id i = tableView; static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } // Configure the cell... return cell; } the "id i = tableView;" line causes tableView to become nil (0x0) -- and it causes it to be nil before I ever start stepping into the method. If I insert an assignment statement above the "id i = tableView;" statement: CGFloat x = 5.0; id i = tableView; then tableView retains its pointer (i.e. is not nil) if I place the breakpoint after the "id i = tableView;" line. In other words, the breakpoint must be set after the "id i = tableView;" assignment in order for tableView to retain its pointer. If the breakpoint is set before the assignment and I just hang at that breakpoint for a bit, then after a couple of seconds the console logs this error message: Assertion failed: (cls), function getName, file /SourceCache/objc4_Sim/objc4-427.5/runtime/objc-runtime-new.mm, line 3990. Although the code works when I don't step through the method, I need my debugger to work! It makes programming kind of challenging when your debugging tools become your enemy. Anyone know what the cause and solution are? Thanks.

  • How do I track down sporadic ASP.NET performance problems in a production environment?

    - by Steve Wortham
    I've had sporadic performance problems with my website for a while now. 90% of the time the site is very fast, but occasionally it is just really, really slow -- 5-10 second load times. I thought I had narrowed it down to the server I was on, so I migrated everything to a new dedicated server from a completely different web hosting company, but the problems continue. I guess what I'm looking for is a good tool that'll help me track down the problem, because it's clearly not the hardware. I'd like to be able to log certain events in my ASP.NET code and have that same logger also track server performance/resources at the time. I could then look back at the logs and see exactly what my website was doing at the time of extreme slowness. Is there a .NET logging system that'll allow me to make calls into it from code while simultaneously tracking performance? What would you recommend?

  • Should I expect Comet to be this slow?

    - by Chad Johnson
    I have the following in a Rails controller: def poll records = [] start_time = Time.now.to_i while records.length == 0 do records = Something.uncached{Something.find(:all, :conditions => { :some_condition => false})} if records.length > 0 break end sleep 1 if Time.now.to_i - start_time >= 20 break end end responseData = [] records.each do |record| responseData << { 'something' => record.some_value } # Flag message as received. record.some_condition = true record.save end render :text => responseData.to_json end and then I have JavaScript performing an Ajax request, which waits for up to 20 seconds or until the controller method finds a record in the database. That works. function poll() { $.ajax({ url: '/my_controller/poll', type: 'GET', dataType: 'json', cache: false, data: 'time=' + new Date().getTime(), success: function(response) { // show response here }, complete: function() { poll(); }, error: function() { alert('error'); poll(); } }); } When I have 5-10 tabs open in my browser, my web application becomes super slow. Is this to be expected, or are there some obvious improvements I can make?

  • NULL-keys for key/value table

    - by user72185
    (Using Oracle) I have a table with key/value pairs like this: create table MESSAGE_INDEX ( KEY VARCHAR2(256) not null, VALUE VARCHAR2(4000) not null, MESSAGE_ID NUMBER not null ) I now want to find all the messages where key = 'someKey' and value is 'val1', 'val2' or 'val3' -- OR where the value is null, in which case there is no entry in the table at all. This is to save space; there would be a large number of keys with null values if I stored them all. I think this works: SELECT message_id FROM message_index idx WHERE ((key = 'someKey' AND value IN ('val1', 'val2', 'val3')) OR NOT EXISTS (SELECT 1 FROM message_index WHERE key = 'someKey' AND idx.message_id = message_id)) But it is extremely slow: it takes 8 seconds with 700K records in message_index, and there will be many more records and more search criteria when moving outside of my test environment. The primary key is (key, value, message_id): add constraint PK_KEY_VALUE primary key (KEY, VALUE, MESSAGE_ID) And I added another index on message_id to speed up searching for missing keys: create index IDX_MESSAGE_ID on MESSAGE_INDEX (MESSAGE_ID) I will be doing several of these key/value lookups in every search, not just one as shown above. So far I am doing them nested, where the output ids of one level are the input to the next, e.g.: SELECT message_id from message_index WHERE (key/value compare) AND message_id IN ( SELECT ... and so on ) What can I do to speed this up?
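
    A hedged rewrite to try: split the OR into a UNION so each branch can be driven by an index, instead of re-running the NOT EXISTS probe for every one of the 700K rows. The semantics are meant to match the original (distinct message ids); verify against the real data:

        SELECT message_id
        FROM   message_index
        WHERE  key = 'someKey'
          AND  value IN ('val1', 'val2', 'val3')
        UNION
        SELECT m.message_id
        FROM   (SELECT DISTINCT message_id FROM message_index) m
        WHERE  NOT EXISTS (SELECT 1
                           FROM   message_index i
                           WHERE  i.key = 'someKey'
                           AND    i.message_id = m.message_id);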

  • Threading Practice with Polling.

    - by Stacey
    I have a C# application that has to constantly read from a program; sometimes there is a chance it will not find what it needs, which will throw an exception. This is a limitation of the program it has to read from. This frequently caused the application to lock up as it tried to poll, so I solved it by spawning the polling off into a separate thread. However, watching the debugger, the thread is created and destroyed each time. I am uncertain whether this is typical or not; my question is, is this good practice, or am I using threading for the wrong purpose? ProgramReader { static Thread oThread; public static void Read( Program program ) { // check to see if the program exists if ( false ) oThread = new Thread(new ThreadStart(program.Poll)); if (oThread != null && !oThread.IsAlive) oThread.Start(); } } This is my general pseudocode. It runs every 10 seconds or so. Is this a huge hit to performance? The operation it performs is relatively small and lightweight, just repetitive.
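
    Spawning (and destroying) a thread per poll is legal but wasteful; one long-lived background poller is the more typical shape. A hedged sketch, with Program.Poll taken from the pseudocode above and the rest assumed scaffolding:

        using System;
        using System.Threading;

        static class ProgramReader
        {
            static Thread pollThread;

            public static void Start(Program program)
            {
                if (pollThread != null && pollThread.IsAlive) return;

                pollThread = new Thread(delegate ()
                {
                    while (true)
                    {
                        try { program.Poll(); }
                        catch (Exception) { /* target had nothing to read */ }
                        Thread.Sleep(TimeSpan.FromSeconds(10));
                    }
                });
                pollThread.IsBackground = true;   // dies with the process
                pollThread.Start();
            }
        }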

  • ASP.NET retrieve Average CPU Usage

    - by Sam
    Last night I did a load test on a site and found that one of my shared caches is a bottleneck. I'm using a ReaderWriterLockSlim to control updates of the data. Unfortunately, at one point there are ~200 requests trying to update the data at approximately the same time. This also coincided with CPU usage spikes. The data being updated is in the ASP.NET Cache. What I'd like to do is skip the cache and hit the database on another machine whenever CPU usage is around 75%. My problem is that I don't know how expensive it is to create a new performance counter to check the CPU usage. Also, I would probably like the average CPU usage over the last 2 or 3 seconds, but I can't sit there and calculate the CPU time myself, as that would take longer than updating the cache currently takes. Is there an easy way to get the average CPU usage? Are there any drawbacks to this? I'm also considering totaling the wait count for the lock and switching over to the database at a certain threshold. My concern with that approach is that changing hardware might allow more locks with less strain on the system, finding the right balance for the threshold would be cumbersome, and it doesn't take into account any other load on the machine. But it's a simple approach, and simple is better 99% of the time.
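
    Creating the counter is the expensive part; sampling it is cheap. A hedged sketch: one shared counter created once, sampled by a timer, with an exponential moving average standing in for "the last 2-3 seconds" (the 75% threshold and smoothing factor are arbitrary):

        using System.Diagnostics;

        static class CpuMonitor
        {
            static readonly PerformanceCounter Cpu =
                new PerformanceCounter("Processor", "% Processor Time", "_Total");
            static float average;

            // Call from a timer roughly once per second. Note the first
            // NextValue() always returns 0; it needs two samples to compute.
            public static void Sample()
            {
                average = 0.6f * average + 0.4f * Cpu.NextValue();
            }

            public static bool Overloaded()
            {
                return average >= 75f;
            }
        }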

  • Serial: write() throttling?

    - by damian
    Hi everyone, I'm working on a project sending serial data to control animation of LED lights, which need to stay in sync with a sound engine. There seems to be a large serial write buffer (OSX (POSIX) + FTDI chipset usb serial device), so without manually restricting the transmission rate, the animation system can get several seconds ahead of the serial transmission. Currently I'm manually restricting the serial write speed to the baudrate (8N1 = 10 bytes serial frame per 8 bytes data, 19200 bps serial - 1920 bytes per second max), but I am having a problem with the sound drifting out of sync over time - it starts fine, but after 10 minutes there's a noticeable (100ms+) lag between the sound and the lights. This is the code that's restricting the serial write speed (called once per animation frame, 'elapsed' is the duration of the current frame, 'baudrate' is the bps (19200)): void BufferedSerial::update( float elapsed ) { baud_timer += elapsed; if ( bytes_written > 1024 ) { // maintain baudrate float time_should_have_taken = (float(bytes_written)*10)/float(baudrate); float time_actually_took = baud_timer; // sleep if we have > 20ms lag between serial transmit and our write calls if ( time_should_have_taken-time_actually_took > 0.02f ) { float sleep_time = time_should_have_taken - time_actually_took; int sleep_time_us = sleep_time*1000.0f*1000.0f; //printf("BufferedSerial::update sleeping %i ms\n", sleep_time_us/1000 ); delayUs( sleep_time_us ); // subtract 128 bytes bytes_written -= 128; // subtract the time it should have taken to write 128 bytes baud_timer -= (float(128)*10)/float(baudrate); } } } Clearly there's something wrong, somewhere. A much better approach would be to be able to determine the number of bytes currently in the transmit queue, and try and keep that below a fixed threshold. Any advice appreciated.
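
    The drift likely comes from the incremental bookkeeping: subtracting 128 bytes and a recomputed time each pass lets rounding errors accumulate. A hedged sketch of drift-free pacing against one absolute clock; getWallSeconds(), start_seconds, and total_bytes_written are assumed members:

        void BufferedSerial::update(float /*elapsed*/)
        {
            // Bytes the wire could have carried since the stream started.
            double elapsed_total = getWallSeconds() - start_seconds;
            double allowed_bytes = elapsed_total * (baudrate / 10.0);  // 10 bits per data byte

            if (total_bytes_written > allowed_bytes) {
                double ahead = (total_bytes_written - allowed_bytes) / (baudrate / 10.0);
                if (ahead > 0.02)                       // same 20 ms tolerance as before
                    delayUs((int)(ahead * 1000000.0));  // the existing sleep helper
            }
        }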

  • Highlight image borders with Ajax request.

    - by Tek
    First, some visualization of the code. I have the following images, dynamically generated from jQuery upon user request: <img id="i13133" src="someimage.jpg" /> <img id="i13232" src="someimage1.jpg" /> <img id="i14432" src="someimage2.jpg" /> <img id="i16432" src="someimage3.jpg" /> <img id="i18422" src="someimage4.jpg" /> I have an Ajax loop that repeats every 15 seconds using jQuery, and it contains the following code (the if statement is inside the Ajax loop, where imgId is the ID returned by the Ajax call): //Passes the IDs retrieved from Ajax, if the IDs exist on the page the animation is triggered. if ( $('#i' + imgId ).length ) { var pickimage = '#i' + imgId; var stop = false; function highlight(pickimage) { $(pickimage).animate({color: "yellow"}, 1000, function () { $(pickimage ).animate({color: "black"}, 1000, function () { if (!stop) highlight(pickimage); }); }); } // Start the loop highlight(pickimage); } It works great, even with multiple images, but I had originally used this with one image. The problem is I need an interrupt, something like: $(img).click(function () { stop = true; }); There are two problems: 1.) This obviously stops all animations; I can't wrap my head around how to write something that only stops the animation of the image that's clicked. 2.) The Ajax retrieves IDs, and sometimes the same IDs appear more than once every few minutes, which means the animations would be repeated on top of each other if the image exists. I could use some help figuring out how to detect whether an animation is already running on an image, and do nothing if it is.
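
    A hedged sketch of both fixes: keep the stop flag per element with .data() so a click only stops that image, and use the :animated selector to skip images whose loop is already running (animating 'color' still requires the color plugin evidently already in use here):

        function highlight($img) {
            if ($img.data('stopped') || $img.is(':animated')) return;
            $img.animate({color: 'yellow'}, 1000, function () {
                $img.animate({color: 'black'}, 1000, function () {
                    highlight($img);
                });
            });
        }

        // In the Ajax loop: only start a loop that isn't already running.
        var $img = $('#i' + imgId);
        if ($img.length && !$img.is(':animated')) highlight($img);

        // Stop just the clicked image (use .live()/.delegate() on older jQuery).
        $(document).on('click', 'img', function () {
            $(this).data('stopped', true).stop(true);
        });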

  • Decoding a jpg in the background in WP7

    - by Shahar Prish
    I have a bunch of apps in the marketplace, and so far I have been able, by changing my functionality or going the extra mile, to work around the issue of being unable to decode a JPEG in the background into a WriteableBitmap. Now I am finding a situation where I can't think of a good way to work around it. I need to decode the image I get from MediaLibrary, reduce its resolution to something manageable (800x800), rotate it potentially, and save it to local storage. By far the thing that takes the most time (80%) is decoding the bitmap to 800x800 -- it takes between 700 ms and 1000 ms. A user may add 7-10 images when starting, which translates to ~10 seconds of waiting for the images to be added. I tried doing this lazily, but at some point you need to pay the piper, and the app essentially stutters for ~1000 ms at that point; the experience is not great. Is there an alternative I am missing for loading the image in the background somehow? (Note on why CreateOptions.BackgroundCreation is no good for me: it loads the image into a BitmapImage, which is great if you want to just use it, but not so great for what I need to do, which is create a copy in isolated storage.)

  • iPhone Development - Location Accuracy

    - by Mustafa
    I'm using the following conditions in order to make sure that the location I get has adequate accuracy, in my case kCLLocationAccuracyBest. But the problem is that I still get inaccurate locations. // Filter out nil locations if(!newLocation) return; // Make sure that the location returned has the desired accuracy if(newLocation.horizontalAccuracy < manager.desiredAccuracy) return; // Filter out points that are out of order if([newLocation.timestamp timeIntervalSinceDate:oldLocation.timestamp] < 0) return; // Filter out points created before the manager was initialized NSTimeInterval secondsSinceManagerStarted = [newLocation.timestamp timeIntervalSinceDate:locationManagerStartDate]; if(secondsSinceManagerStarted < 0) return; // Also, make sure that the cached location was not returned by the CLLocationManager (it's current) - Check for 5 seconds difference if([newLocation.timestamp timeIntervalSinceReferenceDate] < [[NSDate date] timeIntervalSinceReferenceDate] - 5) return; When I activate the GPS, I get inaccurate results before I actually get an accurate result. What methods do you use to get accurate/precise location information?
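
    One thing worth double-checking in the snippet above: horizontalAccuracy is a radius in meters, so smaller is better, and kCLLocationAccuracyBest is a negative sentinel rather than a real radius. Comparing horizontalAccuracy < desiredAccuracy therefore rejects the accurate fixes and keeps the bad ones. A hedged replacement for that check (the 100 m threshold is an arbitrary example):

        if (newLocation.horizontalAccuracy < 0) return;       // invalid fix
        if (newLocation.horizontalAccuracy > 100.0) return;   // not yet accurate enough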

  • get_user running in kernel mode returns an error

    - by Fangkai Yang
    Hi all, I have a problem with the get_user() macro. What I did is as follows: I run the following program: int main() { int a = 20; printf("address of a: %p", &a); sleep(200); return 0; } When the program runs, it outputs the address of a, say 0xbff91914. Then I pass this address to a module running in kernel mode that retrieves the contents at this address (at the time I did this, I also made sure the process didn't terminate, because I put it to sleep for 200 seconds...): The address is first sent as a string, and I cast it to a pointer type: int * ptr = (int*)simple_strtol(buffer, NULL,16); printk("address: %p",ptr); // I use this line to make sure the cast is correct. When running, it outputs bff91914, as expected. int val = 0; int res; res = get_user(val, (int*) ptr); However, res is always nonzero, meaning that get_user returns an error. I am wondering what the problem is.... Thank you!! -- Fangkai
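
    The likely cause: get_user() dereferences the address space of the *current* process, i.e. whichever process invoked the module's handler, not the sleeping test program. A hedged sketch of reading another task's memory instead (the exact API varies by kernel version, access_process_vm() is not exported to modules on all kernels, and error handling and locking are elided):

        struct task_struct *task = pid_task(find_vpid(target_pid), PIDTYPE_PID);
        int val = 0;

        if (task)
            /* read sizeof(val) bytes at ptr inside task's address space */
            access_process_vm(task, (unsigned long)ptr, &val, sizeof(val), 0);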

  • Laggy interface with NSSearchField hooked up to an NSArrayController via bindings

    - by Simone Manganelli
    So I've got an NSSearchField hooked up directly to an NSArrayController via bindings, attached to the filterPredicate, so that without any code the user can just type in the NSSearchField and filter the list of objects in the NSArrayController presented to him in the interface (an NSCollectionView, to be specific). The NSSearchField is hooked up to provide live searching, so that the NSCollectionView is filtered instantly as the user types, not after waiting for a short period for the user to stop typing. However, this makes the interface really laggy. Typing is delayed by 0.5-1 seconds, and it seems like the NSCollectionView is trying to animate every rearrangement of items for each portion of the search string the user enters. What I'd like is for the searching to be live, the typing in the search field to be fluid, and the results to filter as fast as possible. Is there a way to do this via bindings, or will I need to put in some custom code that triggers the filterPredicate on a separate thread? (Note that I've got a custom sorting algorithm set up on the NSArrayController; removing it seems to help a bit with the lagginess, but not completely.)
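
    If the binding stays laggy, one hedged workaround is to stop binding the field directly and debounce the keystrokes in a delegate, so the expensive filter-plus-custom-sort runs once the user pauses rather than on every character. Outlet names and the predicate key below are assumed:

        - (void)controlTextDidChange:(NSNotification *)aNotification
        {
            [NSObject cancelPreviousPerformRequestsWithTarget:self
                                                     selector:@selector(applyFilter)
                                                       object:nil];
            [self performSelector:@selector(applyFilter) withObject:nil afterDelay:0.2];
        }

        - (void)applyFilter
        {
            NSString *term = [searchField stringValue];
            NSPredicate *p = [term length]
                ? [NSPredicate predicateWithFormat:@"name CONTAINS[cd] %@", term]
                : nil;
            [arrayController setFilterPredicate:p];
        }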

  • How to reuse results with a schema for end of day stock-data

    - by Vishalrix
    I am creating a database schema to be used for technical analysis: top volume gainers, top price gainers, etc. I have checked answers to questions here, like the design question. Having taken the hint from boe100's answer there, I have a schema modeled pretty much on it, thusly:
    Symbol - char 6 //primary
    Date - date //primary
    Open - decimal 18, 4
    High - decimal 18, 4
    Low - decimal 18, 4
    Close - decimal 18, 4
    Volume - int
    Right now this table, containing end-of-day (EOD) data, will be about 3 million rows for 3 years. Later, when I get/need more data, it could be 20 million rows. The front end will be making requests like "give me the top price gainers on date X over Y days". That request is one of the simpler ones, and as such is not too costly, time-wise, I assume. But a request like "give me the top volume gainers for the last 10 days, with the previous 100 days acting as baseline" could prove 10-100 times costlier. The result of such a request would be a float which signifies how many times the volume has grown, etc. One option I have is adding a column for each such result, and if the user asks for volume gain in 10 days over 20 days, that would require another table. The total number of such tables could easily cross 100, especially if I start storing other results as tables, like MACD-10 or MACD-100, each of which will require its own column. Is this a feasible solution? Another option is keeping the results in cached HTML files and presenting them to the user. I don't have much experience in web development, so to me it looks messy, but I could be wrong (of course!). Is that an option too? Let me add that I am/will be using mod_perl to present the response to the user, with much of the work on the MySQL database being done using Perl. I would like a response time of 1-2 seconds.
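
    Rather than a column (or table) per window combination, the ratios can usually be computed on demand from the base table. A hedged sketch for "top volume gainers, last 10 days vs. the previous 100 days as baseline"; the table name is assumed, the ? placeholders are the reference date, and conditional AVG (which ignores the NULLs a CASE without ELSE produces) works even on old MySQL:

        SELECT symbol,
               AVG(CASE WHEN date >  DATE_SUB(?, INTERVAL 10 DAY) THEN volume END) /
               AVG(CASE WHEN date <= DATE_SUB(?, INTERVAL 10 DAY) THEN volume END)
                   AS volume_gain
        FROM   eod
        WHERE  date >  DATE_SUB(?, INTERVAL 110 DAY)
          AND  date <= ?
        GROUP  BY symbol
        ORDER  BY volume_gain DESC
        LIMIT  20;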

  • NHibernate slow mapping

    - by Rob A
    My question is: what can I do to determine the cause of the slowness, or what can I do to speed it up without knowing the exact cause? I am running a simple query, and it appears that the mapping back to the entities is taking forever. The result set is 350 rows, which is not much data in my opinion. IRepository repo = ObjectFactory.GetInstance<IRepository>(); var q = repo.Query<Order>(item => item.Ordereddate > DateTime.Now.AddDays(-40)); foreach (var order in q) { Console.WriteLine(order.TransactionNumber); } The profiler is telling me the query executes in 7ms / 35257ms; I am assuming that the former is the actual response from the DB and the latter is the time it takes NHibernate to do its magic. 35 seconds is too long. This is a simple mapping: one table, nested components, using the fluent interface for the mappings. I just start up a simple console app and run the one query. The slowness is measured after the SessionFactory is initialized, there should only be one session, and I am not using a transaction. Thanks

  • Would this method work to scale out SQL queries?

    - by David
    I have a database containing a single huge table. At the moment a query can take anything from 10 to 20 minutes, and I need that to go down to 10 seconds. I have spent months trying different products like GridSQL. GridSQL works fine, but it uses its own parser, which does not have all the needed features. I have also optimized my database in various ways without getting the speedup I need. I have a theory on how one could scale out queries, meaning that I utilize several nodes to run a single query in parallel. The idea is to take an incoming SQL query and simply run it exactly as-is on all the nodes. When the results are returned to a coordinator node, the same query is run on the union of the result sets. I realize that an aggregate function like average needs to be rewritten into a count and a sum for the nodes, and that the coordinator divides the sum of the sums by the sum of the counts to get the average. What kinds of problems could not easily be solved using this model? I believe one issue would be count distinct. Edit: I am getting so many nice suggestions, but none have addressed the method.
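
    The average example written out, with illustrative table names: each node returns partial aggregates that recombine safely, which is exactly the property COUNT(DISTINCT ...) lacks, since duplicates can span nodes:

        -- On every node:
        SELECT SUM(price) AS s, COUNT(price) AS c FROM orders;

        -- On the coordinator, over the union of the node rows:
        SELECT SUM(s) / SUM(c) AS avg_price FROM node_results;

        -- COUNT(DISTINCT x) cannot be recombined from per-node counts;
        -- each node must ship its distinct values instead:
        --   per node:     SELECT DISTINCT x FROM t;
        --   coordinator:  COUNT(DISTINCT x) over the unioned values.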

  • Speeding up inner-joins and subqueries while restricting row size and table membership

    - by hiffy
    I'm developing an RSS feed reader that uses a Bayesian filter to filter out boring blog posts. The Stream table is meant to act as a FIFO buffer from which the webapp will consume 'entries'. I use it to store the temporary relationship between entries, users, and Bayesian filter classifications. After a user marks an entry as read, it will be added to the metadata table (so that a user isn't presented with material they have already read) and deleted from the stream table. Every three minutes, a background process repopulates the Stream table with new entries (i.e. whenever the daemon adds new entries after checking the RSS feeds for updates). Problem: the query I came up with is hella slow. More importantly, the Stream table only needs to hold one hundred unread entries at a time; that would reduce duplication, make processing faster, and give me some flexibility in how I display the entries. The query (takes about 9 seconds on 3600 items with no indexes): insert into stream(entry_id, user_id) select entries.id, subscriptions_users.user_id from entries inner join subscriptions_users on subscriptions_users.subscription_id = entries.subscription_id where subscriptions_users.user_id = 1 and entries.id not in (select entry_id from metadata where metadata.user_id = 1) and entries.id not in (select entry_id from stream where user_id = 1); The query explained: insert into stream all of the entries from a user's subscription list (subscriptions_users) that the user has not read (i.e. they do not exist in metadata) and which do not already exist in the stream. Attempted solution: adding limit 100 to the end speeds up the query considerably, but upon repeated executions it keeps adding a different set of 100 entries that do not already exist in the table (with each successive query taking longer and longer). This is close, but not quite what I wanted to do. Does anyone have any advice (NoSQL?) or know a more efficient way of composing the query?
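
    A hedged rewrite to try: anti-joins instead of the two NOT IN subqueries (which MySQL of that era tends to re-run per candidate row), a deterministic order, and the LIMIT folded in. Indexes on metadata(user_id, entry_id) and stream(user_id, entry_id) are assumed:

        INSERT INTO stream (entry_id, user_id)
        SELECT e.id, su.user_id
        FROM   entries e
        INNER JOIN subscriptions_users su
                ON su.subscription_id = e.subscription_id
        LEFT JOIN metadata m ON m.user_id = su.user_id AND m.entry_id = e.id
        LEFT JOIN stream   s ON s.user_id = su.user_id AND s.entry_id = e.id
        WHERE  su.user_id = 1
          AND  m.entry_id IS NULL      -- not already read
          AND  s.entry_id IS NULL      -- not already queued
        ORDER  BY e.id                 -- oldest unseen first, matching the FIFO intent
        LIMIT  100;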
