Search Results

  • A loading screen for a C# WPF ListBox

    - by evan
    I'm using a list box with, on average, about 500 thumbnails (items) that can be sorted and searched. Since I'm using default databinding and search descriptors (which I've heard are slow due to reflection), the list takes a noticeable pause of a few seconds when loading, sorting, and searching (the list dynamically updates based on the contents of the search box, so the first one or two letters typed are really slow). I don't think I can fully do away with reflection given the timeframe for the project, and speed isn't super essential, but I'd like some kind of graphical indication of the delay so it doesn't confuse the user. How could I do something like a website video loading screen, where the listbox grays out and some kind of loading spinner indicates it's processing until the list is ready? Even just graying it out with the word "Loading..." for a few seconds could work. Any ideas? Thanks in advance for your help and suggestions!
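
    A minimal sketch of the gray-out approach, assuming a view model that exposes the Thumbnails collection and a bool IsLoading property (both names are made up): an overlay Border sits on top of the ListBox in the same Grid and is shown or hidden through WPF's built-in BooleanToVisibilityConverter. Set IsLoading to true before the filter/sort work starts (ideally run that work off the UI thread) and back to false when it finishes.

        <Window x:Class="ThumbnailBrowser.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Thumbnails" Height="480" Width="640">
            <Window.Resources>
                <BooleanToVisibilityConverter x:Key="BoolToVis" />
            </Window.Resources>
            <Grid>
                <ListBox ItemsSource="{Binding Thumbnails}" />
                <!-- Semi-transparent overlay shown only while IsLoading is true -->
                <Border Background="#80000000"
                        Visibility="{Binding IsLoading, Converter={StaticResource BoolToVis}}">
                    <TextBlock Text="Loading..." Foreground="White" FontSize="24"
                               HorizontalAlignment="Center" VerticalAlignment="Center" />
                </Border>
            </Grid>
        </Window>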

    Read the article

  • Using document.createDocumentFragment() with child DOM elements that contain jQuery .data()

    - by taber
    I want to use document.createDocumentFragment() to create an optimized collection of HTML elements that carry ".data" from jQuery (v 1.4.2), but I'm stuck on how to get the data back out of the elements. Here's my code:

        var genres_html = document.createDocumentFragment();
        $(xmlData).find('genres').each(function(i, node) {
            var genre = document.createElement('a');
            $(genre).addClass('button')
                .attr('href', 'javascript:void(0)')
                .html( $(node).find('genreName:first').text() )
                .data('genreData', { id: $(node).find('genreID:first').text() });
            genres_html.appendChild( genre.cloneNode(true) );
        });
        $('#list').html(genres_html);
        // error: $('#list a:first').data('genreData') is null
        alert($('#list a:first').data('genreData').id);

    What am I doing wrong here? I suspect it's probably something to do with .cloneNode() not carrying over the data when the element is appended to the documentFragment. Sometimes there are tons of rows, so I want to keep things optimized, speed-wise. Thanks!
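
    One plausible explanation, sketched below with the same variable names as the snippet above: jQuery 1.4 stores .data() in an internal cache keyed off a JavaScript property it adds to the element, and cloneNode(true) copies attributes and children but not plain JS properties, so the clone loses its link to the stored data. Since each anchor is freshly created, the clone is unnecessary; appending the element itself keeps the data intact. This is a hedged sketch, not a tested fix.

        var genres_html = document.createDocumentFragment();
        $(xmlData).find('genres').each(function(i, node) {
            var genre = document.createElement('a');
            $(genre).addClass('button')
                .attr('href', 'javascript:void(0)')
                .html($(node).find('genreName:first').text())
                .data('genreData', { id: $(node).find('genreID:first').text() });
            genres_html.appendChild(genre);   // append directly instead of genre.cloneNode(true)
        });
        $('#list').empty().append(genres_html);
        alert($('#list a:first').data('genreData').id);   // should now find the stored id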

    Read the article

  • Mapping functions of 2D numpy arrays

    - by perimosocordiae
    I have a function foo that takes an NxM numpy array as an argument and returns a scalar value. I have an AxNxM numpy array data, over which I'd like to map foo to give me a resultant numpy array of length A. Currently, I'm doing this: result = numpy.array([foo(x) for x in data]) It works, but it seems like I'm not taking advantage of the numpy magic (and speed). Is there a better way? I've looked at numpy.vectorize and numpy.apply_along_axis, but neither works for a function of 2D arrays. EDIT: I'm doing boosted regression on 24x24 image patches, so my AxNxM is something like 1000x24x24. What I called foo above applies a Haar-like feature to a patch (so it's not terribly computationally intensive).
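
    A hedged sketch of the vectorized route: if foo can be expressed in terms of whole-array operations, it can be applied to the entire AxNxM stack at once instead of in a Python loop. The example assumes the Haar-like feature is just a weighted sum over the patch (the haar mask below is made up for illustration); if foo is more involved, the list-comprehension version above may remain the simplest option.

        import numpy as np

        data = np.random.rand(1000, 24, 24)   # stand-in for the A x N x M patch stack
        haar = np.zeros((24, 24))
        haar[:12, :] = 1.0                    # toy two-band Haar-like mask
        haar[12:, :] = -1.0

        # Equivalent to np.array([np.sum(patch * haar) for patch in data]), but vectorized:
        result = np.einsum('anm,nm->a', data, haar)   # shape (A,)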

    Read the article

  • Detect car acceleration in Android app?

    - by Stud33
    I want to incorporate some accelerometer code into an Android application I'm working on and want to see if this is possible. Basically, what I need is for the code to detect car-acceleration motion. I don't want to determine speed with the code, just to distinguish that the phone is in a car which has accelerated (hence the car is moving for the first time). I have gone through many different accelerometer applications to see if this motion produces a viable profile to work from, and it appears it does. I'm just looking for something that pops up a "Hello World" dialog when it detects you're in the car and it's moving for the first time down the street. Any help would be appreciated, and a simple yes or no as to whether it's possible would work. I would also be interested in compensating anyone who is capable of doing this. I need this done like yesterday, so please let me know. Thank You, JTW
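
    A hedged sketch of the sensor plumbing only (the threshold and class name are made up, and real "car started moving" detection would need filtering over several seconds rather than a single reading): register a SensorEventListener for the accelerometer and flag motion when the acceleration magnitude, with gravity removed, exceeds a small threshold.

        import android.content.Context;
        import android.hardware.Sensor;
        import android.hardware.SensorEvent;
        import android.hardware.SensorEventListener;
        import android.hardware.SensorManager;

        public class AccelWatcher implements SensorEventListener {
            private static final float THRESHOLD_MS2 = 1.5f;   // assumed value, needs tuning

            public AccelWatcher(Context context) {
                SensorManager sm = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
                Sensor accel = sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
                sm.registerListener(this, accel, SensorManager.SENSOR_DELAY_NORMAL);
            }

            @Override
            public void onSensorChanged(SensorEvent event) {
                float x = event.values[0], y = event.values[1], z = event.values[2];
                double magnitude = Math.sqrt(x * x + y * y + z * z) - SensorManager.GRAVITY_EARTH;
                if (magnitude > THRESHOLD_MS2) {
                    // sustained readings above the threshold would be the cue to show the dialog
                }
            }

            @Override
            public void onAccuracyChanged(Sensor sensor, int accuracy) { }
        }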

    Read the article

  • Shortest-path algorithms which use a space-time tradeoff?

    - by Chris Mounce
    I need to find shortest paths in an unweighted, undirected graph. There are algorithms which can find a shortest path between two nodes, but this can take time. There are also algorithms for computing shortest paths for all pairs of nodes in the graph, but storing such a lookup table would take lots of disk space. What I'm wondering: Is there an algorithm which offers a space-time tradeoff that's somewhere between these two extremes? In other words, is there a way to speed up a shortest-path search, while using less disk space than would be occupied by an all-pairs shortest-path table? I know there are ways to efficiently store lookup tables for this problem, and I already have a couple of ideas for speeding up shortest-path searches using precomputed data. But I don't want to reinvent the wheel if there's already some established algorithm that solves this problem.
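
    One established middle ground is landmark (ALT) preprocessing, sketched here in Python as an illustration rather than a recommendation: run BFS once from a handful of landmark nodes, store only those distance arrays (O(landmarks x nodes) space instead of O(nodes^2)), and use the triangle inequality to get an admissible A* heuristic at query time.

        from collections import deque

        def bfs_distances(adj, source):
            """Unweighted single-source distances; adj maps node -> iterable of neighbours."""
            dist = {source: 0}
            queue = deque([source])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            return dist

        def build_landmarks(adj, landmarks):
            """Precompute and store one distance table per landmark."""
            return {lm: bfs_distances(adj, lm) for lm in landmarks}

        def lower_bound(landmark_dists, v, t):
            """Admissible estimate of d(v, t) usable as an A* heuristic (assumes a connected graph)."""
            return max(abs(d[v] - d[t]) for d in landmark_dists.values())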

    Read the article

  • TIBRV: Remote vs Local RVD

    - by jsw
    When connected to a local RVD, a sending application is shielded from network interruptions, and the send-message methods only block for the time it takes the message to reach the local RVD process. With a remote RVD, the sending application is no longer shielded from network interruptions, and the send-message methods block for the time it takes to hop across the network to reach the remote RVD process. Is my understanding correct? The documentation is vague regarding remote daemons. I'm mostly concerned with how reliable and performant sending a message will be from the perspective of the sending application. Introducing unnecessary blocking on the client side when sending a message (especially a network hop) is a big no-no in this application. The speed at which messages reach the consumer is not of the utmost importance. With this in mind, is a remote RVD out of the question?

    Read the article

  • Create date efficiently

    - by Dave Jarvis
    On Pavel's page is the following function:

        CREATE OR REPLACE FUNCTION makedate(year int, dayofyear int) RETURNS date AS $$
          SELECT (date '0001-01-01'
                  + ($1 - 1) * interval '1 year'
                  + ($2 - 1) * interval '1 day')::date
        $$ LANGUAGE sql;

    I have the following code: makedate(y.year, 1). What is the fastest way in PostgreSQL to create a date for January 1st of a given year? Pavel's function would lead me to believe it is:

        date '0001-01-01' + y.year * interval '1 year' + interval '1 day';

    My thought would be more like:

        to_date( y.year||'-1-1', 'YYYY-MM-DD');

    I'm looking for the fastest way using PostgreSQL 8.4. (The query that uses the date function can select between 100,000 and 1 million records, so it needs speed.) Thank you!
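
    A hedged way to settle this on 8.4 is simply to time the candidate expressions over a synthetic year column with generate_series and EXPLAIN ANALYZE; the statements below are illustrative, and the second one keeps Pavel's "- 1" offset so that year 2010 really maps to 2010-01-01.

        EXPLAIN ANALYZE
        SELECT to_date(y.year || '-1-1', 'YYYY-MM-DD')
        FROM generate_series(1, 1000000) AS y(year);

        EXPLAIN ANALYZE
        SELECT (date '0001-01-01' + (y.year - 1) * interval '1 year')::date
        FROM generate_series(1, 1000000) AS y(year);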

    Read the article

  • FoxPro to C#: What's the best method, ODBC, OLE DB, or something else?

    - by Martin Labelle
    We need to read data from FoxPro 8 with C#. I'm going to do some operations and push some of that data to a SQL Server database. We are not sure of the best method to read the data. I saw OLE DB and ODBC; which is best? Requirements:
    1. The export program will run each night, but my company runs 24 hours a day.
    2. The DBF files can sometimes be huge.
    3. We DON'T need to modify data.
    4. Our system, which uses FoxPro, is quite unstable: I need a way that ABSOLUTELY does not corrupt data and, ideally, does not lock the DBF files while reading.
    5. Speed is a minor requirement: it must be quick, but requirement #4 is the most important.
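
    A minimal sketch of the OLE DB route, assuming the Visual FoxPro OLE DB provider (VFPOLEDB) is installed; the folder path and table name are placeholders, and whether reads avoid locking entirely depends on the provider settings and on how the FoxPro application itself opens the DBFs.

        using System;
        using System.Data.OleDb;

        class DbfReader
        {
            static void Main()
            {
                var connStr = @"Provider=VFPOLEDB.1;Data Source=C:\data\foxpro;Mode=Read";
                using (var conn = new OleDbConnection(connStr))
                using (var cmd = new OleDbCommand("SELECT * FROM customers", conn))
                {
                    conn.Open();
                    using (OleDbDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine(reader[0]);   // push rows to SQL Server here instead
                        }
                    }
                }
            }
        }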

    Read the article

  • VS 2008 SP1 text editor flickering over remote desktop connection

    - by AltairDusk
    I am connecting from a Windows 7 x64 machine to my dev machine running Windows XP SP3 using the built-in Remote Desktop client. For most apps it works fine with no problems, but in Visual Studio, whenever I am typing, the entire text editor keeps redrawing. I stumbled across this question: http://stackoverflow.com/questions/873849/vs-2008-sp1-over-remote-desktop-constant-repainting and I have tried all of the suggestions in it to no effect, including resetting all VS settings back to default and then disabling the suggested settings. Has anyone found a reliable solution to this? I feel like I'm going insane with the screen constantly refreshing when I'm working from home. Some additional information: Remote Desktop is set to run at 1680x1050, 15-bit color, with the "Low-speed broadband" experience setting and all options except "Visual styles" and "Persistent bitmap caching" unchecked. Visual Studio 2008 Team System is running on the dev machine with Service Pack 1 and Power Commands installed.

    Read the article

  • Is it possible to cache JSP bytecode to avoid recompiles w/ Tomcat?

    - by Computer Guru
    Hi, is there any way of caching the bytecode for JSP webapps, in particular when using Tomcat as the servlet container? I'm getting really fed up with Tomcat taking up all the CPU for 10 minutes while it compiles 4 different webapps every time I restart it. I'm already using Jikes to "speed up" the compiles, but it's really killing me. The code does not change unless the webapp is upgraded (very rarely), and I cannot believe there is no way to cache the compiled Java bytecode instead of recompiling it each and every time. I'd appreciate any advice on the matter!
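
    One option worth considering is precompiling the JSPs at build time with Jasper's Ant task, so the generated servlet classes are already compiled when Tomcat starts. This is a rough sketch only: the property values are placeholders and the jar locations in the classpath vary between Tomcat versions.

        <project name="precompile-jsps" default="jspc">
          <property name="tomcat.home" value="/opt/tomcat"/>
          <property name="webapp.dir" value="/var/webapps/mysite"/>

          <taskdef classname="org.apache.jasper.JspC" name="jasper2">
            <classpath>
              <fileset dir="${tomcat.home}/lib" includes="*.jar"/>
              <fileset dir="${tomcat.home}/bin" includes="*.jar"/>
            </classpath>
          </taskdef>

          <target name="jspc">
            <!-- Generate .java sources and a web.xml fragment for the precompiled servlets -->
            <jasper2 uriroot="${webapp.dir}"
                     outputDir="${webapp.dir}/WEB-INF/src"
                     webXmlFragment="${webapp.dir}/WEB-INF/generated_web.xml"/>
            <!-- Compile the generated sources into WEB-INF/classes -->
            <javac srcdir="${webapp.dir}/WEB-INF/src"
                   destdir="${webapp.dir}/WEB-INF/classes"
                   debug="on">
              <classpath>
                <fileset dir="${tomcat.home}/lib" includes="*.jar"/>
              </classpath>
            </javac>
          </target>
        </project>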

    Read the article

  • INNER JOIN vs LEFT JOIN performance in SQL Server

    - by Ekkapop
    I've created a SQL command that uses INNER JOIN across 9 tables; anyway, this command takes a very long time (more than five minutes). So a colleague suggested I change INNER JOIN to LEFT JOIN, because the performance of LEFT JOIN is better, which at first contradicted what I knew. After I changed it, the speed of the query improved significantly. I want to know why LEFT JOIN is faster than INNER JOIN here. My SQL command looks like the below:

        SELECT * FROM A
        INNER JOIN B ON ...
        INNER JOIN C ON ...
        INNER JOIN D ... and so on
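
    A hedged way to investigate (table and column names below are placeholders): compare the two variants' actual execution plans and I/O rather than wall-clock time alone, and keep in mind that LEFT JOIN is not a drop-in replacement for INNER JOIN, since it also returns rows from the left table that have no match, so a "faster" LEFT JOIN may simply be producing a different result set or a different join order.

        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        SELECT a.*, b.*
        FROM dbo.A AS a
        INNER JOIN dbo.B AS b ON b.AId = a.Id;

        SELECT a.*, b.*
        FROM dbo.A AS a
        LEFT JOIN dbo.B AS b ON b.AId = a.Id;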

    Read the article

  • How would you adblock using Python?

    - by regomodo
    I'm slowly building a web browser in PyQt4 and like the speed I'm getting out of it. However, I want to combine easylist.txt with it. I believe Adblock uses this to block HTTP requests made by the browser. How would you go about it using Python/PyQt4?
    [edit1] OK, I think I've set up Privoxy. I haven't set up any additional filters and it seems to work. The PyQt4 code I've tried looks like this:

        self.proxyIP = "127.0.0.1"
        self.proxyPORT = 8118
        proxy = QNetworkProxy()
        proxy.setType(QNetworkProxy.HttpProxy)
        proxy.setHostName(self.proxyIP)
        proxy.setPort(self.proxyPORT)
        QNetworkProxy.setApplicationProxy(proxy)

    However, this does absolutely nothing, and I cannot make sense of the docs or find any examples.
    [edit2] I've just noticed that if I change self.proxyIP to my actual local IP rather than 127.0.0.1, the page doesn't load. So something is happening.
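
    An alternative that skips the external proxy entirely, sketched here with a made-up class name and a toy substring blocklist (a real easylist.txt parser would be more involved): give the QWebPage a custom QNetworkAccessManager and reroute blocked requests to an empty URL so they fail immediately.

        from PyQt4.QtCore import QUrl
        from PyQt4.QtNetwork import QNetworkAccessManager, QNetworkRequest

        class BlockingNetworkAccessManager(QNetworkAccessManager):
            def __init__(self, blocked_substrings, parent=None):
                QNetworkAccessManager.__init__(self, parent)
                self.blocked_substrings = blocked_substrings  # e.g. patterns parsed from easylist.txt

            def createRequest(self, operation, request, device=None):
                url = str(request.url().toString())
                if any(pattern in url for pattern in self.blocked_substrings):
                    # Swap in an empty request so the blocked resource never loads.
                    return QNetworkAccessManager.createRequest(
                        self, operation, QNetworkRequest(QUrl()), device)
                return QNetworkAccessManager.createRequest(self, operation, request, device)

        # Usage, assuming `view` is an existing QWebView:
        # view.page().setNetworkAccessManager(
        #     BlockingNetworkAccessManager(["/ads/", "doubleclick."]))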

    Read the article

  • How to prepare for a Java test?

    - by Zenzen
    OK, so in two days I have a test for my dream job (well, it's an internship, but still!) and for quite some time now I've been reading the SCJP guide book to prepare myself. BUT seeing how slow my reading speed is, I guess finding a website with some "Java essentials" or "standard Java test questions" and going through it in those two days would be a better idea. Don't misunderstand me, I do have Java experience and it's "only" an internship, but we all know what programming tests look like and how tricky they can be. So are there any online resources you could recommend? I know two days is not much, but I was only informed about the test a day or two ago.

    Read the article

  • Removing duplicate strings from a massive array in Java efficiently?

    - by Preator Darmatheon
    I'm considering the best possible way to remove duplicates from an (unsorted) array of strings. The array contains millions or tens of millions of strings. The array is already populated, so the optimization goal is only removing dups, not preventing dups from being added in the first place. I was thinking along the lines of doing a sort and then binary search to get log(n) lookups instead of linear ones. That would give me n log n + n operations, which, although better than an unsorted (n^2) approach, still seems slow. (I was also considering hashing, but I'm not sure about the throughput.) Please help! I'm looking for an efficient solution that addresses both speed and memory, since there are millions of strings involved, without using the Collections API!
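
    A hedged sketch of the sort-then-sweep idea, using only Arrays.sort (which works directly on the String[] and avoids java.util collections): sort once, then a single linear pass keeps the first element of each run of equal strings, for O(n log n) time and no extra array.

        import java.util.Arrays;

        public class Dedup {
            /** Sorts the array and packs the unique strings at the front; returns their count. */
            public static int dedupInPlace(String[] a) {
                if (a.length == 0) return 0;
                Arrays.sort(a);
                int write = 1;
                for (int read = 1; read < a.length; read++) {
                    if (!a[read].equals(a[write - 1])) {
                        a[write++] = a[read];
                    }
                }
                return write;
            }
        }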

    Read the article

  • PHP website Optimization

    - by ana
    I have a high-traffic website and I need to make sure my site is fast enough to serve pages to everyone quickly. I searched Google for articles about speed and optimization, and here's what I found:
    - Cache the page in memory: this is very fast, but if I need to change the content of the page I have to remove it from the cache and then re-save the file on disk.
    - Save it to disk: this is very easy to maintain, but every time the page is accessed I have to read from disk.
    Which method should I go with? Thanks
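
    A minimal sketch of the disk option with a time-to-live, so edited pages refresh on their own instead of requiring manual invalidation (the cache path and TTL below are placeholders):

        <?php
        $cacheFile = '/tmp/cache_' . md5($_SERVER['REQUEST_URI']) . '.html';
        $ttl = 300; // seconds before the cached copy is considered stale

        if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
            readfile($cacheFile);   // serve the cached copy and skip page generation
            exit;
        }

        ob_start();
        // ... build the page as usual ...
        echo "expensive page content";
        $html = ob_get_clean();
        file_put_contents($cacheFile, $html);
        echo $html;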

    Read the article

  • Help with parsing XML using lxml

    - by Casey
    Hi, to implement a college project I need to handle XML files. For this I chose lxml after doing some research. However, I can't seem to find a good tutorial to help me get started, and I can't decide which type of parsing I should use. My XML files don't have that much data, but speed is the main concern, not memory. Can anyone point me to a tutorial or book that would help? I have already tried the tutorial on the lxml site, but that didn't help me much. Is there some small application I can look at to get the hang of parsing XML with lxml?
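
    A tiny hedged example of the simplest route, an in-memory parse with lxml.etree plus XPath (the file and tag names are made up); for small documents where speed matters more than memory, this is usually the place to start before worrying about iterparse or SAX-style APIs.

        from lxml import etree

        tree = etree.parse("grades.xml")
        for student in tree.xpath("//student"):
            print(student.get("id"), student.findtext("name"))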

    Read the article

  • CruiseControl / NANT <copy> Task

    - by Striker
    We have a website with all the media (CSS/images) stored in a media folder. The media folder and its 95 subdirectories contain about 400 files in total. We have a CruiseControl project that monitors just the media directory for changes and, when triggered, copies those files to our integration server. Unfortunately, our integration server is at a remote location, so even when copying 2-3 files the NAnt task takes 4+ minutes. I believe the combination of the sheer number of directories/files and our network latency is causing the NAnt task to run slowly; it appears to compare the modified dates of the local and remote copies of every file. I really want to speed this up. My initial thought was: instead of trying to copy the whole media folder, can I get the list of file modifications from CruiseControl and copy just those files, saving the NAnt task the work of comparing them all for changes? Is there a way to do what I am asking, or is there a better way to achieve the same performance gains?
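
    One alternative, named plainly: instead of NAnt's copy task comparing every file over the WAN, shell out to robocopy, which transfers only the files that actually changed. This is a rough sketch with placeholder paths; note that robocopy returns non-zero exit codes even on success, hence failonerror="false".

        <project name="deploy" default="deploy-media">
          <target name="deploy-media">
            <exec program="robocopy.exe" failonerror="false">
              <arg value="C:\site\media" />
              <arg value="\\integration\site\media" />
              <arg value="/MIR" />   <!-- mirror: copy new/changed files, remove deleted ones -->
              <arg value="/FFT" />   <!-- tolerate coarse timestamp granularity across file systems -->
            </exec>
          </target>
        </project>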

    Read the article

  • jQuery noConflict not working in IE8 only

    - by slik
    I have a website using the Prototype framework and I am looking to use a jQuery plugin. Everything works, just not in IE8. It works in IE7, which amazes me. Any idea what may be wrong?

        jQuery.noConflict();
        function OpenUp(sURL){
            window.open(sURL,null,'height=560,width=820,status=yes,toolbar=yes,menubar=yes,location=yes,resizable=yes,scrollbars=yes',false);
        }
        jQuery(document).ready(function($) {
            $("head").append("<link>");
            css = $("head").children(":last");
            css.attr({
                rel: "stylesheet",
                type: "text/css",
                href: "/my/docs/jquery.simplyscroll.css"
            });
            $("#scroller").simplyScroll({
                autoMode: 'loop',
                framerate: 1,
                speed: 1
            });
        });

    I also tried the following:

        var $j = jQuery.noConflict();
        var j = jQuery.noConflict();

    Everything works, just not in IE8.

    Read the article

  • How to convert thousands of PDF files to a single Postscript file in a specified order

    - by tggagne
    I've found multiple options for converting a few to several PDFs into PostScript, but many are command-line programs with command-line limitations (this application lives on .NET). Our application generates tens of thousands of PDFs that we need to send to a printer, except BEFORE the PostScript is printed we need to edit it to insert print command instructions (duplex, tray pulls, highlight color, etc.). I think a perfect solution might allow us to write the PDFs to a stream and simultaneously read the output stream, so we can edit the PostScript before writing it to a file. Of course, if I must first create a file containing all 10,000 PDFs and edit it in an additional pass, I'm OK with that too. I should mention that speed is important: I need to print 10,000 at a time and keep the printers busy 24 hours a day.
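
    A hedged sketch of one way to get a single editable PostScript file from .NET: drive Ghostscript's ps2write device as an external process. The Ghostscript path and file names are assumptions, and the @pdf-list.txt response file (one input PDF path per line) is there to avoid command-line length limits with thousands of inputs.

        using System.Diagnostics;

        class PdfBatchToPostScript
        {
            static void Main()
            {
                var psi = new ProcessStartInfo
                {
                    FileName = @"C:\Program Files\gs\bin\gswin64c.exe",
                    Arguments = "-dNOPAUSE -dBATCH -sDEVICE=ps2write -sOutputFile=batch.ps @pdf-list.txt",
                    UseShellExecute = false
                };
                using (var gs = Process.Start(psi))
                {
                    gs.WaitForExit();   // batch.ps can then be post-processed to insert printer commands
                }
            }
        }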

    Read the article

  • Clever ways to better test GPS code using only the iPhone simulator?

    - by Patty
    I've been playing around with the iPhone SDK, using MapKit and Core Location. What are some of the tricks you can use to better test things while still on the simulator (long before I have to try it out on my iPhone)? Is there a way to use NSTimer to regularly get 'pretend' values for location, heading, speed, etc.? The simulator only giving one location, with no movement, really limits its usefulness for testing.

    Read the article

  • Will Delphi be there in the future?

    - by devdude
    Yes, there is a 2009 version. I know Delphi has had a big community for years (10 plus), and I believe you could create native Windows executables with it before Visual Basic got up to speed (with all its DLL nightmares). But is it future-proof? Is there a need or market for a non-cross-platform, native, all-in-one executable? Will Embarcadero (ex CodeGear, ex Borland) continue to push it? Why is it so expensive? Who (not a company) can afford it, in order to learn it?

    Read the article

  • Optimising local image loading/rendering on iPhone

    - by Tricky
    Hi, I'm looking to create an interface where the user can navigate through large volumes of images. Each image has a 128x128 thumbnail that I wish to display, and the interface will be kind of similar to Cover Flow in operation. I have this all working in principle, but I'm getting stuck when navigating through content at speed. The interface begins to stutter and becomes jerky. I believe this is primarily because of disk I/O and the cost of rendering each image. Is there any way this can be handed over to a separate thread simply, defaulting to a grayed-out thumbnail until the image has loaded? How have Apple managed to achieve this in Cover Flow? Many thanks,

    Read the article

  • Console application with multithreading on a single core

    - by Harsha
    Hello all, I am reposting my question on multithreading on a single-core processor. The original question is: http://stackoverflow.com/questions/2856239/will-multi-threading-increase-the-speed-of-the-calculation-on-single-processor I have been asked: at any given time, only one thread is allowed to run on a single core, so why do people use multithreading in an application? Let's say you are running a console application; it is very much possible to write the application to run on the main thread alone, but still people go for multithreading.

    Read the article

  • Generate Info (wrapper) Class from stored procedure

    - by Adem
    Hello everybody, I am in a crucial project and am trying to speed up the development phase by using CodeSmith to generate the business classes, DAL and info classes for the tables of my project. There are about 50 tables with parent-child and many-to-many relationships, and for retrieving data I have to code several inner joins in stored procedures. I have to combine fields from many tables, and this makes working with the info classes difficult. Is there any way to generate an info class from a stored procedure, or, to be more exact, is there a way to parse the result set of the stored procedure and generate an info class with a property for every column in that result set? Please, if anyone can give me some advice, tell me how to achieve this. Best Regards
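
    A hedged sketch of the "parse the result set" idea: open the stored procedure with CommandBehavior.SchemaOnly, read GetSchemaTable(), and emit one property per column. The connection string and procedure name are placeholders, and procedures that require parameters would need them supplied before the schema can be read.

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class InfoClassGenerator
        {
            static void Main()
            {
                var connStr = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";
                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand("dbo.GetCustomerOrders", conn) { CommandType = CommandType.StoredProcedure })
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader(CommandBehavior.SchemaOnly))
                    {
                        DataTable schema = reader.GetSchemaTable();
                        Console.WriteLine("public class GetCustomerOrdersInfo");
                        Console.WriteLine("{");
                        foreach (DataRow col in schema.Rows)
                        {
                            string name = (string)col["ColumnName"];
                            string type = ((Type)col["DataType"]).Name;
                            Console.WriteLine("    public {0} {1} {{ get; set; }}", type, name);
                        }
                        Console.WriteLine("}");
                    }
                }
            }
        }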

    Read the article

  • Problem With HTML5 Application Cache Whitelist - Won't Ignore Items

    - by Ryan Donnelly
    I'm trying to use the HTML5 Application Cache to speed some things up on an iPhone web app. It works great for storing images, CSS and JS, but the problem is that it also tries to store the HTML. I haven't been able to get it to ignore the HTML and stop storing it in the cache. From what I've read, I have to "whitelist" the files and directories that I want loaded from the network no matter what. I've tried listing the files I want cached explicitly, and I've tried adding a series of entries under the "NETWORK:" heading:

        *
        /
        /*
        http://mysite.com
        http://mysite.com/
        http://mysite.com/*

    None of them seem to work. Is there any way to ignore HTML files by MIME type or anything? Any advice would be appreciated. Ryan
    P.S. Of course, my site is not mysite.com; I just used that for simplicity.
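
    For reference, a minimal manifest sketch with placeholder paths. One detail that often explains this behaviour: any page whose <html manifest="..."> attribute points at the manifest becomes a "master entry" and is cached regardless of the NETWORK section, so the whitelist alone cannot keep that HTML out of the cache; the usual workaround is to reference the manifest only from a page you are willing to cache (or from a hidden iframe).

        CACHE MANIFEST
        # static assets to cache offline
        CACHE:
        /css/app.css
        /js/app.js
        /img/logo.png

        # everything else always goes to the network
        NETWORK:
        *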

    Read the article
