Search Results

Search found 6231 results on 250 pages for 'slow diver'.


  • What happens to the controls, the iframes, or the divs that are hidden or disabled?

    - by user287745
    Hidden: does it get transferred to the user side? Disabled: does it get transferred to the user side? What I want is this: an .aspx page will have many iframes to display different pages, and there will be many div tags to display CSS-formatted information. To explain what I mean by "many": I have to merge a complete website of 30 .aspx pages into one single page, and simply combining everything has resulted in one extremely huge page. My concern is that on localhost it loads fast, but on the online server, accessed by numerous people for educational purposes, the site (ONE PAGE) will slow down terribly. To overcome this I thought of using the hidden and disabled options. Can anyone help, and possibly suggest an improved way of achieving the above? Yes, it sounds silly, but this is the requirement. Thank you.
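
    A note and a rough sketch: in ASP.NET, a server control whose Visible property is false is not rendered to the client at all, whereas content hidden with CSS (display:none) is still sent in the HTML and still costs bandwidth. One way to keep a single page light is to leave each iframe without a src and only point it at its page when the user opens that section. The ids and URLs below are placeholders, plain JavaScript assumed:

        <iframe id="section1" style="display:none"></iframe>

        <script>
        function showSection(id, url) {
            var frame = document.getElementById(id);
            if (!frame.src) {            // load the page only on first use
                frame.src = url;
            }
            frame.style.display = 'block';
        }
        // e.g. showSection('section1', 'Page1.aspx');
        </script>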

    Read the article

  • Is this a correct Interlocked synchronization design?

    - by Dan Bryant
    I have a system that takes Samples. I have multiple client threads in the application that are interested in these Samples, but the actual process of taking a Sample can only occur in one context. It's fast enough that it's okay for it to block the calling process until Sampling is done, but slow enough that I don't want multiple threads piling up requests. I came up with this design (stripped down to minimal details):

        public class Sample
        {
            private static Sample _lastSample;
            private static int _isSampling;

            public static Sample TakeSample(AutomationManager automation)
            {
                //Only start sampling if not already sampling in some other context
                if (Interlocked.CompareExchange(ref _isSampling, 0, 1) == 0)
                {
                    try
                    {
                        Sample sample = new Sample();
                        sample.PerformSampling(automation);
                        _lastSample = sample;
                    }
                    finally
                    {
                        //We're done sampling
                        _isSampling = 0;
                    }
                }
                return _lastSample;
            }

            private void PerformSampling(AutomationManager automation)
            {
                //Lots of stuff going on that shouldn't be run in more than one context at the same time
            }
        }

    Is this safe for use in the scenario I described?
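
    One thing worth noting: Interlocked.CompareExchange takes (ref location, value, comparand), so the call above compares _isSampling against 1 and stores 0, which never actually sets the flag. A minimal sketch of the usual claim/release shape, assuming the intent is to flip the flag from 0 to 1 (an illustrative rewrite, not the author's code):

        public static Sample TakeSample(AutomationManager automation)
        {
            // Compare against 0 (idle) and, only if it is 0, store 1 (busy).
            if (Interlocked.CompareExchange(ref _isSampling, 1, 0) == 0)
            {
                try
                {
                    Sample sample = new Sample();
                    sample.PerformSampling(automation);
                    _lastSample = sample;
                }
                finally
                {
                    Interlocked.Exchange(ref _isSampling, 0);   // release the flag
                }
            }
            return _lastSample;
        }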

    Read the article

  • SQL indexing on varchar

    - by alex
    I have a table whose columns are a varchar(50) and a float, and I need to (very quickly) get the float associated with a given string. Even with indexing, this is rather slow. I know, however, that each string is associated with an integer, which I know at the time of lookup; each string maps to a unique integer, but each integer does not map to a unique string. One might think of it as a tree structure. Is there anything to be gained by adding this integer to the table, indexing on it, and using a query like SELECT floatval FROM mytable WHERE phrase=givenstring AND assoc=givenint? This is Postgres, and if you couldn't tell, I have very little experience with databases.
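
    A minimal sketch of that idea, reusing the names from the question (the literal values are placeholders): a composite index that leads with the integer lets Postgres narrow to the rows for that integer before comparing strings.

        CREATE INDEX mytable_assoc_phrase_idx ON mytable (assoc, phrase);

        SELECT floatval
        FROM mytable
        WHERE assoc = 42                 -- the integer known at lookup time
          AND phrase = 'given string';

    Whether this beats a plain index on the string column depends on how selective the integer is; running EXPLAIN ANALYZE on both variants is the quickest way to tell.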

    Read the article

  • Dynamically formatting a string

    - by TofuBeer
    Before I wander off and roll my own, I was wondering if anyone knows of a way to do the following sort of thing. Currently I am using MessageFormat to create some strings. I now have the requirement that some of those strings will have a variable number of arguments. For example (current code):

        MessageFormat.format("{0} OR {1}", array[0], array[1]);

    Now I need something like:

        // s will have "1 OR 2 OR 3"
        String s = format(new int[] { 1, 2, 3 });

    and:

        // s will have "1 OR 2 OR 3 OR 4"
        String s = format(new int[] { 1, 2, 3, 4 });

    There are a couple of ways I can think of for creating the format string, such as having one format String per number of arguments (there is a finite number of them, so this is practical, but it seems bad), or building the string dynamically (there are a lot of them, so this could be slow). Any other suggestions?
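
    A sketch of one way to skip the format string entirely, using java.util.StringJoiner (Java 8+; on older JDKs a StringBuilder loop has the same shape). The class and method names are just for illustration:

        import java.util.StringJoiner;

        public class OrFormatter {
            // Joins any number of values with " OR ", e.g. format(1, 2, 3) -> "1 OR 2 OR 3"
            static String format(int... values) {
                StringJoiner joiner = new StringJoiner(" OR ");
                for (int v : values) {
                    joiner.add(Integer.toString(v));
                }
                return joiner.toString();
            }

            public static void main(String[] args) {
                System.out.println(format(1, 2, 3));      // 1 OR 2 OR 3
                System.out.println(format(1, 2, 3, 4));   // 1 OR 2 OR 3 OR 4
            }
        }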

    Read the article

  • Recursive function causing a stack overflow

    - by dbyrne
    I am trying to write a simple sieve function to calculate prime numbers in Clojure. I've seen this question about writing an efficient sieve function, but I am not to that point yet. Right now I am just trying to write a very simple (and slow) sieve. Here is what I have come up with:

        (defn sieve [potentials primes]
          (if-let [p (first potentials)]
            (recur (filter #(not= (mod % p) 0) potentials)
                   (conj primes p))
            primes))

    For small ranges it works fine, but causes a stack overflow for large ranges:

        user=> (sieve (range 2 30) [])
        [2 3 5 7 11 13 17 19 23 29]
        user=> (sieve (range 2 15000) [])
        java.lang.StackOverflowError (NO_SOURCE_FILE:0)

    I thought that by using recur this would be a non-stack-consuming looping construct? What am I missing?
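
    A sketch of one likely explanation and fix (an assumption about the cause, not a verified diagnosis): filter is lazy, so each recur wraps another unrealized filter around the previous one, and forcing that whole nested stack at once overflows even though recur itself is fine. Realizing the filtered sequence eagerly with doall keeps the nesting flat:

        (defn sieve [potentials primes]
          (if-let [p (first potentials)]
            (recur (doall (filter #(not= (mod % p) 0) potentials))
                   (conj primes p))
            primes))

        ;; (sieve (range 2 15000) []) should now complete without a StackOverflowError.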

    Read the article

  • How to programmatically fill a database

    - by Dwaine Bailey
    Hi there. I currently have an iPhone app that reads data from an external XML file at start-up and then writes this data to the database (it only reads/writes data that the user's app has not seen before, though). My concern is that there is going to be a back catalogue of several years' worth of data, and that the first time the user runs the app it will have to read all of this in and be atrociously slow. Our proposed solution is to include this data pre-built in the application's database, so that it doesn't have to load the archival data on first launch; it is already in the app when they purchase it. My question is whether there is a way to automatically populate this database with data from, say, an XML file or something. The database is SQLite. I would populate it by hand, but obviously that would take a very long time, so I was just wondering if anybody had a more programmatic solution.
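
    A sketch of a build-time approach (not app code): a small script converts the XML back catalogue into a ready-made SQLite file that ships inside the app bundle. Python is used here for illustration, and the table and element names (entries/entry, id, title, body) are placeholders:

        import sqlite3
        import xml.etree.ElementTree as ET

        def build_database(xml_path, db_path):
            conn = sqlite3.connect(db_path)
            conn.execute("""CREATE TABLE IF NOT EXISTS entries (
                                id INTEGER PRIMARY KEY,
                                title TEXT,
                                body TEXT)""")
            root = ET.parse(xml_path).getroot()
            rows = [(e.findtext("id"), e.findtext("title"), e.findtext("body"))
                    for e in root.findall("entry")]
            conn.executemany("INSERT INTO entries (id, title, body) VALUES (?, ?, ?)", rows)
            conn.commit()
            conn.close()

        if __name__ == "__main__":
            build_database("backcatalogue.xml", "Seed.sqlite")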

    Read the article

  • Fast word count function in Vim

    - by Greg Sexton
    I am trying to display a live word count in the Vim statusline. I do this by setting my statusline in my .vimrc and inserting a function into it. The idea of this function is to return the number of words in the current buffer. This number is then displayed on the statusline. This should work nicely, as the statusline is updated at just about every possible opportunity, so the count will always remain 'live'. The problem is that the function I have currently defined is slow, and so Vim is obviously sluggish when it is used for all but the smallest files, because this function is executed so frequently. In summary, does anyone have a clever trick for producing a function that is blazingly fast at calculating the number of words in the current buffer and returning the result?
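
    A sketch using the built-in wordcount() function (available in newer Vims), which avoids re-counting the buffer with a substitute trick on every redraw; the function name is arbitrary:

        " Returns the number of words in the current buffer.
        function! LiveWordCount() abort
          return wordcount().words
        endfunction

        set statusline=%f\ %{LiveWordCount()}\ words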

    Read the article

  • Learning Algorithms and Data Structures Fundamentals

    - by valya
    Can you recommend a book or (better!) a site with many hard problems and exercises about data structures? I'm already answering Project Euler questions, but those questions are about interesting yet uncommon algorithms; I have hardly ever used even a simple tree. Maybe there is a site with exercises like: hey, you need to calculate this: ... . Do it using a tree. Now do it using a zipper. Upload your C (Haskell, Lisp, even Pascal, Fortress or Go) solution. Oh, your solution is so slow! Self-education is very hard when you are trying to learn very common, fundamental things. How can I help myself with them without attending courses or anything like that?

    Read the article

  • .NET Data Adapter Timeout SP Issue

    - by A-B
    We have a SQL Server stored procedure that runs fine in SQL Manager directly; it does a rather large calculation but takes only 5-10 seconds max to run. However, when we call it from the .NET app via a data adapter, it times out. The timeout, however, happens before the timeout period should: we set it to 60 seconds and it still times out in about 20 seconds or less. I've Googled the issue and seen others note cases where an SP works fine directly but is slow via a data adapter call. Any ideas on how to resolve this?
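
    One thing commonly checked in this situation (a sketch with placeholder names, not a diagnosis): the timeout that applies while the query runs is SqlCommand.CommandTimeout (default 30 seconds), which is separate from the connection string's Connect Timeout, so it has to be set on the command the adapter actually uses.

        // needs: using System.Data; using System.Data.SqlClient;
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.BigCalculationProc", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandTimeout = 60;              // seconds; 0 waits indefinitely

            var adapter = new SqlDataAdapter(cmd);
            var results = new DataTable();
            adapter.Fill(results);                // Fill opens and closes the connection itself
        }

    If the timeout is already on the command, the usual next suspect is a different execution plan (for example, due to parameter sniffing) being used for the ADO.NET call than for the ad-hoc run.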

    Read the article

  • How do I make controls/elements move with inertia?

    - by Kris Erickson
    Modern UIs are starting to give their UI elements nice inertia when moving. Tabs slide in, pages transition, and even some listboxes and scroll elements have nice inertia to them (the iPhone, for example). What is the best algorithm for this? It is more than just gravity, as they speed up and then slow down as they fall into place. I have tried various formulas for speeding up to a maximum (terminal) velocity and then slowing down, but nothing I have tried "feels" right. It always feels a little bit off. Is there a standard for this, or is it just a matter of playing with various numbers until it looks/feels right?
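
    A sketch of one common approach (an assumption, not a standard): rather than simulating acceleration directly, interpolate the position along a fixed-duration ease-out curve, so the element starts fast and settles smoothly into place. Python is used here only to keep the arithmetic readable:

        def ease_out_cubic(t):
            """t runs from 0.0 (start) to 1.0 (end); returns eased progress 0..1."""
            return 1 - (1 - t) ** 3

        def position_at(start, end, elapsed, duration):
            t = min(elapsed / duration, 1.0)
            return start + (end - start) * ease_out_cubic(t)

        # Example: a panel sliding from x=0 to x=300 over 0.4s, sampled at t=0.1s.
        print(position_at(0, 300, 0.1, 0.4))   # ~173.4 - fast at first, then settling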

    Read the article

  • With Eclipse, in a PHP project with SVN, how to "tell" Eclipse to find autocompletion choices only in the trunk?

    - by ranskalainen
    Hello. Using Eclipse on a PHP project, I recently created a tag in my SVN repository. Since that day, when I'm working on a class in my trunk and press Ctrl+Space in my code, Eclipse gets really, really slow (it sometimes even freezes), and if I'm lucky it gives me two suggestions for autocompletion: one referencing the method/class from the tag, and one referencing the method/class from the trunk. But right now, only the reference from the trunk is useful to me. Is there a way to limit where Eclipse parses the code when it builds autocompletion suggestions? Thank you

    Read the article

  • iPhone SDK Nested For Loop performance

    - by Skeep
    Hi all, I have an NSArray of string IDs and an NSDictionary of NSDictionary objects. I am currently looping through the string ID array to match the id value in the NSDictionary. There are around 200 NSDictionary objects and only 5 or so string IDs. My current code is:

        for (NSString *Str in aArr) {
            for (NSDictionary *a in apArr) {
                if ([a objectForKey:@"id"] == Str) {
                    NSLog(@"Found!");
                }
            }
        }

    The performance of the above code is really slow, and I was wondering if there is a better way to do this?
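
    A sketch of one alternative, assuming apArr enumerates the 200 dictionaries the way the loop above does: index them by their "id" value once, so each of the 5 lookups becomes a hash lookup instead of a scan. Note also that == compares object pointers; comparing string contents needs isEqualToString:.

        NSMutableDictionary *byId = [NSMutableDictionary dictionaryWithCapacity:[apArr count]];
        for (NSDictionary *a in apArr) {
            [byId setObject:a forKey:[a objectForKey:@"id"]];
        }

        for (NSString *Str in aArr) {
            NSDictionary *match = [byId objectForKey:Str];
            if (match != nil) {
                NSLog(@"Found: %@", match);
            }
        }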

    Read the article

  • Range partition skip check

    - by user289429
    We have a large amount of data partitioned on a year value using range partitioning in Oracle; each partition contains data for only one year. When we write a query targeting a specific year, Oracle fetches the rows from that partition but still checks whether the year is the one we specified. Since this year column is not part of the index, it fetches the year from the table and compares it. We have seen that any time the query has to fetch table data it gets too slow. Can we somehow avoid Oracle comparing the year values, since we know for sure that the partition contains information for only one year?
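
    A sketch of one option sometimes used (the table, column and partition names are placeholders): address the partition explicitly and drop the year predicate, so there is no year comparison left to perform.

        -- instead of: SELECT col1, col2 FROM sales WHERE sale_year = 2009
        SELECT col1, col2
        FROM sales PARTITION (p_2009);

    This ties the query to the partition naming scheme; the usual alternative is to add the year column to the index so the comparison is answered from the index instead of the table.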

    Read the article

  • Improve responsiveness of $(...).click on touch interface

    - by Bram W.
    I need access to a web-based on-screen keyboard which will be used on a touch interface. This example looks nice and functional; however, when I try it on an iPad, the responsiveness is very low IMHO. It's not comfortable to use, and sometimes whole words are misspelled due to the slow response. Is there a way to improve the experience on this type of on-screen keyboard? This implementation uses the $('#id').click(...); function to process the events. Is there a better way to achieve the goal of typing on the screen? Are there better plugins out there? Note: the final application will run on different types of devices. For several reasons, native on-screen keyboards are not an option.
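
    A sketch of one commonly tried change (jQuery assumed, selector and handler names are placeholders): mobile browsers delay the synthetic click event, so binding touchstart where it is available, and falling back to click elsewhere, often feels snappier.

        var eventName = ('ontouchstart' in window) ? 'touchstart' : 'click';
        $('#keyboard .key').bind(eventName, function (e) {
            e.preventDefault();                  // avoid a duplicate click firing later
            insertCharacter($(this).text());     // hypothetical handler for the pressed key
        });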

    Read the article

  • Efficient calculation of matrix cumulative standard deviation in R

    - by Abiel
    I recently posted this question on the r-help mailing list but got no answers, so I thought I would post it here as well and see if there were any suggestions. I am trying to calculate the cumulative standard deviation of a matrix. I want a function that accepts a matrix and returns a matrix of the same size where output cell (i,j) is set to the standard deviation of input column j between rows 1 and i. NAs should be ignored, unless cell (i,j) of the input matrix itself is NA, in which case cell (i,j) of the output matrix should also be NA. I could not find a built-in function, so I implemented the following code. Unfortunately, this uses a loop that ends up being somewhat slow for large matrices. Is there a faster built-in function, or can someone suggest a better approach?

        cumsd <- function(mat) {
          retval <- mat*NA
          for (i in 2:nrow(mat)) retval[i,] <- sd(mat[1:i,], na.rm=T)
          retval[is.na(mat)] <- NA
          retval
        }

    Thanks.
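
    A sketch of a loop-free alternative (my own, not a built-in): a running standard deviation can be built per column from cumulative sums of x and x squared. The NA handling here is an assumption, namely that NAs simply contribute nothing to the running sums:

        cumsd2 <- function(mat) {
          ok <- !is.na(mat)
          x  <- ifelse(ok, mat, 0)
          n  <- apply(ok,  2, cumsum)              # running count of non-NA values
          s1 <- apply(x,   2, cumsum)              # running sum
          s2 <- apply(x^2, 2, cumsum)              # running sum of squares
          v  <- pmax((s2 - s1^2 / n) / (n - 1), 0) # sample variance, clamped against rounding
          out <- sqrt(v)
          out[n < 2] <- NA                         # undefined for fewer than 2 values
          out[is.na(mat)] <- NA                    # keep NA where the input is NA
          out
        }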

    Read the article

  • Alternative databases to use when putting IIS Logs into a database using LogParser

    - by Robin Day
    We have run some scripts that use LogParser to dump our IIS logs into a SQL Server database. We can then query this to get simple stats on hits, usage, etc. It's also good when linking it to error log databases and a performance counter database to compare usage with errors, and so on. Having implemented this for just one system, and for only the last 2-3 weeks, we already have a 5 GB database with around 10 million records. This is making any queries to this database quite slow and will no doubt cause storage issues if we continue to log as we are. Can anyone suggest any alternative databases that we could use for this data that would be more efficient for such logs? I'd be particularly interested in any experience of Google's BigTable or Amazon's SimpleDB. Are either of these suitable for reporting queries? COUNTs, GROUP BYs, PIVOTs?

    Read the article

  • Tinting iPhone application screen red

    - by btschumy
    I'm trying to place a red tint on all the screens of my iPhone application. I've experimented on a bitmap and found I get the effect I want by compositing a dark red color onto the screen image using Multiply (kCGBlendModeMultiply). So the question is how to do this efficiently in real time on the iPhone. One dumb way might be to grab a bitmap of the current screen, composite into the bitmap, and then write the composited bitmap back to the screen. This seems like it would almost certainly be too slow. In addition, I would need some way of knowing when part of the screen has been redrawn so I can update the tinting. I can almost get the effect I want by putting a red, translucent, fullscreen UIView above everything. That tints everything red without further intervention on my part, but the effect is much "muddier" than the result from the composite. So do any wizards out there know of some mechanism I can use to automatically composite the red over the app, in similar fashion to what the translucent red UIView does?

    Read the article

  • Find subset with K elements that are closest to each other

    - by Nima
    Given an array of N integers, how can you efficiently find a subset of size K whose elements are closest to each other? Let the closeness of a subset (x1, x2, ..., xk) be defined as the sum of the pairwise absolute differences |xi - xj| over all pairs i < j (this is what the brute-force code below computes). Constraints:

        2 <= N <= 10^5
        2 <= K <= N

    The array may contain duplicates and is not guaranteed to be sorted. My brute-force solution is very slow for large N, and it doesn't check if there's more than one solution:

        import sys

        N = input()
        K = input()
        assert 2 <= N <= 10**5
        assert 2 <= K <= N
        a = []
        for i in xrange(0, N):
            a.append(input())
        a.sort()

        minimum = sys.maxint
        startindex = 0
        for i in xrange(0, N-K+1):
            last = i + K
            tmp = 0
            for j in xrange(i, last):
                for l in xrange(j+1, last):
                    tmp += abs(a[j]-a[l])
                if(tmp > minimum):
                    break
            if(tmp < minimum):
                minimum = tmp
                startindex = i
        # end index = startindex + K?

    Examples:

        N = 7, K = 3
        array = [10, 100, 300, 200, 1000, 20, 30]
        result = [10, 20, 30]

        N = 10, K = 4
        array = [1, 2, 3, 4, 10, 20, 30, 40, 100, 200]
        result = [1, 2, 3, 4]
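
    A sketch of an O(N log N) approach (my own, under the assumption that closeness is the pairwise-difference sum used above): after sorting, an optimal subset can be taken as K consecutive elements, and the cost of the window can be updated in O(1) as it slides by one position.

        def closest_subset(a, K):
            a = sorted(a)
            N = len(a)

            # Cost of the first window, computed directly: the element at offset t
            # contributes a[t]*t minus the sum of the window elements before it.
            cost = 0
            running = 0
            for t in range(K):
                cost += a[t] * t - running
                running += a[t]

            best_cost, best_start = cost, 0
            window_sum = running                        # sum of a[0..K-1]
            for i in range(N - K):
                mid = window_sum - a[i]                 # sum of a[i+1..i+K-1]
                cost += (K - 1) * (a[i] + a[i + K]) - 2 * mid
                window_sum += a[i + K] - a[i]
                if cost < best_cost:
                    best_cost, best_start = cost, i + 1
            return a[best_start:best_start + K]

        print(closest_subset([10, 100, 300, 200, 1000, 20, 30], 3))        # [10, 20, 30]
        print(closest_subset([1, 2, 3, 4, 10, 20, 30, 40, 100, 200], 4))   # [1, 2, 3, 4]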

    Read the article

  • SqlServer2008 + expensive union

    - by Tim Mahy
    Hi all, we have 5 tables over which we have to query with user search input through a stored procedure. We do a union of the similar data inside a view; because of this the view cannot be materialized. We are not able to change these 5 tables drastically (for example, creating a 6th table that contains the similar data of the 5 tables and referencing that new one from the 5 tables). The query is rather expensive/slow. What are our other options? It's allowed to think outside the box. Unfortunately I cannot give more information, like the table/view/SP definitions, because of customer confidentiality. Greetings, Tim

    Read the article

  • measuring performance - using real clicks vs "ab" command

    - by shanyu
    I have a web site in closed beta, developed in Django, running with MySQL on Debian. In the last few days, the main page has been showing a slowdown. For every ten clicks, one or two receive an extremely slow response (10 seconds or more); the others are as fast as they used to be. While searching for the problem, I ran into something I couldn't grasp: the top command shows that when I request the main page, mysql shoots up to 90%-100% CPU usage, and I get the page just as the CPU usage returns to normal. So, I thought, it is the database. Then I ran ab with the parameters -n 1000 -c 5 and got decent performance, about 100 pages per second, just as it was before the slowdown. I would have expected worse performance, as 10-20% of requests take 10 seconds to load. Is this conflict between ab and "real" clicks normal, or am I using ab in a wrong configuration?

    Read the article

  • Index on column with only 2 distinct values

    - by Will
    I am wondering about the performance of this index: I have an "invalid" varchar(1) column that has two values, NULL or 'Y'. I have an index on (invalid), as well as one on (invalid, last_validated). last_validated is a datetime (it is used for an unrelated SELECT query). I am flagging a small number of rows in the table (1-5%) as 'to be deleted', so that when I run DELETE FROM items WHERE invalid='Y' it does not perform a full table scan to find the invalid items. The problem seems to be that the actual DELETE is quite slow now, possibly because all the index entries are being removed as the rows are deleted. Would a bitmap index provide better performance for this, or perhaps no index at all?

    Read the article

  • Microchip sample code sets a current into CMPDAC, a DAC threshold that expects a voltage

    - by jason hong
    Sorry, the Microchip forum is very slow; I prefer to use this site to ask questions. This is from the dsPIC33FJ06GS101/X02 and dsPIC33FJ16GSX02/X04 device sample code:

        // configure comparator2
        CMPCON2bits.CMPON = 1;    // enable comparator
        CMPCON2bits.INSEL = 1;    // select CMP2B input pin (RB0)
        CMPCON2bits.RANGE = 1;    // select high range, max DAC value = Avdd/2

        // CMPDACx: COMPARATOR DAC CONTROL REGISTER
        // CMREF<9:0>: Comparator Reference Voltage Select bits
        CMPDAC2 = CURR_HWLIM;     // DAC threshold

        #define CURR_HWLIM 1023   // 1023 // 10.15 * 101A

    The Microchip sample code writes CURR_HWLIM, which is a current value (1023, corresponding to roughly 101 A per the comment), into CMPDAC2, which expects a voltage reference. I think that's a mistake.

    Read the article

  • What's a reasonable number of rows and tables to be able to join in MySQL?

    - by Philip Brocoum
    I have one table that maps locations to postal codes. For example, New York State has about 2,000 postal codes. I have another table that maps mail to the postal codes it was sent to, but this table has about 5 million rows. I want to find all the mail that was sent to New York State, which seems simple enough, but the query is unbelievably slow; I haven't even been able to wait long enough for it to finish. Is the problem that there are 5 million rows? I can't help but think that 5 million shouldn't be such a large number for a computer these days. Oh, and everything is indexed. Is SQL just not designed to handle such large joins?
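
    A sketch of the usual first checks (the table and column names below are placeholders, since the schema isn't given): run EXPLAIN on the join to see whether MySQL is actually using an index on the big table's join column, and add one if it isn't.

        EXPLAIN
        SELECT m.*
        FROM mail m
        JOIN postal_codes p ON p.code = m.postal_code
        WHERE p.state = 'NY';

        -- if the plan shows a full scan of mail, an index on the join column usually helps
        CREATE INDEX idx_mail_postal_code ON mail (postal_code);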

    Read the article

  • What is the fastest way to display an image in Qt on X11 without OpenGL?

    - by msh
    I need to display a raw image in a Qt widget. I'm running X11 on a framebuffer, so OpenGL is not available. Both the image and the framebuffer are in the same format, RGB565, but I can change that to any other format if needed. I don't need blending or scaling; I just need to display the pixels as they are. I'm using QPainter::drawImage, but it converts the QImage to a QPixmap, and this conversion seems to be very slow. It is also backed by XRender, and I think there is unnecessary overhead required to support blending in XRender, which I don't really need. Is there any better way? If it is not available in Qt, I can use Xlib or any other library or protocol. I can modify the driver, the X server, or anything else.

    Read the article

  • Limit the number of cron jobs running a PHP script

    - by ed209
    I have a daily cron job that grabs 10 users at a time and does a bunch of stuff with them. Thousands of users are part of that process every day. I don't want the cron jobs to overlap, as they are heavy on server load. They call stuff from various APIs, so I have timeouts and slow data fetching to contend with. I've tried using a flag to mark a cron job as running, but it's not reliable enough, as some PHP scripts may time out or fail in various ways. Is there a good way to stop a cron job from calling the PHP script multiple times, or to control the number of times it is called, so that there are only 3 instances, for example? I'd prefer a solution in PHP if possible. Any ideas? At the moment I'm storing the flag as a value in a database; is using a lock-type file any better, as in http://stackoverflow.com/questions/851872/does-a-cron-job-kill-last-cron-execution ?
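
    A minimal sketch of the lock-file approach (the file path is a placeholder): flock releases the lock automatically when the process exits, even if the script dies or times out, which is what makes it more reliable than a flag stored in a database. For a cap of N instances instead of one, the same idea works with N numbered lock files, taking the first one that can be locked.

        <?php
        $fp = fopen('/tmp/my_cron_job.lock', 'c');   // 'c' creates the file if it is missing
        if ($fp === false || !flock($fp, LOCK_EX | LOCK_NB)) {
            // another instance already holds the lock; bail out quietly
            exit(0);
        }

        // ... do the actual work here ...

        flock($fp, LOCK_UN);
        fclose($fp);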

    Read the article
