Search Results

Search found 13151 results on 527 pages for 'performance counters'.

Page 419/527

  • ExtJS Grid slow with 3000+ records

    - by Oliver Watkins
    I am using an ExtJS grid and it's getting pretty slow with 3000+ records: sorting takes about 4 seconds, which is slow compared to other JavaScript tables. I am thinking of using pagination in my table, but after reading the documentation I am still a bit unsure about how pagination works in ExtJS. Does it pull data from the server each time you turn a page? I would prefer that weren't the case: I would prefer that the 3000 records stay in the browser and that only a portion of those rows is rendered. Also, I am using ExtJS version 4.2.1. If I upgrade to version 5, will I get some performance improvements?
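
    One common approach for this case keeps the full dataset client-side and pages it locally, so nothing is fetched on page turns. Below is a minimal sketch, assuming Ext JS 4.2 and a predefined Row model (allRows is a placeholder for the records already in the browser); the buffered renderer plugin (plugins: 'bufferedrenderer') is an alternative that renders only the visible rows without paging at all.

        // A sketch: local paging over in-memory data via a memory proxy.
        var store = Ext.create('Ext.data.Store', {
            model: 'Row',            // assumed model definition
            pageSize: 50,
            proxy: {
                type: 'memory',
                enablePaging: true,  // slices the in-memory data per page
                data: allRows        // the 3000 records, already in the browser
            }
        });
        store.load();                // a paging toolbar then flips pages locally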

  • How do copyright permission systems for content hosting sites work?

    - by zebraman
    I am wondering about subscription sites that host content, like recorded performances from concerts. I'm sure there is a tangle of copyright permissions that must be granted for these video/audio files to be hosted. For example, if a band plays a cover of another band's song, permission must be obtained not only from the band that performed, but from the band that owns the song, and perhaps even from the venue that hosted the performance, to record the video and post the content. I am curious how websites that host content like this work. How might an automated copyright system keep track of who has ownership of certain performances and obtain permission from those owners to record and post their content?

  • Interview Question: .Any() vs if (.Length > 0) for testing if a collection has elements

    - by Chris
    In a recent interview I was asked what the difference between .Any() and .Length > 0 is, and why I would use either when testing whether a collection has elements. This threw me a little, as it seems obvious, but I feel I may be missing something. I suggested that you use .Length when you simply need to know that a collection has elements, and .Any() when you wish to filter the results. Presumably .Any() takes a performance hit too, as it has to do a loop / query internally.
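
    For what it's worth, here is a minimal sketch of the practical difference: Length is an O(1) field that arrays expose, while Enumerable.Any() works on any IEnumerable<T> and short-circuits after the first element rather than looping over the whole sequence.

        // Length vs. Any(): Length reads a field; Any() enumerates at most one item.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Demo
        {
            static void Main()
            {
                int[] numbers = { 1, 2, 3 };
                Console.WriteLine(numbers.Length > 0); // O(1), arrays only
                Console.WriteLine(numbers.Any());      // enumerates a single item

                // A lazy sequence has no Length at all; Any() still works
                // and never exhausts it.
                IEnumerable<int> lazy = Numbers();
                Console.WriteLine(lazy.Any());
            }

            static IEnumerable<int> Numbers()
            {
                yield return 1; // Any() stops here
                yield return 2;
            }
        }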

  • Dealing with Windows line-endings in Python

    - by Adam Nelson
    I've got a 700MB XML file coming from a Windows provider. As one might expect, the line endings are '\r\n' (or ^M in vi). What is the most efficient way to deal with this situation, aside from getting the supplier to send over '\n' :-) Options I have considered:
    - using os.linesep
    - using rstrip() (requiring opening the file ... which seems crazy)
    - universal newline support (not standard on my Mac Snow Leopard, so it isn't an option)
    I'm open to anything that requires Python 2.6+, but it needs to work on Snow Leopard and Ubuntu 9.10 with minimal external requirements. I don't mind a small performance penalty, but I am looking for the standard best way to deal with this.
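
    A minimal sketch of the streaming approach, so the 700MB file is never held in memory: open in binary mode and strip the CR/LF pair line by line. The file name and process() below are placeholders.

        # Normalize CRLF endings one line at a time (Python 2.6).
        def lines_without_crlf(path):
            with open(path, 'rb') as f:        # binary: no newline translation
                for line in f:
                    yield line.rstrip('\r\n')  # drops both CR and LF

        for line in lines_without_crlf('feed.xml'):  # placeholder path
            process(line)                            # placeholder consumer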

  • CakePHP repeats same queries

    - by Rytis
    I have a model structure: Category hasMany Product hasMany Stockitem belongsTo Warehouse, Manufacturer. I fetch data with this code, using Containable to be able to filter deeper into the associated models:

        $this->Category->find('all', array(
            'conditions' => array('Category.id' => $category_id),
            'contain' => array(
                'Product' => array(
                    'Stockitem' => array(
                        'conditions' => array('Stockitem.warehouse_id' => $warehouse_id),
                        'Warehouse',
                        'Manufacturer',
                    )
                )
            ),
        ));

    The data structure is returned just fine; however, I get many repeated queries like the following, sometimes hundreds in a row, depending on the dataset:

        SELECT `Warehouse`.`id`, `Warehouse`.`title`
        FROM `beta_warehouses` AS `Warehouse`
        WHERE `Warehouse`.`id` = 2

    Basically, when building the data structure Cake fetches data from MySQL over and over again, for each row. We have datasets of several thousand rows, and I have a feeling this is going to impact performance. Is it possible to make it cache results and not repeat the same queries?
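
    One stopgap, if the query pattern itself can't be fixed, is to cache the assembled result with Cake's Cache class so the repeated lookups only happen on a cache miss. A rough sketch (the cache key and config name are assumptions):

        // Cache the whole find() result; invalidate when stock changes.
        $cacheKey = "category_{$category_id}_wh_{$warehouse_id}";
        $data = Cache::read($cacheKey, 'default');
        if ($data === false) {
            $data = $this->Category->find('all', /* ... as above ... */);
            Cache::write($cacheKey, $data, 'default');
        }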

  • Emulating a transaction-safe SEQUENCE in MySQL

    - by Michael Pliskin
    We're using MySQL with the InnoDB storage engine and transactions a lot, and we've run into a problem: we need a nice way to emulate Oracle's SEQUENCEs in MySQL. The requirements are:
    - concurrency support
    - transaction safety
    - maximum performance (meaning minimizing locks and deadlocks)
    We don't care if some of the values aren't used, i.e. gaps in the sequence are OK. There is an easy way to achieve this by creating a separate InnoDB table with a counter; however, this means it will take part in the transaction and will introduce locks and waiting. I am thinking of trying a MyISAM table with manual locks; any other ideas or best practices?
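
    The classic pattern here is the LAST_INSERT_ID(expr) trick from the MySQL manual: the UPDATE is atomic, LAST_INSERT_ID() is per-connection, and on a MyISAM table (which ignores transactions) the increment does not hold locks for the duration of the caller's InnoDB transaction. A sketch:

        -- One row holds the counter; MyISAM keeps it outside transactional locking.
        CREATE TABLE sequence (id INT UNSIGNED NOT NULL) ENGINE=MyISAM;
        INSERT INTO sequence VALUES (0);

        -- Atomically increment and fetch the new value:
        UPDATE sequence SET id = LAST_INSERT_ID(id + 1);
        SELECT LAST_INSERT_ID();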

  • Rails: getting logic to run at end of request, regardless of filter chain aborts?

    - by JSW
    Is there a reliable mechanism, discussed in the Rails documentation, for calling a function at the end of the request, regardless of filter chain aborts? It's not after filters, because after filters don't get called if any prior filter redirected or rendered. For context, I'm trying to put some structured profiling/reporting information into the app log at the end of every request. This information is collected throughout the request lifetime via instance variables wrapped in custom controller accessors, and dumped at the end in a JSON blob for use by a post-processing script. My end goal is to generate reports about my application's logical query distribution (things that depend on controller logic, not just request URIs and parameters), performance profile (time spent in specific DB queries or blocked on webservices), failure rates (including invalid incoming requests that get rejected by before_filter validation rules), and a slew of other things that cannot really be parsed from the basic information in the application and apache logs. At a higher level, is there a different "rails way" that solves my app profiling goal?
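
    One mechanism that runs no matter how the filter chain halts is a Rack middleware: its call method wraps the entire request, so an ensure clause always executes. A rough sketch (the class name and the Thread.current key are assumptions; the controller would stash its collected stats under that key):

        # Always runs at end of request, regardless of redirects/renders.
        class RequestReporter
          def initialize(app)
            @app = app
          end

          def call(env)
            @app.call(env)
          ensure
            stats = Thread.current[:request_stats]
            Rails.logger.info(stats.to_json) if stats
            Thread.current[:request_stats] = nil  # don't leak across requests
          end
        end

        # registered via e.g. config.middleware.use RequestReporter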

  • When does it make sense to use a map?

    - by kiwicptn
    I am trying to round up cases where it makes sense to use a map (a set of key-value entries). So far I have two categories (see below). Assuming more exist, what are they? Please limit each answer to one unique category and put up an example.

    Property values (like a bean):
        age -> 30
        sex -> male
        loc -> calgary

    Presence, with O(1) performance:
        peter -> 1
        john -> 1
        paul -> 1

  • PostgreSQL: How to index all foreign keys?

    - by biggusjimmus
    I am working with a large PostgreSQL database, and we are trying to tune it to get more performance. Our queries and updates seem to be doing a lot of lookups using foreign keys. What I would like is a relatively simple way to add indexes to all of our foreign keys without having to go through every table (~140) and do it manually. In researching this, I've found that there is no way to have Postgres do this for you automatically (like MySQL does), but I would be happy to hear otherwise.
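
    The system catalogs make a generate-the-DDL approach fairly painless: every foreign key is a row in pg_constraint with contype = 'f', and its column numbers are in conkey. A sketch that emits one CREATE INDEX statement per FK (assumes PostgreSQL 9.0+ for string_agg and unnamed CREATE INDEX; review the output before running it, since some FKs may already be covered by an existing index):

        -- Emit CREATE INDEX statements for all foreign-key columns.
        -- For multi-column FKs, verify the column order matches conkey.
        SELECT 'CREATE INDEX ON ' || conrelid::regclass || ' ('
               || (SELECT string_agg(attname::text, ', ' ORDER BY attnum)
                     FROM pg_attribute
                    WHERE attrelid = conrelid
                      AND attnum = ANY (conkey))
               || ');' AS ddl
          FROM pg_constraint
         WHERE contype = 'f';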

  • [Symfony] Admin generator and i18n

    - by David
    I have read lots of questions about i18n, but I haven't found any about performance. I have a simple backend app listing the contents of an ads table. These ads have a category, which is translated into several languages (it's defined as i18n in the Doctrine schema). So, when I add a "table_method" in my generator.yml to include the Category table, it reduces the number of queries, but some of them still reference the i18n translation tables. So, if I add the category Translation table to the query, it reduces the queries even more, BUT it increases the processing time considerably. Why this time penalty? Just because of the translation table? And why isn't the filter using this method to avoid so many translation queries as well? I mean, if I want to filter by category, it makes one query per category to include the translation table. Why?
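
    For reference, the usual shape of that setup in symfony 1.x with Doctrine looks like the sketch below (the method and alias names are assumptions); whether the Translation join itself is the slow part would show up in the Doctrine query log rather than here.

        # config/generator.yml
        list:
          table_method: retrieveBackendAdList

        // lib/model/doctrine/AdTable.class.php
        public function retrieveBackendAdList(Doctrine_Query $q)
        {
            $root = $q->getRootAlias();
            $q->leftJoin($root . '.Category c')
              ->leftJoin('c.Translation ct');  // pull translations in one query
            return $q;
        }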

  • Which is the best API/Library to use when accessing a WebCam in .Net?

    - by Doctor Jones
    Which is the best API to use when accessing a webcam in .NET? (I know APIs can be webcam-specific; I am willing to buy a new webcam if it means better results.) I want to write a desktop application that will take video from a webcam and store it in MPEG4 formats (DivX, Xvid, etc...). I would also like to access bitmap stills from the device so I can do image comparison between frames. I have tried various libraries, and none have really been a great fit: some have performance issues (very inconsistent framerates), some have image quality limitations, and some just crash out for seemingly no reason. I want to get high quality video (as high as I can get) and a decent framerate. My webcam is more than up to the job, and I was hoping there would be a nice managed .NET library around that would help my cause. Are webcam APIs all just incredibly bad?

  • OpenGL and layouts

    - by Hnefi
    I'm using OpenGL to render a game view in my Android application. The game is turn based, and I wish to add some buttons to the interface. I'd prefer to use standard Android widgets, structured in an XML-generated layout (or, if I have to, a hardcoded layout), and put the OpenGL view in its own window as part of that layout. In regards to this, I have 3 questions:
    1: Is such a thing possible? I've done a few half-hearted tries, but have had no luck so far.
    2: Is such a thing advisable? Does it carry a significant performance penalty, for example, over using OpenGL-based homebrew widgetry?
    3: Is it possible to pass particular arguments to instances created in XML layouts? For example, my current OpenGL view has three arguments in its constructor; is it somehow possible to invoke that particular constructor with particular parameters when it's part of a layout?
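
    On points 1 and 3: overlaying regular widgets on a GLSurfaceView subclass in one XML layout does work; the catch is that views inflated from XML must expose a (Context, AttributeSet) constructor, so extra constructor arguments are usually passed via setters (or custom XML attributes) after inflation. A sketch, with all class and id names as assumptions:

        <!-- res/layout/game.xml: GL view underneath, a Button on top. -->
        <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent">

            <com.example.game.GameGLView
                android:id="@+id/game_view"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent" />

            <Button
                android:id="@+id/end_turn"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_gravity="bottom|right"
                android:text="End turn" />
        </FrameLayout>

    In the Activity, the three constructor arguments would then become a setter call after inflation, e.g. findViewById(R.id.game_view) followed by gameView.init(a, b, c).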

  • Hit Testing with CALayer using the alpha properties of the CALayer contents.

    - by Charliehorse
    I'm writing a game for Mac using Cocoa. I'm currently implementing hit testing and have found that CALayer offers hit testing, but does not seem to take the alpha channel of the layer contents into account. As I at times have many CALayers stacked on top of each other, I really need a way to determine what the user actually meant to click on. I'm thinking that if I could somehow get an array containing pointers to all of the CALayers that contain the click point, I could filter through them somehow. However, the only way I've got so far to create the array is:

        NSMutableArray* anArrayOfLayers = [NSMutableArray array];
        for (CALayer* aLayer in mapLayer.sublayers) {
            if ([aLayer containsPoint:mouseCoord])
                [anArrayOfLayers addObject:aLayer];
        }

    Then sort the array by the CALayers' z-values, then go through checking whether the pixel at the location is alpha or not. However, between the sort and the alpha check, this seems to be an incredible performance hog. (How would you even check the alpha?) Is there any way to do this?
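
    On the "how would you even check the alpha" part: one common technique is to render the layer into a 1x1 bitmap context positioned at the hit point and inspect the resulting alpha byte. A sketch, assuming the point is already in the layer's coordinate space:

        // Returns YES if the layer draws a non-transparent pixel at p.
        - (BOOL)layer:(CALayer *)layer hasOpaquePixelAtPoint:(CGPoint)p {
            unsigned char pixel[4] = {0, 0, 0, 0};
            CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
            CGContextRef ctx = CGBitmapContextCreate(pixel, 1, 1, 8, 4, space,
                                                     kCGImageAlphaPremultipliedLast);
            CGColorSpaceRelease(space);
            CGContextTranslateCTM(ctx, -p.x, -p.y);  // map p onto the 1x1 bitmap
            [layer renderInContext:ctx];
            CGContextRelease(ctx);
            return pixel[3] > 0;                     // alpha component
        }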

  • Cheap shared Windows hosting

    - by Elangovan
    Hi, I have recently purchased a domain from GoDaddy, and now I am looking for cheap Windows hosting. Following are my requirements:
    - shared Windows hosting
    - should be cheap in price
    - should have at least one SQL Server DB and one MySQL DB
    - should support at least ASP.NET 3.5 and PHP
    - support for ASP.NET MVC would be good (no problem even if it is not available)
    - should be able to install third-party blog software
    - bandwidth, total space and performance are not very important
    - Silverlight is also an added advantage (no problem even if it is not available)
    - there should be no advertisement or banner added by the hosting company to the site

  • Does Python Django support custom SQL and denormalized databases with no Foreign Key relationships?

    - by Jay
    I've just started learning Python and Django, and I have a lot of experience building high-traffic websites using PHP and MySQL. What worries me so far is Django's overly optimistic approach: that you will never need to write custom SQL, and that it automatically creates all these foreign key relationships in your database. The one thing I've learned in the last few years of building Chess.com is that it's impossible NOT to write custom SQL when you're dealing with something like MySQL, which frequently needs to be told what indexes it should use (or avoid), and that foreign keys are a death sentence. Percona's strongest recommendation was for us to remove all FKs for optimal performance. Is there a way in Django to do this in the models file, i.e. create relationships without creating actual DB FKs? Or is there a way to start at the database level, design/create my database, and then have Django reverse engineer the models file?
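
    Both halves have at least partial answers. For reverse engineering, manage.py inspectdb generates a models file from an existing database. For FK-less relationships, one workaround is to store the id in a plain indexed column and resolve it in application code, so no FOREIGN KEY constraint is ever emitted. A sketch (the model and field names are assumptions):

        # Reverse engineer models from a hand-designed database:
        #   python manage.py inspectdb > models.py

        # Or keep the relationship out of the schema entirely:
        from django.db import models

        class Product(models.Model):
            name = models.CharField(max_length=200)

        class Stockitem(models.Model):
            product_id = models.IntegerField(db_index=True)  # plain column, no DB-level FK

            def product(self):
                # resolved in code, not enforced by the database
                return Product.objects.get(pk=self.product_id)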

  • Storing a huge number of records in the classic ASP cache object is SLOW

    - by aspm
    We have some nasty legacy ASP that is performing like a dog, and I narrowed it down to the fact that we are trying to store 15K+ records in the application cache object. But that's not the killer: before it stores the records, it converts the ADO stream to XML and then stores it. This conversion of the huge recordset to XML spikes the CPU and causes all kinds of havoc for users while it's happening. And unfortunately we do this XML conversion to read the cache a lot, causing site-wide performance problems. I don't have the resources to convert everything to .NET, so that's out. But I obviously need to use caching, and in this case the caching is hurting instead of helping. Is there a more efficient way to store this data than doing this XML conversion to/from every time we read/update the cache?
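
    One classic classic-ASP trick is to skip serialization entirely: GetRows() turns the recordset into a plain 2-D variant array, which can be stored in the Application object directly (arrays are fine there; apartment-threaded COM objects are not) and read back with no conversion cost. A sketch, with the key name as an assumption:

        ' Cache the raw values once, instead of round-tripping through XML.
        Dim rows
        rows = rs.GetRows()            ' disconnected 2-D array: rows(field, record)
        Application.Lock
        Application("productCache") = rows
        Application.Unlock

        ' Readers index the array directly:
        Dim cached
        cached = Application("productCache")
        Response.Write cached(0, 0)    ' first field of first record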

  • Running multiple JVMs for different applications on the same machine

    - by Rajesh
    We are getting frequent out-of-memory errors on our dev machines. We are running WebSphere, Eclipse, SoapUI and Maven on them. Our server goes down due to these out-of-memory errors when we restart our applications in WebSphere 2-3 times. We already increased the virtual memory setting in WebSphere to 1GB. So what I did was copy the JRE we use into the Eclipse and Maven folders, so that each of these uses an individual JVM. But the performance of WebSphere is the same: 2-3 restarts and out-of-memory errors. Is there any way of making Eclipse and Maven use different JVMs other than WebSphere's?
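
    Copying JRE folders shouldn't be necessary: each tool can be pointed at its own JVM through configuration, and WebSphere keeps using its own bundled JVM regardless. A sketch (the paths are assumptions):

        # eclipse.ini: the -vm entry must come before -vmargs,
        # with the path on its own line
        -vm
        C:\jvms\jdk6-eclipse\bin\javaw.exe
        -vmargs
        -Xmx512m

        # Maven reads JAVA_HOME, so give it a separate JDK per shell/service:
        set JAVA_HOME=C:\jvms\jdk6-maven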

  • How does Core Data determine if an NSObject's data can be dropped?

    - by Kevin
    In the app I am working on now, I was storing about 500 images in Core Data. I have since pulled those images out and store them in the file system, but in the process I found that the app would crash on the device if I had an array of 500 objects with image data in them. An array of 500 object IDs, with the image data in those objects, worked fine. The 500 objects without the image data also worked fine. I found that I got the best performance with both an array of object IDs and image data stored on the filesystem instead of in Core Data. The conclusion I came to was that having an object in an array tells Core Data I am "using" that object, so Core Data holds on to the data. Is this correct?
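
    That matches how faulting works: a fetched NSManagedObject that you retain (e.g. by holding it in an array) keeps its row data in memory once its fault has been fired, while NSManagedObjectIDs are tiny. One way to keep the objects but let their data go is to re-fault them; a minimal sketch:

        // Turn a fully-loaded object back into a lightweight fault, discarding
        // its in-memory attribute data. Unsaved changes are lost with
        // mergeChanges:NO, so only do this on clean objects.
        [managedObjectContext refreshObject:imageObject mergeChanges:NO];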

  • ImageView source scaling done right. How?

    - by Aleksey Malevaniy
    Scope: an image bitmap has to be shown via imageView.setImageBitmap(bitmap) and scaled to fit the UI. This could be done via:
    - scaling the bitmap in code:
          bitmap = Bitmap.createScaledBitmap(bitmap, newWidth, newHeight, true);
    - XML ImageView attributes such as:
          android:layout_width="newWidth"
          android:layout_height="newHeight"
          android:adjustViewBounds="true"
          android:scaleType="fitCenter"
    Problem: which way is better for performance? I prefer XML 'cause this is a UI-specific problem, and I prefer to use XML for UI definition. Also, we set width/height values in dp, which means we have the same UI for different screens. Thanks!

  • Objective C: Why is this code leaking?

    - by Johnny Grass
    I'm trying to implement a method similar to what mytunescontroller uses to check if it has been added to the app's login items. This code compiles without warnings, but if I run the Leaks performance tool I get the following leaks:

        Leaked Object  #  Address       Size  Responsible Library  Responsible Frame
        NSURL          7  < multiple >  448   LaunchServices       LSSharedFileListItemGetFSRef
        NSCFString     6  < multiple >  432   LaunchServices       LSSharedFileListItemGetFSRef

    Here is the responsible culprit:

        - (BOOL)isAppStartingOnLogin {
            LSSharedFileListRef loginListRef = LSSharedFileListCreate(NULL, kLSSharedFileListSessionLoginItems, NULL);
            if (loginListRef) {
                NSArray *loginItemsArray = (NSArray *)LSSharedFileListCopySnapshot(loginListRef, NULL);
                NSURL *itemURL;
                for (id itemRef in loginItemsArray) {
                    if (LSSharedFileListItemResolve((LSSharedFileListItemRef)itemRef, 0, (CFURLRef *)&itemURL, NULL) == noErr) {
                        if ([[itemURL path] hasPrefix:[[NSBundle mainBundle] bundlePath]]) {
                            [loginItemsArray release];
                            CFRelease(loginListRef);
                            return YES;
                        }
                    }
                }
                [loginItemsArray release];
                CFRelease(loginListRef);
            }
            return NO;
        }

  • PHP getting "too many connections" error from MySQL

    - by uzioriluzan
    Hello everyone, I am using MySQL and PHP with 2 application servers and 1 database server. With the increase in the number of users (around 1000 by now), I'm getting the following error:

        SQLSTATE[08004] [1040] Too many connections

    The parameter "max_connections" is set to 1000 in my.cnf, and "mysql.max_persistent" is set to -1 in php.ini. There are at most 1500 Apache processes running at a time, since the "MaxClients" Apache parameter is equal to 750 and we have 2 application servers. Should I raise "max_connections" to 1500, as indicated here? Or should I set "mysql.max_persistent" to 750 (we use PDO with persistent connections for performance reasons, since the database server is not the same as the application servers)? Or should I try something else? Thanks in advance!
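
    The arithmetic points at the first option: with persistent connections, every Apache worker can hold one connection open, so the worst case is 2 servers x 750 MaxClients = 1500 connections against a 1000-connection cap. A sketch of the my.cnf side (the exact headroom is an assumption):

        # my.cnf
        [mysqld]
        max_connections = 1600   # 1500 potential persistent connections + headroom

    Note that mysql.max_persistent in php.ini limits the old mysql extension, not PDO, so capping PDO would instead mean disabling persistence or lowering MaxClients.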

  • How to receive HTTP messages using Socket

    - by Poma
    I'm using the Socket class for my web client. I can't use HttpWebRequest, since it doesn't support SOCKS proxies. So I have to parse headers and handle chunked encoding by myself. The most difficult thing is determining the length of the content, so I have to read it byte by byte. First I have to use ReadByte() to find the last header (the "\r\n\r\n" combination), then read the chunk sizes, etc. But this approach has very poor performance. Can you suggest a better solution? Maybe some open source examples or libraries that handle HTTP requests through sockets (not very big and complicated though, I'm a noob).
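
    The usual fix is to keep the byte-at-a-time parsing logic but put a buffer under it, so each ReadByte() is a memory access rather than a socket call. A minimal sketch, assuming the Socket is already connected:

        // Buffer the socket reads so header parsing stops being one recv() per byte.
        using System;
        using System.IO;
        using System.Net.Sockets;
        using System.Text;

        static class HttpHeaderReader
        {
            public static string ReadHeaders(Socket socket)
            {
                // Keep using this same stream for the body afterwards,
                // since it may already hold buffered body bytes.
                var stream = new BufferedStream(new NetworkStream(socket, false), 8192);
                var headers = new StringBuilder();
                int b;
                while ((b = stream.ReadByte()) != -1)
                {
                    headers.Append((char)b);
                    if (headers.Length >= 4 &&
                        headers.ToString(headers.Length - 4, 4) == "\r\n\r\n")
                        return headers.ToString();   // end of header block
                }
                throw new IOException("connection closed before headers ended");
            }
        }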

  • code-style: Is inline initialization of JS objects ok?

    - by michael
    I often find myself using inline initialization (see the example below), especially in a switch statement when I don't know which case the loop will hit. I find it easier to read than if statements. But is this good practice, or will it incur side effects or a performance hit?

        for (var i in array) {
            var o = o ? o : {};  // init object if it doesn't exist
            o[array[i]] = 1;     // add key-values
        }

    Is there a good website to go to for coding style tips?
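
    For comparison, a common alternative is to hoist the initialization out of the loop, which avoids re-evaluating the conditional on every iteration (and sidesteps the var-hoisting subtlety that makes the inline version work at all):

        var o = {};  // initialized exactly once, before the loop
        // a numeric loop also avoids for..in picking up inherited keys on arrays
        for (var i = 0; i < array.length; i++) {
            o[array[i]] = 1;
        }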

  • What FIX implementation do you recommend for use with .NET

    - by Ajaxx
    I am reviewing implementation choices for FIX when using .NET. A few obvious choices come to mind, but I want to know if there are other options, better choices, or if we've made the same decision as a lot of you.
    - QuickFIX - stable C++ implementation, so you've got unmanaged code to interop with.
    - FIX4NET - C# implementation; seems to have some gaps in its implementation.
    - DIY - chime in here if you've made your own FIX engine.
    Let me throw in some caveats here. I'm not looking for sub-100-microsecond processing. Performance is a requirement, but not so much that it's driving my decisions. A solid product that is stable, performs well and is flexible enough to deal with vendor-specific dialects is the sweet spot. The more we can do in .NET the better.

  • Is it really wrong to version documents using CouchDB's behaviour?

    - by Tomas Sedovic
    This is one of those "I know I shouldn't do this, but it's oh so convenient" questions. Sorry about that. I plan to use CouchDB for storing a bunch of documents and keeping their entire revision history. CouchDB does the versioning automatically, but programmers are strongly discouraged from relying on it: "You cannot rely on document revisions for any other purpose than concurrency control." From what I've found on the CouchDB wiki, the old revisions can get deleted either during compaction or during replication. As far as I can tell, compaction must always be triggered manually, and replication occurs only when there's more than one database server. The question is: if I don't run compaction and use only a single database instance for my documents, can I just use CouchDB's document versioning and expect it to work? What other problems might I run into? E.g. does not running compaction hurt performance or consume significantly more disk space (than if I handled the versioning manually)?
