Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.

Page 478/594 | < Previous Page | 474 475 476 477 478 479 480 481 482 483 484 485  | Next Page >

  • Android: How to invalidate multiple parts of screen

    - by user342731
    I need to be able to selectively invalidate multiple (about 20) rectangles on the screen for performance reasons, so I tried the following:

        Vector<Rect> myRects = new Vector<Rect>();
        // ... add some Rects to myRects
        for (Rect r : myRects) {
            invalidate(r);
        }

    However, this seems to invalidate a union of all the Rects, forming one large rectangle that covers all of the small ones I'm trying to invalidate. How can one invalidate multiple areas of the screen, and only those areas?
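    A minimal sketch of one possible workaround, given the union behaviour described above (the class and method names here are hypothetical, not from the original post): keep your own list of requested rectangles and have onDraw() limit its work to primitives that intersect them, even though the platform hands back a single coalesced clip.

        import android.content.Context;
        import android.graphics.Canvas;
        import android.graphics.Rect;
        import android.view.View;

        import java.util.ArrayList;
        import java.util.List;

        public class PatchView extends View {
            private final List<Rect> pendingRects = new ArrayList<Rect>();

            public PatchView(Context context) {
                super(context);
            }

            public void invalidatePatches(List<Rect> rects) {
                pendingRects.clear();
                pendingRects.addAll(rects);
                for (Rect r : rects) {
                    invalidate(r);      // these still coalesce into one dirty region
                }
            }

            @Override
            protected void onDraw(Canvas canvas) {
                Rect clip = canvas.getClipBounds();
                if (pendingRects.isEmpty()) {
                    // Full redraw requested (first layout, resize, ...): draw everything.
                } else {
                    for (Rect r : pendingRects) {
                        if (Rect.intersects(clip, r)) {
                            // Redraw only the primitives that fall inside r.
                        }
                    }
                    pendingRects.clear();
                }
            }
        }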

    Read the article

  • Possible to InvalidateVisual() on a given region instead of entire WPF control?

    - by Scott Bilas
    I have a complex WPF control that draws a lot of primitives in its OnRender (it's sort of like a map). When a small portion of it changes, I'd like to re-issue render commands only for the affected elements, instead of running the entire OnRender over again. While I'm fine with my OnRender function's performance on a resize or whatever, it's not fast enough for mouse hover-based highlighting of primitives. Currently the only way I know of to force a screen update is to call InvalidateVisual(), and there's no way to pass in a dirty rect region to invalidate. Is the lowest granularity of WPF screen composition the UI element? Will I need to render my primitives into an intermediate target and then have that call InvalidateVisual() to update the screen?

    Read the article

  • Javascript - Determine if String Is In List

    - by Emtucifor
    In SQL we can see if a string is in a list like so: Column IN ('a', 'b', 'c'). What's a good way to do this in JavaScript? I realize one can use a switch statement:

        var str = 'a';
        var flag = false;
        switch (str) {
            case 'a':
            case 'b':
            case 'c':
                flag = true;
                break;
            default:
        }
        if (thisthing || thatthing || flag === true) {
            // do something
        }

    But this is a horrible mess. It's also clunky to do this:

        if (thisthing || thatthing || str === 'a' || str === 'b' || str === 'c') {
            // do something
        }

    And I'm not sure about the performance or clarity of this:

        if (thisthing || thatthing || {a: 1, b: 1, c: 1}[str]) {
            // do something
        }

    Any ideas?

    Read the article

  • PostgreSQL - pg_class question

    - by Sachin Chourasiya
    PostgreSQL stores statistics about tables in the system table called pg_class. The query planner accesses this table for every query. These statistics may only be updated using the analyze command, so if the analyze command is not run often, the statistics in this table may not be accurate and the query planner may make poor decisions, which can degrade system performance. Another strategy would be for the query planner to generate these statistics for each query (including selects, inserts, updates, and deletes). This approach would allow the query planner to have the most up-to-date statistics possible. Why does Postgres always rely on pg_class instead?

    Read the article

  • ASP.NET membership provider API: usability and best practice

    - by Andrew Florko
    Hello everybody, the Membership/Role/Profile provider APIs appeared in the early days of ASP.NET. Nearly every time, I can't live with the standard API and have to add some extra functionality (for sorting, retrieving, etc.). I also often have to use a different database structure (with foreign keys to some tables, for example) or think about performance improvements. These considerations forced the teams I took part in to build their own providers, but I can't stand implementing the full provider API (because we don't use at least 70% of the standard functionality). Moreover, providers built for specific projects were rarely reused. I wonder if someone has found a swiss-army-knife implementation of these early-days APIs that is useful for any kind of project without refactoring... Or do you use your own implementations of the early-days APIs? Or maybe you abandon the standard architecture and use lightweight implementations? Thank you in advance.

    Read the article

  • Foreign key constraints on primary key columns - issues?

    - by zzzeek
    What are the pros/cons, from a performance/indexing/data management perspective, of creating a one-to-one relationship between tables using the primary key on the child as the foreign key, versus a pure surrogate primary key on the child? The first approach seems to reduce redundancy and nicely constrains the one-to-one implicitly, while the second approach seems to be favored by DBAs, even though it creates a second index.

    Primary key as foreign key:

        create table parent (
            id integer primary key,
            data varchar(50)
        )

        create table child (
            id integer primary key references parent(id),
            data varchar(50)
        )

    Pure surrogate key:

        create table parent (
            id integer primary key,
            data varchar(50)
        )

        create table child (
            id integer primary key,
            parent_id integer unique references parent(id),
            data varchar(50)
        )

    The platforms of interest here are PostgreSQL and Microsoft SQL Server.

    Read the article

  • P/Invoke or C++/CLI for wrapping a C library

    - by Ian G
    Have a moderate-size (40-odd function) C API that needs to be called from a C# project. The functions logically break up to form a few classes that will be the API presented to the rest of the project. Are there any objective reasons to prefer P/Invoke or C++/CLI for the interoperability underneath that API, in terms of robustness, maintainability, deployment, ...? The issues I could think of that might be problematic, but aren't, are: C++/CLI will require a separate assembly, while the P/Invoke classes can be in the main assembly (we've already got multiple assemblies and there'll be the C DLLs anyway, so not a major issue), and performance doesn't seem to differ noticeably between the two methods. Issues that I'm not sure about: my feeling is C++/CLI will be easier to debug if there's an interop problem; is this true? Language familiarity: enough people know C# and C++, but knowledge of the details of C++/CLI is rarer here. Anything else?

    Read the article

  • C#: When should I use TryParse?

    - by zxcvbnm
    I understand it doesn't throw an exception, and because of that it might be slightly faster. But you're also most likely using it to convert input into data you can use, so I don't think it's called often enough to make that much of a difference in terms of performance. Anyway, the examples I saw are all along the lines of an if/else block with TryParse, the else returning an error message. And to me, that's basically the same thing as using a try/catch block with the catch returning an error message. So, am I missing something? Is there a situation where this is actually useful?

    Read the article

  • Will GTK's Pango and Cairo work well in Cocoa and MFC applications?

    - by Lothar
    I'm writing a GUI program and decided to go native on all platforms. But for all the stuff I need to draw myself, I would like to use the same drawing routines, because font and Unicode handling is so difficult and complex. Do you see any negative points in using Pango/Cairo? On Mac OS X I haven't succeeded in installing Pango/Cairo yet, which looks like a bad omen. I would also like to hear about the performance penalty. The first time I looked at Pango I thought: yes, that's the reason why software is still getting slower despite better hardware.

    Read the article

  • Which Android platform and API to target?

    - by Ben Mc
    I'm just about to launch my first Android app, and it runs on the Android 1.1 platform, API Level 2, but is this what I should officially sign and launch the app as? Does it affect performance at all or is it simply for Android to know which devices it works on? The only problem I see is that I can't specify <supports-screens> in the Manifest, which I would like to do, but it appears I'd have to launch at 1.6 at least for this to work. Would I be missing a huge number of phones by launching at 1.6 instead of 1.1? Thank you!

    Read the article

  • NHibernate parent-child save: redundant SQL UPDATE executed

    - by Broken Pipe
    I'm trying to save (insert) a parent object with a collection of child objects; all objects are new. I prefer to manually specify what to save/update and when, so I do not use any cascade saves in the mappings and I flush sessions myself. So basically I save this object graph like:

        session.Save(Parent);
        foreach (var child in Parent.Childs)
        {
            session.Save(child);
        }
        session.Flush();

    I expect this code to insert the Parent row and then each child row; however, NHibernate executes this SQL:

        INSERT INTO PARENT....
        INSERT INTO CHILD ....
        UPDATE CHILD SET ParentId=@1 WHERE Id=@2

    This UPDATE statement is absolutely unnecessary; ParentId was already set correctly in the INSERT. How do I get rid of it? Performance is very important for me.

    Read the article

  • Data access strategy for a site like SO - sorted SQL queries and simultaneous updates that affect th

    - by Kaleb Brasee
    I'm working on a Grails web app that would be similar in access patterns to StackOverflow or MyLifeIsAverage - users can vote on entries, and their votes are used to sort a list of entries based on the number of votes. Votes can be placed while the sorted select queries are being performed. Since the selects would lock a large portion of the table, it seems that normal transaction locking would cause updates to take forever (given enough traffic). Has anyone worked on an app with a data access pattern such as this, and if so, did you find a way to allow these updates and selects to happen more or less concurrently? Does anyone know how sites like SO approach this? My thought was to make the sorted selects dirty reads, since it is acceptable if they're not completely up to date all of the time. This is my only idea for possibly improving performance of these selects and updates, but I thought someone might know a better way.

    Read the article

  • "Out of Memory" error in Lotus Notes automation from VBA

    - by PowerUser
    This VBA function sporadically fails with a Notes automation error "Run-Time Error '7' Out of Memory". Naturally, when I try to manually reproduce it, everything runs fine.

        Function ToGMT(ByVal X As Date) As Date
            Static NtSession As NotesSession
            If NtSession Is Nothing Then
                Set NtSession = New NotesSession
                NtSession.Initialize
            End If
            ' (do stuff)
        End Function

    To put this in context, this VBA function is being called by an Access query, 3-4 times per record, with 20,000 records. For performance reasons, the NotesSession has been made static. Any ideas why it is sporadically giving an out-of-memory error? (Also, I'm initiating the NotesSession just so I can convert a datetime to GMT using Lotus's rules. If you know a better way, I'm listening.)

    Read the article

  • ColdFusion static key/value list?

    - by richardtallent
    I have a database table that is a dictionary of defined terms -- key, value. I want to load the dictionary into the application scope from the database and keep it there for performance (it doesn't change). I gather this is probably some sort of "struct," but I'm extremely new to ColdFusion (helping out another team). Then, I'd like to do some simple string replacement on some strings being output to the browser, looping through the defined terms and replacing the terms with some HTML to define them (a hover or a link; details to be worked out later, not important). Can anyone point me in the right direction on:
    1. How to define the structure (if that is what I need for a key/value pair list)
    2. How to query at application start-up and reuse the list properly
    3. The best way to do the string replacement

    Read the article

  • Caching instances in a jee web app

    - by SibzTer
    Hi, consider the scenario of a typical web app with JSF on the front end and EJB3, with Hibernate as the JPA provider, talking to a backend database such as MySQL. The main user actions are login and mostly CRUD operations (minus any D(elete) operations). And the app server is GlassFish, of course. Given this scenario, how and where would one go about adding caching to improve performance? From what I have googled, I have seen that Hibernate provides some sort of caching through different cache providers. Is there any sort of caching that can be provided for the JSF pages? How about session beans or entity beans on the EJB side of things? Also, I just read about memcached and was wondering if this is something to consider?
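    As one small piece of that puzzle, here is a minimal sketch (class and method names are hypothetical, not from any framework) of an in-process, read-mostly lookup cache that could sit in front of a JPA finder; Hibernate second-level caching via a cache provider, or memcached, would be the heavier-weight alternatives mentioned above.

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;
        import java.util.function.Function;

        // In-memory cache for data that rarely changes: loads through the
        // loader on a miss, serves subsequent reads from memory.
        public class LookupCache<K, V> {
            private final ConcurrentMap<K, V> cache = new ConcurrentHashMap<>();
            private final Function<K, V> loader;   // e.g. a finder that queries the database

            public LookupCache(Function<K, V> loader) {
                this.loader = loader;
            }

            public V get(K key) {
                return cache.computeIfAbsent(key, loader);
            }

            public void evict(K key) {
                cache.remove(key);   // call after an update so stale data is dropped
            }
        }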

    Read the article

  • Copying data from STDOUT to a remote machine using SFTP

    - by freddie
    In order to back up large database partitions to a remote machine using SFTP, I'd like to use the database's dump command and send the output directly over SFTP to a remote location. This is useful when you need to dump large data sets and don't have enough local disk space to create the backup file first and then copy it to a remote location. I've tried using Python + paramiko, which provides this functionality, but the performance is much worse than using the native OpenSSH/sftp binary to transfer files. Does anyone have any idea how to do this, either with the native sftp client on Linux or with some library like paramiko (but one that performs close to the native sftp client)?

    Read the article

  • Is there such a thing as too many tables?

    - by Stacey
    I've been searching Stack Overflow for about an hour now and couldn't find any related topics, so I apologize if this is a duplicate question. My inquiry is this: is there a point at which there are too many tables in a database, even if the structure is well organized, thought out, and perfectly facilitates the design intent? I have a database that is quickly approaching 40 tables - about 10 main ones, and over 30 ancillary tables (junction tables, 'enumeration' tables, etc). Am I just a bad developer, or should I be trying something different? It seems like so many to me, and I'm really worried about how it will impact the performance of the project. I have done a lot of condensing where possible, grouped similar things where possible, etc. The database is built in MS-SQL 2008.

    Read the article

  • Mysql query taking too much time

    - by aditya
    I have a problem related to a MySQL database. I am a Linux webserver admin and I am facing a problem with a MySQL query. The database is very small. I tried to track it in the logs and found that a query is taking a minimum of 5 seconds to respond. The first page of the site comes from the database, and the client is using a CMS. When the server gets a certain number of hits, the database server starts to respond very slowly and the wait time grows from 5 seconds to considerably more. I checked the slow query log:

        Query_time: 11.480138  Lock_time: 0.003837  Rows_sent: 921  Rows_examined: 3333
        SET timestamp=1346656767;
        SELECT `Tender`.`id`, `Tender`.`department_id`, `Tender`.`title_english`, `Tender`.`content_english`,
               `Tender`.`title_hindi`, `Tender`.`content_hindi`, `Tender`.`file_name`, `Tender`.`start_publish`,
               `Tender`.`end_publish`, `Tender`.`publish`, `Tender`.`status`, `Tender`.`createdBy`,
               `Tender`.`created`, `Tender`.`modifyBy`, `Tender`.`modified`
        FROM `mcms_tenders` AS `Tender`
        WHERE `Tender`.`department_id` IN (31, 33, 32, 30);

    Every entry in the log is the same; only the query time differs. Is there any way to tweak the performance?

    Read the article

  • For what purpose does java have a float primitive type?

    - by Roman
    I have heard plenty of different claims about the float type in Java. The most popular issues typically concern converting float values to double and vice versa. I read (a rather long time ago, and I'm not sure it still holds with newer JVMs) that float gives much worse performance than double. It's also not recommended to use float in scientific applications, which require a certain accuracy. I also remember that when I worked with AWT and Swing I had some problems with using float or double (like choosing between Point2D.Float and Point2D.Double). So I see only two advantages of float over double: it needs only 4 bytes while double needs 8 bytes, and the JMM guarantees that assignment is atomic for float variables while it is not for doubles. Are there any other cases where float is better than double? Do you use floats in your applications? It seems to me that the only valuable reason Java has float is backward compatibility.
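    A small illustration of the precision side of that trade-off (this example is not from the original post): float has a 24-bit significand, so whole numbers above 2^24 are silently rounded, while double (53-bit significand) still represents them exactly.

        // Demonstrates float rounding above 2^24 and the 4-vs-8-byte footprint.
        public class FloatVsDouble {
            public static void main(String[] args) {
                float f = 16_777_217f;    // 2^24 + 1
                double d = 16_777_217d;

                System.out.println(f);    // prints 1.6777216E7 - the value was rounded
                System.out.println(d);    // prints 1.6777217E7 - exact

                // Memory: one million floats take ~4 MB, one million doubles ~8 MB.
                float[] fa = new float[1_000_000];
                double[] da = new double[1_000_000];
                System.out.println(fa.length + " floats vs " + da.length + " doubles");
            }
        }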

    Read the article

  • Threading vs single thread

    - by user177883
    Is it always guaranteed that a multi-threaded application will run faster than a single-threaded one? I have two threads that populate data from a data source, but for different entities (e.g. a database, from two different tables), and it seems like the single-threaded version of the application runs faster than the version with two threads. Why would that be? When I look at the performance monitor, both CPUs are very spiky - is this due to context switching? What are the best practices for fully utilizing the CPU? I hope this is not ambiguous.
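    For what it's worth, a minimal Java sketch of the pattern (the post does not specify a platform, so the names here are illustrative): run the two loads in parallel and wait for both. This only pays off when the loads do not contend for the same lock, connection, or disk, and when each load is slow enough to amortize thread startup and context switching; otherwise the single-threaded version can easily win.

        import java.util.List;
        import java.util.concurrent.*;

        public class ParallelLoad {
            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(2);

                Callable<List<String>> loadCustomers = () -> loadTable("customers");
                Callable<List<String>> loadOrders    = () -> loadTable("orders");

                Future<List<String>> customers = pool.submit(loadCustomers);
                Future<List<String>> orders    = pool.submit(loadOrders);

                // get() blocks until each load finishes; total time is roughly the
                // slower of the two loads instead of their sum.
                List<String> c = customers.get();
                List<String> o = orders.get();

                pool.shutdown();
                System.out.println(c.size() + " customers, " + o.size() + " orders");
            }

            // Placeholder for a real data-access call (e.g. a JDBC query).
            private static List<String> loadTable(String name) {
                return List.of(name + "-row-1", name + "-row-2");
            }
        }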

    Read the article

  • Decoupling into DAL and BLL - my concerns.

    - by novice_man
    Hi, in many posts concerning this topic I come across very simple examples that do not answer my question. Let's say I have a document table and a user table. In a DAL written in ADO.NET I have a method to retrieve all documents matching some criteria. Now in the UI I have a case where I need to show this list along with the name of each document's creator. Up to now I have done it with one method in the DAL containing a JOIN statement. However, every time I have such a complex method I have to do a custom mapping to some object that doesn't map 1:1 to the DB. Should that mapping be put into another layer? If so, I would have to give up the JOIN query in favor of iterating through the results and querying each document's author separately, which doesn't make sense performance-wise. What is the best approach for such scenarios?
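    One possible approach is to let the DAL keep the single JOIN and return a purpose-built read object that deliberately does not map 1:1 to any table. A sketch in Java/JDBC purely for illustration (the post uses ADO.NET, and the table and column names below are invented):

        import java.sql.*;
        import java.util.ArrayList;
        import java.util.List;

        public class DocumentListDao {

            // View object for this screen only: document fields plus the creator's name.
            public record DocumentListItem(long id, String title, String creatorName) {}

            private final Connection connection;

            public DocumentListDao(Connection connection) {
                this.connection = connection;
            }

            public List<DocumentListItem> findByStatus(String status) throws SQLException {
                String sql = """
                        SELECT d.id, d.title, u.name
                        FROM document d
                        JOIN app_user u ON u.id = d.created_by
                        WHERE d.status = ?""";
                List<DocumentListItem> items = new ArrayList<>();
                try (PreparedStatement ps = connection.prepareStatement(sql)) {
                    ps.setString(1, status);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            items.add(new DocumentListItem(
                                    rs.getLong("id"),
                                    rs.getString("title"),
                                    rs.getString("name")));
                        }
                    }
                }
                return items;
            }
        }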

    Read the article

  • How do I use compiler intrinsic __fmul_?

    - by Eric Thoma
    I am writing a massively parallel GPU application. I have been optimizing it by hand. I received a 20% performance increase with __fdividef(x, y), and according to the CUDA C Programming Guide (section C.2.1), using similar functions for multiplication and addition is also beneficial. The function is stated there as "__fmul_[rn,rz,ru,rd](x,y)", whereas __fdividef(x,y) was not stated with the arguments in brackets. I was wondering, what are those brackets? If I run the simple code int t = __fmul_(5,4); I get a compiler error about how __fmul_ is undefined. I have the CUDA runtime included, so I don't think it is a setup thing; rather, it is something to do with those square brackets. How do I correctly use this function? Thank you.

    Read the article

  • Algorithms for modern hardware?

    - by Jurily
    Once again, I find myself with a set of broken assumptions. The article itself is about a 10x performance gain by modifying a proven-optimal algorithm to account for virtual memory: What good is an O(log2(n)) algorithm if those operations cause page faults and slow disk operations? For most relevant datasets an O(n) or even an O(n^2) algorithm, which avoids page faults, will run circles around it. Are there more such algorithms around? Should we re-examine all those fundamental building blocks of our education? What else do I need to watch out for when writing my own?
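    A small self-contained illustration of the same theme one level up the memory hierarchy (this example is not from the article the poster references): both loops below perform exactly the same number of additions, yet the row-major traversal is typically several times faster because it walks memory sequentially and stays inside cache lines, while the column-major one keeps missing the cache.

        // Compares row-major vs column-major traversal of the same 2D array.
        public class Locality {
            public static void main(String[] args) {
                int n = 4_000;
                int[][] a = new int[n][n];

                long t0 = System.nanoTime();
                long rowSum = 0;
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++)
                        rowSum += a[i][j];          // sequential access

                long t1 = System.nanoTime();
                long colSum = 0;
                for (int j = 0; j < n; j++)
                    for (int i = 0; i < n; i++)
                        colSum += a[i][j];          // strided access, cache-hostile

                long t2 = System.nanoTime();
                System.out.printf("row-major: %d ms, column-major: %d ms%n",
                        (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
            }
        }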

    Read the article

  • Is re-using a Command and Connection object in ado.net a legitimate way of reducing new object creat

    - by Neil Trodden
    The way our application is currently written involves creating a new connection and command object in every method that accesses our SQLite db. Considering we need it to run on a WM5 device, that is leading to hideous performance. Our plan is to use just one connection object per thread, but it has also occurred to us to use one global command object per thread too. The benefit of this is that it reduces the overhead on the garbage collector created by instantiating objects all over the place. I can't find any advice against doing this, but I wondered if anyone can answer definitively whether this is a good or bad thing to do, and why?

    Read the article

  • How to pull a RANDOM and UNIQUE record from SQL via LINQ.

    - by Jeremy H
    Okay, I found lots of posts on SO about how to pull a RANDOM item from the database when using LINQ. There seem to be a couple of different ways to handle this. What I need to do, though, is pull a RANDOM item from the database that the user has not seen before. The data I am pulling from the database is very small. Is there any way I can just hit the database once for 1000 records and then randomly scroll through those? Should I put a cookie on the user's system recording the IDs of the items they have seen, pull a random record, check whether it has been seen and, if so, pull from the database again? That seems like performance issues just waiting to happen. I don't expect anyone to code it for me; I am just looking for concepts and a pointer in the right direction for how I should go about this. Need more details? Just let me know!

    Read the article
