Search Results

Search found 3422 results on 137 pages for 'optimization'.

Page 70 of 137

  • Optimizing GDI+ drawing?

    - by user146780
    I'm using C++ and GDI+. I'm going to be making a vector drawing application and want to use GDI+ for the drawing. I've created a simple test to get familiar with it:

    ```cpp
    case WM_PAINT:
        GetCursorPos(&mouse);
        GetClientRect(hWnd, &rct);
        hdc = BeginPaint(hWnd, &ps);
        MemDC = CreateCompatibleDC(hdc);
        bmp = CreateCompatibleBitmap(hdc, 600, 600);
        SelectObject(MemDC, bmp);
        g = new Graphics(MemDC);
        for (int i = 0; i < 1; ++i)
        {
            SolidBrush sb(Color(255, 255, 255));
            g->FillRectangle(&sb, rct.top, rct.left, rct.right, rct.bottom);
        }
        for (int i = 0; i < 250; ++i)
        {
            pts[0].X = 0;
            pts[0].Y = 0;
            pts[1].X = 10 + mouse.x * i;
            pts[1].Y = 0 + mouse.y * i;
            pts[2].X = 10 * i + mouse.x;
            pts[2].Y = 10 + mouse.y * i;
            pts[3].X = 0 + mouse.x;
            pts[3].Y = (rand() % 600) + mouse.y;
            Point p1, p2;
            p1.X = 0;
            p1.Y = 0;
            p2.X = 300;
            p2.Y = 300;
            g->FillPolygon(&b, pts, 4);
        }
        BitBlt(hdc, 0, 0, 900, 900, MemDC, 0, 0, SRCCOPY);
        EndPaint(hWnd, &ps);
        DeleteObject(bmp);
        g->ReleaseHDC(MemDC);
        DeleteDC(MemDC);
        delete g;
        break;
    ```

    I'm wondering if I'm doing it right, or if I have areas killing the CPU, because right now it takes about a second to render this and I want it to be able to redraw itself very quickly. In a real situation, would it be better to figure out the portion of the screen to redraw and only redraw the elements within the bounds of that region? Thanks.
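
    A hedged sketch of the most likely win: the back buffer (MemDC, bmp, and the Graphics) is created and destroyed on every WM_PAINT, and the whole area is repainted and blitted every time. Caching the buffer and blitting only the invalid rectangle avoids most of that work (assumes the usual WndProc switch; g_memDC/g_backBmp are illustrative names):

    ```cpp
    // At file scope (illustrative):
    static HDC     g_memDC   = NULL;   // cached back-buffer DC
    static HBITMAP g_backBmp = NULL;   // cached back-buffer bitmap

    // Inside the WndProc switch:
    case WM_SIZE:
    {
        HDC hdc = GetDC(hWnd);
        if (g_memDC)   DeleteDC(g_memDC);
        if (g_backBmp) DeleteObject(g_backBmp);
        g_memDC   = CreateCompatibleDC(hdc);
        g_backBmp = CreateCompatibleBitmap(hdc, LOWORD(lParam), HIWORD(lParam));
        SelectObject(g_memDC, g_backBmp);
        ReleaseDC(hWnd, hdc);
        // ...redraw the scene into g_memDC here...
        break;
    }
    case WM_PAINT:
    {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hWnd, &ps);
        // Blit only the invalidated rectangle from the cached buffer.
        BitBlt(hdc, ps.rcPaint.left, ps.rcPaint.top,
               ps.rcPaint.right - ps.rcPaint.left,
               ps.rcPaint.bottom - ps.rcPaint.top,
               g_memDC, ps.rcPaint.left, ps.rcPaint.top, SRCCOPY);
        EndPaint(hWnd, &ps);
        break;
    }
    ```

    To the last question: yes, redrawing only the shapes that intersect the invalid region is the standard approach; calling InvalidateRect with a small rectangle keeps ps.rcPaint correspondingly small.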

    Read the article

  • Graph search problem with route restrictions

    - by Darcara
    I want to calculate the most profitable route, and I think this is a type of traveling salesman problem. I have a set of nodes that I can visit, a function to calculate the cost of traveling between nodes, and points for reaching the nodes. The goal is to reach a fixed, known score while minimizing the cost. The costs and rewards are not fixed and depend on the nodes visited before. The starting node is fixed. There are some restrictions on how nodes can be visited. Some simplified examples include:

    - Node B can only be visited after A.
    - After node C has been visited, D or E can be visited; visiting at least one is required, visiting both is permissible.
    - Z can only be visited after at least 5 other nodes have been visited.
    - Once 50 nodes have been visited, the nodes A-M no longer reward points.
    - Certain nodes can (and probably must) be visited multiple times.

    Currently I can think of only two ways to solve this: (a) a genetic algorithm, with the fitness function calculating the cost/benefit of the generated route (see the sketch below), or (b) Dijkstra search through the graph, since the starting node is fixed, although the large number of nodes will probably make that infeasible memory-wise. Are there any other ways to determine the best route through the graph? It doesn't need to be perfect; an approximate path is perfectly fine as long as its error is acceptable. Would TSP solvers be an option here?
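
    A minimal sketch of option (a)'s fitness function: score a candidate route and charge penalties for violated restrictions and for missing the score target. World and its stubbed methods are illustrative assumptions, not a real API:

    ```cpp
    #include <cstdlib>
    #include <vector>

    struct World {
        double cost(int a, int b) const { return std::abs(a - b); }   // stub
        double points(int n) const { return n % 10; }                 // stub
        bool allowed(const std::vector<int>& seen, int next) const {  // stub:
            (void)seen; (void)next; return true;   // encode "B after A" etc. here
        }
    };

    double fitness(const World& w, const std::vector<int>& route, double target) {
        double cost = 0, score = 0, penalty = 0;
        std::vector<int> seen;
        for (std::size_t i = 1; i < route.size(); ++i) {
            if (!w.allowed(seen, route[i])) penalty += 1e6;   // infeasible step
            cost  += w.cost(route[i - 1], route[i]);
            score += w.points(route[i]);
            seen.push_back(route[i]);
        }
        if (score < target) penalty += (target - score) * 1e3; // target missed
        return -(cost + penalty);   // the GA maximizes this: cheaper is better
    }
    ```

    Penalizing rather than rejecting infeasible routes keeps the population diverse, which usually helps crossover find feasible neighbors.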

    Read the article

  • SQL -- How is DISTINCT so fast without an index?

    - by Jonathan
    Hi, I have an SQLite database with a table called 'links' with 600 million rows in it. There are 2 columns in the table - a "src" column and a "dest" column. At present there are no indices. There are a fair number of common values between src and dest, but also a fair number of duplicated rows. The first thing I'm trying to do is remove all the duplicate rows, and then perform some additional processing on the results. However, I've been encountering some weird issues. Firstly, SELECT * FROM links WHERE src=434923 AND dest=5010182 returns one result fairly quickly and then takes quite a long time to finish, as I assume it's performing a table scan on the rest of the 600m rows. However, SELECT DISTINCT * FROM links immediately starts returning rows really quickly. The question is: how is this possible? Surely for each row, the row must be compared against all of the other rows in the table, but this would require a table scan of the remaining rows, which SHOULD take ages! Any ideas why SELECT DISTINCT is so much quicker than a standard SELECT?
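
    A hedged sketch for the dedup step itself: materialize the distinct rows once, swap the tables, then index for the later point lookups (table and index names below are illustrative):

    ```sql
    CREATE TABLE links_dedup AS SELECT DISTINCT src, dest FROM links;
    DROP TABLE links;
    ALTER TABLE links_dedup RENAME TO links;
    -- an index makes point lookups like the WHERE query above fast
    CREATE INDEX links_src_dest_idx ON links(src, dest);
    ```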

    Read the article

  • optimizing oracle query

    - by deming
    I'm having a hard time wrapping my head around this query. It is taking 200+ seconds to execute. I've pasted the execution plan as well.

    ```sql
    SELECT user_id,
           ROLE_ID,
           effective_from_date,
           effective_to_date,
           participant_code,
           ACTIVE
      FROM CMP_USER_ROLE E
     WHERE ACTIVE = 0
       AND (SYSDATE BETWEEN effective_from_date AND effective_to_date
            OR TO_CHAR(effective_to_date, 'YYYY-Q') = '2010-2')
       AND participant_code = 'NY005'
       AND NOT EXISTS
           (SELECT 1
              FROM CMP_USER_ROLE r
             WHERE r.USER_ID = E.USER_ID
               AND r.role_id = E.role_id
               AND r.ACTIVE  = 4
               AND E.effective_to_date <=
                   (SELECT MAX(last_update_date)
                      FROM CMP_USER_ROLE S
                     WHERE S.role_id = r.role_id
                       AND S.role_id = r.role_id
                       AND S.ACTIVE  = 4))
    ```

    Explain plan:

    ```
    -----------------------------------------------------------------------------------------------------
    | Id  | Operation                        | Name             | Rows | Bytes | Cost (%CPU)| Time     |
    -----------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                 |                  |    1 |    37 |   154   (2)| 00:00:02 |
    |*  1 |  FILTER                          |                  |      |       |            |          |
    |*  2 |   TABLE ACCESS BY INDEX ROWID    | USER_ROLE        |    1 |    37 |    30   (0)| 00:00:01 |
    |*  3 |    INDEX RANGE SCAN              | N_USER_ROLE_IDX6 |   27 |       |     3   (0)| 00:00:01 |
    |*  4 |   FILTER                         |                  |      |       |            |          |
    |   5 |    HASH GROUP BY                 |                  |    1 |    47 |   124   (2)| 00:00:02 |
    |*  6 |     TABLE ACCESS BY INDEX ROWID  | USER_ROLE        |  159 |  3339 |   119   (1)| 00:00:02 |
    |   7 |      NESTED LOOPS                |                  |   11 |   517 |   123   (1)| 00:00:02 |
    |*  8 |       TABLE ACCESS BY INDEX ROWID| USER_ROLE        |    1 |    26 |     4   (0)| 00:00:01 |
    |*  9 |        INDEX RANGE SCAN          | N_USER_ROLE_IDX5 |    1 |       |     3   (0)| 00:00:01 |
    |* 10 |       INDEX RANGE SCAN           | N_USER_ROLE_IDX2 |  957 |       |    74   (2)| 00:00:01 |
    -----------------------------------------------------------------------------------------------------
    ```
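
    A hedged aside: the innermost subquery repeats S.role_id = r.role_id twice, which may be a transcription slip for a user_id correlation and is worth checking, since it would change the result. Independent of that, the NOT EXISTS probes CMP_USER_ROLE repeatedly, so a composite index over the correlation columns is a common first experiment (index name and column order below are assumptions to validate against the plan):

    ```sql
    CREATE INDEX cmp_user_role_ix1
        ON CMP_USER_ROLE (user_id, role_id, active, effective_to_date);
    ```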

    Read the article

  • Best way to reverse a string in C# 2.0

    - by Guy
    I've just had to write a string reverse function in C# 2.0 (i.e. LINQ not available) and came up with this:

    ```csharp
    public string Reverse(string text)
    {
        char[] cArray = text.ToCharArray();
        string reverse = String.Empty;
        for (int i = cArray.Length - 1; i > -1; i--)
        {
            reverse += cArray[i];
        }
        return reverse;
    }
    ```

    Personally I'm not crazy about the function and am convinced that there's a better way to do it. Is there?
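
    For comparison, one widely used .NET 2.0 approach (no LINQ needed) avoids the O(n^2) cost of repeated string concatenation by reversing the char array in place:

    ```csharp
    public static string Reverse(string text)
    {
        char[] chars = text.ToCharArray();
        Array.Reverse(chars);           // in-place, no intermediate strings
        return new string(chars);
    }
    ```

    Like the original, this reverses by UTF-16 code unit, so surrogate pairs and combining characters are not handled specially.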

    Read the article

  • C#, Using Custom Generic Collection faster with objects than List<T>

    - by Kaminari
    Hello, I'm using for now List< to iterate through some object collection and find matching element, The problem is that object has only 2 significant values Name and Link (strings) but has some other values wich I dont want to compare. I'm thinkig about using something like HashSet (wich is exactly what I'm searching for - fast) from .NET 3.5 but target framework has to be 2.0. There is something called Power Collections here: http://powercollections.codeplex.com/ But maybe there is other way? If not, can you suggest me a suitable custom collection?

    Read the article

  • Storing information in the DOM?

    - by John
    I'm making a small private message application in the form of a phone. Ten messages are shown at a time, and the list of messages is scrolled up/down by hiding them. Just how bad is it to use the DOM to store information in this way? My main goal in doing this is to reduce calls to the database: instead of making a new call all the time, it only checks whether any new messages have arrived and adds the new message(s). What's the alternative - cookies, anyone? Thank you for your time.
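
    A hedged sketch of the usual alternative: keep the data in a plain JavaScript array (the model) and let the DOM hold only the ten visible messages, rather than hundreds of hidden nodes. The container id and message shape are illustrative assumptions:

    ```javascript
    var messages = [];   // all fetched messages live here, not in the DOM
    var offset = 0;      // index of the first visible message

    function render() {
        var list = document.getElementById('messages');  // assumed container
        list.innerHTML = '';
        var end = Math.min(offset + 10, messages.length);
        for (var i = offset; i < end; i++) {
            var li = document.createElement('li');
            li.textContent = messages[i].text;
            list.appendChild(li);
        }
    }

    function scrollMessages(delta) {   // delta = +1 / -1 pages
        offset = Math.max(0, Math.min(offset + delta * 10, messages.length - 10));
        render();
    }
    ```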

    Read the article

  • WPF Logical Tree - bottom up vs. top down

    - by Dor Rotman
    Hello, I've read the MSDN article about the layouts pass, that states: When a node is added or removed from the logical tree, property invalidations are raised on the node's parent and all its children. As a result, a top-down construction pattern should always be followed to avoid the cost of unnecessary invalidations on nodes that have already been validated. Now lets assume I do this. Won't the users see the control tree populate itself and the layout change several times during the control creation process? I want the whole control tree to just appear completely full. Thanks!

    Read the article

  • Does a C/C++ compiler optimize constant divisions by a power-of-two value into shifts?

    - by porgarmingduod
    The question says it all. Does anyone know if the following...

    ```cpp
    size_t div(size_t value) {
        const size_t x = 64;
        return value / x;
    }
    ```

    ...is optimized into this?

    ```cpp
    size_t div(size_t value) {
        return value >> 6;
    }
    ```

    Do compilers do this? (My interest lies in GCC.) Are there situations where it does and others where it doesn't? I would really like to know, because every time I write a division that could be optimized like this I spend some mental energy wondering whether a precious fraction of a second is wasted doing a division where a shift would suffice.
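
    A hedged way to see for yourself (assumes g++ is on the path):

    ```cpp
    #include <cstddef>

    // Compile with:  g++ -O2 -S -o - div.cpp
    // For an unsigned operand such as size_t, GCC emits a single logical
    // right shift for division by a constant power of two. For signed types
    // it still avoids a divide instruction, but adds a small fix-up so that
    // negative values round toward zero, so the code is slightly longer.
    std::size_t udiv64(std::size_t value) { return value / 64; } // shr by 6
    long        sdiv64(long value)        { return value / 64; } // shift + fix-up
    ```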

    Read the article

  • Why use a one dimensional array instead of a two dimensional array?

    - by user3869145
    I was doing some work handling a lot of information, and my partner told me that I was using too many matrices to manipulate the variables of the problem. The idea was to use one-dimensional arrays (int a[]) instead of two-dimensional arrays (int b[][]), to save memory and improve the speed of the algorithm. How certain is it that this change will speed up the execution or compilation of my C++ code?
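
    For reference, a minimal sketch of the flattening itself. Note that a fixed-size built-in array int b[R][C] is already one contiguous block, so for such arrays the compiler generates essentially the same indexing; the usual win is with dynamically allocated arrays, where one allocation replaces a pointer-per-row layout (fewer allocations, fewer indirections, better locality). Dimensions here are illustrative:

    ```cpp
    #include <cassert>

    int main() {
        const int R = 100, C = 200;
        int* a = new int[R * C];          // one contiguous block
        // element (row, col) lives at a[row * C + col]
        a[3 * C + 5] = 42;                // the same element as b[3][5]
        assert(a[3 * C + 5] == 42);
        delete[] a;
        return 0;
    }
    ```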

    Read the article

  • Is there a way to optimise finding text items on a page (not regex)

    - by Jeepstone
    After seeing several threads rubbishing the regex method of finding a term to match within an HTML document, I've used the Simple HTML DOM PHP parser (http://simplehtmldom.sourceforge.net/) to get the bits of text I'm after, but I want to know if my code is optimal. It feels like I'm looping too many times. Is there a way to optimise the following loop?

    ```php
    // Get the HTML and look at the text nodes
    $html = str_get_html($buffer);
    // First we match the <body> tag, as we don't want to change the <head> items
    foreach ($html->find('body') as $body) {
        // Then we get the text nodes, rather than any HTML
        foreach ($body->find('text') as $text) {
            // Then we match each term
            foreach ($terms as $term) {
                // Match the terms within the text nodes
                $text->outertext = str_replace($term, '<span class="highlight">' . $term . '</span>', $text->outertext);
            }
        }
    }
    ```

    For example, would it make a difference to check whether I have any matches at all before I start the loop?
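
    One hedged simplification: str_replace accepts arrays for both the search and replace arguments, so the innermost per-term loop can collapse into a single call per text node (assuming $terms is a plain array of strings):

    ```php
    $replacements = array();
    foreach ($terms as $term) {
        $replacements[] = '<span class="highlight">' . $term . '</span>';
    }
    foreach ($html->find('body') as $body) {
        foreach ($body->find('text') as $text) {
            // one pass over the node text instead of count($terms) passes
            $text->outertext = str_replace($terms, $replacements, $text->outertext);
        }
    }
    ```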

    Read the article

  • serving cached files based upon cookie?

    - by matthewsteiner
    So I realized something today. In my application, you really can't get anywhere (except the front page) unless you're logged in, and you can't be logged in without a cookie. So my front page could be cached, except that if you are logged in (have a cookie set) it should just redirect into the application. Is there a way for nginx to look for a cookie and decide, based on whether it finds it, to deliver a cached file? Just an idea...
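
    A hedged sketch of one way to express this in nginx (untested; the cookie name, cache path, and @app location are all assumptions to adapt):

    ```nginx
    location = / {
        # Logged-in users (cookie present) go straight into the application.
        if ($http_cookie ~* "session=") {
            rewrite ^ /app last;
        }
        # Anonymous users get the cached front page, falling back to the app.
        try_files /cache/index.html @app;
    }
    ```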

    Read the article

  • Nested loop traversing arrays

    - by alecco
    There are 2 very big series of elements, the second 100 times bigger than the first. For each element of the first series, there are 0 or more elements in the second series. This can be traversed and processed with 2 nested loops, but the unpredictability of the number of matching elements for each member of the first array makes things very, very slow. The actual processing of the 2nd series of elements involves logical AND (&) and a population count. I couldn't find good optimizations using C, but I am considering doing inline asm, doing rep* mov* or similar for each element of the first series and then doing batch processing of the matching bytes of the second series, perhaps in buffers of 1MB or something. But the code would get quite messy. Does anybody know of a better way? C preferred, but x86 ASM is OK too. Many thanks! Sample/demo code with a simplified problem: the first series are "people" and the second series are "events", for clarity's sake. (The original problem is actually 100m and 10,000m entries!)

    ```c
    #include <stdio.h>
    #include <stdint.h>

    #define PEOPLE 1000000     // 1m
    struct Person {
        uint8_t age;           // Filtering condition
        uint8_t cnt;           // Number of events for this person in E
    } P[PEOPLE];               // Each has 0 or more bytes with bit flags

    #define EVENTS 100000000   // 100m
    uint8_t P1[EVENTS];        // Property 1 flags
    uint8_t P2[EVENTS];        // Property 2 flags

    void init_arrays() {
        for (int i = 0; i < PEOPLE; i++) {   // just some stuff
            P[i].age = i & 0x07;
            P[i].cnt = i % 220;              // assert( sum < EVENTS );
        }
        for (int i = 0; i < EVENTS; i++) {
            P1[i] = i % 7;                   // just some stuff
            P2[i] = i % 9;                   // just some other stuff
        }
    }

    int main(int argc, char *argv[]) {
        uint64_t sum = 0, fcur = 0;
        int age_filter = 7;                  // just some
        init_arrays();                       // Init P, P1, P2
        for (int64_t p = 0; p < PEOPLE; p++)
            if (P[p].age < age_filter)
                for (int64_t e = 0; e < P[p].cnt; e++, fcur++)
                    sum += __builtin_popcount(P1[fcur] & P2[fcur]);
            else
                fcur += P[p].cnt;            // skip this person's events
        printf("(dummy %ld %ld)\n", sum, fcur);
        return 0;
    }
    ```

    Compiled with: gcc -O5 -march=native -std=c99 test.c -o test
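
    A hedged sketch of one improvement: since each person's events are contiguous, a run of consecutive filter-passing people can be folded into one byte span, and the span combined eight bytes at a time with a 64-bit popcount (gcc/clang builtin; memcpy sidesteps strict-aliasing and alignment issues):

    ```c
    #include <stdint.h>
    #include <string.h>

    static uint64_t popcount_span(const uint8_t *p1, const uint8_t *p2, size_t n)
    {
        uint64_t sum = 0;
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {          /* 8 event bytes per iteration */
            uint64_t a, b;
            memcpy(&a, p1 + i, 8);
            memcpy(&b, p2 + i, 8);
            sum += __builtin_popcountll(a & b);
        }
        for (; i < n; i++)                    /* leftover tail bytes */
            sum += __builtin_popcount(p1[i] & p2[i]);
        return sum;
    }
    ```

    The win depends on the filter admitting long contiguous runs; if it alternates per person, precomputing run boundaries first may pay for itself.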

    Read the article

  • Minimizing distance to a weighted grid

    - by Andrew Tomazos - Fathomling
    Let's suppose you have a 1000x1000 grid of positive integer weights W. We want to find the cell that minimizes the average weighted distance to each cell. The brute-force way to do this would be to loop over each candidate cell and calculate the distance:

    ```
    int best_x, best_y, best_dist;
    for x0 = 1:1000,
        for y0 = 1:1000,
            int total_dist = 0;
            for x1 = 1:1000,
                for y1 = 1:1000,
                    total_dist += W[x1,y1] * sqrt((x0-x1)^2 + (y0-y1)^2);
            if (total_dist < best_dist)
                best_x = x0;
                best_y = y0;
                best_dist = total_dist;
    ```

    This takes ~10^12 operations, which is too long. Is there a way to do this in or near ~10^8 operations?
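
    A hedged observation, not a full solution: with Euclidean distance the double sum does not separate, but if L1 (Manhattan) distance is an acceptable approximation, it splits per axis:

    ```latex
    f(x_0, y_0) = \sum_{x_1, y_1} W_{x_1 y_1}\bigl(|x_0 - x_1| + |y_0 - y_1|\bigr)
                = \sum_{x_1} R_{x_1}\,|x_0 - x_1| \;+\; \sum_{y_1} C_{y_1}\,|y_0 - y_1|
    ```

    where R and C are the row and column sums of W. Each one-dimensional term can be evaluated for every candidate with running prefix sums, bringing the whole search to roughly 10^6 operations; the L1 minimizer can then seed a local search under the true Euclidean metric.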

    Read the article

  • Getting confused why I don't get the expected amount?

    - by Stackfan
    I have 1 result, which I will receive in a bank account, and based on that amount I have to put a balance in the user's account. How can I find the handling cost from the total? I tried 491.50 / 0.95 = 517.36, which is wrong; it should be 500.00 (to my expectation). The user balance requires 500.00. When 500.00 is selected the user gets a 5% discount, and there is a handling cost, e.g.:

    1) Discount: 500.00 - 5% = 475.00
    2) Handling cost: (475.00 x 0.034) + 0.35 = 16.50
    3) Total: 475.00 + 16.50 = 491.50

    So the problem is that, starting from 491.50, I have to recover at least the handling cost to get the promised balance. Any solution? I can't figure it out myself...
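
    The 491.50 / 0.95 attempt applies the discount step first; the pipeline has to be undone in reverse order. A worked inversion using the example's own rates (5% discount, 3.4% + 0.35 handling):

    ```latex
    \text{discounted} = \frac{\text{total} - 0.35}{1.034}
                      = \frac{491.50 - 0.35}{1.034} = 475.00,
    \qquad
    \text{balance} = \frac{\text{discounted}}{0.95} = \frac{475.00}{0.95} = 500.00
    ```

    The handling cost then falls out as 491.50 - 475.00 = 16.50, matching step 2.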

    Read the article

  • [C++] Is it possible to roll a significantly faster version of sqrt

    - by John
    In an app I'm profiling, I found that in some scenarios these functions can take over 10% of the total execution time. I've seen discussion over the years of faster sqrt implementations using sneaky floating-point trickery, but I don't know if such things are outdated on modern CPUs. The MSVC++ 2008 compiler is being used, for reference... though I'd assume the sqrt call itself is not going to add much overhead.
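
    A hedged sketch of one modern option (profile before and after; under MSVC++ 2008 the /arch:SSE2 and /fp:fast flags may buy the same effect with no code changes):

    ```cpp
    #include <xmmintrin.h>

    // Forces the hardware sqrtss instruction and skips any errno/NaN
    // handling a library sqrt() wrapper may add.
    inline float fast_sqrt(float x)
    {
        return _mm_cvtss_f32(_mm_sqrt_ss(_mm_set_ss(x)));
    }
    // If ~12 bits of precision suffice, _mm_rsqrt_ss gives an approximate
    // reciprocal square root that one Newton-Raphson step can refine,
    // which is typically faster still.
    ```

    The old bit-trick approximations (e.g. the Quake inverse sqrt) generally lose to these hardware instructions on modern CPUs.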

    Read the article

  • Java API Method Run Times

    - by Mike
    Is there a good resource for the run times of standard API methods? It's somewhat confusing when trying to optimize your program. I know Java isn't made to be particularly speedy, but I can't seem to find much info on this at all. Example problem: if I am looking for a certain token in a file, is it faster to scan each line using String.contains(...), or to bring in, say, 100 or so lines into a local string and perform contains on that chunk?
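
    On the example problem, a hedged sketch: BufferedReader already reads the file in large chunks internally, so scanning line by line is usually I/O-bound either way (note this misses tokens that span a line break):

    ```java
    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.IOException;

    public class TokenScan {
        // Reads the file once through a buffered stream; for most files the
        // cost is dominated by I/O, not by String.contains itself.
        public static boolean containsToken(File f, String token) throws IOException {
            try (BufferedReader r = new BufferedReader(new FileReader(f))) {
                String line;
                while ((line = r.readLine()) != null) {
                    if (line.contains(token)) {
                        return true;
                    }
                }
            }
            return false;
        }
    }
    ```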

    Read the article

  • What is microbenchmarking?

    - by polygenelubricants
    I've heard this term used, but I'm not entirely sure what it means, so:

    - What DOES it mean, and what DOESN'T it mean?
    - What are some examples of what IS and ISN'T microbenchmarking?
    - What are the dangers of microbenchmarking, and how do you avoid them? (Or is it a good thing?)
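
    An illustrative example of the dangers, in Java: the JIT may eliminate work whose result is unused, and the first iterations measure interpretation and compilation rather than steady state, so naive timings like this can be meaningless. Harnesses such as JMH exist to handle warm-up and dead-code elimination:

    ```java
    public class BadBenchmark {
        public static void main(String[] args) {
            long start = System.nanoTime();
            for (int i = 0; i < 1_000_000; i++) {
                Math.sqrt(i);   // result discarded: may be optimized away
            }
            System.out.println((System.nanoTime() - start) + " ns (misleading)");
        }
    }
    ```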

    Read the article

  • Optimizing MySQL queries with IN operator

    - by Arkadiusz Kondas
    I have a MySQL database with a fairly large products table. Each product has its own id, plus a categoryId field holding the id of the category the product belongs to. Now I have a query that pulls products from given categories, such as:

    SELECT * FROM products WHERE categoryId IN (1, 2, 3, 4, 5, 34, 6, 7, 8, 9, 10, 11, 12)

    Of course a WHERE clause and an ORDER BY sort also come into it, but they're not the point here. Let's say there are 250k products and over 100k visits per day. Under those conditions the slow log registers plenty of these queries with long execution times. Do you have any ideas how to optimize the given problem? The table engine is MyISAM.
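
    A hedged first step, assuming categoryId is currently unindexed (column and table names as in the question; the sort column is a placeholder):

    ```sql
    ALTER TABLE products ADD INDEX idx_category (categoryId);
    -- if the query also sorts, a composite index can serve filter + sort:
    -- ALTER TABLE products ADD INDEX idx_category_sort (categoryId, sort_column);
    ```

    With an index on categoryId, an IN list becomes a set of cheap range probes instead of a full scan; SELECT * still forces row lookups, so selecting only the needed columns can help further.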

    Read the article

  • Optimal (time-wise) solution to check that a variable is within bounds

    - by kumar_m_kiran
    Hi all, sorry if the question is very naive. I have to check the condition 0 < x < y in my code, i.e. code similar to if (x > 0 && x < y). The basic problem at the system level is that currently, for every call (telecom domain terminology), my existing code is hit many times, so performance is very, very critical. Now I need to add a boundary check in many locations, with a different boundary comparison at each location. At the level of ordinary coding the comparison above looks perfectly harmless, but added all over my statistics module (which is dipped into many times), it will drag performance down. So I would like to know the best possible way to handle the above scenario (an optimal limit-checking technique). For example, would a bit-level comparison work better than a normal comparison, or can both comparisons be evaluated in a shorter time span?

    Other info:

    - x is an unsigned integer, which must be checked to be greater than 0 and less than y.
    - y is an unsigned integer; y is non-const and varies for every comparison.
    - Time is the constraint here, not space.
    - Language: C++.

    Now, if I later need to change the type of y to float/double, would there be another way to optimize the check (i.e. will the suggested optimal technique for integers become non-optimal when y is changed to float/double)? Thanks in advance for any input. PS: The OSes used are SUSE 10 64-bit x86_64, AIX 5.3 64-bit, and HP-UX 11.1 A 64-bit.
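
    A hedged sketch of the classic unsigned trick; note the precondition, and measure first, since optimizers often perform this transformation on their own:

    ```cpp
    // For unsigned x, the two tests collapse into one compare *provided
    // y >= 1*: unsigned wraparound makes x == 0 fail automatically,
    // because 0u - 1 is the maximum unsigned value.
    inline bool in_range(unsigned x, unsigned y)
    {
        return (x - 1) < (y - 1);   // same as (x > 0 && x < y) when y >= 1
    }
    ```

    For float/double y the wraparound trick does not apply; there, the two explicit comparisons are already about as cheap as it gets.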

    Read the article

  • Improve C function performance with cache locality?

    - by Christoper Hans
    I have to find the diagonal difference in a matrix represented as a 2d array, and the function prototype is int diagonal_diff(int x[512][512]). I have to use a 2d array, and the data is 512x512. This is tested on a SPARC machine: my current timing is 6ms but I need to be under 2ms.

    Sample data:

    [3][4][5][9]
    [2][8][9][4]
    [6][9][7][3]
    [5][8][8][2]

    The difference is: |4-2| + |5-6| + |9-5| + |9-9| + |4-8| + |3-8| = 2 + 1 + 4 + 0 + 4 + 5 = 16.

    To do that, I use the following algorithm:

    ```c
    int i, j, result = 0;
    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++)
            result += abs(array[i][j] - array[j][i]);
    return result;
    ```

    But this algorithm keeps alternating between row and column accesses, which makes inefficient use of the cache. Is there a way to improve my function?
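
    A hedged sketch of a cache-friendlier version using blocked (tiled) traversal, counting each off-diagonal pair once as in the worked example. The tile size B is a tuning assumption (16x16 ints is 1 KiB, small enough for most L1 caches; tune for the target SPARC):

    ```c
    #include <stdlib.h>

    #define N 512
    #define B 16

    int diagonal_diff(int x[N][N])
    {
        int result = 0;
        for (int ib = 0; ib < N; ib += B)
            for (int jb = 0; jb <= ib; jb += B)        /* lower triangle only */
                for (int i = ib; i < ib + B; i++)
                    /* j < i skips the diagonal within diagonal tiles */
                    for (int j = jb; j < jb + B && j < i; j++)
                        result += abs(x[i][j] - x[j][i]);
        return result;
    }
    ```

    Within each B x B tile, both x[i][j] and the transposed x[j][i] accesses stay inside two small blocks that fit in cache, instead of striding across the whole matrix on every inner iteration.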

    Read the article
