Search Results

Search found 90546 results on 3622 pages for 'code optimization'.


  • Optimizing WordWrap Algorithm

    - by Milo
    I have a word-wrap algorithm that basically generates lines of text that fit the width of the text box. Unfortunately, it gets slow when I add too much text. I was wondering if I had overlooked any major optimizations that could be made. Also, if anyone has a better design that would still give me the lines as strings (or pointers to strings), I'd be open to rewriting the algorithm. Thanks

        void AguiTextBox::makeLinesFromWordWrap()
        {
            textRows.clear();
            textRows.push_back("");
            std::string curStr;
            std::string curWord;
            int curWordWidth = 0;
            int curLetterWidth = 0;
            int curLineWidth = 0;
            bool isVscroll = isVScrollNeeded();
            int voffset = 0;
            if(isVscroll)
            {
                voffset = pChildVScroll->getWidth();
            }
            int AdjWidthMinusVoffset = getAdjustedWidth() - voffset;
            int len = getTextLength();
            int bytesSkipped = 0;
            int letterLength = 0;
            size_t ind = 0;
            for(int i = 0; i < len; ++i)
            {
                //get the unicode character
                letterLength = _unicodeFunctions.bringToNextUnichar(ind, getText());
                curStr = getText().substr(bytesSkipped, letterLength);
                bytesSkipped += letterLength;
                curLetterWidth = getFont().getTextWidth(curStr);

                //push a new line
                if(curStr[0] == '\n')
                {
                    textRows.back() += curWord;
                    curWord = "";
                    curLetterWidth = 0;
                    curWordWidth = 0;
                    curLineWidth = 0;
                    textRows.push_back("");
                    continue;
                }

                //ensure word is not longer than the width
                if(curWordWidth + curLetterWidth >= AdjWidthMinusVoffset && curWord.length() >= 1)
                {
                    textRows.back() += curWord;
                    textRows.push_back("");
                    curWord = "";
                    curWordWidth = 0;
                    curLineWidth = 0;
                }

                //add letter to word
                curWord += curStr;
                curWordWidth += curLetterWidth;

                //if we need a Vscroll bar start over
                if(!isVscroll && isVScrollNeeded())
                {
                    isVscroll = true;
                    voffset = pChildVScroll->getWidth();
                    AdjWidthMinusVoffset = getAdjustedWidth() - voffset;
                    i = -1;
                    curWord = "";
                    curStr = "";
                    textRows.clear();
                    textRows.push_back("");
                    ind = 0;
                    curWordWidth = 0;
                    curLetterWidth = 0;
                    curLineWidth = 0;
                    bytesSkipped = 0;
                    continue;
                }

                if(curLineWidth + curWordWidth >= AdjWidthMinusVoffset && textRows.back().length() >= 1)
                {
                    textRows.push_back("");
                    curLineWidth = 0;
                }

                if(curStr[0] == ' ' || curStr[0] == '-')
                {
                    textRows.back() += curWord;
                    curLineWidth += curWordWidth;
                    curWord = "";
                    curWordWidth = 0;
                }
            }
            if(curWord != "")
            {
                textRows.back() += curWord;
            }
            updateWidestLine();
        }
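
    One likely win, assuming getText() returns the string by value (it is called twice per character above): hoist it out of the loop and reuse a single buffer for the current glyph. Caching getFont().getTextWidth() per code point is another candidate, since the same characters are re-measured on every reflow. A sketch of the loop header under those assumptions, using the question's own names:

        // Sketch only: hoists the text lookup out of the per-character loop
        // and reuses one buffer instead of building a temporary each pass.
        const std::string& text = getText();      // fetched once, not twice per char
        for(int i = 0; i < len; ++i)
        {
            letterLength = _unicodeFunctions.bringToNextUnichar(ind, text);
            curStr.assign(text, bytesSkipped, letterLength); // no fresh temporary
            bytesSkipped += letterLength;
            curLetterWidth = getFont().getTextWidth(curStr); // consider a per-glyph cache
            // ... rest of the loop body unchanged ...
        }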

    Read the article

  • Is a red-black tree my ideal data structure?

    - by Hugo van der Sanden
    I have a collection of items (big rationals) that I'll be processing. In each case, processing will consist of removing the smallest item in the collection, doing some work, and then adding 0-2 new items (which will always be larger than the removed item). The collection will be initialised with one item, and work will continue until it is empty. I'm not sure what size the collection is likely to reach, but I'd expect in the range 1M-100M items. I will not need to locate any item other than the smallest. I'm currently planning to use a red-black tree, possibly tweaked to keep a pointer to the smallest item. However I've never used one before, and I'm unsure whether my pattern of use fits its characteristics well. 1) Is there a danger the pattern of deletion from the left + random insertion will affect performance, eg by requiring a significantly higher number of rotations than random deletion would? Or will delete and insert operations still be O(log n) with this pattern of use? 2) Would some other data structure give me better performance, either because of the deletion pattern or taking advantage of the fact I only ever need to find the smallest item? Update: glad I asked, the binary heap is clearly a better solution for this case, and as promised turned out to be very easy to implement. Hugo
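
    For reference, a binary min-heap via std::priority_queue matches this access pattern exactly (peek/pop the smallest, push zero to two larger items). A self-contained sketch, with a placeholder standing in for the big-rational type:

        #include <queue>
        #include <vector>
        #include <functional>

        using BigRational = long double; // placeholder for the real big-rational type

        int main()
        {
            // std::greater turns priority_queue into a min-heap: top() is the smallest.
            std::priority_queue<BigRational, std::vector<BigRational>,
                                std::greater<BigRational>> work;
            work.push(1.0L);                        // initialise with one item
            while(!work.empty())
            {
                BigRational smallest = work.top();  // O(1) peek at the minimum
                work.pop();                         // O(log n) removal
                // ... process 'smallest', deriving zero to two larger items ...
                if(smallest < 100.0L)
                {
                    work.push(smallest * 2);        // O(log n) insertion
                    work.push(smallest * 3);
                }
            }
        }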

    Read the article

  • Multiple ParticleSystems in cocos2d

    - by Mattias Akerman
    I'm wondering which road I should go down with ParticleSystem. In this particular case I want to create 1-20 small explosions at the same time, but with different positions. Right now I'm creating a new ParticleSystem for each explosion and then releasing it, but of course this is very punishing to performance. My question is: is there a way to create one ParticleSystem with multiple emitting sources? If not, should I create an array of ParticleSystems in init and then use a free one when an explosion is needed? Or is there another approach I haven't thought of?
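
    A generic sketch of the pool idea (deliberately independent of the cocos2d API, whose exact calls vary by version): preallocate the systems once in init, hand out an idle one per explosion, and mark it idle again when its burst finishes.

        #include <vector>
        #include <cstddef>

        struct Emitter { bool active = false; /* position, reset(), ... */ };

        class EmitterPool
        {
            std::vector<Emitter> pool;
        public:
            explicit EmitterPool(std::size_t n) : pool(n) {}  // preallocate up front
            Emitter* acquire()
            {
                for(auto& e : pool)
                    if(!e.active) { e.active = true; return &e; }
                return nullptr;             // all busy: skip, or recycle the oldest
            }
            void release(Emitter& e) { e.active = false; }    // back to the pool
        };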

    Read the article

  • How do you control what your C compiler optimizes?

    - by Jordan S
    I am writing the firmware for an embedded device in C using the Silicon Labs IDE and the SDCC compiler. The device architecture is based on the 8051 family. The function in question is shown below. The function is used to set the ports on my MCU to drive a stepper motor. It gets called by an interrupt handler. The big switch statement just sets the ports to the proper value for the next motor step. The bottom part of the function looks at an input from a hall effect sensor and the number of steps moved in order to detect whether the motor has stalled. The problem is, for some reason the second if statement, which looks like this:

        if (StallDetector > (GapSize + 20))
        {
            HandleStallEvent();
        }

    always seems to get optimized out. If I try to put a breakpoint at the HandleStallEvent() call, the IDE gives me the message "No Address Correlation to this line number". I am not really good enough at reading assembly to tell what it is doing, but I have pasted a snippet from the asm output below. Any help would be much appreciated.

        void OperateStepper(void)
        {
            //static bit LastHomeMagState = HomeSensor;
            static bit LastPosMagState = PosSensor;

            if(PulseMotor)
            {
                if(MoveDirection == 1) // Go clockwise
                {
                    switch(STEPPER_POSITION)
                    {
                    case 'A': STEPPER_POSITION = 'B'; P1 = 0xFD; break;
                    case 'B': STEPPER_POSITION = 'C'; P1 = 0xFF; break;
                    case 'C': STEPPER_POSITION = 'D'; P1 = 0xFE; break;
                    case 'D': STEPPER_POSITION = 'A'; P1 = 0xFC; break;
                    default:  STEPPER_POSITION = 'A'; P1 = 0xFC;
                    } //end switch
                }
                else // Go counterclockwise
                {
                    switch(STEPPER_POSITION)
                    {
                    case 'A': STEPPER_POSITION = 'D'; P1 = 0xFE; break;
                    case 'B': STEPPER_POSITION = 'A'; P1 = 0xFC; break;
                    case 'C': STEPPER_POSITION = 'B'; P1 = 0xFD; break;
                    case 'D': STEPPER_POSITION = 'C'; P1 = 0xFF; break;
                    default:  STEPPER_POSITION = 'A'; P1 = 0xFE;
                    } //end switch
                } //end else

                MotorSteps++;
                StallDetector++;

                if(PosSensor != LastPosMagState)
                {
                    StallDetector = 0;
                    LastPosMagState = PosSensor;
                }
                else
                {
                    if (PosSensor == ON)
                    {
                        if (StallDetector > (MagnetSize + 20))
                        {
                            HandleStallEvent();
                        }
                    }
                    else if (PosSensor == OFF)
                    {
                        if (StallDetector > (GapSize + 20))
                        {
                            HandleStallEvent();
                        }
                    }
                }
            } //end if PulseMotor
        }

    ... and the asm output for the bottom part of this function ...

        ; C:\SiLabs\Optec Programs\HSFW_HID_SDCC_2\MotionControl.c:653: if(PosSensor != LastPosMagState)
            mov   c,_P1_4
            jb    _OperateStepper_LastPosMagState_1_1,00158$
            cpl   c
        00158$:
            jc    00126$
            C$MotionControl.c$655$3$7 ==.
        ; C:\SiLabs\Optec Programs\HSFW_HID_SDCC_2\MotionControl.c:655: StallDetector = 0;
            clr   a
            mov   _StallDetector,a
            mov   (_StallDetector + 1),a
            C$MotionControl.c$657$3$7 ==.
        ; C:\SiLabs\Optec Programs\HSFW_HID_SDCC_2\MotionControl.c:657: LastPosMagState = PosSensor;
            mov   c,_P1_4
            mov   _OperateStepper_LastPosMagState_1_1,c
            ret
        00126$:
            C$MotionControl.c$661$2$8 ==.
        ; C:\SiLabs\Optec Programs\HSFW_HID_SDCC_2\MotionControl.c:661: if (PosSensor == ON)
            jb    _P1_4,00123$
            C$MotionControl.c$663$4$9 ==.
        ; C:\SiLabs\Optec Programs\HSFW_HID_SDCC_2\MotionControl.c:663: if (StallDetector > (MagnetSize + 20))
            mov   a,_MagnetSize
            mov   r2,a
            rlc   a
            subb  a,acc
            mov   r3,a
            mov   a,#0x14
            add   a,r2
            mov   r2,a
            clr   a
            addc  a,r3
            mov   r3,a
            clr   c
            mov   a,r2
            subb  a,_StallDetector
            mov   a,r3
            subb  a,(_StallDetector + 1)
            jnc   00130$
            C$MotionControl.c$665$5$10 ==.
        ; C:\SiLabs\Optec Programs\HSFW_HID_SDCC_2\MotionControl.c:665: HandleStallEvent();
            ljmp  _HandleStallEvent
        00123$:
            C$MotionControl.c$668$2$8 ==.
        ; C:\SiLabs\Optec Programs\HSFW_HID_SDCC_2\MotionControl.c:668: else if (PosSensor == OFF)
            jnb   _P1_4,00130$
            C$MotionControl.c$670$4$11 ==.
        ; C:\SiLabs\Optec Programs\HSFW_HID_SDCC_2\MotionControl.c:670: if (StallDetector > (GapSize + 20))
            mov   a,#0x14
            add   a,_GapSize
            mov   r2,a
            clr   a
            addc  a,(_GapSize + 1)
            mov   r3,a
            clr   c
            mov   a,r2
            subb  a,_StallDetector
            mov   a,r3
            subb  a,(_StallDetector + 1)
            jnc   00130$
            C$MotionControl.c$672$5$12 ==.
        ; C:\SiLabs\Optec Programs\HSFW_HID_SDCC_2\MotionControl.c:672: HandleStallEvent();
            C$MotionControl.c$678$2$1 ==.
            XG$OperateStepper$0$0 ==.
            ljmp  _HandleStallEvent
        00130$:
            ret

    It looks to me like the compiler is NOT optimizing out this second if statement, from the looks of the asm, but if that is the case why does the IDE not allow me to set a breakpoint there? Maybe it's just a dumb IDE!
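
    A guess at what is happening, based on the listing: both HandleStallEvent() calls are reached via ljmp _HandleStallEvent (a tail jump) instead of an lcall followed by ret, so the call on line 672 shares its address with the function-exit bookkeeping and the IDE may fail to correlate the line. A workaround sketch, assuming that is the cause: give the call a successor statement so it can no longer be converted into a jump.

        if (StallDetector > (GapSize + 20))
        {
            HandleStallEvent();
            StallDetector = 0;  /* any reachable statement after the call keeps it
                                   an lcall, giving the breakpoint its own address */
        }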

    Read the article

  • Improving the speed of php

    - by cast01
    I'm currently working on a website in PHP, and I'm wondering what the best practices/methods are to reduce the time requests take. I've built the site in a modular way, so a page consists of a number of modules, and each of these needs to request information. For example, I have a cart module that (if a cart is set) will fetch the cart with the id stored in a session variable from the database and return its contents. I have another module that lists categories, and this needs to fetch the categories from the database. My system is built with models, and each model might also make a request; for example, a category model will make a request to get the products in that category.
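
    One common pattern, sketched in PHP (the class and column names are illustrative): memoize per-request lookups so that several modules needing the same data, such as the category list, trigger only one query between them.

        <?php
        // Sketch: the first caller pays for the query; later modules reuse the result.
        class CategoryRepository
        {
            private $pdo;
            private $cache = null;

            public function __construct(PDO $pdo)
            {
                $this->pdo = $pdo;
            }

            public function all()
            {
                if ($this->cache === null) {
                    $stmt = $this->pdo->query('SELECT id, name FROM categories');
                    $this->cache = $stmt->fetchAll(PDO::FETCH_ASSOC);
                }
                return $this->cache;
            }
        }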

    Read the article

  • Is there a way to optimize this mysql query...?

    - by SpikETidE
    Hi everyone. Say I've got these two tables:

        Table 1 : Hotels

        hotel_id    hotel_name
        1           abc
        2           xyz
        3           efg

        Table 2 : Payments

        payment_id  payment_date  hotel_id  total_amt  comission
        p1          23-03-2010    1         100        10
        p2          23-03-2010    2         50         5
        p3          23-03-2010    2         200        25
        p4          23-03-2010    1         40         2

    Now, I need to get the following details from the two tables:

        1. Given a particular date (say, 23-03-2010), the sum of the total_amt for each hotel for which a payment has been made on that date.
        2. All the rows that have the date 23-03-2010, ordered according to the hotel name.

    A sample output is as follows:

        +------------+------------+------------+---------------+
        | hotel_name | date       | total_amt  | commission    |
        +------------+------------+------------+---------------+
        | * abc      | 23-03-2010 | 140        | 12            |
        +------------+------------+------------+---------------+
        |+-----------+------------+------------+--------------+|
        || paymt_id  | date       | total_amt  | commission   ||
        |+-----------+------------+------------+--------------+|
        || p1        | 23-03-2010 | 100        | 10           ||
        |+-----------+------------+------------+--------------+|
        || p4        | 23-03-2010 | 40         | 2            ||
        |+-----------+------------+------------+--------------+|
        +------------+------------+------------+---------------+
        | * xyz      | 23-03-2010 | 250        | 30            |
        +------------+------------+------------+---------------+
        |+-----------+------------+------------+--------------+|
        || paymt_id  | date       | total_amt  | commission   ||
        |+-----------+------------+------------+--------------+|
        || p2        | 23-03-2010 | 50         | 5            ||
        |+-----------+------------+------------+--------------+|
        || p3        | 23-03-2010 | 200        | 25           ||
        |+-----------+------------+------------+--------------+|
        +------------------------------------------------------+

    Above is a sample of the table that has to be printed. The idea is to first show the consolidated details of each hotel; when the '*' next to the hotel name is clicked, the breakdown of the payment details becomes visible. But that can be done by some jQuery, and the table itself can be generated with PHP. Right now I am using two separate queries: one to get the sum of the amount and commission grouped by the hotel name, and the next to get the individual row for each entry having that date in the table. This is, of course, because grouping the records for calculating sum() returns only one row for each hotel with the sum of the amounts. Is there a way to combine these two queries into a single one and do the operation in a more optimized way? Hope I am being clear. Thanks for your time and replies!
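
    One way to fold the two queries into a single pass, sketched with MySQL's WITH ROLLUP (table and column names taken from the question; the date literal assumes a DATE column):

        SELECT h.hotel_name,
               p.payment_id,
               SUM(p.total_amt) AS total_amt,
               SUM(p.comission) AS commission
        FROM   payments p
        JOIN   hotels h ON h.hotel_id = p.hotel_id
        WHERE  p.payment_date = '2010-03-23'
        GROUP  BY h.hotel_name, p.payment_id WITH ROLLUP;

    Rows where payment_id IS NULL are the per-hotel subtotals; the remaining rows are the individual payments, already grouped under their hotel, so the PHP only has to render them in order.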

    Read the article

  • How's my pygame code?

    - by Isaiah
    I'm still getting the hang of lots of things and thought I should post some code I made with pygame and get some feedback^^. I posted the code here: http://urlvars.com/code/snippet/39272/my-bouncing-program http://urlvars.com/code/snippet/39273/my-bouncing-program-classes There's some things I implemented that I'm not using yet, I just realized, like a timer at the bottom of the main while loop. If my code isn't readable, I'm sorry; I'm self-taught and this is the first code I've ever posted anywhere. By the way, I made some variables that take the screen size and halve it to find a point to spit out the squares, but when I try to use it, it makes a weird effect :/ Try switching the list I have in the newbyte() function with the halfScreen variable and see it freak out o.O Thank you

    Read the article

  • Position of least significant bit that is set

    - by peterchen
    I am looking for an efficient way to determine the position of the least significant bit that is set in an integer, e.g. for 0x0FF0 it would be 4. A trivial implementation is this:

        unsigned GetLowestBitPos(unsigned value)
        {
            assert(value != 0); // handled separately
            unsigned pos = 0;
            while (!(value & 1))
            {
                value >>= 1;
                ++pos;
            }
            return pos;
        }

    Any ideas how to squeeze some cycles out of it? (Note: this question is for people that enjoy such things, not for people to tell me xyzoptimization is evil.) [edit] Thanks everyone for the ideas! I've learnt a few other things, too. Cool!
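
    Two branch-free approaches for comparison, both assuming a 32-bit unsigned: GCC/Clang's builtin, and the portable De Bruijn multiply from the well-known bit-twiddling collection.

        #include <assert.h>

        /* GCC/Clang only: count trailing zeros directly. */
        unsigned GetLowestBitPosCtz(unsigned value)
        {
            assert(value != 0);
            return (unsigned)__builtin_ctz(value);
        }

        /* Portable: isolate the lowest set bit with value & -value, then
           look up its position via a De Bruijn multiply. */
        unsigned GetLowestBitPosDeBruijn(unsigned value)
        {
            static const unsigned table[32] =
            {
                0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
                31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9
            };
            assert(value != 0);
            return table[((value & -value) * 0x077CB531u) >> 27];
        }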

    Read the article

  • Strange: planner picks the plan with the lower cost, but (very) long query runtime

    - by S38
    Facts:

        - PGSQL 8.4.2, Linux
        - I make use of table inheritance
        - Each table contains 3 million rows
        - Indexes on joining columns are set
        - Table statistics (analyze, vacuum analyze) are up-to-date
        - The only table used is "node", with various partitioned sub-tables
        - Recursive query (pg >= 8.4)

    Now here is the explained query:

        WITH RECURSIVE rows AS
        (
            SELECT *
            FROM (
                SELECT r.id, r.set, r.parent, r.masterid
                FROM d_storage.node_dataset r
                WHERE masterid = 3533933
            ) q
            UNION ALL
            SELECT *
            FROM (
                SELECT c.id, c.set, c.parent, r.masterid
                FROM rows r
                JOIN a_storage.node c ON c.parent = r.id
            ) q
        )
        SELECT r.masterid, r.id AS nodeid
        FROM rows r

        QUERY PLAN
        ---------------------------------------------------------------------------------------------------------------------------------------------
        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms
        (16 rows)

    So far so bad: the planner decides to use hash joins (good) but no indexes (bad). Now after doing the following:

        SET enable_hashjoin TO false;

    the explained query looks like this:

        QUERY PLAN
        ---------------------------------------------------------------------------------------------------------------------------------------------
        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms
        (21 rows)

    - incredibly faster, because indexes are used. Note that the cost of the second query is somewhat higher than that of the first. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.

    Read the article

  • MySQL efficiency as it relates to the database/table size

    - by mlissner
    I'm building a system using Django, Sphinx and MySQL that's very quickly becoming quite large. The database currently has about 2000 rows, and I've written a program that's going to populate it with another 40,000 rows in a couple of days. Since the database is live right now, and since I've never had a database with this much information in it, I'm worried about some things:

        1. Is adding all these rows going to seriously degrade the efficiency of my Django app? Will I need to go back through it and optimize all my database calls so they're doing things more cleverly? Or will this make the database slow all around, to the extent that I can't do anything about it at all?
        2. If you scoff at my 40k rows, then my next question is: at what point SHOULD I be concerned? I will likely be adding another couple hundred thousand soon, so I worry, and I fret.
        3. How is Sphinx going to feel about all this? Is it going to freak out when it realizes it has to index all this data? Or will it be fine? Is this normal for it? If it is, at what point should I be concerned that it's too much data for Sphinx?

    Thanks for any thoughts.

    Read the article

  • FxCop giving a warning on private constructor CA1823 and CA1053

    - by Luis Sánchez
    I have a class that looks like the following:

        Public Class Utilities
            Public Shared Function blah(userCode As String) As String
                'doing some stuff
            End Function
        End Class

    I'm running FxCop 10 on it and it says: "Because type 'Utilities' contains only 'static' ('Shared' in Visual Basic) members, add a default private constructor to prevent the compiler from adding a default public constructor." OK, you're right, Mr. FxCop, I'll add a private constructor:

        Private Utilities()

    Now I'm getting: "It appears that field 'Utilities.Utilities' is never used or is only ever assigned to. Use this field or remove it." Any ideas of what I should do to get rid of both warnings?
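
    For what it is worth, Private Utilities() declares a field named Utilities in VB.NET, which is exactly what the second warning is complaining about. A constructor in VB is spelled Sub New, so a sketch that should satisfy both rules:

        Public Class Utilities
            ' A private parameterless constructor: nothing can instantiate the
            ' class, and the compiler no longer adds a public default constructor.
            Private Sub New()
            End Sub

            Public Shared Function blah(userCode As String) As String
                'doing some stuff
            End Function
        End Class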

    Read the article

  • cheapest way to draw a fullscreen quad

    - by Soubok
    I'm wondering if there is a faster way to draw a full-screen quad in OpenGL:

        NewList();
        PushMatrix();
        LoadIdentity();
        MatrixMode(PROJECTION);
        PushMatrix();
        LoadIdentity();
        Begin(QUADS);
        Vertex(-1,-1,0);
        Vertex(1,-1,0);
        Vertex(1,1,0);
        Vertex(-1,1,0);
        End();
        PopMatrix();
        MatrixMode(MODELVIEW);
        PopMatrix();
        EndList();
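
    For comparison, a commonly cited alternative in the same immediate-mode style: a single oversized triangle also covers the whole screen (clipping discards the excess) and saves a vertex; and if both matrix stacks are already identity when this runs, the push/load/pop calls can be dropped entirely.

        Begin(TRIANGLES);
        Vertex(-1,-1,0);
        Vertex(3,-1,0);
        Vertex(-1,3,0);
        End();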

    Read the article

  • What's faster in JavaScript: a bunch of small setInterval loops, or one big one?

    - by RobertWHurst
    Just wondering if it's worth it to make a monolithic loop function or just add loops where they're needed. The big loop option would just be a loop of callbacks that are added dynamically with an add function. Adding a function would look like this:

        setLoop(function(){
            alert('hahaha! I\'m a really annoying loop that bugs you every tenth of a second');
        });

    setLoop would add the function to the monolithic loop. So, is this worth anything in performance, or should I just stick to lots of little loops using setInterval?
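
    A minimal sketch of the single-loop version (the 100 ms tick matches the "every tenth of a second" in the example):

        var callbacks = [];

        function setLoop(fn) {
            callbacks.push(fn);   // register with the shared loop
        }

        // One timer drives every registered callback.
        setInterval(function () {
            for (var i = 0; i < callbacks.length; i++) {
                callbacks[i]();
            }
        }, 100);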

    Read the article

  • How to optimize neural network by using genetic algorithm?

    - by Billy Coen
    I'm quite new to this topic, so any help would be great. What I need is to optimize a neural network in MATLAB by using a GA. My network has a [2x98] input and a [1x98] target; I've tried consulting the MATLAB help, but I'm still kind of clueless about what to do :( So, any help would be appreciated. Thanks in advance. Edit: I guess I didn't say what there is to be optimized, as Dan said in the first answer. I guess the most important thing is the number of hidden neurons, and maybe the number of hidden layers and training parameters, like the number of epochs. Sorry for not providing enough info; I'm still learning about this.
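
    A rough sketch of one way to wire this up, assuming the Neural Network and Global Optimization toolboxes are available; the single hidden layer, MSE fitness, and 1-30 neuron bounds are all assumptions to adapt:

        % netFitness.m - train a net with round(n) hidden neurons, return its MSE.
        % inputs is the 2x98 matrix, targets the 1x98 vector from the question.
        function err = netFitness(n, inputs, targets)
            net = feedforwardnet(round(n));            % hidden-layer size under test
            net.trainParam.showWindow = false;         % keep GA iterations quiet
            net = train(net, inputs, targets);
            err = perform(net, targets, net(inputs));  % mean squared error
        end

        % GA then searches over that one variable, bounded to 1..30 neurons:
        best = ga(@(n) netFitness(n, inputs, targets), 1, [], [], [], [], 1, 30);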

    Read the article

  • How to profile my code?

    - by kaki
    I want to know how to profile my code. I have gone through the docs, but as there were no example codes given, I could not get anything from them. I have a large code base and it is taking so much time, hence I want to profile it and increase its speed. I haven't written my code in methods; there are a few in between, but not throughout. I don't have any main in my code. I want to know how to use profiling, and am looking for some example or sample code of how to profile. I tried psyco, i.e. just added two lines at the top of my code:

        import psyco
        psyco.full()

    Is this right? It did not show any improvement. Please suggest other ways of speeding up. Thanks in advance.
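
    For the profiling itself, a minimal cProfile sketch (slow_work is a stand-in for your own entry call; a whole script can also be profiled unchanged with python -m cProfile myscript.py):

        import cProfile
        import pstats

        def slow_work():
            # stand-in for the code being timed
            total = 0
            for i in range(10 ** 6):
                total += i * i
            return total

        # Record a profile, then print the ten most expensive calls.
        cProfile.run('slow_work()', 'profile.out')
        pstats.Stats('profile.out').sort_stats('cumulative').print_stats(10)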

    Read the article

  • explicit copy constructor or implicit parameter by value

    - by R Samuel Klatchko
    I recently read (and unfortunately forgot where) that the best way to write operator= is like this:

        foo &operator=(foo other)
        {
            swap(*this, other);
            return *this;
        }

    instead of this:

        foo &operator=(const foo &other)
        {
            foo copy(other);
            swap(*this, copy);
            return *this;
        }

    The idea is that if operator= is called with an rvalue, the first version can optimize away the construction of a copy. So when called with an rvalue the first version is faster, and when called with an lvalue the two are equivalent. I'm curious as to what other people think about this? Would people avoid the first version because of lack of explicitness? Am I correct that the first version can be better and can never be worse?
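
    For context, a fuller sketch of the pass-by-value form with the customary friend swap (the member is illustrative):

        #include <algorithm>

        class foo
        {
            int value;
        public:
            explicit foo(int v = 0) : value(v) {}
            friend void swap(foo& a, foo& b)
            {
                std::swap(a.value, b.value);   // member-wise swap
            }
            foo& operator=(foo other)          // 'other' is already the copy;
            {
                swap(*this, other);            // an rvalue argument can elide it
                return *this;
            }
        };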

    Read the article

  • mysql subselect alternative

    - by Arnold
    Hi, let's say I am analyzing how high school sports records affect school attendance. So I have a table in which each row corresponds to a high school basketball game. Each game has an away team id and a home team id (FKs to another "team" table), a home score, an away score, and a date. I am writing a query that matches attendance with this season's basketball games. My sample output will be:

        (#_students_missed_class, day_of_game, home_team, away_team,
         home_team_wins_this_season, away_team_wins_this_season)

    I now want to add how each team did the previous season to my analysis. Well, I have their previous season stored in the game table, so I should be able to accomplish that with a subselect. So in my main select statement I add the subselect:

        SELECT COUNT(*)
        FROM game_table
        WHERE game_table.date BETWEEN 'start of previous season'
                                  AND 'end of previous season'
          AND (   (game_table.home_team = team_table.id
                   AND game_table.home_score > game_table.away_score)
               OR (game_table.away_team = team_table.id
                   AND game_table.away_score > game_table.home_score))

    In this case team_table.id refers to the id of the home team, so I now have all their wins calculated from the previous year. This method of calculation is neither time- nor resource-efficient: the EXPLAIN shows ALL in the type field, I am not using a key, and the query times out. I'm not sure how I can accomplish a more efficient query with a subselect. It seems preposterously inefficient to have to write four of these queries (for home wins, home losses, away wins, away losses). I am sure this could be more lucid; I'll absolutely add color tomorrow if anyone has questions.
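
    One join-based sketch that computes all four records per team in a single pass (names taken from the question; the season bounds stay as placeholders, and MySQL's SUM over a boolean counts the matching rows):

        SELECT t.id,
               SUM(g.home_team = t.id AND g.home_score > g.away_score) AS home_wins,
               SUM(g.home_team = t.id AND g.home_score < g.away_score) AS home_losses,
               SUM(g.away_team = t.id AND g.away_score > g.home_score) AS away_wins,
               SUM(g.away_team = t.id AND g.away_score < g.home_score) AS away_losses
        FROM   team_table t
        JOIN   game_table g ON t.id IN (g.home_team, g.away_team)
        WHERE  g.date BETWEEN 'start of previous season' AND 'end of previous season'
        GROUP  BY t.id;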

    Read the article

  • ASP.NET 4.5 Bundling in Debug Mode - Stale Resources

    - by RPM1984
    Is there any way we can make the ASP.NET 4.5 bundling functionality generate GUIDs as part of the querystring when running in debug mode (e.g. bundling turned OFF)? The problem is that when developing locally, the script/CSS tags are generated like this:

        <script type="text/javascript" src="/Content/Scripts/myscript.js" />

    So if I change that file, I need to do a hard refresh (sometimes a few times) to get the file picked up by the browser - annoying. Is there any way we can make it render out like this:

        <script type="text/javascript" src="/Content/Scripts/myscript.js?v=x" />

    where x is a GUID (e.g. always unique)? Ideas? I'm on ASP.NET MVC 4.
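
    One workaround sketch while bundling lacks this in debug mode (the helper and its usage are illustrative, not a built-in API):

        using System;

        // Appends a unique token per render so the browser never reuses a stale copy.
        public static class DebugScripts
        {
            public static string WithBuster(string virtualPath)
            {
                return virtualPath + "?v=" + Guid.NewGuid().ToString("N");
            }
        }

        // In a Razor view, guarded so release builds keep normal bundling:
        //   @if (HttpContext.Current.IsDebuggingEnabled) {
        //       <script src="@DebugScripts.WithBuster("/Content/Scripts/myscript.js")"></script>
        //   }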

    Read the article

  • set difference in SQL query

    - by TheObserver
    I'm trying to select records with the statement:

        SELECT *
        FROM A
        WHERE LEFT(B, 5) IN
              (SELECT *
               FROM (SELECT LEFT(A.B, 5), COUNT(DISTINCT A.C) c_count
                     FROM A
                     GROUP BY LEFT(B, 5)) p1
               WHERE p1.c_count = 1)
          AND C IN
              (SELECT *
               FROM (SELECT A.C, COUNT(DISTINCT LEFT(A.B, 5)) b_count
                     FROM A
                     GROUP BY C) p2
               WHERE p2.b_count = 1)

    which takes a long time to run (~15 sec). Is there a better way of writing this SQL?
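
    A sketch of the same filter expressed as joins against the derived tables, which often lets MySQL materialise each aggregate once instead of re-evaluating IN subqueries (assumes the LEFT()/COUNT logic above is what is wanted):

        SELECT a.*
        FROM A a
        JOIN (SELECT LEFT(B, 5) AS b5
              FROM A
              GROUP BY LEFT(B, 5)
              HAVING COUNT(DISTINCT C) = 1) p1 ON LEFT(a.B, 5) = p1.b5
        JOIN (SELECT C
              FROM A
              GROUP BY C
              HAVING COUNT(DISTINCT LEFT(B, 5)) = 1) p2 ON a.C = p2.C;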

    Read the article

  • Why better isolation level means better performance in SQL Server

    - by Oleg Zhylin
    When measuring performance of my query I came upon a dependency between isolation level and elapsed time that was surprising to me:

        READUNCOMMITTED - 409024
        READCOMMITTED   - 368021
        REPEATABLEREAD  - 358019
        SERIALIZABLE    - 348019

    The left column is the table hint, and the right column is the elapsed time in microseconds (sys.dm_exec_query_stats.total_elapsed_time). Why does a stricter isolation level give better performance? This is a development machine and no concurrency whatsoever happens. I would expect READUNCOMMITTED to be the fastest due to less locking overhead. Update: I did measure this with

        DBCC DROPCLEANBUFFERS
        DBCC FREEPROCCACHE

    issued, and Profiler confirms there are no cache hits happening. Update 2: The query in question is an OLAP one and we need to run it as fast as possible. Closing the production server off from the outside world to get the computation done is not out of the question if this gives performance benefits.

    Read the article

  • Optimize date query for large child tables: GiST or GIN?

    - by Dave Jarvis
    Problem: 72 child tables, each having a year index and a station index, are defined as follows:

        CREATE TABLE climate.measurement_12_013
        (
          -- Inherited from table climate.measurement_12_013: id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass),
          -- Inherited from table climate.measurement_12_013: station_id integer NOT NULL,
          -- Inherited from table climate.measurement_12_013: taken date NOT NULL,
          -- Inherited from table climate.measurement_12_013: amount numeric(8,2) NOT NULL,
          -- Inherited from table climate.measurement_12_013: category_id smallint NOT NULL,
          -- Inherited from table climate.measurement_12_013: flag character varying(1) NOT NULL DEFAULT ' '::character varying,
          CONSTRAINT measurement_12_013_category_id_check CHECK (category_id = 7),
          CONSTRAINT measurement_12_013_taken_check CHECK (date_part('month'::text, taken)::integer = 12)
        )
        INHERITS (climate.measurement);

        CREATE INDEX measurement_12_013_s_idx
          ON climate.measurement_12_013
          USING btree (station_id);

        CREATE INDEX measurement_12_013_y_idx
          ON climate.measurement_12_013
          USING btree (date_part('year'::text, taken));

    (Foreign key constraints to be added later.) The following query runs abysmally slowly due to a full table scan:

        SELECT
          count(1) AS measurements,
          avg(m.amount) AS amount
        FROM
          climate.measurement m
        WHERE
          m.station_id IN (
            SELECT
              s.id
            FROM
              climate.station s,
              climate.city c
            WHERE
              -- For one city ...
              c.id = 5182 AND
              -- Where stations are within an elevation range ...
              s.elevation BETWEEN 0 AND 3000 AND
              6371.009 * SQRT(
                POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +
                (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *
                 POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))
              ) <= 50
          ) AND
          -- Begin extracting the data from the database.
          -- The data before 1900 is shaky; insufficient after 2009.
          extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND
          -- Whittled down by category ...
          m.category_id = 1 AND
          m.taken BETWEEN
            -- Start date.
            (extract( YEAR FROM m.taken )||'-01-01')::date AND
            -- End date. Calculated by checking to see if the end date wraps
            -- into the next year. If it does, then add 1 to the current year.
            (cast(extract( YEAR FROM m.taken ) + greatest( -1 *
              sign(
                (extract( YEAR FROM m.taken )||'-12-31')::date -
                (extract( YEAR FROM m.taken )||'-01-01')::date ), 0
            ) AS text)||'-12-31')::date
        GROUP BY
          extract( YEAR FROM m.taken )

    The sluggishness comes from this part of the query:

        m.taken BETWEEN
          /* Start date. */
          (extract( YEAR FROM m.taken )||'-01-01')::date AND
          /* End date. Calculated by checking to see if the end date wraps
             into the next year. If it does, then add 1 to the current year. */
          (cast(extract( YEAR FROM m.taken ) + greatest( -1 *
            sign(
              (extract( YEAR FROM m.taken )||'-12-31')::date -
              (extract( YEAR FROM m.taken )||'-01-01')::date ), 0
          ) AS text)||'-12-31')::date

    The HashAggregate from the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. There is a full table scan on the measurement table (itself having neither data nor indexes) being performed. The table aggregates 237 million rows from its child tables.

    Question: What is the proper way to index the dates to avoid full table scans?

    Options I have considered:

        1. GIN
        2. GiST
        3. Rewrite the WHERE clause
        4. Separate year_taken, month_taken, and day_taken columns in the tables

    What are your thoughts? Thank you!
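
    One observation worth testing, offered purely as a sketch: the start and end bounds above both appear to reduce to January 1 and December 31 of taken's own year (the sign()/greatest() term evaluates to zero for every row), so the whole BETWEEN clause may be a no-op; replacing the extract() year filter with a plain date range keeps the condition sargable for a btree index on taken:

        SELECT
          extract(YEAR FROM m.taken) AS year_taken,
          count(1) AS measurements,
          avg(m.amount) AS amount
        FROM
          climate.measurement m
        WHERE
          m.category_id = 1 AND
          m.taken BETWEEN date '1900-01-01' AND date '2009-12-31' AND
          m.station_id IN ( /* station subquery exactly as above */ )
        GROUP BY
          extract(YEAR FROM m.taken);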

    Read the article

  • Code completion in NetBeans' python plugin does not work properly

    - by T.K.
    I am asking on StackOverflow because surely I am doing something completely silly, and I hope S.O. might provide me with a quick answer. I've installed the latest stable Python plugin for NetBeans. It works great, and I tested code completion with various packages such as sys, os and so on. It works beautifully. However, it does not seem to pick up code completion for the code in my own project. I created a package called mypackage (it has __init__.py as well), and in it I have a module called mymodule.py. Inside mymodule I've put a class called MyClass, complete with doc-strings and all. Please refer to this screenshot to see what happens in code completion: as you see, it's suggesting irrelevant things, as opposed to just MyClass. (Note that if I execute mymodule.MyClass() it works 100%; it's just that I would really like code completion on my own code.) Hope I'm just doing something silly here... Any ideas?

    Read the article

  • Access cost of dynamically created objects with dynamically allocated members

    - by user343547
    I'm building an application which will have dynamically allocated objects of type A, each with a dynamically allocated member (v), similar to the class below:

        class A
        {
            int a;
            int b;
            int* v;
        };

    where:

        - The memory for v will be allocated in the constructor.
        - v will be allocated once when an object of type A is created and will never need to be resized.
        - The size of v will vary across all instances of A.

    The application will potentially have a huge number of such objects and mostly needs to stream a large number of these objects through the CPU, but only needs to perform very simple computations on the member variables.

        1. Could having v dynamically allocated mean that an instance of A and its member v are not located together in memory?
        2. What tools and techniques can be used to test whether this fragmentation is a performance bottleneck?
        3. If such fragmentation is a performance issue, are there any techniques that could allow A and v to be allocated in a contiguous region of memory? Or are there any techniques to aid memory access, such as a pre-fetching scheme (for example, get an object of type A and operate on the other member variables while pre-fetching v)?
        4. If the size of v, or an acceptable maximum size, could be known at compile time, would replacing v with a fixed-size array like int v[max_length] lead to better performance?

    The target platforms are standard desktop machines with x86/AMD64 processors, Windows or Linux OSes, and either GCC or MSVC compilers.
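
    One standard remedy, sketched under the question's constraints (size fixed at construction, never resized): co-allocate the object and its array in a single block so v's data always sits immediately after the header.

        #include <cstdlib>
        #include <new>

        // Sketch: one allocation serves the object and its trailing array,
        // guaranteeing the two are contiguous in memory.
        class A
        {
            int a;
            int b;
            int len;
            explicit A(int n) : a(0), b(0), len(n) {}
        public:
            static A* create(int n)
            {
                void* mem = std::malloc(sizeof(A) + n * sizeof(int));
                if(!mem) throw std::bad_alloc();
                return new (mem) A(n);          // placement-new into the block
            }
            static void destroy(A* p) { p->~A(); std::free(p); }
            int* v() { return reinterpret_cast<int*>(this + 1); } // data follows header
            int size() const { return len; }
        };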

    Read the article

  • Complicated idea - how to create car racing for my RPG game's players

    - by Donator
    So, I want to create car racing for my RPG game's players. A player can create a race and choose how many participants can take part in it. After the race is created, other people can join it. When the maximum number of participants has joined, the race begins. My idea: when the last participant joins, instantly choose the winner (whoever's car is the best wins), but how can I do it? If I pick the winner right when the last participant joins, then I have to put many queries in one page (select data from the table, then delete the race, then select the players' cars' statistics and pick the winner, and then, using MySQL, send a message to everyone). But this idea is really not optimal and it will lag cruelly for that last person. Maybe you have ideas on how I can avoid the lag and make it more optimal. Thank you very much.

    Read the article
