Search Results

Search found 5642 results on 226 pages for 'coding efficiency'.

  • objective c coding guidelines

    - by Chandan Shetty SP
    Is there a PDF that covers coding guidelines for Objective-C? For example... 1. Breaking up function names - checkIfHitTheTrack. 2. Member variables should be named like - mVariableName. 3. Giving better names to subclasses - ? Please share any related links...
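
    Apple's "Coding Guidelines for Cocoa" is the usual reference here; as a quick illustration of the three points in the question (a sketch with invented names, not an official example):

```objc
#import <Foundation/Foundation.h>

// Sketch of the conventions the question lists; all names are invented.
@interface RaceTrack : NSObject {
    NSString *mTrackName;       // 'm'-prefixed member variable, as the question suggests
}
- (BOOL)checkIfHitTheTrack;     // multiword method name broken up with camelCase
@end

// One common answer for point 3: a subclass keeps the superclass name and
// prepends the word that specializes it.
@interface OvalRaceTrack : RaceTrack
@end
```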

    Read the article

  • the coding problem on the server

    - by zahir hussain
    $fp = fopen("http://feeds.reuters.com/Reuters/PoliticsNews?format=xml","r") or die("Error reading RSS data."); The above code works correctly on localhost, but on the server it displays "Error reading RSS data." I don't know why. Can anybody please explain? Thanks
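
    A common cause of exactly this symptom (an assumption here, since the server configuration isn't shown) is that the host disables the allow_url_fopen setting, so fopen() on a URL fails. A minimal sketch that fetches the feed with cURL instead, which does not depend on that setting:

```php
<?php
// Fetch the feed via cURL instead of fopen(); a sketch with minimal error handling.
$ch = curl_init("http://feeds.reuters.com/Reuters/PoliticsNews?format=xml");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body as a string
curl_setopt($ch, CURLOPT_TIMEOUT, 10);          // fail fast on network trouble
$xml = curl_exec($ch);
if ($xml === false) {
    die("Error reading RSS data: " . curl_error($ch));
}
curl_close($ch);
// $xml now holds the raw feed text, ready for an XML parser.
```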

    Read the article

  • What benefits does IOC provide over soft-coding?

    - by dotnetdev
    Take the following article for example: http://weblogs.asp.net/psteele/archive/2009/11/23/use-dependency-injection-to-simplify-application-settings.aspx?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+dotnetmvp+%28Patrick+Steele%27s+.NET+Blog%29 I don't see what benefit there is from the IOC approach as opposed to the traditional soft-coding approach. Can someone tell me what I am missing? Thanks
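
    For contrast, a minimal C# sketch of the two approaches the question names (class and key names are invented, not taken from the article): with soft-coding the class fetches its own setting; with injection the setting arrives from outside, so the class can be unit-tested without any configuration file.

```csharp
using System.Configuration;

// Soft-coded: the class reaches into configuration itself.
class SoftCodedMailer
{
    private readonly string smtpHost =
        ConfigurationManager.AppSettings["SmtpHost"];
}

// Injected: the dependency is supplied by the caller or an IoC container,
// so a test can simply pass "localhost" - no app.config involved.
class InjectedMailer
{
    private readonly string smtpHost;
    public InjectedMailer(string smtpHost) { this.smtpHost = smtpHost; }
}
```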

    Read the article

  • essential reading for php coding (including databases)?

    - by tombull89
    Hello. I'm more of an SU and SF user, but now I'm after some help from the SO community. I'm dabbling in a bit of PHP coding with databases and am getting a bit stuck with relationships and the like. Can anybody recommend some books, online or physical, that would be a good start for someone new(ish) to PHP and MySQL databases? Cheers!

    Read the article

  • pfc_Validation event coding example

    - by Brani
    Could you give me an example of how I should code the pfc_Validation event? This is an event that I have never used. For example, here is something I have coded in the ue_itemchanged event: if dwo.name = 'theme' then This.Setitem(row,"theme",wf_clean_up_text(data)) end if if dwo.name = 'Comments' then This.Setitem(row,"Comments",wf_clean_up_text(data)) end if What is the proper way of coding those validations in the pfc_Validation event, so that they are performed only at save time?

    Read the article

  • PHP coding question?

    - by tag
    Do the two snippets of PHP code below do the same thing, and if so, which one is better style? And is there a name for the variant where the curly brackets are omitted? The PHP code: <?php if (isset($_POST['email'])) { echo $_POST['email']; }?> <?php if (isset($_POST['email'])) echo $_POST['email'];?>
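
    For reference: yes, the two are equivalent for a single statement, and the second style is usually just called omitting the braces (a braceless or single-statement if). A short sketch of the classic pitfall that makes many style guides mandate the braces anyway:

```php
<?php
// Without braces, only the FIRST statement belongs to the if:
if (isset($_POST['email']))
    echo $_POST['email'];
    echo "saved!";   // runs unconditionally, despite the indentation
```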

    Read the article

  • coding on the back button of a navigation-based application?

    - by hemant
    In one view, during a function call, I am assigning 1 to a flag. When I navigate back to the previous view, I want the flag value to be retained there. How can I attach code to the back button that appears on the screen for navigating to the previous view, or is there a better solution?
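
    One conventional approach (a sketch, not the only solution): rather than intercepting the back button itself, keep the flag somewhere the previous view controller can reach - a property on it, or a shared model - and react in viewWillAppear:, which fires every time that view comes back on screen.

```objc
// In the PREVIOUS view controller; 'flag' is a hypothetical property that
// the pushed view sets to 1 during its function call.
- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    if (self.flag == 1) {
        // the value survived the navigation; update the UI here
    }
}
```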

    Read the article

  • DVD/CD burning .zip: is it more reliable, faster, longer lasting to burn a zip of files rather than the files as a folder?

    - by Rob
    Is it more reliable, faster, and longer lasting to burn to CD/DVD a zip (or a few large zips) of files rather than the files as a folder? I'm wondering whether thousands of small files would be recorded less efficiently than one or a few large zips. Also, even after the burning program verifies the disc, I use Beyond Compare to compare the files with those on the disc. They always binary-compare as identical, but I hear the drive stuttering, presumably as the head is shifted only slightly each time to seek the next file, which leads me to think that it's best to make one or more zips and copy those locally to compare. Or is it that burning individual files to the disc gives a less readable result, which causes the head to stutter? There aren't any problems; my disc burns are reliable. I'm just thinking more of efficiency and longevity; the discs burn and verify fast enough on my 18x DVD burner. I'm using ImgBurn mostly, and used Nero in the past. I burn whole discs closed and finalised. I'm not sure which write mode, but I would think Disc At Once from a temporary cached image made by the burning program would be the most reliable.

    Read the article

  • Improve XPath efficiency for repeated, parameterized queries

    - by Chris Allan
    Hi, I am repeatedly performing the following XPath query (though parameterized by 'keywordText') around 40,000 times: String query = SystemGlobal.YAHOO_KEYWORDSSUBNODE + "/" + SystemGlobal.YAHOO_KEYWORDNODE + "[" + SystemGlobal.YAHOO_ATTRKEYPHRASE + "='" + keywordText + "']"; CachedXPathAPI cachedXPathAPI = new CachedXPathAPI(); NodeIterator nl = cachedXPathAPI.selectNodeIterator(doc.getElementsByTagName(SystemGlobal.YAHOO_KEYWORDSROOT).item(0), query); Node n; if ((n = nl.nextNode()) != null) { keyword.setKeywordId(Long.parseLong(cachedXPathAPI.selectSingleNode(n, SystemGlobal.YAHOO_ATTRKEYID).getTextContent())); keyword.setKeyPhrase(cachedXPathAPI.selectSingleNode(n, SystemGlobal.YAHOO_ATTRKEYPHRASE).getTextContent()); keyword.setStatus(mapStatus(cachedXPathAPI.selectSingleNode(n, SystemGlobal.YAHOO_ATTRSTATUS).getTextContent())); keyword.setCampaignId(Long.parseLong(cachedXPathAPI.selectSingleNode(n, "../../" + SystemGlobal.YAHOO_ATTRCAMPAIGNID).getTextContent())); keyword.setAdGroupId(Long.parseLong(cachedXPathAPI.selectSingleNode(n, "../" + SystemGlobal.YAHOO_ATTRADGROUPID).getTextContent())); On the first run of the script, all 40,000 runs of this piece of code will have nl.nextNode() == null, and everything runs quite quickly. However, on the following runs, when nl.nextNode() != null, things slow down a lot - this takes around an additional 40 minutes to run (whereas the first run takes maybe 1 minute). Oh, and the doc is constructed like so: InputSource in = new InputSource(new FileInputStream(filename)); DocumentBuilderFactory dfactory = DocumentBuilderFactory.newInstance(); dfactory.setNamespaceAware(true); doc = dfactory.newDocumentBuilder().parse(in); I tried including the following lines: reportEvaluator = new XPathEvaluatorImpl(reportDoc); reportResolver = reportEvaluator.createNSResolver(reportDoc); and, rather than creating a NodeIterator, creating an XPathResult: XPathResult result = (XPathResult)reportEvaluator.evaluate(query, doc.getElementsByTagName(SystemGlobal.YAHOO_KEYWORDSROOT).item(0), reportResolver, XPathResult.UNORDERED_NODE_ITERATOR_TYPE, null); however, this ran even slower. Is there a way I can speed up this script? I have seen references to precompiled queries, though I haven't seen many actual details. Also, as seen in the code, I am using CachedXPathAPI, though the benefit in this case is not so great. Any help is much appreciated! Chris Allan
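
    On the 'precompiled queries' point: with the standard javax.xml.xpath API the expression can be compiled once and reused, with only an XPath variable changing per lookup. A sketch (element and attribute names here are illustrative, not the real SystemGlobal constants):

```java
import javax.xml.namespace.QName;
import javax.xml.xpath.*;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class KeywordLookup {
    private final XPathExpression expr;   // compiled once, reused 40,000 times
    private String currentKeyword;

    public KeywordLookup() throws XPathExpressionException {
        XPath xpath = XPathFactory.newInstance().newXPath();
        xpath.setXPathVariableResolver(new XPathVariableResolver() {
            public Object resolveVariable(QName name) {
                return currentKeyword;    // supplies the value of $kw
            }
        });
        expr = xpath.compile("//keywords/keyword[@keyPhrase = $kw]");
    }

    public Node find(Document doc, String keywordText) throws XPathExpressionException {
        currentKeyword = keywordText;     // only the variable changes per call
        return (Node) expr.evaluate(doc, XPathConstants.NODE);
    }
}
```

    If the document does not change between lookups, an even bigger win would be a single pass that builds a HashMap keyed by the key-phrase attribute, turning each of the 40,000 lookups into a hash probe instead of an XPath evaluation.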

    Read the article

  • T/SQL Efficiency and Order of Execution

    - by Kyle Rozendo
    Hi All, Regarding the order in which WHERE-clause conditions are evaluated in SQL, is there any difference between the following, performance-wise? SELECT * FROM Persons WHERE UserType = 'Manager' AND LastName IN ('Hansen','Pettersen') And: SELECT * FROM Persons WHERE LastName IN ('Hansen','Pettersen') AND UserType = 'Manager' If there is any difference, do you perhaps have a link where one can learn more about this? Thanks a ton, Kyle

    Read the article

  • F# currying efficiency?

    - by Eamon Nerbonne
    I have a function that looks as follows: let isInSet setElems normalize p = normalize p |> (Set.ofList setElems).Contains This function can be used to quickly check whether an element is semantically part of some set; for example, to check if a file path belongs to an html file: let getLowerExtension p = (Path.GetExtension p).ToLowerInvariant() let isHtmlPath = isInSet [".htm"; ".html"; ".xhtml"] getLowerExtension However, when I use a function such as the above, performance is poor since evaluation of the function body as written in "isInSet" seems to be delayed until all parameters are known - in particular, invariant bits such as (Set.ofList setElems).Contains are re-evaluated on each execution of isHtmlPath. How best can I maintain F#'s succinct, readable nature while still getting the more efficient behavior, in which the set construction is pre-evaluated? The above is just an example; I'm looking for a general pattern that avoids bogging me down in implementation details - where possible I'd like to avoid being distracted by details such as the implementation's execution order, since that's usually not important to me and kind of undermines a major selling point of functional programming.
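
    A minimal sketch of the usual fix: bind the set inside the function before the last argument is taken, so Set.ofList runs once at partial-application time and the returned closure reuses it.

```fsharp
open System.IO

let isInSet setElems normalize =
    let elemSet = Set.ofList setElems       // evaluated once, when partially applied
    fun p -> elemSet.Contains (normalize p) // the closure reuses elemSet

let getLowerExtension (p : string) =
    (Path.GetExtension p).ToLowerInvariant()

// The set behind isHtmlPath is now built exactly once, on this line:
let isHtmlPath = isInSet [ ".htm"; ".html"; ".xhtml" ] getLowerExtension
```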

    Read the article

  • Increase efficiency of a loop with jQuery

    - by Pez Cuckow
    I have a game coded in jQuery where bots are moved around the screen. The below code is a loop that runs every 20ms, currently if you have over 15 bots you start to notice the browser lagging (simply because of all the advanced collision detection going on). Is there any way to reduce the lag, can I make it any more efficient? P.s. sorrry for just posting a block of code, I can't see a way to make my point clear enough without! $.playground().registerCallback(function(){ //Movement Loop if(!pause) { for (var i in bots) { //bots - color, dir, x, y, z, spawned?, spawnerid, prevd var self = $('#b' + i); var current = bots[i]; if(bots[i][5]==1) { var xspeed = 0, yspeed = 0; if(current[1]==0) { yspeed = -D_SPEED; } else if(current[1]==1) { xspeed = D_SPEED; } else if(current[1]==2) { yspeed = D_SPEED; } else if(current[1]==3) { xspeed = -D_SPEED; } var x = current[2] + xspeed; var y = current[3] + yspeed; var z = current[3] + 120; if(current[2]>0&&x>PLAYGROUND_WIDTH||current[2]<0&&x<-GRID_SIZE|| current[3]>0&&y>PLAYGROUND_HEIGHT||current[3]<0&&y<-GRID_SIZE) { remove_bot(i, self); } else { if(current[7]!=current[1]) { self.setAnimation(colors[current[0]][current[1]]); bots[i][7] = current[1]; } if(self.css({"left": ""+(x)+"px", "top": ""+(y)+"px", "z-index": z})) { bots[i][2] = x; bots[i][3] = y; bots[i][4] = z; bots[i][8]++; } } } } $("#debug").html(dump(arrows)); $(".bot").each(function(){ var b_id = $(this).attr("id").substr(1); var collision = false; var c_bot = bots[b_id]; var b_x = c_bot[2]; var b_y = c_bot[3]; var b_d = c_bot[1]; $(this).collision(".arrow,#arrows").each(function(){ //Many thanks to Selim Arsever for this fix! var a_id = $(this).attr("id").substr(1); var piece = arrows[a_id]; var a_v = piece[0]; if(a_v==1) { var a_x = piece[2]; var a_y = piece[3]; var d_x = b_x-a_x; var d_y = b_y-a_y; if(d_x>=4&&d_x<=5&&d_y>=1&&d_y<=2) { //bots - color, dir, x, y, z, spawned?, spawnerid, prevd bots[b_id][7] = c_bot[1]; bots[b_id][1] = piece[1]; collision = true; } } }); if(!collision) { $(this).collision(".wall,#level").each(function(){ var w_id = $(this).attr("id").substr(1); var piece = pieces[w_id]; var w_x = piece[1]; var w_y = piece[2]; d_x = b_x-w_x; d_y = b_y-w_y; if(b_d==0&&d_x>=4&&d_x<=5&&d_y>=27&&d_y<=28) { kill_bot(b_id); collision = true; } //4 // 33 if(b_d==1&&d_x>=-12&&d_x<=-11&&d_y>=21&&d_y<=22) { kill_bot(b_id); collision = true; } //-14 // 21 if(b_d==2&&d_x>=4&&d_x<=5&&d_y>=-9&&d_y<=-8) { kill_bot(b_id); collision = true; } //4 // -9 if(b_d==3&&d_x>=22&&d_x<=23&&d_y>=20&&d_y<=21) { kill_bot(b_id); collision = true; } //22 // 21 }); } if(!collision&&c_bot[8]>GRID_MOVE) { $(this).collision(".spawn,#level").each(function(){ var s_id = $(this).attr("id").substr(1); var piece = pieces[s_id]; var s_x = piece[1]; var s_y = piece[2]; d_x = b_x-s_x; d_y = b_y-s_y; if(b_d==0&&d_x>=4&&d_x<=5&&d_y>=19&&d_y<=20) { kill_bot(b_id); collision = true; } //4 // 33 if(b_d==1&&d_x>=-14&&d_x<=-13&&d_y>=11&&d_y<=12) { kill_bot(b_id); collision = true; } //-14 // 21 if(b_d==2&&d_x>=4&&d_x<=5&&d_y>=-11&&d_y<=-10) { kill_bot(b_id); collision = true; } //4 // -9 if(b_d==3&&d_x>=22&&d_x<=23&&d_y>=11&&d_y<=12) { kill_bot(b_id); collision = true; } //22 // 21*/ }); } if(!collision) { $(this).collision(".exit,#level").each(function(){ var e_id = $(this).attr("id").substr(1); var piece = pieces[e_id]; var e_x = piece[1]; var e_y = piece[2]; d_x = b_x-e_x; d_y = b_y-e_y; if(d_x>=4&&d_x<=5&&d_y>=1&&d_y<=2) { current_bots++; bots[b_id] = false; $("#current_bots").html(current_bots); $("#b" + 
b_id).setAnimation(exit[2], function(node){$(node).fadeOut(200)}); } }); } if(!collision) { $(this).collision(".bot,#level").each(function(){ var bd_id = $(this).attr("id").substr(1); if(bd_id!=b_id) { var piece = bots[bd_id]; var bd_x = piece[2]; var bd_y = piece[3]; d_x = b_x-bd_x; d_y = b_y-bd_y; if(d_x>=0&&d_x<=2&&d_y>=0&&d_y<=2) { kill_bot(b_id); kill_bot(bd_id); collision = true; } } }); } }); } }, REFRESH_RATE); Many thanks,
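
    Two cheap wins worth trying on the loop above (a sketch of the idea, not the author's code): create each jQuery wrapper once per bot instead of re-wrapping inside every collision callback, and read plain DOM properties where jQuery adds no value.

```javascript
$(".bot").each(function () {
    var $bot = $(this);              // wrap once, reuse for every .collision() call
    var b_id = this.id.substr(1);    // plain DOM property beats $(this).attr("id")
    var c_bot = bots[b_id];

    $bot.collision(".arrow,#arrows").each(function () {
        var piece = arrows[this.id.substr(1)];  // again, no jQuery round-trip
        // ... same collision math as in the original ...
    });
});
```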

    Read the article

  • NSXMLParser Memory Allocation Efficiency for the iPhone

    - by Staros
    Hello, I've recently been playing with code for an iPhone app to parse XML. Sticking to Cocoa, I decided to go with the NSXMLParser class. The app will be responsible for parsing 10,000+ "computers", all of which contain 6 other strings of information. For my test, I've verified that the XML is around 900k-1MB in size. My data model is to keep each computer in an NSDictionary hashed by a unique identifier. Each computer is also represented by an NSDictionary with the information. So at the end of the day, I end up with an NSDictionary containing 10k other NSDictionaries. The problem I'm running into isn't about leaking memory or efficient data structure storage. When my parser is done, the total amount of allocated objects only goes up by about 1MB. The problem is that while the NSXMLParser is running, my object allocation is jumping up as much as 13MB. I could understand 2MB (one for the object I'm creating and one for the raw NSData) plus a little room to work, but 13MB seems a bit high. I can't imagine that NSXMLParser is that inefficient. Thoughts? Code... The code to start parsing... NSXMLParser *parser = [[NSXMLParser alloc] initWithData: data]; [parser setDelegate:dictParser]; [parser parse]; output = [[dictParser returnDictionary] retain]; [parser release]; [dictParser release]; And the parser's delegate code... -(void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qualifiedName attributes:(NSDictionary *)attributeDict { if(mutableString) { [mutableString release]; mutableString = nil; } mutableString = [[NSMutableString alloc] init]; } -(void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string { if(self.mutableString) { [self.mutableString appendString:string]; } } -(void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName { if([elementName isEqualToString:@"size"]){ //The initial key, tells me how many computers returnDictionary = [[NSMutableDictionary alloc] initWithCapacity:[mutableString intValue]]; } if([elementName isEqualToString:hashBy]){ //The unique identifier if(mutableDictionary){ [mutableDictionary release]; mutableDictionary = nil; } mutableDictionary = [[NSMutableDictionary alloc] initWithCapacity:6]; [returnDictionary setObject:[NSDictionary dictionaryWithDictionary:mutableDictionary] forKey:[NSMutableString stringWithString:mutableString]]; } if([fields containsObject:elementName]){ //Any of the elements from a single computer that I am looking for [mutableDictionary setObject:mutableString forKey:elementName]; } } Everything is initialized and released correctly. Again, I'm not getting errors or leaking. Just inefficient. Thanks for any thoughts!
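
    One plausible explanation (an assumption - Instruments would confirm it) is that in this pre-ARC code the spike comes from autoreleased temporaries piling up across 10,000+ elements until the enclosing pool drains. Draining a local pool every few hundred elements caps the high-water mark; a sketch:

```objc
// 'elementCount' and 'localPool' are hypothetical ivars on the delegate;
// localPool would be created in parserDidStartDocument:.
- (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName
  namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName {
    // ... existing element handling ...
    if (++elementCount % 500 == 0) {
        [localPool drain];                           // free accumulated temporaries
        localPool = [[NSAutoreleasePool alloc] init];
    }
}
```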

    Read the article

  • Efficiency of Java "Double Brace Initialization"?

    - by Jim Ferrans
    In Hidden Features of Java the top answer mentions Double Brace Initialization, with a very enticing syntax: Set<String> flavors = new HashSet<String>() {{ add("vanilla"); add("strawberry"); add("chocolate"); add("butter pecan"); }}; This idiom creates an anonymous inner class with just an instance initializer in it, which "can use any [...] methods in the containing scope". Main question: Is this as inefficient as it sounds? Should its use be limited to one-off initializations? (And of course showing off!) Second question: The new HashSet must be the "this" used in the instance initializer ... can anyone shed light on the mechanism? Third question: Is this idiom too obscure to use in production code? Summary: Very, very nice answers, thanks everyone. On question (3), people felt the syntax should be clear (though I'd recommend an occasional comment, especially if your code will pass on to developers who may not be familiar with it). On question (1), the generated code should run quickly. The extra .class files do cause jar file clutter, and slow program startup slightly (thanks to coobird for measuring that). Thilo pointed out that garbage collection can be affected, and the memory cost for the extra loaded classes may be a factor in some cases. Question (2) turned out to be most interesting to me. If I understand the answers, what's happening in DBI is that the anonymous inner class extends the class of the object being constructed by the new operator, and hence has a "this" value referencing the instance being constructed. Very neat. Overall, DBI strikes me as something of an intellectual curiosity. Coobird and others point out you can achieve the same effect with Arrays.asList, varargs methods, Google Collections, and the proposed Java 7 collection literals. Newer JVM languages like Scala, JRuby, and Groovy also offer concise notations for list construction and interoperate well with Java. Given that DBI clutters up the classpath, slows down class loading a bit, and makes the code a tad more obscure, I'd probably shy away from it. However, I plan to spring this on a friend who's just gotten his SCJP and loves good-natured jousts about Java semantics! ;-) Thanks everyone!
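
    For comparison, the Arrays.asList alternative mentioned in the summary keeps the one-expression feel without generating an extra class file:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

class Flavors {
    // Same contents as the double-brace version: no anonymous subclass,
    // no extra .class file, no hidden reference to an enclosing instance.
    static final Set<String> FLAVORS = new HashSet<String>(
            Arrays.asList("vanilla", "strawberry", "chocolate", "butter pecan"));
}
```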

    Read the article

  • Efficiency of nested Loop

    - by didxga
    See the following snippet: //first nested loops for(int i=0;i<10;i++) { for(int j=1;j<1000000;j++) { //do some stuff } } //second nested loops for(int i=0;i<1000000;i++) { for(int j=1;j<10;j++) { //do some stuff } } I am wondering why the first nested loop runs slower than the second one. Regards!

    Read the article

  • calloc v/s malloc and time efficiency

    - by yCalleecharan
    Hi, I've read with interest the post "c difference between malloc and calloc". I'm using malloc in my code and would like to know what difference using calloc instead would make. My present (pseudo)code with malloc: Scenario 1 int main() { allocate large arrays with malloc INITIALIZE ALL ARRAY ELEMENTS TO ZERO for loop //say 1000 times do something and write results to arrays end for loop FREE ARRAYS with free command } //end main If I use calloc instead of malloc, then I'll have: Scenario 2 int main() { for loop //say 1000 times ALLOCATION OF ARRAYS WITH CALLOC do something and write results to arrays FREE ARRAYS with free command end for loop } //end main I have three questions: Which of the scenarios is more efficient if the arrays are very large? Which of the scenarios will be more time-efficient if the arrays are very large? In both scenarios, I'm just writing to arrays in the sense that for any given iteration in the for loop, I'm writing each array sequentially from the first element to the last element. The important question: If I'm using malloc as in scenario 1, then is it necessary that I initialize the elements to zero? Say with malloc I have array z = [garbage1, garbage2, garbage3]. For each iteration, I'm writing elements sequentially, i.e. in the first iteration I get z = [some_result, garbage2, garbage3], in the second iteration I get z = [some_result, another_result, garbage3], and so on - do I specifically need to initialize my arrays after malloc?
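
    On the important question specifically: after malloc the contents are indeterminate, so reading an element before writing it is undefined behavior - but if, as described, every element is written before it is ever read, no zero-initialization is needed at all. A sketch of the options side by side:

```c
#include <stdlib.h>
#include <string.h>

int main(void) {
    size_t n = 1000000;

    double *a = malloc(n * sizeof *a);   /* contents indeterminate              */
    if (a) memset(a, 0, n * sizeof *a);  /* explicit zeroing touches every page */

    double *b = calloc(n, sizeof *b);    /* zeroed by the allocator; big blocks
                                            often arrive from the OS already
                                            zero-filled, so this can cost less
                                            than malloc + memset                */
    free(a);
    free(b);
    return 0;
}
```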

    Read the article

  • Efficiency of checking for null varbinary(max) column ?

    - by Moe Sisko
    Using SQL Server 2008. Example table : CREATE table dbo.blobtest (id int primary key not null, name nvarchar(200) not null, data varbinary(max) null) Example query : select id, name, cast((case when data is null then 0 else 1 end) as bit) as DataExists from dbo.blobtest Now, the query needs to return a "DataExists" column, that returns 0 if the blob is null, else 1. This all works fine, but I'm wondering how efficient it is. i.e. does SQL server need to read in the whole blob to its memory, or is there some optimization so that it just does enough reads to figure out if the blob is null or not ? (FWIW, the sp_tableoption "large value types out of row" option is set to OFF for this example).

    Read the article

  • jQuery/Javascript framework efficiency

    - by Russell
    My latest project is using a JavaScript framework (jQuery), along with some plugins (validation, jquery-ui, datepicker, facebox, ...) to help make a modern web application. I am now finding pages loading slower than I am used to. After some JS profiling (thanks VS2010!), it seems a lot of the time is taken up processing inside the framework. Now I understand that the more complex the UI tools, the more processing needs to be done. The project is not yet at a large stage, and I think the functions involved are average. At this stage I can see it is not going to scale well. I noticed that things like the 'each' command in jQuery take quite a lot of processing time. Have others experienced extra latency using JS frameworks? How do I minimise their effect on page performance? Are there best practices for implementation using JS frameworks? Thanks
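
    A typical first step when .each() shows up hot in a profile (a sketch; '#list' is an invented selector): cache the collection and iterate with a plain indexed loop, which skips the per-element callback dispatch.

```javascript
var items = $('#list li');                       // one DOM query, cached
for (var i = 0, len = items.length; i < len; i++) {
    var el = items[i];                           // raw DOM node, no re-wrapping
    // ... work on el; wrap with $(el) only where a plugin needs it ...
}
```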

    Read the article

  • Array sorting efficiency... Beginner need advice

    - by SoleSoft
    I'll start by saying I am very much a beginner programmer, this is essentially my first real project outside of using learning material. I've been making a 'Simon Says' style game (the game where you repeat the pattern generated by the computer) using C# and XNA, the actual game is complete and working fine but while creating it, I wanted to also create a 'top 10' scoreboard. The scoreboard would record player name, level (how many 'rounds' they've completed) and combo (how many buttons presses they got correct), the scoreboard would then be sorted by combo score. This led me to XML, the first time using it, and I eventually got to the point of having an XML file that recorded the top 10 scores. The XML file is managed within a scoreboard class, which is also responsible for adding new scores and sorting scores. Which gets me to the point... I'd like some feedback on the way I've gone about sorting the score list and how I could have done it better, I have no other way to gain feedback =(. I know .NET features Array.Sort() but I wasn't too sure of how to use it as it's not just a single array that needs to be sorted. When a new score needs to be entered into the scoreboard, the player name and level also have to be added. These are stored within an 'array of arrays' (10 = for 'top 10' scores) scoreboardComboData = new int[10]; // Combo scoreboardTextData = new string[2][]; scoreboardTextData[0] = new string[10]; // Name scoreboardTextData[1] = new string[10]; // Level as string The scoreboard class works as follows: - Checks to see if 'scoreboard.xml' exists, if not it creates it - Initialises above arrays and adds any player data from scoreboard.xml, from previous run - when AddScore(name, level, combo) is called the sort begins - Another method can also be called that populates the XML file with above array data The sort checks to see if the new score (combo) is less than or equal to any recorded scores within the scoreboardComboData array (if it's greater than a score, it moves onto the next element). If so, it moves all scores below the score it is less than or equal to down one element, essentially removing the last score and then places the new score within the element below the score it is less than or equal to. If the score is greater than all recorded scores, it moves all scores down one and inserts the new score within the first element. If it's the only score, it simply adds it to the first element. When a new score is added, the Name and Level data is also added to their relevant arrays, in the same way. What a tongue twister. Below is the AddScore method, I've added comments in the hope that it makes things clearer O_o. You can get the actual source file HERE. Below the method is an example of the quickest way to add a score to follow through with a debugger. public static void AddScore(string name, string level, int combo) { // If the scoreboard has not yet been filled, this adds another 'active' // array element each time a new score is added. The actual array size is // defined within PopulateScoreBoard() (set to 10 - for 'top 10' if (totalScores < scoreboardComboData.Length) totalScores++; // Does the scoreboard even need sorting? 
if (totalScores > 1) { for (int i = totalScores - 1; i > - 1; i--) { // Check to see if score (combo) is greater than score stored in // array if (combo > scoreboardComboData[i] && i != 0) { // If so continue to next element continue; } // Check to see if score (combo) is less or equal to element 'i' // score && that the element is not the last in the // array, if so the score does not need to be added to the scoreboard else if (combo <= scoreboardComboData[i] && i != scoreboardComboData.Length - 1) { // If the score is lower than element 'i' and greater than the last // element within the array, it needs to be added to the scoreboard. This is achieved // by moving each element under element 'i' down an element. The new score is then inserted // into the array under element 'i' for (int j = totalScores - 1; j > i; j--) { // Name and level data are moved down in their relevant arrays scoreboardTextData[0][j] = scoreboardTextData[0][j - 1]; scoreboardTextData[1][j] = scoreboardTextData[1][j - 1]; // Score (combo) data is moved down in relevant array scoreboardComboData[j] = scoreboardComboData[j - 1]; } // The new Name, level and score (combo) data is inserted into the relevant array under element 'i' scoreboardTextData[0][i + 1] = name; scoreboardTextData[1][i + 1] = level; scoreboardComboData[i + 1] = combo; break; } // If the method gets the this point, it means that the score is greater than all scores within // the array and therefore cannot be added in the above way. As it is not less than any score within // the array. else if (i == 0) { // All Names, levels and scores are moved down within their relevant arrays for (int j = totalScores - 1; j != 0; j--) { scoreboardTextData[0][j] = scoreboardTextData[0][j - 1]; scoreboardTextData[1][j] = scoreboardTextData[1][j - 1]; scoreboardComboData[j] = scoreboardComboData[j - 1]; } // The new number 1 top name, level and score, are added into the first element // within each of their relevant arrays. scoreboardTextData[0][0] = name; scoreboardTextData[1][0] = level; scoreboardComboData[0] = combo; break; } // If the methods get to this point, the combo score is not high enough // to be on the top10 score list and therefore needs to break break; } } // As totalScores < 1, the current score is the first to be added. Therefore no checks need to be made // and the Name, Level and combo data can be entered directly into the first element of their relevant // array. else { scoreboardTextData[0][0] = name; scoreboardTextData[1][0] = level; scoreboardComboData[0] = combo; } } } Example for adding score: private static void Initialize() { scoreboardDoc = new XmlDocument(); if (!File.Exists("Scoreboard.xml")) GenerateXML("Scoreboard.xml"); PopulateScoreBoard("Scoreboard.xml"); // ADD TEST SCORES HERE! AddScore("EXAMPLE", "10", 100); AddScore("EXAMPLE2", "24", 999); PopulateXML("Scoreboard.xml"); } In it's current state the source file is just used for testing, initialize is called within main and PopulateScoreBoard handles the majority of other initialising, so nothing else is needed, except to add a test score. I thank you for your time!

    Read the article

  • NSDictionary, NSArray, NSSet and efficiency

    - by ryyst
    Hi, I've got a text file with about 200,000 lines. Each line represents an object with multiple properties. I only search through one of the properties (the unique ID) of the objects. If the unique ID I'm looking for is the same as the current object's unique ID, I read the rest of the object's values. Right now, each time I search for an object, I just read the whole text file line by line, create an object for each line and see if it's the object I'm looking for - which is basically the most inefficient way to do the search. I would like to read all those objects into memory, so I can later search through them more efficiently. The question is, what's the most efficient way to perform such a search? Is a 200,000-entry NSArray a good way to do this (I doubt it)? How about an NSSet? With an NSSet, is it possible to search by only one property of the objects? Thanks for any help! -- Ry
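
    Since every lookup is by the unique ID, an NSDictionary keyed on that ID is the natural fit: an NSArray would still mean a linear scan, and an NSSet only answers whole-object membership, not "find by one property". A minimal sketch ('MyRecord', 'records' and 'wantedID' are invented names):

```objc
// Build the index once, after a single pass over the file...
NSMutableDictionary *byID = [NSMutableDictionary dictionaryWithCapacity:200000];
for (MyRecord *r in records) {
    [byID setObject:r forKey:[r uniqueID]];
}

// ...then every search is a constant-time hash lookup instead of a file scan:
MyRecord *match = [byID objectForKey:wantedID];
```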

    Read the article

  • jQuery bind efficiency

    - by chelfers
    I'm having issue with load speed using multiple jQuery binds on a couple thousands elements and inputs, is there a more efficient way of doing this? The site has the ability to switch between product lists via ajax calls, the page cannot refresh. Some lists have 10 items, some 100, some over 2000. The issue of speed arises when I start flipping between the lists; each time the 2000+ item list is loaded the system drags for about 10 seconds. Before I rebuild the list I am setting the target element's html to '', and unbinding the two bindings below. I'm sure it has something to do with all the parent, next, and child calls I am doing in the callbacks. Any help is much appreciated. loop 2500 times <ul> <li><input type="text" class="product-code" /></li> <li>PROD-CODE</li> ... <li>PRICE</li> </ul> end loop $('li.product-code').bind( 'click', function(event){ selector = '#p-'+ $(this).prev('li').children('input').attr('lm'); $(selector).val( ( $(selector).val() == '' ? 1 : ( parseFloat( $(selector).val() ) + 1 ) ) ); Remote.Cart.lastProduct = selector; Remote.Cart.Products.Push( Remote.Cart.customerKey, { code : $(this).prev('li').children('input').attr('code'), title : $(this).next('li').html(), quantity : $('#p-'+ $(this).prev('li').children('input').attr('lm') ).val(), price : $(this).prev('li').children('input').attr('price'), weight : $(this).prev('li').children('input').attr('weight'), taxable : $(this).prev('li').children('input').attr('taxable'), productId : $(this).prev('li').children('input').attr('productId'), links : $(this).prev('li').children('input').attr('productLinks') }, '#p-'+ $(this).prev('li').children('input').attr('lm'), false, ( parseFloat($(selector).val()) - 1 ) ); return false; }); $('input.product-qty').bind( 'keyup', function(){ Remote.Cart.lastProduct = '#p-'+ $(this).attr('lm'); Remote.Cart.Products.Push( Remote.Cart.customerKey, { code : $(this).attr('code') , title : $(this).parent().next('li').next('li').html(), quantity : $(this).val(), price : $(this).attr('price'), weight : $(this).attr('weight'), taxable : $(this).attr('taxable'), productId : $(this).attr('productId'), links : $(this).attr('productLinks') }, '#p-'+ $(this).attr('lm'), false, previousValue ); });
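
    The standard cure for thousands of per-element bindings is event delegation: one handler on a stable ancestor serves every current and future row, so rebuilding the 2000-item list binds nothing. A sketch using .delegate(), available since jQuery 1.4.2 ('#product-list' is an invented id for the list's container):

```javascript
// One binding each, attached once, regardless of list size; clicks and
// keyups bubble up from the rows to the container.
$('#product-list').delegate('li.product-code', 'click', function (event) {
    // ... same click-handler body as above; 'this' is the clicked element ...
    return false;
});

$('#product-list').delegate('input.product-qty', 'keyup', function () {
    // ... same keyup body as above ...
});
```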

    Read the article
