Search Results

Search found 37647 results on 1506 pages for 'sql performance'.


  • Performance concern when using LINQ "everywhere"?

    - by stiank81
    After upgrading to ReSharper 5, I get even more useful tips on code improvements. One tip I now see everywhere is to replace foreach statements with LINQ queries. Take this example:

        private Ninja FindNinjaById(int ninjaId)
        {
            foreach (var ninja in Ninjas)
            {
                if (ninja.Id == ninjaId)
                    return ninja;
            }
            return null;
        }

    ReSharper suggests replacing it with the following LINQ query:

        private Ninja FindNinjaById(int ninjaId)
        {
            return Ninjas.FirstOrDefault(ninja => ninja.Id == ninjaId);
        }

    This looks fine, and I'm sure there's no performance problem in replacing this one foreach. But is it something I should do in general? Or might I run into performance problems with all these LINQ queries everywhere?
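    For perspective, over an in-memory collection FirstOrDefault is itself little more than the original loop. A minimal sketch of what a FirstOrDefault-style extension method boils down to (simplified; the real BCL version adds argument checks):

        using System;
        using System.Collections.Generic;

        static class SequenceHelpers
        {
            // Walk the sequence, return the first match, or default(T) when nothing matches.
            public static T FirstMatchOrDefault<T>(this IEnumerable<T> source, Func<T, bool> predicate)
            {
                foreach (T item in source)
                {
                    if (predicate(item))
                        return item;
                }
                return default(T);
            }
        }

    The only extra cost over the hand-written foreach is a delegate call per element, which is usually negligible; longer deferred-execution chains (Where, Select and so on) add iterator overhead, but of the same order of magnitude.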

    Read the article

  • Can GPU capabilities impact virtual machine performance?

    - by Dave White
    While this may not seem like a programming question directly, it impacts my development activities, so it seems like it belongs here. It seems that more and more developers are turning to virtual environments for development activities on their computers, SharePoint development being a prime example. Also, as a trainer, I have virtual training environments for all of the classes that I teach.

    I recently purchased a new Dell E6510 to travel around with. It has the i7 620M (a dual-core, hyper-threaded CPU running at 2.66 GHz) and 8 GB of memory. Reading the spec sheet, it sounded like it would be a great laptop to carry around and run virtual machines on. Since getting the laptop, though, I've been pretty disappointed with the user experience of developing in a virtual machine. Giving the virtual machine 4 GB of memory, it was slow; I could type complete sentences and watch the VM "catch up".

    My company has training laptops that we provide for our classes: Dell Precision M6400s with an Intel Core 2 Duo P8700 running at 2.54 GHz and 8 GB of memory. The experience on these laptops is night and day compared to the E6510. They are crisp, and you are barely aware that you are running in a virtual environment.

    Since the E6510 should be faster in all categories than the M6400, I couldn't understand why the new laptop was slower, so I did a component-by-component comparison. The only place where the E6510 is less performant than the M6400 is the graphics department. The M6400 is running an NVIDIA FX 2700M GPU and the E6510 is running an NVIDIA 3100M GPU. Benchmarks of the two GPUs suggest that the FX 2700M is twice as fast as the 3100M (http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html): 3100M = 111th (E6510); FX 2700M = 47th (Precision M6400); Radeon HD 5870 = 8th (Alienware).

    The host OS is Windows 7 64-bit, as is the guest OS, running in VirtualBox 3.1.8 with Guest Additions installed on the guest. The IDE being used in the virtual environment is VS 2010 Premium. So after that long setup, my question is: is the GPU significantly impacting the virtual machine's performance, or are there other factors that I'm not looking at that I can use to boost the VM's performance? Do we now have to consider GPU performance when purchasing laptops where we expect to use virtualized development environments? Thanks in advance. Cheers, Dave

    Read the article

  • performance monitoring tools for multi-tenant web application

    - by Anton
    We need to monitor the performance of our Java web app, and we are looking for tools that can help us with this task. The major difficulty is that we are a SaaS provider with a multi-tenant server architecture, with hundreds of customers running on the same hardware. So far we have tried commercial products like DynaTrace and Coradiant, but unfortunately they haven't gotten the job done. What we need is a simple report that would tell us whether we had performance problems on each customer site in a specified period of time: mostly response time per customer, but we will also need some more specifics based on the URLs. Please let me know if you have any experience with setting up such monitoring. Thanks!

    Read the article

  • LINQ to SQL performance in highly loaded web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I had gotten really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all this good stuff). After hearing about LINQ to SQL and MVC, I quickly moved all my projects to these technologies. I expected LINQ to SQL to work slower, but it surprisingly turned out to be pretty fast, primarily because I always used to forget to close my connections when using data readers. Now I don't have to worry about it.

    But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets data in the beginning, works with it and updates it. Primarily the updates are ++ and -- (increase and decrease values). I used to do it like this:

        UPDATE table SET value = value + 1 WHERE ID = @Id

    It worked with no problems, obviously. But with LINQ to SQL, the data is fetched in the beginning, moved to the class, changed and then saved:

        stats.RegisteredUsers++;
        db.SubmitChanges();

    Let's say there were 100,000 users. LINQ will say "let it be 100,001" instead of "let it be increased by 1". But if the value has already been increased by another request (which happens on my site all the time), then LINQ will say "oops, this value is already 100,001; I'll throw an exception". You can change this behavior so that it won't throw an exception, but it still will not set the value to 100,002. Like I said, this happened to me all the time; the stats value was increased twice a second on average. I simply had to rewrite this chunk of code with classic ADO.NET. So my question is: how can you solve this problem with LINQ?
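    A common way around this, and roughly what the UPDATE ... SET value = value + 1 pattern maps to in LINQ to SQL, is to bypass change tracking for the increment and send a relative update through the DataContext. A sketch, where MyDataContext, tblStats and the column names are placeholders:

        using System.Data.Linq;

        public static class StatsUpdater
        {
            // Sends a relative update (value = value + 1) in a single statement,
            // so concurrent requests cannot clobber each other's increments.
            public static void IncrementRegisteredUsers(MyDataContext db, int statsId)
            {
                db.ExecuteCommand(
                    "UPDATE tblStats SET RegisteredUsers = RegisteredUsers + 1 WHERE ID = {0}",
                    statsId);
            }
        }

    DataContext.ExecuteCommand parameterizes the {0} placeholder, so this stays injection-safe while keeping the increment atomic on the server.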

    Read the article

  • Performance of .NET ILMerged assemblies

    - by matt
    I have two .NET libraries: "Foo.Bar" and "Foo.Baz". "Foo.Bar" is self-contained, while "Foo.Baz" references "Foo.Bar". Assume I do the following:

    1. Use ILMerge to merge "Foo.Bar.dll" with "Foo.Baz.dll" into "Foo1.dll".
    2. Create a new solution containing the entirety of both "Foo.Bar" and "Foo.Baz" (since I have access to their source code), and compile this into "Foo2.dll".

    Will there be any differences in the performance of Foo1.dll and Foo2.dll when using their functionality from an external project? If so, how significant is this performance difference, and is it a once-off (on load?) or ongoing difference? Are there any other pros or cons with either approach?

    Read the article

  • Wildcards in T-SQL LIKE vs. ASP.NET parameters

    - by Vinzcent
    In my SQL statement I use wildcards, but the query never selects anything, even though the same query works fine when I execute it in Microsoft SQL Server Management Studio. What am I doing wrong?

    The click handler:

        protected void btnTitelAuteur_Click(object sender, EventArgs e)
        {
            cvalTitelAuteur.Enabled = true;
            cvalTitelAuteur.Validate();

            if (Page.IsValid)
            {
                objdsSelectedBooks.SelectMethod = "getBooksByTitleAuthor";
                objdsSelectedBooks.SelectParameters.Clear();
                objdsSelectedBooks.SelectParameters.Add(new Parameter("title", DbType.String));
                objdsSelectedBooks.SelectParameters.Add(new Parameter("author", DbType.String));
                objdsSelectedBooks.Select();
                gvSelectedBooks.DataBind();

                pnlZoeken.Visible = false;
                pnlKiezen.Visible = true;
            }
        }

    In my data access layer:

        public static DataTable getBooksByTitleAuthor(string title, string author)
        {
            string sql = "SELECT 'AUTHOR' = tblAuthors.FIRSTNAME + ' ' + tblAuthors.LASTNAME, tblBooks.*, tblGenres.GENRE "
                       + "FROM tblAuthors "
                       + "INNER JOIN tblBooks ON tblAuthors.AUTHOR_ID = tblBooks.AUTHOR_ID "
                       + "INNER JOIN tblGenres ON tblBooks.GENRE_ID = tblGenres.GENRE_ID "
                       + "WHERE (tblBooks.TITLE LIKE '%@title%');";

            SqlDataAdapter da = new SqlDataAdapter(sql, GetConnectionString());
            da.SelectCommand.Parameters.Add("@title", SqlDbType.Text);
            da.SelectCommand.Parameters["@title"].Value = title;

            DataSet ds = new DataSet();
            da.Fill(ds, "Books");
            return ds.Tables["Books"];
        }
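    The likely culprit: inside the string literal '%@title%', @title is not a parameter marker at all, so SQL Server searches for the literal text "@title" surrounded by percent signs and nothing matches. A sketch of the conventional fix is to concatenate the wildcards around the parameter in SQL:

        string sql = "... WHERE tblBooks.TITLE LIKE '%' + @title + '%';";
        da.SelectCommand.Parameters.Add("@title", SqlDbType.VarChar).Value = title;

    or, equivalently, keep WHERE tblBooks.TITLE LIKE @title and put the wildcards into the value ("%" + title + "%"). Note also that the click handler adds its SelectParameters without values; a constructor overload such as new Parameter("title", DbType.String, txtTitle.Text), with txtTitle being a hypothetical TextBox, would pass the user's input along.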

    Read the article

  • Apache modules: C module vs mod_wsgi python module - Performance

    - by Gopal
    A client of ours is asking us to implement an Apache module in C for performance reasons. This module should handle RESTful URIs, access a database and return results in JSON format. Many people here have recommended Python with mod_wsgi instead, but for simplicity-of-programming reasons. Can anyone tell me if there is a significant difference in performance between the Python mod_wsgi solution and the Apache-plus-C-module solution? Any anecdotes? Pointers to a study posted online?

    Read the article

  • How does C#'s DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Given: Let's say we have a stored procedure that reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the SQL proc. Let's say that one user runs a report, 5/1/2010 to now, over and over several times. On the webpage, the user sees "5/1/2010" to "5/4/2010." But the web app passes DateTime.Now to the SQL proc as the end date, so the end date in the proc will always be different, even though the user is querying a similar date range. Assume the number of records in the table and the number of users are large, so any performance gains matter. Hence the importance of the question.

    Question: Does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, is the web app missing out on huge performance gains?

    Possible solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would allow the user to get the latest data and always pass the same end date to the SQL proc, "5/5/2010" in this case. Please speak to this as well.

    Sample proc and execution (if that helps to understand):

        CREATE PROCEDURE GetFooData
            @StartDate datetime,
            @EndDate datetime
        AS
        SELECT *
        FROM Foo
        WHERE LogDate >= @StartDate
          AND LogDate < @EndDate

    Here's a sample execution using DateTime.Now:

        EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now

    Here's a sample execution using DateTime.Today.AddDays(1):

        EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)

    The same data is returned for both, since the current time is 2010-05-04 15:41:27.
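    For what it's worth, plans for stored procedures are cached per procedure, not per parameter value, so a changing @EndDate by itself should not force a recompile. The stable day boundary is still a reasonable idea, if only because it makes identical requests cacheable further up the stack. A sketch of passing it from C#:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static DataTable GetFooData(string connectionString, DateTime startDate)
        {
            // Midnight tomorrow: every call made today sends the same end date.
            DateTime endExclusive = DateTime.Today.AddDays(1);

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("GetFooData", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add("@StartDate", SqlDbType.DateTime).Value = startDate;
                cmd.Parameters.Add("@EndDate", SqlDbType.DateTime).Value = endExclusive;

                var table = new DataTable();
                new SqlDataAdapter(cmd).Fill(table); // the adapter opens and closes the connection
                return table;
            }
        }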

    Read the article

  • Advice needed: cold backup for SQL Server 2008 Express?

    - by Mikey Cee
    What are my options for achieving a cold backup server for an SQL Server Express instance running a single database?

    I have an SQL Server 2008 Express instance in production that currently represents a single point of failure for my application. I have a second physical box sitting at the installation that is currently doing nothing. I want to somehow replicate my database in near real time (a little bit of data loss is acceptable) to the second box. The database is very small and resources are utilized very lightly. In the case that the production server dies, I would manually reconfigure my application to point to the backup server instead. Although Express doesn't support log shipping, I am thinking that I could manually script a poor man's version of it, using batch files to take the logs, copy them across the network and apply them to the second server at 5-minute intervals. Does anyone have any advice on whether this is technically achievable, or whether there is a better way to do what I am trying to do? Note that I want to avoid having to pay for the full version of SQL Server and configure mirroring, as I think that is overkill for this application. I understand that other DB platforms may present suitable options (e.g. a MySQL Cluster), but for the purposes of this discussion, let's assume we have to stick to SQL Server.
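    The poor man's version is technically achievable: Express lacks SQL Agent, not the BACKUP/RESTORE commands, so the 5-minute schedule has to come from Windows Task Scheduler instead. A rough sketch of the primary-side step (database name, paths and connection string are placeholders; it assumes the database is in FULL recovery and that a full backup has already been restored WITH NORECOVERY on the standby):

        using System;
        using System.Data.SqlClient;
        using System.IO;

        class LogShipStep
        {
            static void Main()
            {
                string backupFile = @"C:\LogShip\MyDb_"
                    + DateTime.UtcNow.ToString("yyyyMMddHHmmss") + ".trn";

                // Back up the tail of the log to a local file.
                using (var conn = new SqlConnection(@"Server=.\SQLEXPRESS;Database=master;Integrated Security=true"))
                using (var cmd = new SqlCommand("BACKUP LOG [MyDb] TO DISK = @file WITH INIT", conn))
                {
                    cmd.Parameters.AddWithValue("@file", backupFile);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }

                // Copy to the standby box; a scheduled task there applies it with
                // RESTORE LOG [MyDb] FROM DISK = ... WITH NORECOVERY.
                File.Copy(backupFile, Path.Combine(@"\\standby\LogShip", Path.GetFileName(backupFile)));
            }
        }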

    Read the article

  • Hidden features of PL/SQL

    - by Adam Paynter
    In light of the "Hidden features of..." series of questions, what little-known features of PL/SQL have become useful to you? Edit: Features specific to PL/SQL are preferred over features of Oracle's SQL syntax. However, because PL/SQL can use most of Oracle's SQL constructs, they may be included if they make programming in PL/SQL easier.

    Read the article

  • Visual Studio add-in for performance benchmarking

    - by chiccodoro
    I'd like to measure the performance of some code blocks in my C# WinForms application. In particular, I want to measure performance regression/improvement after some restructuring of the code. So far I've seen System.Diagnostics.Stopwatch. However, I want to avoid writing measuring code into my classes; I would prefer to separate measuring from the actual code. Just as, when debugging, you can set breakpoints on several lines and jump from one to the next with "Continue Execution", I imagine something similar for measuring: mark two lines of code and have Visual Studio display the time elapsing from one to the next. Is there any feature or add-in along those lines?
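    As far as I know there is no built-in VS 2010 feature that does exactly this (the profiler in the Premium/Ultimate SKUs is the closest), so a common compromise is to keep Stopwatch but confine it to a small disposable scope, reducing the in-class measuring code to a single using line. A sketch:

        using System;
        using System.Diagnostics;

        // Minimal timing scope: measurement stays on one line at the call site
        // and is easy to delete (or wrap in #if PROFILE) later.
        sealed class TimedScope : IDisposable
        {
            private readonly string label;
            private readonly Stopwatch watch = Stopwatch.StartNew();

            public TimedScope(string label) { this.label = label; }

            public void Dispose()
            {
                watch.Stop();
                Debug.WriteLine(label + ": " + watch.ElapsedMilliseconds + " ms");
            }
        }

    Usage around the block under test:

        using (new TimedScope("LoadCustomers"))
        {
            // restructured code being measured
        }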

    Read the article

  • Performance of managed C++ vs. unmanaged/native C++

    - by bsobaid
    I am writing a very high performance application that handles and processes hundreds of events every millisecond. Is unmanaged C++ faster than managed C++, and why? Managed C++ targets the CLR rather than the OS directly, and the CLR takes care of memory management, which simplifies the code and is probably also more efficient than code written by "a programmer" in unmanaged C++. Or is there some other reason? When using managed code, how can one avoid the dynamic memory allocation that causes a performance hit, if it is all transparent to the programmer and handled by the CLR? So, coming back to my question: is managed C++ more efficient in terms of speed than unmanaged C++, and why?
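    On avoiding dynamic allocation in managed code: allocation is only transparent by default; nothing stops you from preallocating. One common pattern is a fixed-size object pool so the steady state allocates nothing and the GC stays quiet. A sketch (shown in C#; the CLR behaviour is the same for C++/CLI, and the pool size of 1024 is an arbitrary assumption):

        using System.Collections.Generic;

        // Preallocate event objects up front; Rent/Return them instead of new-ing
        // per event, so the hot path performs no GC allocation.
        sealed class EventPool<T> where T : class, new()
        {
            private readonly Stack<T> items = new Stack<T>();

            public EventPool(int size)
            {
                for (int i = 0; i < size; i++)
                    items.Push(new T());
            }

            public T Rent() { return items.Count > 0 ? items.Pop() : new T(); }
            public void Return(T item) { items.Push(item); }
        }

        // var pool = new EventPool<MarketEvent>(1024); // MarketEvent is a placeholder type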

    Read the article

  • What's the performance penalty of weak_ptr?

    - by Kornel Kisielewicz
    I'm currently designing an object structure for a game, and the most natural organization in my case is a tree. Being a great fan of smart pointers, I use shared_ptr exclusively. However, in this case the children in the tree will need access to their parent; for example, beings on a map need to be able to access the map data, i.e. the data of their parent. The direction of ownership is of course that a map owns its beings, so it holds shared pointers to them. To access the map data from within a being, we need a pointer back to the parent; the smart-pointer way is to hold a non-owning reference, i.e. a weak_ptr. However, I once read that locking a weak_ptr is an expensive operation. Maybe that's not true anymore, but considering that the weak_ptr will be locked very often, I'm concerned that this design is doomed to poor performance. Hence the question: what is the performance penalty of locking a weak_ptr? How significant is it?

    Read the article

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think) I want. A lot of tutorials, and even SO questions, have similar tips, generally covering:

    - Use GL face culling (the OpenGL function, not the scene logic)
    - Send only one matrix to the GPU (the projection-model-view combination), decreasing the MVP calculations from once per vertex to once per model (as it should be)
    - Use interleaved vertices
    - Minimise GL calls, batching where appropriate

    and possibly a few/many others. I am (for curiosity reasons) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge) and received almost no performance change. Whilst I am receiving around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation "tips" actually come into use. My CPU is idling around 20-50% during rendering, therefore I assume I am GPU-bound. Note: I am looking into gDEBugger at the moment. Cross-posted at Game Development.

    Read the article

  • LINQ Joins - Performance

    - by Meiscooldude
    I am curious how exactly LINQ (not LINQ to SQL) performs its joins behind the scenes, in relation to how SQL Server performs joins.

    Before executing a query, SQL Server generates an execution plan. The execution plan is basically an expression tree describing what it believes is the best way to execute the query. Each node provides information on whether to do a sort, scan, select, join, etc. On a join node in our execution plan, we can see three possible algorithms: hash join, merge join, and nested loops join. SQL Server chooses which algorithm to use for each join operation based on the expected number of rows in the inner and outer tables, the type of join we are doing (some algorithms don't support all types of joins), whether we need the data ordered, and probably many other factors.

    Join algorithms:

    - Nested loops join: best for small inputs; can be optimized with an ordered inner table.
    - Merge join: best for medium to large sorted inputs, or an output that needs to be ordered.
    - Hash join: best for medium to large inputs; can be parallelized to scale linearly.

    LINQ query:

        DataTable firstTable, secondTable;
        ...
        var rows = from firstRow in firstTable.AsEnumerable()
                   join secondRow in secondTable.AsEnumerable()
                   on firstRow.Field<object>(randomObject.Property)
                   equals secondRow.Field<object>(randomObject.Property)
                   select new { firstRow, secondRow };

    SQL query:

        SELECT *
        FROM firstTable fT
        INNER JOIN secondTable sT ON fT.Property = sT.Property

    SQL Server might use a nested loops join if it knows there are a small number of rows in each table, a merge join if it knows one of the tables has an index, and a hash join if it knows there are a lot of rows in either table and neither has an index. Does LINQ choose its join algorithm, or does it always use the same one?
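    For LINQ to Objects specifically, Enumerable.Join does not consult statistics; it always performs the equivalent of a hash join, buffering the inner sequence into a hash-based lookup and streaming the outer one. A simplified sketch of that behaviour (null-key handling and argument checks omitted):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class JoinSketch
        {
            // Roughly what Enumerable.Join does: build a hash-based lookup over
            // the inner sequence, then stream the outer sequence, probing the lookup.
            public static IEnumerable<TResult> HashJoin<TOuter, TInner, TKey, TResult>(
                IEnumerable<TOuter> outer, IEnumerable<TInner> inner,
                Func<TOuter, TKey> outerKey, Func<TInner, TKey> innerKey,
                Func<TOuter, TInner, TResult> resultSelector)
            {
                ILookup<TKey, TInner> lookup = inner.ToLookup(innerKey); // build phase
                foreach (TOuter o in outer)                              // probe phase
                    foreach (TInner i in lookup[outerKey(o)])
                        yield return resultSelector(o, i);
            }
        }

    Only a provider that translates the query to SQL (LINQ to SQL, Entity Framework) gets the benefit of the server's cost-based choice between nested loops, merge and hash joins.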

    Read the article

  • How is timezone handled in the lifecycle of an ADO.NET + SQL Server DateTime column?

    - by stimpy77
    Using SQL Server 2008. This is a really junior question and I could really use some elaborate information. The information on Google seems to dance around the topic quite a bit, and it would be nice if there was some detailed elaboration on how this works.

    Let's say I have a datetime column, and in ADO.NET I set it to DateTime.UtcNow.

    1) Does SQL Server store DateTime.UtcNow as given, or does it offset it again based on the time zone of the server it is installed on, and then return it offset-reversed when queried? I think I know that the answer is "of course it stores it without offsetting it again", but I want to be certain.

    So then I query for it and cast it from, say, an IDataReader column to a DateTime. As far as I know, System.DateTime has metadata that internally tracks whether it is a UTC DateTime or an offset DateTime, which may cause .ToLocalTime() and .ToUniversalTime() to behave differently depending on this state. So,

    2) Does this casted System.DateTime object already know that it is a UTC DateTime instance, or does it assume that it has been offset?

    Now let's say I use DateTime.Now rather than UtcNow when performing an ADO.NET INSERT or UPDATE.

    3) Does ADO.NET pass the offset to SQL Server, and does SQL Server store DateTime.Now with the offset metadata?

    So then I query for it and cast it from, say, an IDataReader column to a DateTime.

    4) Does this casted System.DateTime object already know that it is an offset time, or does it assume that it is UTC?
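    One detail worth illustrating: the datetime type stores no kind or offset at all, so whatever value you write round-trips unchanged, and ADO.NET materializes it with DateTimeKind.Unspecified; calling ToLocalTime() or ToUniversalTime() on such a value makes assumptions that may be wrong. A small sketch (table name, column and connection string are placeholders):

        using System;
        using System.Data.SqlClient;

        static DateTime ReadLoggedAtAsUtc(string connectionString, int id)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT LoggedAt FROM Log WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", id);
                conn.Open();
                DateTime loggedAt = (DateTime)cmd.ExecuteScalar();
                // loggedAt.Kind == DateTimeKind.Unspecified here, even if UtcNow was written.
                // SpecifyKind asserts "this is UTC" without shifting the value:
                return DateTime.SpecifyKind(loggedAt, DateTimeKind.Utc);
            }
        }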

    Read the article

  • Which toString() method should be used, performance-wise?

    - by Mrityunjay
    Hi, I am working on a project for performance enhancement. While a process is running, we tend to trace the current state of the DTOs and entities used. For this purpose, we have included a toString() method in all POJOs. I have now implemented toString() in three different ways:

        public String toString() {
            return "POJO :" + this.getClass().getName()
                 + " RollNo :" + this.rollNo
                 + " Name :" + this.name;
        }

        public String toString() {
            StringBuffer buff = new StringBuffer("POJO :").append(this.getClass().getName())
                .append(" RollNo :").append(this.rollNo)
                .append(" Name :").append(this.name);
            return buff.toString();
        }

        public String toString() {
            StringBuilder builder = new StringBuilder("POJO :").append(this.getClass().getName())
                .append(" RollNo :").append(this.rollNo)
                .append(" Name :").append(this.name);
            return builder.toString();
        }

    Can anyone please help me find out which one is best and should be used for enhancing performance?

    Read the article

  • C# debug vs release performance

    - by sagie
    Hi. I've encountered the following paragraph:

        "Debug vs Release setting in the IDE when you compile your code in Visual Studio makes almost no difference to performance… the generated code is almost the same. The C# compiler doesn't really do any optimisation. The C# compiler just spits out IL… and at the runtime it's the JITer that does all the optimisation. The JITer does have a Debug/Release mode and that makes a huge difference to performance. But that doesn't key off whether you run the Debug or Release configuration of your project; that keys off whether a debugger is attached."

    The source is here and the podcast is here. Can someone direct me to a Microsoft article that can actually prove this?

    Read the article

  • java statistics collection for performance evaluation

    - by user384706
    What is the most efficient way to collect and report performance statistics from an application? If I have an application that uses a series of network APIs, and I want to report statistics at runtime, e.g.:

        Method doA() was called 3 times and consumed on avg 500ms
        Method doB() was called 5 times and consumed on avg 1200ms

    then I thought of using a well-defined data structure (or collection) that each thread updates per remote call, which can then be used for the report. But I think that will make the performance worse, because of the time spent on statistics collection. Am I correct? How would I proceed if I used a background thread for this, so that the threads making the remote calls were unaware of the statistics gathering? Thanks

    Read the article

  • What are some good code optimization methods?

    - by esac
    I would like to understand good code optimization methods and methodology. How do I keep from doing premature optimization if I am already thinking about performance? How do I find the bottlenecks in my code? How do I make sure that over time my program does not become any slower? What are some common performance errors to avoid (e.g., I know it is bad in some languages to return from inside the catch portion of a try{} catch{} block)?

    Read the article

  • Android -- Object Creation/Memory Allocation vs. Performance

    - by borg17of20
    Hello all. This is probably an easy one. I have about 20 TextViews/ImageViews in my current project that I access like this:

        ((TextView)multiLayout.findViewById(R.id.GameBoard_Multi_Answer1_Text)).setText("");
        // or
        ((ImageView)multiLayout.findViewById(R.id.GameBoard_Multi_Answer1_Right)).setVisibility(View.INVISIBLE);

    My question is this: am I better off, from a performance standpoint, just assigning these views to member variables once? Further, am I losing some performance to the constant "search" process that goes on as part of the findViewById(...) method? (i.e., does findViewById(...) use some sort of hashtable/hashmap for look-ups, or does it implement an iterative search over the view hierarchy?) At present, my program never uses more than 2.5 MB of RAM, so will assigning 20 or so more object variables drastically affect this? I don't think so, but I figured I'd ask. Thanks.

    Read the article

  • javascript object access performance

    - by youdontmeanmuch
    In JavaScript, when you're getting a property of an object, is there a performance penalty to getting a reference to the whole object vs. getting only a property of that object? Also, keep in mind I'm not talking about DOM access; these are pure, simple JavaScript objects. For example, is there some kind of performance difference between the following two pieces of code?

    Assumed to be faster, but not sure:

        var length = some.object[key].length;
        if (length === condition) {
            // Do something that doesn't need anything inside of some.object[key]
        } else {
            var object = some.object[key];
            // Do something that requires stuff inside of some.object[key]
        }

    I think this would be slower, but I'm not sure if it matters:

        var object = some.object[key];
        if (object.length === condition) {
            // Do something that doesn't need anything inside of some.object[key]
        } else {
            // Do something that requires stuff inside of some.object[key]
        }

    Read the article

  • server performance: multiple external connections and performance

    - by websiteguru
    I am creating a PHP script that requires the server to make several cURL requests per run. I'll be running this script through cron every 3 minutes, and I'm looking to maximize the number of cURL requests I can make in a 24-hour period. What I am wondering is whether it would be better, from a performance standpoint, to get a dedicated server or several small shared hosting accounts. With the constraint being the number of external connections rather than system resources, I'm wondering which is the best approach.

    Read the article
