Search Results

Search found 13608 results on 545 pages for 'performance dashboard'.

Page 158 of 545

  • What is faster with PictureBox: many small redraws or a complete redraw?

    - by kornelijepetak
    I have a PictureBox (WinMobile 6 WinForm) on which I draw some images. There is a background image that does not change. However, the objects drawn on the PictureBox move while the application runs, so I need to refresh the background behind them. Since the items that are redrawn cover 50% to 80% of the surface, the question is which of the two is faster: 1) redraw only the parts of the background image that have changed (the previous and next locations of the moving objects), or 2) redraw the complete background and then draw all the objects in their current positions. The reason I ask is that I am not sure how much processor power a single drawImage operation needs and what the time-consuming factors are. I am aware that if the changes cover almost the whole background, it would be stupid to redraw portions of it, because by drawing the portions I will have drawn the complete picture anyway. But since sometimes only half of the image has changed (some objects remained in their old positions), it may be beneficial to redraw only those regions. But I need your insight on this... Thanks.

    Read the article

  • What are the internal limits of WCF?

    - by ligos
    WCF has lots of limits imposed on it to protect against DoS attacks and other developer brainlessness. What are these limits, and how can they be overridden? Specifically, if I were to try to send a large number of messages in a short period of time to a variety of different clients, what are the internal WCF limits that may cause messages to be sent slowly? PS: this question is a much shorter version of my previous question, which did not get much attention. EDIT: I have answered the original question; the issue was that my async calls were being turned into synchronous ones.

    Read the article

  • Every 3rd Insert Is Slow on MS SQL 2008

    - by Chris
    I have a function that writes 3 rows into an empty table like so:

        INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (1, 8, 1)
        INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (2, 8, 4)
        INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (3, 8, 3)

    For some reason only the third query takes a long time to execute, and with each insert it grows longer. [Profiler image] I have tried disabling all constraints on the table; same result. I just can't figure out why the first two run so fast and the last one takes so long. Any help would be greatly appreciated. Here are the statistics for a query run in SSMS. Query:

        ALTER TABLE [dbo].[yaf_ForumAccess] NOCHECK CONSTRAINT ALL
        INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (1, 9, 1)
        INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (2, 9, 4)
        INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (3, 9, 3)
        ALTER TABLE [dbo].[yaf_ForumAccess] CHECK CONSTRAINT ALL

    Stats: [Stats image]

    Read the article

  • Is it wise to use temporary tables?

    - by Industrial
    Hi guys, we have a MySQL database table for products. We are utilizing a cache layer to reduce database load, but we think it's a good idea to also minimize the amount of data that actually needs to be stored in the cache layer, to speed up the application further. All the products in the database that are visible to visitors have a price attached to them. The prices are stored in a different table, called prices. There are multiple price categories, depending on which discount level each visitor (customer) qualifies for. From time to time there are campaigns, which means that a special price for each product is available. The special prices are stored in a table called specials. Is it a bad idea to make a temp table that binds the tables together? It would only hold the necessary information and would of course be cached.

        productId | hasPrice | hasSpecial
        ----------|----------|-----------
                1 |        1 |          0
                2 |        1 |          1

    By doing so, it would be super easy to know whether a specific product really has a price, without having to iterate through the complete prices or specials table each time a product should be listed or presented. Are temp tables a common thing for web applications, or is it just bad design?

    Read the article

  • Delphi: How to diagnose sluggish UI?

    - by Ian Boyd
    I have a form, which you can pretend is laid out like Windows Explorer: a panel on the left, a splitter, and a client panel:

        +------------+#+-----------------------+
        |            |#|                       |
        |            |#|                       |
        |            |#|                       |
        |            |#|                       |
        |    Left    |#|        Client         |
        |            |#|                       |
        |            |#|                       |
        |            |#|                       |
        |            |#|                       |
        +------------+#+-----------------------+
                      ^
                      |
                      +----splitter

    The left and client area panels are each rich with controls. The problem is that using the splitter is very sluggish. I would expect that a modern 2 GHz computer can re-display the form as fast as a human can push the mouse around, but that's definitely not the case, and it takes about 200-300 ms before the form is fully re-adjusted. The form has about 100 visual controls on it, no code, and no custom controls. How do I go about tracing the cause of the sluggishness?

    Read the article

  • Java NIO Servlet to File

    - by Gandalf
    Is there a way (without buffering the whole InputStream) to take the HttpServletRequest from a Java Servlet and write it out to a file using all NIO? Is it even worth trying? Will it be any faster reading from a normal java.io stream and writing to a java.nio Channel, or do they both really need to be pure NIO to see a benefit? Thanks.
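
    A minimal sketch of the channel-based approach, assuming the javax.servlet API and Java 7+ NIO.2 (both assumptions, not stated in the question; the class and method names are made up for illustration): wrap the request's InputStream in a ReadableByteChannel and let FileChannel.transferFrom do the copying, so only a small buffer is ever held in memory.

    ```java
    import java.io.IOException;
    import java.nio.channels.Channels;
    import java.nio.channels.FileChannel;
    import java.nio.channels.ReadableByteChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    import javax.servlet.http.HttpServletRequest;

    public final class RequestToFile {

        // Streams the request body to 'target' without buffering the whole body in memory.
        // Returns the number of bytes written.
        public static long write(HttpServletRequest request, Path target) throws IOException {
            try (ReadableByteChannel in = Channels.newChannel(request.getInputStream());
                 FileChannel out = FileChannel.open(target,
                         StandardOpenOption.CREATE,
                         StandardOpenOption.WRITE,
                         StandardOpenOption.TRUNCATE_EXISTING)) {
                long position = 0;
                long transferred;
                // transferFrom may move fewer bytes than requested, so loop until EOF (returns 0).
                while ((transferred = out.transferFrom(in, position, 64 * 1024)) > 0) {
                    position += transferred;
                }
                return position;
            }
        }
    }
    ```

    Whether this beats a plain buffered stream copy depends on the container and OS: the servlet input is still a stream underneath, so any gain usually comes from avoiding the intermediate byte[] handling in user code rather than from NIO itself.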

    Read the article

  • Hitting a memory limit slows down the .Net application

    - by derdo
    We have a 64-bit C#/.NET 3.0 application that runs on a 64-bit Windows server. From time to time the app can use a large amount of memory, and that memory is available. In some instances the application stops allocating additional memory and slows down significantly (500+ times slower). When I check the memory in Task Manager, the amount of memory used barely changes. The application keeps running very slowly and never throws an out-of-memory exception. Any ideas? Let me know if more data is needed.

    Read the article

  • Hibernate design to speed up querying of large dataset

    - by paddydub
    I currently have the tables below, representing a bus network, mapped in Hibernate and accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster; I load all of the tables below into Lists to perform the route planner logic. I would appreciate any ideas on how to speed up my performance, or any suggestions for another way to approach the problem of handling a large set of data.

    Coordinate Connections table (INT, INT, INT, DOUBLE), containing 50,000 coordinate connections:

        ID  FROMCOORDID  TOCOORDID  DISTANCE
        1   1            2          0.383657
        2   1            17         0.173201
        3   1            63         0.258781
        4   1            64         0.013726
        5   1            65         0.459829
        6   1            95         0.458769

    Coordinate table (INT, DECIMAL, DECIMAL), containing 4,700 coordinates:

        ID  LAT        LNG
        0   59.352669  -7.264341
        1   59.352669  -7.264341
        2   59.350012  -7.260653
        3   59.337585  -7.189798
        4   59.339221  -7.193582
        5   59.341408  -7.205888

    Bus Stop table (INT, INT, INT), containing 15,000 stops:

        STOPID      ROUTEID  COORDINATEID
        1000100001  100      17
        1000100002  100      18
        1000100003  100      19
        1000100004  100      20
        1000100005  100      21
        1000100006  100      22
        1000100007  100      23

    This is how long it takes to load all the data from each table:

        stop.findAll = 148 ms, stops.size: 15670
        Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_, coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_
        coord.findAll = 51 ms, coordinates.size: 4704
        Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_, coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_ from COORDCONNECTIONS coordconne0_
        coordinateConnectionDao.findAll = 238 ms; coordConnectioninates.size: 48132

    Hibernate annotations:

        @Entity
        @Table(name = "STOPS")
        public class Stop implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "STOPID")
            private int stopID;

            @Column(name = "ROUTEID", nullable = false)
            private int routeID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "COORDINATEID", nullable = false)
            private Coordinate coordinate;
        }

        @Table(name = "COORDINATES")
        public class Coordinate {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private int CoordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Entity
        @Table(name = "COORDCONNECTIONS")
        public class CoordConnection {
            @Id
            @GeneratedValue
            @Column(name = "COORDCONNECTIONID")
            private int CoordinateID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "FROMCOORDID", nullable = false)
            private Coordinate fromCoordID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "TOCOORDID", nullable = false)
            private Coordinate toCoordID;

            @Column(name = "DISTANCE", nullable = false)
            private double distance;
        }
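
    If the planner repeatedly needs the outgoing connections of a given coordinate (an assumption, not stated in the question), one minimal sketch is to index the loaded connection list once instead of re-scanning the 50,000-element list on every step. The Connection stand-in class and its field names below are hypothetical; the real code would use the CoordConnection entity and its getters (not shown above). Assumes Java 8+ for computeIfAbsent.

    ```java
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical in-memory adjacency index built once after findAll(), so the
    // route planner can ask "which connections start at coordinate X?" in O(1)
    // instead of scanning the full list each time.
    public class ConnectionIndex {

        // Minimal stand-in for the CoordConnection entity, kept here so the sketch compiles on its own.
        public static class Connection {
            public final int fromCoordId;
            public final int toCoordId;
            public final double distance;

            public Connection(int fromCoordId, int toCoordId, double distance) {
                this.fromCoordId = fromCoordId;
                this.toCoordId = toCoordId;
                this.distance = distance;
            }
        }

        private final Map<Integer, List<Connection>> byFromCoord = new HashMap<>();

        public ConnectionIndex(List<Connection> connections) {
            // Single O(n) pass over the loaded list.
            for (Connection c : connections) {
                byFromCoord.computeIfAbsent(c.fromCoordId, k -> new ArrayList<>()).add(c);
            }
        }

        public List<Connection> neighboursOf(int coordinateId) {
            return byFromCoord.getOrDefault(coordinateId, new ArrayList<>());
        }
    }
    ```

    Building the map is a one-off cost after loading; every lookup afterwards is constant time rather than a scan.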

    Read the article

  • C# DynamicMethod prelink

    - by soywiz
    I'm the author of a PSP emulator written in C#. I'm generating lots of DynamicMethods using ILGenerator. I'm converting assembly code into an AST and then generating IL code and building the DynamicMethod. I'm doing this in another thread, so I can generate new methods while the program is executing others, so it runs smoothly. My problem is that native code generation is lazy: the machine code is generated when the function is called, not when the IL is generated. So it happens on the thread that is executing the program, and native code generation is pretty slow, as is the asm -> AST -> IL step. I have tried the Marshal.Prelink method, which is supposed to generate the machine code before the function is executed: Marshal.Prelink(MethodInfo); It works on Mono, but it doesn't work on MS .NET. Is there a way of prelinking a DynamicMethod on MS .NET? I thought of adding a boolean parameter to the function that, if set, exits the function immediately so no code is actually executed. I could "prelink" that way, but I think that's a nasty solution I want to avoid. Any idea?

    Read the article

  • What setting needs to be made to make .Net Automation responsive?

    - by Greg
    I have an app that watches for application windows being created on the desktop, using:

        class Unresponsive
        {
            private StructureChangedEventHandler m_UIAeventHandler = new StructureChangedEventHandler(OnStructureChanged);

            public Unresponsive()
            {
                Automation.AddStructureChangedEventHandler(AutomationElement.RootElement,
                    TreeScope.Children, m_UIAeventHandler);
            }

            private void OnStructureChanged(object sender, StructureChangedEventArgs e)
            {
                Debug.WriteLine("Change event");
            }
        }

    You can see the same issue using UISpy.exe by selecting the desktop and configuring the scope for children and just the structure-changed event. The problem I'm trying to resolve is that the events are not raised in a timely manner; there seems to be some grouping/delay which makes the app appear to be non-responsive. If you start a new app with one window and wait a second, you get the event; that seems all right. If you start the same app several times without delay (say by clicking on quickstart), it's not until all of the instances of the app get 'initialised' by the AutomationProxies that you get the notice for the first app (and in short order the other apps/windows). I've sat watching Task Manager as each instance of the app starts to grow as it is initialised, waiting until the last app is done, and then seeing the events all come in. Similarly, any time any apps are starting windows within a timeframe there seems to be some blocking. I can't see how to configure this timeframe, or how to get each structure-changed event sent on as soon as it happens. Also, this process of listening for structure-changed events seems to leak: just by listening there is a leak in native memory (visible in UISpy and my app).

    Read the article

  • SQL Server 2008 BULK INSERT causes more reads than writes. Why?

    - by sh1ng
    I have a huge table (a few billion rows) with a clustered index and two non-clustered indexes. A BULK INSERT operation produces 112,000 reads and only 383 writes (duration 19,948 ms). That is very confusing to me. Why do reads exceed writes, and how can I reduce them?

    UPDATE: the query is

        insert bulk DenormalizedPrice4 ([DP_ID] BigInt, [DP_CountryID] Int, [DP_OperatorID] SmallInt, [DP_OperatorPriceID] BigInt, [DP_SpoID] Int, [DP_TourTypeID] Int, [DP_CheckinDate] Date, [DP_CurrencyID] SmallInt, [DP_Cost] Decimal(9,2), [DP_FirstCityID] Int, [DP_FirstHotelID] Int, [DP_FirstBuildingID] Int, [DP_FirstHotelGlobalStarID] Int, [DP_FirstHotelGlobalMealID] Int, [DP_FirstHotelAccommodationTypeID] Int, [DP_FirstHotelRoomCategoryID] Int, [DP_FirstHotelRoomTypeID] Int, [DP_Days] TinyInt, [DP_Nights] TinyInt, [DP_ChildrenCount] TinyInt, [DP_AdultsCount] TinyInt, [DP_TariffID] Int, [DP_DepartureCityID] Int, [DP_DateCreated] SmallDateTime, [DP_DateDenormalized] SmallDateTime, [DP_IsHide] Bit, [DP_FirstHotelAccommodationID] Int) with (CHECK_CONSTRAINTS)

    There are no triggers and no foreign keys. The clustered index is on DP_ID, and the two non-unique indexes have fillfactor = 90%. And one more thing: the DB is stored on RAID 50 with a 256K stripe size.

    Read the article

  • Question about Cost in Oracle Explain Plan

    - by Will
    When Oracle is estimating the 'Cost' for certain queries, does it actually look at the amount of data (rows) in a table? For example, if I'm doing a full table scan of employees for name='Bob', does it estimate the cost by considering the number of existing rows, or is it always a set cost?

    Read the article

  • Suggested (simple) approach for drawing large numbers of visual elements in WPF?

    - by Ender
    I'm writing an interface that features a large (~50000 px wide) "canvas"-type area that is used to display a lot of data in a fairly novel way. This involves lots of lines, rectangles, and text. The user can scroll around to explore the entire canvas. At the moment I'm just using a standard Canvas panel with various Shapes placed on it. This is nice and easy to do: construct a shape, assign some coordinates, and attach it to the Canvas. Unfortunately, it's pretty slow (to construct the children, not to do the actual rendering). I've looked into some alternatives, but it's a bit intimidating. I don't need anything fancy; just the ability to efficiently construct and place objects in a coordinate plane. If all I get are lines, colored rectangles, and text, I'll be happy. Do I need Geometry instances inside GeometryGroups inside GeometryDrawings inside some Panel container? Note: I'd like to include text and graphics (i.e. colored rectangles) in the same space, if possible.

    Read the article

  • JVM tuning on Amazon EC2

    - by Shadowman
    We will be deploying a production application to Amazon EC2 very shortly. Initially, we'll just be using a "small" instance, but have plans to scale up not long afterwards. My question is, has any investigation been done on JVM tuning for the EC2 environment? Are there any specific changes that we should make to our JVM parameters to compensate for quirks/characteristics of Amazon EC2? Or, do the normal tuning methodologies apply here as they would in a physical environment? Our application will be deployed on Tomcat 6.x. It is built using JBoss Seam 2.2.x, and uses PostgreSQL 8.x as the backend database. Any advice you can give is greatly appreciated!
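
    As a generic starting point (not something taken from the question, and the class name is made up), it can help to confirm what the JVM on the instance actually sees before touching any flags, since the default heap sizing and GC thread counts are derived from the detected memory and processor count:

    ```java
    public class JvmEnvCheck {
        // Prints the figures the JVM derives its default heap and GC settings from.
        // Run it with the same flags as the Tomcat instance (e.g. via JAVA_OPTS) to compare.
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.out.println("Available processors: " + rt.availableProcessors());
            System.out.println("Max heap (bytes):     " + rt.maxMemory());
            System.out.println("Total heap (bytes):   " + rt.totalMemory());
            System.out.println("Free heap (bytes):    " + rt.freeMemory());
        }
    }
    ```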

    Read the article

  • Tricking the server to load files faster?

    - by Yongho
    If we have a website with multiple images and videos, I've read that it's best to serve them from other domains so that the browser can download several files simultaneously, rather than waiting for each file to be downloaded one by one. For example, if we have a website http://example.com/, we might consider serving videos from http://video.example.com/, images from http://images.example.com/, etc. Question: can we achieve the simultaneous downloading by tricking the browser into believing that the files are hosted there, or do they actually need to be at that location? We could, for example, pretend to serve video from http://video.example.com/ when in fact it's just a clever htaccess rewrite that really serves from http://example.com/video.php. In this case, the video is being served from the main domain, but because we refer to it as http://video.example.com/, the browser may think it's another domain and thus load files simultaneously, rather than one by one. Is this feasible?

    Read the article

  • Java iteration reading & parsing

    - by Patrick Lorio
    I have a log file that I am reading into a string:

        public static String Read(String path) throws IOException {
            StringBuilder sb = new StringBuilder();
            InputStream in = new BufferedInputStream(new FileInputStream(path));
            int r;
            while ((r = in.read()) != -1) {
                sb.append((char) r);
            }
            return sb.toString();
        }

    Then I have a parser that iterates over the entire string once:

        void Parse() {
            String con = Read("log.txt");
            for (int i = 0; i < con.length(); i++) {
                /* parsing action */
            }
        }

    This is a huge waste of CPU cycles. I loop over all the content in Read, then I loop over all the content in Parse. I could just place the /* parsing action */ inside the while loop in the Read method, which would be fine, but I don't want to copy the same code all over the place. How can I parse the file in one iteration over the contents and still have separate methods for parsing and reading? In C# I understand there is some sort of yield return thing, but I'm stuck with Java. What are my options in Java?
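
    One sketch of a single-pass approach (the names are hypothetical, and it assumes Java 8 for the lambda; a plain anonymous class works the same way on older JVMs) is to invert control: the reader owns the loop and hands each character to a small callback, so reading and parsing stay in separate methods but the file is only traversed once.

    ```java
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class SinglePassParser {

        // Receives characters as they are read; the parsing action lives here.
        public interface CharHandler {
            void handle(char c);
        }

        // Owns the read loop: one pass over the file, no intermediate String.
        public static void read(String path, CharHandler handler) throws IOException {
            try (BufferedReader in = new BufferedReader(new FileReader(path))) {
                int r;
                while ((r = in.read()) != -1) {
                    handler.handle((char) r);
                }
            }
        }

        public static void main(String[] args) throws IOException {
            // Example parsing action: count the lines in the log.
            int[] lines = {0};
            read("log.txt", c -> {
                if (c == '\n') {
                    lines[0]++;
                }
            });
            System.out.println("lines: " + lines[0]);
        }
    }
    ```

    This is roughly the Java equivalent of passing the "yield" consumer in by hand: the caller supplies the per-character logic, and the reader never builds the whole string in memory.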

    Read the article

  • Does the optimizer filter subqueries with outer where clauses

    - by Mongus Pong
    Take the following query:

        select * from
        (
            select a, b from c
            UNION
            select a, b from d
        )
        where a = 'mung'

    Will the optimizer generally work out that I am filtering a on the value 'mung' and consequently push that filter down into each of the queries in the subquery? Or will it run each query within the subquery union and return the results to the outer query for filtering (as the query would perhaps suggest)? In the latter case, the following query would perform better:

        select * from
        (
            select a, b from c where a = 'mung'
            UNION
            select a, b from d where a = 'mung'
        )

    Obviously the first query is best for maintenance, but does it sacrifice much performance for that? Which is best?

    Read the article

  • ASP.NET and SQL server with huge data sets

    - by Jake Petroules
    I am developing a web application in ASP.NET, and on one page I am using a ListView with paging. As a test I populated the table it draws from with 6 million rows. The table, and a schema-bound view based on it, have all the necessary indexes, and executing the query in SQL Server Management Studio with SELECT TOP 5 returned in under 1 second, as expected. But on the ASP.NET page, with the same query, it seems to be selecting all 6 million rows without any limit. Shouldn't the paging control limit the query to return only N rows rather than the entire data set? How can I use these ASP.NET controls to handle huge data sets with millions of records? Does SELECT [columns] FROM [tablename] quite literally mean that for the ListView, so that it doesn't actually inject a TOP <n> and does all the pagination at the application level rather than the database level?

    Read the article

  • Hanging Forms When I Drag And Move My Forms

    - by hosseinsinohe
    I am writing a Windows application in C#. I use a picture as the background of my form (MainForm), I use many pictures on the buttons on this form, and I also use some panels and labels with a transparent background color. My forms, panels, and buttons flicker. I solved that problem with the method described in this thread. But when other forms are opened over this form, my forms still hang while I drag and move them over it. How can I solve this problem so that I can drag and move my forms smoothly and quickly? Edit: My forms load data from an Access 2007 database file. I use DataSets, DataGridViews, and other components to load and show data in my forms.

    Read the article

  • What's the fastest way to check if a word from one string is in another string?

    - by Mike Trpcic
    I have a string of words; let's call them bad:

        bad = "foo bar baz"

    I can keep this string as a whitespace-separated string, or as a list:

        bad = bad.split(" ")

    If I have another string, like so:

        str = "This is my first foo string"

    What's the fastest way to check whether any word from the bad string is within my comparison string, and what's the fastest way to remove said word if it's found?

        # Find if a word is there
        bad.split(" ").each do |word|
          found = str.include?(word)
        end

        # Remove the word
        bad.split(" ").each do |word|
          str.gsub!(/#{word}/, "")
        end

    Read the article

  • Why does dojo parsing time depend on CSS and image availability?

    - by Kniganapolke
    I have been profiling the JavaScript on my page, which uses dojo widgets. I don't use explicit parsing; the parser runs on page load. What I noticed is that if I clear the browser cache before refreshing the page, dojo parsing takes much more time than if all the files are already cached. Note that we build all the required dojo modules into a layer (a single file), so we don't lazy-load any JS files. I wonder whether the dojo parsing process depends on images and CSS resources; as far as I know it only instantiates widgets and injects DOM nodes. Do you have any ideas why the dojo parser runs longer (2-3 times longer in my case) when the cache is cleared?

    Read the article

  • Queue-like data structure with fast search and insertion

    - by Max
    I need a data structure with the following properties: it contains integer numbers, with no duplicates, and after it reaches its maximum size the first (oldest) element is removed. So if the capacity is 3, this is how it would look when inserting sequential numbers: {}, {1}, {1, 2}, {1, 2, 3}, {2, 3, 4}, {3, 4, 5}, etc. Only two operations are needed: inserting a number into the container (INSERT) and checking whether a number is already in the container (EXISTS). The number of EXISTS operations is expected to be approximately twice the number of INSERT operations. I need these operations to be as fast as possible. What would be the fastest data structure, or combination of data structures, for this scenario?
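
    One minimal sketch of such a combination (the class and method names are made up for illustration) pairs a HashSet for O(1) EXISTS with an ArrayDeque that remembers insertion order for eviction:

    ```java
    import java.util.ArrayDeque;
    import java.util.HashSet;

    // Bounded FIFO set: keeps at most 'capacity' distinct integers,
    // evicting the oldest when full. insert() and exists() are O(1) on average.
    public class BoundedFifoSet {
        private final int capacity;
        private final ArrayDeque<Integer> order = new ArrayDeque<>(); // insertion order
        private final HashSet<Integer> members = new HashSet<>();     // fast membership test

        public BoundedFifoSet(int capacity) {
            this.capacity = capacity;
        }

        public boolean exists(int value) {
            return members.contains(value);
        }

        public void insert(int value) {
            if (members.contains(value)) {
                return; // no duplicates
            }
            if (order.size() == capacity) {
                members.remove(order.pollFirst()); // drop the oldest element
            }
            order.addLast(value);
            members.add(value);
        }
    }
    ```

    Both structures hold the same elements, so the memory cost is two references per stored number; in exchange, EXISTS never scans and eviction never searches.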

    Read the article

  • Fast read of certain bytes of multiple files in C/C++

    - by Alejandro Cámara
    I've been searching the web about this question, and although there are many similar questions about reading/writing in C/C++, I haven't found one about this specific task. I want to be able to read from multiple files (256x256 files) only sizeof(double) bytes, located at a certain position in each file. Right now my solution is, for each file:

    Open the file (read, binary mode):

        fstream fTest("current_file", ios_base::out | ios_base::binary);

    Seek the position I want to read:

        fTest.seekg(position*sizeof(test_value), ios_base::beg);

    Read the bytes:

        fTest.read((char *) &(output[i][j]), sizeof(test_value));

    And close the file:

        fTest.close();

    This takes about 350 ms to run inside a for { for { } } structure with 256x256 iterations (one for each file). Q: Do you think there is a better way to implement this operation? How would you do it?

    Read the article

  • PHP MySQL database user connection handling

    - by aviv
    What is the best way to handle MySQL database user connections in PHP? I have a web server running a PHP application on MySQL. I have created a database user for the application, dbuser1, with limited access: only for querying, inserting and updating tables, no ALTER TABLE. Now the question is: should I use the same dbuser1 across my scripts, so that if there are 100 people currently using my system, and hence 100 scripts running in parallel, they all connect to the database with the same dbuser1? Or should I create a few users and assign each script a different user, or load-balance between the db users?

    Read the article
