Search Results

Search found 97855 results on 3915 pages for 'code performance'.

Page 250/3915

  • How to achieve maximum callback throughput with WCF duplex channels

    - by Schneider
    I have set up a basic WCF client and server communicating via named pipes. It is a duplex contract with a callback. After the client "subscribes", a thread on the server just invokes the callback as quickly as possible. The problem is I am only getting a throughput of 1,000 callbacks per second, and the payload is only an integer! I need to get closer to 10,000. Everything is essentially running with default settings. What can I look at to improve things, or should I just drop WCF for some other technology? Thanks

    Read the article

  • IIS 6.0 Server Too Busy HTTP 503 Connection_Dropped DefaultAppPool

    - by Shiraz Bhaiji
    We have a site which is running on a Windows 2003 cluster with two 64-bit machines. The site needs to be able to cope with over 20,000 concurrent users. One of the things that the site does is allow the download of a 2MB file (which is cached in memory). We have low CPU and memory usage, and we also have surplus bandwidth. It appears that we are running out of connections due to the time it takes users to download the file (some users have slow internet connections). In the IIS log we get HTTP 503 errors. In the HTTPErr log we get mainly Connection_Dropped DefaultAppPool, with some Timer_EntityBody DefaultAppPool. Question is: how can we configure IIS to allow more connections? Or is there something that I am missing here? Thanks Shiraz

    Read the article

  • Why does calling setScaleX during pinch zoom gesture cause flicker?

    - by numan
    I am trying to create a zoomable container and I am targeting API 14+. In my onScale (I am using the ScaleGestureDetector to detect pinch zoom) I am doing something like this:

        public boolean onScale(ScaleGestureDetector detector) {
            float scaleFactor = detector.getScaleFactor();
            setScaleX(getScaleX() * scaleFactor);
            setScaleY(getScaleY() * scaleFactor);
            return true;
        }

    It works, but the zoom is not smooth. In fact it noticeably flickers. I also tried it with a hardware layer, thinking that the scaling would happen on the GPU once the texture was uploaded and thus would be super fast. But it made no difference - the zoom is not smooth and sometimes flickers weirdly. What am I doing wrong?
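
    A minimal sketch, not from the post above, of how this handler is sometimes restructured: accumulate the scale in a field, clamp it, and pin the pivot to the gesture focus instead of compounding getScaleX() with each per-event factor. The ZoomContainer name and the clamp limits are assumptions for the example, not a guaranteed fix.

        import android.content.Context;
        import android.view.MotionEvent;
        import android.view.ScaleGestureDetector;
        import android.widget.FrameLayout;

        public class ZoomContainer extends FrameLayout {

            private final ScaleGestureDetector detector;
            private float scale = 1f;  // accumulated zoom, kept across events

            public ZoomContainer(Context context) {
                super(context);
                detector = new ScaleGestureDetector(context,
                        new ScaleGestureDetector.SimpleOnScaleGestureListener() {
                            @Override
                            public boolean onScale(ScaleGestureDetector d) {
                                // accumulate and clamp instead of multiplying getScaleX() each time
                                scale = Math.max(0.5f, Math.min(scale * d.getScaleFactor(), 4f));
                                setPivotX(d.getFocusX());
                                setPivotY(d.getFocusY());
                                setScaleX(scale);
                                setScaleY(scale);
                                return true;
                            }
                        });
            }

            @Override
            public boolean onTouchEvent(MotionEvent event) {
                detector.onTouchEvent(event);
                return true;
            }
        }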

    Read the article

  • Very large database, very small portion most being retrieved in real time

    - by ming yeow
    Hi folks, I have an interesting database problem. I have a DB that is 150GB in size. My memory buffer is 8GB. Most of my data is rarely retrieved, or mainly retrieved by backend processes. I would very much prefer to keep it around because some features require it. Some of it (namely some tables, and some identifiable parts of certain tables) is used very often in a user-facing manner. How can I make sure that the latter is always kept in memory? (There is more than enough space for it.) More info: we are on Ruby on Rails, the database is MySQL, and our tables are stored using InnoDB. We are sharding the data across 2 partitions. Because we are sharding it, we store most of our data using JSON blobs, while indexing only the primary keys.

    Read the article

  • Making the Android emulator run faster

    - by hgpc
    The Android emulator is a bit sluggish. For some devices, like the Motorola Droid and the Nexus One, the app runs faster on the actual device than in the emulator. This is a problem when testing games and visual effects. How do you make the emulator run as fast as possible? I've been toying with its parameters but haven't found a configuration that shows a noticeable improvement yet.

    Read the article

  • PerformanceCounter.NextValue hangs on some machines.

    - by Poma
    I don't know why, but many computers hang on the following operation:

        void Init()
        {
            net1 = new List<PerformanceCounter>();
            net2 = new List<PerformanceCounter>();
            foreach (string instance in new PerformanceCounterCategory("Network Interface").GetInstanceNames())
            {
                net1.Add(new PerformanceCounter("Network Interface", "Bytes Received/sec", instance));
                net2.Add(new PerformanceCounter("Network Interface", "Bytes Sent/sec", instance));
            }
        }

        // Called once per second
        void UpdateStats()
        {
            Status.Text = "";
            for (int i = 0; i < net1.Count; i++)
                Status.Text += string.Format("{0}/{1} Kb/sec; ", net1[i].NextValue() / 1024, net2[i].NextValue() / 1024);
        }

    On some computers the program hangs completely on the first call of UpdateStats(); others experience 100% CPU load, but the program works (slowly). Other counters like new PerformanceCounter("Processor", "% Processor Time", "_Total") seem to work fine. Any suggestions why that is?

    Read the article

  • Explain difference in SQLIO numbers for RAID 0 versus RAID 5 over 6 disks

    - by markn
    When using the SQLIO benchmark tool on a 4-core Dell server with six 15k 450GB (fast) drives in RAID 0, we found the max throughput was 2 MB per second. But when the drives are configured as RAID 5, we get 30 MB per second. It seems that the RAID controller, a Dell PERC 5/i integrated controller, is maxing out the throughput per disk. With RAID 5, I expect to get a bump due to striping, but not a 15x difference. Like good programmers, we suspect the hardware, but we could be missing something. This is predominantly write traffic.

    Read the article

  • Locking Cache Key without Locking the entire Cache

    - by Gandalf
    I have servlets that cache user information rather than retrieving it from the user store on every request (shared Ehcache). The issue I have is that if a client is multi-threaded and makes more than one simultaneous request before it has been authenticated, then I get this in my log:

        Retrieving User [Bob]
        Retrieving User [Bob]
        Retrieving User [Bob]
        Returned [Bob] ...caching
        Returned [Bob] ...caching
        Returned [Bob] ...caching

    What I would want is for the first request to call the user service while the other two requests are blocked, and when the first request returns and caches the object, the other two requests go through:

        Retrieving User [Bob]
        blocking...
        blocking...
        Returned [Bob] ...caching
        [Bob] found in cache
        [Bob] found in cache

    I've thought about locking on the String "Bob" (because due to interning it's always the same object, right?). Would that work? And if so, how do I keep track of the keys that actually exist in the cache and build a locking mechanism around them that would then return the valid object once it's retrieved? Thanks.
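
    A minimal sketch of the per-key blocking idea using plain java.util.concurrent rather than locking on interned Strings; on a modern JDK, ConcurrentHashMap.computeIfAbsent already makes concurrent callers for the same key wait while the first one runs the loader. The class name, type parameters and the userService call in the usage note are assumptions, not from the post.

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;
        import java.util.function.Function;

        public class BlockingUserCache<K, V> {

            private final ConcurrentMap<K, V> cache = new ConcurrentHashMap<>();

            // The first caller for a key runs the loader; concurrent callers for the
            // same key block inside computeIfAbsent and then see the cached value.
            public V get(K key, Function<K, V> loader) {
                return cache.computeIfAbsent(key, loader);
            }
        }

    Usage would look something like cache.get("Bob", userService::retrieve), with the loader delegating to the user store (and, if desired, writing through to the shared Ehcache).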

    Read the article

  • WCF IInstanceProvider behaviour never calling Release()?

    - by Jon
    Hi, I'm implementing my own IInstanceProvider class to override the creation and release of new service instances, but the Release() method never gets called on my implementation. It is attached to the exposed endpoint using an IServiceBehavior. No matter how hard we hammer the service, the Release() method never gets called. We are running the service in per-call InstanceContextMode with a 50-instance max. The destructor of the service instance gets called, but not on all created instances, and this looks like garbage collection rather than WCF releasing and disposing. Any ideas why the Release() method never gets called? Thanks in advance, Jon

    Read the article

  • Fiscal year handling strategies in database design

    - by Sapphire
    By fiscal year I mean all the data in the database (in all tables) that occurred in a particular year. Let's say that we are building an application that allows the user to choose from different years. Which way of implementing this would you prefer, and why:

    1. Separate the fiscal year data into multiple separate database instances (for example, at every fiscal year start you could create a new instance with no data).
    2. Have everything in one database, but with logic that automatically separates records from different years.

    Personally, I have "seen" both methods, and I would choose the second. The only argument I can think of for the first method is having fewer records in case these are really big databases - but still, you could "archive" old records by rolling them up into summaries or in some other way. What do you think?

    Read the article

  • Visual C# GUI Designer - Recommended way of removing generated event-handler code & basic tutorial

    - by cusack
    Hi, I'm new to the Visual C# designer, so these are general and pretty basic questions on how to work with it. When we, for instance, add a label to a form and then double-click on it in the Visual C# designer (I'm using Microsoft Visual C# 2008 Express Edition), the following things happen: the designer generates code within Form1.Designer.cs (assume default names for simplicity) to add the label, and with the double-click it adds the event handler label1_Click to the label within Form1.Designer.cs, using the following code:

        this.label1.Click += new System.EventHandler(this.label1_Click);

    It also adds the event handler method to Form1.cs:

        private void label1_Click(object sender, EventArgs e)
        {
        }

    If I now remove the label, only the code within Form1.Designer.cs will be removed, but the label1_Click method will stay within Form1.cs even if it isn't used by anything else. However, if I use Reset within Properties/Events for the Click event from within the designer, even the label1_Click method in Form1.cs will be removed.

    1.) Isn't that a little inconsistent behavior?
    2.) What is the recommended way of removing such generated event-handler code?
    3.) What is the best "mental approach"/best practice for using the designer? I would approach it with a mental separation in which Form1.cs is 100% my responsibility and, on the other hand, I'm not touching the code in Form1.Designer.cs at all. Does that make sense or not? Since the designer sometimes removes something from Form1.cs, I'm not sure about this.
    4.) Can you recommend a simple designer tutorial that assumes no Visual C# designer knowledge but expects (and doesn't explain) C#? The following one is an example of what I would not want: it explains what a C# comment is, and I'd prefer text over video as well: http://msdn.microsoft.com/en-us/beginner/bb964631.aspx

    Read the article

  • How to speed up Perforce auto resolve?

    - by Sorin Sbarnea
    I would like to know how to speed up Perforce auto resolve when doing an integration (merge yours and theirs if no conflicts exist). Currently it is taking hours for ~5000 files when run through a proxy server, even though the proxy server has the files pre-cached. Also, the P4V interface doesn't give you any hint about the progress of the task; you do not know if it will finish in a second or next year.

    Read the article

  • Google Datastore w/ JDO: Access Times?

    - by Bosh
    I'm hitting what appears (to me) to be strange behavior when I pull data from the Google datastore over JDO. In particular, the query executes quickly (say 100 ms), but finding the size of the resulting List takes about one second! Indeed, whatever operation I try to perform on the resulting list takes about a second. Has anybody seen this behavior? Is it expected? Unusual? Any way around it?

        PersistenceManager pm = PMF.getPersistenceManager();
        Query q = pm.newQuery("select from " + Person.class.getName() + " order by key limit 1000");
        System.out.println("getting all at " + System.currentTimeMillis());
        mcs = (List<Med>) q.execute();
        System.out.println("got all at " + System.currentTimeMillis());
        int size = mcs.size();
        System.out.println("size was " + size + " at " + System.currentTimeMillis());

    Output:

        getting all at 1271549139441
        got all at 1271549139578
        size was 850 at 1271549141071

    -B
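
    A hedged sketch of one explanation that fits these timings: JDO providers typically hand back a lazily-loaded result collection, so execute() returns almost immediately and the actual datastore reads only happen when the list is first touched - here, at size(). Copying the result into an ArrayList makes that fetch explicit and easy to time. The helper below is illustrative only; its class and method names are made up.

        import java.util.ArrayList;
        import java.util.List;
        import javax.jdo.PersistenceManager;
        import javax.jdo.Query;

        public class QueryTiming {

            // Forces the lazy result collection to materialize and reports how long the
            // actual fetch took, separately from the (cheap) execute() call.
            @SuppressWarnings("unchecked")
            public static <T> List<T> fetchAll(PersistenceManager pm, String jdoql) {
                Query q = pm.newQuery(jdoql);
                long t0 = System.currentTimeMillis();
                List<T> lazy = (List<T>) q.execute();
                long t1 = System.currentTimeMillis();
                List<T> materialized = new ArrayList<>(lazy);  // triggers the datastore reads
                long t2 = System.currentTimeMillis();
                System.out.println("execute: " + (t1 - t0) + " ms, materialize: " + (t2 - t1) + " ms");
                return materialized;
            }
        }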

    Read the article

  • Really slow obtaining font metrics.

    - by Artur
    So the problem I have is that I start my application by displaying a simple menu. To size and align the text correctly I need to obtain font metrics, and I cannot find a way to do it quickly. I tested my program and it looks like whatever method I use to obtain font metrics, the first call takes over 500 milliseconds!? Because of that, the time it takes to start up my application is much longer than necessary. I don't know if it is platform specific or not, but just in case, I'm using Mac OS 10.6.2 on a MacBook Pro (hardware isn't an issue here). If you know a way of obtaining font metrics more quickly, please help. I tried these 3 methods for obtaining the font metrics and the first call is always very slow, no matter which method I choose.

        import java.awt.Font;
        import java.awt.FontMetrics;
        import java.awt.Graphics;
        import java.awt.Graphics2D;
        import java.awt.font.FontRenderContext;
        import java.awt.font.LineMetrics;
        import javax.swing.JFrame;

        public class FontMetricsTest extends JFrame {

            public FontMetricsTest() {
                setVisible(true);
                setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            }

            @Override
            public void paint(Graphics g) {
                Graphics2D g2 = (Graphics2D) g;
                Font font = new Font("Dialog", Font.BOLD, 10);
                long start = System.currentTimeMillis();
                FontMetrics fontMetrics = g2.getFontMetrics(font);
                // LineMetrics fontMetrics1 =
                //     font.getLineMetrics("X", new FontRenderContext(null, false, false));
                // FontMetrics fontMetrics2 = g.getFontMetrics();
                long end = System.currentTimeMillis();
                System.out.println(end - start);
                g2.setFont(font);
            }

            public static void main(String[] args) {
                new FontMetricsTest();
            }
        }
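
    A hedged workaround sketch, not from the post: if the ~500 ms is one-time font/graphics initialisation, it can be paid on a background thread during startup so that the first real getFontMetrics() call for the menu is already warm. The class and thread names are invented for the example.

        import java.awt.Font;
        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;

        public class FontWarmUp {

            // Touch the font machinery once, off the EDT, so later calls are fast.
            public static void warmUpAsync() {
                Thread t = new Thread(() -> {
                    BufferedImage scratch = new BufferedImage(1, 1, BufferedImage.TYPE_INT_ARGB);
                    Graphics2D g = scratch.createGraphics();
                    g.getFontMetrics(new Font("Dialog", Font.BOLD, 10));  // forces initialisation
                    g.dispose();
                }, "font-warmup");
                t.setDaemon(true);
                t.start();
            }
        }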

    Read the article

  • Illegal character when trying to compile Java code

    - by muckdog12
    I have a program that allows a user to type Java code into a rich text box and then compile it using the Java compiler. Whenever I try to compile the code that I have written, I get an error saying that I have an illegal character at the beginning of my code that is not there. This is the error the compiler gives me:

        C:\Users\Travis Michael>"\Program Files\Java\jdk1.6.0_17\bin\javac" Test.java
        Test.java:1: illegal character: \187
        n++public class Test
        ^
        Test.java:1: illegal character: \191
        n++public class Test
        ^
        2 errors
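
    A hedged guess at the cause, with an illustrative sketch: \187 and \191 are the trailing bytes of the UTF-8 byte-order mark (0xEF 0xBB 0xBF), so the source handed to javac most likely starts with a BOM even though nothing is visible in the rich text box. Stripping it before writing the .java file is one way around that; the class name below is made up.

        public final class BomStripper {

            private static final char UTF8_BOM = '\uFEFF';

            // Removes a leading byte-order mark from source text before it is
            // written out to the .java file that javac will compile.
            public static String stripBom(String source) {
                if (!source.isEmpty() && source.charAt(0) == UTF8_BOM) {
                    return source.substring(1);
                }
                return source;
            }

            private BomStripper() { }
        }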

    Read the article

  • Which metric(s) show the difference between object-oriented and procedural code?

    - by twieger
    Which metric(s) could help indicate that I have procedural code instead of object-oriented code? I would like a set of simple metrics that indicate, with high probability, that the analyzed code contains procedural transaction scripts and an anemic domain model instead of following sound object-oriented design principles. I would be happy with any set of useful metrics and tools for measuring them. Thanks, Thomas!

    Read the article

  • MSSQL EXPRESS 2008 Stored Procedure execution time spikes periodically

    - by user156241
    I have a big stored procedure on an MSSQL 2008 Express SP2 database that gets run about every 200 ms. Normal execution time is about 50 ms. What I am seeing is large inconsistencies in this run time. It will execute for a while, say 50-100 times at 40-60 ms, which is expected; then, seemingly at random, the same stored procedure will take way longer, say 900 ms or 1.5 seconds, to run. Sometimes more than one call of the same procedure in a row will take longer too. It appears that something is causing SQL Server to slow down dramatically every minute or so, but I can't figure out what. There is no timing pattern between the occurrences. I have the same setup on two different computers, one of which is a clean XP Pro install with no virus checking and nothing installed except SQL Server. Also, the recovery option for all the databases is set to "Simple".

    Read the article

  • Optimizations employed by ORMs

    - by Kartoch
    I'm teaching JEE, especially JPA, Spring and Spring MVC. As I don't have much experience with large projects, it is difficult to know what to present to students about optimisation of ORMs. At present, I cover some classic optimisation tricks:

    - prepared statements (most ORMs implicitly use them by default)
    - first- and second-level caches
    - "write first, optimize later"
    - the option to switch off the ORM and send SQL commands directly to the database for very frequent, specialized and costly requests

    Are there any other points the community sees for optimizing ORM usage? I'm especially interested in DAO patterns...
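
    As a hedged JPA illustration of the second-level cache trick above, plus one frequently cited addition (fetch joins to avoid N+1 queries), here is a small sketch; the entity and DAO names are invented, and each class would live in its own file in a real project.

        import java.util.List;
        import javax.persistence.Cacheable;
        import javax.persistence.Entity;
        import javax.persistence.EntityManager;
        import javax.persistence.Id;
        import javax.persistence.OneToMany;

        @Entity
        @Cacheable  // opt this entity into the provider's second-level cache
        class Customer {
            @Id Long id;
            String name;
            @OneToMany List<Invoice> invoices;
        }

        @Entity
        class Invoice {
            @Id Long id;
            String item;
        }

        class CustomerDao {

            private final EntityManager em;

            CustomerDao(EntityManager em) {
                this.em = em;
            }

            // Fetch join: customers and their invoices in one statement,
            // instead of one extra query per customer (the N+1 problem).
            List<Customer> findAllWithInvoices() {
                return em.createQuery(
                        "select distinct c from Customer c left join fetch c.invoices",
                        Customer.class).getResultList();
            }
        }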

    Read the article

  • Why aren't we programming on the GPU???

    - by Chris
    So I finally took the time to learn CUDA and get it installed and configured on my computer, and I have to say, I'm quite impressed! Here's how it performs when rendering the Mandelbrot set at 1280 x 678 pixels on my home PC with a Q6600 and a GeForce 8800GTS (max of 1000 iterations):

        Maxing out all 4 CPU cores with OpenMP: 2.23 fps
        Running the same algorithm on my GPU: 104.7 fps

    And here's how fast I got it to render the whole set at 8192 x 8192 with a max of 1000 iterations:

        Serial implementation on my home PC: 81.2 seconds
        All 4 CPU cores on my home PC (OpenMP): 24.5 seconds
        32 processors on my school's supercomputer (MPI with master-worker): 1.92 seconds
        My home GPU (CUDA): 0.310 seconds
        4 GPUs on my school's supercomputer (CUDA with static domain decomposition): 0.0547 seconds

    So here's my question - if we can get such huge speedups by programming the GPU instead of the CPU, why is nobody doing it? I can think of so many things we could speed up like this, and yet I don't know of many commercial apps that are actually doing it. Also, what other kinds of speedups have you seen by offloading your computations to the GPU?

    Read the article

  • Slow query with unexpected index scan

    - by zerkms
    Hello. I have this query:

        SELECT *
        FROM sample
        INNER JOIN test ON sample.sample_number = test.sample_number
        INNER JOIN result ON test.test_number = result.test_number
        WHERE sampled_date BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00'

    The biggest table here is RESULT, which contains 11.1M records; the other two tables hold about 1M each. This query runs slowly (more than 10 minutes) and returns about 800 records. The execution plan shows a clustered index scan (over its primary key, result.result_number, which doesn't actually take part in the query) over all 11M records. RESULT.TEST_NUMBER is a clustered primary key. If I change 2010-03-17 09:00 to 2010-03-17 10:00, I get about 40 records, it executes in 300 ms, and the plan shows an index seek (over the result.test_number index). If I replace * in the SELECT clause with result.test_number (covered by the index), then everything becomes fast in the first case too. This points to HDD IO issues, but doesn't explain the change of plan. So, any ideas?

    UPDATE: sampled_date is in table sample and covered by an index. The other fields in this query, test.sample_number and result.test_number, are covered by indexes too.

    UPDATE 2: Obviously SQL Server for some reason doesn't want to use the index. I did a small experiment: I removed the INNER JOIN with result, selected all the test.test_number values, and after that did

        SELECT * FROM RESULT WHERE TEST_NUMBER IN (...)

    This, of course, works fast. But I cannot see what the difference is, or why the query optimizer chooses such an inappropriate way to select the data in the first case.

    Read the article

  • Alternative for my preg_replace code

    - by Ben Sinclair
    Here is my code... basically it finds any page-NUMBER- within a variable and then replaces it with a page URL from an array:

        $content_text = preg_replace("/page-(\d+)-/sie", '$pageurl[$1]', $content_text);

    It works a treat until the NUMBER it finds isn't in the array, at which point it returns an error... Is there another efficient way I could do this instead? I liked my code above because it was simple, but I may have to use more complex code...

    Read the article

  • Was Joel right about XML being slow?

    - by Will
    A long time ago, Joel explained how various everyday coding things were slow, and how this led to XML as a data store being slow: http://www.joelonsoftware.com/articles/fog0000000319.html Are those everyday coding things - strcat and malloc - still slow in a std::string and dlmalloc world? What else has changed in modern processors and mainstream frameworks? And is XML still slow? You can't find an RDBMS that doesn't claim some kind of native XML support these days; haven't they made it faster yet - with a single pass to index it, for example?

    Read the article

  • How to speed up WPF programs?

    - by Sam
    I love programming with and for Windows Presentation Foundation. Mostly I write browser-like apps using WPF and XAML. But what really annoys me is the slowness of WPF. A simple page with only a few controls loads fast enough, but as soon as a page is a teeny-weeny bit more complex - containing a lot of data entry fields, one or two tab controls, and so on - it gets painful. Loading such a page can take more than one second. Seconds, indeed; especially on not-so-fast computers (read: the customers' computers) it can take ages. The same goes for changing values on the page. Everything about the WPF UI is somehow sluggish. This is so mean! They give me this beautiful framework, but make it so excruciatingly slow that I have to apologize to our customers all the time! My question: how do you speed up WPF? How do you profile bottlenecks? How do you deal with the slowness? Since this seems to be a universal problem with WPF, I'm looking for general advice, useful in many situations and for many problems. Some other related questions: "What tools do you use for WPF development" and "Tools to develop WPF or Silverlight applications".

    Read the article

  • WPF slow to start on x64 in .NET Framework 4.0

    - by Robert Fraser
    I've noticed that if I build my WPF application for Any CPU/x64, it takes MUCH longer to start (on the order of about 20 seconds) or to load new controls than it does if started as x86 (in both release and debug modes, inside or outside of VS). This occurs with even the simplest WPF apps. The problem is discussed in this MSDN thread, but no answer was provided there. It happens only with .NET 4.0 - in 3.5 SP1, x64 was just as fast as x86. Interestingly, Microsoft seems to know about this problem, since the default for a new WPF project in VS2010 is x86. Is this a real bug or am I just doing it wrong? EDIT: Possibly related to this: http://stackoverflow.com/questions/2788215/slow-databinding-setup-time-in-c-net-4-0. I'm using data binding heavily.

    Read the article
