Search Results

Search found 13249 results on 530 pages for 'performance tuning'.

Page 238/530

  • Refactor/rewrite code or continue?

    - by Dan
    I just completed a complex piece of code. It works to spec and meets the performance requirements, but I feel a bit anxious about it and am considering rewriting and/or refactoring it. Should I do this (spending time that could otherwise be spent on features that users will actually notice)? The reasons I feel anxious about the code are:
      - The class hierarchy is complex and not obvious.
      - Some classes don't have a well-defined purpose (they do a number of unrelated things).
      - Some classes use others' internals (they're declared as friend classes) to bypass the layers of abstraction for performance, but I feel they break encapsulation by doing this.
      - Some classes leak implementation details (e.g. I changed a map to a hash map earlier and found myself having to modify code in other source files to make the change work).
      - My memory management/pooling system is kinda clunky and less than transparent.
    These look like excellent reasons to refactor and clean up the code, aiding future maintenance and extension, but doing so could be quite time consuming. Also, I'll never be perfectly happy with any code I write anyway... So, what does Stack Overflow think? Clean code, or work on features?
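
    For context on the "classes leak implementation details" point above, a minimal C++ sketch of hiding the container choice behind a class interface (the pimpl idiom), so that switching a std::map to a std::unordered_map touches only one source file. The Registry class and its members are made up for illustration:

        // Registry.h -- clients see only the interface, never the container type.
        #include <memory>
        #include <string>

        class Registry {
        public:
            Registry();
            ~Registry();
            void add(const std::string& key, int value);
            int  find(const std::string& key) const;   // returns -1 if the key is missing
        private:
            struct Impl;                  // defined only in Registry.cpp
            std::unique_ptr<Impl> impl_;  // pimpl: the container choice stays private
        };

        // Registry.cpp -- swapping std::map for std::unordered_map changes only this file.
        #include <unordered_map>

        struct Registry::Impl {
            std::unordered_map<std::string, int> data;
        };

        Registry::Registry() : impl_(new Impl) {}
        Registry::~Registry() = default;
        void Registry::add(const std::string& key, int value) { impl_->data[key] = value; }
        int  Registry::find(const std::string& key) const {
            auto it = impl_->data.find(key);
            return it == impl_->data.end() ? -1 : it->second;
        }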

    Read the article

  • Virtual Function Implementation

    - by Gokul
    Hi, I keep hearing this statement: switch..case is evil for code maintenance but provides better performance (since the compiler can inline things, etc.), while virtual functions are very good for code maintenance but incur a performance penalty of two pointer indirections. Say I have a base class with two subclasses (X and Y) and one virtual function, so there will be two virtual tables. The object has a pointer, based on which it will choose a virtual table. So for the compiler, it is more like:

        switch( object's function ptr )
        {
            case 0x....: X->call(); break;
            case 0x....: Y->call(); break;
        };

    So why should a virtual function cost more, if it can be implemented this way? The compiler can do the same inlining and other optimizations here. Or explain to me why it was decided not to implement virtual function dispatch in this way. Thanks, Gokul.
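
    For context, a small C++ sketch of the two dispatch styles the question compares: dynamic dispatch through a vtable versus an explicit switch over a type tag. The Shape/Circle/Square classes and the Kind enum are made up for illustration; real compilers can devirtualize or inline either form when the concrete type is known at the call site:

        #include <cstdio>

        struct Shape {                       // virtual dispatch: the call goes through the vtable
            virtual void draw() const = 0;
            virtual ~Shape() = default;
        };
        struct Circle : Shape { void draw() const override { std::puts("circle"); } };
        struct Square : Shape { void draw() const override { std::puts("square"); } };

        enum class Kind { Circle, Square };  // switch dispatch: the type tag is checked by hand
        void draw_by_tag(Kind k) {
            switch (k) {
                case Kind::Circle: std::puts("circle"); break;
                case Kind::Square: std::puts("square"); break;
            }
        }

        int main() {
            Circle c; Square s;
            const Shape* shapes[] = { &c, &s };
            for (const Shape* p : shapes) p->draw();   // load vtable pointer, load slot, indirect call
            draw_by_tag(Kind::Circle);                 // compare tag, direct jump
            draw_by_tag(Kind::Square);
        }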

    Read the article

  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem concerns the writing speed of the computers (10 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write the data on them in order to speed up my data handling.
    As far as the PostgreSQL table is concerned, the overall record count is 140 million and I have 5 primary/foreign key referring tables. I am not using joins, as they do not scale, so for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts into each of the tables and their corresponding arrays. The queries are really simple:

        select * from x.train where tr_id=1   (primary key & indexed)
        select q_t from x.qt where q_id=2     (non-primary key but indexed)

    (and similarly for the other five queries). Each computer writes two HDF5 files, so the total count comes to around 20 files.
    Some calculations and statistics:
      - Total number of records: 143,700,000
      - Total number of records per file: 143,700,000 / 20 = 7,185,000
      - Total number of records in each file: 7,185,000 * 5 = 35,925,000
    Current PostgreSQL database config: my current machine is 8 GB RAM with an i7 2nd-generation processor. I made the following changes to the PostgreSQL configuration file: shared_buffers: 2 GB, effective_cache_size: 4 GB.
    Note on current performance: I have run it for about ten hours and the performance is as follows: the total number of records written for each file is about 621,000 * 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed it will take about 11 days, which is too long for my experiments. Please suggest how I can improve this.
    Questions:
      1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable?
      2. If I change my PostgreSQL configuration file and increase the RAM, will it enhance my process?
      3. Should I use multithreading? In that case, any links or pointers would be of great help.
    Thanks, Sree Aurovindh V

    Read the article

  • How to figure the read/write ratio in Sql Server?

    - by Bill Paetzke
    How can I query the read/write ratio in SQL Server 2005? Are there any caveats I should be aware of? Perhaps it can be found in a DMV query, a standard report, a custom report (i.e. the Performance Dashboard), or by examining a SQL Profiler trace. I'm not sure exactly. Why do I care? I'm taking time to improve the performance of my web app's data layer. It deals with millions of records and thousands of users. One of the points I'm examining is database concurrency. SQL Server uses pessimistic concurrency by default, which is good for a write-heavy app. If my app is read-heavy, I might switch it to optimistic concurrency (read committed snapshot isolation) like Jeff Atwood did with Stack Overflow.

    Read the article

  • Managing EntityConnection lifetime

    - by kervin
    There have been many questions on managing EntityContext lifetime, e.g. http://stackoverflow.com/questions/813457/instantiating-a-context-in-linq-to-entities. I've come to the conclusion that the entity context should be considered a unit of work and therefore not reused. Great. But while doing some research into speeding up my database access, I ran into this blog post: Improving Entity Framework Performance. The post argues that EF's poor performance compared to other frameworks is often due to the EntityConnection object being created each time a new EntityContext object is needed. To test this, I manually created a static EntityConnection in Global.asax.cs Application_Start(). I then converted all my context using statements to:

        using (MyObjContext currContext = new MyObjContext(globalStaticEFConnection))
        {
            ....
        }

    This seems to have sped things up a bit, without any errors so far as far as I can tell. But is it safe? Does using an application-wide static EntityConnection introduce race conditions? Best regards, Kervin

    Read the article

  • Entity Framework VS LINQ to SQL VS ADO.NET with stored procedures?

    - by BritishDeveloper
    How would you rate each of them in terms of:
      - Performance
      - Speed of development
      - Neat, intuitive, maintainable code
      - Flexibility
      - Overall
    I like my SQL and so have always been a die-hard fan of ADO.NET and stored procedures, but I recently had a play with LINQ to SQL and was blown away by how quickly I was writing out my data access layer. I have decided to spend some time really understanding either LINQ to SQL or EF... or neither? I just want to check that there isn't a great flaw in any of these technologies that would render my research time useless, e.g. the performance is terrible, or it's cool for simple apps but can only take you so far.

    Read the article

  • How does c# type safety affect the garbage collection?

    - by Indeera
    I'm dealing with code that handles large buffers (> 100 MB), and manipulation of these is done in unsafe blocks. I'd like to refactor this to avoid the unsafe code. I'm wondering about the likely memory and performance gains (positive/negative/neutral) before I embark on that. My assertion is that if the compiler can verify types, it could possibly generate better code, and that could also mean good GC performance. Is this a valid assertion? What is your experience? Thanks.

    Read the article

  • WPF DataGrid Vs Windows Forms DataGridView

    - by Mrk Mnl
    I have experience in WPF and Windows Forms, but have only used the Windows Forms DataGridView and not the WPF DataGrid (which was only included in .NET 4, or could be added to .NET 3.5 from CodePlex, I understand). I am about to develop an app that uses one of these controls heavily for large amounts of data, and I have read that performance is an issue with the WPF DataGrid, so I may stick with the Windows Forms DataGridView. Is this the case? I do not want to use a 3rd-party control. Does the Windows Forms DataGridView offer significantly better performance than the WPF DataGrid for large amounts of data? If I were to use WPF I would prefer to target .NET 3.5 SP1, unless the DataGrid in .NET 4 is significantly better. Also, I want to use ADO with DataTables, which I feel is better suited to Windows Forms.

    Read the article

  • [SWT/RCP] Alpha blending is slow on Linux

    - by elgcom
    We are developing an SWT/RCP (Eclipse 3.5) application on both Windows and Linux (on identical hardware). The application is a GIS app which shows several layered maps (PNG images) rendered with alpha blending: org.eclipse.draw2d.Graphics.setAlpha(...); org.eclipse.draw2d.Graphics.drawImage(...); On Windows the performance is pretty good, but on Linux it is very poor. Is that a Linux (GTK/KDE) problem, or is there any workaround to improve the performance on Linux?

    Read the article

  • Is AMQP suitable as both an intra and inter-machine software bus?

    - by Bwooce
    I'm trying to get my head around AMQP. It looks great for inter-machine (cluster, LAN, WAN) communication between applications, but I'm not sure if it is suitable (in architectural and current-implementation terms) for use as a software bus within one machine. Would it be worth pulling out a current high-performance message-passing framework and replacing it with AMQP, or is this falling into the same trap as RPC by blurring the distinction between local and non-local communication? I'm also wary of the performance impact of using a WAN technology for intra-machine communication, although this may be more of an implementation concern than an architectural one. War stories would be appreciated.

    Read the article

  • How to reduce java concurrent mode failure and excessive gc

    - by jimx
    In Java, a concurrent mode failure means that the concurrent collector failed to free up enough memory in the tenured and permanent generations and has to give up and let the full stop-the-world GC kick in. The end result can be very expensive. I understand this concept but have never had a good, comprehensive understanding of A) what can cause a concurrent mode failure, and B) what the solution is. This unclearness leads me to write and debug code without many hints in mind, and I often have to shuffle performance flags around from Foo to Bar without any particular reason, just to try them. I'd like to learn from the developers here what your experience is. If you have encountered such performance issues before, what was the cause and how did you address it? If you have coding recommendations, please don't be too general. Thanks!

    Read the article

  • from ggplot2 to OOo workflow?

    - by Andreas
    This is not really a programming question, but I'll try here nonetheless. I once used LaTeX for my reports, but the people I work with need to make small edits and do not have LaTeX skillz. OpenOffice is then the way to go. But saving ggplot images with dpi = 100 makes for really ugly graphs, and dpi = 600 is a no-go (e.g. huge legend). So what to do? I currently save (still via ggsave) to EPS, which OpenOffice can import, but performance is not good at all. Googling, I found a bug report for the poor EPS performance in OOo, and also talk about a non-implemented SVG feature, but none of that helps me right now. If you work with ggplot2 and OOo, what do you do? I have been unsuccessful with PDF conversion for some reason.

    Read the article

  • CgiModule and FastCgiModule in IIS7

    - by hari
    My web server is IIS7 running on Windows 2008 Web edition. There are nearly 40 pre-installed modules listed when I check "Modules", including CgiModule and FastCgiModule. All the websites installed on this server run purely on ASP.NET. Can I remove these two modules to improve performance? In the same way, my application uses Forms Authentication only; in that case, can I delete WindowsAuthentication and WindowsAuthenticationModule? Also, please suggest any other modules that could be deleted to improve performance.

    Read the article

  • Compression Array of Bytes

    - by Pedro Magalhaes
    Hi, my problem is: I want to store an array of bytes in a compressed file, and then I want to read it back with good performance. So I create an array of bytes, pass it to a ZLIB algorithm and store it in the file. To my surprise, the algorithm doesn't compress well, probably because the array is a random sample. Using this approach, reading would be very easy: just copy the stream to memory, decompress it and copy it into an array of bytes. But I need to compress the file. Do I have to use an algorithm like RLE to compress the byte array? I think I could store the byte array as a string and then compress it, but I think I would get poor performance when reading the data back. Sorry for my poor English. Thanks
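
    For what it's worth, the behaviour described above is easy to reproduce with zlib directly: high-entropy (random) input is essentially incompressible, while repetitive input shrinks dramatically, and no string conversion changes that. A small C++ sketch against zlib's compress2()/uncompress() (link with -lz); the buffer sizes and contents are arbitrary:

        #include <zlib.h>
        #include <cstdio>
        #include <cstdlib>
        #include <vector>

        // Compress 'src' with zlib and return the compressed bytes.
        static std::vector<Bytef> deflate_buffer(const std::vector<Bytef>& src) {
            uLongf destLen = compressBound(src.size());      // worst-case output size
            std::vector<Bytef> dest(destLen);
            if (compress2(dest.data(), &destLen, src.data(), src.size(), Z_BEST_COMPRESSION) != Z_OK) {
                std::fprintf(stderr, "compress2 failed\n");
                std::exit(1);
            }
            dest.resize(destLen);
            return dest;
        }

        int main() {
            std::vector<Bytef> random_bytes(1 << 20), repetitive(1 << 20, 0x42);
            for (Bytef& b : random_bytes) b = static_cast<Bytef>(std::rand());  // high entropy

            std::printf("random:     %zu -> %zu bytes\n", random_bytes.size(), deflate_buffer(random_bytes).size());
            std::printf("repetitive: %zu -> %zu bytes\n", repetitive.size(), deflate_buffer(repetitive).size());

            // Round-trip check: uncompress() restores the original bytes.
            std::vector<Bytef> packed = deflate_buffer(repetitive);
            std::vector<Bytef> restored(repetitive.size());
            uLongf restoredLen = restored.size();
            if (uncompress(restored.data(), &restoredLen, packed.data(), packed.size()) != Z_OK)
                std::fprintf(stderr, "uncompress failed\n");
            return 0;
        }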

    Read the article

  • How do we achieve "substring-match" under O(n) time?

    - by Pacerier
    I have an assignment that requires reading a huge file of random inputs, for example:

        Adana
        Izmir Adnan Menderes Apt
        Addis Ababa
        Aden
        ADIYAMAN
        ALDAN
        Amman Marka Intl Airport
        Adak Island
        Adelaide Airport
        ANURADHAPURA
        Kodiak Apt
        DALLAS/ADDISON
        Ardabil
        ANDREWS AFB
        etc..

    If I specify a search term, the program is supposed to find the lines in which that substring occurs. For example, if the search term is "uradha", the program is supposed to show ANURADHAPURA. If the search term is "airport", the program is supposed to show Amman Marka Intl Airport and Adelaide Airport. A quote from the assignment specs: "You are to program this application taking efficiency into account as though large amounts of data and processing is involved." I could easily achieve this functionality using a loop, but the performance would be O(n). I was thinking of using a trie, but it seems to only work if the substring starts at index 0. I was wondering what solutions there are that give performance better than O(n)?
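
    One standard way to beat a per-query linear scan is to pay for an index up front and then answer each query by binary search. A rough C++ sketch of a naive suffix-array-style index (every suffix of every line, sorted); the SubstringIndex name is made up, the naive build that compares whole suffixes is the expensive part (real implementations use proper suffix-array or suffix-automaton construction), and the matching here is case-sensitive, unlike the "uradha" example above:

        #include <algorithm>
        #include <iostream>
        #include <string>
        #include <vector>

        // Index every suffix of every line as (line id, offset) and sort lexicographically.
        // A query is then a binary search for the suffixes that start with the search term,
        // costing roughly O(m log S) comparisons per lookup instead of a full scan.
        struct SubstringIndex {
            std::vector<std::string> lines;
            std::vector<std::pair<int, int>> suffixes;   // (line index, start offset)

            explicit SubstringIndex(std::vector<std::string> input) : lines(std::move(input)) {
                for (int i = 0; i < (int)lines.size(); ++i)
                    for (int off = 0; off < (int)lines[i].size(); ++off)
                        suffixes.push_back({i, off});
                std::sort(suffixes.begin(), suffixes.end(), [&](auto a, auto b) {
                    return lines[a.first].compare(a.second, std::string::npos,
                                                  lines[b.first], b.second, std::string::npos) < 0;
                });
            }

            std::vector<std::string> find(const std::string& term) const {
                auto starts_before = [&](const std::pair<int, int>& s, const std::string& t) {
                    return lines[s.first].compare(s.second, t.size(), t) < 0;
                };
                auto it = std::lower_bound(suffixes.begin(), suffixes.end(), term, starts_before);
                std::vector<std::string> out;
                std::vector<bool> seen(lines.size(), false);
                for (; it != suffixes.end() &&
                       lines[it->first].compare(it->second, term.size(), term) == 0; ++it)
                    if (!seen[it->first]) { seen[it->first] = true; out.push_back(lines[it->first]); }
                return out;
            }
        };

        int main() {
            SubstringIndex idx({"Adelaide Airport", "ANURADHAPURA", "Amman Marka Intl Airport", "Aden"});
            for (const auto& hit : idx.find("URADHA")) std::cout << hit << '\n';
        }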

    Read the article

  • Is there a difference between Perl's shift versus assignment from @_ for subroutine parameters?

    - by cowgod
    Let us ignore for a moment Damian Conway's best practice of no more than three positional parameters for any given subroutine. Is there any difference between the two examples below in regards to performance or functionality?

    Using shift:

        sub do_something_fantastical {
            my $foo   = shift;
            my $bar   = shift;
            my $baz   = shift;
            my $qux   = shift;
            my $quux  = shift;
            my $corge = shift;
        }

    Using @_:

        sub do_something_fantastical {
            my ($foo, $bar, $baz, $qux, $quux, $corge) = @_;
        }

    Provided that both examples are the same in terms of performance and functionality, what do people think about one format over the other? Obviously the example using @_ is fewer lines of code, but isn't it more legible to use shift as shown in the other example? Opinions with good reasoning are welcome.

    Read the article

  • How does one modify the thread scheduling behavior when using Threading Building Blocks (TBB)?

    - by J Teller
    Does anyone know how to modify the thread scheduling (specifically affinity) when using TBB? Doing a high-level analysis of a simple parallel-for application, it seems like TBB is specifying the underlying threads' affinity in a way that reduces performance. Specifically, the cores I'm running on have hyper-threading enabled, and it looks like TBB is affinitizing threads to the same core even when there is a different core left completely unloaded. FWIW, I realize it's likely that TBB is doing the "right thing" and that changing the threads' affinity will only reduce performance. I'd just like to experiment with it to see if that's really the case.
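
    For experimentation, one commonly used hook is tbb::task_scheduler_observer, which gets a callback on each thread as it joins the scheduler; from there the thread can be pinned with an OS call. A rough Linux-only C++ sketch (the PinningObserver name and the round-robin core assignment are made up for illustration, and the observer API details vary a little between TBB versions):

        #include <atomic>
        #include <pthread.h>
        #include <sched.h>
        #include <unistd.h>
        #include <tbb/task_scheduler_observer.h>
        #include <tbb/parallel_for.h>

        // Pin each TBB worker thread to its own core as it enters the scheduler.
        class PinningObserver : public tbb::task_scheduler_observer {
            std::atomic<int> next_core_{0};
        public:
            PinningObserver() { observe(true); }               // start receiving callbacks
            void on_scheduler_entry(bool /*is_worker*/) override {
                int core = next_core_++ % sysconf(_SC_NPROCESSORS_ONLN);
                cpu_set_t mask;
                CPU_ZERO(&mask);
                CPU_SET(core, &mask);
                pthread_setaffinity_np(pthread_self(), sizeof(mask), &mask);
            }
        };

        int main() {
            PinningObserver pin;                               // stays active for the whole run
            tbb::parallel_for(0, 1000, [](int) { /* work */ });
        }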

    Read the article

  • How to sum up an array of integers in C#

    - by Filburt
    Is there a better, shorter way than iterating over the array?

        int[] arr = new int[] { 1, 2, 3 };
        int sum = 0;
        for (int i = 0; i < arr.Length; i++)
        {
            sum += arr[i];
        }

    Clarification: "better" primarily means cleaner code, but hints on performance improvement are also welcome (like the already-mentioned splitting of large arrays). It's not that I'm looking for a killer performance improvement; I just wondered whether this very kind of syntactic sugar wasn't already available: "There's String.Join - what the heck about int[]?".

    Read the article

  • Comparison of collection datatypes in C#

    - by Joel in Gö
    Does anyone know of a good overview of the different C# collection types? I am looking for something showing which basic operations (such as Add, Remove, RemoveLast, etc.) are supported, and giving their relative performance. It would be particularly interesting for the various generic classes, and even better if it showed e.g. whether there is a difference in performance between a List<T> where T is a class and one where T is a struct. A start would be a nice cheat sheet for the abstract data structures, comparing linked lists, hash tables, etc. Thanks!

    Read the article

  • Why do people still use C these days? [closed]

    - by Joshua
    C++ is clearly a far superior language to C, since it has many features that C lacks (although C++'s object model isn't as ideal as, say, C#'s). With the arrival of the new C++0x standard, why hasn't C been phased out into obscurity? C++ has been around for so long, since the '80s. The Linux kernel has already been ported to C++ with negligible performance differences. I believe, with no evidence, that larger program structures benefit in performance if written in C++ rather than in C, if only because of object interaction. Don't get me started on "objects-in-C!" libraries, which are all a terrible hack. (Not that C++'s object model is the most ideal, but it is almost up to snuff with C#'s using common ad-hoc techniques.)

    Read the article

  • Inline function and calling cost in C

    - by Eonil
    I'm making a vector/matrix library (GCC, ARM NEON, iPhone).

        typedef struct { float v[4]; } Vector;
        typedef struct { Vector v[4]; } Matrix;

    I pass struct data as pointers to avoid the performance hit of copying data when calling a function, so I designed the function like this:

        void makeTranslation(const Vector* factor, Matrix* restrict result);

    But if the function is inline, is there any reason to pass values as pointers for performance? Are those variables copied too? What about registers and caches?

        inline Matrix makeTranslation(Vector factor) __attribute__ ((always_inline));

    What do you think about the calling costs of each case?
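
    One way to answer this empirically is to compile both signatures with the project's real flags and compare the generated code (gcc -O2 -S). A tiny harness along those lines, with a made-up Vec4 type and scale functions so it stays self-contained; with always_inline and optimization enabled, the by-value copy is usually optimized away once the body is inlined, but that is compiler- and ABI-dependent, so the emitted assembly is the authority:

        /* inline_cost.c: compile with  gcc -O2 -S inline_cost.c  and compare the two callers. */
        typedef struct { float v[4]; } Vec4;

        static inline __attribute__((always_inline))
        Vec4 scale_by_value(Vec4 a, float s) {            /* by value: conceptually copies 16 bytes */
            for (int i = 0; i < 4; ++i) a.v[i] *= s;
            return a;
        }

        static inline __attribute__((always_inline))
        void scale_by_pointer(const Vec4 *a, float s, Vec4 *out) {  /* by pointer: no copy at the call */
            for (int i = 0; i < 4; ++i) out->v[i] = a->v[i] * s;
        }

        Vec4 caller_value(Vec4 a)                     { return scale_by_value(a, 2.0f); }
        void caller_pointer(const Vec4 *a, Vec4 *out) { scale_by_pointer(a, 2.0f, out); }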

    Read the article
