Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.

  • Should I use integer primary IDs?

    - by arthurprs
    For example, I always generate an auto-increment ID field for the users table, but I also put a UNIQUE index on their usernames. There are situations where I first need to get the userId for a given username and then execute the desired query, or else use a JOIN in the desired query. So it's two trips to the database, or a JOIN, versus a single lookup on a VARCHAR index. Should I use integer primary IDs? Is there a real performance benefit to an INT index over a small VARCHAR index?
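
    A minimal sketch of the two designs under discussion (table and column names are illustrative, and the inline REFERENCES is shorthand; MySQL-flavoured DDL):

        -- Design 1: surrogate integer key; lookups by username need a JOIN or a second query
        CREATE TABLE users (
            id       INT AUTO_INCREMENT PRIMARY KEY,
            username VARCHAR(32) NOT NULL UNIQUE
        );
        CREATE TABLE posts (
            id      INT AUTO_INCREMENT PRIMARY KEY,
            user_id INT NOT NULL REFERENCES users(id),
            body    TEXT
        );

        -- One trip instead of two: resolve the username inside the query itself
        SELECT p.*
        FROM posts p
        JOIN users u ON u.id = p.user_id
        WHERE u.username = 'arthur';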

  • Avoid IF statement after condition has been met

    - by greye
    I have a division operation inside a loop that repeats many times. It so happens that in the first few passes through the loop (more or less the first 10 iterations) the divisor is zero. Once it gains a value, a division by zero is no longer possible. I have an if condition that tests the divisor in order to avoid the division by zero, but I am wondering what performance impact evaluating this if has on every subsequent iteration, especially since I know it is of no use any more. How should this be coded in Python?
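
    One common way to drop the test once it has served its purpose is to split the loop in two and share a single iterator between the halves (a sketch; names are illustrative, and iterations with a zero divisor are simply skipped here):

        def divide_all(numerators, divisors):
            """Consume the leading zero-divisor prefix, then divide without a branch."""
            results = []
            it = zip(numerators, divisors)
            for n, d in it:
                if d != 0:              # first nonzero divisor: handle it, stop testing
                    results.append(n / d)
                    break
            for n, d in it:             # same iterator, so it resumes where we left off
                results.append(n / d)
            return results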

  • Will PHP Die In Web Page Development World?

    - by Morgan Cheng
    I know that PHP is still the most popular web programming language in the world. This question is just to raise some of my concerns about PHP. PHP is naturally bound to the C10K problem: since PHP (generally run in Apache) cannot be event-driven or asynchronous, each HTTP request occupies at least one thread or process. This makes it harder to scale. Currently, a lot of websites (like Facebook) with high performance and scalability still depend on PHP in their front-end servers; I suppose that is for legacy reasons. Is it possible that PHP will be replaced by a language more suitable for C10K?

  • Pros and cons of sorting data in DB?

    - by Roman
    Let's assume I have a table with a field of type VARCHAR, and I need to get data from that table sorted alphabetically by that field. Which is better for performance: adding an ORDER BY to the SQL query, or sorting the data after it has already been fetched? I'm using Java (with Hibernate), but I can't say anything about the DB engine; it could be any popular relational database (MySQL, MS SQL Server, Oracle, HSQLDB, or any other). The number of records in the table can vary greatly, but let's assume there are 5k records.
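
    For concreteness, a sketch of the two options being compared (a newer Hibernate API is assumed; Item and its name field are illustrative, and session is an open Session):

        // imports: java.util.List, java.util.Comparator, java.text.Collator
        // Option 1: let the database sort.
        List<Item> sorted = session
                .createQuery("from Item i order by i.name", Item.class)
                .list();

        // Option 2: fetch unsorted, then sort in Java with a locale-aware Collator.
        List<Item> items = session.createQuery("from Item", Item.class).list();
        items.sort(Comparator.comparing(Item::getName, Collator.getInstance()));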

  • Speed up math code in C# by writing a C dll?

    - by Projectile Fish
    I have a very large nested for loop in which some multiplications and additions are performed on floating-point numbers:

        for (int i = 0; i < length1; i++)
        {
            s = GetS(i);
            c = GetC(i);
            for (int j = 0; j < length2; j++)
            {
                double oldU = u[j];
                u[j] = c * oldU + s * omega[i][j];
                omega[i][j] = c * omega[i][j] - s * oldU;
            }
        }

    This loop takes up the majority of my processing time and is the bottleneck. Would I be likely to see any speed improvement if I rewrote this loop in C and interfaced with it from C#?
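
    A sketch of what the native side could look like (everything here is an assumption: the library name, flattening omega into a single double[] of length1*length2, and the matching C# P/Invoke declaration shown in the comment):

        /* C#: [DllImport("rotate")]
         *     static extern void Rotate(double[] u, double[] omega,
         *                               double[] s, double[] c,
         *                               int length1, int length2);
         */
        void Rotate(double *u, double *omega, const double *s, const double *c,
                    int length1, int length2)
        {
            for (int i = 0; i < length1; i++) {
                double si = s[i], ci = c[i];
                double *row = omega + (long)i * length2;   /* row i of flattened omega */
                for (int j = 0; j < length2; j++) {
                    double oldU = u[j];
                    u[j]   = ci * oldU + si * row[j];
                    row[j] = ci * row[j] - si * oldU;
                }
            }
        }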

  • Under what circumstances does Groovy use AbstractConcurrentMap?

    - by Electrons_Ahoy
    (Specifically, org.codehaus.groovy.util.AbstractConcurrentMap.) While profiling our mixed Java/Groovy application, I'm seeing a lot of references to the AbstractConcurrentMap class, none of which are explicit in the code base. Does Groovy use this class when maps are instantiated in the dynamic def myMap = [:] style? Are there rules somewhere about when Groovy chooses it over, say, java.util.HashMap? And does anyone have any performance information comparing the two? My rough "eyeball check" says that AbstractConcurrentMap seems to be much slower; does anyone know if I'm right?
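
    One quick way to check what the [:] literal actually instantiates, independent of the profiler (a throwaway Groovy script):

        def myMap = [:]
        println myMap.getClass().name   // prints the concrete Map class Groovy picked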

  • C sharp: read last "n" lines of log file

    - by frictionlesspulley
    I need a snippet of code that reads the last "n" lines of a log file. I came up with the following code from the net; I am kind of new to C#. Since the log file might be quite large, I want to avoid the overhead of reading the entire file. Can someone suggest any performance enhancement? I do not really want to read each character and change position.

        var reader = new StreamReader(filePath, Encoding.ASCII);
        reader.BaseStream.Seek(0, SeekOrigin.End);
        var count = 0;
        while (count <= tailCount)
        {
            if (reader.BaseStream.Position <= 0)
                break;
            reader.BaseStream.Position--;
            int c = reader.Read();
            if (reader.BaseStream.Position <= 0)
                break;
            reader.BaseStream.Position--;
            if (c == '\n')
            {
                ++count;
            }
        }
        var str = reader.ReadToEnd();
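
    A possible alternative sketch that reads only a fixed-size tail of the file in a single call (Tail, n and bufSize are illustrative; it assumes the last n lines fit in the buffer and that the file is ASCII, as above):

        using System;
        using System.IO;
        using System.Linq;
        using System.Text;

        static class LogTail
        {
            public static string[] Tail(string path, int n, int bufSize = 64 * 1024)
            {
                using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
                {
                    long start = Math.Max(0, fs.Length - bufSize);
                    fs.Seek(start, SeekOrigin.Begin);
                    var buf = new byte[fs.Length - start];
                    fs.Read(buf, 0, buf.Length);             // one read of the tail only
                    var lines = Encoding.ASCII.GetString(buf).Split('\n');
                    return lines.Skip(Math.Max(0, lines.Length - n)).ToArray();
                }
            }
        }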

  • MongoDB efficient dealing with embedded documents

    - by Sebastian Nowak
    I have serious trouble finding anything useful in the Mongo documentation about dealing with embedded documents. Let's say I have the following schema:

        {
            _id: ObjectId,
            ...
            data: [
                {
                    _childId: ObjectId,  // let's use a custom name so we can distinguish them
                    ...
                }
            ]
        }

    What's the most efficient way to remove everything inside data for a particular _id? What's the most efficient way to remove the embedded document with a particular _childId inside a given _id? What's the performance here; can _childId be indexed in order to achieve logarithmic (or similar) complexity instead of a linear lookup, and if so, how? What's the most efficient way to insert a lot of (say, 1000) documents into data for a given _id, and can we get O(n log n) or similar complexity with proper indexing? Finally, what's the most efficient way to get the count of documents inside data for a given _id?
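
    A hedged sketch of shell operations matching those questions ($set, $pull, $push/$each and $size are standard operators; coll, id, childId and newDocs are illustrative, and the helpers shown are the modern shell's):

        // remove everything inside data for one _id
        db.coll.updateOne({ _id: id }, { $set: { data: [] } });
        // remove one embedded document by its _childId
        db.coll.updateOne({ _id: id }, { $pull: { data: { _childId: childId } } });
        // multikey index so queries on data._childId avoid a collection scan
        db.coll.createIndex({ "data._childId": 1 });
        // append many embedded documents in one update
        db.coll.updateOne({ _id: id }, { $push: { data: { $each: newDocs } } });
        // count embedded documents without shipping the whole array to the client
        db.coll.aggregate([{ $match: { _id: id } },
                           { $project: { n: { $size: "$data" } } }]);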

  • When calling CRUD check if "parent" exists with read or join?

    - by Trick
    All my entities cannot be deleted, only deactivated, so they don't appear in any read methods (SELECT ... WHERE active = TRUE). Now I have some 1:M tables on these entities on which all CRUD operations can be executed. Which is more efficient or performs better?

    My first solution: add a join to all CRUD operations:

        UPDATE ... JOIN entity e ... WHERE e.active = TRUE

    My second solution: before all CRUD operations, check whether the entity is active:

        if (getEntity(someId) != null) {
            // do some CRUD
        }

    where getEntity just runs SELECT * FROM entity WHERE id = ? AND active = TRUE. Or is there any other solution or recommendation?
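
    For concreteness, a sketch of the first option as a complete statement (table and column names are assumptions; MySQL-style multi-table UPDATE syntax):

        UPDATE child c
        JOIN entity e ON e.id = c.entity_id AND e.active = TRUE
        SET c.name = ?
        WHERE c.id = ?;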

  • How to keep Hibernate mapping use under control as requirements grow

    - by David Plumpton
    I've worked on a number of Java web apps where persistence is via Hibernate, and we start off with some central class (e.g. an insurance application) without any time being spent considering how to break things up into manageable chunks. Over time, as features are added, we add more mappings (rates, clients, addresses, etc.), and the amount of time spent saving and loading an insurance object and everything it connects to grows. In particular, you get close to a go-live date and performance testing with larger amounts of data in each table starts to demonstrate that it's all too slow. Obviously there are a number of ways we could attempt to partition things up, e.g. map only the client classes for the client CRUD screens, etc., which would have been better to put in place early rather than trying to work it in at the end of the dev cycle. I'm just wondering if there are recommendations about ways to handle or mitigate this.

  • SQL query: how to translate IN() into a JOIN?

    - by tangens
    I have a lot of SQL queries like this:

        SELECT o.Id, o.attrib1, o.attrib2
        FROM table1 o
        WHERE o.Id IN (
            SELECT DISTINCT Id
            FROM table1, table2, table3
            WHERE ...
        )

    These queries have to run on different database engines (MySQL, Oracle, DB2, MS SQL, Hypersonic), so I can only use common SQL syntax. I have read that with MySQL the IN statement isn't optimized and is really slow, so I want to switch it to a JOIN. I tried:

        SELECT o.Id, o.attrib1, o.attrib2
        FROM table1 o, table2, table3
        WHERE ...

    But this does not take the DISTINCT keyword into account. Question: how do I get rid of the duplicate rows using the JOIN approach?
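
    One common pattern is to fold the subquery into a derived table, so the DISTINCT is applied before the join (a sketch; the elided WHERE clause is the same as in the question):

        SELECT o.Id, o.attrib1, o.attrib2
        FROM table1 o
        JOIN (
            SELECT DISTINCT Id
            FROM table1, table2, table3
            WHERE ...
        ) sub ON sub.Id = o.Id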

  • Anyone Know a Great Sparse One Dimensional Array Library in Python?

    - by TheJacobTaylor
    I am working on an algorithm in Python that uses arrays heavily. The arrays are typically sparse and are read from and written to constantly. I am currently using relatively large native arrays; the performance is good, but the memory usage is high (as expected). I would like an array implementation that does not waste space on values that are not used and that allows an index offset other than zero. For example, if my numbers start at 1,000,000, I would like to index my array starting at 1,000,000 without wasting memory on a million unused values. Reads and writes need to be fast; expanding into new territory can incur a small delay, but reads and writes should be O(1) if possible. Does anybody know of a library that can do this? Thanks!
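
    If no library fits, a dict-backed container already gives O(1) average reads and writes and costs nothing for untouched indexes, since dict keys can start at any offset (a sketch; the class name and default value are illustrative):

        class SparseArray:
            """1-D sparse array: only written indexes consume memory."""
            def __init__(self, default=0):
                self.default = default
                self._data = {}

            def __getitem__(self, i):
                return self._data.get(i, self.default)   # O(1) average

            def __setitem__(self, i, value):
                self._data[i] = value                    # O(1) average

        a = SparseArray()
        a[1000003] = 42
        print(a[1000003], a[2000000])   # -> 42 0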

  • In .NET which loop runs faster for or foreach

    - by Binoj Antony
    In C#/VB.NET/.NET, which loop runs faster, for or foreach? Ever since I read, a long time ago, that a for loop works faster than a foreach loop, I assumed it held true for all collections, generic collections, all arrays, etc. I scoured Google and found a few articles, but most of them are inconclusive (read the comments on the articles) and open ended. What would be ideal is to have each scenario listed together with the best solution for it, e.g. (just an example of how it should look):

    - for iterating an array of 1000+ strings: for is better than foreach
    - for iterating over an IList (non-generic) of strings: foreach is better than for

    A few references found on the web for the same:

    - the original grand old article by Emmanuel Schanzer
    - CodeProject: FOREACH Vs. FOR
    - blog: To foreach or not to foreach, that is the question
    - asp.net forum: .NET 1.1 C# for vs foreach

    [Edit] Apart from the readability aspect, I am really interested in facts and figures; there are applications where the last mile of squeezed-out performance optimization does matter.
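
    A minimal way to get numbers for one specific scenario (a sketch using Stopwatch; results vary by runtime, collection type and workload, so each case needs measuring separately):

        using System;
        using System.Diagnostics;

        class LoopBench
        {
            static void Main()
            {
                var items = new string[1000000];
                for (int i = 0; i < items.Length; i++) items[i] = "x";

                long total = 0;
                var sw = Stopwatch.StartNew();
                for (int i = 0; i < items.Length; i++) total += items[i].Length;
                Console.WriteLine("for:     {0} ms ({1})", sw.ElapsedMilliseconds, total);

                total = 0;
                sw = Stopwatch.StartNew();
                foreach (var s in items) total += s.Length;
                Console.WriteLine("foreach: {0} ms ({1})", sw.ElapsedMilliseconds, total);
            }
        }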

  • DataReader or DataSet when pulling multiple recordsets in ASP.NET

    - by Gern Blandston
    I've got an ASP.NET page with a bunch of controls that need to be populated (e.g. dropdown lists). I'd like to make a single trip to the DB and bring back multiple recordsets instead of making a round trip for each control. I could bring back multiple tables in a DataSet, or I could bring back a DataReader and use .NextResult to put each result set into a custom business class. Will I likely see a big enough performance advantage with the DataReader approach, or should I just use the DataSet approach? Any examples of how you usually handle this would be appreciated.
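
    A sketch of the DataReader route, with both result sets fetched in one round trip (table and column names are assumptions; conn is an open SqlConnection):

        // imports: System.Collections.Generic, System.Data.SqlClient
        var countries = new List<KeyValuePair<int, string>>();
        var states = new List<KeyValuePair<int, string>>();

        using (var cmd = new SqlCommand(
                "SELECT Id, Name FROM Countries; SELECT Id, Name FROM States;", conn))
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                countries.Add(new KeyValuePair<int, string>(reader.GetInt32(0), reader.GetString(1)));

            reader.NextResult();   // advance to the second result set, no extra round trip
            while (reader.Read())
                states.Add(new KeyValuePair<int, string>(reader.GetInt32(0), reader.GetString(1)));
        }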

  • Managing StringBuilder Resources in C#

    - by Jim Fell
    Hello. My C# (.NET 2.0) application has a StringBuilder variable with a capacity of 2.5 MB. Obviously, I do not want to copy such a large buffer into a larger buffer every time it fills. By that point there is so much data in the buffer anyway that removing the older data is a viable option. Can anyone see any obvious problems with how I'm doing this (i.e. am I introducing more performance problems than I'm solving), or does it look okay?

        private StringBuilder tText_c = new StringBuilder(2500000, 2500000);

        private void AppendToText(string text)
        {
            // drop the oldest half once the buffer is more than 95% full
            if (tText_c.Length * 100 / tText_c.Capacity > 95)
            {
                tText_c.Remove(0, tText_c.Length / 2);
            }
            tText_c.Append(text);
        }

    Thanks.

  • MySQL: Efficient Blobbing?

    - by feklee
    I'm dealing with blobs of up to (I estimate) about 100 kilobytes in size. The data is already compressed. Storage engine: InnoDB on MySQL 5.1. Frontend: PHP (Symfony with Propel ORM). Some questions:

    1. I've read somewhere that it's not good to update blobs because it leads to reallocation and fragmentation, and thus bad performance. Is that true? Any reference on this?

    2. Initially the blobs are constructed by appending data chunks, each up to 16 kilobytes in size. Is it more efficient to use a separate chunk table instead, for example with fields as below?

        parent_id, position, chunk

    Then, to get the entire blob, one would do something like:

        SELECT GROUP_CONCAT(chunk ORDER BY position)
        FROM chunks
        WHERE parent_id = 187

    The result would be used in a PHP script.

    3. Is there any difference between the types of blobs, aside from the size needed for metadata, which should be negligible?
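
    A sketch of what that chunk table could look like (column names are from the question, the rest is assumption; note that GROUP_CONCAT results are capped by group_concat_max_len, which defaults to only 1024 bytes, and the default ',' separator would corrupt binary data):

        CREATE TABLE chunks (
            parent_id INT UNSIGNED NOT NULL,
            position  INT UNSIGNED NOT NULL,
            chunk     BLOB NOT NULL,             -- up to 16 KB per row
            PRIMARY KEY (parent_id, position)    -- keeps one blob's chunks together
        ) ENGINE=InnoDB;

        SET SESSION group_concat_max_len = 110 * 1024;   -- allow ~100 KB results
        SELECT GROUP_CONCAT(chunk ORDER BY position SEPARATOR '')
        FROM chunks
        WHERE parent_id = 187;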

  • Is this a valid benefit of using embedded SQL over stored procedures?

    - by George
    Here's an argument for SPs that I haven't heard. Flamers, be gentle with the down tick. Since there is overhead associated with each trip to the database server, I would suggest that a POSSIBLE reason for placing your SQL in SPs rather than in embedded code is that you are more insulated from change without taking a performance hit. For example, let's say you need to perform Query A, which returns a scalar integer. Later the requirements change, and you decide that if the scalar result is X then, and only then, you need to perform another query. If you performed the first query in an SP, you could easily check the result of the first query and conditionally execute the second SQL statement in the same SP. How would you do this efficiently in embedded SQL without performing a separate query or an unnecessary query? Here's an example:

        -- This SP may return one or two result sets.
        SELECT @CustCount = COUNT(*) FROM CUSTOMER
        IF @CustCount > 10
            SELECT * FROM PRODUCT

    Can this be done, and what is the best way to do it, in embedded SQL?
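
    One possible embedded-SQL counterpart is to send the same conditional logic as a single T-SQL batch, which is still one round trip (a sketch; conn is an open SqlConnection, and the batch assumes SQL Server):

        var sql = @"
            DECLARE @CustCount int;
            SELECT @CustCount = COUNT(*) FROM CUSTOMER;
            IF @CustCount > 10
                SELECT * FROM PRODUCT;";

        using (var cmd = new SqlCommand(sql, conn))
        using (var reader = cmd.ExecuteReader())
        {
            // the PRODUCT result set exists only when the condition was met
            while (reader.Read()) { /* consume rows */ }
        }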

  • Django: How/Where to store a value for a session without unnecessary DB hits

    - by GerardJP
    Hi all, I have an extended user profile with AUTH_PROFILE_MODULE (ref: http://tinyurl.com/yhracqq). I would like to add a user.is_guru() method, similar to user.is_active(), whose result all views (or rather templates) could use to e.g. disable/enable certain user messages, display widgets, etc. The boolean is stored in the extended user profile model, but I want to avoid hitting the DB for every view. So the question is: do I use a context processor, a template tag, the session dict, or what have you, to store this info (possibly cached) for the duration of the user's visit? Note: I don't have performance issues, so it's definitely filed under premature optimization; I just want to avoid generating extra work in the future :). Any pointers are very welcome. Thanks and greetz! Gerard.
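
    A hedged sketch of the session route: copy the flag into the session once per login and expose it through a context processor, so templates never touch the profile table again (the user_logged_in signal exists only in newer Django versions; is_guru and the function names are illustrative):

        from django.contrib.auth.signals import user_logged_in

        def cache_guru_flag(sender, request, user, **kwargs):
            # one profile query per login instead of one per view
            request.session['is_guru'] = user.get_profile().is_guru

        user_logged_in.connect(cache_guru_flag)

        def guru_context(request):
            """Context processor: add to TEMPLATE_CONTEXT_PROCESSORS."""
            return {'is_guru': request.session.get('is_guru', False)}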

  • in java, which is better - three arrays of booleans or 1 array of bytes?

    - by joe_shmoe
    I know the question sounds silly, but consider this: I have an array of items and a labelling algorithm, and at any point an item is in one of three states. The current version holds these states in a byte array, where 0, 1 and 2 represent the three states. Alternatively, I could have three arrays of booleans, one for each state. Which is better (consumes less memory) depends on how the JVM (Sun's version) stores the arrays: is a boolean represented by 1 bit? (P.S. Don't start with all that "this is not the way OO/Java works"; I know, but here performance comes first, and the algorithm is simple and perfectly readable even in this form.) Thanks a lot.
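
    For what it's worth, HotSpot stores a boolean[] as one byte per element, so three boolean arrays would take roughly three times the memory of one byte[]. If bit-level packing is the goal, java.util.BitSet is one alternative; a sketch with an illustrative two-bits-per-item encoding of the three states:

        import java.util.BitSet;

        class TriState {
            private final BitSet low, high;   // two bits per item encode states 0..2

            TriState(int n) {
                low = new BitSet(n);
                high = new BitSet(n);
            }

            void set(int i, int state) {
                low.set(i, (state & 1) != 0);
                high.set(i, (state & 2) != 0);
            }

            int get(int i) {
                return (low.get(i) ? 1 : 0) | (high.get(i) ? 2 : 0);
            }
        }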

  • How to profile object creation in Java?

    - by gooli
    The system I work with creates a whole lot of objects and garbage-collects them all the time, which results in a very steeply jagged graph of heap consumption. I would like to know which objects are being generated, in order to tune the code, but I can't figure out a way to dump the heap at the moment garbage collection starts. When I tried to initiate dumpHeap via JConsole manually at random times, I always got results after the GC had finished its run, and didn't get any useful data. Any notes on how to track down excessive temporary-object creation are welcome.
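
    One hedged possibility on HotSpot JVMs is to let the VM write the dump itself just before a collection (documented Sun/Oracle flags, though availability varies by version, and they fire only on full collections, not on young-generation churn):

        java -XX:+HeapDumpBeforeFullGC -XX:HeapDumpPath=/tmp/dumps MyApp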

  • Scalable way of doing self join with many to many table

    - by johnathan
    I have a table structure like the following:

        user:               id, name
        profile_stat:       id, name
        profile_stat_value: id, name
        user_profile:       user_id, profile_stat_id, profile_stat_value_id

    My question is: how do I evaluate a query where I want to find all users matching a given profile_stat_id and profile_stat_value_id for many stats at once? I've tried doing an inner self join, but that quickly gets crazy when searching for many stats. I've also tried doing a count on the actual user_profile table, which is much better, but still slow. Is there some magic I'm missing? I have about 10 million rows in the user_profile table and want the query to take no longer than a few seconds. Is that possible?
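
    For reference, a sketch of the count-based formulation (relational division; the stat/value pairs are illustrative, and it assumes a composite index on (profile_stat_id, profile_stat_value_id, user_id)):

        SELECT up.user_id
        FROM user_profile up
        WHERE (up.profile_stat_id = 1 AND up.profile_stat_value_id = 10)
           OR (up.profile_stat_id = 2 AND up.profile_stat_value_id = 20)
        GROUP BY up.user_id
        HAVING COUNT(*) = 2;   -- must match all requested pairs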

  • Use of private constructor to prevent instantiation of class?

    - by cringe
    Hi guys! Right now I'm thinking about adding a private constructor to a class that only holds some String constants:

        public class MyStrings {
            // I want to add this:
            private MyStrings() {}

            public static final String ONE = "something";
            public static final String TWO = "another";
            ...
        }

    Is there any performance or memory overhead if I add a private constructor to this class to prevent someone from instantiating it? Do you think it's necessary at all, or are private constructors for this purpose a waste of time and code clutter?

  • Speed-up of readonly MyISAM table

    - by Ozzy
    We have a large MyISAM table that is used to archive old data. Archiving is performed every month, and except on these occasions data is never written to the table. Is there any way to "tell" MySQL that this table is read-only, so that MySQL might optimize reads from it? I've looked at the MEMORY storage engine, but the problem is that this table is so large that it would take up a large portion of the server's memory, which I don't want. I hope my question is clear enough; I'm a novice when it comes to DB administration, so any input or suggestions are welcome.
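
    One hedged possibility is myisampack, which rewrites a MyISAM table into a compressed, read-only form, followed by the usual myisamchk index rebuild (paths are illustrative; the server should not be writing to the table while this runs):

        shell> myisampack /var/lib/mysql/archive/big_table
        shell> myisamchk -rq /var/lib/mysql/archive/big_table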
