Search Results

Search found 20904 results on 837 pages for 'disk performance'.

  • NHibernate unintentional lazy property loading

    - by chiccodoro
    I introduced a mapping for a business object which has (among others) a property called "Name":

        public class Foo : BusinessObjectBase
        {
            ...
            public virtual string Name { get; set; }
        }

    For some reason, when I fetch "Foo" objects, NHibernate seems to apply lazy property loading (for simple properties, not associations): the following code piece generates n+1 SQL statements, whereof the first only fetches the ids, and the remaining n fetch the Name for each record:

        ISession session = ...
        IQuery query = session.CreateQuery(queryString);
        ITransaction tx = session.BeginTransaction();
        List<Foo> result = new List<Foo>();
        foreach (Foo foo in query.Enumerable())
        {
            result.Add(foo);
        }
        tx.Commit();
        session.Close();

    produces:

        NHibernate: select foo0_.FOO_ID as col_0_0_ from V1_FOO foo0_
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 81
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36470
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36473

    Similarly, the following code leads to a LazyLoadingException after the session is closed:

        ISession session = ...
        ITransaction tx = session.BeginTransaction();
        Foo result = session.Load<Foo>(id);
        tx.Commit();
        session.Close();
        Console.WriteLine(result.Name);

    Following this post, "lazy properties ... is rarely an important feature to enable ... (and) in Hibernate 3, is disabled by default." So what am I doing wrong? I managed to work around the LazyLoadingException by calling NHibernateUtil.Initialize(foo), but the even worse part is the n+1 SQL statements, which bring my application to its knees. This is what the mapping looks like:

        <class name="Foo" table="V1_FOO">
            ...
            <property name="Name" column="NAME"/>
        </class>

    BTW: the abstract "BusinessObjectBase" base class encapsulates the ID property, which serves as the internal identifier.
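
    One detail worth ruling out, assuming the standard NHibernate API (this is not from the original post): query.Enumerable() deliberately selects identifiers only and initializes each entity as it is iterated, which produces exactly this n+1 pattern; query.List() hydrates the entities in a single SELECT. A minimal sketch:

        ISession session = ...
        ITransaction tx = session.BeginTransaction();
        // List() issues one SELECT with all mapped columns, so there are no
        // per-row round trips and no uninitialized proxies after Close().
        IList<Foo> result = session.CreateQuery(queryString).List<Foo>();
        tx.Commit();
        session.Close();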

  • SQL query: use WHERE IN or foreach?

    - by phenevo
    Hi, I'm using a query where one piece is:

        ... where code in ('val1', 'val2', ...)

    I have about 50k of these codes. It was working when I had 30k codes, but now I get:

        The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that reference a very large number of tables or partitions.

    I think the problem is related to IN... So now I'm planning to use:

        foreach (string code in codes)
            ... where code = @code

    Is it a good idea?
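
    One commonly suggested alternative, as a hedged sketch only (table and column names are assumptions): load the codes into a temporary table once and JOIN against it, which avoids both the 50k-item IN list and 50k separate round trips:

        CREATE TABLE #codes (code VARCHAR(50) PRIMARY KEY);

        -- Populate #codes in batches, or via SqlBulkCopy from the client.
        INSERT INTO #codes (code) VALUES ('val1'), ('val2'); -- ... and so on

        SELECT t.*
        FROM myTable AS t
        INNER JOIN #codes AS c ON c.code = t.code;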

  • mysql subquery strangely slow

    - by aviv
    I have a query that selects from another sub-query. While the two queries look almost the same, the second query (in this sample) runs much slower:

        SELECT user.id, user.first_name -- user.*
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type = 'user'
                            AND education.institute_id = '58'
                            AND education.institute_type = '1');

    This query takes 1.2s. EXPLAIN on this query results in:

        id  select_type         table      type            possible_keys                                            key         key_len  ref   rows    Extra
        1   PRIMARY             user       index                                                                    first_name  152            141192  Using where; Using index
        2   DEPENDENT SUBQUERY  education  index_subquery  ref_type,ref_id,institute_id,institute_type,ref_type_2   ref_id      4        func  1       Using where

    The second query:

        SELECT -- user.id, user.first_name
               user.*
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type = 'user'
                            AND education.institute_id = '58'
                            AND education.institute_type = '1');

    takes 45sec to run, with EXPLAIN:

        id  select_type         table      type            possible_keys                                            key         key_len  ref   rows    Extra
        1   PRIMARY             user       ALL                                                                                                  141192  Using where
        2   DEPENDENT SUBQUERY  education  index_subquery  ref_type,ref_id,institute_id,institute_type,ref_type_2   ref_id      4        func  1       Using where

    Why is it slower if I query only by index fields? Why do both queries scan the full length of the user table? Any ideas how to improve? Thanks.
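
    For what it's worth: MySQL of this vintage re-evaluates an IN (...) subquery per outer row (hence the DEPENDENT SUBQUERY in both plans). A hedged rewrite, with the schema taken from the post, that lets the optimizer materialize the inner result once and then probe the user primary key:

        SELECT user.*
        FROM (SELECT DISTINCT ref_id
              FROM education
              WHERE ref_type = 'user'
                AND institute_id = '58'
                AND institute_type = '1') AS e
        INNER JOIN user ON user.id = e.ref_id;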

  • Partitioning requests in code among several servers

    - by Jacques René Mesrine
    I have several forum servers (what they are is irrelevant) which store posts from users, and I want to be able to partition requests among these servers. I'm currently leaning towards partitioning them by geographic location. To improve the locality of data, users will be separated into regions, e.g. North America, South America and so on. Is there any design pattern for implementing the function that maps the partitioning property to the server, so that this piece of code has high availability and does not become a single point of failure?

        f( Region ) -> Server IP
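
    One hedged sketch of the usual pattern (all names and addresses are illustrative): replicate the region map to every application server instead of serving it centrally, and let the lookup fail over within the region's pool, so f(Region) -> Server IP itself has no single point of failure:

        import java.util.EnumMap;
        import java.util.List;
        import java.util.Map;

        enum Region { NORTH_AMERICA, SOUTH_AMERICA, EUROPE, ASIA }

        class RegionRouter {
            // Shipped to every app server via config management, so the
            // mapping is replicated rather than fetched from one service.
            private final Map<Region, List<String>> pools = new EnumMap<>(Region.class);

            RegionRouter() {
                pools.put(Region.NORTH_AMERICA, List.of("10.0.1.10", "10.0.1.11"));
                pools.put(Region.SOUTH_AMERICA, List.of("10.0.2.10", "10.0.2.11"));
            }

            // f(Region) -> Server IP: first healthy member of the region's pool.
            String serverFor(Region region) {
                for (String ip : pools.get(region)) {
                    if (isHealthy(ip)) {
                        return ip;
                    }
                }
                throw new IllegalStateException("no healthy server for " + region);
            }

            private boolean isHealthy(String ip) {
                return true; // replace with a real heartbeat/health check
            }
        }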

  • How can I monitor the rendering time in a browser?

    - by adpd
    I work on an internal corporate system that has a web front-end as one of its interfaces. The web front-end is served up using Tomcat. How can I monitor the rendering time of specific pages in a browser (IE6)? I would like to be able to record the results in a log file (separate log file or the Tomcat access log).
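
    A hedged sketch of the usual client-side approach (the /log-render endpoint is illustrative, not part of the original setup): stamp the time in the document head, measure at onload, and beacon the result back so it lands in the Tomcat access log:

        <script type="text/javascript">
          var renderStart = new Date().getTime();
          window.onload = function () {
            var elapsed = new Date().getTime() - renderStart;
            // Image beacon: fire-and-forget, works in IE6, and the request
            // (with its query string) shows up in the Tomcat access log.
            new Image().src = '/log-render?page=' +
                encodeURIComponent(window.location.pathname) + '&ms=' + elapsed;
          };
        </script>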

  • Using VirtualMode on a DataGridView when the number of rows/columns isn't known

    - by Nathan Baulch
    I need to display an unknown-length sequence of dictionaries with unknown keys efficiently in a data grid. This sequence is the result of a potentially slow LINQ query that could contain any number of results. At first I thought that VirtualMode on DataGridView was what I was looking for, but it appears that the number of rows and columns must be known upfront. I tried adding a single row and column, then adding more as needed from the CellValueNeeded event, but this doesn't work. Is this even possible with VirtualMode? Or do I need to estimate how many rows are visible on the screen and manually build up the rows/columns? And if so, how do I ensure that a vertical scrollbar is present and react appropriately when a user uses it?
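
    For what it's worth: VirtualMode virtualizes rows but not columns, so columns do have to be added as they become known; RowCount, however, can be grown as results arrive, and the scrollbar tracks it. A hedged sketch (the cache structure is an assumption):

        // Columns must exist before rows are shown; add them as soon as the
        // first dictionary's keys are known.
        var cache = new List<Dictionary<string, object>>();
        dataGridView1.VirtualMode = true;
        dataGridView1.CellValueNeeded += (sender, e) =>
        {
            string key = dataGridView1.Columns[e.ColumnIndex].Name;
            cache[e.RowIndex].TryGetValue(key, out var value);
            e.Value = value;
        };

        // As the slow LINQ query yields batches on a worker thread, append
        // them to the cache and then grow the grid:
        dataGridView1.Invoke((Action)(() => dataGridView1.RowCount = cache.Count));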

  • Fastest XML parser for small, simple documents in Java

    - by Varkhan
    I have to objectify very simple and small XML documents (less than 1k, and it's almost SGML: no namespaces, plain UTF-8, you name it...), read from a stream, in Java. I am using JAXP to process the data from my stream into a Document object. I have tried Xerces; it's way too big and slow... I am using Dom4j, but I am still spending way too much time in org.dom4j.io.SAXReader. Does anybody out there have any suggestion for a faster, more efficient implementation, keeping in mind I have very tough CPU and memory constraints?

    [Edit 1] Keep in mind that my documents are very small, so the overhead of starting the parser can be important. For instance, I am spending as much time in org.xml.sax.helpers.XMLReaderFactory.createXMLReader as in org.dom4j.io.SAXReader.read.

    [Edit 2] The result has to be in Dom format, as I pass the document to decision tools that do arbitrary processing on it, like switching code based on the value of arbitrary XPaths, but also extracting lists of values packed as children of a predefined node.

    [Edit 3] In any case I eventually need to load/parse the complete document, since all the information it contains is going to be used at some point.

    (This question is related to, but different from, http://stackoverflow.com/questions/373833/best-xml-parser-for-java )
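
    Since the time is going into XMLReaderFactory.createXMLReader, one hedged option is simply to pay that cost once: create the reader up front and reuse it for every document (SAXReader is not thread-safe, so keep one per thread). A minimal sketch:

        import org.dom4j.Document;
        import org.dom4j.io.SAXReader;

        // One reader per thread: reuse avoids the per-parse factory lookup,
        // which dominates on sub-1k documents.
        private static final ThreadLocal<SAXReader> READER =
                new ThreadLocal<SAXReader>() {
                    @Override protected SAXReader initialValue() { return new SAXReader(); }
                };

        Document parse(java.io.InputStream in) throws org.dom4j.DocumentException {
            return READER.get().read(in);
        }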

  • Tuning JVM (GC) for a highly responsive server application

    - by elgcom
    I am running an application server on Linux 64bit with 8 core CPUs and 6 GB memory. The server must be highly responsive. After some inspection I found that the application running on the server creates a rather huge number of short-lived objects, and has only about 200~400 MB of long-lived objects (as long as there is no memory leak). After reading http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html I use these JVM options:

        -Xms2g -Xmx2g -XX:MaxPermSize=256m -XX:NewRatio=1 -XX:+UseConcMarkSweepGC

    Result: the minor GC takes 0.01~0.02 sec, the major GC takes 1~3 sec, and minor GCs happen constantly. How can I further improve or tune the JVM?

    - A larger heap size? But will GC then take more time?
    - Larger NewSize and MaxNewSize (for the young generation)?
    - Another collector? Parallel GC?
    - Is it a good idea to let major GC take place more often? And how?
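
    A hedged starting point rather than a recipe: with mostly short-lived objects and ~400 MB of long-lived data, the usual move is a larger young generation (so fewer objects get promoted) plus parallel minor collections alongside CMS, then measuring before tuning further. For example:

        -Xms2g -Xmx2g -XX:MaxPermSize=256m
        -Xmn1g                    # explicit young-gen size instead of NewRatio
        -XX:SurvivorRatio=8
        -XX:+UseParNewGC          # parallel minor GCs, compatible with CMS
        -XX:+UseConcMarkSweepGC
        -XX:+PrintGCDetails -XX:+PrintGCTimeStamps   # verify the effect first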

  • Scalable Database Tagging Schema

    - by Longpoke
    EDIT: To people building tagging systems: don't read this. It is not what you are looking for. I asked this when I wasn't aware that RDBMSs all have their own optimization methods; just use a simple many-to-many schema.

    I have a posting system that has millions of posts. Each post can have an infinite number of tags associated with it. Users can create tags, which have notes, date created, owner, etc. A tag is almost like a post itself, because people can post notes about the tag. Each tag association has an owner and date, so we can see who added the tag and when.

    My question is how can I implement this? Searching posts by tag, or tags by post, has to be fast. Also, users can add tags to posts by typing the name into a field, kind of like the Google search bar; it has to fill in the rest of the tag name for you. I have 3 solutions at the moment, but am not sure which is the best, or if there is a better way. Note that I'm not showing the layout of notes, since it will be trivial once I get a proper solution for tags.

    Method 1. Linked list. tagId in post points to a linked list in tag_assoc; the application must traverse the list until flink=0.

        post:      id, content, ownerId, date, tagId, notesId
        tag_assoc: id, tagId, ownerId, flink
        tag:       id, name, notesId

    Method 2. Denormalization. tags is simply a VARCHAR or TEXT field containing a tab-delimited array of tagId:ownerId pairs. It cannot be a fixed size.

        post: id, content, ownerId, date, tags, notesId
        tag:  id, name, notesId

    Method 3. Toxi (from: http://www.pui.ch/phred/archives/2005/04/tags-database-schemas.html, also the same thing here: http://stackoverflow.com/questions/20856/how-do-you-recommend-implementing-tags-or-tagging)

        post:      id, content, ownerId, date, notesId
        tag_assoc: ownerId, tagId, postId
        tag:       id, name, notesId

    Method 3 raises the question: how fast will it be to iterate through every single row in tag_assoc? Methods 1 and 2 should be fast for returning tags by post, but for posts by tag, another lookup table must be made. The last thing I have to worry about is optimizing searching tags by name; I have not worked that out yet. I made an ASCII diagram here: http://pastebin.com/f1c4e0e53
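
    For what it's worth, a hedged sketch of method 3 in MySQL terms (types and engine are assumptions; the date column follows the post's "owner and date" requirement). The two composite indexes are what keep both directions - tags by post and posts by tag - index-driven rather than full scans:

        CREATE TABLE tag_assoc (
            postId  INT UNSIGNED NOT NULL,
            tagId   INT UNSIGNED NOT NULL,
            ownerId INT UNSIGNED NOT NULL,
            date    DATETIME NOT NULL,
            PRIMARY KEY (postId, tagId),          -- tags by post
            KEY idx_posts_by_tag (tagId, postId)  -- posts by tag
        ) ENGINE=InnoDB;

        -- Tag-name autocompletion can use an ordinary index on tag.name,
        -- since a prefix LIKE is sargable:
        -- SELECT id, name FROM tag WHERE name LIKE 'foo%' LIMIT 10;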

  • Django: Update order attribute for objects in a queryset

    - by lazerscience
    I have an attribute on my model that allows the user to order the objects. I have to update the elements' order depending on a list that contains the objects' ids in the new order; right now I'm iterating over the whole queryset and saving one object after the other. What would be the easiest/fastest way to do the same with the whole queryset?

        def update_ordering(model, order):
            """order is in the form [id, id, id, id], for example: [8, 4, 5, 1, 3]"""
            id_to_order = dict((order[i], i) for i in range(len(order)))
            for x in model.objects.all():
                x.order = id_to_order[x.id]
                x.save()
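
    A hedged alternative (it assumes the column is literally named "order", uses the model's table name via _meta, and relies on the ids being integers): collapse the n UPDATEs into a single statement with a CASE expression:

        from django.db import connection

        def update_ordering(model, order):
            """order is in the form [id, id, id, id], for example: [8, 4, 5, 1, 3]"""
            cases = " ".join(
                "WHEN %d THEN %d" % (obj_id, position)
                for position, obj_id in enumerate(order)
            )
            ids = ",".join(str(obj_id) for obj_id in order)
            cursor = connection.cursor()
            # `order` is a reserved word, hence the backticks (MySQL syntax).
            cursor.execute(
                "UPDATE %s SET `order` = CASE id %s END WHERE id IN (%s)"
                % (model._meta.db_table, cases, ids)
            )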

  • SQL Server query problem

    - by user335160
    I want to achieve the results shown in the attached image. The table structure and data are as follows.

    Table relationships:

        Overall IB Limit -> one to many -> Facility Limit
        Facility Limit   -> one to many -> Facility Sub Limit

    Overall IB Limit:

        Id  SCAF Reference  Approval Date
        1   NEW-001         January 1, 2011
        2   NEW-002         January 2, 2011
        3   NEW-003         January 3, 2011

    Facility Limit:

        Id  OverallIBLimitId  Product Type
        1   1                 RPA
        2   1                 CG
        3   2                 RPA
        4   3                 CG

    Facility Sub Limit:

        Id  FacilityLimitId  Sub-Limit Type     Amount         Tenor     Status    Status Date
        1   1                RPA at max         2,000,0000.00  2 months  Approved  January 5, 2011
        2   1                Oil                3,000,0000.00  3 yrs     Approved  January 5, 2011
        3   2                CG at minor        4,000,0000.00  1 yr      Approved  January 5, 2011
        4   2                CG at max          5,000,0000.00  6 months  Approved  January 5, 2011
        5   2                Flood Component 1  5,000,0000.00  6 months  Approved  January 5, 2011
        6   2                Flood Component 2  6,000,0000.00  3 yrs     Approved  January 5, 2011
        7   3                RPA at minor       1,000,0000.00  6 months  Approved  January 5, 2011
        8   4                One-Off            1,000,0000.00  6 months  Approved  January 5, 2011

  • Why are compilers so stupid?

    - by martinus
    I always wonder why compilers can't figure out simple things that are obvious to the human eye. They do lots of simple optimizations, but never something even a little bit complex. For example, this code takes about 6 seconds on my computer to print the value zero (using Java 1.6):

        int x = 0;
        for (int i = 0; i < 100 * 1000 * 1000 * 1000; ++i) {
            x += x + x + x + x + x;
        }
        System.out.println(x);

    It is totally obvious that x is never changed, so no matter how often you add 0 to itself it stays zero. So the compiler could in theory replace this with System.out.println(0). Or even better, this takes 23 seconds:

        public int slow() {
            String s = "x";
            for (int i = 0; i < 100000; ++i) {
                s += "x";
            }
            return 10;
        }

    First, the compiler could notice that I am actually creating a string s of 100000 "x" characters, so it could automatically use a StringBuilder instead, or even better, directly replace it with the resulting string, as it is always the same. Second, it does not recognize that I do not actually use the string at all, so the whole loop could be discarded! Why, after so much manpower has gone into fast compilers, are they still so relatively dumb?

    EDIT: Of course these are stupid examples that should never be used anywhere. But whenever I have to rewrite beautiful and very readable code into something unreadable so that the compiler is happy and produces fast code, I wonder why compilers or some other automated tool can't do this work for me.

  • Database structure for ecommerce site

    - by imanc
    Hey guys, I have been tasked with designing an ecommerce solution. The aspect that is causing me the most problems is the database. Currently the site consists of 10+ country-based shops, each with their own database (all residing on the same MySQL instance). For the new site I'd rather all these shop databases be merged into one database, so that all tables (products, orders, customers etc.) have a shop_id field. From a programming perspective this seems to make the most sense, as we won't have to manage data across multiple databases.

    Currently the entire site generates about 120k orders a year, but it is experiencing fairly heavy growth and we need to design a solution that will scale. In 5 years there may be more than a million orders per year and a database that contains 5 years of order history (archiving may be a solution here). The question is: do we use a single database, or do we keep the database-per-shop structure?

    I am currently trying to find supporting evidence for either avenue. The company I am designing the solution for prefers the per-shop database structure, because they believe it will allow the sites to scale. But my argument is that the shops' databases probably won't get so busy over the next few years that they exceed the capacity of a MySQL database and a "no expenses spared" hardware set-up. I am wondering if anyone has any advice either way? Does anyone have experience with websites / ecommerce sites that have tables containing millions of records? I know there is probably not a clear answer here, but at what stage do we have too many records or too-large table files to have a fast-loading site?

    Also, if anyone has any advice on sources of information - books, websites, etc. where I can do further research - it would be highly appreciated! Cheers, imanc
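
    If the single-database route wins, a hedged sketch of the pattern under discussion (columns are illustrative): every shop-scoped table carries shop_id, and secondary indexes lead with it, so per-shop queries stay selective as the table grows into millions of rows:

        CREATE TABLE orders (
            id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            shop_id     SMALLINT UNSIGNED NOT NULL,
            customer_id BIGINT UNSIGNED NOT NULL,
            created_at  DATETIME NOT NULL,
            total       DECIMAL(10,2) NOT NULL,
            KEY idx_shop_created (shop_id, created_at)  -- per-shop, time-ordered
        ) ENGINE=InnoDB;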

  • Should I go with Varnish instead of nginx?

    - by gotts
    I really like nginx. But recently I've found that Varnish gives you the opportunity to implement a smart caching reverse-proxy layer (with URL purging). I have a cluster of Mongrels which are pretty resource-intensive, so if this caching layer can remove some load from the Mongrels, that would be a great thing. I didn't find a way to implement the same kind of caching layer for application pages with nginx (static content is cacheable, of course). Should I use Varnish instead? What would you recommend?
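
    For reference, a hedged sketch of what the purging side looks like in Varnish 2.x VCL (the ACL is illustrative); this is the piece nginx of this era has no built-in equivalent for:

        acl purgers { "127.0.0.1"; }

        sub vcl_recv {
            if (req.request == "PURGE") {
                if (!client.ip ~ purgers) {
                    error 405 "Not allowed";
                }
                purge_url(req.url);      # invalidate every object for this URL
                error 200 "Purged";
            }
        }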

  • Prepending to a multi-gigabyte file.

    - by dafmetal
    What would be the most performant way to prepend a single character to a multi-gigabyte file (in my practical case, a 40GB file)? There is no limitation on the implementation, meaning it can be done through a tool, a shell script, or a program in any programming language.
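
    For what it's worth: no mainstream filesystem can prepend in place, so any solution rewrites the data once. A hedged sketch of the plain streaming approach (file names illustrative):

        # One sequential pass; needs ~40GB of free space for the copy.
        { printf 'X'; cat huge.bin; } > huge.new && mv huge.new huge.bin

        # Without the extra space, the file must be shifted in place by
        # copying blocks from the end backwards (e.g. with dd) - slower,
        # and unsafe if interrupted.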

  • oprofile unable to produce call graph

    - by aaa
    Hello, I am trying to use oprofile to generate a call graph. The compiler is g++, the platform is Linux x86-64, the linker is gfortran. The C++ code is compiled with -fno-omit-frame-pointer. oprofile is started with --callgraph=25, and the report is run with --callgraph. The call graph is produced, but it only includes self time, which is not much use. What am I missing?
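
    A hedged checklist of the usual invocation (the program name is illustrative): the --callgraph depth must be set before the daemon starts, and the report needs its own --callgraph flag; frame pointers are also needed in every layer being profiled, not just your own code:

        opcontrol --deinit                  # clear any stale daemon settings
        opcontrol --init
        opcontrol --no-vmlinux --callgraph=25
        opcontrol --start
        ./myprogram                         # run the workload under test
        opcontrol --stop
        opcontrol --dump
        opreport --callgraph ./myprogram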

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2-hour period, 5-10 million inserts to a 34GB table within a single master/slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single-field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two-hour period. So, I have a couple of general questions.

    1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently, and approximating the order of insertion with something like a datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem.

    So, I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID. I don't really see what that achieves, though, that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes? Thanks, Ben
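
    On question 1), a hedged sketch (table name illustrative): with InnoDB's default auto-increment lock mode, a single multi-row INSERT is assigned consecutive ids, so the whole batch's keys can be recovered from LAST_INSERT_ID() without giving up the auto-increment PK:

        INSERT INTO my_table (a, b) VALUES
            (1, 'x'), (2, 'y'), (3, 'z');

        -- Returns the id of the FIRST row of this batch; the n rows occupy
        -- n consecutive ids, in VALUES order.
        SELECT LAST_INSERT_ID();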

  • Why does SQLite take such a long time to fetch the data?

    - by Derk
    I have two possible queries, both giving the result set I want. Query one takes about 30ms to run, but 150ms to fetch the data from the database:

        SELECT id
        FROM featurevalues AS featval3
        WHERE featval3.feature IN (?,?,?,?)
          AND EXISTS (SELECT 1
                      FROM product_to_value, product_to_value AS prod2, features, featurevalues
                      WHERE product_to_value.feature = features.int
                        AND product_to_value.value = featurevalues.id
                        AND features.id = ?
                        AND featurevalues.id IN (?,?)
                        AND product_to_value.product = prod2.product
                        AND prod2.value = featval3.id)

    Query two takes about 3ms - this is the one I therefore prefer - but also takes 170ms to fetch the data:

        SELECT (SELECT prod2.value
                FROM product_to_value, product_to_value AS prod2, features, featurevalues
                WHERE product_to_value.feature = features.int
                  AND product_to_value.value = featurevalues.id
                  AND features.id = ?
                  AND featurevalues.id IN (?,?)
                  AND product_to_value.product = prod2.product
                  AND prod2.value = featval3.id) AS id
        FROM featurevalues AS featval3
        WHERE featval3.feature IN (?,?,?,?)

    The 170ms seems to be related to the number of rows from table featval3. After an index is used on featval3.feature IN (?,?,?,?), 151 items "remain" in featval3. Is there something obvious I am missing regarding the slow fetching? As far as I know everything is properly indexed. I am confused because the second query only takes a blazing 3ms to run.
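
    One hedged observation: SQLite does essentially no work at prepare time - rows are computed as they are stepped over - so the "3ms to run" largely measures statement preparation, and the real cost of both queries lands in the fetch phase. Prefixing each full query with EXPLAIN QUERY PLAN shows which indexes that phase actually uses; a trimmed form of the idea:

        EXPLAIN QUERY PLAN
        SELECT id
        FROM featurevalues AS featval3
        WHERE featval3.feature IN (?,?,?,?);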

  • How to setup whole-disk encryption with dual boot on a MacBook Pro (generation 9,2 with 12.04)

    - by blueyed
    I can install Ubuntu 12.04 on the MacBook when using the "noapic" kernel boot option, using the alternate amd64+mac image (from http://cdimage.ubuntu.com/releases/12.04/release/ ). But after installation the screen turns black after trying to boot "Windows" (as named in the boot menu that shows up when holding Option/Alt during startup).

    I want to use whole-disk encryption, and given that only one free partition is available, I have set up LVM to do so:

    - vg0 contains bootlv and cryptlv
    - in cryptlv I have set up encryption with another LVM volume group (vg1, which holds swaplv, rootlv and homelv)

    I have not installed GRUB during installation (because I was not sure about the partition), and when trying to install it later on /dev/sda4 (which contains the outer LVM) it complained that it could not determine the file system, and --force did not help either. The black screen / behavior looks similar to starting the installer without enabling the noapic option.

  • PostgreSQL: Why does this simple query not use the index?

    - by David
    I have a table t with a column c, which is an int and has a btree index on it. Why does the following query not utilize this index?

        explain select c from t group by c;

    The result I get is:

        HashAggregate  (cost=1005817.55..1005817.71 rows=16 width=4)
          ->  Seq Scan on t  (cost=0.00..946059.84 rows=23903084 width=4)

    My understanding of indexes is limited, but I thought such queries were the purpose of indexes.
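
    A hedged explanation with a quick check: PostgreSQL of this era has no index-only scans, so even an indexed column must be fetched from the heap to check row visibility, and for a GROUP BY over all ~24M rows the sequential scan is genuinely the cheaper plan. The index earns its keep once the query is selective:

        -- Expect an index scan here, since only a few rows qualify:
        EXPLAIN SELECT c FROM t WHERE c = 42;

        -- The planner's choice above can also be sanity-checked by costing
        -- the forced alternative:
        -- SET enable_seqscan = off; EXPLAIN SELECT c FROM t GROUP BY c;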

  • Speed up math code in C# by writing a C dll?

    - by Projectile Fish
    I have a very large nested for loop in which some multiplications and additions are performed on floating point numbers:

        for (int i = 0; i < length1; i++)
        {
            s = GetS(i);
            c = GetC(i);
            for (int j = 0; j < length2; j++)
            {
                double oldU = u[j];
                u[j] = c * oldU + s * omega[i][j];
                omega[i][j] = c * omega[i][j] - s * oldU;
            }
        }

    This loop is taking up the majority of my processing time and is a bottleneck. Would I be likely to see any speed improvements if I rewrote this loop in C and interfaced to it from C#?
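
    A hedged sketch of the interop shape (file and function names are illustrative). The key constraint is marshalling overhead: cross the boundary once per outer iteration at most, never per element:

        /* rotate.c - compile to rotate.dll (Windows) or librotate.so:
         *
         *   void rotate(double* u, double* omega_i, int n, double c, double s) {
         *       for (int j = 0; j < n; j++) {
         *           double oldU = u[j];
         *           u[j]        = c * oldU + s * omega_i[j];
         *           omega_i[j]  = c * omega_i[j] - s * oldU;
         *       }
         *   }
         */
        using System.Runtime.InteropServices;

        static class Native
        {
            // double[] is blittable, so the arrays are pinned and passed as
            // raw pointers with no per-element copy.
            [DllImport("rotate.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern void rotate(double[] u, double[] omegaRow,
                                             int n, double c, double s);
        }

        // Call once per i, handing the entire inner loop across the boundary:
        // Native.rotate(u, omega[i], length2, c, s);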
