Search Results

Search found 28685 results on 1148 pages for 'query performance'.

Page 192/1148 | < Previous Page | 188 189 190 191 192 193 194 195 196 197 198 199  | Next Page >

  • Django: Paginator + raw SQL query

    - by Silver Light
    Hello! I'm using the Django Paginator everywhere on my website and even wrote a special template tag to make it more convenient. But now I've reached a point where I need a complex custom raw SQL query that, without a LIMIT, returns about 100K records. How can I use the Django Paginator with a custom query? A simplified example of my problem — my model:

        class PersonManager(models.Manager):
            def complicated_list(self):
                from django.db import connection
                # Real query is much more complex
                cursor = connection.cursor()
                cursor.execute("""SELECT * FROM `myapp_person`""")
                result_list = []
                for row in cursor.fetchall():
                    result_list.append(row[0])
                return result_list

        class Person(models.Model):
            name = models.CharField(max_length=255)
            surname = models.CharField(max_length=255)
            age = models.IntegerField()
            objects = PersonManager()

    The way I use pagination with the Django ORM:

        all_objects = Person.objects.all()
        paginator = Paginator(all_objects, 10)
        try:
            page = int(request.GET.get('page', '1'))
        except ValueError:
            page = 1
        try:
            persons = paginator.page(page)
        except (EmptyPage, InvalidPage):
            persons = paginator.page(paginator.num_pages)

    This way Django gets very smart and adds a LIMIT to the query when it executes it. But when I use the custom manager (all_objects = Person.objects.complicated_list()), all the data is selected, and only then is the Python list sliced, which is VERY slow. How can I make my custom manager behave like the built-in one?
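
    Paginator only needs the object list to report a length and to support slicing, so a small wrapper that pushes LIMIT/OFFSET into the raw SQL keeps the paging in the database. A rough sketch under that assumption — the RawQueryPage class name and the COUNT(*) wrapper are mine, not Django API, and a real query would carry its own parameters:

        from django.db import connection

        class RawQueryPage(object):
            """Lazily pages a raw SQL query; Paginator only uses len() and slicing."""
            def __init__(self, sql, params=None):
                self.sql = sql
                self.params = params or []

            def __len__(self):
                cursor = connection.cursor()
                cursor.execute("SELECT COUNT(*) FROM (%s) sub" % self.sql, self.params)
                return cursor.fetchone()[0]

            def __getitem__(self, key):
                # Paginator asks for object_list[bottom:top], so only slices are needed
                if isinstance(key, slice):
                    offset, limit = key.start, key.stop - key.start
                    cursor = connection.cursor()
                    cursor.execute(self.sql + " LIMIT %d OFFSET %d" % (int(limit), int(offset)),
                                   self.params)
                    return cursor.fetchall()
                raise TypeError("only slice access is supported")

        # usage sketch:
        # paginator = Paginator(RawQueryPage("SELECT * FROM `myapp_person`"), 10)

    Paginator falls back to len() when the object list has no count() method, so the wrapper issues one COUNT query plus one LIMIT/OFFSET query per page instead of fetching everything.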

    Read the article

  • Efficient algorithm for Next button on a MySQL result set

    - by David Grayson
    I have a website that lets people view rows in a table (each row is a picture). There are more than 100,000 rows. You can view different subsets of the rows, and you can view them with different sort orders. While you are viewing one of the rows, you can click the "Next" or "Previous" buttons to go to the next/previous row in the list. How would you implement the "Next" and "Previous" features of the website? More specifically, if you have an arbitrary query that returns a list of up to 100,000+ rows, and you know some information about the current row someone is viewing, how do you determine the NEXT row efficiently? Here is the pseudo-code of the solution I came up with when the website was young, and it worked well when there were only 1000 rows, but now that there are 100,000 rows I think it is eating up too much memory.

        int nextRowId(string query, int currentRowId) {
            array allRowIds = mysql_query(query);  // Takes up a lot of memory!
            int currentIndex = (index of currentRowId in allRowIds);  // Takes time!
            return allRowIds[currentIndex+1];
        }

    While you are thinking about this problem, remember that the website can store more information about the current row than just its ID (for example, the position of the current row in the result set), and this information can be used as a hint to help determine the ID of the next row. Edit: Sorry for not mentioning this earlier, but this isn't just a static website: rows can often be added to the list, and rows can be re-ordered in the list. (Much rarer, rows can be removed from the list.) I think that I should worry about that kind of thing, but maybe you can convince me otherwise.
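
    The usual answer here is keyset (seek) pagination: instead of loading every id, store the current row's sort-key values (as the question hints) and ask the database for the first row that sorts strictly after it, using the same ORDER BY as the listing. A hedged sketch, assuming the listing is ordered by (date_taken, id) — the table and column names are placeholders, and :cur_date / :cur_id are bind values taken from the row currently on screen:

        -- Next row: the first row that sorts strictly after the current one.
        -- An index on (date_taken, id) makes this a single index seek.
        SELECT id
        FROM pictures
        WHERE (date_taken > :cur_date)
           OR (date_taken = :cur_date AND id > :cur_id)
        ORDER BY date_taken, id
        LIMIT 1;

        -- "Previous" is the same query with < comparisons and ORDER BY ... DESC.

    Because the comparison is against values rather than a remembered position, inserts and re-orderings elsewhere in the list don't break the navigation; they just change which row happens to come next.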

    Read the article

  • SQL won't work? It doesn't come up with errors either

    - by Stefan
    Hey there, I have a PHP function which checks to see if variables are set and then adds them onto my SQL query. However I don't seem to be getting any results back!?

        $where_array = array();
        if (array_key_exists("location", $_GET)) {
            $location = addslashes($_GET['location']);
            $where_array[] = "`mainID` = '".$location."'";
        }
        if (array_key_exists("gender", $_GET)) {
            $gender = addslashes($_GET["gender"]);
            $where_array[] = "`gender` = '".$gender."'";
        }
        if (array_key_exists("hair", $_GET)) {
            $hair = addslashes($_GET["hair"]);
            $where_array[] = "`hair` = '".$hair."'";
        }
        if (array_key_exists("area", $_GET)) {
            $area = addslashes($_GET["area"]);
            $where_array[] = "`locationID` = '".$area."'";
        }
        $where_expr = '';
        if ($where_array) {
            $where_expr = "WHERE " . implode(" AND ", $where_array);
        }
        $sql = "SELECT `postID` FROM `posts` ". $where_expr;
        $dbi = new db();
        $result = $dbi->query($sql);
        $r = mysql_fetch_row($result);

    I'm trying to call the data after in a list like so:

        $dbi = new db();
        $offset = ($currentpage - 1) * $rowsperpage;
        // get the info from the db
        $sql .= " ORDER BY `time` DESC LIMIT $offset, $rowsperpage";
        $result = $dbi->query($sql);
        // while there are rows to be fetched...
        while ($row = mysql_fetch_object($result)){
            // echo data
            echo $row['text'];
        } // end while

    Anyone got any ideas why I am not retrieving any data? -Stefan
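
    Hard to say without seeing the generated SQL, but two common causes with this pattern are empty GET values producing conditions like `gender` = '', and the display loop asking for columns the query never selected (it selects only postID yet echoes $row['text']; also, mysql_fetch_object returns objects, so that would need $row->text or mysql_fetch_assoc). A defensive sketch along those lines — it skips empty parameters, selects the columns the loop needs, and surfaces the MySQL error; the mysql_* calls are kept only because the original uses them, and the db wrapper class is assumed to behave like mysql_query:

        $filters = array('mainID' => 'location', 'gender' => 'gender',
                         'hair' => 'hair', 'locationID' => 'area');
        $where_array = array();
        foreach ($filters as $column => $param) {
            if (isset($_GET[$param]) && $_GET[$param] !== '') {
                // mysql_real_escape_string is safer than addslashes
                $value = mysql_real_escape_string($_GET[$param]);
                $where_array[] = "`$column` = '$value'";
            }
        }
        $where_expr = $where_array ? 'WHERE ' . implode(' AND ', $where_array) : '';
        $sql = "SELECT `postID`, `text`, `time` FROM `posts` $where_expr";
        $result = mysql_query($sql) or die(mysql_error());  // surface the real error

    Echoing $sql before running it (or logging it) usually shows immediately which condition is filtering everything out.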

    Read the article

  • Array performance question

    - by Konrad
    I am very familiar with STL vector (and other container) performance guarantees, however I can't seem to find anything concrete about plain arrays. Are pointer arithmetic and [] methods constant or linear time?

    Read the article

  • magento category tree menu query being called on every page

    - by user1173309
    I have this query being called on every page in Magento CE 1.6.2. To find out where it is being called from, I have disabled all modules and removed the customizations done, but it's still being called. This query is slowing up the page loading time and I am at my wit's end trying to find out how I can stop it from being executed. The query is given below; for simplicity, I have removed a lot of category ids to keep the SQL short. It would be great if I could get solutions or hints to stop this query being called.

        SELECT e.*,
               IF(at_is_active.value_id > 0, at_is_active.value, at_is_active_default.value) AS is_active,
               IF(at_include_in_menu.value_id > 0, at_include_in_menu.value, at_include_in_menu_default.value) AS include_in_menu,
               core_url_rewrite.request_path
        FROM catalog_category_entity AS e
        INNER JOIN catalog_category_entity_int AS at_is_active_default
            ON (at_is_active_default.entity_id = e.entity_id)
            AND (at_is_active_default.attribute_id = '119')
            AND at_is_active_default.store_id = 0
        LEFT JOIN catalog_category_entity_int AS at_is_active
            ON (at_is_active.entity_id = e.entity_id)
            AND (at_is_active.attribute_id = '119')
            AND (at_is_active.store_id = 1)
        INNER JOIN catalog_category_entity_int AS at_include_in_menu_default
            ON (at_include_in_menu_default.entity_id = e.entity_id)
            AND (at_include_in_menu_default.attribute_id = '934')
            AND at_include_in_menu_default.store_id = 0
        LEFT JOIN catalog_category_entity_int AS at_include_in_menu
            ON (at_include_in_menu.entity_id = e.entity_id)
            AND (at_include_in_menu.attribute_id = '934')
            AND (at_include_in_menu.store_id = 1)
        LEFT JOIN core_url_rewrite
            ON (core_url_rewrite.category_id = e.entity_id)
            AND (core_url_rewrite.is_system = 1
                 AND core_url_rewrite.product_id IS NULL
                 AND core_url_rewrite.store_id = '1'
                 AND id_path LIKE 'category/%')
        WHERE (e.entity_type_id = '9')
          AND (e.entity_id IN ('105', '125', '284', '285', '286', '288', '289', '185', '463', '464', '465', '625'))
          AND (e.entity_id NOT IN ('140', '145', '530', '531', '775'))
          AND (IF(at_is_active.value_id > 0, at_is_active.value, at_is_active_default.value) = '1')
          AND (IF(at_include_in_menu.value_id > 0, at_include_in_menu.value, at_include_in_menu_default.value) = '1')

    Cheers Arjun
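
    For what it's worth, this shape of query (category entities joined to the is_active and include_in_menu attributes plus core_url_rewrite) is typically the top navigation / category menu collection, which Magento normally serves from cache — checking that the "Blocks HTML output" cache is enabled and switching on flat catalog categories (System > Configuration > Catalog > Frontend) usually removes it from most page loads. To confirm which request actually fires it without touching Magento code, MySQL's general query log is a quick, module-agnostic check (turn it off again afterwards, it logs every statement):

        -- run as a MySQL user with SUPER privilege
        SET GLOBAL general_log_file = '/tmp/mysql-all.log';
        SET GLOBAL general_log = 'ON';
        -- load one storefront page in the browser, then:
        SET GLOBAL general_log = 'OFF';

    Correlating the timestamps in that log with a Varien_Profiler-enabled page load narrows down the block that triggers it.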

    Read the article

  • which is better, creating a materialized view or a new table?

    - by Carson
    I have some demanding MySQL queries that need to grab the same up-to-date datasets from 5-7 MySQL tables. I am thinking of creating a table or materialized view to gather all the demanding columns from the other tables, so as to increase performance. If I create that table, I may need to do extra insert / update / delete operations each time the other tables are updated. If I create a materialized view, I am worried whether the performance can really be improved, because the data in the other tables changes very frequently; most likely the view would need to be rebuilt every time before selecting from it. Any ideas? e.g. how to cache? other extra measures I can take?
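
    One thing worth noting: MySQL has no native materialized views, so a "materialized view" there is just a summary table you refresh yourself, either from triggers on the source tables (kept current on every write) or on a schedule (cheaper when the sources change constantly). A sketch of the scheduled variant — the table, columns and interval are invented for illustration:

        -- requires the event scheduler: SET GLOBAL event_scheduler = ON;
        CREATE TABLE order_summary (
          product_id INT PRIMARY KEY,
          total_qty  INT,
          last_sale  DATETIME
        );

        CREATE EVENT refresh_order_summary
        ON SCHEDULE EVERY 1 MINUTE
        DO
          REPLACE INTO order_summary (product_id, total_qty, last_sale)
          SELECT product_id, SUM(qty), MAX(created_at)
          FROM order_items
          GROUP BY product_id;

    The trade-off is staleness versus write amplification: the scheduled refresh tolerates a minute of lag, while triggers keep the summary exact but add cost to every insert/update/delete on the source tables.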

    Read the article

  • how to use a list containing the query result with hibernate and display it in jsp

    - by kawtousse
    Hi everyone, in my servlet I construct the query like the following:

        net.sf.hibernate.Session s = null;
        net.sf.hibernate.Transaction tx;
        try {
            s = HibernateUtil.currentSession();
            tx = s.beginTransaction();
            Query query = s.createQuery("select opcemployees.Nom,opcemployees.Prenom,dailytimesheet.TrackingDate,dailytimesheet.Activity,"
                + "dailytimesheet.ProjectCode,dailytimesheet.WAName,dailytimesheet.TaskCode,"
                + "dailytimesheet.TimeSpent,dailytimesheet.PercentTaskComplete from Opcemployees opcemployees,Dailytimesheet dailytimesheet "
                + "where opcemployees.Matricule=dailytimesheet.Matricule and dailytimesheet.Etat=3 "
                + "group by opcemployees.Nom,opcemployees.Prenom");
            List opdts = query.list();
            request.setAttribute("items", opdts);
            request.getRequestDispatcher("EspaceValidation.jsp").forward(request, response);
        } catch (HibernateException e) {
            e.printStackTrace();
        }

    But in the JSP I can't display the result correctly. I do the following in the JSP:

        <table>
          <c:forEach items="${items}" var="item">
            <tr>
              <td>${item}</td>
            </tr>
          </c:forEach>
        </table>

    Thanks for help.
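
    Since the HQL selects several scalar columns, query.list() returns each row as an Object[], so the JSTL loop can index into the array rather than printing the array object itself. A sketch using the column order from the query above (indices 0-8 correspond to Nom through PercentTaskComplete):

        <table>
          <c:forEach items="${items}" var="item">
            <tr>
              <td>${item[0]}</td> <!-- Nom -->
              <td>${item[1]}</td> <!-- Prenom -->
              <td>${item[2]}</td> <!-- TrackingDate -->
              <td>${item[7]}</td> <!-- TimeSpent -->
            </tr>
          </c:forEach>
        </table>

    An alternative is a "select new SomeDto(...)" constructor expression in the HQL, which gives the JSP named properties instead of positional indexes.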

    Read the article

  • XSLT for-each from query results

    - by Ben Record
    I've been on a hunt for a while trying to find a solution to this but I cannot find anywhere that addresses this problem. I'm running a SQL query through XSLT which will return three rows. Here is the query:

        <query name="OrderedProductNames" rowElementName="OrderedItem">
          <sql>
            <![CDATA[
              select OrderedProductName, Quantity
              from Orders_ShoppingCart
              where OrderNumber = 101689 // Hard coded order number for testing purposes.
            ]]>
          </sql>
        </query>

    Ideally, I would like to iterate through each row returned and do a choose/when which is tested on a variable from the current row being inspected in the for-each loop, but I am not entirely sure this is possible. My secondary thought would be to use the for-each loop as a way to inject hidden HTML input elements with the values I would need; then I could write a javascript function to complete what I'm trying to do. Any suggestion on how I would go about completing either task would be greatly appreciated.

    Read the article

  • SQL Server architecture guidance

    - by Liam
    Hi, we are designing a new version of our existing product on a new schema. It's an internal web application with possibly 100 concurrent users (max). This will run on a SQL Server 2008 database. One of the discussion items recently is whether we should have a single database or split the database, for performance reasons, across 2 separate databases. The database could grow anywhere from 50-100GB over 5 years. We are developers and not DBAs, so it would be nice to get some general guidance. [I know the answer is not simple as it depends on the schema, archiving policy, amount of data etc.]

    Option 1: Single main database [this is my preferred option]. The plan would be to have all the tables in a single database and possibly to use filegroups and partitioning to separate the data across multiple disks if required [use schemas if appropriate]. This should deal with the performance concerns. One of the comments on this was that a single server instance would still be processing this data, so there would still be a processing bottleneck. For reporting we could have a separate reporting DB, but this is still being discussed.

    Option 2: Split the database into 2 separate databases. DB1 - Customers, Accounts, Customer resources etc. DB2 - this would contain the bulk of the data [i.e. vehicle tracking data, financial transaction tables etc]. These tables would typically contain a lot of data. [It could reside on a separate server if required.] This plan would involve keeping the main data in a smaller database [DB1] and retaining the [mainly] read-only, transaction-type data in a separate DB [DB2]. The UI would mainly read from DB1 and thus be more responsive. [I'm aware that this option makes it harder for referential integrity to be enforced.]

    Points for consideration: as we are at the design stage we can at least make proper use of indexes to deal with performance issues, so that's why option 1 is attractive to me, and it's the more standard approach. For both options we are considering implementing an archiving database. Apologies for the long question. In summary, the question is: 1 DB or 2? Thanks in advance, Liam
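
    If the single-database route wins, the bulk tables can still be spread across disks with filegroups plus table partitioning rather than a second database (note that on SQL Server 2008 partitioning was an Enterprise Edition feature). A hedged T-SQL sketch of date-based partitioning for a tracking-style table — all names, boundary dates and filegroups are illustrative and the filegroups must already exist:

        CREATE PARTITION FUNCTION pf_TrackingByYear (datetime)
            AS RANGE RIGHT FOR VALUES ('2011-01-01', '2012-01-01', '2013-01-01');

        CREATE PARTITION SCHEME ps_TrackingByYear
            AS PARTITION pf_TrackingByYear TO (fgArchive, fg2011, fg2012, fgCurrent);

        CREATE TABLE dbo.VehicleTracking (
            TrackingId  bigint IDENTITY NOT NULL,
            VehicleId   int      NOT NULL,
            RecordedAt  datetime NOT NULL,
            CONSTRAINT PK_VehicleTracking PRIMARY KEY (RecordedAt, TrackingId)
        ) ON ps_TrackingByYear (RecordedAt);

    This keeps referential integrity and a single backup/restore unit while still letting old partitions sit on slower storage and new data on fast disks, which is most of what the two-database split was trying to buy.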

    Read the article

  • How much does Javascript garbage collection affect performance?

    - by Long Ouyang
    I'm writing a bunch of scripts that present images serially (e.g. 1 per second) and require the user to make either a keyboard or mouse response. I'm using closures to handle the timing of image presentation and user input. This causes garbage collection to happen pretty frequently and I'm wondering if that will affect the performance (viz. timing of image presentation).
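
    One way to keep GC pauses out of the presentation loop is to allocate everything before the trials start (preload the Image objects, reuse one handler) so almost nothing is created per trial. A rough sketch — the urls array and the 'stim' element id are assumptions:

        // Preload stimuli once; nothing sizeable is allocated inside the trial loop
        var stimuli = [];
        for (var i = 0; i < urls.length; i++) {
            var img = new Image();
            img.src = urls[i];
            stimuli.push(img);
        }

        var current = 0;
        function showNext() {
            document.getElementById('stim').src = stimuli[current].src;
            current += 1;
            if (current < stimuli.length) {
                setTimeout(showNext, 1000);   // one image per second
            }
        }
        showNext();

    With per-trial allocation kept this small, collections are short and rare, and at a 1-second presentation rate they are unlikely to be visible; recording actual timestamps per trial is still the only way to confirm that.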

    Read the article

  • LINQ : How to query how to sort result by most similarity/equality

    - by aNui
    I want to do a search for music instruments, each of which has Name, Category and Origin information, as I asked in my post. But now I want to sort/group the results by similarity/equality to the keyword. For example, if I have the list { Harp, Piano, Drum, Guitar, Guitarrón } and I query "p", the result should be { Piano, Harp }, but it shows Harp first because of the list's order; and if I add { Grand Piano } to the list and query "piano", the result should be like { Piano, Grand Piano }. Here's my code:

        static IEnumerable<MInstrument> InstrumentsSearch(IEnumerable<MInstrument> InstrumentsList, string query, MInstrument.Category[] SelectedCategories, MInstrument.Origin[] SelectedOrigins)
        {
            var result = InstrumentsList
                .Where(item => SelectedCategories.Contains(item.category))
                .Where(item => SelectedOrigins.Contains(item.origin))
                .Where(item =>
                    {
                        if ((" " + item.Name.ToLower()).Contains(" " + query.ToLower())
                            || item.Name.IndexOf(query) != -1)
                        {
                            return true;
                        }
                        return false;
                    }
                )
                .Take(30);
            return result.ToList<MInstrument>();
        }

    Or the result may be like my old self-invented algorithm that I called "by order of occurrence", which is just OK to me. Is there any way to do that? Please tell me. Thanks in advance.
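
    Ordering by where the match occurs in the name, and then by name length, gives exactly the "Piano before Harp" and "Piano before Grand Piano" behaviour without a hand-rolled scorer. A sketch that could replace the last Where in the method above (same MInstrument type assumed):

        string q = query.ToLower();
        var result = InstrumentsList
            .Where(item => SelectedCategories.Contains(item.category))
            .Where(item => SelectedOrigins.Contains(item.origin))
            .Select(item => new { Item = item, Pos = item.Name.ToLower().IndexOf(q) })
            .Where(x => x.Pos >= 0)              // keep only names containing the query
            .OrderBy(x => x.Pos)                 // earlier match sorts first: Piano before Harp
            .ThenBy(x => x.Item.Name.Length)     // shorter name first: Piano before Grand Piano
            .Select(x => x.Item)
            .Take(30)
            .ToList();

    For "p" this yields { Piano, Harp } (positions 0 and 3), and for "piano" it yields { Piano, Grand Piano } (positions 0 and 6), matching the examples in the question.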

    Read the article

  • Seam app with JBoss 'minimal' Config?

    - by Shadowman
    I'd like to improve the performance of my Seam apps and JBoss appserver, particularly by removing things that aren't necessary in the standard configuration. Ideally, I'd like to be able to run it using the "minimal" profile. Can anyone give me any guidance as to what is needed to run a Seam app using "minimal"? Here are the kind of things my app requires: JPA, using Hibernate with a PostgreSQL backend EJB3 JSF (RichFaces/Facelets) E-mail, eventually, although not required at this particular moment I'll be developing my app using JBoss Tools on Eclipse, so I would also need anything that is required by the tools for development and deployment. I've found that the default configuration just has too many additional components and features installed by default, and that greatly affects performance when I'm trying to develop. Any help you can give would be great! Thanks!

    Read the article

  • need help optimizing oracle query

    - by deming
    I need help in optimizing the following query. It is taking a long time to finish — almost 213 seconds. Because of some constraints, I cannot add an index and have to live with the existing ones.

        INSERT INTO temp_table_1 (USER_ID, role_id, participant_code, status_id)
        WITH A AS (SELECT USER_ID user_id, ROLE_ID, STATUS_ID, participant_code
                   FROM USER_ROLE WHERE participant_code IS NOT NULL),    --1
             B AS (SELECT ROLE_ID FROM CMP_ROLE WHERE GROUP_ID = 3),
             C AS (SELECT USER_ID FROM USER)                              --2
        SELECT USER_ID, ROLE_ID, PARTICIPANT_CODE, MAX(STATUS_ID)
        FROM A
        INNER JOIN B USING (ROLE_ID)
        INNER JOIN C USING (USER_ID)
        GROUP BY USER_ID, role_id, participant_code;

        --1 = query when run alone takes 100+ seconds
        --2 = query when run alone takes 19 seconds

        DELETE temp_table_1
        WHERE ROWID NOT IN (
            SELECT a.ROWID
            FROM temp_table_1 a, USER_ROLE b
            WHERE a.status_id = b.status_id
              AND (b.ACTIVE IN (1)
                   OR (b.ACTIVE IN (0,3)
                       AND SYSDATE BETWEEN b.effective_from_date AND b.effective_to_date))
        );

    It seems like the person who wrote the query is trying to get everything into a temp table first and then deleting records from the temp table; whatever is left is the actual result. Can't it be done in such a way that there is no need for the delete? We would just get the results needed, since that would save time.
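
    If the intent of the delete is simply "keep only rows whose role is currently active", the active/date test can often be folded into the insert so the discarded rows are never materialised. A hedged sketch built only from the columns visible above — it is equivalent only if that test is meant to apply to the same USER_ROLE rows being aggregated, which needs checking against the real data:

        INSERT INTO temp_table_1 (user_id, role_id, participant_code, status_id)
        SELECT ur.user_id, ur.role_id, ur.participant_code, MAX(ur.status_id)
        FROM   user_role ur
        JOIN   cmp_role  cr  ON cr.role_id = ur.role_id AND cr.group_id = 3
        JOIN   user      u   ON u.user_id  = ur.user_id      -- the "USER" table from the question
        WHERE  ur.participant_code IS NOT NULL
        AND   (ur.active = 1
               OR (ur.active IN (0, 3)
                   AND SYSDATE BETWEEN ur.effective_from_date AND ur.effective_to_date))
        GROUP BY ur.user_id, ur.role_id, ur.participant_code;

    Even if the semantics require two passes, comparing the execution plans of the original --1 subquery with and without the active filter would show whether the 100+ seconds are spent on the full USER_ROLE scan or on the later anti-join.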

    Read the article

  • Why would Linux VM in vSphere ESXi 5.5 show dramatically increased disk i/o latency?

    - by mhucka
    I'm stumped and I hope someone else will recognize the symptoms of this problem. Hardware: new Dell T110 II, dual-core Pentium G860 2.9 GHz, onboard SATA controller, one new 500 GB 7200 RPM cabled hard drive inside the box, other drives inside but not mounted yet. No RAID. Software: fresh CentOS 6.5 virtual machine under VMware ESXi 5.5.0 (build 174 + vSphere Client). 2.5 GB RAM allocated. The disk is how CentOS offered to set it up, namely as a volume inside an LVM Volume Group, except that I skipped having a separate /home and simply have / and /boot. CentOS is patched up, ESXi patched up, latest VMware tools installed in the VM. No users on the system, no services running, no files on the disk but the OS installation. I'm interacting with the VM via the VM virtual console in vSphere Client.

    Before going further, I wanted to check that I configured things more or less reasonably. I ran the following command as root in a shell on the VM:

        for i in 1 2 3 4 5 6 7 8 9 10; do
          dd if=/dev/zero of=/test.img bs=8k count=256k conv=fdatasync
        done

    I.e., just repeat the dd command 10 times, which results in printing the transfer rate each time. The results are disturbing. It starts off well:

        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 20.451 s, 105 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 20.4202 s, 105 MB/s
        ...

    but after 7-8 of these, it then prints

        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 82.9779 s, 25.9 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 84.0396 s, 25.6 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 103.42 s, 20.8 MB/s

    If I wait a significant amount of time, say 30-45 minutes, and run it again, it again goes back to 105 MB/s, and after several rounds (sometimes a few, sometimes 10+), it drops to ~20-25 MB/s again. Plotting the disk latency in vSphere's interface, it shows periods of high disk latency hitting 1.2-1.5 seconds during the times that dd reports the low throughput. (And yes, things get pretty unresponsive while that's happening.)

    What could be causing this? I'm comfortable that it is not due to the disk failing, because I also had configured two other disks as an additional volume in the same system. At first I thought I did something wrong with that volume, but after commenting the volume out from /etc/fstab and rebooting, and trying the tests on / as shown above, it became clear that the problem is elsewhere. It is probably an ESXi configuration problem, but I'm not very experienced with ESXi. It's probably something stupid, but after trying to figure this out for many hours over multiple days, I can't find the problem, so I hope someone can point me in the right direction.

    (P.S.: yes, I know this hardware combo won't win any speed awards as a server, and I have reasons for using this low-end hardware and running a single VM, but I think that's beside the point for this question [unless it's actually a hardware problem].)

    ADDENDUM #1: Reading other answers such as this one made me try adding oflag=direct to dd. However, it makes no difference in the pattern of results: initially the numbers are higher for many rounds, then they drop to 20-25 MB/s. (The initial absolute numbers are in the 50 MB/s range.)

    ADDENDUM #2: Adding sync ; echo 3 > /proc/sys/vm/drop_caches into the loop does not make a difference at all.

    ADDENDUM #3: To take out further variables, I now run dd such that the file it creates is larger than the amount of RAM on the system. The new command is dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct. Initial throughput numbers with this version of the command are ~50 MB/s. They drop to 20-25 MB/s when things go south.

    ADDENDUM #4: Here is the output of iostat -d -m -x 1 running in another terminal window while performance is "good" and then again when it's "bad". (While this is going on, I'm running dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct.) First, when things are "good", it shows this: When things go "bad", iostat -d -m -x 1 shows this:
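
    A single sequential dd stream is a blunt instrument for this; a cross-check with fio makes it easier to separate guest caching effects from genuine device latency, and its per-interval latency percentiles line up more directly with the vSphere latency chart. A sketch under the same constraints as the dd tests (file larger than the 2.5 GB of RAM, direct I/O); on CentOS 6 the fio package comes from EPEL:

        # fio is in the EPEL repository on CentOS 6
        yum install -y fio
        fio --name=seqwrite --filename=/test.img --rw=write --bs=16k \
            --size=4G --direct=1 --ioengine=libaio --iodepth=4 \
            --runtime=300 --time_based --group_reporting

    If fio shows the same pattern (steady ~50 MB/s collapsing to ~20-25 MB/s with multi-second completion latencies), the guest page cache and LVM layout are effectively ruled out and the suspicion shifts to the hypervisor/controller/drive layer.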

    Read the article

  • Access VBA question: Change the query being referenced by a function, depending on context

    - by Tara Amatista
    I have a custom function in Access2007 that hinges on grabbing data out of a specific query. It opens Outlook, creates a new email and populates the fields with specific addresses and data taken from the query ("DecisionEmail"). Now I want to make a different query ("RequestEmail") and have it populate the email with that data. So all I have to do is change this line: Set MailList = db.OpenRecordset("DecisionEmail") and that's where I get stumped. This is my desired result: If the user is on Form_Decision and clicks the button "Send email", "DecisionEmail" will get plugged into the function and that data will appear in the email. If the user on Form_SendRequest and clicks the button "Send email", "RequestEmail" will instead get plugged in. The reason that these are different queries is because they contain very different information that is smudged about in different ways. However, since it's just one little thing that needs to change in the function code, I don't think a brand new function is a good idea. My last resort would be to make a brand new function and use the Conditions field in the Macro interface to choose between them, but I have a feeling there's a more elegant solution possible. I have a vague notion of setting the query names as variables and using an If statement but I just don't have the mental vocabulary for thinking through this.
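
    The usual pattern is to make the query name a parameter, so one routine serves both forms and each button's Click handler passes its own query. A sketch — the routine name is invented, and the existing Outlook/mail-building code drops in unchanged where the comment sits:

        Public Sub SendEmailFromQuery(ByVal queryName As String)
            Dim db As DAO.Database
            Dim MailList As DAO.Recordset

            Set db = CurrentDb()
            Set MailList = db.OpenRecordset(queryName)   ' "DecisionEmail" or "RequestEmail"

            ' ... existing code that opens Outlook and fills the mail from MailList ...

            MailList.Close
        End Sub

        ' On Form_Decision's button:     SendEmailFromQuery "DecisionEmail"
        ' On Form_SendRequest's button:  SendEmailFromQuery "RequestEmail"

    This keeps one copy of the mail-building logic while the choice of query lives with the form that knows its own context, which is the same idea as the If-statement-on-a-variable approach, just pushed to the caller.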

    Read the article

  • Generics vs Object performance

    - by Risho
    I'm doing practice problems from MCTS Exam 70-536, Microsoft .NET Framework Application Development Foundation, and one of the problems is to create two classes, one generic and one object-based, that both do the same thing, where a loop uses the class and is iterated thousands of times; then, using a timer, time the performance of both. There was another post, "C# generics question", that asks the same thing but no one replied. Basically, if in my code I run the generic class first, it takes longer to process. If I run the object class first, then the object class takes longer to process. The whole idea was to prove that generics perform faster. I used the original poster's code to save me some time. I didn't particularly see anything wrong with the code and was puzzled by the outcome. Can someone explain the unusual results? Thanks, Risho. Here is the code:

        class Program
        {
            class Object_Sample
            {
                public Object_Sample()
                {
                    Console.WriteLine("Object_Sample Class");
                }
                public long getTicks()
                {
                    return DateTime.Now.Ticks;
                }
                public void display(Object a)
                {
                    Console.WriteLine("{0}", a);
                }
            }

            class Generics_Samle<T>
            {
                public Generics_Samle()
                {
                    Console.WriteLine("Generics_Sample Class");
                }
                public long getTicks()
                {
                    return DateTime.Now.Ticks;
                }
                public void display(T a)
                {
                    Console.WriteLine("{0}", a);
                }
            }

            static void Main(string[] args)
            {
                long ticks_initial, ticks_final, diff_generics, diff_object;
                Object_Sample OS = new Object_Sample();
                Generics_Samle<int> GS = new Generics_Samle<int>();

                // Generic Sample
                ticks_initial = 0;
                ticks_final = 0;
                ticks_initial = GS.getTicks();
                for (int i = 0; i < 50000; i++)
                {
                    GS.display(i);
                }
                ticks_final = GS.getTicks();
                diff_generics = ticks_final - ticks_initial;

                // Object Sample
                ticks_initial = 0;
                ticks_final = 0;
                ticks_initial = OS.getTicks();
                for (int j = 0; j < 50000; j++)
                {
                    OS.display(j);
                }
                ticks_final = OS.getTicks();
                diff_object = ticks_final - ticks_initial;

                Console.WriteLine("\nPerformance of Generics {0}", diff_generics);
                Console.WriteLine("Performance of Object {0}", diff_object);
                Console.ReadKey();
            }
        }
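
    Two things swamp the comparison in that code: Console.WriteLine inside the timed loop dominates everything (so whichever loop runs while the console is warming up loses), and DateTime.Now.Ticks has coarse resolution. A sketch that isolates the one cost generics actually avoid here, boxing, using Stopwatch and no I/O in the loops:

        using System;
        using System.Diagnostics;
        using System.Runtime.CompilerServices;

        class Benchmark
        {
            // No console output inside the timed loops; NoInlining keeps the calls honest.
            [MethodImpl(MethodImplOptions.NoInlining)]
            static void DisplayObject(object a) { }

            [MethodImpl(MethodImplOptions.NoInlining)]
            static void DisplayGeneric<T>(T a) { }

            static void Main()
            {
                const int N = 10000000;

                Stopwatch sw = Stopwatch.StartNew();
                for (int i = 0; i < N; i++) DisplayObject(i);   // boxes each int
                sw.Stop();
                Console.WriteLine("object:  {0} ms", sw.ElapsedMilliseconds);

                sw = Stopwatch.StartNew();
                for (int i = 0; i < N; i++) DisplayGeneric(i);  // no boxing
                sw.Stop();
                Console.WriteLine("generic: {0} ms", sw.ElapsedMilliseconds);
            }
        }

    With the I/O removed, the generic version should win consistently regardless of which loop runs first, because the object version allocates a box per iteration while the generic one passes the int directly.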

    Read the article

  • Improving Javascript Load Times - Concatenation vs Many + Cache

    - by El Yobo
    I'm wondering which of the following is going to result in better performance for a page which loads a large amount of JavaScript (jQuery + jQuery UI + various other javascript files). I have gone through most of the YSlow and Google Page Speed stuff, but am left wondering about a particular detail. A key thing for me here is that the site I'm working on is not on the public net; it's a business-to-business platform where almost all users are repeat visitors (and therefore with caches of the data, which is something that YSlow assumes will not be the case for a large number of visitors). First up, the standard approach recommended by tools such as YSlow is to concatenate it, compress it, and serve it up in a single file loaded at the end of your page. This approach sounds reasonably effective, but I think that a key part of the reasoning here is to improve performance for users without cached data. The system I currently have is something like this:

    * All javascript files are compressed and loaded at the bottom of the page
    * All javascript files have far-future cache expiration dates, so will remain (for most users) in the cache for a long time
    * Pages only load the javascript files that they require, rather than loading one monolithic file, most of which will not be required

    Now, my understanding is that, if the cache expiration date for a javascript file has not been reached, then the cached version is used immediately; there is no HTTP request sent to the server at all. If this is correct, I would assume that having multiple script tags is not causing any performance penalty, as I'm still not having any additional requests on most pages (recalling from above that almost all users have populated caches). In addition to this, not loading the JS means that the browser doesn't have to interpret or execute all this additional code which it isn't going to need; as a B2B application, most of our users are unfortunately stuck with IE6 and its painfully slow JS engine. Another benefit is that, when code changes, only the affected files need to be fetched again, rather than the whole set (granted, it would only need to be fetched once, so this is not so much of a benefit). I'm also looking at using LabJS to allow for parallel loading of the JS when it's not cached. So, what do people think is a better approach? In a similar vein, what do you think about a similar approach to CSS - is monolithic better?
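
    For the "everyone has a warm cache" argument to hold, the far-future headers do have to actually be sent, and filenames need a version suffix so a release busts the cache. A hedged Apache sketch of the usual mod_expires/mod_deflate pairing (directive placement and matched extensions are illustrative; the Header line additionally needs mod_headers):

        # Assumes mod_expires and mod_deflate are enabled
        <FilesMatch "\.(js|css)$">
            ExpiresActive On
            ExpiresDefault "access plus 1 year"
            Header append Cache-Control "public"
        </FilesMatch>
        AddOutputFilterByType DEFLATE application/javascript application/x-javascript text/css

    Note that even a fully cached file can still cost a conditional GET per page view if the expiry is missing or short, which is the scenario where many small files genuinely lose to one concatenated file.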

    Read the article

  • SQL Server Performance

    - by khan24
    I have tables in which 35,000 to 40,000 records are available. In spite of using Ajax, the performance of the website is very low. Can anybody please suggest some ideas or tips for the problem? Thanks in advance.
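
    35-40k rows is small for SQL Server, so the usual culprits are missing indexes on the filtered/sorted columns and sending the whole table to the page instead of one page of rows (Ajax only hides latency, it doesn't reduce the work). A hedged sketch of server-side paging with ROW_NUMBER (available from SQL Server 2005; table, columns and parameters are invented):

        CREATE INDEX IX_Orders_CustomerId_CreatedAt
            ON dbo.Orders (CustomerId, CreatedAt DESC);

        WITH numbered AS (
            SELECT OrderId, CreatedAt, Total,
                   ROW_NUMBER() OVER (ORDER BY CreatedAt DESC) AS rn
            FROM dbo.Orders
            WHERE CustomerId = @CustomerId
        )
        SELECT OrderId, CreatedAt, Total
        FROM numbered
        WHERE rn BETWEEN @Offset + 1 AND @Offset + @PageSize;

    Checking the actual execution plan for table scans on the slow requests will quickly confirm whether it is an indexing problem or a "returning everything" problem.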

    Read the article

  • XUL: Printing value from SQL query without using value attribute

    - by Grip
    I want to print the results of an SQL query in XUL with word-wrapping using the description tag. I have the following code:

        <grid>
          <columns>
            <column flex="2" />
            <column flex="1" />
          </columns>
          <rows datasources="chrome://glossary/content/db/development.sqlite3"
                ref="?" querytype="storage">
            <template>
              <query>select distinct * from Terms</query>
              <action>
                <row uri="?">
                  <description value="?name" />
                  <description value="?desc" />
                </row>
              </action>
            </template>
          </rows>

    This works for printing the data, but since I'm using the value attribute of description, there's no word-wrapping. I don't understand entirely what I'm doing with the value attribute, but I don't seem to be able to get the values of my SQL query in any other way. So, how could I get those values without the value tag? Thanks.

    Read the article

  • O(log N) == O(1) - Why not?

    - by phoku
    Whenever I consider algorithms/data structures I tend to replace the log(N) parts by constants. Oh, I know log(N) diverges - but does it matter in real world applications? log(infinity) < 100 for all practical purposes. I am really curious for real world examples where this doesn't hold. To clarify: I understand O(f(N)) I am curious about real world examples where the asymptotic behaviour matters more than the constants of the actual performance. If log(N) can be replaced by a constant it still can be replaced by a constant in O( N log N). This question is for the sake of (a) entertainment and (b) to gather arguments to use if I run (again) into a controversy about the performance of a design.

    Read the article

  • Can I use memcpy in C++ to copy classes that have no pointers or virtual functions

    - by Shane MacLaughlin
    Say I have a class, something like the following:

        class MyClass {
        public:
            MyClass();
            int a, b, c;
            double x, y, z;
        };

        #define PageSize 1000000
        MyClass Array1[PageSize], Array2[PageSize];

    If my class has no pointers or virtual methods, is it safe to use the following?

        memcpy(Array1, Array2, PageSize*sizeof(MyClass));

    The reason I ask is that I'm dealing with very large collections of paged data, as described here, where performance is critical, and memcpy offers significant performance advantages over iterative assignment. I suspect it should be ok, as the 'this' pointer is an implicit parameter rather than anything stored, but are there any other hidden nasties I should be aware of?
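
    For a class like this (trivially copyable) the memcpy is fine, and the precondition can be enforced by the compiler so that a later-added virtual function or owning pointer turns the memcpy into a compile error rather than a silent bug. A sketch using C++11 <type_traits> (the C++03-era equivalent would be Boost's type traits):

        #include <cstring>
        #include <type_traits>

        class MyClass {
        public:
            MyClass() : a(0), b(0), c(0), x(0), y(0), z(0) {}
            int a, b, c;
            double x, y, z;
        };

        static const std::size_t PageSize = 1000000;
        MyClass Array1[PageSize], Array2[PageSize];

        // Fails to compile if someone later adds a vtable or non-trivial members
        static_assert(std::is_trivially_copyable<MyClass>::value,
                      "memcpy is only valid for trivially copyable types");

        void copyPage() {
            std::memcpy(Array1, Array2, PageSize * sizeof(MyClass));
        }

    In practice std::copy over such a type usually compiles down to the same memmove anyway, so the static_assert plus std::copy gives the safety without giving up the speed.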

    Read the article
