Search Results

Search found 80052 results on 3203 pages for 'data load performance'.


  • Scalable way of doing self join with many to many table

    - by johnathan
    I have a table structure like the following:

        user                 id, name
        profile_stat         id, name
        profile_stat_value   id, name
        user_profile         user_id, profile_stat_id, profile_stat_value_id

    My question is: how do I evaluate a query where I want to find all users with a given profile_stat_id and profile_stat_value_id for many stats? I've tried doing an inner self join, but that quickly gets crazy when searching for many stats. I've also tried doing a count on the actual user_profile table, and that's much better, but still slow. Is there some magic I'm missing? I have about 10 million rows in the user_profile table and want the query to take no longer than a few seconds. Is that possible?
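
    One commonly suggested shape for this kind of "match all requested pairs" query is relational division via conditional counting; a minimal sketch, assuming the column names above and three requested stat/value pairs:

        SELECT user_id
        FROM user_profile
        WHERE (profile_stat_id, profile_stat_value_id)
              IN ((1, 10), (2, 20), (3, 30))
        GROUP BY user_id
        HAVING COUNT(*) = 3;   -- a user qualifies only if all 3 pairs matched

    With a composite index on (profile_stat_id, profile_stat_value_id, user_id), this stays a single index-driven pass instead of N chained self joins.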

    Read the article

  • Exporting Sqlite data from an Android Application

    - by meg18019
    My Android application stores all user data in a Sqlite database. What are my options to backup/export/import/copy this data? I know I can easily copy the database to the SD card. I would also like to send the data to a network server. Are there any packages/classes available to facilitate getting sqlite information to/from a network server? Thanks for the help...
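
    For the "copy to the SD card" part, streaming the raw database file is usually all it takes; a minimal sketch from inside an Activity or Service, assuming a database named "userdata.db" (hypothetical) and that external storage is mounted and writable:

        // imports: java.io.*, java.nio.channels.FileChannel, android.os.Environment
        // (exception handling omitted for brevity)
        File src = getDatabasePath("userdata.db");
        File dst = new File(Environment.getExternalStorageDirectory(), "userdata-backup.db");
        FileChannel in = new FileInputStream(src).getChannel();
        FileChannel out = new FileOutputStream(dst).getChannel();
        out.transferFrom(in, 0, in.size());   // bulk copy of the whole file
        in.close();
        out.close();

    The same file (or a compressed copy) can then be sent to the server with an ordinary HTTP POST; nothing SQLite-specific is needed for the network leg.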

    Read the article

  • Windows Workflow runs very slowly on my DEV machine

    - by Joon
    I am developing an app whose business layer consists of Windows Workflow (WF) services hosted in IIS as WCF services. This runs quickly on any machine running Windows Server 2008 R2, but very slowly on our dev machines running Windows XP SP3. Yesterday the workflows were as fast on my dev machine as they are on the server, for the whole day. Today they are back to running slowly again (I rebooted overnight). Has anyone else experienced this problem with workflows running slowly on IIS on XP? What did you do to fix it?

    Read the article

  • Is Java serialization a tool to shrink the memory footprint?

    - by Pentius
    Hey folks, does serialization in Java always shrink the memory used to hold an object structure? Or is it likely that serialization has higher costs? In other words: is serialization a tool for shrinking the memory footprint of object structures in Java? Edit: I'm totally aware of what serialization was intended for, but thanks anyway :-) But you know, tools can be misused. My question is whether it is a good tool for decreasing memory usage. So what reasons can you imagine why memory usage would increase or decrease? What will happen in most cases?
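
    One way to put a number on it is to measure the serialized form directly and compare it with a heap measurement of the live structure; a self-contained sketch of the first half (the int[][] stands in for any Serializable graph):

        import java.io.*;

        public class SizeProbe {
            public static void main(String[] args) throws IOException {
                Object myObject = new int[100][100];   // any Serializable graph
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                ObjectOutputStream oos = new ObjectOutputStream(bos);
                oos.writeObject(myObject);             // write the whole graph
                oos.close();
                System.out.println("Serialized size: " + bos.size() + " bytes");
            }
        }

    The stream drops per-object JVM header overhead but writes class descriptors and field names once per class, so small graphs built from a few classes often shrink, while graphs spanning many distinct classes can grow.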

    Read the article

  • Cuboid inside generic polyhedron

    - by DOFHandler
    I am searching for an efficient algorithm to determine whether a cuboid is completely inside, completely outside, or neither inside nor outside a generic (concave or convex) polyhedron. The polyhedron is defined by a list of 3D points and a list of facets. Each facet is defined by a subset of the contour points, ordered so that the right-hand normal points outward from the solid. Any suggestion? Thank you
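
    One standard building block here is ray casting: a point is inside a closed polyhedron iff a ray from it crosses the boundary an odd number of times. A sketch, assuming the facets have been triangulated first; the cuboid test would apply this to the eight corners, plus edge/facet intersection checks for the "straddling" case:

        #include <array>
        #include <cmath>
        #include <vector>

        struct Vec3 { double x, y, z; };

        static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
        static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        // Moeller-Trumbore ray/triangle intersection test.
        static bool rayHitsTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
            const double EPS = 1e-12;
            Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
            Vec3 p = cross(dir, e2);
            double det = dot(e1, p);
            if (std::fabs(det) < EPS) return false;   // ray parallel to triangle plane
            double inv = 1.0 / det;
            Vec3 s = sub(orig, v0);
            double u = dot(s, p) * inv;
            if (u < 0.0 || u > 1.0) return false;
            Vec3 q = cross(s, e1);
            double v = dot(dir, q) * inv;
            if (v < 0.0 || u + v > 1.0) return false;
            return dot(e2, q) * inv > EPS;            // hit must lie in front of the origin
        }

        // Odd number of boundary crossings => the point is inside the solid.
        static bool pointInside(Vec3 p, const std::vector<std::array<Vec3, 3>>& tris) {
            Vec3 dir = {0.5773, 0.5774, 0.5775};      // skewed direction avoids grazing facets
            int hits = 0;
            for (const auto& t : tris)
                if (rayHitsTriangle(p, dir, t[0], t[1], t[2])) ++hits;
            return hits % 2 == 1;
        }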

    Read the article

  • PageableListView Not rendering my data as required

    - by Robin
    I am working with Wicket, where I am supposed to show my data under:

        <tr>
          <td>Name</td>
          <td>Single Player Score</td>
          <td>Double Player Score</td>
          <td>Total Score</td>
        </tr>
        <tr wicket:id="data">
          <td wicket:id="name"></td>
          <td wicket:id="singlePlayerScore"></td>
          <td wicket:id="doublePlayerScore"></td>
          <td wicket:id="totalScore"></td>
        </tr>

    My Player model class has the attributes name, singlePlayerScore and doublePlayerScore, with getters and setters, and there is also a list of data obtained from the database. The data from the SQL query looks like:

        name  score  gamemode
        A     200    singlePlayerMode
        A     100    doublePLayerMode
        B     400    singlePlayerMode
        B     300    doublePLayerMode

        dataList = player.getScoreList();

    My PageableListView is:

        final PageableListView listView = new PageableListView("data", dataList, 10) {
            @Override
            protected void populateItem(Item item) {
                player = (Player) item.getModelObject();
                item.add(new Label("name", player.getName()));
                item.add(new Label("singlePlayerScore", String.valueOf(player.getSinglePlayerScore())));
                item.add(new Label("doublePlayerScore", String.valueOf(player.getDoublePlayerScore())));
                item.add(new Label("totalScore", String.valueOf(player.getSinglePlayerScore() + player.getDoublePlayerScore())));
            }
        };

    My problem: the view I get is

        Name  Single Player Score  Double Player Score  Total Score
        A     0                    100                  100
        A     200                  0                    200
        B     0                    300                  300
        B     400                  0                    400

    How do I achieve the view below on my web page?

        Name  Single Player Score  Double Player Score  Total Score
        A     200                  100                  300
        B     400                  300                  700

    Please help me understand why this is happening. I guess my list has size four, which is one reason it renders four rows. What can I do to get the required view?
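
    The doubled rows come straight from the query: each player arrives as two rows, one per game mode, and the ListView faithfully renders all four. One fix (a sketch; the table name player_scores is hypothetical) is to collapse the modes before Wicket ever sees them:

        SELECT name,
               SUM(CASE WHEN gamemode = 'singlePlayerMode' THEN score ELSE 0 END) AS singlePlayerScore,
               SUM(CASE WHEN gamemode = 'doublePLayerMode' THEN score ELSE 0 END) AS doublePlayerScore,
               SUM(score) AS totalScore
        FROM player_scores
        GROUP BY name;

    With one row per player coming back, the existing populateItem logic renders the desired two-row view unchanged.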

    Read the article

  • Mysql retrieve polygon data

    - by dskanth
    Hi, I have been developing a site that stores spatial data (buildings, gardens, etc.) in a MySQL database, in the form of polygons (latitudes and longitudes). I want to know how to retrieve polygon data in MySQL. I have seen this sample query for inserting polygon data: http://amper.110mb.com/SPAT/mysql_initgeometry2.htm But now I want to know how to retrieve data from the table based on certain constraints, like "where latitude < 9.33 and longitude > 22.4". Also, how do I find whether a point lies inside or outside of a polygon?
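
    For the point-in-polygon part, MySQL of that era ships MBRContains(), which tests against the geometry's minimum bounding rectangle; a sketch, assuming a table buildings with a GEOMETRY column shape:

        SELECT id, AsText(shape)
        FROM buildings
        WHERE MBRContains(shape, GeomFromText('POINT(9.33 22.4)'));

    Note that MBRContains() checks only the bounding box, so a point near a concave edge can pass the test while lying outside the true polygon; exact containment needs a refinement step in application code.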

    Read the article

  • NetNamedPipe: varying response time when communication is idling

    - by Sven Künzler
    I have two WCF apps communicating one-way over named pipes. All is nice, except for one thing: normally, the request/response cycle takes marginal time. However, if there has been a span of, say, half a minute without any communication, the request/response time increases to ~300-500ms. I looked around the net and got the idea of using a heartbeat/ping mechanism to keep the communication channel busy. Using trial and error, I found that when doing a request every 10 seconds, the response times stay low. Starting at around 15s intervals, the "hiccup" response times begin to appear. Now I'm wondering where this phenomenon originates. I tried setting all conceivable timeouts on both sides to 1 minute, but that did not help. Can anybody explain what's going on there?

    Read the article

  • do.call(rbind, list) for an uneven number of columns

    - by h.l.m
    I have a list in which each element is a character vector, of differing lengths. I would like to bind the data as rows so that the column names line up, creating a new column where there is extra data and filling with NAs where data is missing. Below is a mock example of the data I am working with:

        x <- list()
        x[[1]] <- letters[seq(2, 20, by = 2)]
        names(x[[1]]) <- LETTERS[c(1:length(x[[1]]))]
        x[[2]] <- letters[seq(3, 20, by = 3)]
        names(x[[2]]) <- LETTERS[seq(3, 20, by = 3)]
        x[[3]] <- letters[seq(4, 20, by = 4)]
        names(x[[3]]) <- LETTERS[seq(4, 20, by = 4)]

    The line below is what I would normally do if I were sure that the format of each element was the same:

        do.call(rbind, x)

    I was hoping someone had come up with a nice little solution that matches up the column names, fills blanks with NAs, and adds new columns if any are found during the binding.
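
    A minimal sketch of that idea: take the union of all the names, index every vector by it, and rbind the results:

        # union of all column names across the list elements
        all.cols <- unique(unlist(lapply(x, names)))
        # index each vector by the full name set, restoring clean names
        do.call(rbind, lapply(x, function(v) setNames(v[all.cols], all.cols)))

    Indexing a named vector by names it lacks yields NA in those positions, so the fill-with-NA behaviour falls out for free.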

    Read the article

  • large test data for knapsack problem

    - by user347918
    I am a research student testing my algorithm for the knapsack problem, and I am searching for large test data. So far I couldn't find any. I need data with 1000 items; the capacity doesn't matter. The point is, the more items, the better for my algorithm. Is there any large data set available on the internet? Does anybody know? Please, guys, I need it urgently.
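
    If no published benchmark turns up, generating instances is straightforward; a sketch of an uncorrelated 1000-item instance (the weight/value ranges and the half-the-total-weight capacity are common conventions, not a standard):

        import random

        n = 1000
        items = [(random.randint(1, 1000),   # weight
                  random.randint(1, 1000))   # value
                 for _ in range(n)]
        capacity = sum(w for w, _ in items) // 2

    Strongly correlated variants (value equal to weight plus a constant) are generally much harder for knapsack algorithms and are worth adding to the test set.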

    Read the article

  • Better understanding of my SQL transactions

    - by Slew Poke
    I just realized that my application was needlessly making 50+ database calls per user request due to some hidden coding -- hidden in the sense that between LINQ, persistence frameworks and events it just so turned out that a huge number of calls were being made without me being aware. Is there a recommended way to analyze individual transactions going to my SQL 2008 database, preferably with some integration to my Visual Studio 2010 environment? I want to be able to 'spy' on individual transactions being made, but only for certain pieces of my code, and without making serious changes to either the code or database.

    Read the article

  • How expensive is a context switch? Is it better to implement a manual task switch than to rely on OS

    - by Vilx-
    The title says it all. Imagine I have two (three, four, whatever) tasks that have to run in parallel. Now, the easy way to do this would be to create separate threads and forget about it. But on a plain old single-core CPU that would mean a lot of context switching - and we all know that context switching is big, bad, slow, and generally simply Evil. It should be avoided, right? On that note, if I'm writing the software from the ground up anyway, I could go the extra mile and implement my own task switching. Split each task into parts, save the state in between, and then switch among them within a single thread. Or, if I detect that there are multiple CPU cores, I could just give each task to a separate thread and all would be well. The second solution does have the advantage of adapting to the number of available CPU cores, but will the manual task switch really be faster than the one in the OS core? Especially if I'm trying to make the whole thing generic with a TaskManager and an ITask, etc.?

    Read the article

  • Decide which caching strategy to use?

    - by hib
    Hi all, I want to cache my loaded data so that I can reduce my application's start time. I know several strategies for storing application data: Core Data, NSUserDefaults, and archiving. My scenario is that I have an array of at most 10 objects, each object having 5 fields. I cannot decide which strategy to use for storing this array and later retrieving it. Thanks.
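
    For ten small objects, a keyed archive is probably the lightest of the three options; a sketch, assuming the objects adopt NSCoding (the file name cache.archive is arbitrary):

        NSString *dir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                             NSUserDomainMask, YES) objectAtIndex:0];
        NSString *path = [dir stringByAppendingPathComponent:@"cache.archive"];
        [NSKeyedArchiver archiveRootObject:myArray toFile:path];              // save
        NSArray *restored = [NSKeyedUnarchiver unarchiveObjectWithFile:path]; // load

    Core Data is likely overkill at this size, and NSUserDefaults is intended for preferences rather than model data, though it would also cope with ten small objects.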

    Read the article

  • About the String#substring() method

    - by alain.janinm
    If we take a look at the String#substring method implementation:

        new String(offset + beginIndex, endIndex - beginIndex, value);

    we see that a new String is created with the same original content (the char[] value parameter). So the workaround is to use new String(toto.substring(...)) to drop the reference to the original char[] value and make it eligible for GC (if no other references exist). I would like to know if there is a special reason that explains this implementation. Why doesn't the method itself create the new, shorter String? Why does it keep the full original value instead? The other related question is: should we always use new String(...) when dealing with substring?
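
    A self-contained illustration of the effect being described (behaviour of JDKs of that era, before the substring implementation changed):

        char[] big = new char[10000000];
        String huge = new String(big);
        String tiny = huge.substring(0, 4);   // shares huge's 10M-char array
        String safe = new String(tiny);       // copies only the 4 chars it needs
        huge = null;                          // the big array stays reachable
                                              // through tiny, but not through safe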

    Read the article

  • application specific seed data population

    - by user339108
    Env: JBoss, (h2, MySQL, postgres), JPA, Hibernate 3.3.x

        @Id
        @GeneratedValue(strategy = IDENTITY)
        private Integer key;

    Currently our primary keys are created using the above annotation. We expect to support a large number of users (~a million), so what key type should be used? Should it be Integer or Long, or should I use the unsigned versions of the above declarations? We also have a J2EE application which needs to be populated with some seed data on installation. On purchase, the customer creates his own data on top of the application. We just want to make sure that there is enough room to ship, modify or add data in future releases. What would be the best mechanism to support this? We had looked at starting all table identifiers from a certain id (say 1000), but this mandates switching primary key generation to table- or sequence-based generators, and we have around ~100 tables; we are not sure this is the right strategy. If we use a signed integer approach for the key, would it make sense to have the seed data start at 0 and go downward (i.e. negative numbers), so that all customer-specific data lives at 0 and above (i.e. positive numbers)?
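
    On the key-type question, moving to Long is cheap insurance; a sketch of the same mapping without the roughly 2.1 billion ceiling of a signed Integer:

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long key;

    Java has no unsigned integer types, so widening the type is the usual way to buy headroom.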

    Read the article

  • C++ code which is slower than its C equivalent?

    - by user997112
    Are there any aspects of the C++ programming language where the code is known to be slower than the equivalent C? Obviously this excludes the OO features like virtual functions, vtables, etc. I am wondering whether, when you are programming in a latency-critical area (and you aren't worried about OO features), you could stick with basic C++, or whether C would be better?

    Read the article

  • what can cause large discrepancy between minor GC time and total pause time?

    - by cxcg
    We have a latency-sensitive application, and are experiencing some GC-related pauses we don't fully understand. We occasionally have a minor GC that results in application pause times that are much longer than the reported GC time itself. Here is an example log snippet:

        485377.257: [GC 485378.857: [ParNew: 105845K->621K(118016K), 0.0028070 secs] 136492K->31374K(1035520K), 0.0028720 secs] [Times: user=0.01 sys=0.00, real=1.61 secs]
        Total time for which application threads were stopped: 1.6032830 seconds

    The total pause time here is orders of magnitude longer than the reported GC time. These are isolated and occasional events: the immediately preceding and succeeding minor GC events do not show this large discrepancy. The process is running on a dedicated machine with lots of free memory, 8 cores, running Red Hat Enterprise Linux ES Release 4 Update 8 with kernel 2.6.9-89.0.1EL-smp. We have observed this with (32-bit) JVM versions 1.6.0_13 and 1.6.0_18. We are running with these flags:

        -server -ea -Xms512m -Xmx512m -XX:+UseConcMarkSweepGC -XX:NewSize=128m -XX:MaxNewSize=128m
        -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime -XX:-TraceClassUnloading

    Can anybody offer some explanation as to what might be going on here, and/or some avenues for further investigation?

    Read the article

  • MySQL Single Query Benchmarking Strategies

    - by Pepper
    Hello, I have a slow MySQL query in my application that I need to rewrite. The problem is, it's only slow on my production server and only when it's not cached. The first time I run it, it takes 12 seconds; anytime after that, 500 milliseconds. Is there an easy way to test this query without it hitting the query cache, so I can see the results of my refactoring? Thanks!
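
    MySQL has a per-query hint for exactly this; a sketch (table and predicate are placeholders):

        SELECT SQL_NO_CACHE col1, col2
        FROM my_table
        WHERE some_condition = 1;

    RESET QUERY CACHE clears the cache server-wide if you would rather flush it between runs.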

    Read the article

  • MySQL left outer join is slow

    - by Ryan Doherty
    Hi, hoping to get some help with this query; I've worked at it for a while now and can't get it any faster:

        SELECT date, count(id) as 'visits'
        FROM dates
        LEFT OUTER JOIN visits ON (dates.date = DATE(visits.start) and account_id = 40)
        WHERE date >= '2010-12-13' AND date <= '2011-1-13'
        GROUP BY date
        ORDER BY date ASC

    That query takes about 8 seconds to run. I've added indexes on dates.date, visits.start, visits.account_id and visits.start+visits.account_id and can't get it to run any faster. Table structure (only showing relevant columns in the visits table):

        create table visits (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `account_id` int(11) NOT NULL,
          `start` DATETIME NOT NULL,
          `end` DATETIME NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

        CREATE TABLE `dates` (
          `date` date NOT NULL,
          PRIMARY KEY (`date`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    The dates table contains all days from 2010-1-1 to 2020-1-1 (~3k rows). The visits table contains about 400k rows dating from 2010-6-1 to yesterday. I'm using the dates table so the join will return 0 visits for days there were no visits. Results I want, for reference:

        +------------+--------+
        | date       | visits |
        +------------+--------+
        | 2010-12-13 |    301 |
        | 2010-12-14 |    356 |
        | 2010-12-15 |    423 |
        | 2010-12-16 |    332 |
        | 2010-12-17 |    346 |
        | 2010-12-18 |    226 |
        | 2010-12-19 |    213 |
        | 2010-12-20 |    311 |
        | 2010-12-21 |    273 |
        | 2010-12-22 |    286 |
        | 2010-12-23 |    241 |
        | 2010-12-24 |    149 |
        | 2010-12-25 |    102 |
        | 2010-12-26 |    174 |
        | 2010-12-27 |    258 |
        | 2010-12-28 |    348 |
        | 2010-12-29 |    392 |
        | 2010-12-30 |    395 |
        | 2010-12-31 |    278 |
        | 2011-01-01 |    241 |
        | 2011-01-02 |    295 |
        | 2011-01-03 |    369 |
        | 2011-01-04 |    438 |
        | 2011-01-05 |    393 |
        | 2011-01-06 |    368 |
        | 2011-01-07 |    435 |
        | 2011-01-08 |    313 |
        | 2011-01-09 |    250 |
        | 2011-01-10 |    345 |
        | 2011-01-11 |    387 |
        | 2011-01-12 |      0 |
        | 2011-01-13 |      0 |
        +------------+--------+

    Thanks in advance for any help!
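
    One avenue worth testing (a sketch, not a guaranteed fix): the DATE(visits.start) wrapper keeps MySQL from using any index on start, whereas an equivalent range predicate stays index-friendly:

        SELECT date, COUNT(visits.id) AS visits
        FROM dates
        LEFT OUTER JOIN visits
               ON visits.start >= dates.date
              AND visits.start <  dates.date + INTERVAL 1 DAY
              AND visits.account_id = 40
        WHERE date >= '2010-12-13' AND date <= '2011-01-13'
        GROUP BY date
        ORDER BY date ASC;

    Paired with a composite index on (account_id, start), each day's visits can then be found with a range scan instead of a full pass over the table.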

    Read the article

  • Pass data in np.ndarray to Highcharts

    - by F.N.B
    I'm working with Python 2.7, Jinja2, Flask and Highcharts. I create two numpy arrays (x1 and x2, type numpy.ndarray) and pass them to Highcharts. My problem is that Highcharts doesn't receive the commas in the vector. This is my Jinja2 code:

        <script>
        $(function () {
            $('#container').highcharts({
                series: [{
                    name: 'Tokyo',
                    data: {{ x1 }}
                }, {
                    name: 'London',
                    data: {{ x2 }}
                }]
            });
        });

    And this is the error I see with the Chrome dev tools network tab:

        series: [{
            name: 'Tokyo',
            data: [1 4 5 2 3]
        }, {
            name: 'London',
            data: [3 6 7 4 1]
        }]

    Do I need to change the numpy array to a Python list to pass it to Highcharts, or is there a better way? Thanks
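
    numpy's string form has no commas, which is exactly what the network tab shows; converting to a plain Python list before rendering the template produces a valid JavaScript literal. A sketch on the Flask side:

        import numpy as np

        x1 = np.array([1, 4, 5, 2, 3])
        x1_list = x1.tolist()   # [1, 4, 5, 2, 3], valid in JavaScript too
        # then, e.g.: render_template('chart.html', x1=x1_list)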

    Read the article

  • DATA REQUEST IN SMALLER CHUNKS?

    - by Googler
    Hi folks, I have developed a Windows service to retrieve all the response data provided by the EPO web services. While scraping the data over the internet, after a few hours I receive this error message:

        Error: Please request bibliographic data in smaller chunks.

    Here, bibliographic data is one kind of service provided by the EPO web services. I believe there is no error in my inputs or in the service I call, and I don't know what this error means. Is it related to the internet connection or to my web service calls? Can anyone please help me understand what this error actually means?
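
    The message usually means a single request spans too many records. A generic batching loop as a sketch (fetch_range is a hypothetical stand-in for the real EPO bibliographic call, and the numbers are placeholders):

        BATCH = 25      # records per request (assumption)
        TOTAL = 500     # hypothetical total result count

        def fetch_range(first, last):
            """Hypothetical wrapper around the actual EPO web-service request."""
            return []   # real code would fetch records first..last here

        results = []
        for first in range(1, TOTAL + 1, BATCH):
            results.extend(fetch_range(first, min(first + BATCH - 1, TOTAL)))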

    Read the article
