Search Results

Search found 90546 results on 3622 pages for 'code optimization'.

  • How can I store a large amount of data from a database to XML (speed problem, part three)?

    - by Andrija
    After getting some responses, the current situation is that I'm using this tip: http://www.ibm.com/developerworks/xml/library/x-tipbigdoc5.html (Listing 1. Turning ResultSets into XML), and XMLWriter for Java from http://www.megginson.com/downloads/ . Basically, it reads data from the database and writes it to a file as characters, using the column names to create opening and closing tags. While doing so, I need to make two changes to the input stream, namely to the dates and the numbers.

        // Iterate over the set
        while (rs.next()) {
            w.startElement("row");
            for (int i = 0; i < count; i++) {
                Object ob = rs.getObject(i + 1);
                if (rs.wasNull()) {
                    ob = null;
                }
                String colName = meta.getColumnLabel(i + 1);
                if (ob != null) {
                    if (ob instanceof Timestamp) {
                        w.dataElement(colName, Util.formatDate((Timestamp) ob, dateFormat));
                    } else if (ob instanceof BigDecimal) {
                        w.dataElement(colName, Util.transformToHTML(new Integer(((BigDecimal) ob).intValue())));
                    } else {
                        w.dataElement(colName, ob.toString());
                    }
                } else {
                    w.emptyElement(colName);
                }
            }
            w.endElement("row");
        }

    The SQL that produces the results uses to_number (e.g. to_number(sif.ID) ID) and to_date (e.g. TO_DATE(sif.datum_do, 'DD.MM.RRRR') datum_do). The problems are that the returned date is a timestamp, meaning I don't get 14.02.2010 but rather 14.02.2010 00:00:000, so I have to format it to the dd.mm.yyyy format. The second problem is the numbers: for some reason they are stored in the database as varchar2 and can have leading zeroes that need to be stripped. I'm guessing I could do that in my SQL with the trim function, so Util.transformToHTML would be unnecessary (for clarification, here's the method):

        public static String transformToHTML(Integer number) {
            String result = "";
            try {
                result = number.toString();
            } catch (Exception e) {}
            return result;
        }

    What I'd like to know is: a) Can I get the date in the format I want and skip the additional processing, thus shortening the processing time? b) Is there a better way to do this? We're talking about XML files in the 50 MB - 250 MB size range.
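    A minimal sketch of what question a) could look like on the SQL side (not from the original post; Oracle is assumed from the TO_DATE/RRRR syntax, and the table/alias names are placeholders): if the query itself returns the date as already-formatted text and strips the leading zeroes, both values arrive in the ResultSet as plain strings, and the Timestamp/BigDecimal branches in the Java loop are no longer needed.

        -- Hedged sketch, not the original query; "some_table sif" is a placeholder.
        SELECT TO_CHAR(TO_DATE(sif.datum_do, 'DD.MM.RRRR'), 'DD.MM.YYYY') AS datum_do,
               -- strips leading zeroes; note an all-zero value becomes NULL in Oracle
               LTRIM(sif.id, '0') AS id
          FROM some_table sif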

    Read the article

  • Animate screen while loading textures

    - by Omega
    My RPG-like game has random battles. When the player enters a random battle, my game needs to load the textures used within that battle (animated monsters, animations, etc). There are quite a lot of textures, and they are rather big (the battles are very graphics-intensive), so loading them takes significant time, and while it is loading, the whole screen freezes. The game's map freezes, and the wait time is significant - I personally find it annoying. I can't afford to preload the textures because, after doing some math, I realized: If I preload all the textures at the beginning of the game, the application will definitely crash. If I preload the textures that are used in a specific map when the player enters the map, the application is very likely to crash as well. I can only afford to load the textures when I need them, and dispose of them as soon as the battle ends. I'd prefer not to use a "loading screen" image because it affects my game's design and concept, so I want to avoid that approach. If I could run some kind of animation while loading the textures, it would be great, which leads to my question: is that possible? What kind of animation, you ask? Well, remember how Final Fantasy used to distort the screen while apparently loading the textures? Something like that. But distorting is quite a time-consuming process as well, so maybe just a cool frame-by-frame animation or something. While writing this, I realized that I could make small pauses between textures (there are multiple textures), and during those pauses update the screen to reflect the animation's state. However, this is unlikely to work well, because each texture is 2048x2048, so the animation would refresh at a rather laggy (and annoying) rate. I'd prefer to avoid this as well.

    Read the article

  • Improving MySQL Update Query Efficiency

    - by Russell C.
    In our database tables we keep a number of counting columns to help reduce the number of simple lookup queries. For example, in our users table we have columns for the number of reviews written, photos uploaded, friends, followers, etc. To help make sure these stay in sync, we have a script that runs periodically to check and update these counting columns. The problem is that now that our database has grown significantly, the queries we have been using take forever to run because they are totally inefficient. I would appreciate it if someone with more MySQL knowledge than myself could recommend how we can improve their efficiency:

        update users
        set photos = (select count(*) from photos
                      where photos.status = "A" AND photos.user_id = users.id)
        where users.status = "A";

    If this were a select statement I would just use a join, but I'm not sure if that is possible with update. Thanks in advance for your help!
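    A join is in fact possible with MySQL's multi-table UPDATE syntax. A minimal sketch (not from the original post; it assumes users.id is the primary key and that an index on photos(user_id, status) exists or is added): aggregating all the counts once in a derived table avoids running one correlated subquery per user row.

        -- Hedged sketch: multi-table UPDATE joined to a pre-aggregated derived table.
        UPDATE users u
        LEFT JOIN (
            SELECT user_id, COUNT(*) AS cnt
            FROM photos
            WHERE status = 'A'
            GROUP BY user_id
        ) p ON p.user_id = u.id
        SET u.photos = COALESCE(p.cnt, 0)   -- users with no photos get 0, not NULL
        WHERE u.status = 'A';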

    Read the article

  • Composite primary keys in N-M relation or not?

    - by BerggreenDK
    Let's say we have 3 tables (actually I have 2 at the moment, but this example might illustrate the thought better):

        [Person]
        ID: int, primary key
        Name: nvarchar(xx)

        [Group]
        ID: int, primary key
        Name: nvarchar(xx)

        [Role]
        ID: int, primary key
        Name: nvarchar(xx)

        [PersonGroupRole]
        Person_ID: int, PRIMARY COMPOSITE OR NOT?
        Group_ID: int, PRIMARY COMPOSITE OR NOT?
        Role_ID: int, PRIMARY COMPOSITE OR NOT?

    Should any of the 3 IDs in the PersonGroupRole relation be marked as a PRIMARY key, or should all 3 be combined into one composite key? What's the real benefit of doing it or not? I can join either way as far as I know, so Person JOIN PersonGroupRole JOIN Group gives me which persons are in which groups, etc. I will be using LINQ/C#/.NET on top of SQL Express and SQL Server, so if there are any reasons regarding the language/SQL that might make the choice clearer, that's the platform I'm asking about. Looking forward to seeing what answers pop up, as I have thought about these primary keys/indexes many times when making combined ones.
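    A minimal sketch of the composite-key option (not from the original post; the T-SQL names simply mirror the tables above, and this is one common choice rather than the only one): making all three foreign keys the primary key both guarantees a person can hold a given role in a given group only once and gives joins that start from Person an index to use. Queries that start from Group or Role would typically get their own secondary index, e.g. on (Group_ID) or (Role_ID).

        -- Hedged sketch for SQL Server; [Group] and [Role] are bracketed to avoid
        -- clashes with reserved words.
        CREATE TABLE PersonGroupRole (
            Person_ID int NOT NULL REFERENCES Person(ID),
            Group_ID  int NOT NULL REFERENCES [Group](ID),
            Role_ID   int NOT NULL REFERENCES [Role](ID),
            PRIMARY KEY (Person_ID, Group_ID, Role_ID)
        );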

    Read the article

  • MySQL query paralyzes site

    - by nute
    Once in a while, at random intervals, our website gets completely paralyzed. Looking at SHOW FULL PROCESSLIST;, I've noticed that when this happens, there is a specific query that stays in "Copying to tmp table" for a loooong time (sometimes 350 seconds), and almost all the other queries are "Locked". The part I don't understand is that 90% of the time this query runs fine: I see it going through the process list and it finishes pretty quickly most of the time. The query is issued by an ajax call on our homepage to display product recommendations based on your browsing history (a la Amazon). Just sometimes, randomly (but too often), it gets stuck at "Copying to tmp table". Here is a captured instance of the query that had been running for 109 seconds when I looked:

        SELECT DISTINCT product_product.id, product_product.name, product_product.retailprice,
                        product_product.imageurl, product_product.thumbnailurl, product_product.msrp
        FROM product_product, product_xref, product_viewhistory
        WHERE (
                (product_viewhistory.productId = product_xref.product_id_1
                 AND product_xref.product_id_2 = product_product.id)
                OR
                (product_viewhistory.productId = product_xref.product_id_2
                 AND product_xref.product_id_1 = product_product.id)
              )
          AND product_product.outofstock = 'N'
          AND product_viewhistory.cookieId = '188af1efad392c2adf82'
          AND product_viewhistory.productId IN (24976, 25873, 26067, 26073, 44949, 16209, 70528, 69784, 75171, 75172)
        ORDER BY product_xref.hits DESC
        LIMIT 10

    Of course the cookieId and the list of productId values change dynamically depending on the request. I use PHP with PDO.
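    A minimal sketch of one common rewrite (not from the original post; it assumes product_product.id is the primary key and that product_xref has indexes on product_id_1 and product_id_2): splitting the OR over product_xref into a UNION lets each half walk its own index instead of pushing the whole join into a large temporary table.

        -- Hedged sketch; column and table names follow the query above.
        SELECT p.id, p.name, p.retailprice, p.imageurl, p.thumbnailurl, p.msrp
        FROM product_product p
        JOIN (
            SELECT x.product_id_2 AS pid, x.hits
            FROM product_viewhistory v
            JOIN product_xref x ON x.product_id_1 = v.productId
            WHERE v.cookieId = '188af1efad392c2adf82'
              AND v.productId IN (24976, 25873, 26067, 26073, 44949, 16209, 70528, 69784, 75171, 75172)
            UNION ALL
            SELECT x.product_id_1 AS pid, x.hits
            FROM product_viewhistory v
            JOIN product_xref x ON x.product_id_2 = v.productId
            WHERE v.cookieId = '188af1efad392c2adf82'
              AND v.productId IN (24976, 25873, 26067, 26073, 44949, 16209, 70528, 69784, 75171, 75172)
        ) x ON x.pid = p.id
        WHERE p.outofstock = 'N'
        GROUP BY p.id              -- replaces DISTINCT: one row per product
        ORDER BY MAX(x.hits) DESC
        LIMIT 10;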

    Read the article

  • Filtering with joined tables

    - by viraptor
    I'm trying to get some query performance improved, but the generated query does not look the way I expect it to. The results are retrieved using:

        query = session.query(SomeModel).
            options(joinedload_all('foo.bar')).
            options(joinedload_all('foo.baz')).
            options(joinedload('quux.other'))

    What I want to do is filter on the table joined via 'first', but this way doesn't work:

        query = query.filter(FooModel.address == '1.2.3.4')

    It results in a clause like this attached to the query:

        WHERE foos.address = '1.2.3.4'

    Which doesn't do the filtering in a proper way, since the generated joins attach the tables as foos_1 and foos_2. If I try that query manually but change the filtering clause to:

        WHERE foos_1.address = '1.2.3.4' AND foos_2.address = '1.2.3.4'

    It works fine. The question is of course - how can I achieve this with sqlalchemy itself?

    Read the article

  • Is count(*) really expensive?

    - by Anil Namde
    I have a page with 4 tabs displaying 4 different reports based on different tables. I obtain the row count of each table using a select count(*) from <table> query and display the number of rows available in each table on the tabs. As a result, each page postback causes 5 count(*) queries to be executed (4 to get the counts and 1 for pagination) plus 1 query for getting the report content. Now my question is: are count(*) queries really expensive -- should I keep the row counts (at least those that are displayed on the tabs) in the page's view state instead of querying multiple times? How expensive are COUNT(*) queries?
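    How expensive COUNT(*) is depends heavily on the database engine and on whether there is a WHERE clause, but the per-postback overhead can be reduced regardless. A minimal sketch (not from the original post; report1..report4 are placeholder table names): fetching all four tab counts in one round trip instead of four separate queries, which caching in view state can then sit on top of.

        -- Hedged sketch with placeholder table names
        -- (some databases, e.g. Oracle, would need a FROM DUAL here).
        SELECT
            (SELECT COUNT(*) FROM report1) AS report1_rows,
            (SELECT COUNT(*) FROM report2) AS report2_rows,
            (SELECT COUNT(*) FROM report3) AS report3_rows,
            (SELECT COUNT(*) FROM report4) AS report4_rows;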

    Read the article

  • Execute a method as few times as possible - PHP

    - by serhio
    I have a site in multiple languages. I have a method that returns today's exchange rates in an array, and I then display those rates in a table.

        // --- en/index.php
        <?php
        include_once "../exchangeRates.php";
        $currencies = ReadExchangeRates();

        // --- fr/index.php
        <?php
        include_once "../exchangeRates.php";
        $currencies = ReadExchangeRates();

        ...
        // somewhere in the page
        <td><?php echo $currencies["eur"]["today"]; ?></td>

    So every time I load en/, fr/, or another language, I request the exchange rates from an external site. Can I optimize this behavior (reading once per day or per session)? Maybe store a global variable and check the last update date?

    Read the article

  • Why isn't the copy constructor elided here?

    - by Jesse Beder
    (I'm using gcc with -O2.) This seems like a straightforward opportunity to elide the copy constructor, since there are no side-effects to accessing the value of a field in a bar's copy of a foo; but the copy constructor is called, since I get the output meep meep!.

        #include <iostream>

        struct foo {
            foo(): a(5) { }
            foo(const foo& f): a(f.a) { std::cout << "meep meep!\n"; }
            int a;
        };

        struct bar {
            foo F() const { return f; }
            foo f;
        };

        int main() {
            bar b;
            int a = b.F().a;
            return 0;
        }

    Read the article

  • Specific template for the first element.

    - by Kalinin
    I have a template:

        <xsl:template match="paragraph">
            ...
        </xsl:template>

    I call it with:

        <xsl:apply-templates select="paragraph"/>

    For the first element I need to do:

        <xsl:template match="paragraph[1]">
            ...
            <xsl:apply-templates select="."/><!-- I understand that this does not work -->
            ...
        </xsl:template>

    How can I call <xsl:apply-templates select="paragraph"/> (for the first paragraph element) from within the template <xsl:template match="paragraph[1]">? The problem so far is that I end up with something like a loop. I currently solve the problem like this (but I do not like it):

        <xsl:for-each select="paragraph">
            <xsl:choose>
                <xsl:when test="position() = 1">
                    ...
                    <xsl:apply-templates select="."/>
                    ...
                </xsl:when>
                <xsl:otherwise>
                    <xsl:apply-templates select="."/>
                </xsl:otherwise>
            </xsl:choose>
        </xsl:for-each>

    Read the article

  • Index question: Select * with WHERE clause. Where and how to create index

    - by Mestika
    Hi, I'm working on optimizing some of my queries and I have a query that reads:

        select * from SC where c_id = <c_id>

    The schema of SC looks like this:

        SC (
            c_id int not null,
            date_start date not null,
            date_stop date not null,
            r_t_id int not null,
            nt int,
            t_p decimal,
            PRIMARY KEY (c_id, r_t_id, date_start, date_stop)
        );

    My immediate bid on how the index should be created is a covering index in this order:

        INDEX (c_id, date_start, date_stop, nt, r_t_id, t_p)

    I base this order on the following: the WHERE clause selects on c_id, making it the first column; next come date_start and date_stop, to cover a sort of "range" defined by these parameters; next nt, because it will be selected; next r_t_id, because it is an ID for a specific type in my r_t table; and last t_p, because it is just information. I don't know if it is at all necessary to order the columns in a specific way when it is a SELECT * statement. I should say that SC is not the biggest table; I can't say exactly how many rows it contains, but an estimate would be between <10 and 1000. The next thing to add is that other queries insert data into SC, and I know that indexes on tables with insertions can be cost-ineffective, but can I somehow find a golden middle way to keep this performing well? I don't know if it makes a difference, but I'm using an IBM DB2 version 9.7 database. Sincerely, Mestika
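    One thing worth noting (not from the original post): since the query only filters on c_id, and the primary key (c_id, r_t_id, date_start, date_stop) already has c_id as its leading column, DB2 can use the primary key index for this predicate as it stands. A fully covering index would only additionally avoid touching the data pages, at the cost of slowing down every insert. A minimal sketch of that option, with a made-up index name:

        -- Hedged sketch for DB2 9.7; only worthwhile if table access itself
        -- shows up as a significant cost in the access plan.
        CREATE INDEX sc_cid_cover ON SC (c_id, date_start, date_stop, r_t_id, nt, t_p);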

    Read the article

  • Counting context switches per thread

    - by Sarmun
    Is there a way to see how many context switches each thread generates (both in and out, if possible)? Either as a per-second rate, or by letting it run and getting aggregated data after some time, on either Linux or Windows. I have found only tools that give an aggregated context-switch count for the whole OS or per process. My program makes many context switches (50k/s), probably a lot of them unnecessary, but I am not sure where to start optimizing - where do most of them happen?

    Read the article

  • optimize a string.Format + replace.

    - by acidzombie24
    I have this function. The Visual Studio profiler marked the line with string.Format as hot and as the place where I spend much of my time. How can I write this loop more efficiently?

        public string EscapeNoPredicate(string sz)
        {
            var s = new StringBuilder(sz);
            s.Replace(sepStr, sepStr + sepStr);
            foreach (char v in IllegalChars)
            {
                string s2 = string.Format("{0}{1:X2}", seperator, (Int16)v);
                s.Replace(v.ToString(), s2);
            }
            return s.ToString();
        }

    Read the article

  • Using Custom Generic Collection faster with objects than List

    - by Kaminari
    I'm iterating through a List<> to find a matching element. The problem is that the object has only 2 significant values, Name and Link (both strings), but also has some other values which I don't want to compare. I'm thinking about using something like HashSet (which is exactly what I'm searching for -- fast) from .NET 3.5, but the target framework has to be 2.0. There is something called Power Collections here: http://powercollections.codeplex.com/ - should I use that? Or maybe there is another way? If not, can you suggest a suitable custom collection?

    Read the article

  • Anything wrong with this MySQL query? It takes 10+ seconds to run

    - by user345426
    I have a search that is taking 10+ seconds to execute! Keep in mind it is also searching over 200,000 products in the database. The EXPLAIN output and the MySQL query are posted below.

        1  SIMPLE  p    ref     PRIMARY,products_status,prod_prodid_status,product...  products_status            1  const                                    9048  Using where; Using temporary; Using filesort
        1  SIMPLE  v    ref     PRIMARY,vendors_id,vendors_vendorid                    vendors_vendorid           4  rhinomar_rhinomartnew.p.vendors_id          1
        1  SIMPLE  s    ref     products_id                                             products_id                4  rhinomar_rhinomartnew.p.products_id         1
        1  SIMPLE  pd   ref     PRIMARY,products,prod_desc_prodid_prodname              prod_desc_prodid_prodname  4  rhinomar_rhinomartnew.p.products_id         1
        1  SIMPLE  p2c  ref     PRIMARY,ptc_catidx                                      PRIMARY                    4  rhinomar_rhinomartnew.p.products_id         1  Using where; Using index
        1  SIMPLE  c    eq_ref  PRIMARY                                                 PRIMARY                    4  rhinomar_rhinomartnew.p2c.categories_id     1  Using where

    MySQL query:

        select p.products_id, p.products_image, p.products_price, p.products_weight,
               p.products_unit_quantity, s.specials_new_products_price, s.status,
               pd.products_name, pd.products_img_alt
        from products p
        left join vendors v ON v.vendors_id = p.vendors_id
        left join specials s on s.products_id = p.products_id
        left join products_description pd on pd.products_id = p.products_id
        left join products_to_categories p2c on p2c.products_id = p.products_id
        left join categories c on c.categories_id = p2c.categories_id
        where (
                ( pd.products_name like '%apparel%' )
                or p2c.categories_id IN (773, 132, 135, 136, 119, 122, 124, 125, 126, 1749, 1753, 1747, 123, 127, 130, 131, 178, 137, 140, 164, 165, 166, 167, 168, 169, 832, 2045)
                or p.products_id = 'apparel'
                or p.products_model = 'apparel'
                or CONCAT(v.vendors_prefix, '-') = 'apparel'
                or CONCAT(v.vendors_prefix, '-', p.products_id) = 'apparel'
              )
          and p.products_status = '1'
          and c.categories_status = '1'
        group by p.products_id
        order by pd.products_name

    Read the article

  • Will an optimizing compiler remove calls to a method whose result will be multiplied by zero?

    - by Tim R.
    Suppose you have a computationally expensive method, Compute(p), which returns some float, and another method, Falloff(p), which returns another float from zero to one. If you compute Falloff(p) * Compute(p), will Compute(p) still run when Falloff(p) returns zero? Or would you need to write a special case to prevent Compute(p) from running unnecessarily? Theoretically, an optimizing compiler could determine that omitting Compute when Falloff returns zero would have no effect on the program. However, this is kind of hard to test, since if you have Compute output some debug data to determine whether it is running, the compiler would know not to omit it because of that debug info, resulting in sort of a Schrodinger's cat situation. I know the safe solution to this problem is just to add the special case, but I'm just curious.

    Read the article

  • Why is this query so slow?

    - by Silver Light
    This query appears in the MySQL slow query log: it takes 11 seconds.

        INSERT INTO record_visits ( record_id, visit_day )
        VALUES ( '567', NOW() );

    The table has 501043 records and its structure looks like this:

        CREATE TABLE IF NOT EXISTS `record_visits` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `record_id` int(11) DEFAULT NULL,
          `visit_day` date DEFAULT NULL,
          `visit_cnt` bigint(20) DEFAULT '1',
          PRIMARY KEY (`id`),
          UNIQUE KEY `record_id_visit_day` (`record_id`,`visit_day`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 ;

    What could be wrong? Why does this INSERT take so long?
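    Two notes, offered as hedged suggestions rather than a diagnosis (not from the original post): a single-row INSERT like this does almost no work on its own, so an 11-second entry usually points at contention (MyISAM's table-level lock under concurrent long-running reads) or at unique-index maintenance becoming I/O-bound, rather than at the statement text. Separately, given the UNIQUE KEY on (record_id, visit_day) and the visit_cnt column, the usual per-visit pattern is a single upsert, sketched below.

        -- Hedged sketch: insert the first visit of the day, otherwise bump the counter.
        -- CURDATE() is used because visit_day is a DATE column.
        INSERT INTO record_visits (record_id, visit_day)
        VALUES (567, CURDATE())
        ON DUPLICATE KEY UPDATE visit_cnt = visit_cnt + 1;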

    Read the article

  • How do I select a random record efficiently in MySQL?

    - by user198729
        mysql> EXPLAIN SELECT * FROM urls ORDER BY RAND() LIMIT 1;
        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows  | Extra                           |
        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
        |  1 | SIMPLE      | urls  | ALL  | NULL          | NULL | NULL    | NULL | 62228 | Using temporary; Using filesort |
        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+

    The above doesn't qualify as efficient; how should I do it properly?
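    A minimal sketch of one common alternative (not from the original post; it assumes urls has an auto-increment primary key named id): pick a random id first and then fetch the first row at or above it by primary key, so no full scan, temporary table, or filesort is needed. Gaps in the id sequence make rows just after large gaps slightly more likely to be chosen.

        -- Hedged sketch; trades perfect uniformity for an indexed lookup.
        SELECT u.*
        FROM urls u
        JOIN (SELECT FLOOR(RAND() * (SELECT MAX(id) FROM urls)) + 1 AS rid) r
          ON u.id >= r.rid
        ORDER BY u.id
        LIMIT 1;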

    Read the article

  • Most efficient way to LIMIT results in a JOIN?

    - by johnnietheblack
    I have a fairly simple one-to-many type join in a MySQL query. In this case, I'd like to LIMIT my results by the left table. For example, let's say I have an accounts table and a comments table, and I'd like to pull 100 rows from accounts and all the associated comments rows for each. The only way I can think to do this is with a sub-select in the FROM clause instead of simply selecting FROM accounts. Here is my current idea:

        SELECT a.*, c.*
        FROM (SELECT * FROM accounts LIMIT 100) a
        LEFT JOIN `comments` c ON c.account_id = a.id
        ORDER BY a.id

    However, whenever I need to do a sub-select of some sort, my intermediate-level SQL knowledge feels like it's doing something wrong. Is there a more efficient, or faster, way to do this, or is this pretty good? By the way... this might be the absolute simplest way to do this, which I'm okay with as an answer. I'm simply trying to figure out if there IS another way to do this that could potentially compete with the above statement in terms of speed.
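    A small refinement rather than a different approach (not from the original post; it assumes accounts has a primary key named id): a bare LIMIT 100 with no ORDER BY returns whichever 100 rows the engine happens to produce, so ordering inside the derived table makes the chosen accounts deterministic; the derived-table-then-join shape itself is the standard way to limit on the "one" side of a one-to-many join.

        -- Hedged sketch; same shape as above with a deterministic LIMIT.
        SELECT a.*, c.*
        FROM (SELECT * FROM accounts ORDER BY id LIMIT 100) a
        LEFT JOIN comments c ON c.account_id = a.id
        ORDER BY a.id;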

    Read the article

  • SimpleDB as Denormalized DB

    - by Max
    In an environment where a relational database handles all business transactions, is it a good idea to use SimpleDB for all data queries, to get faster and more lightweight searches? The master data storage would be a relational DB that is "replicated"/"transformed" into SimpleDB to provide very fast read-only queries, since no JOINs or complicated subselects are needed.

    Read the article

  • Optimizing a "set in a string list" to a "set as a matrix" operation

    - by Eric Fournier
    I have a set of strings which contain space-separated elements. I want to build a matrix which will tell me which elements were part of which strings. For example:

        ""
        "A B C"
        "D"
        "B D"

    should give something like:

          A B C D
        1
        2 1 1 1
        3       1
        4   1   1

    Now I've got a solution, but it runs slow as molasses, and I've run out of ideas on how to make it faster:

        reverseIn <- function(vector, value) {
          return(value %in% vector)
        }

        buildCategoryMatrix <- function(valueVector) {
          allClasses <- c()
          for(classVec in unique(valueVector)) {
            allClasses <- unique(c(allClasses, strsplit(classVec, " ", fixed=TRUE)[[1]]))
          }
          resMatrix <- matrix(ncol=0, nrow=length(valueVector))
          splitValues <- strsplit(valueVector, " ", fixed=TRUE)
          for(cat in allClasses) {
            if(cat=="") {
              catIsPart <- (valueVector == "")
            } else {
              catIsPart <- sapply(splitValues, reverseIn, cat)
            }
            resMatrix <- cbind(resMatrix, catIsPart)
          }
          colnames(resMatrix) <- allClasses
          return(resMatrix)
        }

    Profiling the function gives me this:

        $by.self
                 self.time self.pct total.time total.pct
        "match"      31.20    34.74      31.24     34.79
        "FUN"        30.26    33.70      74.30     82.74
        "lapply"     13.56    15.10      87.86     97.84
        "%in%"       12.92    14.39      44.10     49.11

    So my actual questions would be:
    - Where is the 33% spent in "FUN" coming from?
    - Would there be any way to speed up the %in% call?
    I tried turning the strings into factors prior to going into the loop so that I'd be matching numbers instead of strings, but that actually makes R crash. I've also tried going for partial matrix assignment (i.e., resMatrix[i,x] <- 1), where i is the number of the string and x is the vector of factors. No dice there either, as it seems to keep on running infinitely.

    Read the article

  • Has anyone ever successfully made index merge work in MySQL?

    - by user198729
    Setup:

        mysql> create table t(a integer unsigned, b integer unsigned);
        mysql> insert into t(a,b) values (1,2),(1,3),(2,4);
        mysql> create index i_t_a on t(a);
        mysql> create index i_t_b on t(b);
        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL |    3 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Is there something I'm missing?

    Update:

        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL | 1863 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Version:

        mysql> select version();
        +----------------------+
        | version()            |
        +----------------------+
        | 5.1.36-community-log |
        +----------------------+

    Has anyone ever successfully made index merge work for MySQL? I'd be glad to see success stories here :)
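    One thing worth checking (not from the original post, and offered as a hedged suggestion rather than a confirmed fix): with only a handful of rows, a full scan is genuinely cheaper than two index lookups plus a merge, so the optimizer's choice in the first EXPLAIN is expected. For the larger table, refreshing the statistics and temporarily ruling out the table scan shows whether index_merge (union/sort_union) is available at all on this build; if it still does not appear, the selectivity of a=1 and b=4 and the index_merge flags in optimizer_switch would be the next things to look at.

        mysql> analyze table t;
        mysql> explain select * from t force index (i_t_a, i_t_b) where a=1 or b=4;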

    Read the article
