Search Results

Search found 15401 results on 617 pages for 'memory optimization'.

  • Does the order of columns in a query matter?

    - by James Simpson
    When selecting columns from a MySQL table, is performance affected by the order in which you select the columns as compared to their order in the table (not considering indexes that may cover the columns)? For example, you have a table with columns uid, name, bday, and you have the following query: SELECT uid, name, bday FROM table Does MySQL see the following query any differently, and thus cause any sort of performance hit? SELECT uid, bday, name FROM table
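    A quick way to check this on a given server is to compare the optimizer's plan for both orderings (a minimal sketch; `t` stands in for the question's table and must exist with those columns):

        -- In practice both statements should produce identical EXPLAIN output,
        -- since the select-list order does not change the access path.
        EXPLAIN SELECT uid, name, bday FROM t;
        EXPLAIN SELECT uid, bday, name FROM t;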

  • Is this implementation truly tail-recursive?

    - by CFP
    Hello everyone! I've come up with the following code to compute in a tail-recursive way the result of an expression such as 3 4 * 1 + cos 8 * (aka 8*cos(1+(3*4))). The code is in OCaml. I'm using a list ref to emulate a stack. type token = Num of float | Fun of (float->float) | Op of (float->float->float);; let pop l = let top = (List.hd !l) in l := List.tl (!l); top;; let push x l = l := (x::!l);; let empty l = (l = []);; let pile = ref [];; let eval data = let stack = ref data in let rec _eval cont = match (pop stack) with | Num(n) -> cont n; | Fun(f) -> _eval (fun x -> cont (f x)); | Op(op) -> _eval (fun x -> cont (op x (_eval (fun y->y)))); in _eval (fun x->x) ;; eval [Fun(fun x -> x**2.); Op(fun x y -> x+.y); Num(1.); Num(3.)];; I've used continuations to ensure tail recursion, but since my stack implements some sort of a tree, and therefore provides quite a bad interface to what should be handled as a disjoint union type, the call to my function to evaluate the left branch with an identity continuation somehow irks a little. Yet it's working perfectly, but I have the feeling that in calling the _eval (fun y->y) bit, there must be something wrong happening, since it doesn't seem that this call can replace the previous one in the stack structure... Am I misunderstanding something here? I mean, I understand that with only the first call to _eval there wouldn't be any problem optimizing the calls, but here it seems to me that evaluating the _eval (fun y->y) part will have to be stacked up, and therefore will fill the stack, possibly leading to an overflow... Thanks!

  • Data access from a single table in SQL Server 2005 is too slow

    - by Muhammad Kashif Nadeem
    Following is the script of the table. Accessing data from this table is too slow. SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[Emails]( [id] [int] IDENTITY(1,1) NOT NULL, [datecreated] [datetime] NULL CONSTRAINT [DF_Emails_datecreated] DEFAULT (getdate()), [UID] [nvarchar](250) COLLATE Latin1_General_CI_AS NULL, [From] [nvarchar](100) COLLATE Latin1_General_CI_AS NULL, [To] [nvarchar](100) COLLATE Latin1_General_CI_AS NULL, [Subject] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL, [Body] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL, [HTML] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL, [AttachmentCount] [int] NULL, [Dated] [datetime] NULL ) ON [PRIMARY] The following query takes 50 seconds to fetch data: select id, datecreated, UID, [From], [To], Subject, AttachmentCount, Dated from emails If I include Body and HTML in the select, the time is even worse. Indexes are on: id (unique, clustered), From (non-unique, non-clustered), To (non-unique, non-clustered). The table currently has 180,000+ records, and there might be 100,000 more each month, so this will only get slower as time passes. Would splitting the data into two tables solve the problem? What other indexes should there be?
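    As an illustration of the vertical split the question asks about, a hedged sketch (the table names EmailHeaders and EmailBodies are invented here; the columns mirror the schema above):

        -- Narrow table for listings: no nvarchar(max) columns, so the
        -- "fetch every row" query touches far fewer pages.
        CREATE TABLE [dbo].[EmailHeaders](
            [id] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
            [datecreated] [datetime] NULL DEFAULT (getdate()),
            [UID] [nvarchar](250) NULL,
            [From] [nvarchar](100) NULL,
            [To] [nvarchar](100) NULL,
            [AttachmentCount] [int] NULL,
            [Dated] [datetime] NULL
        );

        -- Wide table, read only when a single message is opened.
        CREATE TABLE [dbo].[EmailBodies](
            [id] [int] NOT NULL PRIMARY KEY
                REFERENCES [dbo].[EmailHeaders]([id]),
            [Subject] [nvarchar](max) NULL,
            [Body] [nvarchar](max) NULL,
            [HTML] [nvarchar](max) NULL
        );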

  • approximating log10[x^k0 + k1]

    - by Yale Zhang
    Greetings. I'm trying to approximate the function Log10[x^k0 + k1], where .21 < k0 < 21, 0 < k1 < ~2000, and x is an integer < 2^14. k0 and k1 are constants. For practical purposes, you can assume k0 = 2.12, k1 = 2660. The desired accuracy is 5*10^-4 relative error. This function is virtually identical to Log[x], except near 0, where it differs a lot. I have already come up with a SIMD implementation that is ~1.15x faster than a simple lookup table, but would like to improve it if possible, which I think is very hard due to the lack of efficient instructions. My SIMD implementation uses 16-bit fixed-point arithmetic to evaluate a 3rd degree polynomial (I use a least squares fit). The polynomial uses different coefficients for different input ranges. There are 8 ranges, and range i spans (64)2^i to (64)2^(i + 1). The rationale behind this is that the derivatives of Log[x] drop rapidly with x, meaning a polynomial will fit it more accurately, since polynomials are an exact fit for functions that have a derivative of 0 beyond a certain order. SIMD table lookups are done very efficiently with a single _mm_shuffle_epi8(). I use SSE's float to int conversion to get the exponent and significand used for the fixed-point approximation. I also software-pipelined the loop to get a ~1.25x speedup, so further code optimizations are probably unlikely. What I'm asking is whether there's a more efficient approximation at a higher level? For example: Can this function be decomposed into functions with a limited domain, like log2((2^x) * significand) = x + log2(significand), hence eliminating the need to deal with different ranges (table lookups)? The main problem, I think, is that adding the k1 term kills all those nice log properties that we know and love, making it not possible. Or is it? Iterative method? I don't think so, because the Newton method for log[x] is already a complicated expression. Exploiting locality of neighboring pixels? - if the 8 inputs fall in the same approximation range, then I can look up a single coefficient, instead of looking up separate coefficients for each element. Thus, I can use this as a fast common case, and use a slower, general code path when it isn't. But for my data, the range needs to be ~2000 before this property holds 70% of the time, which doesn't seem to make this method competitive. Please give me some opinion, especially if you're an applied mathematician, even if you say it can't be done. Thanks.
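    On the decomposition idea, one identity worth noting (a math sketch, not a drop-in answer; it assumes x > 0 and simply factors out the power term):

        \log_{10}\!\left(x^{k_0} + k_1\right) = k_0 \log_{10} x + \log_{10}\!\left(1 + k_1 x^{-k_0}\right)

    The first term reduces to the exponent/significand trick that works for a plain Log[x]; the second term is a bounded correction that decays toward 0 as x grows, so only small x would need an extra table (or a short log1p-style series) for it.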

  • Where is the bottleneck in this code?

    - by Mikhail
    I have the following tight loop that makes up the serial bottleneck of my code. Ideally I would parallelize the function that calls this, but that is not possible. //n is about 60 for (int k = 0;k < n;k++) { double fone = z[k*n+i+1]; double fzer = z[k*n+i]; z[k*n+i+1]= s*fzer+c*fone; z[k*n+i] = c*fzer-s*fone; } Are there any optimizations that can be made, such as vectorization or some evil inlining, that can help this code? I am looking into finding eigensolutions of tridiagonal matrices. http://www.cimat.mx/~posada/OptDoglegGraph/DocLogisticDogleg/projects/adjustedrecipes/tqli.cpp.html

  • How to store static content across branches in a single location in version control

    - by Shravan
    [Just a random thought] I have a pdf doc that is downloaded when the user clicks on 'help' on my website. Now, this is a pretty huge document and is saved in version control (SVN), and is thus copied for all branches that exist in SVN. This is static content, something that developers are not working on, and it does not change often. Is there a more efficient way to store it (one that would not hamper local deployments) that would make SVN checkouts and updates relatively faster? I know the benefit would not be huge; this is just something that came to my head nonetheless.

  • unroll nested for loops in C++

    - by Hristo
    How would I unroll the following nested loops? for(k = begin; k != end; ++k) { for(j = 0; j < Emax; ++j) { for(i = 0; i < N; ++i) { if (j >= E[i]) continue; array[k] += foo(i, tr[k][i], ex[j][i]); } } } I tried the following, but my output isn't the same, and it should be: for(k = begin; k != end; ++k) { for(j = 0; j < Emax; ++j) { for(i = 0; i+4 < N; i+=4) { if (j >= E[i]) continue; array[k] += foo(i, tr[k][i], ex[j][i]); array[k] += foo(i+1, tr[k][i+1], ex[j][i+1]); array[k] += foo(i+2, tr[k][i+2], ex[j][i+2]); array[k] += foo(i+3, tr[k][i+3], ex[j][i+3]); } if (i < N) { for (; i < N; ++i) { if (j >= E[i]) continue; array[k] += foo(i, tr[k][i], ex[j][i]); } } } } I will be running this code in parallel using Intel's TBB so that it takes advantage of multiple cores. After this is finished running, another function prints out what is in array[] and right now, with my unrolling, the output isn't identical. Any help is appreciated. Thanks, Hristo

  • What does ER_WARN_FIELD_RESOLVED mean?

    - by VolkerK
    When SHOW WARNINGS after an EXPLAIN EXTENDED shows a Note 1276, Field or reference 'test.foo.bar' of SELECT #2 was resolved in SELECT #1, what exactly does that mean and what impact does it have? In my case it prevents MySQL from using what seems to be a perfectly good index. But it's not about fixing that specific query (as it is an irrelevant test). I found http://dev.mysql.com/doc/refman/5.0/en/error-messages-server.html but Error: 1276 SQLSTATE: HY000 (ER_WARN_FIELD_RESOLVED) Message: Field or reference '%s%s%s%s%s' of SELECT #%d was resolved in SELECT #%d isn't much of an explanation.
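    For what it's worth, the note appears whenever a column inside a subquery can only be resolved against an outer SELECT, i.e. the subquery is correlated (a minimal hypothetical example; `foo`/`bar` mirror the message text above, not a real schema):

        -- `foo.bar` is not visible among SELECT #2's own tables, so it is
        -- resolved in SELECT #1 and note 1276 is emitted; the now-dependent
        -- subquery can restrict the optimizer's index choices.
        SELECT * FROM foo
        WHERE id IN (SELECT id_ref FROM baz WHERE baz.x = foo.bar);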

  • A set of nodes of the same type, each with an if-test. How to make it shorter?

    - by Kalinin
    How to make the code more beautiful (compact)? <xsl:template match="part"> <table class="part"> <xsl:if test="name != ''"> <tr> <td>Name</td><td><xsl:value-of select="name"/></td> </tr> </xsl:if> <xsl:if test="model != ''"> <tr> <td>Model</td><td><xsl:value-of select="model"/></td> </tr> </xsl:if> <xsl:if test="year != ''"> <tr> <td>Year</td><td><xsl:value-of select="year"/></td> </tr> </xsl:if> <xsl:if test="glass_type != ''"> <tr> <td>Type</td><td><xsl:value-of select="glass_type"/></td> </tr> </xsl:if> <xsl:if test="scancode != ''"> <tr> <td>Scancode</td><td><xsl:value-of select="scancode"/></td> </tr> </xsl:if> <xsl:if test="eurocode != ''"> <tr> <td>Eurocode</td><td><xsl:value-of select="eurocode"/></td> </tr> </xsl:if> <xsl:if test="coment != ''"> <tr> <td>Comment</td><td><xsl:value-of select="coment"/></td> </tr> </xsl:if> <xsl:if test="glass_size != ''"> <tr> <td>Size</td><td><xsl:value-of select="glass_size"/></td> </tr> </xsl:if> <xsl:if test="vendor != ''"> <tr> <td>Vendor</td><td><xsl:value-of select="vendor"/></td> </tr> </xsl:if> <xsl:if test="trademark != ''"> <tr> <td>Trade mark</td><td><xsl:value-of select="trademark"/></td> </tr> </xsl:if> <xsl:if test="fprice != ''"> <tr> <td>Price</td><td><xsl:value-of select="fprice"/></td> </tr> </xsl:if> </table> </xsl:template> Update: I wrote: <my:translations xmlns:my="my:my"> <w e="name" r="Name"/> <w e="model" r="Model"/> <w e="year" r="Year"/> <w e="glass_type" r="Type"/> <w e="scancode" r="Scancode"/> <w e="eurocode" r="Eurocode"/> <w e="comment" r="Comment"/> <w e="glass_size" r="Size"/> <w e="vendor" r="Vendor"/> <w e="trademark" r="Trade mark"/> <w e="fprice" r="Price"/> </my:translations> <xsl:value-of select="//w/@r"/> And I get no result from this code. Is that normal? And how can I get at each w element?

  • MySQL - help me optimize this query (improved question)

    - by sandeepan-nath
    About the system: - There are tutors who create classes and packs - A tags-based search approach is being followed. Tag relations are created when new tutors register and when tutors create packs (this makes tutors and packs searchable). For details please check the section How tags work in this system? below. Following is the concerned query: SELECT SUM(DISTINCT( t.tag LIKE "%Dictatorship%" )) AS key_1_total_matches, SUM(DISTINCT( t.tag LIKE "%democracy%" )) AS key_2_total_matches, COUNT(DISTINCT( od.id_od )) AS tutor_popularity, CASE WHEN ( IF(( wc.id_wc > 0 ), ( wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1 AND ( wccp.country_code = 'IE' OR wccp.country_code IN ( 'INT' ) ) ), 0) ) THEN 1 ELSE 0 END AS 'classes_published', CASE WHEN ( IF(( lp.id_lp > 0 ), ( lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND ( lpcp.country_code = 'IE' OR lpcp.country_code IN ( 'INT' ) ) ), 0) ) THEN 1 ELSE 0 END AS 'packs_published', td.*, u.* FROM tutor_details AS td JOIN users AS u ON u.id_user = td.id_user LEFT JOIN learning_packs_tag_relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor LEFT JOIN learning_packs AS lp ON lptagrels.id_lp = lp.id_lp LEFT JOIN learning_packs_categories AS lpc ON lpc.id_lp_cat = lp.id_lp_cat LEFT JOIN learning_packs_categories AS lpcp ON lpcp.id_lp_cat = lpc.id_parent LEFT JOIN learning_pack_content AS lpct ON ( lp.id_lp = lpct.id_lp ) LEFT JOIN webclasses_tag_relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor LEFT JOIN webclasses AS wc ON wtagrels.id_wc = wc.id_wc LEFT JOIN learning_packs_categories AS wcc ON wcc.id_lp_cat = wc.id_wp_cat LEFT JOIN learning_packs_categories AS wccp ON wccp.id_lp_cat = wcc.id_parent LEFT JOIN order_details AS od ON td.id_tutor = od.id_author LEFT JOIN orders AS o ON od.id_order = o.id_order LEFT JOIN tutors_tag_relations AS ttagrels ON td.id_tutor = ttagrels.id_tutor JOIN tags AS t ON ( t.id_tag = ttagrels.id_tag ) OR ( t.id_tag = lptagrels.id_tag ) OR ( t.id_tag = wtagrels.id_tag ) WHERE ( u.country = 'IE' OR u.country IN ( 'INT' ) ) AND CASE WHEN ( ( t.id_tag = lptagrels.id_tag ) AND ( lp.id_lp > 0 ) ) THEN lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND ( lpcp.country_code = 'IE' OR lpcp.country_code IN ( 'INT' ) ) ELSE 1 END AND CASE WHEN ( ( t.id_tag = wtagrels.id_tag ) AND ( wc.id_wc > 0 ) ) THEN wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1 AND ( wccp.country_code = 'IE' OR wccp.country_code IN ( 'INT' ) ) ELSE 1 END AND CASE WHEN ( od.id_od > 0 ) THEN od.id_author = td.id_tutor AND o.order_status = 'paid' AND CASE WHEN ( od.id_wc > 0 ) THEN od.can_attend_class = 1 ELSE 1 END ELSE 1 END GROUP BY td.id_tutor HAVING key_1_total_matches = 1 AND key_2_total_matches = 1 ORDER BY tutor_popularity DESC, u.surname ASC, u.name ASC LIMIT 0, 20 The problem: The results returned by the above query are correct (the AND logic works as expected), but the time taken by the query rises alarmingly for heavier data; for the current data I have, it is around 25 seconds, as against normal query timings of the order of 0.005 - 0.0002 seconds, which makes it totally unusable. It is possible that some of the delay is caused because not all the relevant fields have been indexed yet. The tag field of the tags table is indexed. Is there something faulty with the query? What can be the reason behind 20+ seconds of execution time? How tags work in this system?
    When a tutor registers, tags are entered and tag relations are created with respect to the tutor's details like name, surname etc. When tutors create packs, again tags are entered and tag relations are created with respect to the pack's details like pack name, description etc. Tag relations for tutors are stored in tutors_tag_relations and those for packs in learning_packs_tag_relations. All individual tags are stored in the tags table. The EXPLAIN output: please see this screenshot - http://www.test.examvillage.com/Explain_query.jpg
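    Since the question itself notes that not all fields are indexed yet, one hedged first step is to verify indexes on every join and filter column before reshaping the query, for example (index names here are invented; the columns come from the query above):

        SHOW INDEX FROM tutors_tag_relations;
        -- Composite indexes on the relation tables let each LEFT JOIN
        -- resolve by lookup instead of a scan:
        ALTER TABLE tutors_tag_relations
            ADD INDEX idx_ttr (id_tutor, id_tag);
        ALTER TABLE learning_packs_tag_relations
            ADD INDEX idx_lptr (id_tutor, id_lp, id_tag);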

  • NSAutoreleasePool carrying across methods?

    - by Tim
    I'm building an iPhone application where I detach some threads to do long-running work in the background so as not to hang the UI. I understand that threads need NSAutoreleasePool instances for memory management. What I'm not sure about is if the threaded method calls another method - does that method also need an NSAutoreleasePool? Example code: - (void)primaryMethod { [self performSelectorInBackground:@selector(threadedMethod) withObject:nil]; } - (void)threadedMethod { NSAutoreleasePool *aPool = [[NSAutoreleasePool alloc] init]; // Some code here [self anotherMethod]; // Maybe more code here [aPool drain]; } - (void)anotherMethod { // More code here } The reason I ask is I'm receiving errors that objects are being autoreleased with no pool in place, and are "just leaking." I've seen other questions where people didn't have autorelease pools in place at all, and I understand why an autorelease pool is needed. I'm specifically interested in finding out whether an autorelease pool created in (in this example) threadedMethod applies to objects created in anotherMethod.

  • Will the compiler optimize escaping an inner loop?

    - by BCS
    The code I have looks like this (all uses of done shown): bool done = false; for(int i = 0; i < big; i++) { ... for(int j = 0; j < wow; j++) { ... if(foo(i,j)) { done = true; break; } ... } if(done) break; ... } Will any compilers convert it to this: for(int i = 0; i < big; i++) { ... for(int j = 0; j < wow; j++) { ... if(foo(i,j)) goto __done; // same as a labeled break if we had it ... } ... } __done:;

  • MySQL won't use index for query?

    - by Jack Sleight
    I have this table: CREATE TABLE `point` ( `id` INT(11) NOT NULL AUTO_INCREMENT, `siteid` INT(11) NOT NULL, `lft` INT(11) DEFAULT NULL, `rgt` INT(11) DEFAULT NULL, `level` SMALLINT(6) DEFAULT NULL, PRIMARY KEY (`id`), KEY `point_siteid_site_id` (`siteid`), CONSTRAINT `point_siteid_site_id` FOREIGN KEY (`siteid`) REFERENCES `site` (`id`) ON DELETE CASCADE ) ENGINE=INNODB AUTO_INCREMENT=35 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci And this query: SELECT * FROM `point` WHERE siteid = 1; Which results in this EXPLAIN information:

        +----+-------------+-------+------+----------------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys        | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+----------------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | point | ALL  | point_siteid_site_id | NULL | NULL    | NULL |    6 | Using where |
        +----+-------------+-------+------+----------------------+------+---------+------+------+-------------+

    The question is: why isn't the query using the point_siteid_site_id index?
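    One thing worth ruling out first: with only 6 rows in the table (see the rows column above), a full scan is usually cheaper than an index lookup, so the optimizer may skip the index purely on cost grounds. A hedged way to test that theory:

        -- If the index is usable and merely deemed too expensive,
        -- forcing it should still execute fine:
        SELECT * FROM `point` FORCE INDEX (point_siteid_site_id)
        WHERE siteid = 1;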

  • Verifying compiler optimizations in gcc/g++ by analyzing assembly listings

    - by Victor Liu
    I just asked a question related to how the compiler optimizes certain C++ code, and I was looking around SO for any questions about how to verify that the compiler has performed certain optimizations. I was trying to look at the assembly listing generated with g++ (g++ -c -g -O2 -Wa,-ahl=file.s file.c) to possibly see what is going on under the hood, but the output is too cryptic to me. What techniques do people use to tackle this problem, and are there any good references on how to interpret the assembly listings of optimized code or articles specific to the GCC toolchain that talk about this problem?

  • Filtering with joined tables

    - by viraptor
    I'm trying to get some query performance improved, but the generated query does not look the way I expect it to. The results are retrieved using: query = session.query(SomeModel). options(joinedload_all('foo.bar')). options(joinedload_all('foo.baz')). options(joinedload('quux.other')) What I want to do is filter on the table joined via 'foo', but this way doesn't work: query = query.filter(FooModel.address == '1.2.3.4') It results in a clause like this attached to the query: WHERE foos.address = '1.2.3.4' Which doesn't do the filtering in a proper way, since the generated joins attach tables foos_1 and foos_2. If I try that query manually but change the filtering clause to: WHERE foos_1.address = '1.2.3.4' AND foos_2.address = '1.2.3.4' It works fine. The question is of course - how can I achieve this with sqlalchemy itself?

  • Queries within queries: Is there a better way?

    - by mririgo
    As I build bigger, more advanced web applications, I'm finding myself writing extremely long and complex queries. I tend to write queries within queries a lot because I feel making one call to the database from PHP is better than making several and correlating the data. However, anyone who knows anything about SQL knows about JOINs. Personally, I've used a JOIN or two before, but quickly stopped when I discovered using subqueries because it felt easier and quicker for me to write and maintain. Commonly, I'll do subqueries that may contain one or more subqueries from relative tables. Consider this example: SELECT (SELECT username FROM users WHERE records.user_id = user_id) AS username, (SELECT last_name||', '||first_name FROM users WHERE records.user_id = user_id) AS name, in_timestamp, out_timestamp FROM records ORDER BY in_timestamp Rarely, I'll do subqueries after the WHERE clause. Consider this example: SELECT user_id, (SELECT name FROM organizations WHERE (SELECT organization FROM locations WHERE records.location = location_id) = organization_id) AS organization_name FROM records ORDER BY in_timestamp In these two cases, would I see any sort of improvement if I decided to rewrite the queries using a JOIN? As more of a blanket question, what are the advantages/disadvantages of using subqueries or a JOIN? Is one way more correct or accepted than the other?
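    For comparison, here is the first example above rewritten with a join (a sketch built only from the columns visible in the question; a LEFT JOIN preserves the scalar-subquery behaviour of returning NULL when no user matches):

        SELECT u.username,
               u.last_name || ', ' || u.first_name AS name,
               r.in_timestamp,
               r.out_timestamp
        FROM records r
        LEFT JOIN users u ON u.user_id = r.user_id
        ORDER BY r.in_timestamp;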

  • Long-running Database Query

    - by JamesMLV
    I have a long-running SQL Server 2005 query that I have been hoping to optimize. When I look at the actual execution plan, it says a Clustered Index Seek has 66% of the cost. Execution Plan Snippet: <RelOp AvgRowSize="31" EstimateCPU="0.0113754" EstimateIO="0.0609028" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="10198.5" LogicalOp="Clustered Index Seek" NodeId="16" Parallel="false" PhysicalOp="Clustered Index Seek" EstimatedTotalSubtreeCost="0.0722782"> <OutputList> <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="quoteDate" /> <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="price" /> <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" /> </OutputList> <RunTimeInformation> <RunTimeCountersPerThread Thread="0" ActualRows="1067" ActualEndOfScans="1" ActualExecutions="1" /> </RunTimeInformation> <IndexScan Ordered="true" ScanDirection="FORWARD" ForcedIndex="false" NoExpandHint="false"> <DefinedValues> <DefinedValue> <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="quoteDate" /> </DefinedValue> <DefinedValue> <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="price" /> </DefinedValue> <DefinedValue> <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" /> </DefinedValue> </DefinedValues> <Object Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Index="[_dta_index_Indices_14_320720195__K5_K2_K1_3]" Alias="[I]" /> <SeekPredicates> <SeekPredicate> <Prefix ScanType="EQ"> <RangeColumns> <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="HedgeProduct" ComputedColumn="true" /> </RangeColumns> <RangeExpressions> <ScalarOperator ScalarString="(1)"> <Const ConstValue="(1)" /> </ScalarOperator> </RangeExpressions> </Prefix> <StartRange ScanType="GE"> <RangeColumns> <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" /> </RangeColumns> <RangeExpressions> <ScalarOperator ScalarString="[@StartMonth]"> <Identifier> <ColumnReference Column="@StartMonth" /> </Identifier> </ScalarOperator> </RangeExpressions> </StartRange> <EndRange ScanType="LE"> <RangeColumns> <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" /> </RangeColumns> <RangeExpressions> <ScalarOperator ScalarString="[@EndMonth]"> <Identifier> <ColumnReference Column="@EndMonth" /> </Identifier> </ScalarOperator> </RangeExpressions> </EndRange> </SeekPredicate> </SeekPredicates> </IndexScan> </RelOp> From this, does anyone see an obvious problem that would be causing this to take so long?
Here is the query: (SELECT quotedate, tenure, price, ActualVolume, HedgePortfolioValue, Price AS UnhedgedPrice, ((ActualVolume*Price - HedgePortfolioValue)/ActualVolume) AS HedgedPrice FROM ( SELECT [quoteDate] ,[price] , tenure ,isnull(wf_1.[Risks].[HedgePortValueAsOfDate2](1,tenureMonth,quotedate,price),0) as HedgePortfolioValue ,[TotalOperatingGasVolume] as ActualVolume FROM [wf_1].[dbo].[Indices] I inner join ( SELECT DISTINCT tenureMonth FROM [wf_1].[Risks].[KnowRiskTrades] WHERE HedgeProduct = 1 AND portfolio <> 'Natural Gas Hedge Transactions' ) B ON I.tenure=B.tenureMonth inner join ( SELECT [Month],[TotalOperatingGasVolume] FROM [wf_1].[Risks].[ActualGasVolumes] ) C ON C.[Month]=B.tenureMonth WHERE HedgeProduct = 1 AND quoteDate>=dateadd(day, -3*365, tenureMonth) AND quoteDate<=dateadd(day,-3,tenureMonth) )A )
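    One detail in the plan worth noting: the estimated row count (EstimateRows="10198.5") is roughly ten times the actual (ActualRows="1067"), and a skewed estimate like that can push the optimizer into a poor plan. A cheap, hedged first experiment is to refresh the statistics on the table driving the seek:

        USE wf_1;
        UPDATE STATISTICS [dbo].[Indices];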

  • MySQL Partitioning performance

    - by Imran Pathan
    I measured performance on key-partitioned tables and normal tables separately, but couldn't find any performance improvement with partitioning. Queries are pruned. Using MySQL 5.1.47 on RHEL 4. Table details: UserUsage - will have entries for user mobile number and data usage for each date; mobile number and Date as PRI KEY. UserProfile - queries the previous table and stores a summary for each mobile number; mobile number PRI KEY. CREATE TABLE `UserUsage` ( `Msisdn` decimal(20,0) NOT NULL, `Date` date NOT NULL, . . PRIMARY KEY USING BTREE (`Msisdn`,`Date`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 PARTITION BY KEY(Msisdn) PARTITIONS 50; CREATE TABLE `UserProfile` ( `Msisdn` decimal(20,0) NOT NULL, . . PRIMARY KEY (`Msisdn`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 PARTITION BY KEY(Msisdn) PARTITIONS 50; The second table is updated by a Perl program that selects from the first table ordered by date; the query is select * from UserUsage where Msisdn=number order by Date desc limit 7 [Process data in perl] update UserProfile values(....) where Msisdn=number EXPLAIN PARTITIONS for the select shows rows being scanned in a particular partition only. Is something wrong with the partition design or the queries, given that partitioning takes almost the same or more time compared to normal tables?
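    A hedged note on why little gain may be expected here: for a point lookup on the partitioning key, pruning only narrows the search to one partition, and the B-tree descent inside that partition costs about the same as descending the single larger B-tree of an unpartitioned table. Pruning itself can be confirmed the way the question already does (the number below is a placeholder):

        EXPLAIN PARTITIONS
        SELECT * FROM UserUsage
        WHERE Msisdn = 1234567890
        ORDER BY Date DESC LIMIT 7;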

  • Access violation using LocalAlloc()

    - by PaulH
    I have a Visual Studio 2008 Windows Mobile 6 C++ application that is using an API that requires the use of LocalAlloc(). To make my life easier, I created an implementation of a standard allocator that uses LocalAlloc() internally: /// Standard library allocator implementation using LocalAlloc and LocalReAlloc /// to create a dynamically-sized array. /// Memory allocated by this allocator is never deallocated. That is up to the /// user. template< class T, int max_allocations > class LocalAllocator { public: typedef T value_type; typedef size_t size_type; typedef ptrdiff_t difference_type; typedef T* pointer; typedef const T* const_pointer; typedef T& reference; typedef const T& const_reference; pointer address( reference r ) const { return &r; }; const_pointer address( const_reference r ) const { return &r; }; LocalAllocator() throw() : c_( NULL ) { }; /// Attempt to allocate a block of storage with enough space for n elements /// of type T. n>=1 && n<=max_allocations. /// If memory cannot be allocated, a std::bad_alloc() exception is thrown. pointer allocate( size_type n, const void* /*hint*/ = 0 ) { if( NULL == c_ ) { c_ = LocalAlloc( LPTR, sizeof( T ) * n ); } else { HLOCAL c = LocalReAlloc( c_, sizeof( T ) * n, LHND ); if( NULL == c ) LocalFree( c_ ); c_ = c; } if( NULL == c_ ) throw std::bad_alloc(); return reinterpret_cast< T* >( c_ ); }; /// Normally, this would release a block of previously allocated storage. /// Since that's not what we want, this function does nothing. void deallocate( pointer /*p*/, size_type /*n*/ ) { // no deallocation is performed. that is up to the user. }; /// maximum number of elements that can be allocated size_type max_size() const throw() { return max_allocations; }; private: /// current allocation point HLOCAL c_; }; // class LocalAllocator My application is using that allocator implementation in a std::vector: #define MAX_DIRECTORY_LISTING 512 std::vector< WIN32_FIND_DATA, LocalAllocator< WIN32_FIND_DATA, MAX_DIRECTORY_LISTING > > file_list; WIN32_FIND_DATA find_data = { 0 }; HANDLE find_file = ::FindFirstFile( folder.c_str(), &find_data ); if( NULL != find_file ) { do { // access violation here on the 257th item. file_list.push_back( find_data ); } while ( ::FindNextFile( find_file, &find_data ) ); ::FindClose( find_file ); } // data submitted to the API that requires LocalAlloc()'d array of WIN32_FIND_DATA structures SubmitData( &file_list.front() ); On the 257th item added to the vector, the application crashes with an access violation: Data Abort: Thread=8e1b0400 Proc=8031c1b0 'rapiclnt' AKY=00008001 PC=03f9e3c8(coredll.dll+0x000543c8) RA=03f9ff04(coredll.dll+0x00055f04) BVA=21ae0020 FSR=00000007 First-chance exception at 0x03f9e3c8 in rapiclnt.exe: 0xC0000005: Access violation reading location 0x01ae0020. LocalAllocator::allocate is called with an n=512 and LocalReAlloc() succeeds.
    The actual Access Violation exception occurs within the std::vector code after the LocalAllocator::allocate call: 0x03f9e3c8 0x03f9ff04 > MyLib.dll!stlp_std::priv::__copy_trivial(const void* __first = 0x01ae0020, const void* __last = 0x01b03020, void* __result = 0x01b10020) Line: 224, Byte Offsets: 0x3c C++ MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::_M_insert_overflow(_WIN32_FIND_DATAW* __pos = 0x01b03020, _WIN32_FIND_DATAW& __x = {...}, stlp_std::__true_type& __formal = {...}, unsigned int __fill_len = 1, bool __atend = true) Line: 112, Byte Offsets: 0x5c C++ MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::push_back(_WIN32_FIND_DATAW& __x = {...}) Line: 388, Byte Offsets: 0xa0 C++ MyLib.dll!Foo(unsigned long int cbInput = 16, unsigned char* pInput = 0x01a45620, unsigned long int* pcbOutput = 0x1dabfbbc, unsigned char** ppOutput = 0x1dabfbc0, IRAPIStream* __formal = 0x00000000) Line: 66, Byte Offsets: 0x1e4 C++ If anybody can point out what I may be doing wrong, I would appreciate it. Thanks, PaulH

  • How can I query only __key__ on a Google App Engine PolyModel child?

    - by Gabriel
    So the situation is: I want to optimize my code a bit for counting records. So I have a parent Model class Base, a PolyModel class Entry, and a child class of Entry, Article. How would I query Article.key so I can reduce the query load but only get the Article count? My first thought was to use: q = db.GqlQuery("SELECT __key__ from Article where base = :1", i_base) but it turns out GqlQuery doesn't like that, because articles are actually stored in a table called Entry. Would it be possible to query the class attribute? Something like: q = db.GqlQuery("select __key__ from Entry where base = :1 and :2 in class", i_base, 'Article') Neither of these work. It turns out the answer is even easier, but I am going to finish this question because I looked everywhere for it: q = db.GqlQuery("select __key__ from Entry where base = :1 and class = :2", i_base, 'Article')

  • Optimizing BeautifulSoup (Python) code

    - by user283405
    I have code that uses the BeautifulSoup library for parsing, but it is very slow. The code is written in such a way that threads cannot be used. Can anyone help me with this? I am using BeautifulSoup for parsing and then saving into a DB. If I comment out the save statement, it still takes a long time, so there is no problem with the database.

        def parse(self, text):
            soup = BeautifulSoup(text)
            arr = soup.findAll('tbody')
            for i in range(0, len(arr)-1):
                data = Data()
                soup2 = BeautifulSoup(str(arr[i]))
                arr2 = soup2.findAll('td')
                c = 0
                for j in arr2:
                    if str(j).find("<a href=") > 0:
                        data.sourceURL = self.getAttributeValue(str(j), '<a href="')
                    else:
                        if c == 2:
                            data.Hits = j.renderContents()
                        #and few others...
                    c = c+1
                data.save()

    Any suggestions? Note: I already asked this question here but it was closed due to incomplete information.

  • Why is Javascript's Math.floor the slowest way to calculate floor in Javascript?

    - by z5h
    I'm generally not a fan of microbenchmarks. But this one has a very interesting result. http://ernestdelgado.com/archive/benchmark-on-the-floor/ It suggests that Math.floor is the SLOWEST way to calculate floor in Javascript. ~~n, n|n, n&n all being faster. This seems pretty shocking as I would expect that people implementing Javascript in today's modern browsers would be some pretty smart people. Does floor do something important that the other methods fail to do? Is there any reason to use it?

  • Splitting tables by field to optimize MySQL?

    - by AK
    Does splitting fields into multiple tables ever yield faster queries? Consider the following two scenarios:

        Table1
        -----------
        int PersonID
        text Value1
        float Value2

    or

        Table1
        -----------
        int PersonID
        text Value1

        Table2
        -----------
        int PersonID
        float Value2

    If Value1 and Value2 are always being displayed together, I imagine Table1 is always faster because the second schema would require two SELECT statements. But are there any situations where you would choose the second? What if the number of records were expected to be really large?
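    For concreteness, under the second schema the "always displayed together" case needs a join rather than two SELECTs (a sketch using the question's own tables), which is the overhead the first schema avoids:

        SELECT t1.PersonID, t1.Value1, t2.Value2
        FROM Table1 t1
        JOIN Table2 t2 ON t2.PersonID = t1.PersonID;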

  • Is there a faster TList implementation?

    - by dmauric.mp
    My application makes heavy use of TList, so I was wondering if there are any alternative implementations that are faster or optimized for particular use cases. I know of RtlVCLOptimize.pas 2.77, which has optimized implementations of several TList methods, but I'd like to know if there is anything else out there. I also don't require it to be a TList descendant; I just need the TList functionality, regardless of how it's implemented. It's entirely possible, given the rather basic functionality TList provides, that there is not much room for improvement, but I would still like to verify that, hence this question.
