Search Results

Search found 3422 results on 137 pages for 'optimization'.


  • Data access from a single table in SQL Server 2005 is too slow

    - by Muhammad Kashif Nadeem
    Following is the script of the table. Accessing data from this table is too slow.

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[Emails](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [datecreated] [datetime] NULL CONSTRAINT [DF_Emails_datecreated] DEFAULT (getdate()),
            [UID] [nvarchar](250) COLLATE Latin1_General_CI_AS NULL,
            [From] [nvarchar](100) COLLATE Latin1_General_CI_AS NULL,
            [To] [nvarchar](100) COLLATE Latin1_General_CI_AS NULL,
            [Subject] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
            [Body] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
            [HTML] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
            [AttachmentCount] [int] NULL,
            [Dated] [datetime] NULL
        ) ON [PRIMARY]

    The following query takes 50 seconds to fetch the data:

        SELECT id, datecreated, UID, [From], [To], Subject, AttachmentCount, Dated
        FROM Emails

    If I include Body and HTML in the select, the time is even worse. Indexes are: id (unique clustered), From (non-unique non-clustered), To (non-unique non-clustered). The table currently has 180,000+ records, and around 100,000 records may be added each month, so this will only get slower as time passes. Will splitting the data into two tables solve the problem? What other indexes should there be?

    Read the article

  • most efficient method of turning multiple 1D arrays into columns of a 2D array

    - by Ty W
    As I was writing a for loop earlier today, I thought that there must be a neater way of doing this... so I figured I'd ask. I looked briefly for a duplicate question but didn't see anything obvious.

    The problem: given N arrays of length M, turn them into an M-row by N-column 2D array.

    Example:

        $id = [1,5,2,8,6]
        $name = [a,b,c,d,e]
        $result = [[1,a], [5,b], [2,c], [8,d], [6,e]]

    My solution: pretty straightforward and probably not optimal, but it does work:

        <?php
        // $row is returned from a DB query
        // $row['<var>'] is a comma-separated string of values
        $categories = array();
        $ids = explode(",", $row['ids']);
        $names = explode(",", $row['names']);
        $titles = explode(",", $row['titles']);
        for ($i = 0; $i < count($ids); $i++) {
            $categories[] = array("id" => $ids[$i], "name" => $names[$i], "title" => $titles[$i]);
        }
        ?>

    Note: I didn't put the name => value bit in the spec, but it'd be awesome if there was some way to keep that as well.
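
    The transformation being asked for is essentially a transpose (zip). Below is a minimal sketch of the same idea in Python, using the id/name values from the question plus a hypothetical titles column; for what it's worth, PHP's array_map(null, $a, $b, ...) performs the same pairing.

        # Pivot N parallel lists of length M into M rows of N columns.
        ids = [1, 5, 2, 8, 6]
        names = ["a", "b", "c", "d", "e"]
        titles = ["t1", "t2", "t3", "t4", "t5"]  # hypothetical third column

        # zip() pairs up the i-th element of each list; dicts keep the
        # name => value association the poster wants to preserve.
        categories = [{"id": i, "name": n, "title": t}
                      for i, n, t in zip(ids, names, titles)]
        print(categories[0])  # {'id': 1, 'name': 'a', 'title': 't1'}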

    Read the article

  • Is this implementation truly tail-recursive?

    - by CFP
    Hello everyone! I've come up with the following code to compute, in a tail-recursive way, the result of an expression such as 3 4 * 1 + cos 8 * (aka 8*cos(1+(3*4))). The code is in OCaml. I'm using a list ref to emulate a stack.

        type token = Num of float | Fun of (float -> float) | Op of (float -> float -> float);;

        let pop l = let top = List.hd !l in l := List.tl !l; top;;
        let push x l = l := (x :: !l);;
        let empty l = (!l = []);;
        let pile = ref [];;

        let eval data =
          let stack = ref data in
          let rec _eval cont =
            match (pop stack) with
            | Num(n) -> cont n
            | Fun(f) -> _eval (fun x -> cont (f x))
            | Op(op) -> _eval (fun x -> cont (op x (_eval (fun y -> y))))
          in
          _eval (fun x -> x);;

        eval [Fun(fun x -> x ** 2.); Op(fun x y -> x +. y); Num(1.); Num(3.)];;

    I've used continuations to ensure tail recursion, but since my stack implements some sort of a tree, and therefore provides quite a bad interface to what should be handled as a disjoint union type, the call to my function to evaluate the left branch with an identity continuation somehow irks me a little. It works perfectly, yet I have the feeling that in the _eval (fun y->y) bit something wrong must be happening, since it doesn't seem that this call can replace the previous one in the stack structure... Am I misunderstanding something here? I mean, I understand that with only the first call to _eval there wouldn't be any problem optimizing the calls, but here it seems to me that evaluating _eval (fun y->y) will need its own stack frame, and therefore will fill the stack, possibly leading to an overflow... Thanks!

    Read the article

  • Faster way to split a string and count characters using R?

    - by chrisamiller
    I'm looking for a faster way to calculate GC content for DNA strings read in from a FASTA file. This boils down to taking a string and counting the number of times that the letter 'G' or 'C' appears. I also want to specify the range of characters to consider. I have a working function that is fairly slow, and it's causing a bottleneck in my code. It looks like this:

        ##
        ## count the number of GCs in the characters between start and stop
        ##
        gcCount <- function(line, st, sp) {
          chars = strsplit(as.character(line), "")[[1]]
          numGC = 0
          for (j in st:sp) {
            ## nested ifs faster than an OR (|) construction
            if (chars[[j]] == "g") {
              numGC <- numGC + 1
            } else if (chars[[j]] == "G") {
              numGC <- numGC + 1
            } else if (chars[[j]] == "c") {
              numGC <- numGC + 1
            } else if (chars[[j]] == "C") {
              numGC <- numGC + 1
            }
          }
          return(numGC)
        }

    Running Rprof gives me the following output:

        > a = "GCCCAAAATTTTCCGGatttaagcagacataaattcgagg"
        > Rprof(filename="Rprof.out")
        > for(i in 1:500000){ gcCount(a, 1, 40) }
        > Rprof(NULL)
        > summaryRprof(filename="Rprof.out")
        $by.self
                        self.time self.pct total.time total.pct
        "gcCount"           77.36     76.8     100.74     100.0
        "=="                18.30     18.2      18.30      18.2
        "strsplit"           3.58      3.6       3.64       3.6
        "+"                  1.14      1.1       1.14       1.1
        ":"                  0.30      0.3       0.30       0.3
        "as.logical"         0.04      0.0       0.04       0.0
        "as.character"       0.02      0.0       0.02       0.0

        $by.total
                        total.time total.pct self.time self.pct
        "gcCount"           100.74     100.0     77.36     76.8
        "=="                 18.30      18.2     18.30     18.2
        "strsplit"            3.64       3.6      3.58      3.6
        "+"                   1.14       1.1      1.14      1.1
        ":"                   0.30       0.3      0.30      0.3
        "as.logical"          0.04       0.0      0.04      0.0
        "as.character"        0.02       0.0      0.02      0.0

        $sampling.time
        [1] 100.74

    Any advice for making this code faster?
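
    The hot operation here, counting two letters inside a substring, can be phrased as "slice once, count twice" instead of branching per character. A minimal sketch of that idea in Python (the same slice-and-count shape is what R's vectorized string functions give you):

        def gc_count(line, st, sp):
            # Count G/C between 1-based positions st and sp, inclusive.
            window = line[st - 1:sp].upper()  # normalize case once, not per character
            return window.count("G") + window.count("C")

        print(gc_count("GCCCAAAATTTTCCGGatttaagcagacataaattcgagg", 1, 40))  # -> 16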

    Read the article

  • when is Java faster than C++ (or when is JIT faster than precompiled)?

    - by kostja
    I have heard that under certain circumstances, Java programs, or rather parts of Java programs, can execute faster than the "same" code in C++ (or other precompiled code) due to JIT optimizations. This is because the compiler can determine the scope of some variables, avoid some conditionals, and pull similar tricks at runtime. Could you give an example (or better, some examples) where this applies? And maybe outline the exact conditions under which the compiler is able to optimize the bytecode beyond what is possible with precompiled code? NOTE: This question is not about comparing Java to C++. It's about the possibilities of JIT compiling. Please no flaming. I am also not aware of any duplicates; please point them out if you are.

    Read the article

  • Does the order of columns in a query matter?

    - by James Simpson
    When selecting columns from a MySQL table, is performance affected by the order in which you select the columns, compared to their order in the table (not considering indexes that may cover the columns)? For example, you have a table with the columns uid, name, bday, and you have the following query:

        SELECT uid, name, bday FROM table

    Does MySQL see the following query any differently, and does it cause any sort of performance hit?

        SELECT uid, bday, name FROM table

    Read the article

  • Filtering with joined tables

    - by viraptor
    I'm trying to get some query performance improved, but the generated query does not look the way I expect it to. The results are retrieved using:

        query = session.query(SomeModel).
            options(joinedload_all('foo.bar')).
            options(joinedload_all('foo.baz')).
            options(joinedload('quux.other'))

    What I want to do is filter on the table joined via 'foo', but this way doesn't work:

        query = query.filter(FooModel.address == '1.2.3.4')

    It results in a clause like this attached to the query:

        WHERE foos.address = '1.2.3.4'

    which doesn't do the filtering in a proper way, since the generated joins attach the tables as foos_1 and foos_2. If I try that query manually but change the filtering clause to:

        WHERE foos_1.address = '1.2.3.4' AND foos_2.address = '1.2.3.4'

    it works fine. The question is of course: how can I achieve this with sqlalchemy itself?
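
    One direction commonly suggested for this situation, sketched below on the assumption that FooModel is the class behind the 'foo' relationship (join() and contains_eager() signatures vary between SQLAlchemy versions, so treat this as an outline rather than a drop-in fix): join the relationship explicitly through an alias, tell the eager loader to reuse that alias, and filter on the alias.

        from sqlalchemy.orm import aliased, contains_eager

        foo_alias = aliased(FooModel)

        query = (session.query(SomeModel)
                 .join(foo_alias, SomeModel.foo)  # an explicit join you can filter on
                 .options(contains_eager(SomeModel.foo, alias=foo_alias))
                 .filter(foo_alias.address == '1.2.3.4'))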

    Read the article

  • Oracle SQL query taking a day to return results over a dblink

    - by Suresh S
    I have the following Oracle SQL query that gives me a month-wise report between two dates; basically, for November I want the sum of values between 01-Nov and 30-Nov. The table being queried resides in another database and is accessed using a dblink. The DT column is of NUMBER type (for example 20101201). The query has been executing for a day and has not completed. Kindly suggest any optimisation I can pass to my DBA for the dblink, any tuning that can be done on the query, or a rewrite of the same.

        SELECT /*+ PARALLEL (A 8) */
               TO_CHAR(TRUNC(TRUNC(SYSDATE,'MM')-1,'MM'),'MONYYYY') "MONTH",
               TYPE AS "TYPE",
               COLUMN,
               COUNT (DISTINCT A) AS "A_COUNT",
               COUNT (COLUMN) AS NO_OF_COLS,
               SUM (DURATION) AS "SUM_DURATION",
               SUM (COST) AS "COST"
        FROM A@LN_PROD A
        WHERE DT >= TO_NUMBER(TO_CHAR(TRUNC(TRUNC(SYSDATE,'MM')-1,'MM'),'YYYYMMDD'))
          AND DT <  TO_NUMBER(TO_CHAR(TRUNC(TRUNC(SYSDATE,'MM'),'MM'),'YYYYMMDD'))
        GROUP BY TYPE, COLUMN

    Read the article

  • Optimizing an embedded SELECT query in MySQL

    - by Crazy Serb
    Ok, here's a query that I am running right now on a table that has 45,000 records and is 65MB in size... and it is just about to get bigger and bigger, so I have to think of future performance as well here:

        SELECT COUNT(payment_id) AS signup_count, SUM(amount) AS signup_amount
        FROM payments p
        WHERE tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
          AND completed > 0
          AND tm_completed IS NOT NULL
          AND member_id NOT IN
              (SELECT p2.member_id
               FROM payments p2
               WHERE p2.completed = 1
                 AND p2.tm_completed < '2009-05-01'
                 AND p2.tm_completed IS NOT NULL
               GROUP BY p2.member_id)

    And as you might or might not imagine, it chokes the MySQL server to a standstill... What it does is simply pull the number of new users who signed up: members who have at least one "completed" payment, whose tm_completed is not empty (it is only populated for completed payments), and (the embedded SELECT) who have never had a "completed" payment before, meaning they are new members. (The system does rebills and whatnot, and this is the only way to differentiate between an existing member who just got rebilled and a new member who got billed for the first time.) Now, is there any possible way to optimize this query to use fewer resources, and to stop bringing my MySQL server to its knees...? Am I missing any info to clarify this any further? Let me know...

    EDIT: Here are the indexes already on that table:

        KEYNAME       TYPE     CARDINALITY  FIELDS
        PRIMARY       PRIMARY  46757        payment_id
        member_id     INDEX    23378        member_id
        payer_id      INDEX    11689        payer_id
        coupon_id     INDEX    1            coupon_id
        tm_added      INDEX    46757        tm_added, product_id
        tm_completed  INDEX    46757        tm_completed, product_id

    Read the article

  • Script Speed vs Memory Usage

    - by Doug Neiner
    I am working on an image generation script in PHP and have gotten it working two ways. One way is slow but uses a limited amount of memory; the second is much faster, but uses 6x the memory. There is no leakage in either script (as far as I can tell). In a limited benchmark, here is how they performed:

        --------------------------------------------
        METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
        --------------------------------------------
        One    |     65.626 |     540,036 |    200
        Two    |     20.207 |   3,269,600 |    200
        --------------------------------------------

    And here is the average of the previous numbers (if you don't want to do your own math):

        --------------------------------------------
        METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
        --------------------------------------------
        One    |      0.328 |     540,036 |      1
        Two    |      0.101 |   3,269,600 |      1
        --------------------------------------------

    Which method should I use, and why? I anticipate this being used by a high volume of users, with each user making 10-20 requests to this script during a normal visit. I am leaning toward the faster method because, though it uses more memory, it runs in a third of the time and would reduce the number of concurrent requests.
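
    One way to weigh that trade-off is to estimate steady-state memory use as arrival rate x time per request x peak memory (Little's law). A rough sketch, where the 50 requests/second load is an assumed figure, not something from the benchmark:

        # Little's law: requests in flight = arrival rate * time in system.
        rate = 50.0  # requests per second -- an assumed load, for illustration

        for name, seconds, peak_bytes in [("One", 0.328, 540036),
                                          ("Two", 0.101, 3269600)]:
            in_flight = rate * seconds
            mb = in_flight * peak_bytes / 1e6
            print("Method %s: ~%.1f concurrent, ~%.0f MB at peak" % (name, in_flight, mb))

    Under that assumption, method One holds ~16 requests in flight for ~9 MB, while method Two holds ~5 for ~17 MB: the faster method still costs more memory overall, just less than the 6x per-request figure suggests.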

    Read the article

  • Splitting tables by field to optimize MySQL?

    - by AK
    Does splitting fields into multiple tables ever yield faster queries? Consider the following two scenarios:

        Table1
        -----------
        int   PersonID
        text  Value1
        float Value2

    or

        Table1
        -----------
        int   PersonID
        text  Value1

        Table2
        -----------
        int   PersonID
        float Value2

    If Value1 and Value2 are always being displayed together, I imagine Table1 is always faster because the second schema would require two SELECT statements. But are there any situations where you would choose the second? If the number of records were expected to be really large?

    Read the article

  • Optimizing Vector elements swaps using CUDA

    - by Orion Nebula
    Hi all, since I am new to CUDA I need your kind help. I have this long vector; for each group of 24 elements, I need to do the following: for the first 12 elements, the even-numbered elements are multiplied by -1; for the second 12 elements, the odd-numbered elements are multiplied by -1; then the following swap takes place. Graph: because I don't yet have enough points, I couldn't post the image, so here it is: http://www.freeimagehosting.net/image.php?e4b88fb666.png I have written this piece of code, and wonder if you could help me further optimize it to solve for divergence or bank conflicts:

        // subVector is a multiple of 24; Mds and Nds are shared memory
        __shared__ double Mds[subVector];
        __shared__ double Nds[subVector];
        int tx = threadIdx.x;
        int tx_mod = tx ^ 0x0001;
        int basex = __umul24(blockDim.x, blockIdx.x);
        Mds[tx] = M.elements[basex + tx];
        __syncthreads();

        // flip the signs
        if (tx < (tx/24)*24 + 12) {        // if < 12 within the group and even
            if ((tx & 0x0001) == 0)
                Mds[tx] = -Mds[tx];
        } else if (tx < (tx/24)*24 + 24) { // if >= 12 and < 24 within the group and odd
            if ((tx & 0x0001) == 1)
                Mds[tx] = -Mds[tx];
        }
        __syncthreads();

        if (tx < (tx/24)*24 + 6) {
            // first 6 elements: swap with the last six in the 24-element group (see graph)
            Nds[tx] = Mds[tx_mod + 18];
            Mds[tx_mod + 18] = Mds[tx];
            Mds[tx] = Nds[tx];
        } else if (tx < (tx/24)*24 + 12) {
            // second 6 elements: swap with the next adjacent group (see graph)
            Nds[tx] = Mds[tx_mod + 6];
            Mds[tx_mod + 6] = Mds[tx];
            Mds[tx] = Nds[tx];
        }
        __syncthreads();

    Thanks in advance.

    Read the article

  • Can anyone recommend a decent tool for optimising images other than Photoshop

    - by toomanyairmiles
    Can anyone recommend a decent tool for optimising images other than Adobe Photoshop, the GIMP, etc.? I'm looking to optimise images for the web, preferably online and free. Basically I have a client who can't install additional software on their work PC but needs to optimise photographs and other images for their website, and is presently uploading 1 or 2 MB files. On a personal level, I'm interested to see what other people are using...

    Read the article

  • Efficient implementation of natural logarithm (ln) and exponentiation

    - by Donotalo
    Basically, I'm looking for an implementation of the log() and exp() functions provided in the C library <math.h>. I'm working with 8-bit microcontrollers (OKI 411 and 431). I need to calculate the Mean Kinetic Temperature. The requirement is that we should be able to calculate MKT as fast as possible and with as little code memory as possible. The compiler comes with log() and exp() functions in <math.h>, but calling either function and linking with the library causes the code size to increase by 5 kilobytes, which will not fit in one of the micros we work with (OKI 411), because our code has already consumed ~12K of the available ~15K code memory. The implementation I'm looking for should not use any other C library functions (like pow(), sqrt() etc.). This is because all library functions are packed in one library, and even if one function is called, the linker will bring the whole 5K library into code memory.
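
    For a sense of the usual shape of such an implementation, here is a minimal sketch of a range-reduced natural log in Python (a real 8-bit port would use fixed-point arithmetic rather than floats, but the structure carries over): split the argument into mantissa and exponent, then approximate only the mantissa's log with a short series.

        import math

        LN2 = 0.6931471805599453

        def ln(x):
            # Range reduction: x = m * 2**e with m in [0.5, 1), so ln(x) = e*ln(2) + ln(m).
            m, e = math.frexp(x)
            # atanh series: ln(m) = 2*(z + z^3/3 + z^5/5 + z^7/7 + ...), z = (m-1)/(m+1).
            # z stays small on [0.5, 1), so four terms give roughly 1e-5 relative error.
            z = (m - 1.0) / (m + 1.0)
            z2 = z * z
            s = z * (1.0 + z2 * (1.0/3.0 + z2 * (1.0/5.0 + z2 * (1.0/7.0))))
            return e * LN2 + 2.0 * s

        print(ln(123.456), math.log(123.456))  # spot check against the library

    exp() is usually handled the same way in reverse: write the argument as n*ln(2) + r with |r| small, evaluate a short polynomial for e^r, and reassemble with a shift by n.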

    Read the article

  • What does ER_WARN_FIELD_RESOLVED mean?

    - by VolkerK
    When SHOW WARNINGS after an EXPLAIN EXTENDED shows a

        Note 1276 Field or reference 'test.foo.bar' of SELECT #2 was resolved in SELECT #1

    what exactly does that mean, and what impact does it have? In my case it prevents MySQL from using what seems to be a perfectly good index. But it's not about fixing that specific query (as it is an irrelevant test). I found http://dev.mysql.com/doc/refman/5.0/en/error-messages-server.html but

        Error: 1276 SQLSTATE: HY000 (ER_WARN_FIELD_RESOLVED)
        Message: Field or reference '%s%s%s%s%s' of SELECT #%d was resolved in SELECT #%d

    isn't much of an explanation.

    Read the article

  • MySQL won't use index for query?

    - by Jack Sleight
    I have this table:

        CREATE TABLE `point` (
            `id` INT(11) NOT NULL AUTO_INCREMENT,
            `siteid` INT(11) NOT NULL,
            `lft` INT(11) DEFAULT NULL,
            `rgt` INT(11) DEFAULT NULL,
            `level` SMALLINT(6) DEFAULT NULL,
            PRIMARY KEY (`id`),
            KEY `point_siteid_site_id` (`siteid`),
            CONSTRAINT `point_siteid_site_id` FOREIGN KEY (`siteid`) REFERENCES `site` (`id`) ON DELETE CASCADE
        ) ENGINE=INNODB AUTO_INCREMENT=35 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

    And this query:

        SELECT * FROM `point` WHERE siteid = 1;

    Which results in this EXPLAIN information:

        +----+-------------+-------+------+----------------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys        | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+----------------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | point | ALL  | point_siteid_site_id | NULL | NULL    | NULL |    6 | Using where |
        +----+-------------+-------+------+----------------------+------+---------+------+------+-------------+

    The question is, why isn't the query using the point_siteid_site_id index?

    Read the article

  • "Anagram solver" based on statistics rather than a dictionary/table?

    - by James M.
    My problem is conceptually similar to solving anagrams, except I can't just use a dictionary lookup. I am trying to find plausible words rather than real words. I have created an N-gram model (for now, N=2) based on the letters in a bunch of text. Now, given a random sequence of letters, I would like to permute them into the most likely sequence according to the transition probabilities. I thought I would need the Viterbi algorithm when I started this, but as I look deeper, the Viterbi algorithm optimizes a sequence of hidden random variables based on the observed output. I am trying to optimize the output sequence. Is there a well-known algorithm for this that I can read about? Or am I on the right track with Viterbi and I'm just not seeing how to apply it?
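
    For N=2 and a handful of letters, brute force over the permutations makes the objective concrete. A toy sketch in Python, where the bigram log-probabilities are made-up placeholders rather than a trained model:

        import itertools

        # Toy letter-bigram log-probabilities; a real model would be estimated from text.
        logp = {("t", "h"): -0.5, ("h", "e"): -0.4, ("e", "t"): -2.0,
                ("h", "t"): -3.0, ("e", "h"): -2.5, ("t", "e"): -1.0}

        def score(seq):
            # Sum of log transition probabilities; unseen bigrams get a smoothing floor.
            return sum(logp.get(pair, -5.0) for pair in zip(seq, seq[1:]))

        best = max(itertools.permutations("eht"), key=score)
        print("".join(best))  # -> "the" under this toy model

    Brute force is O(M!) in the number of letters, which is why a dynamic program over (letters used, last letter) states, Held-Karp style, is the natural thing to reach for on longer inputs.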

    Read the article

  • Is there a faster TList implementation?

    - by dmauric.mp
    My application makes heavy use of TList, so I was wondering if there are any alternative implementations that are faster or optimized for particular use cases. I know of RtlVCLOptimize.pas 2.77, which has optimized implementations of several TList methods, but I'd like to know if there is anything else out there. I also don't require it to be a TList descendant; I just need the TList functionality, regardless of how it's implemented. It's entirely possible, given the rather basic functionality TList provides, that there is not much room for improvement, but I would still like to verify that, hence this question.

    Read the article

  • Why is JavaScript's Math.floor the slowest way to calculate floor in JavaScript?

    - by z5h
    I'm generally not a fan of microbenchmarks, but this one has a very interesting result. http://ernestdelgado.com/archive/benchmark-on-the-floor/ It suggests that Math.floor is the SLOWEST way to calculate floor in JavaScript, with ~~n, n|n, and n&n all being faster. This seems pretty shocking, as I would expect the people implementing JavaScript in today's modern browsers to be some pretty smart people. Does floor do something important that the other methods fail to do? Is there any reason to use it?
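
    One concrete difference, shown below by analogy in Python (where int() truncates the way JavaScript's ~~n does): the bitwise tricks truncate toward zero rather than rounding toward negative infinity, so they disagree with Math.floor on negative inputs, and in JavaScript they also only behave within 32-bit integer range.

        import math

        for n in (3.7, -3.7):
            # math.floor rounds toward negative infinity; int() truncates toward zero.
            print(n, math.floor(n), int(n))

        #  3.7 ->  3,  3   (floor and truncation agree for positive numbers)
        # -3.7 -> -4, -3   (they diverge on negatives, as ~~n does from Math.floor)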

    Read the article

  • Node sets of the same type with repeated if-tests: how to make the code shorter?

    - by Kalinin
    How can I make this code more beautiful (compact)?

        <xsl:template match="part">
          <table class="part">
            <xsl:if test="name != ''">
              <tr><td>????????</td><td><xsl:value-of select="name"/></td></tr>
            </xsl:if>
            <xsl:if test="model != ''">
              <tr><td>??????</td><td><xsl:value-of select="model"/></td></tr>
            </xsl:if>
            <xsl:if test="year != ''">
              <tr><td>???</td><td><xsl:value-of select="year"/></td></tr>
            </xsl:if>
            <xsl:if test="glass_type != ''">
              <tr><td>???</td><td><xsl:value-of select="glass_type"/></td></tr>
            </xsl:if>
            <xsl:if test="scancode != ''">
              <tr><td>???????</td><td><xsl:value-of select="scancode"/></td></tr>
            </xsl:if>
            <xsl:if test="eurocode != ''">
              <tr><td>???????</td><td><xsl:value-of select="eurocode"/></td></tr>
            </xsl:if>
            <xsl:if test="coment != ''">
              <tr><td>???????????</td><td><xsl:value-of select="coment"/></td></tr>
            </xsl:if>
            <xsl:if test="glass_size != ''">
              <tr><td>??????</td><td><xsl:value-of select="glass_size"/></td></tr>
            </xsl:if>
            <xsl:if test="vendor != ''">
              <tr><td>?????????????</td><td><xsl:value-of select="vendor"/></td></tr>
            </xsl:if>
            <xsl:if test="trademark != ''">
              <tr><td>???????? ?????</td><td><xsl:value-of select="trademark"/></td></tr>
            </xsl:if>
            <xsl:if test="fprice != ''">
              <tr><td>????</td><td><xsl:value-of select="fprice"/></td></tr>
            </xsl:if>
          </table>
        </xsl:template>

    Update: I wrote:

        <my:translations xmlns:my="my:my">
          <w e="name" r="????????"/>
          <w e="model" r="??????"/>
          <w e="year" r="???"/>
          <w e="glass_type" r="???"/>
          <w e="scancode" r="???????"/>
          <w e="eurocode" r="???????"/>
          <w e="comment" r="???????????"/>
          <w e="glass_size" r="??????"/>
          <w e="vendor" r="?????????????"/>
          <w e="trademark" r="???????? ?????"/>
          <w e="fprice" r="????"/>
        </my:translations>

        <xsl:value-of select="//w/@r"/>

    and I get no result from this code. Is that normal? And how can I get at each w element?

    Read the article

  • How can I query only __key__ on a Google App Engine PolyModel child?

    - by Gabriel
    So the situation is: I want to optimize my code a bit for counting records. I have a parent Model class Base, a PolyModel class Entry, and a child class of Entry, Article. How would I query only the keys, so I can reduce the query load but still get the Article count? My first thought was to use:

        q = db.GqlQuery("SELECT __key__ FROM Article WHERE base = :1", i_base)

    but it turns out GqlQuery doesn't like that, because Article entities are actually stored in a table called Entry. Would it be possible to query the class attribute? Something like:

        q = db.GqlQuery("SELECT __key__ FROM Entry WHERE base = :1 AND :2 IN class", i_base, 'Article')

    Neither of these works. It turns out the answer is even easier, but I am going to finish this question because I looked everywhere for this:

        q = db.GqlQuery("SELECT __key__ FROM Entry WHERE base = :1 AND class = :2", i_base, 'Article')
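
    The same keys-only idea can also be spelled with the Query class instead of GQL. A sketch against the old google.appengine.ext.db API (and relying on PolyModel adding its own class filter when you query the subclass):

        from google.appengine.ext import db

        # keys_only=True fetches bare keys, skipping entity deserialization;
        # querying the Article subclass should apply the PolyModel class filter itself.
        q = db.Query(Article, keys_only=True).filter('base =', i_base)
        article_count = q.count()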

    Read the article

  • Why is doing a top(1) on an indexed column in SQL Server slow?

    - by reinier
    I'm puzzled by the following. I have a DB with around 10 million rows, and (among other indices) one column (campaignid_int) is indexed. Now I have 700k rows where the campaignid is indeed 3835. For all these rows, the connectionid is the same; I just want to find out this connectionid.

        use messaging_db;
        SELECT TOP (1) connectionid
        FROM outgoing_messages WITH (NOLOCK)
        WHERE (campaignid_int = 3835)

    Now this query takes approx 30 seconds to perform! I (with my small DB knowledge) would expect that it would take any of the rows and return me that connectionid. If I test this same query for a campaign which only has 1 entry, it goes really fast, so the index works. How would I tackle this, and why does this not work?

    Edit: the estimated execution plan is: select (0%) - top (0%) - clustered index scan (100%).

    Read the article

  • approximating log10[x^k0 + k1]

    - by Yale Zhang
    Greetings. I'm trying to approximate the function Log10[x^k0 + k1], where .21 < k0 < 21, 0 < k1 < ~2000, and x is an integer < 2^14. k0 and k1 are constant; for practical purposes, you can assume k0 = 2.12, k1 = 2660. The desired accuracy is 5*10^-4 relative error. This function is virtually identical to Log[x], except near 0, where it differs a lot.

    I already have a SIMD implementation that is ~1.15x faster than a simple lookup table, but would like to improve it if possible, which I think is very hard due to the lack of efficient instructions. My SIMD implementation uses 16-bit fixed-point arithmetic to evaluate a 3rd-degree polynomial (I use a least-squares fit). The polynomial uses different coefficients for different input ranges: there are 8 ranges, and range i spans (64)2^i to (64)2^(i + 1). The rationale behind this is that the derivatives of Log[x] drop rapidly with x, meaning a polynomial will fit it more accurately, since polynomials are an exact fit for functions that have a derivative of 0 beyond a certain order. The SIMD table lookups are done very efficiently with a single _mm_shuffle_epi8(). I use SSE's float-to-int conversion to get the exponent and significand used for the fixed-point approximation. I also software-pipelined the loop to get a ~1.25x speedup, so further code optimizations are probably unlikely.

    What I'm asking is whether there's a more efficient approximation at a higher level:

    1. Can this function be decomposed into functions with a limited domain, like log2((2^x) * significand) = x + log2(significand), hence eliminating the need to deal with different ranges (table lookups)? The main problem, I think, is that adding the k1 term kills all those nice log properties that we know and love, making it not possible. Or is it? (See the sketch after this list.)

    2. Iterative method? I don't think so, because the Newton method for log[x] is already a complicated expression.

    3. Exploiting locality of neighboring pixels? If the range of the 8 inputs falls in the same approximation range, then I can look up a single coefficient instead of looking up separate coefficients for each element. Thus, I can use this as a fast common case, and use a slower, general code path when it isn't. But for my data, the range needs to be ~2000 before this property holds 70% of the time, which doesn't seem to make this method competitive.

    Please give me some opinion, especially if you're an applied mathematician, even if you say it can't be done. Thanks.
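
    To make the first item concrete: the exponent/significand decomposition does still apply after the sum x^k0 + k1 is formed, even though it cannot be pushed through the addition itself. A float sketch in Python, with math.log2 standing in for the fixed-point significand polynomial:

        import math

        K0, K1 = 2.12, 2660.0          # the example constants from the question
        LOG10_2 = math.log10(2.0)

        def log10_xk(x):
            y = x ** K0 + K1           # the +k1 is what blocks simplifying further
            m, e = math.frexp(y)       # y = m * 2**e with m in [0.5, 1)
            # m now lives in one fixed interval, so a single polynomial fit of
            # log2(m) could replace the eight per-range coefficient tables.
            return (e + math.log2(m)) * LOG10_2

        print(log10_xk(1234), math.log10(1234 ** K0 + K1))  # agree to float precision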

    Read the article
