Search Results

Search found 22292 results on 892 pages for 'image optimization'.


  • C# Sorting Question

    - by betamoo
    I wonder which C# data structure I should use to sort efficiently. Is it List, an array, or something else? And why does the standard array [] not implement a sort method of its own? Thanks
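
    A note for readers: arrays do sort, just through the static Array.Sort helper rather than an instance method, and List<T>.Sort delegates to the same algorithm. A minimal C# sketch:

        using System;
        using System.Collections.Generic;

        class SortDemo
        {
            static void Main()
            {
                int[] numbers = { 5, 1, 4, 2, 3 };
                Array.Sort(numbers);            // in-place sort, O(n log n)

                var list = new List<int> { 5, 1, 4, 2, 3 };
                list.Sort();                    // wraps the same sort internally

                Console.WriteLine(string.Join(",", numbers));
            }
        }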

    Read the article

  • Enforcing a query in MySql to use a specific index

    - by Hossein
    Hi, I have a large table consisting of only 3 columns (id (INT), bookmarkID (INT), tagID (INT)). I have two BTREE indexes, one on each of the bookmarkID and tagID columns. The table has about 21 million records. I am trying to run this query:

        SELECT bookmarkID, COUNT(bookmarkID) AS count
        FROM bookmark_tag_map
        GROUP BY tagID, bookmarkID
        HAVING tagID IN (-----"tagIDList"-----) AND count >= N

    which takes ages to return the results. I read somewhere that if I make an index containing tagID and bookmarkID together, I will get a much faster result. After some time I created that index and tried the query again, but it seems the query is not using the new index; I ran EXPLAIN and saw that this is indeed the case. My question is: how can I force a query to use a specific index? Comments on other ways to make the query faster are also welcome. Thanks
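
    MySQL takes index hints in parentheses right after the table name, with FORCE INDEX as the strongest form. Independently of the hint, the tagID filter belongs in WHERE rather than HAVING, so rows are filtered before grouping and the composite index can serve them. A hedged sketch (the index name new_idx and the literal values are assumptions):

        -- assuming: CREATE INDEX new_idx ON bookmark_tag_map (tagID, bookmarkID);
        SELECT bookmarkID, COUNT(bookmarkID) AS count
        FROM bookmark_tag_map FORCE INDEX (new_idx)
        WHERE tagID IN (1, 2, 3)   -- stands in for "tagIDList"
        GROUP BY tagID, bookmarkID
        HAVING count >= 5;         -- stands in for N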

    Read the article

  • Create date efficiently

    - by Dave Jarvis
    On Pavel's page is the following function:

        CREATE OR REPLACE FUNCTION makedate(year int, dayofyear int) RETURNS date AS $$
            SELECT (date '0001-01-01' + ($1 - 1) * interval '1 year' + ($2 - 1) * interval '1 day')::date
        $$ LANGUAGE sql;

    I have the following code: makedate(y.year, 1). What is the fastest way in PostgreSQL to create a date for January 1st of a given year? Pavel's function would lead me to believe it is:

        date '0001-01-01' + y.year * interval '1 year' + interval '1 day';

    My thought would be more like:

        to_date( y.year||'-1-1', 'YYYY-MM-DD');

    I am looking for the fastest way using PostgreSQL 8.4. (The query that uses the date function can select between 100,000 and 1 million records, so it needs speed.) Thank you!
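
    For January 1st specifically, a plain cast avoids interval arithmetic entirely. A hedged sketch (make_date() does not exist yet on 8.4, and relative speeds should be confirmed with EXPLAIN ANALYZE; generate_series stands in for the real year source):

        -- concatenate and cast
        SELECT (y.year || '-01-01')::date FROM generate_series(1990, 2010) AS y(year);

        -- equivalent via to_date; a bare year pattern defaults to January 1st
        SELECT to_date(y.year::text, 'YYYY') FROM generate_series(1990, 2010) AS y(year);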

    Read the article

  • Making a DVD video with a still image and PCM 16bit audio with ffmpeg

    - by João
    I'm trying to make a small video with a still image and a sound file playing in the background, to pass to dvdauthor and create a DVD. The command I'm using is this:

        ffmpeg -loop_input -i image.jpg -qscale 2 -i song.flac -aspect 4:3 -target pal-dvd -acodec pcm_s16le -shortest output.mpg

    However, the resulting video file doesn't have sound at all (testing it in VLC Player). I don't know if I can't combine "-acodec pcm_s16le" with "-target pal-dvd" to override the latter, or if there is something else wrong with the command. If I try without the "-acodec pcm_s16le" parameter, the video and audio work; I can even create a DVD ISO with it. However, the audio stays AC3. I wanted the video to carry the lossless audio, not a compressed version. I suppose the DVD standard allows PCM audio in it, am I right?
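
    The DVD-Video spec does permit LPCM audio, so the goal is legitimate. One possibility, offered as an assumption rather than a confirmed fix: DVD LPCM is 48 kHz big-endian, so a little-endian pcm_s16le stream may mux but fail to play; pcm_s16be with an explicit 48 kHz rate is closer to what the format expects. A sketch:

        ffmpeg -loop_input -i image.jpg -qscale 2 -i song.flac -aspect 4:3 \
               -target pal-dvd -acodec pcm_s16be -ar 48000 -shortest output.mpg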

    Read the article

  • Is there a way to tell JVM to optimize my code before processing?

    - by Rogach
    I have a method which takes much time to execute the first time, but after several invocations it takes about 30 times less. So, to make my application respond to user interaction faster, I "warm up" this method (5 times) with some sample data when the application initializes. But this increases app start-up time. I read that JVMs can optimize and compile my Java code to native code, thus speeding things up. I wanted to know: is there some way to explicitly tell the JVM that I want this method compiled at application startup?
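
    HotSpot exposes knobs for this, though each trades startup time somewhere else; a hedged sketch of the usual flags (exact behavior varies by JVM version, and neither targets a single method):

        # compile hot methods after far fewer invocations than the default
        java -XX:CompileThreshold=100 -jar app.jar

        # force JIT compilation of every method on first use (slower startup overall)
        java -Xcomp -jar app.jar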

    Read the article

  • what webserver / mod / technique should I use to serve everything from memory?

    - by reinier
    I have lots of lookup tables from which I'll generate my web responses. I think IIS with ASP.NET enables me to keep static lookup tables in memory, which I can use to serve up responses very fast. Are there also non-.NET solutions which can do the same? I've looked at FastCGI, but I think this starts X processes, each of which can handle Y requests; the processes are by definition shielded from each other, though. I could configure FastCGI to use just 1 process, but does this have scalability implications? Anything using PHP or any other interpreted language won't fly because it is also CGI- or FastCGI-bound, right? I understand memcache could be an option, though this would require another (local) socket connection, which I'd rather avoid since everything in memory would be much faster. The solution can work under Windows or Unix; it doesn't matter too much. The only thing which matters is that there will be a lot of requests (100/sec now, growing to 500/sec in a year), and I want to reduce the number of webservers needed to process them. The current solution is done using PHP and memcache (and the occasional hit to the SQL server backend). Although it is fast (for PHP anyway), Apache has real problems once 50/sec is passed. I've put a bounty on this question since I've not seen enough responses to make a wise choice. At the moment I'm considering either ASP.NET or FastCGI with C(++).

    Read the article

  • How to simplify this logic/code?

    - by Tattat
    I want to write an app that accepts user commands. The user command is used in this format: command -parameter. For example, the app can have "Copy", "Paste", and "Delete" commands. I am thinking the program should work like this:

        public static void main(String args[]) {
            if (args[0].equalsIgnoreCase("COPY")) {
                //handle the copy command
            } else if (args[0].equalsIgnoreCase("PASTE")) {
                //handle the paste command
            } /** code skipped **/
        }

    So, it works, but I think it will become more and more complex as I add more commands to the program; it is also difficult to read. Any ideas to simplify the logic?
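
    The usual cure is a command map: register each handler once, then dispatch by lookup instead of an if/else chain, so adding a command becomes one entry rather than a new branch. A hedged Java sketch:

        import java.util.HashMap;
        import java.util.Map;

        public class Dispatcher {
            interface Command { void run(String[] args); }

            private static final Map<String, Command> COMMANDS = new HashMap<>();
            static {
                COMMANDS.put("copy",   args -> System.out.println("copying..."));
                COMMANDS.put("paste",  args -> System.out.println("pasting..."));
                COMMANDS.put("delete", args -> System.out.println("deleting..."));
            }

            public static void main(String[] args) {
                Command cmd = COMMANDS.get(args[0].toLowerCase());
                if (cmd != null) cmd.run(args);
                else System.err.println("Unknown command: " + args[0]);
            }
        }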

    Read the article

  • SQL query task: a better solution?

    - by Sirius Lampochkin
    There is a table of currency rates in MS SQL Server 2005:

        ID | CURR | RATE | DATE
         1 | USD  |  30  | 01.10.2010
         3 | GBP  |  45  | 07.10.2010
         5 | USD  |  31  | 08.10.2010
         7 | GBP  |  46  | 09.10.2010
         9 | USD  |  32  | 12.10.2010
        11 | GBP  |  48  | 03.10.2010

    Rates are updated in real time, and there are more than 1 billion rows in the table. I need to write a SQL query which will provide the latest rate for each currency. My solution is:

        SELECT c.[id], c.[curr], c.[rate], c.[date]
        FROM [curr_rate] c,
             (SELECT curr, MAX(date) AS rate_date FROM [curr_rate] GROUP BY curr) t
        WHERE c.date = t.rate_date AND c.curr = t.curr
        ORDER BY c.[curr] ASC

    Is it possible to write a query without subqueries and joins against derived tables?
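
    One derived-table-free alternative is a self-anti-join: keep a row only when no newer row exists for the same currency. A hedged sketch (it still joins, just against the base table, and it assumes (curr, date) pairs are unique per currency):

        SELECT c.id, c.curr, c.rate, c.date
        FROM curr_rate c
        LEFT JOIN curr_rate newer
               ON newer.curr = c.curr AND newer.date > c.date
        WHERE newer.id IS NULL
        ORDER BY c.curr ASC;

    At a billion rows, either form lives or dies by an index on (curr, date).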

    Read the article

  • Detecting an image 404 in JavaScript.

    - by xal
    After a user uploads a file we have to do some additional processing with the images, such as resizing and uploading to S3. This can take up to 10 extra seconds. Obviously we do this in a background job. However, we want to show the user the result page immediately and simply show spinners in place until the images arrive at their permanent home on S3. I'm looking for a way to detect, in a cross-browser way, that a certain image failed to load correctly (404). If that happens, we want to use JS to show a spinner in its place and reload the image every few seconds until it can be successfully loaded from S3.
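
    Browsers don't expose the HTTP status code for <img> loads, but the error event fires on any failed load, which is enough here. A hedged JavaScript sketch (the spinner image and retry interval are assumptions):

        function loadWithRetry(img, src, delayMs) {
            img.onerror = function () {
                img.src = 'spinner.gif';        // assumed placeholder graphic
                setTimeout(function () {
                    // cache-busting query string forces a fresh request
                    img.src = src + '?retry=' + new Date().getTime();
                }, delayMs);
            };
            img.src = src;
        }

        loadWithRetry(document.getElementById('photo'), 'http://example-bucket.s3.amazonaws.com/image.jpg', 3000);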

    Read the article

  • MySQL MyISAM table performance... painfully, painfully slow

    - by Salman A
    I've got a table structure that can be summarized as follows:

        pagegroup
            * pagegroupid
            * name
        has 3600 rows

        page
            * pageid
            * pagegroupid
            * data
        references pagegroup; has 10000 rows; can have anything between 1-700 rows per pagegroup; the data column is of type mediumtext and contains 100k - 200k bytes of data per row

        userdata
            * userdataid
            * pageid
            * column1
            * column2
            * column9
        references page; has about 300,000 rows; can have about 1-50 rows per page

    The above structure is pretty straightforward; the problem is that a join from userdata to pagegroup is terribly, terribly slow even though I have indexed all the columns that should be indexed. The time needed to run such a join (userdata INNER JOIN page INNER JOIN pagegroup) exceeds 3 minutes. This is terribly slow considering that I am not selecting the data column at all. Example of the query that takes too long:

        SELECT userdata.column1, pagegroup.name
        FROM userdata
        INNER JOIN page USING( pageid )
        INNER JOIN pagegroup USING( pagegroupid )

    Please help by explaining why it takes so long and what I can do to make it faster.

    Edit #1: EXPLAIN returns the following:

        id  select_type  table      type    possible_keys        key      key_len  ref                         rows    Extra
        1   SIMPLE       userdata   ALL     pageid                                                             372420
        1   SIMPLE       page       eq_ref  PRIMARY,pagegroupid  PRIMARY  4        topsecret.userdata.pageid   1
        1   SIMPLE       pagegroup  eq_ref  PRIMARY              PRIMARY  4        topsecret.page.pagegroupid  1

    Edit #2:

        SELECT u.field2, p.pageid
        FROM userdata u
        INNER JOIN page p ON u.pageid = p.pageid;
        /* 0.07 sec execution, 6.05 sec fetch */

        id  select_type  table  type    possible_keys  key      key_len  ref                 rows    Extra
        1   SIMPLE       u      ALL     pageid                           372420
        1   SIMPLE       p      eq_ref  PRIMARY        PRIMARY  4        topsecret.u.pageid  1       Using index

        SELECT p.pageid, g.pagegroupid
        FROM page p
        INNER JOIN pagegroup g ON p.pagegroupid = g.pagegroupid;
        /* 9.37 sec execution, 60.0 sec fetch */

        id  select_type  table  type   possible_keys  key          key_len  ref                      rows  Extra
        1   SIMPLE       g      index  PRIMARY        PRIMARY      4                                 3646  Using index
        1   SIMPLE       p      ref    pagegroupid    pagegroupid  5        topsecret.g.pagegroupid  3     Using where

    Moral of the story: keep medium/long text columns in a separate table if you run into performance problems such as this one.
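
    For readers hitting the same wall: with MyISAM, the wide mediumtext column inflates the data file, so scans drag far more bytes through the buffers than the selected columns need. A hedged sketch of the split the moral above describes (the new table name is an assumption):

        -- move the blob out of the hot table
        CREATE TABLE page_content (
            pageid INT NOT NULL PRIMARY KEY,
            data   MEDIUMTEXT
        );

        INSERT INTO page_content (pageid, data)
        SELECT pageid, data FROM page;

        ALTER TABLE page DROP COLUMN data;

    Queries joining userdata, page, and pagegroup then touch only short rows, and the blob is fetched by primary key only when actually needed.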

    Read the article

  • Why is MySQL with InnoDB doing a table scan when the key exists and choosing to examine 70 times more rows?

    - by andysk
    Hello, I'm troubleshooting a query performance problem. Here's an expected query plan from EXPLAIN:

        mysql> explain select * from table1 where tdcol between '2010-04-13:00:00' and '2010-04-14 03:16';
        +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
        | id | select_type | table  | type  | possible_keys | key   | key_len | ref  | rows    | Extra       |
        +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
        |  1 | SIMPLE      | table1 | range | tdcol         | tdcol | 8       | NULL | 5437848 | Using where |
        +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
        1 row in set (0.00 sec)

    That makes sense, since the index named tdcol (KEY tdcol (tdcol)) is used, and about 5M rows should be selected by this query. However, if I query for just one more minute of data, we get this query plan:

        mysql> explain select * from table1 where tdcol between '2010-04-13 00:00' and '2010-04-14 03:17';
        +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
        | id | select_type | table  | type | possible_keys | key  | key_len | ref  | rows      | Extra       |
        +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
        |  1 | SIMPLE      | table1 | ALL  | tdcol         | NULL | NULL    | NULL | 381601300 | Using where |
        +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
        1 row in set (0.00 sec)

    The optimizer believes that the scan will be better, but it's over 70x more rows to examine, so I have a hard time believing that the table scan is better. Also, the 'USE KEY tdcol' syntax does not change the query plan. Thanks in advance for any help, and I'm more than happy to provide more info / answer questions.
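
    On the hint syntax: MySQL spells it USE INDEX / FORCE INDEX, in parentheses after the table name; FORCE INDEX is the stronger form, making the optimizer treat a full scan as prohibitively expensive. A hedged sketch:

        SELECT *
        FROM table1 FORCE INDEX (tdcol)
        WHERE tdcol BETWEEN '2010-04-13 00:00' AND '2010-04-14 03:17';

    If the range genuinely covers a large fraction of the table, though, the optimizer's scan may honestly win, since each secondary-index hit costs a random read into the row data.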

    Read the article

  • Simple MySQL Query taking 45 seconds (Gets a record and its "latest" child record)

    - by Brian Lacy
    I have a query which gets a customer and the latest transaction for that customer. Currently this query takes over 45 seconds for 1000 records. This is especially problematic because the script itself may need to be executed as frequently as once per minute! I believe using subqueries may be the answer, but I've had trouble constructing one that actually gives me the results I need.

        SELECT customer.CustID, customer.leadid, customer.Email, customer.FirstName, customer.LastName,
               transaction.*, MAX(transaction.TransDate) AS LastTransDate
        FROM customer
        INNER JOIN transaction ON transaction.CustID = customer.CustID
        WHERE customer.Email = '".$email."'
        GROUP BY customer.CustID
        ORDER BY LastTransDate
        LIMIT 1000

    I really need to get this figured out ASAP. Any help would be greatly appreciated!
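
    This is the classic greatest-n-per-group shape: compute MAX(TransDate) per customer in a derived table, then join back to pick the matching transaction row. Note that the original's GROUP BY returns an arbitrary transaction row alongside the MAX; the sketch below pins it to the latest. Hedged, with an index on transaction (CustID, TransDate) assumed:

        SELECT c.CustID, c.leadid, c.Email, c.FirstName, c.LastName, t.*
        FROM customer c
        INNER JOIN (
            SELECT CustID, MAX(TransDate) AS LastTransDate
            FROM transaction
            GROUP BY CustID
        ) last ON last.CustID = c.CustID
        INNER JOIN transaction t
                ON t.CustID = last.CustID AND t.TransDate = last.LastTransDate
        WHERE c.Email = 'user@example.com'
        ORDER BY last.LastTransDate
        LIMIT 1000;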

    Read the article

  • Optimizing WordWrap Algorithm

    - by Milo
    I have a word-wrap algorithm that basically generates lines of text that fit the width of the text. Unfortunately, it gets slow when I add too much text. I was wondering if I overlooked any major optimizations that could be made. Also, if anyone has a better design that would still allow lines to be stored as strings (or pointers to strings), I'd be open to rewriting the algorithm. Thanks

        void AguiTextBox::makeLinesFromWordWrap()
        {
            textRows.clear();
            textRows.push_back("");
            std::string curStr;
            std::string curWord;

            int curWordWidth = 0;
            int curLetterWidth = 0;
            int curLineWidth = 0;

            bool isVscroll = isVScrollNeeded();
            int voffset = 0;
            if(isVscroll)
            {
                voffset = pChildVScroll->getWidth();
            }
            int AdjWidthMinusVoffset = getAdjustedWidth() - voffset;

            int len = getTextLength();
            int bytesSkipped = 0;
            int letterLength = 0;
            size_t ind = 0;

            for(int i = 0; i < len; ++i)
            {
                //get the unicode character
                letterLength = _unicodeFunctions.bringToNextUnichar(ind, getText());
                curStr = getText().substr(bytesSkipped, letterLength);
                bytesSkipped += letterLength;

                curLetterWidth = getFont().getTextWidth(curStr);

                //push a new line
                if(curStr[0] == '\n')
                {
                    textRows.back() += curWord;
                    curWord = "";
                    curLetterWidth = 0;
                    curWordWidth = 0;
                    curLineWidth = 0;
                    textRows.push_back("");
                    continue;
                }

                //ensure word is not longer than the width
                if(curWordWidth + curLetterWidth >= AdjWidthMinusVoffset && curWord.length() >= 1)
                {
                    textRows.back() += curWord;
                    textRows.push_back("");
                    curWord = "";
                    curWordWidth = 0;
                    curLineWidth = 0;
                }

                //add letter to word
                curWord += curStr;
                curWordWidth += curLetterWidth;

                //if we need a Vscroll bar start over
                if(!isVscroll && isVScrollNeeded())
                {
                    isVscroll = true;
                    voffset = pChildVScroll->getWidth();
                    AdjWidthMinusVoffset = getAdjustedWidth() - voffset;
                    i = -1;
                    curWord = "";
                    curStr = "";
                    textRows.clear();
                    textRows.push_back("");
                    ind = 0;
                    curWordWidth = 0;
                    curLetterWidth = 0;
                    curLineWidth = 0;
                    bytesSkipped = 0;
                    continue;
                }

                if(curLineWidth + curWordWidth >= AdjWidthMinusVoffset && textRows.back().length() >= 1)
                {
                    textRows.push_back("");
                    curLineWidth = 0;
                }

                if(curStr[0] == ' ' || curStr[0] == '-')
                {
                    textRows.back() += curWord;
                    curLineWidth += curWordWidth;
                    curWord = "";
                    curWordWidth = 0;
                }
            }

            if(curWord != "")
            {
                textRows.back() += curWord;
            }

            updateWidestLine();
        }
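
    One targeted micro-optimization, assuming getFont().getTextWidth() dominates the profile: memoize per-character widths, since the loop re-measures the same glyphs over and over. A hedged C++ sketch (the cache would live as a class member and must be invalidated when the font changes):

        #include <map>
        #include <string>

        // per-glyph width cache; keyed by the UTF-8 bytes of one character
        std::map<std::string, int> widthCache;

        int AguiTextBox::cachedTextWidth(const std::string& ch)
        {
            std::map<std::string, int>::iterator it = widthCache.find(ch);
            if (it != widthCache.end())
                return it->second;                 // hit: no font query
            int w = getFont().getTextWidth(ch);    // miss: measure once
            widthCache[ch] = w;
            return w;
        }

    The other easy win is avoiding the full restart when the vertical scrollbar appears, e.g. by always reserving the scrollbar width up front.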

    Read the article

  • How to optimize my PostgreSQL DB for prefix search?

    - by asmaier
    I have a table called "nodes" with roughly 1.7 million rows in my PostgreSQL db:

        =# \d nodes
              Table "public.nodes"
         Column |          Type          | Modifiers
        --------+------------------------+-----------
         id     | integer                | not null
         title  | character varying(256) |
         score  | double precision       |
        Indexes:
            "nodes_pkey" PRIMARY KEY, btree (id)

    I want to use information from that table for autocompletion of a search field, showing the user a list of the ten highest-scoring titles matching his input. So I used this query (here searching for all titles starting with "s"):

        =# explain analyze select title,score from nodes where title ilike 's%' order by score desc;
         Sort  (cost=64177.92..64581.38 rows=161385 width=25) (actual time=4930.334..5047.321 rows=161264 loops=1)
           Sort Key: score
           Sort Method: external merge  Disk: 5712kB
           ->  Seq Scan on nodes  (cost=0.00..46630.50 rows=161385 width=25) (actual time=0.611..4464.413 rows=161264 loops=1)
                 Filter: ((title)::text ~~* 's%'::text)
         Total runtime: 5260.791 ms
        (6 rows)

    This was much too slow to use for autocomplete. With some information from Using PostgreSQL in Web 2.0 Applications I was able to improve that with a special index:

        =# create index title_idx on nodes using btree(lower(title) text_pattern_ops);
        =# explain analyze select title,score from nodes where lower(title) like lower('s%') order by score desc limit 10;
         Limit  (cost=18122.41..18122.43 rows=10 width=25) (actual time=1324.703..1324.708 rows=10 loops=1)
           ->  Sort  (cost=18122.41..18144.60 rows=8876 width=25) (actual time=1324.700..1324.702 rows=10 loops=1)
                 Sort Key: score
                 Sort Method: top-N heapsort  Memory: 17kB
                 ->  Bitmap Heap Scan on nodes  (cost=243.53..17930.60 rows=8876 width=25) (actual time=96.124..1227.203 rows=161264 loops=1)
                       Filter: (lower((title)::text) ~~ 's%'::text)
                       ->  Bitmap Index Scan on title_idx  (cost=0.00..241.31 rows=8876 width=0) (actual time=90.059..90.059 rows=161264 loops=1)
                             Index Cond: ((lower((title)::text) ~>=~ 's'::text) AND (lower((title)::text) ~<~ 't'::text))
         Total runtime: 1325.085 ms
        (9 rows)

    So this gave me a speedup of a factor of 4. But can this be further improved? What if I want to use '%s%' instead of 's%'? Do I have any chance of getting decent performance with PostgreSQL in that case, too? Or should I rather try a different solution (Lucene? Sphinx?) for implementing my autocomplete feature?
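
    For the '%s%' case a btree cannot help, but the pg_trgm contrib module builds trigram indexes. A hedged sketch (on 8.4, pg_trgm is loaded from the contrib SQL file and accelerates similarity searches; trigram index support for LIKE patterns only arrives in later releases):

        -- after loading contrib/pg_trgm; the GIN opclass is available in 8.4
        CREATE INDEX title_trgm_idx ON nodes USING gin (title gin_trgm_ops);

        -- trigram similarity search, served by the index
        SELECT title, score
        FROM nodes
        WHERE title % 'searchterm'
        ORDER BY score DESC
        LIMIT 10;

    If fuzzy infix matching is a hard requirement on 8.4, a dedicated engine such as Lucene or Sphinx remains a reasonable fallback.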

    Read the article

  • Optimize GROUP BY & ORDER BY query

    - by Jan Hancic
    I have a web page where users upload & watch videos. Last week I asked what the best way is to track video views so that I could display the most viewed videos this week (videos from all dates). Now I need some help optimizing the query with which I get the videos from the database. The relevant tables are these:

        video (~239371 rows)
            VID (int), UID (int), title (varchar), status (enum), type (varchar),
            is_duplicate (enum), is_adult (enum), channel_id (tinyint)

        signup (~115440 rows)
            UID (int), username (varchar)

        videos_views (~359202 rows after 6 days of collecting data, so this table will grow rapidly)
            videos_id (int), views_date (date), num_of_views (int)

    The table video holds the videos, signup holds the users, and videos_views holds data about video views (each video can have one row per day in that table). I have this query that does the trick, but it takes ~10s to execute, and I imagine this will only get worse over time as the videos_views table grows in size.

        SELECT v.VID, v.title, v.vkey, v.duration, v.addtime, v.UID, v.viewnumber,
               v.com_num, v.rate, v.THB, s.username, SUM(vvt.num_of_views) AS tmp_num
        FROM video v
        LEFT JOIN videos_views vvt ON v.VID = vvt.videos_id
        LEFT JOIN signup s ON v.UID = s.UID
        WHERE v.status = 'Converted' AND v.type = 'public'
          AND v.is_duplicate = '0' AND v.is_adult = '0' AND v.channel_id <> 10
          AND vvt.views_date >= '2001-05-11'
        GROUP BY vvt.videos_id
        ORDER BY tmp_num DESC
        LIMIT 8

    And here is a screenshot of the EXPLAIN result: So, how can I optimize this?
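
    Two observations, hedged: the WHERE condition on vvt.views_date turns that LEFT JOIN into an effective INNER JOIN (NULL dates can never satisfy it), and the date filter plus per-video grouping want a composite index. A sketch (the index name is an assumption):

        -- serve the date filter and the per-video grouping from one index
        ALTER TABLE videos_views ADD INDEX idx_date_video (views_date, videos_id);

        SELECT v.VID, v.title, s.username, SUM(vvt.num_of_views) AS tmp_num
        FROM videos_views vvt
        INNER JOIN video v ON v.VID = vvt.videos_id
        LEFT JOIN signup s ON v.UID = s.UID
        WHERE vvt.views_date >= '2001-05-11'
          AND v.status = 'Converted' AND v.type = 'public'
          AND v.is_duplicate = '0' AND v.is_adult = '0' AND v.channel_id <> 10
        GROUP BY vvt.videos_id
        ORDER BY tmp_num DESC
        LIMIT 8;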

    Read the article

  • PHP Hashtable array optimisation.

    - by hiprakhar
    I made a PHP app which was taking about ~0.0070 sec to execute. Then I added a hashtable array with about 2000 values, and suddenly the execution time went up to ~0.0700 secs, almost 10 times the previous value. I tried commenting out the part where I search inside the hashtable array (but the array was still left defined); still, the execution time remains about ~0.0500 secs. The array is something like:

        $subjectinfo = array(
            'TPT753' => 'Industrial Training',
            'TPT801' => 'High Polymeric Engineering',
            'TPT802' => 'Corrosion Engineering',
            'TPT803' => 'Decorative ,Industrial And High Performance Coatings',
            'TPT851' => 'Project');

    Is there any way to optimize this part? I cannot use a database, as I am running this app on Google App Engine, which still does not support the JDO database for PHP. Some more code from the app:

        function getsubjectinfo($name) {
            $subjectinfo = array(
                'TPT753' => 'Industrial Training',
                'TPT801' => 'High Polymeric Engineering',
                'TPT802' => 'Corrosion Engineering',
                'TPT803' => 'Decorative ,Industrial And High Performance Coatings',
                'TPT851' => 'Project');
            $name = str_replace("-", "", $name);
            $name = str_replace(" ", "", $name);
            if (isset($subjectinfo["$name"]))
                return "(".$subjectinfo["$name"].")";
            else
                return "";
        }

    Then I am using the following statement 2-3 times in the app:

        echo $key." ".$this->getsubjectinfo($key)
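
    A likely culprit, hedged: the 2000-entry array literal is rebuilt on every call to getsubjectinfo(). Declaring it static makes PHP initialize it once per request, after which calls just read it. A sketch:

        function getsubjectinfo($name) {
            // built on the first call only, reused afterwards
            static $subjectinfo = array(
                'TPT753' => 'Industrial Training',
                'TPT801' => 'High Polymeric Engineering',
                // ... remaining entries ...
            );
            $name = str_replace(array('-', ' '), '', $name);
            return isset($subjectinfo[$name]) ? '(' . $subjectinfo[$name] . ')' : '';
        }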

    Read the article

  • Image decoding problem in Android

    - by achie
    Some of the images are not being displayed in the web browser in Android, while they work fine on all other machines and mobile devices. This is an example of one of those images: http://s3.amazonaws.com/itriage/logos/19/iphone_list.jpg?1261515055 So I tried to pull the image from code to see if it works. This is what I did:

        URL url = new URL(address);
        InputStream is = (InputStream) url.getContent();
        Drawable d = Drawable.createFromStream(is, "src");

    It works for most images, but for some, including this one, it gives this error:

        D/skia (28314): --- decoder->decode returned false

    Why is this happening, and how can I prevent it? I saw an example on the developers forum, but that's when we are accessing the image directly; what I want is for the browser to handle it. So how do I encode the images on my server so that Android devices recognise them correctly?
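
    A guess, not a confirmed diagnosis: skia's stock JPEG decoder rejects some encodings that desktop browsers accept, CMYK color and certain progressive JPEGs being the usual suspects. If that is the case here, re-encoding uploads server-side to baseline sRGB sidesteps it, e.g. with ImageMagick:

        convert iphone_list.jpg -colorspace sRGB -interlace none -quality 90 iphone_list_fixed.jpg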

    Read the article

  • How fast can you make linear search?

    - by Mark Probst
    I'm looking to optimize this linear search:

        static int linear (const int *arr, int n, int key) {
            int i = 0;
            while (i < n) {
                if (arr [i] >= key)
                    break;
                ++i;
            }
            return i;
        }

    The array is sorted, and the function is supposed to return the index of the first element that is greater than or equal to the key. The array is not large (below 200 elements) and will be prepared once for a large number of searches. Array elements after the n-th can, if necessary, be initialized to something appropriate, if that speeds up the search. No, binary search is not allowed, only linear search.
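
    Since elements past the n-th may be initialized, the classic move is a sentinel: park a value >= any possible key just past the data, and the bounds check vanishes from the loop, halving the comparisons per element. A hedged C sketch (assumes arr[n] is writable and keys never exceed INT_MAX):

        #include <limits.h>

        static int linear_sentinel (int *arr, int n, int key) {
            arr [n] = INT_MAX;        /* sentinel guarantees termination */
            int i = 0;
            while (arr [i] < key)     /* single comparison per element */
                ++i;
            return i;                 /* i == n means no element >= key */
        }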

    Read the article

  • Optimizing Haskell code

    - by Masse
    I'm trying to learn Haskell, and after an article on reddit about Markov text chains, I decided to implement Markov text generation first in Python and now in Haskell. However, I noticed that my Python implementation is way faster than the Haskell version, even though Haskell is compiled to native code. I am wondering what I should do to make the Haskell code run faster. For now I believe it's so much slower because of using Data.Map instead of hashmaps, but I'm not sure. I'll post both the Python and the Haskell code. With the same data, Python takes around 3 seconds and Haskell is closer to 16 seconds. It goes without saying that I'll take any constructive criticism :).

        import random
        import re
        import cPickle

        class Markov:
            def __init__(self, filenames):
                self.filenames = filenames
                self.cache = self.train(self.readfiles())
                picklefd = open("dump", "w")
                cPickle.dump(self.cache, picklefd)
                picklefd.close()

            def train(self, text):
                splitted = re.findall(r"(\w+|[.!?',])", text)
                print "Total of %d splitted words" % (len(splitted))
                cache = {}
                for i in xrange(len(splitted)-2):
                    pair = (splitted[i], splitted[i+1])
                    followup = splitted[i+2]
                    if pair in cache:
                        if followup not in cache[pair]:
                            cache[pair][followup] = 1
                        else:
                            cache[pair][followup] += 1
                    else:
                        cache[pair] = {followup: 1}
                return cache

            def readfiles(self):
                data = ""
                for filename in self.filenames:
                    fd = open(filename)
                    data += fd.read()
                    fd.close()
                return data

            def concat(self, words):
                sentence = ""
                for word in words:
                    if word in "'\",?!:;.":
                        sentence = sentence[0:-1] + word + " "
                    else:
                        sentence += word + " "
                return sentence

            def pickword(self, words):
                temp = [(k, words[k]) for k in words]
                results = []
                for (word, n) in temp:
                    results.append(word)
                    if n > 1:
                        for i in xrange(n-1):
                            results.append(word)
                return random.choice(results)

            def gentext(self, words):
                allwords = [k for k in self.cache]
                (first, second) = random.choice(filter(lambda (a,b): a.istitle(), [k for k in self.cache]))
                sentence = [first, second]
                while len(sentence) < words or sentence[-1] is not ".":
                    current = (sentence[-2], sentence[-1])
                    if current in self.cache:
                        followup = self.pickword(self.cache[current])
                        sentence.append(followup)
                    else:
                        print "Wasn't able to. Breaking"
                        break
                print self.concat(sentence)

        Markov(["76.txt"])

    And the Haskell:

        module Markov
        ( train
        , fox
        ) where

        import Debug.Trace
        import qualified Data.Map as M
        import qualified System.Random as R
        import qualified Data.ByteString.Char8 as B

        type Database = M.Map (B.ByteString, B.ByteString) (M.Map B.ByteString Int)

        train :: [B.ByteString] -> Database
        train (x:y:[]) = M.empty
        train (x:y:z:xs) =
            let l = train (y:z:xs)
            in M.insertWith' (\new old -> M.insertWith' (+) z 1 old) (x, y) (M.singleton z 1) `seq` l

        main = do
            contents <- B.readFile "76.txt"
            print $ train $ B.words contents

        fox = "The quick brown fox jumps over the brown fox who is slow jumps over the brown fox who is dead."
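
    One thing worth flagging, unless I've misread the flattened code: in train, the result of M.insertWith' is forced with seq and then discarded, so the returned map is whatever the recursive call produced and the current pair is never actually inserted. A hedged sketch of a fold-based train that threads the map through each step:

        import Data.List (foldl')
        import qualified Data.Map as M
        import qualified Data.ByteString.Char8 as B

        type Database = M.Map (B.ByteString, B.ByteString) (M.Map B.ByteString Int)

        train :: [B.ByteString] -> Database
        train ws = foldl' step M.empty triples
          where
            triples = zip3 ws (drop 1 ws) (drop 2 ws)
            -- merge the new singleton into any existing follow-up counts
            step db (x, y, z) =
                M.insertWith' (M.unionWith (+)) (x, y) (M.singleton z 1) db

    foldl' keeps the accumulator strict, which matters as much as the container choice here.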

    Read the article

  • Shouldn't prepared statements be much faster?

    - by silversky
    $s = explode(" ", microtime());
    $s = $s[0] + $s[1];
    $con = mysqli_connect('localhost', 'test', 'pass', 'db') or die('Err');
    for ($i = 0; $i < 1000; $i++) {
        $stmt = $con->prepare("SELECT MAX(id) AS max_id, MIN(id) AS min_id FROM tb");
        $stmt->execute();
        $stmt->bind_result($M, $m);
        $stmt->free_result();
        $rand = mt_rand($m, $M).'<br/>';
        $res = $con->prepare("SELECT * FROM tb WHERE id >= ? LIMIT 0,1");
        $res->bind_param("s", $rand);
        $res->execute();
        $res->free_result();
    }
    $e = explode(" ", microtime());
    $e = $e[0] + $e[1];
    echo number_format($e - $s, 4, '.', '');

    // and:

    $link = mysql_connect("localhost", "test", "pass") or die();
    mysql_select_db("db") or die("Unable to select database".mysql_error());
    for ($i = 0; $i < 1000; $i++) {
        $range_result = mysql_query("SELECT MAX(`id`) AS max_id, MIN(`id`) AS min_id FROM tb");
        $range_row = mysql_fetch_object($range_result);
        $random = mt_rand($range_row->min_id, $range_row->max_id);
        $result = mysql_query("SELECT * FROM tb WHERE id >= $random LIMIT 0,1");
    }

    Definitely prepared statements are safer, but everywhere it also says they are much faster. BUT in my test of the above code I got: 2.45 sec for the prepared statements and 5.05 sec for the second example. What do you think I'm doing wrong? Should I use the second solution, or should I try to optimize the prepared statements?
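
    Part of the gap may be the benchmark itself: the mysqli version calls prepare() inside the loop, paying parse/plan cost 1000 times, it never calls fetch() after bind_result() (so $M and $m stay unset), and '<br/>' is appended to $rand before it is bound. A hedged sketch with both statements prepared once:

        $stmt = $con->prepare("SELECT MAX(id) AS max_id, MIN(id) AS min_id FROM tb");
        $res  = $con->prepare("SELECT * FROM tb WHERE id >= ? LIMIT 0,1");

        for ($i = 0; $i < 1000; $i++) {
            $stmt->execute();
            $stmt->bind_result($max, $min);
            $stmt->fetch();                  // actually populate $max and $min
            $stmt->free_result();

            $rand = mt_rand($min, $max);
            $res->bind_param("i", $rand);    // "i" = integer parameter
            $res->execute();
            $res->free_result();
        }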

    Read the article

  • WPF, how can I optimize line and circle drawing?

    - by Aurélien Ribon
    Hello! I am developing an application where I need to draw a graph on the screen. For this purpose, I use a Canvas and put Controls on it. An example of such a drawing, as shown in the app, can be found here: http://free0.hiboox.com/images/1610/d82e0b7cc3521071ede601d3542c7bc5.png It works fine for simple graphs, but I also want to be able to draw very large graphs (hundreds of nodes), and when I try to draw a very large graph, it takes a LOT of time to render. My problem is that the code is not optimized at all; I just wanted it to work. Until now, I have a Canvas on the one hand and multiple Controls on the other. Circles and lines are kept in collections, and for each item of these collections I use a ControlTemplate, defining a red circle, a black circle, a line, etc. Here is an example, the definition of a graph circle:

        <!-- STYLE : DISPLAY DATA NODE -->
        <Style TargetType="{x:Type flow.elements:DisplayNode}">
            <Setter Property="Canvas.Left" Value="{Binding X, RelativeSource={RelativeSource Self}}" />
            <Setter Property="Canvas.Top" Value="{Binding Y, RelativeSource={RelativeSource Self}}" />
            <Setter Property="Template">
                <Setter.Value>
                    <ControlTemplate TargetType="{x:Type flow.elements:DisplayNode}">
                        <!--TEMPLATE-->
                        <Grid x:Name="grid" Margin="-30,-30,0,0">
                            <Ellipse x:Name="selectionEllipse" StrokeThickness="0" Width="60" Height="60"
                                     Opacity="0" IsHitTestVisible="False">
                                <Ellipse.Fill>
                                    <RadialGradientBrush>
                                        <GradientStop Color="Black" Offset="0.398" />
                                        <GradientStop Offset="1" />
                                    </RadialGradientBrush>
                                </Ellipse.Fill>
                            </Ellipse>
                            <Ellipse Stroke="Black" Width="30" Height="30" x:Name="ellipse">
                                <Ellipse.Fill>
                                    <LinearGradientBrush EndPoint="0,1">
                                        <GradientStop Offset="0" Color="White" />
                                        <GradientStop Offset="1.5" Color="LightGray" />
                                    </LinearGradientBrush>
                                </Ellipse.Fill>
                            </Ellipse>
                            <TextBlock x:Name="tblock"
                                       Text="{Binding NodeName, RelativeSource={RelativeSource Mode=TemplatedParent}}"
                                       Foreground="Black" VerticalAlignment="Center" HorizontalAlignment="Center"
                                       FontSize="10.667" />
                        </Grid>
                        <!--TRIGGERS-->
                        <ControlTemplate.Triggers>
                            <!--DATAINPUT-->
                            <MultiTrigger>
                                <MultiTrigger.Conditions>
                                    <Condition Property="SkinMode" Value="NODETYPE" />
                                    <Condition Property="NodeType" Value="DATAINPUT" />
                                </MultiTrigger.Conditions>
                                <Setter TargetName="tblock" Property="Foreground" Value="White" />
                                <Setter TargetName="ellipse" Property="Fill">
                                    <Setter.Value>
                                        <LinearGradientBrush EndPoint="0,1">
                                            <GradientStop Offset="-0.5" Color="White" />
                                            <GradientStop Offset="1" Color="Black" />
                                        </LinearGradientBrush>
                                    </Setter.Value>
                                </Setter>
                            </MultiTrigger>
                            <!--DATAOUTPUT-->
                            <MultiTrigger>
                                <MultiTrigger.Conditions>
                                    <Condition Property="SkinMode" Value="NODETYPE" />
                                    <Condition Property="NodeType" Value="DATAOUTPUT" />
                                </MultiTrigger.Conditions>
                                <Setter TargetName="tblock" Property="Foreground" Value="White" />
                                <Setter TargetName="ellipse" Property="Fill">
                                    <Setter.Value>
                                        <LinearGradientBrush EndPoint="0,1">
                                            <GradientStop Offset="-0.5" Color="White" />
                                            <GradientStop Offset="1" Color="Black" />
                                        </LinearGradientBrush>
                                    </Setter.Value>
                                </Setter>
                            </MultiTrigger>
                            ....... THERE IS A TOTAL OF 7 MULTITRIGGERS .......
                        </ControlTemplate.Triggers>
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
        </Style>

    Also, the lines are drawn using the Line Control.
        <!-- STYLE : DISPLAY LINK -->
        <Style TargetType="{x:Type flow.elements:DisplayLink}">
            <Setter Property="Template">
                <Setter.Value>
                    <ControlTemplate TargetType="{x:Type flow.elements:DisplayLink}">
                        <!--TEMPLATE-->
                        <Line X1="{Binding X1, RelativeSource={RelativeSource TemplatedParent}}"
                              X2="{Binding X2, RelativeSource={RelativeSource TemplatedParent}}"
                              Y1="{Binding Y1, RelativeSource={RelativeSource TemplatedParent}}"
                              Y2="{Binding Y2, RelativeSource={RelativeSource TemplatedParent}}"
                              Stroke="Gray" StrokeThickness="2" x:Name="line" />
                        <!--TRIGGERS-->
                        <ControlTemplate.Triggers>
                            <!--BRANCH : ASSERTION-->
                            <MultiTrigger>
                                <MultiTrigger.Conditions>
                                    <Condition Property="SkinMode" Value="BRANCHTYPE" />
                                    <Condition Property="BranchType" Value="ASSERTION" />
                                </MultiTrigger.Conditions>
                                <Setter TargetName="line" Property="Stroke" Value="#E0E0E0" />
                            </MultiTrigger>
                        </ControlTemplate.Triggers>
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
        </Style>

    So, I need your advice. How can I drastically improve the rendering performance? Should I define each MultiTrigger circle-rendering possibility in its own ControlTemplate instead? Is there a better line-drawing technique? Should I open a DrawingContext and draw everything in one control, instead of having hundreds of controls?
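
    On the last question: yes, for hundreds of shapes, replacing per-shape Controls (each with templates, triggers, layout, and hit-testing) with a single element that renders through a DrawingContext removes most of the overhead. A hedged C# sketch using DrawingVisual (all names are assumptions):

        using System.Windows;
        using System.Windows.Media;

        public class GraphCanvas : FrameworkElement
        {
            private readonly VisualCollection visuals;

            public GraphCanvas()
            {
                visuals = new VisualCollection(this);
                var dv = new DrawingVisual();
                using (DrawingContext dc = dv.RenderOpen())
                {
                    // draw all edges, then all nodes, in one pass;
                    // frozen pens/brushes are cheaper to render
                    var pen = new Pen(Brushes.Gray, 2);
                    pen.Freeze();
                    dc.DrawLine(pen, new Point(10, 10), new Point(120, 80));
                    dc.DrawEllipse(Brushes.White, new Pen(Brushes.Black, 1),
                                   new Point(120, 80), 15, 15);
                }
                visuals.Add(dv);
            }

            protected override int VisualChildrenCount { get { return visuals.Count; } }
            protected override Visual GetVisualChild(int index) { return visuals[index]; }
        }

    The trade-off is giving up per-element templating and triggers; selection highlights and the like then have to be redrawn by hand.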

    Read the article

  • CSS background image

    - by Christopher
    I have the following in my CSS:

        .green {
            background-image:('http://i42.tinypic.com/xzup2.jpg')
            background-repeat: no-repeat;
            height:430px;
            width:100%;
        }

        .yellow {
            background-image: url('/images/yellowlight.png');
            background-repeat: no-repeat;
            height:430px;
            width:100%;
        }

    .yellow works fine, but .green does not. What's wrong?
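
    For comparison with the working .yellow rule: the .green declaration is missing the url() wrapper and the terminating semicolon, so the declaration is invalid and the following property gets mis-parsed. A corrected sketch:

        .green {
            background-image: url('http://i42.tinypic.com/xzup2.jpg');
            background-repeat: no-repeat;
            height: 430px;
            width: 100%;
        }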

    Read the article

  • Will a MySQL query run slower if one of the tables involved has no index defined?

    - by lock
    There's this already populated database which came from another dev. I'm not sure what went on in that dev's mind when he created the tables, but in one of our scripts there is this query involving 4 tables, and it runs super slow:

        SELECT a.col_1, a.col_2, a.col_3, a.col_4, a.col_5, a.col_6, a.col_7
        FROM a, b, c, d
        WHERE a.id = b.id
          AND b.c_id = c.id
          AND c.id = d.c_id
          AND a.col_8 = '$col_8'
          AND d.g_id = '$g_id'
          AND c.private = '1'

    NOTE: $col_8 and $g_id are variables from a form. It's only my theory that it's due to tables b and c not having an index, although I'm guessing the dev didn't think it was necessary since those tables only express relations between a and d: b says that the data in a belongs to a certain user, and c says that the user belongs to a group in d. As you can see, there's not even a JOIN or other expensive query construct used, but this query, which returns only around 100 rows, takes 2 minutes to execute. Anyway, my question is simply this post's title: will a MySQL query run slower if one of the tables involved has no index defined?
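
    To the title question: yes. Without indexes on the join columns, MySQL has to scan each joined table for every candidate row. A hedged sketch of the indexes this particular query would want (index names are assumptions; skip any column that is already a primary key):

        ALTER TABLE a ADD INDEX idx_a_col8 (col_8);
        ALTER TABLE b ADD INDEX idx_b_id (id), ADD INDEX idx_b_cid (c_id);
        ALTER TABLE d ADD INDEX idx_d_cid (c_id), ADD INDEX idx_d_gid (g_id);

    Running EXPLAIN before and after should show the join types move from ALL (full scan) to ref/eq_ref.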

    Read the article

  • PHP Initialising strings as boolean first

    - by Anriëtte Myburgh
    I'm in the habit of initialising variables in PHP to false and then applying whatever (string, boolean, float) value to it later. Which would you reckon is better?

        $name = false;
        if (condition == true) {
            $name = $something_else;
        }
        if ($name) {
            …do something…
        }

    vs.

        $name = '';
        if (condition == true) {
            $name = $something_else;
        }
        if (!empty($name)) {
            …do something…
        }

    Which would you reckon can possibly give better performance? Which method would you use?
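
    A third option, sketched as a suggestion: initialize to null and compare strictly. It keeps "not yet set" distinct from legitimately falsy values such as '0' or an empty string, a case where both truthiness and empty() mislead; any raw speed difference between these approaches is negligible by comparison:

        $name = null;
        if ($condition) {
            $name = $something_else;
        }
        if ($name !== null) {
            // ...do something...
        }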

    Read the article
