Search Results

Search found 20224 results on 809 pages for 'query optimization'.

Page 185 of 809

  • mysql query to concat information from 3 tables - getting incorrect result count

    - by iPfaffy
    I have 3 tables in my database:

    ab_contacts (id, first_name, last_name, addressbook_id)
    ab_addressbooks (name, id)
    co_comments (id, link_id, comment)

    I'd like to create a query that will let me select all the contacts and comments related to them in a given addressbook. To select all the people in a given addressbook, I can use:

    select count(*) from ab_contacts where addressbook_id = '50';

    This returns 8152 people. However, when I run my query:

    select ab_contacts.first_name, ab_contacts.last_name, ab_contacts.email, ab_addressbooks.name, co_comments.comments
    from ab_contacts
    JOIN ab_addressbooks ON (ab_contacts.addressbook_id = ab_addressbooks.id)
    JOIN co_comments ON (ab_contacts.id = co_comments.link_id)
    WHERE ab_contacts.addressbook_id = '50';

    the format works, but I only get 1045 results. I'm sure there is something I am missing, but I cannot figure it out. Any help would be greatly appreciated.
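
    One likely explanation worth noting here (not part of the original question): an inner JOIN to co_comments drops every contact that has no comment row, which would shrink the 8152 contacts down to only those that have comments. A rough sketch of the same query with a LEFT JOIN, keeping all contacts and returning NULL where no comment exists (column names follow the table list at the top of the question):

    select ab_contacts.first_name, ab_contacts.last_name, ab_addressbooks.name, co_comments.comment
    from ab_contacts
    JOIN ab_addressbooks ON (ab_contacts.addressbook_id = ab_addressbooks.id)
    LEFT JOIN co_comments ON (ab_contacts.id = co_comments.link_id)
    WHERE ab_contacts.addressbook_id = '50';

    Note that a contact with several comments will still appear once per comment, so the row count can also exceed the contact count.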

    Read the article

  • Merging and splitting overlapping rectangles to produce non-overlapping ones

    - by uj
    I am looking for an algorithm as follows: given a set of possibly overlapping rectangles (all of which are "not rotated" and can be uniformly represented as (left, top, right, bottom) tuples, etc.), it returns a minimal set of (non-rotated) non-overlapping rectangles that occupy the same area. It seems simple enough at first glance, but proves to be tricky (at least to do efficiently). Are there some known methods for this/ideas/pointers? Methods that produce not necessarily minimal but heuristically small sets are interesting as well, as are methods that produce any valid output set at all.
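
    A minimal sketch (not from the original question) of the simplest valid-but-not-minimal approach hinted at in the last sentence: cut the plane into vertical slabs at every distinct x coordinate, merge the covered y-intervals inside each slab, and emit one rectangle per merged interval. It assumes the (left, top, right, bottom) convention above with top <= bottom numerically.

    def decompose(rects):
        """Return non-overlapping rectangles covering the same area (valid, not minimal)."""
        xs = sorted({x for l, t, r, b in rects for x in (l, r)})
        out = []
        for x0, x1 in zip(xs, xs[1:]):
            # y-intervals of all rectangles that fully span this vertical slab
            spans = sorted((t, b) for l, t, r, b in rects if l <= x0 and r >= x1)
            merged = []
            for t, b in spans:
                if merged and t <= merged[-1][1]:
                    merged[-1][1] = max(merged[-1][1], b)   # overlaps previous interval: extend it
                else:
                    merged.append([t, b])
            out.extend((x0, t, x1, b) for t, b in merged)
        return out

    # Example: two overlapping squares become three disjoint pieces with the same total area.
    print(decompose([(0, 0, 2, 2), (1, 1, 3, 3)]))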

    Read the article

  • Sorting a list of numbers with modified cost

    - by David
    First, this was one of the four problems we had to solve in a project last year, and I couldn't find a suitable algorithm, so we handed in a brute-force solution.

    Problem: The numbers are in a list that is not sorted and supports only one type of operation. The operation is defined as follows: given a position i and a position j, the operation moves the number at position i to position j without altering the relative order of the other numbers. If i > j, the positions of the numbers between positions j and i - 1 increase by 1; otherwise, if i < j, the positions of the numbers between positions i + 1 and j decrease by 1. This operation requires i steps to find the number to move and j steps to locate the position to which you want to move it, so the number of steps required to move a number from position i to position j is i + j. We need to design an algorithm that, given a list of numbers, determines the optimal (in terms of cost) sequence of moves to rearrange the sequence.

    Attempts: Part of our investigation was around NP-completeness; we made it a decision problem and tried to find a suitable transformation to any of the problems listed in Garey and Johnson's book Computers and Intractability, with no results. There is also no direct reference (from our point of view) to this kind of variation in Donald E. Knuth's The Art of Computer Programming, Vol. 3: Sorting and Searching. We also analyzed algorithms to sort linked lists, but none of them gives a good idea for finding the optimal sequence of movements. Note that the idea is not to find an algorithm that orders the sequence, but one that tells me the optimal sequence of movements, in terms of cost, that organizes the sequence; you can make a copy and sort it to analyze the final position of the elements if you want. In fact, we may assume that the list contains the numbers from 1 to n, so we know where we want to put each number; we are just concerned with minimizing the total cost of the steps. We tested several greedy approaches, but all of them failed; divide-and-conquer sorting algorithms can't be used because they swap portions of the list with no cost, and our dynamic programming approaches had to consider too many cases. The brute-force recursive algorithm takes all the possible combinations of movements from i to j, and then again all the possible movements of the rest of the elements; at the end it returns the sequence with the lowest total cost that sorted the list. As you can imagine, the cost of this algorithm is brutal and makes it impractical for more than 8 elements.

    Our observations: n movements are not necessarily cheaper than n + 1 movements (unlike swaps in arrays, which are O(1)). There are basically two ways of moving one element from position i to j: one is to move it directly, and the other is to move other elements around i in a way that it reaches position j. At most you make n - 1 movements (the untouched element reaches its position alone). If it is the optimal sequence of movements, then you didn't move the same element twice.
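
    For concreteness, a small sketch (not from the original post) of the move operation and its cost exactly as described above, using 1-based positions:

    def move(lst, i, j):
        """Move the element at position i to position j (1-based); cost is i + j."""
        x = lst.pop(i - 1)        # elements after position i shift left by one
        lst.insert(j - 1, x)      # elements from position j onward shift right by one
        return i + j              # steps spent locating the source plus the target

    seq = [3, 1, 2]
    cost = move(seq, 1, 3)        # moves the 3 to the end: seq becomes [1, 2, 3], cost 4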

    Read the article

  • SQL Database dilemma: Optimize for Querying or Writing?

    - by Harry
    I'm working on a personal project (a search engine) and have a bit of a dilemma. At the moment it is optimized for writing data to the search index and significantly slow for search queries. The DTA (Database Engine Tuning Advisor) recommends adding a couple of indexed views in order to speed up search queries. But this is to the detriment of writing new data to the DB. It seems I can't have one without the other! This is obviously not a new problem. What is a good strategy for this issue?

    Read the article

  • Does the compiler optimize the function parameters passed by value?

    - by Naveen
    Let's say I have a function where the parameter is passed by value instead of by const reference. Further, let's assume that only the value is used inside the function, i.e. the function doesn't try to modify it. In that case, will the compiler be able to figure out that it can pass the value by const reference (for performance reasons) and generate the code accordingly? Is there any compiler which does that?

    Read the article

  • Passing values for multi-value parameter in SSRS query string

    - by Andy Xufuris
    I have two reports built using SSRS 2005. The first report is set to navigate to the second when a specific field is clicked. There is a multi-value parameter on the second report. I need to pass multiple values for this parameter in the URL query string when calling this report. Is there a way to pass multiple values for a parameter in the query string of a report? Or can you pass a parameter that will cause the Select All value to be selected?

    Read the article

  • Access SQL query to SELECT from one table and INSERT into another

    - by typoknig
    Below is my query. Access does not like it, giving me the error "Syntax error (missing operator) in query expression 'answer WHERE question = 1'". Hopefully you can see what I am trying to do. Please pay particular attention to the 3rd, 4th, and 5th lines under the SELECT statement.

    INSERT INTO Table2 (respondent,1,2,3-1,3-2,3-3,4,5)
    SELECT respondent,
    answer WHERE question = 1,
    answer WHERE question = 2,
    answer WHERE answer = 'text 1' AND question = 3,
    answer WHERE answer = 'text 2' AND question = 3,
    answer WHERE answer = 'text 3' AND question = 3,
    answer WHERE question = 4,
    longanswer WHERE question 5
    FROM Table1
    GROUP BY respondent;
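
    A possible restructuring, not from the original question: Access/Jet SQL can't attach a WHERE clause to an individual select-list item, but the same pivot can usually be expressed with conditional aggregation, wrapping an IIf() inside an aggregate for each output column. Column names below follow the posted query; treating 1, 2, 3-1, etc. as bracketed field names in Table2 is an assumption.

    INSERT INTO Table2 (respondent, [1], [2], [3-1], [3-2], [3-3], [4], [5])
    SELECT respondent,
           MAX(IIf(question = 1, answer, Null)),
           MAX(IIf(question = 2, answer, Null)),
           MAX(IIf(question = 3 AND answer = 'text 1', answer, Null)),
           MAX(IIf(question = 3 AND answer = 'text 2', answer, Null)),
           MAX(IIf(question = 3 AND answer = 'text 3', answer, Null)),
           MAX(IIf(question = 4, answer, Null)),
           MAX(IIf(question = 5, longanswer, Null))
    FROM Table1
    GROUP BY respondent;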

    Read the article

  • Has anyone ever successfully made index merge work for MySQL?

    - by user198729
    Setup:

    mysql> create table t(a integer unsigned,b integer unsigned);
    mysql> insert into t(a,b) values (1,2),(1,3),(2,4);
    mysql> create index i_t_a on t(a);
    mysql> create index i_t_b on t(b);
    mysql> explain select * from t where a=1 or b=4;
    +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
    | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
    +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
    |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL |    3 | Using where |
    +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Is there something I'm missing?

    Update:

    mysql> explain select * from t where a=1 or b=4;
    +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
    | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
    +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
    |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL | 1863 | Using where |
    +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Version:

    mysql> select version();
    +----------------------+
    | version()            |
    +----------------------+
    | 5.1.36-community-log |
    +----------------------+

    Has anyone ever successfully made index merge work for MySQL? I'll be glad to see success stories here :)
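
    A side note, not from the original post: with only a handful of narrow rows the optimizer may simply judge a full scan cheaper than an index_merge union, so the plans above are not necessarily wrong. One common way to check whether the two indexes are usable at all is to rewrite the OR as a UNION, letting each branch use its own index:

    -- a rough equivalent of WHERE a=1 OR b=4, one index per branch
    SELECT * FROM t WHERE a = 1
    UNION
    SELECT * FROM t WHERE b = 4;

    Note that UNION also removes duplicate rows, which can differ from the OR query if the table contains exact duplicates.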

    Read the article

  • Write file needs to be optimised for heavy traffic, part 2

    - by Clayton Leung
    For anyone interested in where I'm coming from, you can refer to part 1 ("write file need to optimised for heavy traffic"), but it is not necessary.

    Below is a snippet of code I have written to capture some financial tick data from the broker API. The code will run without error. I need to optimize the code, because in peak hours the zf_TickEvent method will be called more than 10000 times a second. I use a MemoryStream to hold the data until it reaches a certain size, then I output it into a text file. The broker API is only single threaded.

    void zf_TickEvent(object sender, ZenFire.TickEventArgs e)
    {
        outputString = string.Format("{0},{1},{2},{3},{4}\r\n",
            e.TimeStamp.ToString(timeFmt),
            e.Product.ToString(),
            Enum.GetName(typeof(ZenFire.TickType), e.Type),
            e.Price,
            e.Volume);
        fillBuffer(outputString);
    }

    public class memoryStreamClass
    {
        public static MemoryStream ms = new MemoryStream();
    }

    void fillBuffer(string outputString)
    {
        byte[] outputByte = Encoding.ASCII.GetBytes(outputString);
        memoryStreamClass.ms.Write(outputByte, 0, outputByte.Length);
        if (memoryStreamClass.ms.Length > 8192)
        {
            emptyBuffer(memoryStreamClass.ms);
            memoryStreamClass.ms.SetLength(0);
            memoryStreamClass.ms.Position = 0;
        }
    }

    void emptyBuffer(MemoryStream ms)
    {
        FileStream outStream = new FileStream("c:\\test.txt", FileMode.Append);
        ms.WriteTo(outStream);
        outStream.Flush();
        outStream.Close();
    }

    Questions: Any suggestions to make this even faster? I will try to vary the buffer length, but in terms of code structure, is this (almost) the fastest? When the MemoryStream is filled up and I am emptying it to the file, what would happen to the new data coming in? Do I need to implement a second buffer to hold that data while I am emptying my first buffer, or is C# smart enough to figure it out? Thanks for any advice.
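
    A rough sketch (not from the original post) of two changes that often help with this pattern: keep the FileStream open instead of re-opening c:\test.txt on every flush, and swap in a fresh MemoryStream so the flush can happen off the callback. Since the broker API is single threaded, the swap itself needs no locking; the ThreadPool hand-off and the TickWriter name are illustrative assumptions.

    using System.IO;
    using System.Text;
    using System.Threading;

    class TickWriter
    {
        private MemoryStream buffer = new MemoryStream();
        private readonly FileStream outStream =
            new FileStream("c:\\test.txt", FileMode.Append, FileAccess.Write);

        public void Append(string line)
        {
            byte[] bytes = Encoding.ASCII.GetBytes(line);
            buffer.Write(bytes, 0, bytes.Length);
            if (buffer.Length > 8192)
            {
                MemoryStream full = buffer;      // hand the full buffer off...
                buffer = new MemoryStream();     // ...and keep collecting into a new one
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    lock (outStream)             // flushes may overlap, so serialize them
                    {
                        full.WriteTo(outStream);
                        outStream.Flush();
                    }
                });
            }
        }
    }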

    Read the article

  • Reduce Processing Time of accessing database

    - by medma
    Hello all, I'm making an app which requires a remote database connection. I want the values in the picker to come from the database, but when I click the button to invoke the picker, it takes some time to fetch and display the values. Is there any way to do it faster? Also, is there any way to reduce the time of the transition between two views? Thanks

    Read the article

  • Ternary Operators in JavaScript Without an "Else"

    - by Oscar Godson
    I've been using them forever, and I love them. To me they seem cleaner and I can scan them faster, but ever since I've been using them I've always had to put null in the else conditions that don't have anything. Is there any way around it? E.g.

    condition ? x=true : null ;

    Basically, is there a way to do:

    condition ? x=true;

    Now it shows up as a syntax error...
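
    A small illustration, not from the original question: when there is no else branch, a plain if statement (or short-circuit evaluation) avoids the dummy null entirely.

    if (condition) x = true;      // the straightforward form

    condition && (x = true);      // short-circuit form: the assignment only runs when condition is truthy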

    Read the article

  • Java - Optimize finding a string in a list

    - by Mark
    I have an ArrayList of objects where each object contains a string 'word' and a date. I need to check to see if the date has passed for a list of 500 words. The ArrayList could contain up to a million words and dates. The dates I store as integers, so the problem I have is attempting to find the word I am looking for in the ArrayList. Is there a way to make this faster? In Python I have a dict, and mWords['foo'] is a simple lookup without looping through the whole 1 million items in the mWords array. Is there something like this in Java?

    for (int i = 0; i < mWords.size(); i++) {
        if ( word == mWords.get(i).word ) {
            mLastFindIndex = i;
            return mWords.get(i);
        }
    }
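
    A rough sketch, not from the original post, of the Java equivalent of that dict lookup: build a HashMap from word to entry once, then each lookup is a constant-time get instead of a scan. The element type name WordEntry is an assumption standing in for whatever class the ArrayList actually holds.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class WordIndex {
        static class WordEntry {           // stand-in for the poster's element class
            String word;
            int date;
        }

        private final Map<String, WordEntry> byWord = new HashMap<String, WordEntry>();

        WordIndex(List<WordEntry> mWords) {
            for (WordEntry w : mWords) {   // one O(n) pass instead of O(n) per lookup
                byWord.put(w.word, w);
            }
        }

        WordEntry find(String word) {
            return byWord.get(word);       // analogous to Python's mWords['foo']
        }
    }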

    Read the article

  • Overhead of serving pages - JSPs vs. PHP vs. ASPXs vs. C

    - by John Shedletsky
    I am interested in writing my own internet ad server. I want to serve billions of impressions with as little hardware as possible. Which server-side technologies are best suited for this task? I am asking about the relative overhead of serving my ad pages as either pages rendered by PHP, or Java, or .NET, or coding HTTP responses directly in C and writing some multi-socket IO monster to serve requests (I assume this one wins, but if my assumption is wrong, that would actually be most interesting). Obviously all the most efficient optimizations are done at the algorithm level, but I figure there have got to be some speed differences at the end of the day that make one method of serving ads better than another. How much overhead does something like Apache or IIS introduce? There's got to be a ton of extra junk in there I don't need. At some point I guess this is more a question of which platform/language combo is best suited - please excuse the maladroitly posed question; hopefully you understand what I am trying to get at.

    Read the article

  • Can a conforming C# compiler optimize away a local (but unused) variable if it is the only strong reference to an object?

    - by stakx
    The title says it all, but let me explain:

    void Case_1()
    {
        var weakRef = new WeakReference(new object());
        GC.Collect(); // <-- doesn't have to be an explicit call; just assume that
                      //     garbage collection would occur at this point.
        if (weakRef.IsAlive)
            ...
    }

    In this code example, I obviously have to plan for the possibility that the new'ed object is reclaimed by the garbage collector; therefore the if statement. (Note that I'm using weakRef for the sole purpose of checking if the new'ed object is still around.)

    void Case_2()
    {
        var unusedLocalVar = new object();
        var weakRef = new WeakReference(unusedLocalVar);
        GC.Collect(); // <-- doesn't have to be an explicit call; just assume that
                      //     garbage collection would occur at this point.
        Debug.Assert(weakRef.IsAlive);
    }

    The main change in this code example from the previous one is that the new'ed object is strongly referenced by a local variable (unusedLocalVar). However, this variable is never used again after the weak reference (weakRef) has been created.

    Question: Is a conforming C# compiler allowed to optimize the first two lines of Case_2 into those of Case_1 if it sees that unusedLocalVar is only used in one place, namely as an argument to the WeakReference constructor? I.e., is there any possibility that the assertion in Case_2 could ever fail?
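
    A side note, not part of the original question: when the goal is to guarantee that a local keeps an object alive across a potential collection point, the framework-provided tool is GC.KeepAlive, which the JIT treats as a use of the reference at that point. A minimal sketch:

    void Case_2_KeptAlive()
    {
        var unusedLocalVar = new object();
        var weakRef = new WeakReference(unusedLocalVar);
        GC.Collect();
        Debug.Assert(weakRef.IsAlive);   // the object is still referenced below, so it survives
        GC.KeepAlive(unusedLocalVar);    // extends the reference's lifetime up to this call
    }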

    Read the article

  • How to get REALLY fast python over a simple loop

    - by totallymike
    I'm working on a SPOJ problem, INTEST. The goal is to specify the number of test cases (n) and a divisor (k), then feed your program n numbers. The program will accept each number on a new line of stdin and, after receiving the nth number, will tell you how many were divisible by k. The only challenge in this problem is getting your code to be FAST, because k can be anything up to 10^7 and the test cases can be as high as 10^9. I'm trying to write it in Python and having trouble speeding it up. Any ideas?

    import sys

    first_in = raw_input()
    thing = first_in.split()
    n = int(thing[0])
    k = int(thing[1])
    total = 0
    i = 0
    for line in sys.stdin:
        t = int(line)
        if t % k == 0:
            total += 1
    print total
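
    A rough sketch of the usual speed-up for this kind of problem (not from the original post): read all of stdin in one call and keep the per-number work out of an interpreted loop as much as possible. Python 2 syntax, matching the snippet above.

    import sys

    def main():
        data = sys.stdin.read().split()
        k = int(data[1])                       # data[0] is n, which isn't actually needed
        # a generator inside sum() avoids building an intermediate list
        print sum(1 for tok in data[2:] if int(tok) % k == 0)

    main()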

    Read the article

  • Average of a Sum in Mysql query

    - by chupeman
    I am having some problems creating a query that gives me the average of a sum. I read a few examples here on Stack Overflow and couldn't do it. Can anyone help me understand how to do this, please? This is the data I have:

    Basically I need the average transaction value by cashier. I can't run a basic AVG because it will take all rows, but each transaction can have multiple rows. At the end I want to have:

    Cashier | Average
    131     | 44.31    (which comes from the sum divided by 3 transactions, not 5 rows)
    130     | 33.15
    etc.

    This is the query I have to SUM the transactions, but I don't know how or where to include the AVG function:

    SELECT `products`.`Transaction_x0020_Number`,
           Sum(`products`.`Sales_x0020_Value`) AS `SUM of Sales_x0020_Value`,
           `products`.`Cashier`
    FROM `products`
    GROUP BY `products`.`Transaction_x0020_Number`, `products`.`Date`, `products`.`Cashier`
    HAVING (`products`.`Date` = {d'2010-06-04'})

    Any help is appreciated.
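
    A possible shape for the query (not from the original post): compute the per-transaction sums in a derived table, then AVG those sums per cashier. Column names are taken from the posted query; keeping the same date filter, in the same literal form, is an assumption.

    SELECT t.Cashier, AVG(t.transaction_total) AS Average
    FROM (
        SELECT `products`.`Cashier` AS Cashier,
               `products`.`Transaction_x0020_Number`,
               SUM(`products`.`Sales_x0020_Value`) AS transaction_total
        FROM `products`
        WHERE `products`.`Date` = {d'2010-06-04'}
        GROUP BY `products`.`Cashier`, `products`.`Transaction_x0020_Number`
    ) AS t
    GROUP BY t.Cashier;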

    Read the article

  • [Python] Tips for making a fraction calculator code more optimized (faster and using less memory)

    - by Logic Named Joe
    Hello everyone. Basically, what I need the program to do is to act as a simple fraction calculator (for addition, subtraction, multiplication and division) for a single line of input, for example:

    input:  1/7 + 3/5
    output: 26/35

    My initial code:

    import sys

    def euclid(numA, numB):
        while numB != 0:
            numRem = numA % numB
            numA = numB
            numB = numRem
        return numA

    for wejscie in sys.stdin:
        wyjscie = wejscie.split(' ')
        a, b = [int(x) for x in wyjscie[0].split("/")]
        c, d = [int(x) for x in wyjscie[2].split("/")]
        if wyjscie[1] == '+':
            licz = a * d + b * c
            mian = b * d
            nwd = euclid(licz, mian)
            konA = licz/nwd
            konB = mian/nwd
            wynik = str(konA) + '/' + str(konB)
            print(wynik)
        elif wyjscie[1] == '-':
            licz = a * d - b * c
            mian = b * d
            nwd = euclid(licz, mian)
            konA = licz/nwd
            konB = mian/nwd
            wynik = str(konA) + '/' + str(konB)
            print(wynik)
        elif wyjscie[1] == '*':
            licz = a * c
            mian = b * d
            nwd = euclid(licz, mian)
            konA = licz/nwd
            konB = mian/nwd
            wynik = str(konA) + '/' + str(konB)
            print(wynik)
        else:
            licz = a * d
            mian = b * c
            nwd = euclid(licz, mian)
            konA = licz/nwd
            konB = mian/nwd
            wynik = str(konA) + '/' + str(konB)
            print(wynik)

    Which I reduced to:

    import sys

    def euclid(numA, numB):
        while numB != 0:
            numRem = numA % numB
            numA = numB
            numB = numRem
        return numA

    for wejscie in sys.stdin:
        wyjscie = wejscie.split(' ')
        a, b = [int(x) for x in wyjscie[0].split("/")]
        c, d = [int(x) for x in wyjscie[2].split("/")]
        if wyjscie[1] == '+':
            print("/".join([str((a * d + b * c)/euclid(a * d + b * c, b * d)), str((b * d)/euclid(a * d + b * c, b * d))]))
        elif wyjscie[1] == '-':
            print("/".join([str((a * d - b * c)/euclid(a * d - b * c, b * d)), str((b * d)/euclid(a * d - b * c, b * d))]))
        elif wyjscie[1] == '*':
            print("/".join([str((a * c)/euclid(a * c, b * d)), str((b * d)/euclid(a * c, b * d))]))
        else:
            print("/".join([str((a * d)/euclid(a * d, b * c)), str((b * c)/euclid(a * d, b * c))]))

    Any advice on how to improve this further is welcome.

    Edit: one more thing that I forgot to mention - the code can not make use of any libraries apart from sys.

    Read the article

  • How to make this JavaScript much faster?

    - by Ralph
    Still trying to answer this question, and I think I finally found a solution, but it runs too slow.

    var $div = $('<div>')
        .css({
            'border': '1px solid red',
            'position': 'absolute',
            'z-index': '65535'
        })
        .appendTo('body');

    $('body *').live('mousemove', function(e) {
        var topElement = null;
        $('body *').each(function() {
            if(this == $div[0]) return true;
            var $elem = $(this);
            var pos = $elem.offset();
            var width = $elem.width();
            var height = $elem.height();
            if(e.pageX > pos.left && e.pageY > pos.top &&
               e.pageX < (pos.left + width) && e.pageY < (pos.top + height)) {
                var zIndex = document.defaultView.getComputedStyle(this, null).getPropertyValue('z-index');
                if(zIndex == 'auto') zIndex = $elem.parents().length;
                if(topElement == null || zIndex > topElement.zIndex) {
                    topElement = { 'node': $elem, 'zIndex': zIndex };
                }
            }
        });
        if(topElement != null) {
            var $elem = topElement.node;
            $div.offset($elem.offset()).width($elem.width()).height($elem.height());
        }
    });

    It basically loops through all the elements on the page and finds the top-most element beneath the cursor. Is there maybe some way I could use a quad-tree or something and segment the page so the loop runs faster?
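
    A possible alternative worth noting (not from the original post): browsers expose document.elementFromPoint(x, y), which returns the top-most element at a viewport coordinate without scanning the whole DOM. A rough sketch, reusing the $div overlay from the snippet above; if the overlay itself is hit the handler simply leaves the highlight where it is, and giving the overlay pointer-events: none would avoid that case entirely.

    $(document).mousemove(function (e) {
        // clientX/clientY are viewport-relative, which is what elementFromPoint expects
        var el = document.elementFromPoint(e.clientX, e.clientY);
        if (el && el !== $div[0]) {
            var $elem = $(el);
            $div.offset($elem.offset()).width($elem.width()).height($elem.height());
        }
    });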

    Read the article

  • Please help me optimize my Python code

    - by Haidon
    Beginner here! Forgive me in advance for raising what is probably an incredibly simple problem. I've been trying to put together a Python script that runs multiple find-and-replace actions and a few similar things on a specified plain-text file. It works, but from a programming perspective I doubt it works well. How would I best go about optimizing the actions made upon the 'outtext' variable? At the moment it's basically doing a very similar thing four times over...

    import binascii
    import re
    import struct
    import sys

    infile = sys.argv[1]
    charenc = sys.argv[2]
    outFile = infile + '.tex'

    findreplace = [
        ('TERM1', 'TERM2'),
        ('TERM3', 'TERM4'),
        ('TERM5', 'TERM6'),
    ]

    inF = open(infile, 'rb')
    s = unicode(inF.read(), charenc)
    inF.close()

    # THIS IS VERY MESSY.
    for couple in findreplace:
        outtext = s.replace(couple[0], couple[1])
        s = outtext
    for couple in findreplace:
        outtext = re.compile('Title: (.*)', re.I).sub(r'\\title' + r'{\1}', s)
        s = outtext
    for couple in findreplace:
        outtext = re.compile('Author: (.*)', re.I).sub(r'\\author' + r'{\1}', s)
        s = outtext
    for couple in findreplace:
        outtext = re.compile('Date: (.*)', re.I).sub(r'\\date' + r'{\1}', s)
        s = outtext
    # END MESSY SECTION.

    outF = open(outFile, 'wb')
    outF.write(outtext.encode('utf-8'))
    outF.close()
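
    A rough sketch of one possible cleanup (not from the original post): the three regex loops never use the loop variable, so each substitution only needs to run once, and the patterns can be compiled up front. The behaviour is intended to match the original script.

    import re

    regex_rules = [
        (re.compile(r'Title: (.*)', re.I), r'\\title{\1}'),
        (re.compile(r'Author: (.*)', re.I), r'\\author{\1}'),
        (re.compile(r'Date: (.*)', re.I), r'\\date{\1}'),
    ]

    def transform(s, findreplace):
        # plain find-and-replace pairs first, then the (pre-compiled) regex rules once each
        for old, new in findreplace:
            s = s.replace(old, new)
        for pattern, replacement in regex_rules:
            s = pattern.sub(replacement, s)
        return s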

    Read the article

  • C++ Reserve Memory Space

    - by uray
    Is there any way to reserve memory space, to be used later, with the default Windows memory manager, so that my application won't run out of memory as long as my program doesn't use more space than I reserved at the start of my program?

    Read the article

  • Efficiently draw a grid in Windows Forms

    - by Joel
    I'm writing an implementation of Conway's Game of Life in C#. This is the code I'm using to draw the grid; it's in my panel_Paint event. g is the graphics context.

    for (int y = 0; y < numOfCells * cellSize; y += cellSize)
    {
        for (int x = 0; x < numOfCells * cellSize; x += cellSize)
        {
            g.DrawLine(p, x, 0, x, y + numOfCells * cellSize);
            g.DrawLine(p, 0, x, y + size * drawnGrid, x);
        }
    }

    When I run my program, it is unresponsive until it finishes drawing the grid, which takes a few seconds at numOfCells = 100 and cellSize = 10. Removing all the multiplication makes it faster, but not by very much. Is there a better/more efficient way to draw my grid? Thanks
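
    A possible restructuring, not from the original post: the nested loop redraws every vertical and horizontal line numOfCells times, so it issues on the order of numOfCells^2 * 2 DrawLine calls where roughly 2 * numOfCells suffice. A sketch of one pass per direction, assuming the same g, p, numOfCells and cellSize inside the same panel_Paint handler:

    int extent = numOfCells * cellSize;
    for (int x = 0; x <= extent; x += cellSize)
    {
        g.DrawLine(p, x, 0, x, extent);   // vertical lines
    }
    for (int y = 0; y <= extent; y += cellSize)
    {
        g.DrawLine(p, 0, y, extent, y);   // horizontal lines
    }

    Setting the panel's DoubleBuffered property to true may also help with flicker while repainting.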

    Read the article

  • Need MYSQL query for finding lowest score per game player

    - by Chris Barnhill
    I have a game on Facebook called Rails Across Europe. I have a Best Scores page where I show the players with the best 20 scores, which in game terms refers to the lowest winning turn. The problem is that there are a small number of players who play frequently, and their scores dominate the page. I'd like to make the scores page open to more players. So I thought that I could display the single lowest winning turn for each player instead of displaying all of the lowest winning turns for all players. The problem is that the query for this eludes me. So I hope that one of you brilliant StackOverflow folks can help me with this. I have included the relevant MySQL table schemas below. Here are the table relationships:

    player_stats contains statistics for either a game in progress or a completed game. If a game is in progress, winning_turn is zero (which means that games with a winning_turn of zero should not be included in the query). player_stats has a game_player table id reference.
    game_player contains data describing games currently in progress. game_player has a player table id reference.
    player contains data describing a person who plays the game.

    Here's the query I'm currently using:

    'SELECT p.fb_user_id, ps.winning_turn, gp.difficulty_level, c.name as city_name, g.name as goods_name, d.cost
     FROM game_player as gp, player as p, player_stats as ps, demand as d, city as c, goods as g
     WHERE p.status = "ACTIVE" AND gp.player_id = p.id AND ps.game_player_id = gp.id AND d.id = ps.highest_demand_id AND c.id = d.city_id AND g.id = d.goods_id AND ps.winning_turn > 0
     ORDER BY ps.winning_turn ASC, d.cost DESC
     LIMIT '.$limit.';';

    Here are the relevant table schemas:

    --
    -- Table structure for table `player_stats`
    --
    CREATE TABLE IF NOT EXISTS `player_stats` (
      `id` int(11) NOT NULL auto_increment,
      `game_player_id` int(11) NOT NULL,
      `winning_turn` int(11) NOT NULL,
      `highest_demand_id` int(11) NOT NULL,
      PRIMARY KEY (`id`),
      KEY `game_player_id` (`game_player_id`,`highest_demand_id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=3814 ;

    --
    -- Table structure for table `game_player`
    --
    CREATE TABLE IF NOT EXISTS `game_player` (
      `id` int(10) unsigned NOT NULL auto_increment,
      `game_id` int(10) unsigned NOT NULL,
      `player_id` int(10) unsigned NOT NULL,
      `player_number` int(11) NOT NULL,
      `funds` int(10) unsigned NOT NULL,
      `turn` int(10) unsigned NOT NULL,
      `difficulty_level` enum('STANDARD','ADVANCED','MASTER','ULTIMATE') NOT NULL,
      `date_last_used` datetime NOT NULL,
      PRIMARY KEY (`id`),
      KEY `game_id` (`game_id`,`player_id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=3814 ;

    --
    -- Table structure for table `player`
    --
    CREATE TABLE IF NOT EXISTS `player` (
      `id` int(11) NOT NULL auto_increment,
      `fb_user_id` char(255) NOT NULL,
      `fb_proxied_email` text NOT NULL,
      `first_name` char(255) NOT NULL,
      `last_name` char(255) NOT NULL,
      `birthdate` date NOT NULL,
      `date_registered` datetime NOT NULL,
      `date_last_logged_in` datetime NOT NULL,
      `status` enum('ACTIVE','SUSPENDED','CLOSED') NOT NULL,
      PRIMARY KEY (`id`),
      KEY `fb_user_id` (`fb_user_id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1646 ;
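
    A rough sketch of one way to shape the query (not from the original post): restrict the result to each player's single best (lowest) winning turn by joining against a per-player MIN() subquery, keeping the original ordering and the 20-row limit. The demand, city and goods tables are assumed to exist as referenced in the posted query, and ties (a player reaching the same best turn in two games) would still need a rule of their own.

    SELECT p.fb_user_id, ps.winning_turn, gp.difficulty_level,
           c.name AS city_name, g.name AS goods_name, d.cost
    FROM player_stats ps
    JOIN game_player gp ON gp.id = ps.game_player_id
    JOIN player p       ON p.id = gp.player_id AND p.status = 'ACTIVE'
    JOIN demand d       ON d.id = ps.highest_demand_id
    JOIN city c         ON c.id = d.city_id
    JOIN goods g        ON g.id = d.goods_id
    JOIN (
        SELECT gp2.player_id, MIN(ps2.winning_turn) AS best_turn
        FROM player_stats ps2
        JOIN game_player gp2 ON gp2.id = ps2.game_player_id
        WHERE ps2.winning_turn > 0
        GROUP BY gp2.player_id
    ) best ON best.player_id = p.id AND best.best_turn = ps.winning_turn
    WHERE ps.winning_turn > 0
    ORDER BY ps.winning_turn ASC, d.cost DESC
    LIMIT 20;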

    Read the article

  • How to speed up a simple method (preferably without changing interfaces or data structures)?

    - by baol
    I have some data structures:

    all_unordered_m is a big vector containing all the strings I need (all different)
    ordered_m is a small vector containing the indexes of a subset of the strings (all different) in the former vector
    position_m maps the indexes of objects from the first vector to their position in the second one

    The string_after(index, reverse) method returns the string referenced by ordered_m after all_unordered_m[index]. ordered_m is considered circular, and is explored in natural or reverse order depending on the second parameter. The code is something like the following:

    struct ordered_subset {
        // [...]

        std::vector<std::string>& all_unordered_m; // size = n >> 1
        std::vector<size_t> ordered_m;             // size << n
        std::tr1::unordered_map<size_t, size_t> position_m;

        const std::string& string_after(size_t index, bool reverse) const
        {
            size_t pos = position_m.find(index)->second;
            if (reverse)
                pos = (pos == 0 ? ordered_m.size() - 1 : pos - 1);
            else
                pos = (pos == ordered_m.size() - 1 ? 0 : pos + 1);
            return all_unordered_m[ordered_m[pos]];
        }
    };

    Given that:

    I do need all of the data structures for other purposes;
    I cannot change them, because I need to access the strings by their id in all_unordered_m and by their index inside the various ordered_m;
    I need to know the position of a string (identified by its position in the first vector) inside the ordered_m vector;
    I cannot change the string_after interface without changing most of the program.

    How can I speed up the string_after method, which is called billions of times and is eating up about 10% of the execution time?

    Read the article

  • SQL query that should return the last two days' records

    - by Aryans
    I have a table "abc" where I store timestamps; it has multiple records, for example:

    1334034000  Date: 10-April-2012
    1334126289  Date: 11-April-2012
    1334291399  Date: 13-April-2012

    I want to build a SQL query where I can find, on the first attempt, the records having the last two days' values, then on the second attempt the next two days, and so on.

    Example:

    Select *, dayofmonth(FROM_UNIXTIME(i_created))
    from notes
    where dayofmonth(FROM_UNIXTIME(i_created)) > dayofmonth(FROM_UNIXTIME(i_created)) - 2
    order by dayofmonth(FROM_UNIXTIME(i_created))

    This query returns all the records date-wise, but we need only the most recent two days' records. Please suggest accordingly. Thanks in advance.
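
    A possible starting point (not from the original question): compare each row against the table's own latest timestamp rather than against dayofmonth of the same row, so the two-day window also survives month boundaries. Table and column names follow the posted query; the exact cut-off rule is an assumption.

    SELECT *
    FROM notes
    WHERE FROM_UNIXTIME(i_created) >= (
        SELECT DATE_SUB(MAX(FROM_UNIXTIME(i_created)), INTERVAL 2 DAY)
        FROM notes
    )
    ORDER BY i_created DESC;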

    Read the article
