Search Results

Search found 3438 results on 138 pages for 'nonlinear optimization'.


  • Efficient code to avoid circular references in c# object model

    - by Kumar
    I have an Excel-like grid where typed values can reference other rows. To check for circular references when a new value is entered, I traverse the tree and build a list of the values referenced so far; if the current value is found in this list, I return an error, thus avoiding a circular reference. This is infrequent enough that extreme performance is not an issue, but... Question - is there a better way? I'm told it's not the most optimal, but no answer was provided, so on to the experts @ SO :)
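
    A minimal sketch of the same check in Python (the question is about C#, but the idea carries over): track visited cells in a set instead of a list, so each membership test is O(1), and walk the reference graph iteratively. The `references` map and cell names below are hypothetical stand-ins for the real object model.

        def would_create_cycle(cell, new_ref, references):
            # references: dict mapping a cell to the cells it already references.
            # Adding cell -> new_ref is circular iff `cell` is reachable from new_ref.
            stack, seen = [new_ref], set()
            while stack:
                current = stack.pop()
                if current == cell:
                    return True
                if current in seen:
                    continue
                seen.add(current)
                stack.extend(references.get(current, ()))
            return False

        refs = {"A1": ["B1"], "B1": ["C1"], "C1": []}
        print(would_create_cycle("A1", "C1", refs))  # False: no path back to A1
        print(would_create_cycle("C1", "A1", refs))  # True: A1 -> B1 -> C1 already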

    Read the article

  • Why does a better isolation level mean better performance in MS SQL Server

    - by Oleg Zhylin
    When measuring performance of my query I came up with a dependency between isolation level and elapsed time that was surprising to me:

        READUNCOMMITTED - 409024
        READCOMMITTED   - 368021
        REPEATABLEREAD  - 358019
        SERIALIZABLE    - 348019

    The left column is the table hint, and the right column is elapsed time in microseconds (sys.dm_exec_query_stats.total_elapsed_time). Why does a stricter isolation level give better performance? This is a development machine and no concurrency whatsoever happens. I would expect READUNCOMMITTED to be the fastest due to less locking overhead.

    Read the article

  • speeding up website load using multiple servers/domains

    - by Mohammad
    The Yahoo! developer guide says: "Deploying your content across multiple, geographically dispersed servers will make your pages load faster from the user's perspective." As an explanation, I read somewhere that browsers will load up to 5 things simultaneously from the same domain. Would a subdomain, for example cdn.example.com, be considered a new domain in the context of the previous statement?

    Read the article

  • C# Sorting Question

    - by betamoo
    I wonder what is the best C# data structure to use to sort efficiently? Is it List or Array or something else? And why does the standard array [] not implement a sort method on it? Thanks

    Read the article

  • Efficient algorithm for creating an ideal distribution of groups into containers?

    - by Inshim
    I have groups of students that need to be allocated into classrooms of a fixed capacity (say, 100 chairs in each). Each group must be allocated to a single classroom, even if it is larger than the capacity (i.e. there can be an overflow, with students standing up). I need an algorithm that makes the allocations with minimal overflow and minimal under-capacity classrooms. A naive algorithm to do this allocation is horrendously slow with ~200 groups, with a distribution of about half of them being under 20% of the classroom size. Any ideas where I can find at least a good starting point for making this algorithm lightning fast? Thanks!
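
    This looks like a variant of bin packing, which is NP-hard in general; a common fast heuristic starting point is first-fit decreasing. A minimal Python sketch (group sizes are illustrative, and this is a heuristic, not an optimal allocator):

        def allocate(group_sizes, capacity=100):
            # First-fit decreasing: place each group, largest first, into the
            # first classroom with enough free chairs; otherwise open a new room.
            # A group bigger than `capacity` never fits an existing room, so it
            # gets its own room (the overflow case).
            rooms = []
            for size in sorted(group_sizes, reverse=True):
                for room in rooms:
                    if sum(room) + size <= capacity:
                        room.append(size)
                        break
                else:
                    rooms.append([size])
            return rooms

        print(allocate([120, 60, 45, 30, 18, 15, 10], capacity=100))

    Sorting is O(n log n) and placement is O(n^2) worst case, trivial for ~200 groups; the classic analysis bounds first-fit decreasing at roughly 11/9 of the optimal number of bins.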

    Read the article

  • How to measure the time HTTP requests spend sitting in the accept-queue?

    - by David Jones
    I am using Apache2 on Ubuntu 9.10, and I am trying to tune my configuration for a web application to reduce latency of responses to HTTP requests. During a moderately heavy load on my small server, there are 24 apache2 processes handling requests. Additional requests get queued. Using "netstat", I see 24 connections are ESTABLISHED and 125 connections are TIME_WAIT. I am trying to figure out if that is considered a reasonable backlog. Most requests get serviced in a fraction of a second, so I am assuming requests move through the accept-queue fairly quickly, probably within 1 or 2 seconds, but I would like to be more certain. Can anyone recommend an easy way to measure the time an HTTP request sits in the accept-queue? The suggestions I have come across so far seem to start the clock after the apache2 worker accepts the connection. I'm trying to quantify the accept-queue delay before that. thanks in advance, David Jones

    Read the article

  • Create date efficiently

    - by Dave Jarvis
    On Pavel's page is the following function:

        CREATE OR REPLACE FUNCTION makedate(year int, dayofyear int) RETURNS date AS $$
            SELECT (date '0001-01-01'
                    + ($1 - 1) * interval '1 year'
                    + ($2 - 1) * interval '1 day')::date
        $$ LANGUAGE sql;

    I have the following code:

        makedate(y.year, 1)

    What is the fastest way in PostgreSQL to create a date for January 1st of a given year? Pavel's function would lead me to believe it is:

        date '0001-01-01' + y.year * interval '1 year' + interval '1 day';

    My thought would be more like:

        to_date( y.year||'-1-1', 'YYYY-MM-DD');

    I am looking for the fastest way using PostgreSQL 8.4. (The query that uses the date function can select between 100,000 and 1 million records, so it needs speed.) Thank you!
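
    A quick way to settle this empirically is to time both expressions over a generated series of years. A rough psycopg2 harness is sketched below; the connection string is a placeholder and the loop counts are arbitrary:

        import time
        import psycopg2

        EXPRESSIONS = [
            "SELECT to_date(y::text || '-1-1', 'YYYY-MM-DD')"
            " FROM generate_series(1900, 2100) AS y",
            "SELECT (date '0001-01-01' + (y - 1) * interval '1 year')::date"
            " FROM generate_series(1900, 2100) AS y",
        ]

        conn = psycopg2.connect("dbname=test")  # placeholder DSN
        cur = conn.cursor()
        for sql in EXPRESSIONS:
            start = time.perf_counter()
            for _ in range(500):        # repeat to amortize round-trip noise
                cur.execute(sql)
                cur.fetchall()
            print("%.3fs  %s" % (time.perf_counter() - start, sql[:45]))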

    Read the article

  • Datastore performance, my code or the datastore latency

    - by fredrik
    For the last month I've had a bit of a problem with a quite basic datastore query. It involves two db.Models, one referring to the other with a db.ReferenceProperty. The problem is that, according to the admin logs, the request takes about 2-4 seconds to complete. I stripped it down to a bare form and a list to display the results. The put works fine, but the get accumulates (in my opinion) way too much CPU time.

        # The get looks like this:
        outputData['items'] = {}
        labelsData = Label.all()
        for label in labelsData:
            labelItem = label.item.name
            if labelItem not in outputData['items']:
                outputData['items'][labelItem] = { 'item' : labelItem, 'labels' : [] }
            outputData['items'][labelItem]['labels'].append(label.text)
        path = os.path.join(os.path.dirname(__file__), 'index.html')
        self.response.out.write(template.render(path, outputData))

        # And the models:
        class Item(db.Model):
            name = db.StringProperty()

        class Label(db.Model):
            text = db.StringProperty()
            lang = db.StringProperty()
            item = db.ReferenceProperty(Item)

    I've tried to write it a number of different ways, e.g. instead of the ReferenceProperty, storing all Label keys in the Item model as a db.ListProperty. My test data is just 10 rows in Item and 40 in Label. So my question: Is it a fool's errand to try to optimize this since the high CPU usage is due to problems with the datastore, or have I just screwed up somewhere in the code? ..fredrik
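
    One likely culprit is that label.item.name dereferences the ReferenceProperty lazily, issuing one datastore get per Label (an N+1 pattern: 40 labels means 40 extra fetches). A common App Engine remedy is to batch-prefetch the referenced entities; a sketch below (the helper name is ours, but get_value_for_datastore and db.get are standard db APIs):

        from google.appengine.ext import db

        def prefetch_refprops(entities, prop):
            # Collect the referenced keys without dereferencing (no datastore
            # hit), fetch them all in one batch get, then patch each entity.
            keys = [prop.get_value_for_datastore(e) for e in entities]
            referenced = db.get(keys)
            for entity, ref in zip(entities, referenced):
                prop.__set__(entity, ref)
            return entities

        labels = prefetch_refprops(Label.all().fetch(1000), Label.item)
        # label.item.name is now served from memory instead of one get per row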

    Read the article

  • How to simplify this logic/code?

    - by Tattat
    I want to write an app that accepts user commands. A command is given in this format: command -parameter. For example, the app can have "Copy", "Paste" and "Delete" commands. I am thinking the program should work like this:

        public static void main(String args[]) {
            if (args[0].equalsIgnoreCase("COPY")) {
                // handle the copy command
            } else if (args[0].equalsIgnoreCase("PASTE")) {
                // handle the paste command
            } /** code skipped **/
        }

    So, it works, but I think it will become more and more complex as I add more commands to the program, and it is also difficult to read. Any ideas to simplify the logic?
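
    The usual refactoring is a dispatch table mapping command names to handlers, so adding a command means adding one entry rather than another else-if. A Python sketch of the idea (in Java the equivalent is a Map<String, Command> of small handler objects); the handler bodies are placeholders:

        import sys

        def handle_copy(params):
            print("copying:", params)

        def handle_paste(params):
            print("pasting:", params)

        HANDLERS = {
            "copy": handle_copy,
            "paste": handle_paste,
        }

        def dispatch(argv):
            command, params = argv[0].lower(), argv[1:]
            handler = HANDLERS.get(command)
            if handler is None:
                raise SystemExit("unknown command: " + command)
            handler(params)

        if __name__ == "__main__":
            dispatch(sys.argv[1:])   # e.g. `python app.py copy -r`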

    Read the article

  • Simple MySQL Query taking 45 seconds (Gets a record and its "latest" child record)

    - by Brian Lacy
    I have a query which gets a customer and the latest transaction for that customer. Currently this query takes over 45 seconds for 1000 records. This is especially problematic because the script itself may need to be executed as frequently as once per minute! I believe using subqueries may be the answer, but I've had trouble constructing it to actually give me the results I need.

        SELECT customer.CustID, customer.leadid, customer.Email,
               customer.FirstName, customer.LastName,
               transaction.*, MAX(transaction.TransDate) AS LastTransDate
        FROM customer
        INNER JOIN transaction ON transaction.CustID = customer.CustID
        WHERE customer.Email = '".$email."'
        GROUP BY customer.CustID
        ORDER BY LastTransDate
        LIMIT 1000

    I really need to get this figured out ASAP. Any help would be greatly appreciated!
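
    This is the classic groupwise-maximum problem; the usual fix is to join against a derived table holding each customer's MAX(TransDate), so the database picks exactly one transaction row per customer instead of grouping over everything. A sketch of the rewrite as a parameterized query string (column names follow the question; the suggested index is our assumption):

        # Hypothetical rewrite; assumes an index on transaction (CustID, TransDate).
        LATEST_TRANSACTION_SQL = """
        SELECT c.CustID, c.leadid, c.Email, c.FirstName, c.LastName, t.*
        FROM customer c
        INNER JOIN (
            SELECT CustID, MAX(TransDate) AS LastTransDate
            FROM transaction
            GROUP BY CustID
        ) latest ON latest.CustID = c.CustID
        INNER JOIN transaction t
            ON t.CustID = latest.CustID AND t.TransDate = latest.LastTransDate
        WHERE c.Email = %s
        LIMIT 1000
        """
        # Passing the email as a parameter also avoids the string
        # concatenation ('".$email."') used in the original query.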

    Read the article

  • SQL query task: is there a better solution?

    - by Sirius Lampochkin
    There is a table of currency rates in MS SQL Server 2005:

        ID | CURR | RATE | DATE
         1 | USD  |   30 | 01.10.2010
         3 | GBP  |   45 | 07.10.2010
         5 | USD  |   31 | 08.10.2010
         7 | GBP  |   46 | 09.10.2010
         9 | USD  |   32 | 12.10.2010
        11 | GBP  |   48 | 03.10.2010

    Rates are updated in real time and there are more than 1 billion rows in the table. I need to write a SQL query which returns the latest rate for each currency. My solution is:

        SELECT c.[id], c.[curr], c.[rate], c.[date]
        FROM [curr_rate] c,
             (SELECT curr, MAX(date) AS rate_date
              FROM [curr_rate]
              GROUP BY curr) t
        WHERE c.date = t.rate_date
          AND c.curr = t.curr
        ORDER BY c.[curr] ASC

    Is it possible to write a query without subqueries and joins with derived tables?
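
    On SQL Server 2005 the usual alternative is ROW_NUMBER() partitioned by currency; it still needs one derived table to filter on the row number, but it removes the self-join and reads the table once. A sketch, shown as a query string (table and column names follow the question):

        LATEST_RATES_SQL = """
        SELECT id, curr, rate, [date]
        FROM (
            SELECT id, curr, rate, [date],
                   ROW_NUMBER() OVER (PARTITION BY curr ORDER BY [date] DESC) AS rn
            FROM curr_rate
        ) ranked
        WHERE rn = 1
        ORDER BY curr
        """
        # With an index on (curr, date DESC) this avoids scanning the
        # billion-row table twice, which the MAX(date) join variant does.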

    Read the article

  • I need some help optimizing my database schema

    - by Steffan
    Here's a layout of my data:

        Heading 1: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading
        Heading 2: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading
        Heading 3: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading
        Heading 4: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading
        Heading 5: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading

    These sub headings need to have a 'Completion Status' boolean value which gets linked to a user Id. Currently, this is how my table looks:

        id | userID | field_1 | field_2 | field_3 | field_4 | etc...
        ---+--------+---------+---------+---------+---------+-------
         1 |      1 |       0 |       0 |       1 |       0 |
         2 |      2 |       1 |       0 |       1 |       1 |

    Each field represents one sub heading. Having this many columns in my table looks awfully inefficient... How can I go about optimizing this? I can't think of any way to neaten it up :/
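
    The standard normalization is one row per (user, sub heading) pair instead of one column per sub heading, so adding a sub heading is an INSERT rather than an ALTER TABLE. A sketch of the schema as DDL strings (all names are illustrative):

        SUB_HEADING_DDL = """
        CREATE TABLE sub_heading (
            id         INT PRIMARY KEY,
            heading_id INT NOT NULL,
            name       VARCHAR(100) NOT NULL
        )
        """

        # One row per user per completed sub heading; a missing row means "not done".
        COMPLETION_DDL = """
        CREATE TABLE completion (
            user_id        INT NOT NULL,
            sub_heading_id INT NOT NULL,
            PRIMARY KEY (user_id, sub_heading_id)
        )
        """

        # Completion status for one user becomes a join instead of 25 columns:
        STATUS_SQL = """
        SELECT s.name
        FROM sub_heading s
        JOIN completion c ON c.sub_heading_id = s.id
        WHERE c.user_id = %s
        """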

    Read the article

  • Representing game states in Tic Tac Toe

    - by dacman
    The goal of the assignment that I'm currently working on for my Data Structures class is to create a version of Quantum Tic Tac Toe with an AI that plays to win. Currently, I'm having a bit of trouble finding the most efficient way to represent states. Overview of the current structure:

        AbstractGame
            Has and manages AbstractPlayers (game.nextPlayer() returns the next player by int ID)
            Has and initializes AbstractBoard at the beginning of the game
            Has a GameTree (complete if called in initialization, incomplete otherwise)
        AbstractBoard
            Has a State, a Dimension, and a parent Game
            Is a mediator between Player and State (translates States from collections of rows to a Point representation)
            Is a StateConsumer
        AbstractPlayer
            Is a StateProducer
            Has a ConcreteEvaluationStrategy to evaluate the current board
        StateTransversalPool
            Precomputes possible transversals of "3-states"
            Stores them in a HashMap, where the Set contains nextStates for a given "3-state"
        State
            Contains 3 Sets -- a Set of X-Moves, O-Moves, and the Board
            Each Integer in a set is a row. These Integer values can be used to
            get the next row-state from the StateTransversalPool

    So, the principle is: each row can be represented by the binary numbers 000-111, where 0 implies an open space and 1 implies a closed space. For an incomplete TTT board:

        From the Set<Integer> board perspective:
            X_X    R1 might be: 101
            OO_    R2 might be: 110
            X_X    R3 might be: 101    (1 is an occupied space, 0 an open space)

        From the Set<Integer> xMoves perspective:
            X_X    R1 might be: 101
            OO_    R2 might be: 000
            X_X    R3 might be: 101    (1 is an X, 0 is not)

        From the Set<Integer> oMoves perspective:
            X_X    R1 might be: 000
            OO_    R2 might be: 110
            X_X    R3 might be: 000    (1 is an O, 0 is not)

    Then we see that x{R1,R2,R3} | o{R1,R2,R3} = board{R1,R2,R3}. The problem is quickly generating next states for the GameTree. If I have player Max (x) with board{R1,R2,R3}, then getting the next row-states for R1, R2, and R3 is simple:

        Set<Integer> R1nextStates = StateTransversalPool.get(R1);

    The problem is that I have to combine each one of those states with R2 and R3. Is there a better data structure besides Set that I could use? Is there a more efficient approach in general? I've also found Point<->State mediation cumbersome. Is there another approach that I could try there? Thanks! Here is the code for my ConcretePlayer class. It might help explain how players produce new states via moves, using the StateProducer (which might need to become StateFactory or StateBuilder).

        public class ConcretePlayerGeneric extends AbstractPlayer {
            @Override
            public BinaryState makeMove() {
                // Given a move and the current state, produce a new state
                Point playerMove = super.strategy.evaluate(this);
                BinaryState currentState = super.getInGame().getBoard().getState();
                return StateProducer.getState(this, playerMove, currentState);
            }
        }

    EDIT: I'm starting with normal TTT and moving to Quantum TTT. Given the framework, it should be as simple as creating several new concrete classes and tweaking some things.
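
    A sketch of the row-state idea in Python, assuming the 3-bit encoding above: precompute, for every 3-bit row, the row-states reachable by claiming one open cell (essentially what the StateTransversalPool does, with a dict standing in for the HashMap), then combine each changed row with the two unchanged rows:

        def next_row_states(row):
            # row is a 3-bit int; bit i set == cell i occupied.
            # Each successor claims exactly one open cell.
            return [row | bit for bit in (1, 2, 4) if not row & bit]

        # Precompute once, like the StateTransversalPool.
        TRANSVERSAL_POOL = {row: next_row_states(row) for row in range(8)}

        def next_boards(board_rows):
            # board_rows: tuple of 3 row-states; yield every board that is
            # one move away, keeping the other two rows unchanged.
            for i, row in enumerate(board_rows):
                for successor in TRANSVERSAL_POOL[row]:
                    yield board_rows[:i] + (successor,) + board_rows[i + 1:]

        print(list(next_boards((0b101, 0b110, 0b101))))  # X_X / OO_ / X_X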

    Read the article

  • Cannot sort a row of size 8130, which is greater than the allowable maximum of 8094

    - by Sri Kumar
    Hello all,

        SELECT DISTINCT tblJobReq.JobReqId, tblJobReq.JobStatusId,
               tblJobClass.JobClassId, tblJobClass.Title,
               tblJobReq.JobClassSubTitle, tblJobAnnouncement.JobClassDesc,
               tblJobAnnouncement.EndDate, tblJobAnnouncement.AgencyMktgVerbage,
               tblJobAnnouncement.SpecInfo, tblJobAnnouncement.Benefits,
               tblSalary.MinRateSal, tblSalary.MaxRateSal,
               tblSalary.MinRateHour, tblSalary.MaxRateHour,
               tblJobClass.StatementEval, tblJobReq.ApprovalDate,
               tblJobReq.RecruiterId, tblJobReq.AgencyId
        FROM ((tblJobReq
        LEFT JOIN tblJobAnnouncement ON tblJobReq.JobReqId = tblJobAnnouncement.JobReqId)
        INNER JOIN tblJobClass ON tblJobReq.JobClassId = tblJobClass.JobClassId)
        LEFT JOIN tblSalary ON tblJobClass.SalaryCode = tblSalary.SalaryCode
        WHERE (tblJobReq.JobClassId IN
               (SELECT JobClassId FROM tblJobClass
                WHERE tblJobClass.Title LIKE '%Family Therapist%'))

    When I try to execute the query it results in the following error:

        Cannot sort a row of size 8130, which is greater than the allowable maximum of 8094

    I checked and didn't find any solution. The only way I found is to truncate (substring()) the tblJobAnnouncement.JobClassDesc column in the query, which has a column size of around 8000. Is there any workaround so that I need not truncate the values? Or can this query be optimized? Any setting in SQL Server 2000?

    Read the article

  • MYSQL OR vs IN performance

    - by Scott
    I am wondering if there is any difference in performance between the following:

        SELECT ... FROM ... WHERE someFIELD IN (1,2,3,4)
        SELECT ... FROM ... WHERE someFIELD BETWEEN 0 AND 5
        SELECT ... FROM ... WHERE someFIELD = 1 OR someFIELD = 2 OR someFIELD = 3 ...

    or will MySQL optimize the SQL in the same way compilers optimize code? EDIT: Changed the AND's to OR's for the reason stated in the comments.

    Read the article

  • MySQL MyISAM table performance... painfully, painfully slow

    - by Salman A
    I've got a table structure that can be summarized as follows:

        pagegroup
            * pagegroupid
            * name
        has 3600 rows

        page
            * pageid
            * pagegroupid
            * data
        references pagegroup; has 10000 rows; can have anything between 1-700
        rows per pagegroup; the data column is of type mediumtext and contains
        100k-200k bytes of data per row

        userdata
            * userdataid
            * pageid
            * column1
            * column2
            * column9
        references page; has about 300,000 rows; can have about 1-50 rows per page

    The above structure is pretty straightforward; the problem is that a join from userdata to pagegroup is terribly, terribly slow even though I have indexed all columns that should be indexed. The time needed to run a query for such a join (userdata inner join page inner join pagegroup) exceeds 3 minutes. This is terribly slow considering the fact that I am not selecting the data column at all. Example of the query that takes too long:

        SELECT userdata.column1, pagegroup.name
        FROM userdata
        INNER JOIN page USING( pageid )
        INNER JOIN pagegroup USING( pagegroupid )

    Please help by explaining why it takes so long and what I can do to make it faster.

    Edit #1: EXPLAIN returns the following:

        id  select_type  table      type    possible_keys        key      key_len  ref                         rows    Extra
        1   SIMPLE       userdata   ALL     pageid                                                             372420
        1   SIMPLE       page       eq_ref  PRIMARY,pagegroupid  PRIMARY  4        topsecret.userdata.pageid   1
        1   SIMPLE       pagegroup  eq_ref  PRIMARY              PRIMARY  4        topsecret.page.pagegroupid  1

    Edit #2:

        SELECT u.field2, p.pageid
        FROM userdata u
        INNER JOIN page p ON u.pageid = p.pageid;
        /* 0.07 sec execution, 6.05 sec fetch */

        id  select_type  table  type    possible_keys  key      key_len  ref                 rows    Extra
        1   SIMPLE       u      ALL     pageid                                               372420
        1   SIMPLE       p      eq_ref  PRIMARY        PRIMARY  4        topsecret.u.pageid  1       Using index

        SELECT p.pageid, g.pagegroupid
        FROM page p
        INNER JOIN pagegroup g ON p.pagegroupid = g.pagegroupid;
        /* 9.37 sec execution, 60.0 sec fetch */

        id  select_type  table  type   possible_keys  key          key_len  ref                      rows  Extra
        1   SIMPLE       g      index  PRIMARY        PRIMARY      4                                 3646  Using index
        1   SIMPLE       p      ref    pagegroupid    pagegroupid  5        topsecret.g.pagegroupid  3     Using where

    Moral of the story: keep medium/long text columns in a separate table if you run into performance problems such as this one.

    Read the article

  • Does the query plan optimizer work well with joined/filtered table-valued functions?

    - by smoothdeveloper
    In SQL Server 2005, I'm using table-valued functions as a convenient way to perform arbitrary aggregation on subset data from a large table (passing a date range or similar parameters). I'm using these inside larger queries as joined computations, and I'm wondering if the query plan optimizer works well with them in every condition, or if I'd be better off unnesting such computations in my larger queries. Does the query plan optimizer unnest table-valued functions if it makes sense? If it doesn't, what do you recommend to avoid the code duplication that would occur by manually unnesting them? If it does, how do you identify that from the execution plan? Code sample:

        create table dbo.customers
        (
            [key] uniqueidentifier
          , constraint pk_dbo_customers primary key ([key])
        )
        go

        /* assume large amount of data */
        create table dbo.point_of_sales
        (
            [key] uniqueidentifier
          , customer_key uniqueidentifier
          , constraint pk_dbo_point_of_sales primary key ([key])
        )
        go

        create table dbo.product_ranges
        (
            [key] uniqueidentifier
          , constraint pk_dbo_product_ranges primary key ([key])
        )
        go

        create table dbo.products
        (
            [key] uniqueidentifier
          , product_range_key uniqueidentifier
          , release_date datetime
          , constraint pk_dbo_products primary key ([key])
          , constraint fk_dbo_products_product_range_key
                foreign key (product_range_key) references dbo.product_ranges ([key])
        )
        go

        /* assume large amount of data */
        create table dbo.sales_history
        (
            [key] uniqueidentifier
          , product_key uniqueidentifier
          , point_of_sale_key uniqueidentifier
          , accounting_date datetime
          , amount money
          , quantity int
          , constraint pk_dbo_sales_history primary key ([key])
          , constraint fk_dbo_sales_history_product_key
                foreign key (product_key) references dbo.products ([key])
          , constraint fk_dbo_sales_history_point_of_sale_key
                foreign key (point_of_sale_key) references dbo.point_of_sales ([key])
        )
        go

        create function dbo.f_sales_history_..snip.._date_range
        (
            @accountingdatelowerbound datetime,
            @accountingdateupperbound datetime
        )
        returns table
        as
        return
        (
            select pos.customer_key
                 , sh.product_key
                 , sum(sh.amount) amount
                 , sum(sh.quantity) quantity
            from dbo.point_of_sales pos
            inner join dbo.sales_history sh
                on sh.point_of_sale_key = pos.[key]
            where sh.accounting_date between
                  @accountingdatelowerbound and @accountingdateupperbound
            group by pos.customer_key
                   , sh.product_key
        )
        go

        -- TODO: insert some data

        -- this is a table containing a selection of product ranges
        declare @selectedproductranges table ([key] uniqueidentifier)
        -- this is a table containing a selection of customers
        declare @selectedcustomers table ([key] uniqueidentifier)
        declare @low datetime
              , @up datetime

        -- TODO: set top query parameters

        select saleshistory.customer_key
             , saleshistory.product_key
             , saleshistory.amount
             , saleshistory.quantity
        from dbo.products p
        inner join @selectedproductranges productrangeselection
            on p.product_range_key = productrangeselection.[key]
        inner join @selectedcustomers customerselection on 1 = 1
        inner join dbo.f_sales_history_..snip.._date_range(@low, @up) saleshistory
            on saleshistory.product_key = p.[key]
           and saleshistory.customer_key = customerselection.[key]

    I hope the sample makes sense. Many thanks for your help!

    Read the article

  • Shouldn't prepared statements be much faster?

    - by silversky
        $s = explode(" ", microtime());
        $s = $s[0] + $s[1];
        $con = mysqli_connect('localhost', 'test', 'pass', 'db') or die('Err');
        for ($i = 0; $i < 1000; $i++) {
            $stmt = $con->prepare("SELECT MAX(id) AS max_id, MIN(id) AS min_id FROM tb");
            $stmt->execute();
            $stmt->bind_result($M, $m);
            $stmt->fetch();   // needed to actually populate $M and $m
            $stmt->free_result();
            $rand = mt_rand($m, $M).'<br/>';
            $res = $con->prepare("SELECT * FROM tb WHERE id >= ? LIMIT 0,1");
            $res->bind_param("s", $rand);
            $res->execute();
            $res->free_result();
        }
        $e = explode(" ", microtime());
        $e = $e[0] + $e[1];
        echo number_format($e - $s, 4, '.', '');

    and:

        $link = mysql_connect("localhost", "test", "pass") or die();
        mysql_select_db("db") or die("Unable to select database" . mysql_error());
        for ($i = 0; $i < 1000; $i++) {
            $range_result = mysql_query("SELECT MAX(`id`) AS max_id, MIN(`id`) AS min_id FROM tb");
            $range_row = mysql_fetch_object($range_result);
            $random = mt_rand($range_row->min_id, $range_row->max_id);
            $result = mysql_query("SELECT * FROM tb WHERE id >= $random LIMIT 0,1");
        }

    Definitely prepared statements are safer, but everywhere it also says that they are much faster. BUT in my test of the above code I got: 2.45 sec for prepared statements, 5.05 sec for the second example. What do you think I'm doing wrong? Should I use the second solution, or should I try to optimize the prepared statements?
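
    Note that the benchmark re-prepares both statements on every loop iteration, which throws away the main benefit of prepared statements (parse once, execute many). A minimal sketch of the prepare-once pattern in Python with mysql-connector-python (connection parameters are placeholders):

        import mysql.connector

        cnx = mysql.connector.connect(user="test", password="pass", database="db")
        cur = cnx.cursor(prepared=True)   # server-side prepared statements

        # Prepared once, outside the loop...
        query = "SELECT * FROM tb WHERE id >= %s LIMIT 1"
        for i in range(1000):
            cur.execute(query, (i,))      # ...executed many times, new params
            cur.fetchall()

        cur.close()
        cnx.close()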

    Read the article

  • Need help writing a query

    - by harigm
    The following image has been uploaded to show what I am trying to do and what I want out of it. Can anyone help me write the query to get the results I want? Please check the following:

        SELECT *
        FROM KPT
        WHERE PROPERTY_ID IN (SELECT PROPERTY_ID
                              FROM khata_header
                              WHERE DIV_ID = 3 AND RECORD_STATUS = 0)
          AND CHALLAN_NO > 42646

    The above is the query I have written, and I get the following result set:

        ID     CHALLAN_NO   PROPERTY_ID   SITE_NO   TOTAL_AMOUNT
        -----  -----------  ------------  --------  -------------
        1242   42757        3103010141    296       595
        1243   63743        3204190257    483       594
        1244   63743        3204190257    483       594
        1334   43395        3217010223    1088      576
        1421   524210       3320050416    (null)    (null)
        1422   524210       3320050416    (null)    (null)
        1560   564355       3320021408    (null)    (null)
        1870   516292       3320040420    (null)    (null)
        1940   68357        3217100104    139       1153
        1941   68357        3217100104    139       1153
        2002   56256        3320100733    511       4430
        2003   56256        3320100733    511       4430
        2004   66488        3217040869    293       3094
        2005   66488        3217040869    293       3094
        2016   64571        3217040374    (null)    (null)
        2036   523122       3320020352    (null)    (null)
        2039   65682        3217040021    273       919

    In my result set the PROPERTY_ID is repeated, since there are multiple entries. How can I know how many are repeated, and which PROPERTY_IDs are repeated more than 2 times? A little background about the tables: PROPERTY_ID is the FK in KPT; PROPERTY_ID is the PK in khata_header. I am writing a subquery to get the result, but I am stuck; I don't know how to get my results. Please help.
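
    Counting repeats is a GROUP BY ... HAVING aggregation over the same filtered set. A sketch of the counting query as a string (the threshold follows the question; change HAVING to >= 2 to list every duplicate):

        DUPLICATE_PROPERTIES_SQL = """
        SELECT PROPERTY_ID, COUNT(*) AS times_repeated
        FROM KPT
        WHERE PROPERTY_ID IN (SELECT PROPERTY_ID
                              FROM khata_header
                              WHERE DIV_ID = 3 AND RECORD_STATUS = 0)
          AND CHALLAN_NO > 42646
        GROUP BY PROPERTY_ID
        HAVING COUNT(*) > 2
        ORDER BY times_repeated DESC
        """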

    Read the article

  • Is it possible to implement bitwise operators using integer arithmetic?

    - by Statement
    Hello World! I am facing a rather peculiar problem. I am working on a compiler for an architecture that doesn't support bitwise operations. However, it handles signed 16-bit integer arithmetic, and I was wondering if it would be possible to implement bitwise operations using only:

        Addition        (c = a + b)
        Subtraction     (c = a - b)
        Division        (c = a / b)
        Multiplication  (c = a * b)
        Modulus         (c = a % b)
        Minimum         (c = min(a, b))
        Maximum         (c = max(a, b))
        Comparisons     (c = (a < b), c = (a == b), c = (a <= b), etc.)
        Jumps           (goto, for, etc.)

    The bitwise operations I want to be able to support are:

        Or               (c = a | b)
        And              (c = a & b)
        Xor              (c = a ^ b)
        Left Shift       (c = a << b)
        Right Shift      (c = a >> b) (all integers are signed, so this is a problem)
        Signed Shift     (c = a >>> b)
        One's Complement (a = ~b) (already found a solution, see below)

    Normally the problem is the other way around: how to achieve arithmetic optimizations using bitwise hacks. However, not in this case. Writable memory is very scarce on this architecture, hence the need for bitwise operations. The bitwise functions themselves should not use a lot of temporary variables. However, constant read-only data and instruction memory is abundant. A side note here also is that jumps and branches are not expensive and all data is readily cached. Jumps cost half the cycles that arithmetic (including load/store) instructions do. In other words, all of the above supported functions cost twice the cycles of a single jump. Some thoughts that might help: I figured out that you can do one's complement (negate bits) with the following code:

        // Bitwise one's complement
        b = ~a;
        // Arithmetic one's complement
        b = -1 - a;

    I also remember the old shift hack when dividing with a power of two, so the bitwise shift can be expressed as:

        // Bitwise left shift
        b = a << 4;
        // Arithmetic left shift
        b = a * 16; // 2^4 = 16

        // Signed right shift
        b = a >>> 4;
        // Arithmetic right shift
        b = a / 16;

    For the rest of the bitwise operations I am slightly clueless. I wish the architects of this architecture would have supplied bit operations. I would also like to know if there is a fast/easy way of computing a power of two (for shift operations) without using a memory data table. A naive solution would be to jump into a field of multiplications:

        b = 1;
        switch (a) {
            case 15: b = b * 2;
            case 14: b = b * 2;
            // ... exploiting fallthrough (instruction memory is magnitudes larger)
            case 2:  b = b * 2;
            case 1:  b = b * 2;
        }

    Or a set-and-jump approach:

        switch (a) {
            case 15: b = 32768; break;
            case 14: b = 16384; break;
            // ... exploiting the fact that a jump is faster than one additional mul,
            //     at the cost of doubling the instruction memory footprint.
            case 2:  b = 4; break;
            case 1:  b = 2; break;
        }
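
    As a sanity check of the bit-by-bit approach, here is a Python sketch of AND/OR/XOR built from only +, -, *, //, %. It processes one bit per loop iteration (16 iterations for a 16-bit word) and assumes non-negative inputs; on the real machine, signed values would first need to be mapped into a non-negative range.

        def bitwise(a, b, op):
            # op combines one bit pair arithmetically:
            #   AND: x * y      OR: x + y - x * y      XOR: (x + y) % 2
            result, power = 0, 1
            for _ in range(16):               # one pass per bit of the word
                result += power * op(a % 2, b % 2)
                a, b, power = a // 2, b // 2, power * 2
            return result

        AND = lambda x, y: x * y
        OR  = lambda x, y: x + y - x * y
        XOR = lambda x, y: (x + y) % 2

        print(bitwise(0b1100, 0b1010, AND) == 0b1000)  # True
        print(bitwise(0b1100, 0b1010, OR)  == 0b1110)  # True
        print(bitwise(0b1100, 0b1010, XOR) == 0b0110)  # True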

    Read the article

  • Permutations of Varying Size

    - by waiwai933
    I'm trying to write a function in PHP that gets all permutations of all possible sizes. I think an example would be the best way to start off:

        $my_array = array(1,1,2,3);

    Possible permutations of varying size:

        1
        1   // * See Note
        2
        3
        1,1
        1,2
        1,3
        // And so forth, for all the sets of size 2
        1,1,2
        1,1,3
        1,2,1
        // And so forth, for all the sets of size 3
        1,1,2,3
        1,1,3,2
        // And so forth, for all the sets of size 4

    Note: I don't care if there's a duplicate or not. For the purposes of this example, all further duplicates have been omitted. What I have so far in PHP:

        function getPermutations($my_array) {
            $permutation_length = 1;
            $keep_going = true;
            while ($keep_going) {
                while ($there_are_still_permutations_with_this_length) {
                    // Generate the next permutation and return it into an array.
                    // Of course, the actual important part of the code is what
                    // I'm having trouble with.
                }
                $permutation_length++;
                if ($permutation_length > count($my_array)) {
                    $keep_going = false;
                } else {
                    $keep_going = true;
                }
            }
            return $return_array;
        }

    The closest thing I can think of is shuffling the array, picking the first n elements, seeing if the result is already in the results array, adding it in if it's not, and then stopping when there are mathematically no more possible permutations for that length. But it's ugly and resource-inefficient. Any pseudocode algorithms would be greatly appreciated. Also, for super-duper (worthless) bonus points, is there a way to get just one permutation with the function, but make it so that it doesn't have to recalculate all previous permutations to get the next? For example, I pass it a parameter 3, which means it's already done 3 permutations, and it just generates number 4 without redoing the previous 3? (Passing the parameter is not necessary; it could keep track in a global or static.) The reason I ask this is because as the array grows, so does the number of possible combinations. Suffice it to say that one small data set with only a dozen elements quickly grows into trillions of possible combinations, and I don't want to task PHP with holding trillions of permutations in its memory at once.
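
    For comparison, Python's standard library does exactly this lazily: itertools.permutations(values, r) yields r-length permutations one at a time without materializing the rest, and islice can step to the n-th one without storing the earlier ones (though it still iterates past them). A sketch:

        from itertools import islice, permutations

        def varying_size_permutations(values):
            # Yield every permutation of every size 1..len(values), lazily.
            # Duplicate values produce duplicate tuples, as in the question.
            for size in range(1, len(values) + 1):
                for perm in permutations(values, size):
                    yield perm

        gen = varying_size_permutations([1, 1, 2, 3])
        print(next(islice(gen, 3, None)))   # the 4th permutation, nothing stored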

    Read the article

  • Optimize GROUP BY & ORDER BY query

    - by Jan Hancic
    I have a web page where users upload & watch videos. Last week I asked what the best way is to track video views so that I can display the most viewed videos this week (videos from all dates). Now I need some help optimizing a query with which I get the videos from the database. The relevant tables are these:

        video (~239,371 rows)
            VID (int), UID (int), title (varchar), status (enum), type (varchar),
            is_duplicate (enum), is_adult (enum), channel_id (tinyint)

        signup (~115,440 rows)
            UID (int), username (varchar)

        videos_views (~359,202 rows after 6 days of collecting data, so this
            table will grow rapidly)
            videos_id (int), views_date (date), num_of_views (int)

    The table video holds the videos, signup holds users, and videos_views holds data about video views (each video can have one row per day in that table). I have this query that does the trick, but takes ~10s to execute, and I imagine this will only get worse over time as the videos_views table grows in size.

        SELECT v.VID, v.title, v.vkey, v.duration, v.addtime, v.UID,
               v.viewnumber, v.com_num, v.rate, v.THB, s.username,
               SUM(vvt.num_of_views) AS tmp_num
        FROM video v
        LEFT JOIN videos_views vvt ON v.VID = vvt.videos_id
        LEFT JOIN signup s ON v.UID = s.UID
        WHERE v.status = 'Converted'
          AND v.type = 'public'
          AND v.is_duplicate = '0'
          AND v.is_adult = '0'
          AND v.channel_id <> 10
          AND vvt.views_date >= '2001-05-11'
        GROUP BY vvt.videos_id
        ORDER BY tmp_num DESC
        LIMIT 8

    And here is a screenshot of the EXPLAIN result. So, how can I optimize this?
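
    Two things worth trying, sketched below as statements (the index name, column order, and oversampling limit are our assumptions): the WHERE condition on vvt.views_date already turns the LEFT JOIN into an inner join, so it can be written as one explicitly, and a composite index lets MySQL resolve the date filter and the per-video aggregation from the index alone.

        # Hypothetical covering index for the date-filtered aggregation.
        CREATE_INDEX_SQL = """
        CREATE INDEX idx_views_date_video
            ON videos_views (views_date, videos_id, num_of_views)
        """

        # Alternative shape: aggregate the views first, then join the winners.
        TOP_VIDEOS_SQL = """
        SELECT v.VID, v.title, s.username, top.tmp_num
        FROM (SELECT videos_id, SUM(num_of_views) AS tmp_num
              FROM videos_views
              WHERE views_date >= %s
              GROUP BY videos_id
              ORDER BY tmp_num DESC
              LIMIT 50) top          -- oversample before filtering; tune to taste
        INNER JOIN video v ON v.VID = top.videos_id
        LEFT JOIN signup s ON v.UID = s.UID
        WHERE v.status = 'Converted' AND v.type = 'public'
          AND v.is_duplicate = '0' AND v.is_adult = '0' AND v.channel_id <> 10
        ORDER BY top.tmp_num DESC
        LIMIT 8
        """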

    Read the article

  • How fast can you make linear search?

    - by Mark Probst
    I'm looking to optimize this linear search:

        static int linear(const int *arr, int n, int key)
        {
            int i = 0;
            while (i < n) {
                if (arr[i] >= key)
                    break;
                ++i;
            }
            return i;
        }

    The array is sorted and the function is supposed to return the index of the first element that is greater than or equal to the key. The array is not large (below 200 elements) and will be prepared once for a large number of searches. Array elements after the n-th can, if necessary, be initialized to something appropriate if that speeds up the search. No, binary search is not allowed, only linear search.

    Read the article

  • Why does adding Crossover to my Genetic Algorithm gives me worse results?

    - by MahlerFive
    I have implemented a genetic algorithm to solve the Traveling Salesman Problem (TSP). When I use only mutation, I find better solutions than when I add in crossover. I know that normal crossover methods do not work for TSP, so I implemented both the Ordered Crossover and the PMX Crossover methods, and both suffer from bad results. Here are the other parameters I'm using:

        Mutation: Single Swap Mutation or Inverted Subsequence Mutation (as
            described by Tiendil here), with mutation rates tested between 1% and 25%
        Selection: Roulette Wheel Selection
        Fitness function: 1 / distance of tour
        Population size: tested 100, 200, 500; I also run the GA 5 times so
            that I have a variety of starting populations
        Stop condition: 2500 generations

    With the same dataset of 26 points, I usually get results of about 500-600 distance using purely mutation with high mutation rates. When adding crossover, my results are usually in the 800 distance range. The other confusing thing is that I have also implemented a very simple hill-climbing algorithm to solve the problem, and when I run that 1000 times (faster than running the GA 5 times) I get results around 410-450 distance, and I would expect to get better results using a GA. Any ideas as to why my GA performs worse when I add crossover? And why is it performing much worse than a simple hill-climbing algorithm, which should get stuck on local maxima as it has no way of exploring once it finds a local max?

    Read the article
