Search Results

Search found 17924 results on 717 pages for 'z order'.

Page 182/717

  • My SQL query is only returning the children; I need the parent as well

    - by sico87
    My sql query is only returning the children of the parent I need it to return the parent as well, public function getNav($cat,$subcat){ //gets all sub categories for a specific category if(!$this->checkValue($cat)) return false; //checks data $query = false; if($cat=='NULL'){ $sql = "SELECT itemID, title, parent, url, description, image FROM p_cat WHERE deleted = 0 AND parent is NULL ORDER BY position;"; $query = $this->db->query($sql) or die($this->db->error); }else{ //die($cat); $sql = "SET @parent = (SELECT c.itemID FROM p_cat c WHERE url = '".$this->sql($cat)."' AND deleted = 0); SELECT c1.itemID, c1.title, c1.parent, c1.url, c1.description, c1.image, (SELECT c2.url FROM p_cat c2 WHERE c2.itemID = c1.parent LIMIT 1) as parentUrl FROM p_cat c1 WHERE c1.deleted = 0 AND c1.parent = @parent ORDER BY c1.position;"; $query = $this->db->multi_query($sql) or die($this->db->error); $this->db->store_result(); $this->db->next_result(); $query = $this->db->store_result(); } return $query; }
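
    A possible simplification, not from the original post: since @parent already holds the parent's itemID, the parent row can be pulled into the same result set by matching either the row itself or its children, which also avoids the multi_query/next_result shuffle. A sketch, using the table and column names from the question:

        SET @parent = (SELECT c.itemID FROM p_cat c
                       WHERE c.url = '...' AND c.deleted = 0);

        SELECT c1.itemID, c1.title, c1.parent, c1.url, c1.description, c1.image,
               (SELECT c2.url FROM p_cat c2 WHERE c2.itemID = c1.parent LIMIT 1) AS parentUrl
        FROM p_cat c1
        WHERE c1.deleted = 0
          AND (c1.itemID = @parent OR c1.parent = @parent)   -- the parent row plus its children
        ORDER BY c1.position;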

    Read the article

  • Lucene setBoost doesn't work

    - by Keven
    Hi all, OUr team just upgrade lucene from 2.3 to 3.0 and we are confused about the setboost and getboost of document. What we want is just set a boost for each document when add them into index, then when search it the documents in the response should have different order according to the boost I set. But it seems the order is not changed at all, even the boost of each document in the search response is still 1.0. Could some one give me some hit? Following is our code: String[] a = new String[] { "schindler", "spielberg", "shawshank", "solace", "sorcerer", "stone", "soap", "salesman", "save" }; List strings = Arrays.asList(a); AutoCompleteIndex index = new Index(); IndexWriter writer = new IndexWriter(index.getDirectory(), AnalyzerFactory.createAnalyzer("en_US"), true, MaxFieldLength.LIMITED); float i = 1f; for (String string : strings) { Document doc = new Document(); Field f = new Field(AutoCompleteIndexFactory.QUERYTEXTFIELD, string, Field.Store.YES, Field.Index.NOT_ANALYZED); doc.setBoost(i); doc.add(f); writer.addDocument(doc); i += 2f; } writer.close(); IndexReader reader2 = IndexReader.open(index.getDirectory()); for (int j = 0; j < reader2.maxDoc(); j++) { if (reader2.isDeleted(j)) { continue; } Document doc = reader2.document(j); Field f = doc.getField(AutoCompleteIndexFactory.QUERYTEXTFIELD); System.out.println(f.stringValue() + ":" + f.getBoost() + ", docBoost:" + doc.getBoost()); doc.setBoost(j); }
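
    One likely explanation, offered as a guess rather than a confirmed diagnosis: an index-time document boost is folded into the field norm when the document is written and is not stored, so getBoost() on documents read back from the index always reports 1.0, and iterating reader.document(j) simply walks insertion order. The boost only shows up as a scoring factor when you run a query. A minimal Lucene 3.0 sketch of that, reusing the field name from the question (the prefix term "s" is only an example):

        IndexSearcher searcher = new IndexSearcher(index.getDirectory());
        PrefixQuery q = new PrefixQuery(new Term(AutoCompleteIndexFactory.QUERYTEXTFIELD, "s"));
        // score the matches instead of the default constant-score rewrite,
        // so the per-document boosts (stored in the norms) influence the ranking
        q.setRewriteMethod(MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE);
        TopDocs top = searcher.search(q, 10);
        for (ScoreDoc sd : top.scoreDocs) {
            Document d = searcher.doc(sd.doc);
            System.out.println(d.get(AutoCompleteIndexFactory.QUERYTEXTFIELD) + " score=" + sd.score);
        }
        searcher.close();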

    Read the article

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2 hour period, 5-10 million inserts to a 34GB table within a single Master/Slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two hour period. So, I have a couple of general questions. 1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently and approximating the order of insertion with something like a Datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem. So, I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID. I don't really see the what that achieves though, that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes? Thanks, Ben
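
    One detail that may make batching workable without giving up the auto-increment key (stated as my understanding of documented MySQL behaviour, not something from the post): for a single multi-row INSERT, LAST_INSERT_ID() returns the id generated for the first row of that statement, and with the default innodb_autoinc_lock_mode the ids within that statement are consecutive, so the remaining ids can be derived. A sketch with a hypothetical table standing in for the real one:

        INSERT INTO events (user_id, payload) VALUES
          (1, 'a'), (2, 'b'), (3, 'c');     -- one round trip for the whole batch

        SELECT LAST_INSERT_ID();            -- id of the FIRST row of the batch
        SELECT ROW_COUNT();                 -- 3; the ids are LAST_INSERT_ID() .. LAST_INSERT_ID() + 2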

    Read the article

  • SVN 409 conflict on commits and updates

    - by bhefny
    We have been using SVN for the past year, and when we migrated to an online server we started getting this error: Commit: Commit failed (details follow): File or directory 'x.php' is out of date; try updating resource out of date; try updating CHECKOUT of '/!svn/ver/491/x.php': 409 Conflict (http://svn.example.com). We are currently using SmartSVN 6.5 and we have also tested with RapidSVN & Syncro (but we can't use Tortoise as we have a lot of Ubuntu users). At the beginning I thought "How do you fix an SVN 409 Conflict Error" would help, but it didn't; we are still facing the same error and it's even more absurd now. The main problem is that after you get the error, you can't shake it off. Updating doesn't solve it, reverting doesn't solve it. You are just stuck with the error. The only thing that could work is removing the file from SVN and adding your version, but that would defeat the reason we are using SVN in the first place. This is our Apache config (and yes, autoversioning is ON):
        <Location />
            DAV svn
            SVNPath /home/example/svn
            SVNAutoversioning on
            AuthType Basic
            AuthName "Access Restricted"
            AuthUserFile /home/example/svn-auth-file
            Require valid-user
        </Location>
        <Directory />
            <Files ~ "^\.ht">
                Order allow,deny
                Allow from all
                Satisfy All
            </Files>
            <Files ~ "^error_log">
                Order allow,deny
                Allow from all
                Satisfy All
            </Files>
        </Directory>
    And here are some observations: we don't receive conflicts anymore, we just get this 409 conflict. You can somehow avoid the error if you always update before committing. When committing a modified file plus a newly added file, you get the error, as if the added file incremented the version by one and then you are committing another file with an older version. Please advise, we are about to go insane.
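
    One configuration detail worth isolating, offered as a guess rather than a confirmed fix: with SVNAutoversioning on, any plain WebDAV PUT through that URL is silently committed as a new revision, so a non-Subversion client writing to the same location bumps files behind everyone's working copies, which can surface as exactly this kind of "out of date" 409 on the next commit. A minimal thing to try is running for a while with that directive removed (the default is off):

        <Location />
            DAV svn
            SVNPath /home/example/svn
            # SVNAutoversioning removed while testing; the default is "off"
            AuthType Basic
            AuthName "Access Restricted"
            AuthUserFile /home/example/svn-auth-file
            Require valid-user
        </Location>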

    Read the article

  • How to give points for each index of a list

    - by Eric Jung
    def voting_borda(rank_ballots): '''(list of list of str) -> tuple of (str, list of int) The parameter is a list of 4-element lists that represent rank ballots for a single riding. The Borda Count is determined by assigning points according to ranking. A party gets 3 points for each first-choice ranking, 2 points for each second-choice ranking and 1 point for each third-choice ranking. (No points are awarded for being ranked fourth.) For example, the rank ballot shown above would contribute 3 points to the Liberal count, 2 points to the Green count and 1 point to the CPC count. The party that receives the most points wins the seat. Return a tuple where the first element is the name of the winning party according to Borda Count and the second element is a four-element list that contains the total number of points for each party. The order of the list elements corresponds to the order of the parties in PARTY_INDICES.''' #>>> voting_borda([['GREEN','NDP', 'LIBERAL', 'CPC'], ['GREEN','CPC','LIBERAL','NDP'], ['LIBERAL','NDP', 'CPC', 'GREEN']]) #('GREEN',[4, 6, 5, 3]) list_of_party_order = [] for sublist in rank_ballots: for party in sublist[0]: if party == 'GREEN': GREEN_COUNT += 3 elif party == 'NDP': NDP_COUNT += 3 elif party == 'LIBERAL': LIBERAL_COUNT += 3 elif party == 'CPC': CPC_COUNT += 3 for party in sublist[1]: if party == 'GREEN': GREEN_COUNT += 2 elif party == 'NDP': NDP_COUNT += 2 elif party == 'LIBERAL': LIBERAL_COUNT += 2 elif party == 'CPC': CPC_COUNT += 2 for party in sublist[2]: if party == 'GREEN': GREEN_COUNT += 1 elif party == 'NDP': NDP_COUNT += 1 elif party == 'LIBERAL': LIBERAL_COUNT += 1 elif party == 'CPC': CPC_COUNT += 1 I don't know how I would give points for each indices of the list MORE SIMPLY. Can someone please help me? Without being too complicated. Thank you!
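
    A sketch of the simpler pattern the question is asking for, using enumerate so the rank position itself indexes into a points list. PARTY_INDICES is defined elsewhere in the assignment, so the ordering below is only a placeholder (chosen to be consistent with the docstring example):

        PARTY_INDICES = {'NDP': 0, 'GREEN': 1, 'LIBERAL': 2, 'CPC': 3}  # placeholder ordering
        POINTS = [3, 2, 1, 0]  # first, second, third, fourth choice

        def voting_borda(rank_ballots):
            totals = [0] * len(PARTY_INDICES)
            for ballot in rank_ballots:
                for rank, party in enumerate(ballot):
                    totals[PARTY_INDICES[party]] += POINTS[rank]
            # the winner is the party whose slot holds the largest total
            winner = max(PARTY_INDICES, key=lambda party: totals[PARTY_INDICES[party]])
            return winner, totals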

    Read the article

  • Which database design and relationships should I use?

    - by mimo-hamad
    My project has me confused: I can't work out the required database design and the relationships in it, so I'd appreciate some help. This is what is required:
    1) Model the data stored in the database (identify the entities, roles, relationships, constraints, etc.)
    2) Write the Oracle commands to create the database, find appropriate data, and populate the database
    3) Write five different queries on your database, using the SELECT/FROM/WHERE construct provided in SQL. Your five queries should illustrate several different aspects of database querying, such as:
    a. Queries over more than one relation (by listing more than one relation in the FROM clause)
    b. Queries involving aggregate functions, such as SUM, COUNT, and AVG
    c. Queries involving complicated selects and joins
    d. Queries involving GROUP BY, HAVING or other similar functions
    e. Queries that require the use of the DISTINCT keyword
    And this is the scenario the requirements above apply to:
    5) It is desired to develop an Internet membership club to buy products at special prices online. To join, new members must be referred by another existing member of the club. The system will keep the following information for each member: the member ID, referring member, birth date, member name, address, phone, mobile, credit card type, number and expiration date. The items are always shipped to the member's address noted in the membership application. The shipping fees will differ for each order. For each item to be requested, the member will select an item from a long list of possible items. For each item in the database, we store an item ID, an item name, description, and list price. The list price will be different from the actual sale price. The available quantity and the back-ordered quantity (the back-ordered quantity is the quantity on order by the club from its suppliers) are also noted.
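
    Not part of the assignment text, just a starting-point sketch of how the entities in point 5 might translate into Oracle tables (names and sizes are my own guesses); the self-referencing foreign key captures the "referred by an existing member" rule:

        CREATE TABLE member (
            member_id       NUMBER PRIMARY KEY,
            referred_by     NUMBER REFERENCES member(member_id),  -- the existing member who referred this one
            name            VARCHAR2(100),
            birth_date      DATE,
            address         VARCHAR2(200),
            phone           VARCHAR2(20),
            mobile          VARCHAR2(20),
            cc_type         VARCHAR2(20),
            cc_number       VARCHAR2(20),
            cc_expiration   DATE
        );

        CREATE TABLE item (
            item_id         NUMBER PRIMARY KEY,
            item_name       VARCHAR2(100),
            description     VARCHAR2(500),
            list_price      NUMBER(10,2),
            sale_price      NUMBER(10,2),
            qty_available   NUMBER,
            qty_backordered NUMBER
        );

        CREATE TABLE member_order (
            order_id        NUMBER PRIMARY KEY,
            member_id       NUMBER NOT NULL REFERENCES member(member_id),
            item_id         NUMBER NOT NULL REFERENCES item(item_id),
            quantity        NUMBER,
            shipping_fee    NUMBER(10,2),   -- differs per order
            order_date      DATE
        );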

    Read the article

  • Iterating through an event log entry collection, IndexOutOfBoundsException

    - by fjdumont
    Hello, in a service application I am iterating through the Windows application event log to parse events in order to react depending on the entry message. In the case that the event log is full (Windows usually makes sure there is enough space by deleting old entries - this is configurable in the eventvwr.exe settings), the service always runs into an IndexOutOfBoundsException while iterating through the EventLog.Entries collection. No matter how I iterate (for-loop, using the collection's enumerator, copying the collection into an array, ...), I can't seem to get rid of this 'bug'. Currently, I ensure that the log is not full in order to keep the service running by regularly deleting the last few items by parsing the event log file and deleting the last few nodes (don't beat me up, I couldn't find a better alternative...). How can I iterate through the collection without trying to access already deleted entries? Is there perhaps a more elegant method? I am only trying to access the logs written during the last x seconds (even LINQ failed to select those when the log is full - same exception), could this help? Thanks for any advice and hints, Frank. Edit: I forgot to mention that my assumption is the loops are accessing entries which are being deleted during iteration by Windows. Basically that is why I tried to clone the collection. Is there perhaps a way to lock the collection for a small amount of time for just my application?
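
    A possible alternative rather than a fix for the collection itself (my suggestion, not from the post): on Vista/Server 2008 and later, the System.Diagnostics.Eventing.Reader API lets the service query only the events written in the last x seconds, so it never has to walk EventLog.Entries at all. A C# sketch, with the 60-second window as an example value:

        using System;
        using System.Diagnostics.Eventing.Reader;

        class LogTail
        {
            static void Main()
            {
                // events from the Application log written in the last 60 000 ms
                string xpath = "*[System[TimeCreated[timediff(@SystemTime) <= 60000]]]";
                var query = new EventLogQuery("Application", PathType.LogName, xpath);
                using (var reader = new EventLogReader(query))
                {
                    for (EventRecord rec = reader.ReadEvent(); rec != null; rec = reader.ReadEvent())
                    {
                        using (rec)
                        {
                            Console.WriteLine("{0}: {1}", rec.TimeCreated, rec.FormatDescription());
                        }
                    }
                }
            }
        }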

    Read the article

  • Join Where Rows Don't Exist or Where Criteria Matches...?

    - by Greg
    I'm trying to write a query to tell me which orders have valid promocodes. Promocodes are only valid between certain dates and optionally certain packages. I'm having trouble even explaining how this works (see pseudo-ish code below), but basically if there are packages associated with a promocode then the order has to have one of those packages and be within a valid date range; otherwise it just has to be in a valid date range. The whole "if PromoPackage rows exist" thing is really throwing me off, and I feel like I should be able to do this without a whole bunch of UNIONs. (I'm not even sure if that would make it easier at this point...) Anybody have any ideas for the query?
      if `OrderPromoCode` = `PromoCode` then
        if `OrderTimestamp` is between `PromoStartTimestamp` and `PromoEndTimestamp` then
          if `PromoCode` has packages associated with it
            //yes: then if `PackageID` is one of the specified packages
              //yes: code is valid
              //no: invalid
            //no: code is valid
    Order:
      OrderID* | OrderTimestamp | PackageID | OrderPromoCode
      1        | 1/2/11         | 1         | ABC
      2        | 1/3/11         | 2         | ABC
      3        | 3/2/11         | 2         | DEF
      4        | 4/2/11         | 3         | GHI
    Promo:
      PromoCode* | PromoStartTimestamp* | PromoEndTimestamp*
      ABC        | 1/1/11               | 2/1/11
      ABC        | 3/1/11               | 4/1/11
      DEF        | 1/1/11               | 1/11/13
      GHI        | 1/1/11               | 1/11/13
    PromoPackage:
      PromoCode* | PromoStartTimestamp* | PromoEndTimestamp* | PackageID*
      ABC        | 1/1/11               | 2/1/11             | 1
      ABC        | 1/1/11               | 2/1/11             | 3
      GHI        | 1/1/11               | 1/11/13            | 1
    Desired Result:
      OrderID | IsPromoCodeValid
      1       | 1
      2       | 0
      3       | 1
      4       | 0
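
    A sketch of one way to express that branching in a single query, based only on the tables shown above: an order's code is valid when some Promo row covers the order date and either that promo window has no PromoPackage rows at all, or one of them names the order's package. (Order is quoted because it is a reserved word; adjust the quoting to your RDBMS.)

        SELECT o.OrderID,
               CASE WHEN EXISTS (
                   SELECT 1
                   FROM Promo p
                   WHERE p.PromoCode = o.OrderPromoCode
                     AND o.OrderTimestamp BETWEEN p.PromoStartTimestamp AND p.PromoEndTimestamp
                     AND ( NOT EXISTS (SELECT 1 FROM PromoPackage pp      -- promo has no package restriction
                                       WHERE pp.PromoCode = p.PromoCode
                                         AND pp.PromoStartTimestamp = p.PromoStartTimestamp
                                         AND pp.PromoEndTimestamp   = p.PromoEndTimestamp)
                           OR EXISTS (SELECT 1 FROM PromoPackage pp       -- or it allows this order's package
                                      WHERE pp.PromoCode = p.PromoCode
                                        AND pp.PromoStartTimestamp = p.PromoStartTimestamp
                                        AND pp.PromoEndTimestamp   = p.PromoEndTimestamp
                                        AND pp.PackageID = o.PackageID) )
               ) THEN 1 ELSE 0 END AS IsPromoCodeValid
        FROM `Order` o;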

    Read the article

  • How can I pass a params array of Expression&lt;Func&lt;T, object&gt;&gt; to a method?

    - by Pure.Krome
    Hi folks, I have the following two methods:
        public static IQueryable<T> IncludeAssociations<T>(this IQueryable<T> source, params string[] associations) { ... }
        public static IQueryable<T> IncludeAssociations<T>(this IQueryable<T> source, params Expression<Func<T, object>>[] expressions) { ... }
    Now, when I try to pass in a params array of Expression<Func<T, object>>, it always calls the first method (the string[] one, and of course that value is NULL). E.g.:
        Expression<Func<Order, object>> x1 = x => x.User;
        Expression<Func<Order, object>> x2 = x => x.User.Passport;
        var foo = _orderRepo
            .Find()
            .IncludeAssociations(new {x1, x2} )
            .ToList();
    Can anyone see what I've done wrong? Why does it think my params are a string? Can I force the type of the two variables?
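
    For what it's worth (my reading of the snippet as posted, not a confirmed answer): new {x1, x2} creates an anonymous type rather than an array, so overload resolution never sees two Expression arguments. Passing the expressions directly to the params parameter, or building an explicit array, should bind to the second overload; a sketch:

        // either let params collect them...
        var foo = _orderRepo.Find()
                            .IncludeAssociations(x1, x2)
                            .ToList();

        // ...or hand over an explicit array
        var bar = _orderRepo.Find()
                            .IncludeAssociations(new Expression<Func<Order, object>>[] { x1, x2 })
                            .ToList();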

    Read the article

  • Silverlight - C# and LINQ - Ordering By Multiple Fields

    - by user70192
    Hello, I have an ObservableCollection of Task objects. Each Task has the following properties: AssignedTo, Category, Title, AssignedDate. I'm giving the user an interface to select which of these fields they want to sort by. In some cases, a user may want to sort by up to three of these properties. How do I dynamically build a LINQ statement that will allow me to sort by the selected fields in either ascending or descending order? Currently, I'm trying the following, but it appears to only sort by the last sort applied:
        var sortedTasks = from task in tasks select task;
        if (userWantsToSortByAssignedTo == true) {
            if (sortByAssignedToDescending == true)
                sortedTasks = sortedTasks.OrderByDescending(t => t.AssignedTo);
            else
                sortedTasks = sortedTasks.OrderBy(t => t.AssignedTo);
        }
        if (userWantsToSortByCategory == true) {
            if (sortByCategoryDescending == true)
                sortedTasks = sortedTasks.OrderByDescending(t => t.Category);
            else
                sortedTasks = sortedTasks.OrderBy(t => t.Category);
        }
    Is there an elegant way to dynamically append order clauses to a LINQ statement?
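
    A possible pattern, not from the original post: only the first selected key should use OrderBy/OrderByDescending; every later key should be appended with ThenBy/ThenByDescending on the IOrderedEnumerable, otherwise each OrderBy restarts the sort and only the last one "wins". A sketch using the Task properties from the question:

        IOrderedEnumerable<Task> sorted = null;

        if (userWantsToSortByAssignedTo)
        {
            sorted = sortByAssignedToDescending
                ? tasks.OrderByDescending(t => t.AssignedTo)
                : tasks.OrderBy(t => t.AssignedTo);
        }

        if (userWantsToSortByCategory)
        {
            if (sorted == null)
                sorted = sortByCategoryDescending
                    ? tasks.OrderByDescending(t => t.Category)
                    : tasks.OrderBy(t => t.Category);
            else
                sorted = sortByCategoryDescending
                    ? sorted.ThenByDescending(t => t.Category)
                    : sorted.ThenBy(t => t.Category);
        }

        // ...same pattern for Title and AssignedDate; fall back to the unsorted list if nothing was picked
        IEnumerable<Task> result = (IEnumerable<Task>)sorted ?? tasks;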

    Read the article

  • Matlab - Trace contour line between two different points

    - by Graham
    Hi, I have a set of points represented as a 2 row by n column matrix. These points make up a connected boundary or edge. I require a function that traces this contour from a start point P1 and stop at an end point P2. It also needs to be able trace the contour in a clockwise or anti-clockwise direction. I was wondering if this can be achieved by using some of matlabs functions. I have tried to write my own function but this was riddled with bugs and I have also tried using bwtraceboundary and indexing however this has problematic results as the points within the matrix are not in the order that create the contour. Thank you in advance for any help. Btw, I have included a link to a plot of the set of points. It is half the outline of a hand. The function would ideally trace the contour from ether the red star to the green triangle. Returning the points in order of traversal.
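
    A rough nearest-neighbour chaining sketch (my own attempt, assuming the boundary points are dense enough that the closest unvisited point is always the next one along the contour, and that the end point is exactly one of the points in the matrix): starting from the point closest to P1 it repeatedly hops to the nearest unvisited point until it reaches P2; swapping P1 and P2 gives the opposite traversal direction.

        function ordered = tracecontour(pts, p1, p2)
        % pts is 2-by-n; p1 and p2 are 2-by-1 start and end points
        n = size(pts, 2);
        visited = false(1, n);
        [dmin, cur] = min(sum((pts - repmat(p1, 1, n)).^2, 1));   % start at the point closest to P1
        ordered = zeros(2, 0);
        while true
            visited(cur) = true;
            ordered(:, end + 1) = pts(:, cur);                     %#ok<AGROW>
            if all(pts(:, cur) == p2)                              % reached the end point
                break;
            end
            d = sum((pts - repmat(pts(:, cur), 1, n)).^2, 1);
            d(visited) = inf;
            [dmin, cur] = min(d);
            if isinf(dmin)                                         % ran out of unvisited points
                break;
            end
        end
        end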

    Read the article

  • wordpress generating slow mysql queries - is it index problem?

    - by tash
    Hello Stack Overflow I've got very slow Mysql queries coming up from my wordpress site. It's making everything slow and I think this is eating up CPU usage. I've pasted the Explain results for the two most frequently problematic queries below. This is a typical result - although very occasionally teh queries do seem to be performed at a more normal speed. I have the usual wordpress indexes on the database tables. You will see that one of the queries is generated from wordpress core code, and not from anything specific - like the theme - for my site. I have a vague feeling that the database is not always using the indexes/is not using them properly... Is this right? Does anyone know how to fix it? Or is it a different problem entirely? Many thanks in advance for any help anyone can offer - it is hugely appreciated Query: [wp-blog-header.php(14): wp()] SELECT SQL_CALC_FOUND_ROWS wp_posts.* FROM wp_posts WHERE 1=1 AND wp_posts.post_type = 'post' AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private') ORDER BY wp_posts.post_date DESC LIMIT 0, 6 id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE wp_posts ref type_status_date type_status_date 63 const 427 Using where; Using filesort Query time: 34.2829 (ms) 9) Query: [wp-content/themes/LMHR/index.php(40): query_posts()] SELECT SQL_CALC_FOUND_ROWS wp_posts.* FROM wp_posts WHERE 1=1 AND wp_posts.ID NOT IN ( SELECT tr.object_id FROM wp_term_relationships AS tr INNER JOIN wp_term_taxonomy AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id WHERE tt.taxonomy = 'category' AND tt.term_id IN ('217', '218', '223', '224') ) AND wp_posts.post_type = 'post' AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private') ORDER BY wp_posts.post_date DESC LIMIT 0, 6 id select_type table type possible_keys key key_len ref rows Extra 1 PRIMARY wp_posts ref type_status_date type_status_date 63 const 427 Using where; Using filesort 2 DEPENDENT SUBQUERY tr ref PRIMARY,term_taxonomy_id PRIMARY 8 func 1 Using index 2 DEPENDENT SUBQUERY tt eq_ref PRIMARY,term_id_taxonomy,taxonomy PRIMARY 8 antin1_lovemusic2010.tr.term_taxonomy_id 1 Using where Query time: 70.3900 (ms)
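
    For the second query, one standard rewrite worth trying (a suggestion, not a verified fix for this site): the EXPLAIN shows MySQL 5.1 executing the NOT IN subquery as a DEPENDENT SUBQUERY, i.e. re-running it per row, whereas a LEFT JOIN anti-join expresses the same exclusion with plain joins. A sketch:

        SELECT SQL_CALC_FOUND_ROWS p.*
        FROM wp_posts p
        LEFT JOIN (wp_term_relationships tr
                   INNER JOIN wp_term_taxonomy tt
                           ON tt.term_taxonomy_id = tr.term_taxonomy_id
                          AND tt.taxonomy = 'category'
                          AND tt.term_id IN (217, 218, 223, 224))
               ON tr.object_id = p.ID
        WHERE tr.object_id IS NULL                 -- keep only posts with no excluded category
          AND p.post_type = 'post'
          AND p.post_status IN ('publish', 'private')
        ORDER BY p.post_date DESC
        LIMIT 0, 6;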

    Read the article

  • Rendering blocks side-by-side with FOP

    - by Rolf
    I need to generate a PDF from XML data using Apache FOP. The problem is that FOP doesn't support fo:float, and I really need to have items (boxes of rendered data) side-by-side in the PDF. More precisely, I need them in a 4x4 grid on each page, like so: In HTML, I would simply render these as left-floated divs with appropriate widths and heights. My data looks something like this: <item id="1"> <a>foo</a> <b>bar</b> <c>baz</c> </item> <item id="2">...</item> ... <item id="n">...</item> I considered using a two-column region-body, but then the order of items would be 1, 3, 2, 4 (reading from left to right) since they would be rendered tb-lr instead of lr-tb, and I need them to be in the correct order (id in above xml). I suppose I could try using a table, but I'm not quite sure how to group the items into table rows. So, some kind of workaround for the lack of fo:float would be greatly appreciated.
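
    A sketch of the table-based workaround (written from memory of XSL-FO/XSLT, so treat element details as approximate, and it assumes the surrounding template matches the parent of the item elements): group the items four at a time with a position() mod 4 trick, emit one fo:table-row per group and one fo:table-cell per item, which preserves the 1, 2, 3, 4 reading order across each row. If the item count is not a multiple of four, the last row may need padding cells.

        <fo:table table-layout="fixed" width="100%">
          <fo:table-column column-width="25%"/>
          <fo:table-column column-width="25%"/>
          <fo:table-column column-width="25%"/>
          <fo:table-column column-width="25%"/>
          <fo:table-body>
            <!-- start a new row on every 1st, 5th, 9th ... item -->
            <xsl:for-each select="item[position() mod 4 = 1]">
              <fo:table-row>
                <xsl:for-each select=". | following-sibling::item[position() &lt; 4]">
                  <fo:table-cell>
                    <fo:block>
                      <xsl:value-of select="a"/>
                    </fo:block>
                  </fo:table-cell>
                </xsl:for-each>
              </fo:table-row>
            </xsl:for-each>
          </fo:table-body>
        </fo:table>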

    Read the article

  • JQuery tablesorter - Second click on column header doesn't resort

    - by Jonathan
    I'm using tablesorter in on a table I added to a view in django's admin (although I'm not sure this is relevant). I'm extending the html's header: {% block extrahead %} <script type="text/javascript" src="http://code.jquery.com/jquery-1.4.2.js"></script> <script type="text/javascript" src="http://mysite.com/media/tablesorter/jquery.tablesorter.js"></script> <script type="text/javascript"> $(document).ready(function() { $("#myTable").tablesorter(); } ); </script> {% endblock %} When I click on a column header, it sorts the table using this column in descending order - that's ok. When I click the same column header a second time - it does not reorder to ascending order. What's wrong with it? the table's html looks like: <table id="myTable" border="1"> <thead> <tr> <th>column_name_1</th> <th>column_name_2</th> <th>column_name_3</th> </tr> </thead> <tbody> {% for item in extra.items %} <tr> <td>{{ item.0|safe }} </td> <td>{{ item.1|safe }} </td> <td>{{ item.2|safe }} </td> </tr> {% endfor %} </tbody> </table>

    Read the article

  • How to populate a Range variable from a Sub/Function call?

    - by Ken Ingram
    I am trying to get this sub to work but the operationalRange variable is not being assigned. Despite the fact that the function selectBodyRow(bodyName) works fine. Sub sortRows(bodyName As String, ByRef wksht As Worksheet) Dim operationalRange As Range Set operationalRange = selectBodyRow(bodyName) Debug.Print "Sorting Worksheet: " & wksht.Name If Not operationalRange Is Nothing Then operationalRange.Select Debug.Print "Sorting " & operationalRange.Count & "Rows." ActiveWorkbook.Worksheets(wksht.Name).Sort.SortFields.Clear ActiveWorkbook.Worksheets(wksht.Name).Sort.SortFields.Add Key:=operationalRange, _ SortOn:=xlSortOnValues, Order:=xlAscending, DataOption:=xlSortNormal ActiveWorkbook.Worksheets(wksht.Name).Sort.SortFields.Add Key:=operationalRange, _ SortOn:=xlSortOnValues, Order:=xlAscending, DataOption:=xlSortNormal With ActiveWorkbook.Worksheets(wksht.Name).Sort .SetRange operationalRange .Header = xlGuess .MatchCase = False .Orientation = xlTopToBottom .SortMethod = xlPinYin .Apply End With Else MsgBox "Body is not being Set" End If End Sub The Sub being called by the above Sub is: Function selectBodyRow(bodyName As String) As Range Dim rangeStart As String, rangeEnd As String Dim selectionStart As Range, selectionEnd As Range Dim result As Range, srchRng As Range, cngrs As Variant If bodyName = "WEST" Then rangeStart = "<-WEST START->" rangeEnd = "<-WEST END->" ElseIf bodyName = "EAST" Then rangeStart = "<-EAST START->" rangeEnd = "<-EAST END->" End If Set srchRng = Range("A:A") srchRng.Select Set selectionStart = srchRng.Find(What:=rangeStart, After:=ActiveCell, LookIn _ :=xlValues, LookAt:=xlPart, SearchOrder:=xlByRows, SearchDirection:= _ xlNext, MatchCase:=False, SearchFormat:=False) Set selectionEnd = srchRng.Find(What:=rangeEnd, After:=ActiveCell, LookIn _ :=xlValues, LookAt:=xlPart, SearchOrder:=xlByRows, SearchDirection:= _ xlNext, MatchCase:=False, SearchFormat:=False) Set result = Range(selectionStart.Offset(1, 0), selectionEnd.Offset(-1, 0)) result.EntireRow.Select End Function
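
    It looks like the root cause may simply be that selectBodyRow never assigns anything to its own name, so the caller's Set operationalRange = selectBodyRow(bodyName) always receives Nothing; a Function returns whatever is assigned to the function name. A minimal sketch of the end of the function:

            Set result = Range(selectionStart.Offset(1, 0), selectionEnd.Offset(-1, 0))
            Set selectBodyRow = result   ' assign the return value; without this the function returns Nothing
        End Function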

    Read the article

  • Error in merging two sequences of timestamps to yield strings

    - by AruniRC
    The code sorts two input sequences - seq01 and seq02 - on the basis of their timestamp values and returns a sequence that denotes which sequence is to be read for the values to be in order. For cases where seq02's timestamp value is lesser than seq01's timestamp value we yield a "2" to the sequence being returned, else a "1". These denote whether at that point seq01 is to be taken or seq02 is to be taken for the data to be in order (by timestamp value). let mergeSeq (seq01:seq<_>) (seq02:seq<_>) = seq { use iter01 = seq01.GetEnumerator() use iter02 = seq02.GetEnumerator() while iter01.MoveNext() do let _,_,time01 = iter01.Current let _,_,time02 = iter02.Current while time02 < time01 && iter02.MoveNext() do yield "2" yield "1" } To test it in the FSI created two sequences a and b, a={1;3;5;...} and b={0;2;4;...}. So the expected values for let c = mergeSeq a b would have been {"2","1","2","1"...}. However I am getting this error: error FS0001: The type ''a * 'b * 'c' does not match the type 'int' EDIT After correcting: let mergeSeq (seq01:seq<_>) (seq02:seq<_>) = seq { use iter01 = seq01.GetEnumerator() use iter02 = seq02.GetEnumerator() while iter01.MoveNext() do let time01 = iter01.Current let time02 = iter02.Current while time02 < time01 && iter02.MoveNext() do yield "2" yield "1" } After running this, there's another error: call MoveNext. Somehow the iteration is not being performed.
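
    A guess at the remaining problem, with a sketch: iter02.Current is read before iter02.MoveNext() has ever been called, so the second enumerator is never started. Advancing it once up front (held in a ref cell so the sequence expression can update it) and draining the leftovers at the end gives the expected 2, 1, 2, 1, ... pattern for the sample inputs:

        let mergeSeq (seq01: seq<'a>) (seq02: seq<'a>) =
            seq {
                use iter01 = seq01.GetEnumerator()
                use iter02 = seq02.GetEnumerator()
                let has02 = ref (iter02.MoveNext())        // start the second enumerator before reading Current
                while iter01.MoveNext() do
                    while !has02 && iter02.Current < iter01.Current do
                        yield "2"
                        has02 := iter02.MoveNext()
                    yield "1"
                while !has02 do                            // emit whatever is left in seq02
                    yield "2"
                    has02 := iter02.MoveNext()
            }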

    Read the article

  • MySQL SELECT combining 3 SELECTs INTO 1

    - by Martin Tóth
    Consider following tables in MySQL database: entries: creator_id INT entry TEXT is_expired BOOL other: creator_id INT entry TEXT userdata: creator_id INT name VARCHAR etc... In entries and other, there can be multiple entries by 1 creator. userdata table is read only for me (placed in other database). I'd like to achieve a following SELECT result: +------------+---------+---------+-------+ | creator_id | entries | expired | other | +------------+---------+---------+-------+ | 10951 | 59 | 55 | 39 | | 70887 | 41 | 34 | 108 | | 88309 | 38 | 20 | 102 | | 94732 | 0 | 0 | 86 | ... where entries is equal to SELECT COUNT(entry) FROM entries GROUP BY creator_id, expired is equal to SELECT COUNT(entry) FROM entries WHERE is_expired = 0 GROUP BY creator_id and other is equal to SELECT COUNT(entry) FROM other GROUP BY creator_id. I need this structure because after doing this SELECT, I need to look for user data in the "userdata" table, which I planned to do with INNER JOIN and select desired columns. I solved this problem with selecting "NULL" into column which does not apply for given SELECT: SELECT creator_id, COUNT(any_entry) as entries, COUNT(expired_entry) as expired, COUNT(other_entry) as other FROM ( SELECT creator_id, entry AS any_entry, NULL AS expired_entry, NULL AS other_enry FROM entries UNION SELECT creator_id, NULL AS any_entry, entry AS expired_entry, NULL AS other_enry FROM entries WHERE is_expired = 1 UNION SELECT creator_id, NULL AS any_entry, NULL AS expired_entry, entry AS other_enry FROM other ) AS tTemp GROUP BY creator_id ORDER BY entries DESC, expired DESC, other DESC ; I've left out the INNER JOIN and selecting other columns from userdata table on purpose (my question being about combining 3 SELECTs into 1). Is my idea valid? = Am I trying to use the right "construction" for this? Are these kind of SELECTs possible without creating an "empty" column? (some kind of JOIN) Should I do it "outside the DB": make 3 SELECTs, make some order in it (let's say python lists/dicts) and then do the additional SELECTs for userdata? Solution for a similar question does not return rows where entries and expired are 0. Thank you for your time.
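
    An alternative shape that avoids the NULL padding columns and also keeps creators whose counts are zero (a sketch, assuming is_expired = 1 marks an expired entry, as in the UNION above): aggregate each table once in a derived table and LEFT JOIN both onto userdata. One small caveat on the original query: UNION deduplicates identical rows, so UNION ALL is the safer choice if the same creator can have two identical entries.

        SELECT u.creator_id,
               COALESCE(e.entries, 0) AS entries,
               COALESCE(e.expired, 0) AS expired,
               COALESCE(o.other,   0) AS other
        FROM userdata u
        LEFT JOIN (SELECT creator_id,
                          COUNT(*)            AS entries,
                          SUM(is_expired = 1) AS expired
                   FROM entries
                   GROUP BY creator_id) e ON e.creator_id = u.creator_id
        LEFT JOIN (SELECT creator_id, COUNT(*) AS other
                   FROM other
                   GROUP BY creator_id) o ON o.creator_id = u.creator_id
        ORDER BY entries DESC, expired DESC, other DESC;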

    Read the article

  • Moving to an arbitrary position in a file in Python

    - by B Rivera
    Let's say that I routinely have to work with files with an unknown, but large, number of lines. Each line contains a set of integers (space, comma, semicolon, or some non-numeric character is the delimiter) in the closed interval [0, R], where R can be arbitrarily large. The number of integers on each line can be variable. Often times I get the same number of integers on each line, but occasionally I have lines with unequal sets of numbers. Suppose I want to go to Nth line in the file and retrieve the Kth number on that line (and assume that the inputs N and K are valid --- that is, I am not worried about bad inputs). How do I go about doing this efficiently in Python 3.1.2 for Windows? I do not want to traverse the file line by line. I tried using mmap, but while poking around here on SO, I learned that that's probably not the best solution on a 32-bit build because of the 4GB limit. And in truth, I couldn't really figure out how to simply move N lines away from my current position. If I can at least just "jump" to the Nth line then I can use .split() and grab the Kth integer that way. The nuance here is that I don't just need to grab one line from the file. I will need to grab several lines: they are not necessarily all near each other, the order in which I get them matters, and the order is not always based on some deterministic function. Any ideas? I hope this is enough information. Thanks!
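
    One workable compromise, sketched below (my suggestion, not from the post): you cannot seek to a line you have never seen, but a single linear pass can record the byte offset at which every line starts; after that, any (N, K) lookup is a seek plus one readline, which also sidesteps the 32-bit mmap limit.

        import re

        def build_line_index(path):
            """One pass over the file, recording the byte offset where each line starts."""
            offsets = [0]
            pos = 0
            with open(path, 'rb') as f:
                for line in f:
                    pos += len(line)
                    offsets.append(pos)
            return offsets

        def number_at(path, offsets, n, k):
            """Return the k-th integer (0-based) on the n-th line (0-based)."""
            with open(path, 'rb') as f:
                f.seek(offsets[n])
                line = f.readline().decode('ascii')
            return int(re.findall(r'\d+', line)[k])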

    Read the article

  • Connecting to a secure database from a website host

    - by jim
    Hello all, I've got a requirement to both read and write data via a .NET web service to a SQL Server database that's on a private network. This database is currently accessed via a VPN connection by remote client software (on standard desktop machines) to get the latest product prices and to upload product stock sales. I've been tasked with finding a way to centralise this access behind a web service that the clients then access, rather than them using the VPN route to connect directly to the database. My question is related to my .NET service's relationship to the SQL Server database. What are the options for connecting to a private network VPN from a domain host in order to achieve the functionality of allowing the web service to both read and write data to the database? For now, I'm not too concerned about the client connectivity and security (though I appreciate that this will have to be worked out too); I'm really just interested in discovering the options available in order to allow my .NET web service to connect to the private network in as painless and transparent a way as possible. Switching the database onto public hosting is not an option, so I have to work with the scenario as described above for now, unless there's a compelling rationale presented to do otherwise. Thanks all... jim

    Read the article

  • TicTacToe strategic reduction

    - by NickLarsen
    I decided to write a small program that solves TicTacToe in order to try out the effect of some pruning techniques on a trivial game. The full game tree using minimax to solve it only ends up with 549,946 possible games. With alpha-beta pruning, the number of states required to evaluate was reduced to 18,297. Then I applied a transposition table that brings the number down to 2,592. Now I want to see how low that number can go. The next enhancement I want to apply is a strategic reduction. The basic idea is to combine states that have equivalent strategic value. For instance, on the first move, if X plays first, there is nothing strategically different (assuming your opponent plays optimally) about choosing one corner instead of another. In the same situation, the same is true of the center of the walls of the board, and the center is also significant. By reducing to significant states only, you end up with only 3 states for evaluation on the first move instead of 9. This technique should be very useful since it prunes states near the top of the game tree. This idea came from the GameShrink method created by a group at CMU, only I am trying to avoid writing the general form, and just doing what is needed to apply the technique to TicTacToe. In order to achieve this, I modified my hash function (for the transposition table) to enumerate all strategically equivalent positions (using rotation and flipping functions), and to only return the lowest of the values for each board. Unfortunately now my program thinks X can force a win in 5 moves from an empty board when going first. After a long debugging session, it became apparent to me the program was always returning the move for the lowest strategically significant move (I store the last move in the transposition table as part of my state). Is there a better way I can go about adding this feature, or a simple method for determining the correct move applicable to the current situation with what I have already done?
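
    One way out, sketched below as a standalone illustration rather than a patch to the actual program: have the canonicalisation return both the canonical board and the permutation that produced it, store best moves in canonical coordinates, and map a retrieved move back through that permutation before playing it on the real board. Board cells are indexed 0-8, row by row; all names here are my own.

        ROTATE = (6, 3, 0, 7, 4, 1, 8, 5, 2)   # 90-degree clockwise rotation: new[i] = old[ROTATE[i]]
        MIRROR = (2, 1, 0, 5, 4, 3, 8, 7, 6)   # left-right flip
        IDENTITY = tuple(range(9))

        def apply_perm(board, perm):
            return tuple(board[i] for i in perm)

        def symmetries():
            """The 8 symmetries of the square: 4 rotations, each optionally mirrored."""
            perms = []
            for start in (IDENTITY, MIRROR):
                p = start
                for _ in range(4):
                    perms.append(p)
                    p = tuple(p[ROTATE[i]] for i in range(9))   # one more quarter turn
            return perms

        def canonical(board):
            """Return (canonical board, permutation used), picking the lexicographically smallest image."""
            best, best_perm = None, None
            for perm in symmetries():
                cand = apply_perm(board, perm)
                if best is None or cand < best:
                    best, best_perm = cand, perm
            return best, best_perm

        # Usage: key the transposition table on the canonical board.  If the stored best move is
        # square m of the canonical board, the matching square on the real board is best_perm[m],
        # because canonical[m] == board[best_perm[m]].
        board = ('X', '', '', '', 'O', '', '', '', '')
        key, perm = canonical(board)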

    Read the article

  • How to specify allowed exceptions in WCF's configuration file?

    - by tucaz
    Hello! I´m building a set of WCF services for internal use through all our applications. For exception handling I created a default fault class so I can return treated message to the caller if its the case or a generic one when I have no clue what happened. Fault contract: [DataContract(Name = "DefaultFault", Namespace = "http://fnac.com.br/api/2010/03")] public class DefaultFault { public DefaultFault(DefaultFaultItem[] items) { if (items == null || items.Length== 0) { throw new ArgumentNullException("items"); } StringBuilder sbItems = new StringBuilder(); for (int i = 0; i Specifying that my method can throw this exception so the consuming client will be aware of it: [OperationContract(Name = "PlaceOrder")] [FaultContract(typeof(DefaultFault))] [WebInvoke(UriTemplate = "/orders", BodyStyle = WebMessageBodyStyle.Bare, ResponseFormat = WebMessageFormat.Json, RequestFormat = WebMessageFormat.Json, Method = "POST")] string PlaceOrder(Order newOrder); Most of time we will use just .NET to .NET communication with usual binds and everything works fine since we are talking the same language. However, as you can see in the service contract declaration I have a WebInvoke attribute (and a webHttp binding) in order to be able to also talk JSON since one of our apps will be built for iPhone and this guy will talk JSON. My problem is that whenever I throw a FaultException and have includeExceptionDetails="false" in the config file the calling client will get a generic HTTP error instead of my custom message. I understand that this is the correct behavior when includeExceptionDetails is turned off, but I think I saw some configuration a long time ago to allow some exceptions/faults to pass through the service boundaries. Is there such thing like this? If not, what do u suggest for my case? Thanks a LOT!

    Read the article

  • Queries within queries: Is there a better way?

    - by mririgo
    As I build bigger, more advanced web applications, I'm finding myself writing extremely long and complex queries. I tend to write queries within queries a lot because I feel making one call to the database from PHP is better than making several and correlating the data. However, anyone who knows anything about SQL knows about JOINs. Personally, I've used a JOIN or two before, but quickly stopped when I discovered using subqueries because it felt easier and quicker for me to write and maintain. Commonly, I'll do subqueries that may contain one or more subqueries from relative tables. Consider this example: SELECT (SELECT username FROM users WHERE records.user_id = user_id) AS username, (SELECT last_name||', '||first_name FROM users WHERE records.user_id = user_id) AS name, in_timestamp, out_timestamp FROM records ORDER BY in_timestamp Rarely, I'll do subqueries after the WHERE clause. Consider this example: SELECT user_id, (SELECT name FROM organizations WHERE (SELECT organization FROM locations WHERE records.location = location_id) = organization_id) AS organization_name FROM records ORDER BY in_timestamp In these two cases, would I see any sort of improvement if I decided to rewrite the queries using a JOIN? As more of a blanket question, what are the advantages/disadvantages of using subqueries or a JOIN? Is one way more correct or accepted than the other?
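
    For comparison, a sketch of the first query rewritten as a join (same columns, same result, assuming user_id is unique in users): the correlated scalar subqueries in the SELECT list are typically evaluated once per row, while the join lets the planner choose the access order.

        SELECT u.username,
               u.last_name || ', ' || u.first_name AS name,
               r.in_timestamp,
               r.out_timestamp
        FROM records r
        JOIN users u ON u.user_id = r.user_id
        ORDER BY r.in_timestamp;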

    Read the article

  • MySQL Partitioning performance

    - by Imran Pathan
    Measured performance on key partitioned tables and normal tables separately. But we couldn't find any performance improvement with partitioning. Queries are pruned. Using MySQL 5.1.47 on RHEL 4. Table details: UserUsage - Will have entries for user mobile number and data usage for each date. Mobile number and Date as PRI KEY. UserProfile - Queries prev table and stores summary for each mobile number. Mobile number PRI KEY. CREATE TABLE `UserUsage` ( `Msisdn` decimal(20,0) NOT NULL, `Date` date NOT NULL, . . PRIMARY KEY USING BTREE (`Msisdn`,`Date`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 PARTITION BY KEY(Msisdn) PARTITIONS 50; CREATE TABLE `UserProfile` ( `Msisdn` decimal(20,0) NOT NULL, . . PRIMARY KEY (`Msisdn`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 PARTITION BY KEY(Msisdn) PARTITIONS 50; Second table is updated by query select and order by date in first table in a perl program, query is select * from UserUsage where Msisdn=number order by Date desc limit 7 [Process data in perl] update UserProfile values(....) where Msisdn=number explain partition for select, shows row being scanned in a particular partition only. Is something wrong with partition design or queries as partitioning is taking almost same or more time compared to normal tables?

    Read the article

  • Ad distribution problem: an optimal solution?

    - by Mokuchan
    I'm asked to find a 2 approximate solution to this problem: You’re consulting for an e-commerce site that receives a large number of visitors each day. For each visitor i, where i € {1, 2 ..... n}, the site has assigned a value v[i], representing the expected revenue that can be obtained from this customer. Each visitor i is shown one of m possible ads A1, A2 ..... An as they enter the site. The site wants a selection of one ad for each customer so that each ad is seen, overall, by a set of customers of reasonably large total weight. Thus, given a selection of one ad for each customer, we will define the spread of this selection to be the minimum, over j = 1, 2 ..... m, of the total weight of all customers who were shown ad Aj. Example Suppose there are six customers with values 3, 4, 12, 2, 4, 6, and there are m = 3 ads. Then, in this instance, one could achieve a spread of 9 by showing ad A1 to customers 1, 2, 4, ad A2 to customer 3, and ad A3 to customers 5 and 6. The ultimate goal is to find a selection of an ad for each customer that maximizes the spread. Unfortunately, this optimization problem is NP-hard (you don’t have to prove this). So instead give a polynomial-time algorithm that approximates the maximum spread within a factor of 2. The solution I found is the following: Order visitors values in descending order Add the next visitor value (i.e. assign the visitor) to the Ad with the current lowest total value Repeat This solution actually seems to always find the optimal solution, or I simply can't find a counterexample. Can you find it? Is this a non-polinomial solution and I just can't see it?
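
    On the "can you find a counterexample" part: a quick brute-force check (my own sketch, not from the post) shows the greedy is not always optimal, e.g. values 3, 3, 2, 2, 2 with m = 2 ads, where the greedy reaches a spread of 5 but splitting them as {3, 3} and {2, 2, 2} gives 6.

        from itertools import product

        def greedy_spread(values, m):
            totals = [0] * m
            for v in sorted(values, reverse=True):
                totals[totals.index(min(totals))] += v   # give the next visitor to the lightest ad
            return min(totals)

        def optimal_spread(values, m):
            best = 0
            for assignment in product(range(m), repeat=len(values)):   # brute force, tiny instances only
                totals = [0] * m
                for v, ad in zip(values, assignment):
                    totals[ad] += v
                best = max(best, min(totals))
            return best

        print(greedy_spread([3, 3, 2, 2, 2], 2), optimal_spread([3, 3, 2, 2, 2], 2))   # 5 6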

    Read the article

  • Getting the first of a GROUP BY clause in SQL

    - by Michael Bleigh
    I'm trying to implement single-column regionalization for a Rails application and I'm running into some major headaches with a complex SQL need. For this system, a region can be represented by a country code (e.g. us) a continent code that is uppercase (e.g. NA) or it can be NULL indicating the "default" information. I need to group these items by some relevant information such as a foreign key (we'll call it external_id). Given a country and its continent, I need to be able to select only the most specific region available. So if records exist with the country code, I select them. If, not I want a records with the continent code. If not that, I want records with a NULL code so I can receive the default values. So far I've figured that I may be able to use a generated CASE statement to get an arbitrary sort order. Something like this: SELECT *, CASE region WHEN 'us' THEN 1 WHEN 'NA' THEN 2 ELSE 3 END AS region_sort FROM my_table WHERE region IN ('us','NA') OR region IS NULL GROUP BY external_id ORDER BY region_sort The problem is that without an aggregate function the actual data returned by the GROUP BY for a given row seems to be untameable. How can I massage this query to make it return only the first record of the region_sort ordered groups?
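
    A sketch of the usual greatest-n-per-group shape for this (column names taken from the question): compute the best available specificity per external_id in a derived table, then join back to keep only the rows at that specificity.

        SELECT t.*
        FROM my_table t
        JOIN (
            SELECT external_id,
                   MIN(CASE region WHEN 'us' THEN 1 WHEN 'NA' THEN 2 ELSE 3 END) AS best_sort
            FROM my_table
            WHERE region IN ('us', 'NA') OR region IS NULL
            GROUP BY external_id
        ) best
          ON best.external_id = t.external_id
         AND CASE t.region WHEN 'us' THEN 1 WHEN 'NA' THEN 2 ELSE 3 END = best.best_sort
        WHERE t.region IN ('us', 'NA') OR t.region IS NULL;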

    Read the article
