Search Results

Search found 7311 results on 293 pages for 'rows'.


  • Detect when a UITableView is being reordered

    - by Mike Weller
    I have a UITableView backed by an NSFetchedResultsController which may trigger updates at any time. If the user is currently reordering rows, applying these updates will cause an exception because the table view has temporarily taken over, and you get an error like:

        Invalid update: invalid number of rows in section [...]

    How can I detect when the user has started moving a cell so I can delay updates caused by the fetched results controller? There don't seem to be any table view delegate methods to detect this. This delegate method:

        - (NSIndexPath *)tableView:(UITableView *)tableView targetIndexPathForMoveFromRowAtIndexPath:(NSIndexPath *)sourceIndexPath toProposedIndexPath:(NSIndexPath *)proposedDestinationIndexPath {

    doesn't get called when the user initially detaches the first cell, only when they actually move it somewhere else.

    Read the article

  • hibernate not throwing stale state exception nor is it overwriting data

    - by Reddy
    Our application does the following:

        1. Start the transaction.
        2. Execute a query using a prepared statement.
        3. Check a condition to see whether the number of rows updated is equal to the required number.
        4. Commit if the above condition holds, otherwise roll back.

    However, the problem occurs when two threads enter this code simultaneously. Thread-1 updates a row in step 2, checks the condition and commits successfully since the condition holds. Thread-2 starts execution somewhere between steps 1 and 4, and it fails the condition check at step 3 (it gets 0 as the number of updated rows). I expected the second thread to throw an exception but it does not. What could be the problem?

    Read the article

  • vba excel copy subtable from sheet to sheet

    - by user429400
    I realize that this is probably a duplicate, but I've been searching for an hour and I can't get the syntax right. I have a sheet with several tables. There is at least one empty column and one empty row between one table and the next. I know the start row and start column of each table, and I know that each table has 3 columns. I don't know how many rows it has. I want to write a sub that receives the table start row and the table start column, and copies the table into another sheet (let's say that the destination is Sheet2 starting at A1). I know I can do it with a loop, but I suspect there is a better syntax, right? (The main issue here is that I need to find the number of rows each table has.) Thanks. Li

    Read the article

  • C# Entity Framework Base Repository

    - by Andy
    I'm trying to create a base repository for use with Entity Framework 4.0 and having some trouble. In the code below, why is it not possible to do this in one line?

        public IEnumerable<T> GetAll<T>(Expression<Func<T, bool>> filter)
        {
            IEnumerable<T> allCustomers = this.GetAll<T>();
            IEnumerable<T> result = allCustomers.Where(filter.Compile());
            return result;
        }

    Won't this result in 2 SQL statements: one without a where clause that retrieves all rows, and one with a where clause that only retrieves the rows that match the predicate? How can this be done with a single SQL statement? I can't get it to compile if I try to cast the filter.Compile() to Func<Customer, bool>. Thanks, Andy

    Read the article

  • Can I expect a performance gain from removing this JOIN?

    - by makeee
    I have a "items" table with 1 million rows and a "users" table with 20,000 rows. When I select from the "items" table I do a join on the "users" table (items.user_id = user.id), so that I can grab the "username" from the users table. I'm considering adding a username column to the items table and removing the join. Can I expect a decent performance increase from this? It's already quite fast, but it would be nice to decrease my load (which is pretty high). The downside is that if the user changes their username, items will still reflect their old username, but this is okay with me if I can expect a decent performance increase. I'm asking stackoverflow because benchmarks aren't telling me too much. Both queries finish very quickly. Regardless, I'm wondering if removing the join would lighten load on the database to any significant degree.

    Read the article

  • MYSQL - How to increment fields in one row with values from another row

    - by Walker Boh
    I have a table that we'll call 'Sales' with 4 columns: uid, date, count and amount. I want to increment the count and amount values for one row with the count/amount values from a different row in that table. Example:

        UID | Date       | Count | Amount |
        1   | 2013-06-20 | 1     | 500    |
        2   | 2013-06-24 | 2     | 1000   |

    Ideal results would be uid 2's count/amount values being incremented by uid 1's values:

        UID | Date       | Count | Amount |
        1   | 2013-06-20 | 1     | 500    |
        2   | 2013-06-24 | 3     | 1500   |

    Please note that my company's database is an older version of MySQL (3.something), so subqueries are not possible. I am curious to know if this is possible outside of doing an "update sales set count = count + 1" and likewise for the amount columns. I have a lot of rows to update and incrementing the values individually is quite time consuming, as you can imagine. Thanks for any help or suggestions!
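
    A minimal sketch of the cross-row increment as a single statement, assuming a MySQL version (4.0.4 or later) that supports multi-table UPDATE; on a 3.x server neither this nor a subquery is available, so the values would have to be computed client-side:

        -- add uid 1's count/amount onto uid 2's row in one statement
        UPDATE Sales AS dst, Sales AS src
        SET dst.count  = dst.count  + src.count,
            dst.amount = dst.amount + src.amount
        WHERE dst.uid = 2
          AND src.uid = 1;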

    Read the article

  • A question about indexes regarding the gain of inserts & updates in a database

    - by Mestika
    Hi, I have a question about the fine line between the cost of maintaining an index on a table that is growing steadily in size every month and the gain that index brings to queries. The situation is that I have two tables, Table1 and Table2. Each table grows slowly but regularly each month (with about 100 new rows for Table1 and a couple of rows for Table2). My concrete question is whether to keep an index or to drop it. I've made some measurements showing that a covering index on Table2 improves some of my SELECT queries rather a lot, but again, I have to weigh the pros and cons and I'm having a really hard time deciding. For Table1 it might not be necessary to have an index, because SELECT queries there are not that common. I would appreciate any suggestions, tips or just good advice on what a good solution would be. By the way, I'm using IBM DB2 version 9.7 as my database system. Sincerely, Mestika

    Read the article

  • CImg compile problems in Codegear 2009

    - by Seth
    I wish to use the CImg library for image processing in my current project. I am using Codegear C++ Builder 2009. I include CImg.h in the source file and put in the following code:

        int rows = 5;
        int cols = 5;
        CImg<double> img(rows, cols);

    I get the following error:

        [BCC32 Error] CImg.h(39159): E2285 Could not find a match for 'CImg<unsigned char>::move_to<t>(const CImg<unsigned char>)'

    Does anyone know if there is a #define I should be using when building in Codegear C++ Builder 2009, or is it simply not compatible?

    Read the article

  • SQL to get friends AND friends of friends of a user

    - by Enrique
    My MySQL table structure is like this:

        USER
            int id
            varchar username

        FRIEND_LIST
            int user_id
            int friend_id

    For each friend relationship I insert 2 records in FRIEND_LIST. If user 1 is a friend of user 2, then these rows are inserted into FRIEND_LIST:

        1,2
        2,1

    I want to get the friends and friends of friends of a specific user. The select should return columns a, b, c. a: user_id, b: friend_id, c: username (username of friend_id). If 1 is a friend of 2 and 3, 2 is a friend of 3, 4 and 5, and 3 is a friend of 5, 6, 7, then the query to get 1's friends and friends of friends should return:

        1 2 two
        1 3 three
        2 1 one
        2 3 three
        2 4 four
        2 5 five
        3 1 one
        3 5 five
        3 6 six
        3 7 seven

    Can I get these rows with a single query?
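
    One possible single-query shape, sketched with the table and column names from the question; since FRIEND_LIST stores both directions of each friendship, a self-join that accepts rows owned either by the user or by any of the user's friends covers both levels:

        SELECT DISTINCT fl2.user_id,
                        fl2.friend_id,
                        u.username
        FROM FRIEND_LIST AS fl1
        JOIN FRIEND_LIST AS fl2
          ON fl2.user_id = fl1.user_id      -- the user's own friend rows
          OR fl2.user_id = fl1.friend_id    -- each friend's friend rows
        JOIN USER AS u
          ON u.id = fl2.friend_id
        WHERE fl1.user_id = 1;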

    Read the article

  • How to speed up a SQL query to MySQL?

    - by fayer
    In my MySQL database I've got the geonames database, containing all countries, states and cities. I am using this to create a cascading menu so the user can select where he is from: country - state - county - city. But the main problem is that the query will search through all 7 million rows in that table each time I want to get the list of child rows, and that is taking a while, 10-15 seconds. I wonder how I could speed this up: caching? table views? reorganizing the table structure somehow? And most important, how do I do these things? Are there good tutorials you could link me to? I appreciate all help and feedback discussing smart ways of handling this issue!
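
    A minimal sketch of the usual first step: an index on whatever column the child-row lookup filters on, so MySQL no longer scans all 7 million rows. The table and column names below (geonames, parent_id, id, name) are stand-ins, since the question doesn't show the actual schema:

        -- hypothetical names; use whichever column the cascading query filters on
        ALTER TABLE geonames ADD INDEX idx_parent (parent_id);

        -- the child lookup then becomes an index range scan rather than a full scan
        SELECT id, name
        FROM geonames
        WHERE parent_id = 123;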

    Read the article

  • WxPython multiple grid instances

    - by randomPythonHacker
    Does anybody know how I can get multiple instances of the same grid to display on one frame? Whenever I create more than 1 instance of the same object, the display of the original grid widget completely collapses and I'm left unable to do anything with it. For reference, here's the code:

        import wx
        import wx.grid as gridlib

        class levelGrid(gridlib.Grid):
            def __init__(self, parent, rows, columns):
                gridlib.Grid.__init__(self, parent, -1)
                self.moveTo = None
                self.CreateGrid(rows, columns)
                self.SetDefaultColSize(32)
                self.SetDefaultRowSize(32)
                self.SetColLabelSize(0)
                self.SetRowLabelSize(0)
                self.SetDefaultCellBackgroundColour(wx.BLACK)
                self.EnableDragGridSize(False)

        class mainFrame(wx.Frame):
            def __init__(self, parent, id, title):
                wx.Frame.__init__(self, parent, id, title, size=(768, 576))
                editor = levelGrid(self, 25, 25)
                panel1 = wx.Panel(editor, -1)
                #vbox = wx.BoxSizer(wx.VERTICAL)
                #vbox.Add(editor, 1, wx.EXPAND | wx.ALL, 5)
                #selector = levelGrid(self, 1, 25)
                #vbox.Add(selector, 1, wx.EXPAND |wx.BOTTOM, 5)
                self.Centre()
                self.Show(True)

        app = wx.App()
        mainFrame(None, -1, "SLAE")
        app.MainLoop()

    Read the article

  • How to order results based on number of search term matches?

    - by Travis
    I am using the following tables in MySQL to describe records that can have multiple searchtags associated with them:

        TABLE records
            ID
            title
            desc

        TABLE searchTags
            ID
            name

        TABLE recordSearchTags
            recordID
            searchTagID

    To SELECT records based on arbitrary search input, I have a statement that looks sort of like this:

        SELECT recordID FROM recordSearchTags
        LEFT JOIN searchTags ON recordSearchTags.searchTagID = searchTags.ID
        WHERE searchTags.name LIKE CONCAT('%','$search1','%')
        OR searchTags.name LIKE CONCAT('%','$search2','%')
        OR searchTags.name LIKE CONCAT('%','$search3','%')
        OR searchTags.name LIKE CONCAT('%','$search4','%');

    I'd like to ORDER this resultset, so that rows that match with more search terms are displayed in front of rows that match with fewer search terms. For example, if a row matches all 4 search terms, it will be top of the list. A row that matches only 2 search terms will be somewhere in the middle. And a row that matches just one search term will be at the end. Any suggestions on what is the best way to do this? Thanks!
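
    One common shape for the ordering, assuming the schema above: group by record, count how many matching tag rows each record has (which approximates the number of matched terms), and sort on that count:

        SELECT recordSearchTags.recordID,
               COUNT(DISTINCT searchTags.ID) AS matches
        FROM recordSearchTags
        LEFT JOIN searchTags ON recordSearchTags.searchTagID = searchTags.ID
        WHERE searchTags.name LIKE CONCAT('%','$search1','%')
           OR searchTags.name LIKE CONCAT('%','$search2','%')
           OR searchTags.name LIKE CONCAT('%','$search3','%')
           OR searchTags.name LIKE CONCAT('%','$search4','%')
        GROUP BY recordSearchTags.recordID
        ORDER BY matches DESC;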

    Read the article

  • MySQL InnoDB Cascade Rule that looks at 2 columns?

    - by Travis
    I have the following MySQL InnoDB tables...

        TABLE foldersA ( ID, title )
        TABLE foldersB ( ID, title )
        TABLE records ( ID, folderID, folderType, title )

    folderID in table "records" can point to ID in either "foldersA" or "foldersB" depending on the value of folderType (0 or 1). I am wondering: is there a way to create a CASCADE rule such that the appropriate rows in table records are automatically deleted when a row in either foldersA or foldersB is deleted? Or in this situation, am I forced to delete the rows in table "records" programmatically? Thanks for your help!
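
    A foreign key can only reference a single parent table and cannot be made conditional on folderType, so a CASCADE rule alone won't cover this. A hedged sketch of the usual workaround is a pair of AFTER DELETE triggers (MySQL 5.0+), assuming folderType 0 corresponds to foldersA and 1 to foldersB:

        CREATE TRIGGER foldersA_after_delete AFTER DELETE ON foldersA
        FOR EACH ROW
          DELETE FROM records WHERE folderID = OLD.ID AND folderType = 0;

        CREATE TRIGGER foldersB_after_delete AFTER DELETE ON foldersB
        FOR EACH ROW
          DELETE FROM records WHERE folderID = OLD.ID AND folderType = 1;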

    Read the article

  • How to get columns from Excel files using Apache POI?

    - by posdef
    Hi, in order to do some statistical analysis I need to extract values from a column of an Excel sheet. I have been using the Apache POI package to read from Excel files, and it works fine when one needs to iterate over rows. However, I couldn't find anything about getting columns, either in the API (link text) or through Google searching. I need to get the max and min values of different columns and generate random numbers using these values, so without picking up individual columns, the only other option is to iterate over rows and columns to get the values and compare them one by one, which doesn't sound all that time-efficient. Any ideas on how to tackle this problem? Thanks,

    Read the article

  • SELECT subset from two tables and LEFT JOIN results

    - by Doctor Trout
    Hi all, I'm trying to write a bit of SQL for SQLite that will take a subset from two tables (TableA and TableB) and then perform a LEFT JOIN. This is what I've tried, but it produces the wrong result:

        Select * from TableA
        Left Join TableB using(key)
        where TableA.key2 = "xxxx" AND TableB.key3 = "yyyy"

    This ignores cases where key2 = "xxxx" but key3 != "yyyy". I want all the rows from TableA that match my criteria whether or not their corresponding value in TableB matches, but only those rows from TableB that match both conditions. I did manage to solve this by using a VIEW, but I'm sure there must be a better way of doing this. It's just beginning to drive me insane trying to solve it now. (Thanks for any help, hope I've explained this well enough.)
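
    A hedged sketch of the usual fix, using the table and column names from the question: the filter on the left-joined table goes into the join condition rather than the WHERE clause, otherwise the NULLs produced for non-matching TableB rows are filtered out again:

        SELECT *
        FROM TableA
        LEFT JOIN TableB
          ON TableB.key = TableA.key
         AND TableB.key3 = 'yyyy'
        WHERE TableA.key2 = 'xxxx';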

    Read the article

  • How to remove htmlentities() values from the database?

    - by Chris
    Long before I knew anything - not that I know much even now - I designed a web app in PHP which inserted data into my MySQL database after running the values through htmlentities(). I eventually came to my senses, removed this step, stuck it in the output rather than the input and went on my merry way. However, I've since had to revisit some of this old data and unfortunately I have an issue: when it's displayed on the screen I'm getting values which are effectively htmlentitied twice. So, is there a MySQL or phpMyAdmin way of changing all the older, affected rows back into their relevant characters, or will I have to write a script to read each row, decode and update all 17 million rows in 12 tables?
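
    A partial, SQL-only sketch of the decode step: MySQL has no built-in HTML entity decoder, but the handful of entities htmlentities() most commonly produces can be reversed with nested REPLACE calls. The table and column names below are placeholders, and anything beyond these few entities (accented characters, numeric references) would still need a script:

        -- decode the named entities first, and '&amp;' last so it isn't over-decoded
        UPDATE some_table
        SET some_column = REPLACE(REPLACE(REPLACE(REPLACE(some_column,
                              '&lt;', '<'),
                              '&gt;', '>'),
                              '&quot;', '"'),
                              '&amp;', '&');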

    Read the article

  • PostGres - run a query in batches?

    - by CaffeineIV
    Is it possible to loop through a query so that if (for example) 500,000 rows are found, it'll return results for the first 10,000 and then rerun the query again? So, what I want to do is run a query and build an array, like this:

        $result = pg_query("SELECT * FROM myTable");
        $i = 0;
        while ($row = pg_fetch_array($result)) {
            $myArray[$i]['id'] = $row['id'];
            $myArray[$i]['name'] = $row['name'];
            $i++;
        }

    But I know that there will be several hundred thousand rows, so I wanted to do it in batches of like 10,000... 1 - 9,999 and then 10,000 - 19,999 etc... The reason why is because I keep getting this error:

        Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 3 bytes)

    Which, incidentally, I don't understand how 3 bytes could exhaust 512M... So, if that's something that I can just change, that'd be great, although it still might be better to do this in batches?
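
    A minimal sketch of the batching itself, using the myTable name from the question and assuming an id column (as in the PHP code) to give the pages a stable order; LIMIT/OFFSET paging is the simplest form, though a server-side cursor is another option in PostgreSQL:

        -- first batch
        SELECT id, name FROM myTable ORDER BY id LIMIT 10000 OFFSET 0;

        -- next batch: bump the offset by the batch size and rerun
        SELECT id, name FROM myTable ORDER BY id LIMIT 10000 OFFSET 10000;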

    Read the article

  • Echoing a pseudo column value after a COUNT

    - by rob - not a robber
    Hi Gang... Please don't beat me if this is elementary. I searched and found disjointed stuff relating to pseudo columns, nothing spot on about what I need. Anyway... I have a table with some rows. Each record has a unique ID, an ID that relates to another entity, and finally a comment that relates to that last entity. So, I want to COUNT these rows to basically find which entity has the most comments. Instead of me explaining the query, I'll print it:

        SELECT entity_id, COUNT(*) AS amount
        FROM comments
        GROUP BY entity_id
        ORDER BY amount DESC

    The query does just what I want, but I want to echo the values from that pseudo column, 'amount'. Can it be done, or should I use another method like mysql_num_rows? Thank you!!!

    Read the article

  • Can I do this with just SQL?

    - by Josh
    At the moment I have two tables, products and options.

        Products contains: id, title, description
        Options contains: id, product_id, sku, title

    Sample data may be:

        Products
            id: 1, title: 'test', description: 'my description'

        Options
            id: 1, product_id: 1, sku: 1001, title: 'red'
            id: 2, product_id: 1, sku: 1002, title: 'blue'

    I need to display each item, with each different option. At the moment, I select the rows in products and iterate through them, and for each one select the appropriate rows from options. I then create an array, similar to:

        [product_title] = 'test';
        [description] = 'my description';
        [options][] = 1, 1001, 'red';
        [options][] = 2, 1002, 'blue';

    Is there a better way to do this with just sql (I'm using codeigniter, and would ideally like to use the Active Record class)?
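
    One way to collapse this into a single round trip, sketched against the column names above: join products to options and let the application group consecutive rows by product id (CodeIgniter's Active Record can express the same join with its join() method, but the plain SQL shape is):

        SELECT p.id,
               p.title,
               p.description,
               o.id    AS option_id,
               o.sku,
               o.title AS option_title
        FROM products p
        LEFT JOIN options o ON o.product_id = p.id
        ORDER BY p.id;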

    Read the article

  • Sql Server Where Case Then Is Null Else Is Not Null

    - by Fabio Montezuma
    I have a procedure which receives a bit variable called @FL_FINALIZADA. If it is null or false I want to restrict my select to show only the rows that contain null DT_FINALIZACAO values. Otherwise I want to show the rows containing not null DT_FINALIZACAO values. Something like this:

        SELECT *
        FROM ...
        WHERE ...
        AND (OPE.DT_FINALIZACAO = CASE WHEN (@FL_FINALIZADA <> 1) THEN NULL END
             OR OPE.DT_FINALIZACAO IS NOT NULL)

    In this case I receive the message: "None of the result expressions in a CASE specification can be NULL." How can I achieve this? Thanks in advance.
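
    A hedged sketch of the same rule written as plain boolean logic instead of CASE, meant to slot into the elided WHERE clause from the question; ISNULL folds a NULL @FL_FINALIZADA into the "false" branch:

        AND ( (ISNULL(@FL_FINALIZADA, 0) = 0 AND OPE.DT_FINALIZACAO IS NULL)
           OR (@FL_FINALIZADA = 1 AND OPE.DT_FINALIZACAO IS NOT NULL) )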

    Read the article

  • How do I locate instances of <CR><LF><LF> in a mysql longtext field

    - by Ilane
    I would like to query my table for how many rows contain one or more instances of <CR><LF><LF>. I can't figure out the correct syntax. I would try LIKE '%<CR><LF><LF>%', but I don't know how to specify these special characters. I did try where mydata REGEXP '%[.CR.][.LF.][.LF.]%', and that didn't get a syntax error but neither did it return any rows. So, I realized I need a way to insert the test data as well! Note: I am using mysql 5.0.
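
    A minimal sketch of one way to write the pattern, assuming a hypothetical table name around the mydata column from the question: build the CR/LF bytes with CHAR() so they don't have to be typed literally (MySQL string literals also accept the \r and \n escapes):

        SELECT COUNT(*)
        FROM my_table
        WHERE mydata LIKE CONCAT('%', CHAR(13), CHAR(10), CHAR(10), '%');

        -- equivalent, using escape sequences in the literal
        SELECT COUNT(*)
        FROM my_table
        WHERE mydata LIKE '%\r\n\n%';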

    Read the article

  • EXPLAIN PLAN FOR in ORACLE

    - by Adnan
    I am making a test. I have all tests in rows, so my rows look like this:

        ID | TEST
        ----------------------------------
        1  | 'select sysdate from dual'
        2  | 'select sysdatesss from dual'

    Now I read it row by row and I need to test it with EXPLAIN PLAN FOR, so for the first row it would be:

        EXPLAIN PLAN FOR select sysdate from dual

    but I have a problem converting the TEST field. Right now I use:

        EXPLAIN PLAN FOR testing.TEST

    but it does not work. Any ideas?
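
    EXPLAIN PLAN expects literal SQL text rather than a column reference, so one hedged sketch is to build the statement dynamically in PL/SQL, assuming the table is called testing with columns ID and TEST (a row holding invalid SQL, like the second one above, will raise an error that the loop would need to catch):

        BEGIN
          FOR r IN (SELECT id, test FROM testing) LOOP
            -- tag each plan with the row's ID so the plans can be told apart
            EXECUTE IMMEDIATE
              'EXPLAIN PLAN SET STATEMENT_ID = ''' || r.id || ''' FOR ' || r.test;
          END LOOP;
        END;
        /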

    Read the article

  • MySQL Delete from 1 table, using multiple tables

    - by nute
    I would like to delete all the rows found by that query:

        SELECT cart_abandon.*
        FROM cart_abandon, cart_product, txn_product, users
        WHERE cart_abandon.cartid = cart_product.cartid
          AND cart_product.productid = txn_product.productid
          AND txn_product.username = users.username
          AND users.id = cart_abandon.userid
          AND txn_product.txndate >= cart_abandon.abandondate

    The thing to keep in mind is that the query here uses 4 different tables, however I only want to delete rows from 1 table (cart_abandon). Is there an easy way to do that? Maybe this?

        DELETE cart_abandon
        FROM cart_abandon, cart_product, txn_product, users
        WHERE cart_abandon.cartid = cart_product.cartid
          AND cart_product.productid = txn_product.productid
          AND txn_product.username = users.username
          AND users.id = cart_abandon.userid
          AND txn_product.txndate >= cart_abandon.abandondate

    Is that valid? Correct?

    Read the article

  • Reducing a normalized table to one value

    - by Dio
    Hello, I'm sure this has been asked but I'm not quite sure how to properly search for this question, my apologies. I have two tables, Foo and Bar. Foo has one row per food, Bar has many descriptor rows per food.

        Foo
        name   | id
        Apple  | 1
        Orange | 2

        Bar
        id | description
        1  | Tasty
        1  | Ripe
        2  | Sweet

    etc. (sorry for the somewhat contrived example). I'm trying to return a query where, for each row in Foo, if Bar contains a descriptor in ('Tasty', 'Juicy') it returns true, e.g.:

        Output
        Apple  | True
        Orange | False

    I had been solving this somewhat trivially with a CASE when I only had one item to match:

        select Foo.name,
               case bar.description when 'Tasty' then True else 'False' end
        from Foo
        left join Bar on foo.id = bar.id
        where bar.description = 'Tasty'

    But with multiple items, I keep ending up with extra rows:

        Output
        Apple | True
        Apple | False

    etc. etc. Can someone point me in the right direction on how to think about this problem or what I should be doing? Thank you.
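
    One common shape for this (conditional aggregation), sketched with the Foo/Bar names above: collapse Bar back down to one row per Foo row by aggregating the per-descriptor test, here returning 1/0 rather than True/False:

        SELECT Foo.name,
               MAX(CASE WHEN Bar.description IN ('Tasty', 'Juicy') THEN 1 ELSE 0 END) AS has_match
        FROM Foo
        LEFT JOIN Bar ON Bar.id = Foo.id
        GROUP BY Foo.name;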

    Read the article
