Search Results

Search found 9929 results on 398 pages for 'azure tables'.

Page 350 of 398

  • SQL SERVER FOR XML SYNTAX

    - by Raj73
    How can I get the output below using a FOR XML SQL query? I am not sure how to emit the column values as element names instead of the tables' column names. I am using SQL Server 2005. My table schema is as follows:

        CREATE TABLE PARENT (PID INT, PNAME VARCHAR(20))
        CREATE TABLE CHILD (PID INT, CID INT, CNAME VARCHAR(20))
        CREATE TABLE CHILDVALUE (CID INT, CVALUE VARCHAR(20))

        INSERT INTO PARENT VALUES (1, 'SALES1')
        INSERT INTO PARENT VALUES (2, 'SALES2')
        INSERT INTO CHILD VALUES (1, 1, 'FOR01')
        INSERT INTO CHILD VALUES (1, 2, 'FOR02')
        INSERT INTO CHILD VALUES (2, 3, 'FOR03')
        INSERT INTO CHILD VALUES (2, 4, 'FOR04')
        INSERT INTO CHILDVALUE VALUES (1, '250000')
        INSERT INTO CHILDVALUE VALUES (2, '400000')
        INSERT INTO CHILDVALUE VALUES (3, '500000')
        INSERT INTO CHILDVALUE VALUES (4, '800000')

    The output I am looking for is as follows:

        <SALES1>
          <FOR01>250000</FOR01>
          <FOR02>400000</FOR02>
        </SALES1>
        <SALES2>
          <FOR03>500000</FOR03>
          <FOR04>800000</FOR04>
        </SALES2>
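
    A minimal sketch of one way to get there (an assumption on my part, since FOR XML cannot take element names from the data directly): build each fragment by string concatenation and use the TYPE directive plus .value() to undo the entity escaping, which works on SQL Server 2005. Note that PNAME and CNAME must hold valid XML element names for the output to be well formed.

        SELECT '<' + P.PNAME + '>'
             + (SELECT '<' + C.CNAME + '>' + CV.CVALUE + '</' + C.CNAME + '>'
                FROM CHILD C
                JOIN CHILDVALUE CV ON CV.CID = C.CID
                WHERE C.PID = P.PID
                FOR XML PATH(''), TYPE).value('.', 'VARCHAR(MAX)')  -- unescapes &lt; back to <
             + '</' + P.PNAME + '>'
        FROM PARENT P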

    Read the article

  • NSNotification vs. Delegate Protocols?

    - by jr
    I have an iPhone application which basically gets information from an API (in XML, but maybe JSON eventually). The result objects are typically displayed in view controllers (mainly tables). Here is the architecture right now: I have NSOperation classes which fetch the different objects from the remote server. Each of these NSOperation classes takes a custom delegate, which is sent the resulting objects as they are parsed and then a final method when no more results are available. So the delegate protocol looks something like:

        - (void)objectTypeResult:(ObjectType *)result;
        - (void)objectTypeNoMoreResults;

    I think the solution works well, but I do end up with a bunch of delegate protocols around, and my view controllers have to implement all these delegate methods. I don't think it's that bad, but I'm always on the lookout for a better design. So I'm thinking about using NSNotifications to remove the use of the delegates. I could include the object in the userInfo part of the notification, post objects as they are received, and then a final event when no more are available. Then I could have just one method in each view controller to receive all the data, even when using multiple object types in one controller. So, can someone share some pros/cons of each approach? Should I consider refactoring my code to use notifications rather than delegates? Is one better than the other in certain situations? In my scenario I'm really not looking to receive notifications in multiple places, so maybe the protocol-based delegates are the way to go. Thanks!
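
    For comparison, a minimal sketch of the notification variant (the names here are hypothetical). One thing to watch: notifications are delivered on the thread that posts them, so an NSOperation running off the main thread must marshal to the main thread before observers touch UIKit.

        // In the NSOperation subclass, as each object is parsed:
        NSDictionary *info = [NSDictionary dictionaryWithObject:result forKey:@"result"];
        [[NSNotificationCenter defaultCenter] postNotificationName:@"ObjectTypeResult"
                                                            object:self
                                                          userInfo:info];

        // In the view controller:
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(didReceiveResult:)
                                                     name:@"ObjectTypeResult"
                                                   object:nil];

        - (void)didReceiveResult:(NSNotification *)note {
            ObjectType *result = [[note userInfo] objectForKey:@"result"];
            // append to the backing array and reload the table
        }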

    Read the article

  • Dynamic evaluation of a table column within an insert before trigger

    - by Tim Garver
    Hi all, I have 3 tables: main, types and linked.

    - main has an id column and 32 type columns
    - types has id, type
    - linked has id, main_id, type_id

    I want to create a before-insert trigger on the main table. It needs to compare its 32 type columns to the values in the types table and, if a main table column has an 'X' for its value, insert the main_id and type_id into the linked table. I have done a lot of searching, and it looks like a prepared statement would be the way to go, but I wanted to ask the experts. The issue is that I don't want to write 32 IF statements, and even if I did, I would need to query the types table to get the id for each type, which seems like a huge waste of resources. Ideally I want to do something like this inside my trigger:

        BEGIN
            DECLARE @types results_set;  -- (not sure if this is a valid type)
            -- (I am sure my loop syntax is all wrong here)...
            SET @types = (SELECT * FROM types);
            for i = 0; i < types.records; i++ {
                IF NEW.[i.type] = 'X' THEN
                    INSERT INTO linked (main_id, type_id) VALUES (NEW.id, i.id);
                END IF;
            }
        END;

    Anyway, this is what I was hoping to do. Maybe there is a way to dynamically set the field name inside of a results loop, but I can't find a good example of this. Thanks in advance, Tim
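
    One constraint worth knowing about MySQL (assuming MySQL is the target here): prepared/dynamic statements are not allowed inside triggers, so the column list has to be enumerated somewhere. A sketch of a compact per-column pattern (the column names type1..type32 are assumptions), using an AFTER INSERT trigger so the new row's id is already populated:

        DELIMITER //
        CREATE TRIGGER main_ai AFTER INSERT ON main
        FOR EACH ROW
        BEGIN
            IF NEW.type1 = 'X' THEN
                -- look up the type id and insert the link in one statement
                INSERT INTO linked (main_id, type_id)
                SELECT NEW.id, t.id FROM types t WHERE t.type = 'type1';
            END IF;
            -- ...repeat for type2 through type32 (or generate this trigger body from a script)
        END//
        DELIMITER ;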

    Read the article

  • EF4 POCO Not Updating Navigation Property On Save

    - by Gavin Draper
    I'm using EF4 with POCO objects. The two tables are as follows:

        Service: ServiceID, Name, StatusID
        Status:  StatusID, Name

    The POCO objects look like this:

        Service: ServiceID, Status, Name
        Status:  StatusID, Name

    with Status on the Service object being a navigation property of type Status. In my service repository I have a save method that takes a service object, attaches it to the context and calls save. This works fine for the service, but if the status for that service has been changed it does not get updated. My save method looks like this:

        public static void SaveService(Service service)
        {
            using (var ctx = Context.CreateContext())
            {
                ctx.AttachModify("Services", service);
                ctx.AttachTo("Statuses", service.Status);
                ctx.SaveChanges();
            }
        }

    The AttachModify method attaches an object to the context and sets it to modified. It looks like this:

        public void AttachModify(string entitySetName, object entity)
        {
            if (entity != null)
            {
                AttachTo(entitySetName, entity);
                SetModified(entity);
            }
        }

        public void SetModified(object entity)
        {
            ObjectStateManager.ChangeObjectState(entity, EntityState.Modified);
        }

    If I look at a SQL profile, it's not even including the navigation property in the update for the Service table; it never touches StatusID. It's driving me crazy. Any idea what I need to do to force the navigation property to update?
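
    One possible way around this (a sketch, under the assumption that the model can be changed to a foreign-key association): expose StatusID on the Service POCO and map it as an FK property. Marking the entity Modified then covers the scalar FK, so the UPDATE includes it without the relationship having to be tracked separately.

        public class Service
        {
            public int ServiceID { get; set; }
            public string Name { get; set; }
            public int StatusID { get; set; }            // FK property (assumed addition to the model)
            public virtual Status Status { get; set; }   // navigation kept for reads
        }

        // the repository then only needs:
        public static void SaveService(Service service)
        {
            using (var ctx = Context.CreateContext())
            {
                ctx.AttachModify("Services", service);   // state = Modified; StatusID is now a scalar
                ctx.SaveChanges();
            }
        }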

    Read the article

  • Combining two-part SQL query into one query

    - by user332523
    Hello, I have a SQL query that I'm currently solving by doing two queries. I am wondering if there is a way to do it in a single query that makes it more efficient. Consider two tables, Transaction_Entries and Transactions, defined below:

        Transactions
        - id
        - reference_number (varchar)

        Transaction_Entries
        - id
        - account_id
        - transaction_id (references Transactions table)

    Notes: there are multiple transaction entries per transaction, and some transactions are related and will have the same reference_number string. To get all transaction entries for account X, I would do:

        SELECT E.*, T.reference_number
        FROM Transaction_Entries E
        JOIN Transactions T ON (E.transaction_id = T.id)
        WHERE E.account_id = X

    The next part is the hard part. I want to find all related transactions, regardless of the account id. First I make a list of all the unique reference numbers found in the previous result set. Then for each one, I query all the transactions that have that reference number. Assume that I hold all the rows from the previous query in PreviousResultSet:

        UniqueReferenceNumbers = GetUniqueReferenceNumbers(PreviousResultSet)  // in Java
        foreach R in UniqueReferenceNumbers                                    // in Java
            SELECT * FROM Transaction_Entries WHERE transaction_id IN
                (SELECT id FROM Transactions WHERE reference_number = R)

    Any suggestions how I can put this into a single efficient query?
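
    A sketch of one way to collapse this into a single statement: join Transactions to itself on reference_number, so the inner pair finds the account's transactions and the outer pair fans out to everything sharing their reference numbers.

        SELECT DISTINCT E2.*, T2.reference_number
        FROM Transaction_Entries E
        JOIN Transactions T  ON T.id = E.transaction_id
        JOIN Transactions T2 ON T2.reference_number = T.reference_number
        JOIN Transaction_Entries E2 ON E2.transaction_id = T2.id
        WHERE E.account_id = ?   -- account X
        -- DISTINCT collapses duplicates when several of X's entries share a reference number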

    Read the article

  • php and mysql site design question

    - by Jacksta
    I am trying to build a website with MySQL and PHP. This is the first site I have attempted, so I want to write a little plan and get some feedback. The site allows users to add some text in a text field as a "comment". Once the comment has been entered into the site it is added to the database, where it can be voted for by other users. When a new comment has been added to the database it needs to create a new page, e.g. www.xxxxx.com/commentname or www.xxxxxx.com/?id=99981. There will be a list of comments in the database along with the number of votes for each comment. The home page will have two functions: 1) allow users to add a comment; 2) display two tables, each with 20 rows, containing the most popular comments and the most recent comments. Each comment will generate its own page where the comment will be displayed. Here users can read the comment and vote for it if they wish. Please help me out by explaining how to do the following:

    - Generate a new page whenever a comment is added to the database
    - Add a vote to the vote count in the comment database
    - Display the top 20 most popular comments by number of votes
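
    A sketch of the usual pattern (the table and column names here are assumptions): no physical page is generated per comment; one comment.php script looks the comment up by the id in the URL (the www.xxxxxx.com/?id=99981 style), and the other two tasks are single queries.

        -- register a vote for a comment
        UPDATE comments SET votes = votes + 1 WHERE id = ?;

        -- top 20 most popular
        SELECT id, comment_text, votes FROM comments ORDER BY votes DESC LIMIT 20;

        -- 20 most recent
        SELECT id, comment_text, votes FROM comments ORDER BY created_at DESC LIMIT 20;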

    Read the article

  • Binding value for NSTableView, but tooltip gets set as well

    - by Mark
    I've set up an NSTableView in Interface Builder to be populated from an NSArray. Each value of the array represents one row in the table. The value is bound correctly, but as a side effect, the table cell's tooltip is set to the string representation of the bound object. In my case, the NSArray contains NSDictionary objects and the tooltip looks like it could be the [... description] output of that dictionary. Very ugly... I don't want the tooltip to be set at all. I have other tables that have plain NSString values bound to them and they don't get a tooltip set automatically. Is there some Interface Builder magic going on? I tried to start with a blank project - same problem. I should add that the table cell is a custom implementation of NSTextFieldCell that uses an NSButtonCell instance to draw an image and a label into the table. The values are retrieved from the dictionary bound as the value. Why is the tooltip set when I only bind the "value" attribute? Thanks in advance!

    Read the article

  • Big-O for calculating all routes from GPS data

    - by HH
    A non-critical GPS module uses lists because it needs to be modifiable: new routes added, new distances calculated, continuous comparisons. Well, so I thought, but my team member wrote something I find very hard to follow. His pseudo code:

        int k = 0;
        a[][] <- create mapModuleNearbyDotList array   // CPU O(n)
        for (j = 1 to n)                               // O(n log(m))
            for (i = 1 to n)
                for (k = 1 to n)
                    if (dot is nearby)
                        adj[i][j] = min(adj[i][j], adj[i][k] + adj[k][j]);

    His ideas:

    - transformations of lists to tables
    - his worst-case time complexity is O(n^3), where n is the number of elements in his so-called table
    - exception to the last point with a finite structure: O(m log(n)), where n is the number of vertices and m is the number of neighbour vertices

    Questions about his ideas:

    - why waste resources transforming constantly modified lists to a table?
    - fast? the only point where I to some extent agree, but cannot understand
    - the same upper limit n for each for-loop -- perhaps he supposed it circular
    - why does the code take O(m log(n)) to proceed in time as a finite structure? The term "finite" may be wrong; "explicit"?

    Read the article

  • Managing libraries and imports in a programming language

    - by sub
    I've created an interpreter for a stupid programming language in C++ and the whole core structure is finished (tokenizer, parser, interpreter including symbol tables, core functions, etc.). Now I have a problem with creating and managing the function libraries for this interpreter (I'll explain what I mean by that later). Currently my core function handler is horrible:

        // Simplified version
        myLangResult SystemFunction( name, argc, argv )
        {
            if ( name == "print" )
            {
                if ( argc < 1 ) { Error('blah'); }
                cout << argv[ 0 ];
            }
            else if ( name == "input" )
            {
                if ( argc < 1 ) { Error('blah'); }
                string res;
                getline( cin, res );
                SetVariable( argv[ 0 ], res );
            }
            else if ( name == "exit" )
            {
                exit( 0 );
            }

    Now think of each else-if being 10 times more complicated, and there being 25 more system functions. Unmaintainable, feels horrible, is horrible. So I thought: how do I create some sort of libraries that contain all the functions and, when they are imported, initialize themselves and add their functions to the symbol table of the running interpreter? However, this is the point where I don't really know how to go on. What I wanted to achieve is that there is, e.g., an (external?) string library for my language, and it is imported from within a program in that language. Example:

        import string

        myString = "abcde"
        print string.at( myString, 2 )  # output: c

    My problems:

    - How to separate the function libs from the core interpreter and load them?
    - How to get all their functions into a list and add it to the symbol table when needed?

    What I was thinking of doing: at the start of the interpreter, as all libraries are compiled with it, every single function calls something like RegisterFunction( string namespace, myLangResult (*functionPtr) ), which adds itself to a list. When import X is then called from within the language, the list built with RegisterFunction is added to the symbol table. Disadvantages that spring to mind: all libraries are directly in the interpreter core, so size grows and it will definitely slow it down.
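
    A minimal sketch of the registry idea (all names are hypothetical, and myLangResult is stubbed out): a map from qualified function name to function pointer, which each built-in library fills in from an init function that import dispatches to. Call dispatch then becomes one map lookup instead of an else-if chain.

        #include <iostream>
        #include <map>
        #include <string>
        #include <vector>

        typedef int myLangResult;  // stand-in for the interpreter's real result type
        typedef myLangResult (*NativeFn)(const std::vector<std::string>& argv);

        // function-local static avoids initialization-order problems
        static std::map<std::string, NativeFn>& Registry() {
            static std::map<std::string, NativeFn> r;
            return r;
        }

        static void RegisterFunction(const std::string& qualifiedName, NativeFn fn) {
            Registry()[qualifiedName] = fn;
        }

        static myLangResult StringAt(const std::vector<std::string>& argv) {
            std::cout << argv.at(0).at(std::stoi(argv.at(1)));  // string.at(s, i)
            return 0;
        }

        // called when the interpreter executes "import string"
        static void InitStringLib() {
            RegisterFunction("string.at", &StringAt);
        }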

    Read the article

  • I have created a PHP script and I am lacking to extract the primary key, I have given flow below, pl

    - by Parth
    I am using a MySQL DB, working with Joomla. My requirement is to track activity like insert/update/delete on any table and store it in another audit table using triggers, i.e. I am doing auditing. About the DB's table structure: a few tables don't have any PK nor an auto-increment key. The flow of my script is:

    - I fetch all tables from the DB.
    - I check whether the table has any trigger or not. If yes, it moves on to check the next table, and so on.
    - If it doesn't find any trigger, it creates the triggers for the table, such that:
      - it first checks whether the table has a primary key (for recording the id in the tracking audit table for every change made);
      - if it has a primary key, it uses it in the creation of the trigger;
      - if it doesn't find any PK, it proceeds with creating the trigger without inserting any id in the audit table.

    Now here is my problem: I need the PK every time so that I can record the id of any particular table in which the insert/update/delete is performed, so that I can later use this audit track table to replicate into the production DB. As I have mentioned, some tables have no PK or auto-increment key available, so what should I do to get the particular id in which the change is done? Please guide me, geeks!
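
    One possible way out (a sketch, assuming the schema may be altered): give every audited table that lacks one a surrogate auto-increment key before generating its triggers, so there is always a stable row id to record.

        ALTER TABLE some_table
            ADD COLUMN audit_id INT NOT NULL AUTO_INCREMENT,
            ADD PRIMARY KEY (audit_id);

        -- the generated trigger can then always log NEW.audit_id
        -- (or OLD.audit_id in the delete trigger)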

    Read the article

  • MSBuild: automate collecting of db migration scripts?

    - by P Dub
    Summary of environment:

    - ASP.NET web application (source stored in svn)
    - SQL Server database (database schema (tables/sprocs) stored in svn)
    - DB version is synced with the web application assembly version (stored in table 'CurrentVersion')
    - CI Hudson server that checks out the web app from the repo and runs a custom msbuild file to publish/package the app

    My msbuild script updates the assembly version of the web app (Major.Minor.Revision.Build) on each build. The 'Revision' is set to the currently checked out svn revision and the 'Build' to the Hudson build number (incremented on each automated build). This way I can match the app to a specific trunk revision and also get other build stats from the Hudson build number. I'd like to automate the collecting of migration scripts (updated sprocs etc.) to add to the zip package. I guess by comparing the svn revision of the db that has yet to be deployed to, against the revision being deployed, I can find what db files have changed in the trunk since the last deployment to that database/environment. This could easily be achieved by manually calling the svn diff -r REVNO:REVNO command to list changed .sql files. These files would then have to be added to the package by hand. It would be great if this could be automated. Firstly, I imagine I'll have to write a custom task to check the version of the db that has yet to be deployed to. After that I'm quite unsure. Does anyone have any suggestion on how this could be achieved through an msbuild task, either existing or custom? Finally, I'll have to autogenerate a script to add to the package that updates the database version table so as to be in sync with the application.
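
    A sketch of what the collection step might look like (the property names, repo layout and post-processing are all assumptions): shell out to svn diff --summarize between the deployed revision and the build revision, then read the changed paths back in for filtering and packaging.

        <Target Name="CollectMigrationScripts">
          <!-- DeployedRevision would come from the custom task that reads CurrentVersion on the target db -->
          <Exec Command="svn diff --summarize -r $(DeployedRevision):$(BuildRevision) $(SvnRepoUrl)/trunk/db &gt; ChangedDbFiles.txt" />
          <ReadLinesFromFile File="ChangedDbFiles.txt">
            <Output TaskParameter="Lines" ItemName="ChangedDbLines" />
          </ReadLinesFromFile>
          <!-- next: filter @(ChangedDbLines) down to .sql entries and copy them into the package folder -->
        </Target>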

    Read the article

  • Is .NET 4.0 just a show?

    - by Will Marcouiller
    I went to a presentation about the .NET Framework and Visual Studio 2010 last night. The topics were:

    ASP.NET 4 - some of the new features of ASP.NET 4:

    - more control over ClientIDs in WebForms;
    - output caching;
    - ... // some other stuff I don't really remember, being more in the framework and WinForms world.

    Entity Framework 2.0 (.NET 4.0):

    - T4 templates;
    - domain-driven development;
    - data-driven development;
    - contexts (edmx files);
    - some real-world limitations of EF4 (projects with over 70 to 75 tables);
    - better POCO support, despite there still being these hidden EntityObject and StructuralObject types, used differently in comparison to EF 1.0 so that they don't take away your inheritance;
    - allows you to easily choose how to persist the hierarchy into the underlying database;
    - code only (start working with EF4 directly from your code!);
    - Design by Contract (DbC).

    The most interesting feature, as far as I'm concerned, is everything related to parallelism being made easier. Which really works! No additional assembly references to add. In conclusion, I'm far from impressed with .NET Framework 4.0, apart from the fact that it makes some things easier to do. And when you're used to doing things a certain way, it doesn't really change much, in my opinion. Is it me who cannot foresee what .NET 4.0 has to offer? What would you guys base your decision on to migrate to .NET 4.0, in a practical way?
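
    Since the parallel extensions are the one part singled out as genuinely useful, a minimal sketch of what they add (the work items here are hypothetical; the types live in System.Threading.Tasks, with no extra assembly references in .NET 4.0):

        using System.Threading.Tasks;

        // data-parallel loop: the runtime partitions the index range across cores
        Parallel.For(0, items.Length, i =>
        {
            Process(items[i]);  // Process and items stand in for real work
        });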

    Read the article

  • entity framework insert bug

    - by tmfkmoney
    I found a previous question which seemed related, but there's no resolution and it's 5 months old, so I've opened my own version: http://stackoverflow.com/questions/1545583/entity-framework-inserting-new-entity-via-objectcontext-does-not-use-existing-e

    When I insert records into my database with the following, it works fine for a while and then eventually starts inserting null values in the referenced field. This typically happens after I do an update on my model from the database, although not always. I'm using a MySQL database for this. I have debugged the code and the values are being set properly before the save. They're just not getting inserted properly. I can always fix this issue by re-creating the model without touching any of my code. I have to recreate the entire model, though; I can't just dump the relevant tables and re-add them. This makes me think it doesn't have anything to do with my code but rather something with the Entity Framework. Does anyone else have this problem and/or solved it?

        using (var db = new MyModel())
        {
            var stocks = from record in query
                         let ticker = record.Ticker
                         select new
                         {
                             company = db.Companies.FirstOrDefault(c => c.ticker == ticker),
                             price = Convert.ToDecimal(record.Price),
                             date_stamp = Convert.ToDateTime(record.DateTime)
                         };

            foreach (var stock in stocks)
            {
                if (stock.company != null)
                {
                    var price = new StockPrice
                    {
                        Company = stock.company,
                        price = stock.price,
                        date_stamp = stock.date_stamp
                    };
                    db.AddToStockPrices(price);
                }
            }
            db.SaveChanges();
        }

    Read the article

  • nHibernate storage of an object with self referencing many children and many parents

    - by AdamC
    I have an object called MyItem that references children of the same type. How do I set up an NHibernate mapping file to store this item?

        public class MyItem
        {
            public virtual string Id { get; set; }
            public virtual string Name { get; set; }
            public virtual string Version { get; set; }
            public virtual IList<MyItem> Children { get; set; }
        }

    So roughly the hbm.xml would be:

        <class name="MyItem" table="tb_myitem">
          <id name="Id" column="id" type="String" length="32">
            <generator class="uuid.hex" />
          </id>
          <property name="Name" column="name" />
          <property name="Version" column="version" />
          <bag name="Children" cascade="all-delete-orphan" lazy="false">
            <key column="children_id" />
            <one-to-many class="MyItem" not-found="ignore"/>
          </bag>
        </class>

    This wouldn't work, I don't think. Perhaps I need to create another class, say MyItemChildren, use that as the Children member, and then do the mapping in that class? This would mean having two tables: one table holds the MyItem and the other table holds references between items. NOTE: a child item could have many parents.
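
    Given the note that a child can have many parents, this is a many-to-many from MyItem to itself, which maps without any extra class; only a link table is needed. A sketch (the table and column names are assumptions):

        <bag name="Children" table="tb_myitem_children" cascade="save-update" lazy="false">
          <key column="parent_id" />
          <many-to-many class="MyItem" column="child_id" />
        </bag>

    Note that all-delete-orphan would be wrong here: a child with several parents is not an orphan when one parent drops it.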

    Read the article

  • MS Excel automation without macros in the generated reports. Any thoughts?

    - by ezeki77
    Hello! I know that the web is full of questions like this one, but I still haven't been able to apply the answers I can find to my situation. I realize there is VBA, but I have always disliked having the program/macro living inside the Excel file, with the resulting bloat, security warnings, etc. I'm thinking along the lines of a VBScript that works on a set of Excel files while leaving them macro-free. Now, I've been able to "paint the first column blue" for all files in a directory following this approach, but I need to do more complex operations (charts, pivot tables, etc.), which would be much harder (impossible?) with VBScript than with VBA. For this specific example, knowing how to remove all macros from all files after processing would be enough, but all suggestions are welcome. Any good references? Any advice on how to best approach external batch processing of Excel files will be appreciated. Thanks! PS: I eagerly tried Mark Hammond's great PyWin32 package, but the lack of documentation and interpreter feedback discouraged me.
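
    For what it's worth, a sketch of the external-driver approach with PyWin32 (the paths are assumptions): the script drives the same COM object model VBA uses, so anything VBA can do is reachable, and the workbooks stay macro-free.

        import glob
        import os
        import win32com.client

        excel = win32com.client.Dispatch("Excel.Application")
        excel.Visible = False
        try:
            for path in glob.glob(r"C:\reports\*.xls"):
                wb = excel.Workbooks.Open(os.path.abspath(path))
                ws = wb.Worksheets(1)
                ws.Columns(1).Interior.ColorIndex = 5  # e.g. paint the first column blue
                wb.Save()
                wb.Close()
        finally:
            excel.Quit()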

    Read the article

  • Parse large XML file w/ script or use BioPython API ?

    - by jeremy04
    Hey guys, this is my first question on here. I'm trying to make a local copy of the UniProtKB in SQL. The UniProtKB is 2.1 GB, and it comes in XML and a special text format used by SwissProt. Here are my options:

    1) Use a SAX parser (XML) - I chose Ruby and Nokogiri. I started writing the parser, but my initial reaction: how would I map the XML schema to the SAX parser?

    2) BioPython - I already have BioSQL/Biopython installed, which literally created my SQL schema for me, and I was able to successfully insert one SwissProt/UniProt txt file into the database. I'm running it right now (crosses fingers) on the entire 2.1 GB. Here is the code I'm running:

        from Bio import SeqIO
        from BioSQL import BioSeqDatabase
        from Bio import SwissProt

        server = BioSeqDatabase.open_database(driver = "MySQLdb", user = "root",
                                              passwd = "", host = "localhost", db = "bioseqdb")
        db = server["uniprot"]
        iterator = SeqIO.parse(open("/path/to/uniprot_sprot.dat", "r"), "swiss")
        db.load(iterator)
        server.commit()

    Edit: it's now crashing because the transactions are getting locked (since the tables are InnoDB): Error Number: 1205 Lock wait timeout exceeded; try restarting transaction. I'm using MySQL version 5.1.43. Should I switch my database to PostgreSQL?
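
    One way to dodge the lock-wait timeout (a sketch; the batch size is an arbitrary assumption): commit in chunks instead of holding a single transaction open for the whole 2.1 GB file, reusing the server and db objects from the code above.

        from itertools import islice

        iterator = SeqIO.parse(open("/path/to/uniprot_sprot.dat", "r"), "swiss")
        while True:
            batch = list(islice(iterator, 1000))  # 1000 records per transaction
            if not batch:
                break
            db.load(iter(batch))
            server.commit()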

    Read the article

  • Advanced search engine or server for relational database [closed]

    - by Pawel
    In my current project we are storing a big volume of data in a relational database. One of the recent key requirements is to enrich the application by adding some advanced search capabilities. In the project, performance is one of the important factors, due to very large tables (10+ million records) with parent-child relations (for example, multi-level parent-child relationships where I am looking for all parents with specific children). The search engine should also be able to check these references for hits. I have found some potential engines on Stack Overflow; however, it looks like all of them are dedicated to text search rather than relational databases, and are hosted on Linux:

    - Lucene
    - Solr
    - Sphinx

    As I understand it, some of them use documents as the source for searching, but is it possible or efficient to create documents programmatically based on my relational data? As I am not familiar with all of their features/capabilities, can anyone make some recommendations or propose a different solution? To summarize my requirements:

    - framework/engine to search a relational database including descendants
    - support for Microsoft SQL Server
    - can be used in .NET applications
    - preferably hosted on Windows systems

    Do any of the above solve my problem? Do you know any better solution?

    Read the article

  • In mysql, is "explain ..." always safe?

    - by tye
    If I allow a group of users to submit "explain $whatever" to MySQL (via Perl's DBI using DBD::mysql), is there anything that a user could put into $whatever that would make any database changes, leak non-trivial information, or even cause significant database load? If so, how? I know that via "explain $whatever" one can figure out what tables/columns exist (you have to guess names, though) and roughly how many records are in a table or how many records have a particular value for an indexed field. I don't expect one to be able to get any information about the contents of unindexed fields. DBD::mysql should not allow multiple statements, so I don't expect it to be possible to run any query (just explain one query). Even subqueries should not be executed, just explained. But I'm not a MySQL expert and there are surely features of MySQL that I'm not even aware of. In trying to come up with a query plan, might the optimizer actually execute an expression in order to come up with the value that an indexed field is going to be compared against? In

        EXPLAIN SELECT * FROM atable WHERE class = somefunction(...)

    where atable.class is indexed and not unique, class='unused' would find no records but class='common' would find a million records. Might EXPLAIN evaluate somefunction(...)? And then, could somefunction(...) be written such that it modifies data?

    Read the article

  • MySQL move data from one table to another, matching ID's

    - by Reveller
    I have (among others) two MySQL tables with (among others) the following columns:

        tweets:
        id   text         from_user_id   from_user
        1    Cool tweet!  13295354       tradeny
        2    Tweeeeeeeet  43232544       bolleke
        3    Yet another  13295354       tradeny
        4    Something..  53546443       janusz4

        users:
        id   from_user   num_tweets   from_user_id
        1    tradeny     2235
        2    bolleke     432
        3    janusz4     5354

    I now want to normalize the tweets table, replacing tweets.from_user with an integer that matches users.id. Secondly, I want to fill in the corresponding users.from_user_id. Finally, I want to delete tweets.from_user_id, so that the end result would look like:

        tweets:
        id   text         from_user
        1    Cool tweet!  1
        2    Tweeeeeeeet  2
        3    Yet another  1
        4    Something..  3

        users:
        id   from_user   num_tweets   from_user_id
        1    tradeny     2235         13295354
        2    bolleke     432          43232544
        3    janusz4     5354         53546443

    My question is whether one could help me form the proper queries for this. I have only come this far:

        UPDATE tweets SET from_user =
            (SELECT id FROM users WHERE from_user = tweets.from_user) WHERE...
        UPDATE users SET from_user_id =
            (SELECT from_user_id FROM tweets WHERE from_user = tweets.from_user) WHERE...
        ALTER TABLE tweets DROP from_user_id

    Any help would be greatly appreciated :-)
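
    A sketch of queries that should do this, using MySQL's multi-table UPDATE. The order matters: users.from_user_id has to be filled while tweets.from_user still holds names.

        -- 1. copy the numeric ids into users first
        UPDATE users u
        JOIN tweets t ON t.from_user = u.from_user
        SET u.from_user_id = t.from_user_id;

        -- 2. then replace the name in tweets with the matching users.id
        UPDATE tweets t
        JOIN users u ON u.from_user = t.from_user
        SET t.from_user = u.id;

        -- 3. drop the now-redundant column
        ALTER TABLE tweets DROP COLUMN from_user_id;

        -- (tweets.from_user is varchar; optionally ALTER it to INT afterwards)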

    Read the article

  • UITableView cell appearance update not working - iPhone

    - by Jack Nutkins
    I have this piece of code which I'm using to set the alpha and accessibility of some of my table's cells, dependent on a value stored in user defaults:

        - (void)viewDidAppear:(BOOL)animated {
            [self reloadTableData];
        }

        - (void)reloadTableData {
            if ([[userDefaults objectForKey:@"canDeleteReceipts"] isEqualToString:@"0"]) {
                NSIndexPath *path = [NSIndexPath indexPathForRow:0 inSection:8];
                UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path];
                cell.contentView.alpha = 0.2;
                cell.userInteractionEnabled = NO;
            } else {
                NSIndexPath *path = [NSIndexPath indexPathForRow:0 inSection:8];
                UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path];
                cell.contentView.alpha = 1;
                cell.userInteractionEnabled = YES;
            }
            if ([[userDefaults objectForKey:@"canDeleteMileages"] isEqualToString:@"0"]) {
                NSIndexPath *path = [NSIndexPath indexPathForRow:1 inSection:8];
                UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path];
                cell.contentView.alpha = 0.2;
                cell.userInteractionEnabled = NO;
            } else {
                NSIndexPath *path = [NSIndexPath indexPathForRow:1 inSection:8];
                UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path];
                cell.contentView.alpha = 1;
                cell.userInteractionEnabled = YES;
            }
            if ([[userDefaults objectForKey:@"canDeleteAll"] isEqualToString:@"0"]) {
                NSIndexPath *path = [NSIndexPath indexPathForRow:2 inSection:8];
                UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path];
                cell.contentView.alpha = 0.2;
                cell.userInteractionEnabled = NO;
                NSLog(@"In Here");
            } else {
                NSIndexPath *path = [NSIndexPath indexPathForRow:2 inSection:8];
                UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path];
                cell.contentView.alpha = 1;
                cell.userInteractionEnabled = YES;
            }
        }

    The value stored in userDefaults is '0', so the cell in section 8, row 2 should be greyed out and disabled; however, this doesn't happen and the cell is still selectable with an alpha of 1... The log statement 'In Here' is printed, so it seems to be executing the right code, but that code doesn't seem to be doing anything. Can anyone explain what I've done wrong? Thanks, Jack
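
    One thing worth checking (an assumption about the setup): cellForRowAtIndexPath: returns nil for rows that aren't currently visible, and cells the table later rebuilds through its data source lose these changes. A sketch of moving the logic into the delegate callback, so it runs every time a cell is about to appear:

        - (void)tableView:(UITableView *)tableView
              willDisplayCell:(UITableViewCell *)cell
            forRowAtIndexPath:(NSIndexPath *)indexPath {
            if (indexPath.section == 8 && indexPath.row == 2) {
                BOOL disabled = [[userDefaults objectForKey:@"canDeleteAll"] isEqualToString:@"0"];
                cell.contentView.alpha = disabled ? 0.2 : 1.0;
                cell.userInteractionEnabled = !disabled;
            }
            // same pattern for rows 0 and 1 with their respective keys
        }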

    Read the article

  • how does MySQL implement the "group by"?

    - by user188916
    I read in the MySQL Reference Manual that when GROUP BY can make use of an index, it just does an index scan; otherwise it creates temporary tables and does things like filesort. I also read in another article that the GROUP BY result is sorted by the grouped columns by default, and that if an ORDER BY NULL clause is added, it won't do the filesort. The difference can be seen in the EXPLAIN output. So my question is: what is the difference between a GROUP BY clause with ORDER BY NULL and one without? I tried to use profiling to see what MySQL does in the background, and only see results like this. For the GROUP BY clause without ORDER BY NULL:

        preparing              0.000016
        Creating tmp table     0.000048
        executing              0.000009
        Copying to tmp table   0.000109
        Sorting result         0.000023   <--
        Sending data           0.000027

    For the clause with ORDER BY NULL:

        preparing              0.000016
        Creating tmp table     0.000052
        executing              0.000009
        Copying to tmp table   0.000114
        Sending data           0.000028

    So I guess that when ORDER BY NULL is added, it does not use the filesort algorithm; maybe when it creates the tmp table it uses an index as well, then uses the index to do the GROUP BY operation, and when complete it just reads the results from the table rows without sorting them. But my original assumption was that MySQL could quicksort the items and then do the GROUP BY, so the result would come out sorted as well. Any opinion appreciated, thanks.

    Read the article

  • CakePHP: Need help using saveField to update a fields in a belongsTo model

    - by afrisch
    I am trying to update a password in two different models/tables in CakePHP. I can update it fine in the parent model, but not the second model. Models:

        Users (hasOne Gameprofile), PK = id
        Gameprofiles (belongsTo User), FK = user_id

    Here is a stripped down version of my function in users_controller.php:

        function updatepass() {
            if (!empty($this->data)) {
                $this->User->id = $this->Auth->user('id');
                $this->User->saveField('sha1password', $this->Auth->password($this->data['User']['newpass']));
                $this->User->Gameprofile->saveField('plainpassword', $this->data['User']['newpass']);
            }
        }

    When I execute the function, the users table is updated fine, but the gameprofiles table is not updated; rather, Cake does an insert. SQL query log:

        1195 Query UPDATE `users` SET `sha1password` = 'e9443e9f5e1a07832aad1b2f84de1a666daf89b5' WHERE `users`.`id` = 30
        1195 Query INSERT INTO `gameprofiles` (`plainpassword`) VALUES ('abc')

    Is there a way to get CakePHP to do an update using saveField on a model with a belongsTo association? I've tried various ways to refer to user_id before executing the second saveField, but just can't seem to find the winning combination. Any help is greatly appreciated!
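
    A sketch of one likely fix (Model::field usage assumed from CakePHP 1.x): saveField inserts when the model has no current id set, so look up the profile row's id from the user_id first:

        $this->User->Gameprofile->id = $this->User->Gameprofile->field(
            'id',
            array('Gameprofile.user_id' => $this->Auth->user('id'))
        );
        $this->User->Gameprofile->saveField('plainpassword', $this->data['User']['newpass']);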

    Read the article

  • how to create aro using dbAcl using console

    - by Praveen kalal
    Hi, I am using CakePHP for my project, but I am having trouble creating ACLs from the console. When I run the following command:

        cake schema run create DbAcl

    it generates three tables in the database. But after putting the following code in users_controller.php and running this command:

        cake acl view aro

    it doesn't create the AROs.

        function index() {
            $aro =& $this->Acl->Aro;
            //pr($aro); exit;

            // Here's all of our group info in an array we can iterate through
            $groups = array(
                0 => array('alias' => 'admins'),
                1 => array('alias' => 'guests'),
                2 => array('alias' => 'mangers')
            );

            // Iterate and create ARO groups
            foreach ($groups as $data) {
                // Remember to call create() when saving in loops...
                $aro->create();
                // Save data
                $aro->save($data);
            }
        }

    Read the article

  • Mysql slow query: INNER JOIN + ORDER BY causes filesort

    - by Alexander
    Hello! I'm trying to optimize this query:

        SELECT `posts`.* FROM `posts`
        INNER JOIN `posts_tags` ON `posts`.id = `posts_tags`.post_id
        WHERE (((`posts_tags`.tag_id = 1)))
        ORDER BY posts.created_at DESC;

    The tables have 38k and 31k rows respectively, and MySQL uses "filesort", so it gets pretty slow. I tried different indexes, with no luck.

        CREATE TABLE `posts` (
          `id` int(11) NOT NULL auto_increment,
          `created_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_posts_on_created_at` (`created_at`),
          KEY `for_tags` (`trashed`,`published`,`clan_private`,`created_at`)
        ) ENGINE=InnoDB AUTO_INCREMENT=44390 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

        CREATE TABLE `posts_tags` (
          `id` int(11) NOT NULL auto_increment,
          `post_id` int(11) default NULL,
          `tag_id` int(11) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_posts_tags_on_post_id_and_tag_id` (`post_id`,`tag_id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=63175 DEFAULT CHARSET=utf8

    EXPLAIN shows:

        +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+
        | id | select_type | table      | type   | possible_keys            | key                      | key_len | ref                 | rows  | Extra                                                     |
        +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+
        | 1  | SIMPLE      | posts_tags | index  | index_post_id_and_tag_id | index_post_id_and_tag_id | 10      | NULL                | 24159 | Using where; Using index; Using temporary; Using filesort |
        | 1  | SIMPLE      | posts      | eq_ref | PRIMARY                  | PRIMARY                  | 4       | .posts_tags.post_id | 1     |                                                           |
        +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+
        2 rows in set (0.00 sec)

    What kind of index do I need to define to stop MySQL using filesort? Is it possible when the ORDER BY field is not in the WHERE clause?
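
    One common workaround (a sketch, and it does denormalize): the ORDER BY column lives on posts while the filter lives on posts_tags, so no single index can cover both sides of the join. Since posts_tags already carries a created_at column, indexing (tag_id, created_at) and sorting on that column lets MySQL both filter and sort from one index, assuming posts_tags.created_at tracks posts.created_at closely enough for the ordering (it is the tagging time, which may differ).

        ALTER TABLE posts_tags
            ADD INDEX index_posts_tags_on_tag_id_and_created_at (tag_id, created_at);

        SELECT posts.* FROM posts
        INNER JOIN posts_tags ON posts.id = posts_tags.post_id
        WHERE posts_tags.tag_id = 1
        ORDER BY posts_tags.created_at DESC;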

    Read the article

  • Database contents setting themselves to 0

    - by Luis Armando
    I have a database that contains 4 tables; however, I'm only using one of them, which is separate from the others. In this table I have 4 fields which are varchar and the rest are ints (11 other fields). When users fill in the DB everything gets saved correctly; however, it has happened 3 times so far that the int values in the database reset themselves to 0 without any apparent reason. At first I thought it was because those fields (where the numbers should go) were varchars, not ints. However, since I changed that, it has happened again. I've already double-checked my code and I have nothing that even updates or inserts a 0 value. Also, I'm using CodeIgniter and Active Record, which protect against SQL injection, AND I have XSS filtering enabled. Could anyone point out something I might be missing or a reason for this to be happening? Also, I'm pretty sure about the answer to this, but: is there ANY way to recover some data? Other than having to ask everyone to fill in everything again.. =/

    EDIT: The storage engine is MyISAM and the collation is latin1_swedish_ci; pack keys are default. For all intents and purposes it's a normal DB.

    Read the article
