Search Results

Search found 27339 results on 1094 pages for 'sql dmv'.

  • I want to do a SQL UPDATE loop statement using do-while in PHP

    - by Jean
    Hello, I want to loop the update statement, but it only loops once. Here is the code I am using:

        do {
            mysql_select_db($database_ll, $ll);
            $query_query = "update table set ex='$71[1]' where field='val'";
            $query = mysql_query($query_query, $ll) or die(mysql_error());
            $row_domain_all = mysql_fetch_assoc($query);
        } while ($row_query = mysql_fetch_assoc($query));

    Thanks, Jean
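    Incidentally, when the same change has to be applied to many rows, a single set-based UPDATE is usually preferable to a loop. A minimal sketch with hypothetical table and column names (a generic illustration, not a fix for the code above):

        -- Update every matching row in one statement instead of looping
        UPDATE some_table
        SET ex = 'new value'
        WHERE field IN ('val1', 'val2', 'val3');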

  • How can I use an array within a SQL query

    - by ThinkingInBits
    So I'm trying to take a search string (could be any number of words) and turn each value into a list to use in the IN clause of the following statement; in addition, I need a count of all these values to use with my HAVING COUNT filter:

        $search_array = explode(" ", $this->search_string);
        $tag_count = count($search_array);
        $db = Connect::connect();
        $query = "select p.id from photographs p
                  left join photograph_tags c
                         on p.id = c.photograph_id and c.value IN ($search_array)
                  group by p.id
                  having count(c.value) >= $tag_count";

    This currently returns no results. Any ideas?
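    For what it's worth, interpolating a PHP array straight into a string yields the literal text "Array", so the IN clause above can never match; the list has to be flattened into quoted, escaped values first. A minimal sketch of the SQL the database should ultimately receive, assuming the (hypothetical) search terms "red" and "barn":

        select p.id
        from photographs p
        left join photograph_tags c
               on p.id = c.photograph_id
              and c.value in ('red', 'barn')   -- one quoted, escaped value per search term
        group by p.id
        having count(c.value) >= 2;            -- 2 = number of search terms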

  • mysql: inserting data and autoincrement

    - by every_answer_gets_a_point
    I am converting from Access to MySQL. I have a table in Access where one of the columns is an AutoNumber. When I transfer the data into the MySQL database (where I also have a column that is AUTO_INCREMENT), should I be transferring the AutoNumber data into the AUTO_INCREMENT column, or will it auto-increment itself? How do I ensure that, if I do not transfer the AutoNumber data from Access, it auto-increments properly?
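    For reference, MySQL accepts explicit values in an AUTO_INCREMENT column and moves the counter past them, so copying the Access AutoNumber values keeps any existing references to those ids intact. A minimal sketch with a hypothetical table:

        CREATE TABLE customers (
            id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(100)
        );

        -- Explicit ids copied from Access are stored as-is...
        INSERT INTO customers (id, name) VALUES (17, 'Alice'), (42, 'Bob');

        -- ...and the next auto-generated id continues after the largest one (43 here).
        INSERT INTO customers (name) VALUES ('Carol');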

  • Which MySQL query is faster?

    - by Camran
    I have a classified_id variable which matches one document in a MySQL table. I am currently fetching the information about that one record like this:

        SELECT * FROM table WHERE table.classified_id = $classified_id

    I wonder if there is a faster approach, for example like this:

        SELECT 1 FROM table WHERE table.classified_id = $classified_id

    Won't the last one only select one record, which is exactly what I need, so that it doesn't have to scan the entire table but instead stops searching for records after one is found? Or am I dreaming this? Thanks
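    For context, SELECT 1 only changes what is returned for each matching row, not how many rows are examined; it is an index on the filter column (or a LIMIT 1 clause) that lets MySQL stop early. A minimal sketch, using a hypothetical classifieds table:

        -- An index turns the lookup into an index seek instead of a full scan
        CREATE INDEX idx_classified_id ON classifieds (classified_id);

        -- LIMIT 1 tells MySQL to stop after the first matching row
        SELECT * FROM classifieds WHERE classified_id = 123 LIMIT 1;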

  • Database design advice needed.

    - by user346271
    Hi all, I'm a lone developer for a telecoms company, and am after some database design advice from anyone with a bit of time to answer.

    I am inserting ~2 million rows each day into one table; these tables then get archived and compressed on a monthly basis. Each monthly table contains ~15,000,000 rows, and this is increasing month on month. For every insert I do above, I combine the data from rows which belong together and create another "correlated" table. This table is currently not being archived, as I need to make sure I never miss an update to the correlated table (hope that makes sense), although in general this information should remain fairly static after a couple of days of processing.

    All of the above is working perfectly. However, my company now wishes to perform some stats against this data, and these tables are getting too large to provide the results in what would be deemed a reasonable time, even with the appropriate indexes set. So I guess after all the above my question is quite simple: should I write a script which groups the data from my correlated table into smaller tables, or should I store the query result sets in something like memcache? I'm already using MySQL's query cache, but due to having limited control over how long the data is stored for, it's not working ideally.

    The main advantages I can see of using something like memcache:
    - No blocking on my correlated table after the query has been cached.
    - Greater flexibility in sharing the collected data between the backend collector and front-end processor (i.e. custom reports could be written in the backend and the results of these stored in the cache under a key which then gets shared with anyone who would want to see the data of this report).
    - Redundancy and scalability if we start sharing this data with a large number of customers.

    The main disadvantages I can see of using something like memcache:
    - Data is not persistent if the machine is rebooted / the cache is flushed.

    The main advantages of using MySQL:
    - Persistent data.
    - Fewer code changes (although adding something like memcache is trivial anyway).

    The main disadvantages of using MySQL:
    - Have to define table templates every time I want to store a new set of grouped data.
    - Have to write a program which loops through the correlated data and fills these new tables.
    - Will potentially still grow slower as the data continues to be filled.

    Apologies for quite a long question. It's helped me to write down these thoughts here anyway, and any advice/help/experience with dealing with this sort of problem would be greatly appreciated. Many thanks. Alan
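    One common middle ground is a pre-aggregated summary table that a scheduled job refreshes from the correlated table, so the stats queries only ever touch a small table while the data stays persistent in MySQL. A minimal sketch, with hypothetical table and column names:

        -- Hypothetical daily roll-up of the correlated data
        CREATE TABLE correlated_daily_stats (
            stat_date   DATE   NOT NULL,
            unit_id     INT    NOT NULL,
            event_count BIGINT NOT NULL,
            PRIMARY KEY (stat_date, unit_id)
        );

        -- Refresh one day's figures (run from cron once the data has settled)
        REPLACE INTO correlated_daily_stats (stat_date, unit_id, event_count)
        SELECT event_date, unit_id, COUNT(*)
        FROM correlated_data
        WHERE event_date = CURRENT_DATE - INTERVAL 1 DAY
        GROUP BY event_date, unit_id;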

  • Performance issue: difference between select s.* and select *

    - by kamil
    Recently I had a problem with the performance of my query. The thing is described here: poor Hibernate select performance comparing to running directly - how debug? After a long time of struggling, I've finally discovered that a query with a select prefix like:

        select sth.* from Something as sth...

    is 300 times slower than a query started this way:

        select * from Something as sth..

    Could somebody help me and answer why that is so? Some external documents on this would be really useful.

    The tables used for testing were: SALES_UNIT contains some basic info about a sales unit node, such as its name and so on. The only association is to the table SALES_UNIT_TYPE, as ManyToOne. The primary key is ID plus the field VALID_FROM_DTTM, which is a date. SALES_UNIT_RELATION contains the PARENT-CHILD relation between sales unit nodes. It consists of SALES_UNIT_PARENT_ID, SALES_UNIT_CHILD_ID and VALID_TO_DTTM/VALID_FROM_DTTM. No association with any tables. The PK here is ..PARENT_ID, ..CHILD_ID and VALID_FROM_DTTM.

    The actual queries I ran were:

        select s.* from sales_unit s
        left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id)
        where r.sales_unit_child_id is null

        select * from sales_unit s
        left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id)
        where r.sales_unit_child_id is null

    Same query, both use a left join, and the only difference is the select list.
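    When two otherwise identical queries differ this much, comparing their execution plans is usually the quickest way to see why. A minimal sketch, assuming a database whose plan command is EXPLAIN (the exact syntax varies by vendor):

        EXPLAIN select s.* from sales_unit s
        left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id)
        where r.sales_unit_child_id is null;

        EXPLAIN select * from sales_unit s
        left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id)
        where r.sales_unit_child_id is null;

    If the plans differ, for example an index being used for one select list but not the other, that difference rather than Hibernate is the place to look.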

  • How to do a multi-row insert and obtain the ids

    - by liysd
    Hi, I want to insert some data into a table (id PK auto-increment, val) using a multi-row insert:

        INSERT INTO tab (val) VALUES (1), (2), (3)

    Is it possible to obtain a list of the last inserted ids? I'm asking because I'm not sure they will all be of the form (n, n+1, n+2). I use MySQL InnoDB.
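    For reference, LAST_INSERT_ID() returns the id generated for the first row of a multi-row INSERT, and with InnoDB's "consecutive" auto-increment lock mode (innodb_autoinc_lock_mode = 1, the traditional default) a single such statement is given consecutive ids, so the whole range can be derived. A minimal sketch:

        INSERT INTO tab (val) VALUES (1), (2), (3);

        -- First generated id and number of rows just inserted; the new ids span
        -- first_id .. first_id + inserted_rows - 1
        SELECT LAST_INSERT_ID() AS first_id, ROW_COUNT() AS inserted_rows;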

  • how do I integrate the aspnet_users table (asp.net membership) into my existing database

    - by ooo
    I have a database that already has a users table with these columns: userID (int), loginName (string), First (string), Last (string). I just installed the ASP.NET membership tables. Right now all of my tables are joined to my users table, foreign-keyed into the userID field. How do I integrate the aspnet_users table into my schema? Here are the ideas I thought of:

    - Add a membership_id field to my users table and, on new inserts, include that new field in my users table. This seems like the cleanest way, as I don't need to break any existing relationships.
    - Break all existing relationships and move all of the fields in my users table into the aspnet_users table. This seems like a pain but ultimately will lead to the most simple, normalized solution.

    Any thoughts?
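    A minimal sketch of the first option (the users column name is hypothetical; aspnet_Users keys its rows on the uniqueidentifier column UserId):

        -- Link each existing user row to its membership row
        ALTER TABLE users ADD membership_id uniqueidentifier NULL;

        ALTER TABLE users
            ADD CONSTRAINT FK_users_aspnet_Users
            FOREIGN KEY (membership_id) REFERENCES dbo.aspnet_Users (UserId);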

  • How can I add an extra select to this query?

    - by BulgedSnowy
    I've got three related tables:

        images: id | filename | filesize | ...
        nodes:  image_id | tag_id
        tags:   id | name

    And I'm using this query to search for images containing x tags:

        SELECT images.* FROM images
        INNER JOIN nodes ON images.id = nodes.image_id
        WHERE tag_id IN (SELECT tags.id FROM tags WHERE tags.tag IN ("tag1","tag2"))
        GROUP BY images.id
        HAVING COUNT(*) = 2

    The problem is that I also need to retrieve all the tags contained by each retrieved image, and I need this in the same query. This is the actual query which retrieves all tags contained by the image:

        SELECT tag FROM nodes
        JOIN tags ON nodes.tag_id = tags.id
        WHERE image_id = images.id AND nodes.private = images.private
        ORDER BY tag

    How can I mix these two to have only one query?
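    One way to fold the tag list into the first query in MySQL is GROUP_CONCAT, which aggregates every tag name per image while the HAVING clause still restricts the result to images carrying all of the searched tags. A minimal sketch (it uses the name column from the table outline above, assumes each tag is linked at most once per image, and leaves out the private column):

        SELECT images.*, GROUP_CONCAT(tags.name ORDER BY tags.name) AS all_tags
        FROM images
        INNER JOIN nodes ON images.id = nodes.image_id
        INNER JOIN tags  ON nodes.tag_id = tags.id
        GROUP BY images.id
        HAVING SUM(tags.name IN ('tag1', 'tag2')) = 2;   -- 2 = number of searched tags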

  • Getting match counts in a query on a large table is very slow

    - by Roy Roes
    I have a MySQL table "items" with two integer fields: seid and tiid. The table has about 35,000,000 records, so it's very large.

        seid  tiid
        -----------
        1     1
        2     2
        2     3
        2     4
        3     4
        4     1
        4     2

    The table has a primary key on both fields, an index on seid and an index on tiid. Someone types in one or more tiid values and I would like to get the seid with the most matches. For example, when someone types 1,2,3, I would like to get seid 2 and 4 as the result; they both have two matches on the tiid values. My query so far:

        SELECT COUNT(*) as c, seid
        FROM items
        WHERE tiid IN (1,2,3)
        GROUP BY seid
        HAVING c = (SELECT COUNT(*) as c, seid
                    FROM items
                    WHERE tiid IN (1,2,3)
                    GROUP BY seid
                    ORDER BY c DESC
                    LIMIT 1)

    But this query is extremely slow, because of the large table. Does anyone know how to construct a better query for this purpose?
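    One way to avoid running the aggregation twice is to compute the counts once, ordered from highest to lowest, and let the application keep the leading rows that share the top count; a composite index on (tiid, seid) lets MySQL answer the grouped query from the index alone. A minimal sketch:

        -- Covering index: the query below can be satisfied without touching the table rows
        CREATE INDEX idx_tiid_seid ON items (tiid, seid);

        SELECT seid, COUNT(*) AS c
        FROM items
        WHERE tiid IN (1, 2, 3)
        GROUP BY seid
        ORDER BY c DESC;

    The caller then reads rows while c equals the first row's c and stops at the first smaller value.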

  • MySQL "NULL" questions

    - by Camran
    I have a table with several columns. Sometimes some of these column fields may be empty (i.e. I won't use them in some cases). My questions:

    - Would it be smart to set them to NULL in phpMyAdmin?
    - What does the "NULL" property actually do?
    - Would I gain anything at all by setting them to NULL?
    - Is it possible to use a NULL field the same way even though it is set to NULL?
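    For background, a NULL-able column stores "no value" as something distinct from an empty string or zero, and NULLs behave differently in comparisons, which is the main practical consequence. A small sketch:

        CREATE TABLE example (
            id    INT PRIMARY KEY,
            notes VARCHAR(255) NULL          -- column may hold "no value at all"
        );

        INSERT INTO example (id, notes) VALUES (1, NULL), (2, '');

        -- NULL never matches an ordinary comparison; it needs IS NULL / IS NOT NULL
        SELECT * FROM example WHERE notes = '';        -- returns only id 2
        SELECT * FROM example WHERE notes IS NULL;     -- returns only id 1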

  • Help me choose between XML and SQLite on Android

    - by Ngetha
    I have an Android app that periodically, say once a week, downloads content from a server as XML. The content is used by the app; different Activities use different parts of the content. My question is a design one: should I save the data in SQLite or just keep it as an XML file, and which one would be faster to read? The app can only use one content piece at a time, which means subsequent XML content downloads replace the old one.

  • Running an application from a USB device...

    - by Workshop Alex
    I'm working on a proof-of-concept application containing a WCF service with a console host and a client, both on a single USB device. On the same device I will also have the client application which will connect to this service. The service uses the Entity Framework to connect to the database, which in this POC will just return a list of names. If it works, it will be used for a larger project. Creating the client and service was easy, and this works well from USB. But getting the service to connect to the database isn't. I've found this site, which suggests that I should modify machine.config, but that stops the XCopy deployment. This project cannot change any setting of the PC, so this suggestion is bad. I cannot create a deployment setup either. The whole thing just needs to run from the USB disk. So, how do I get it to run? (The service just selects a list of names from the database, which it returns to the client. If this POC works, it will do far more complex things!)

  • Manual Linq to SQL entity framework mapping

    - by kprobst
    I've been playing with the O/R designer in VS and I was wondering if someone could shed come light on this. I'm used to OR mappers that are largely manual (homegrown and e.g., NHibernate). I don't mind encoding the entity classes myself, since they don't change all that often to begin with, and I have this irrational fear of designers and auto generated code as it is. I have noticed that the generated entity classes contain a lot of boilerplate extensibility methods, e.g. On[Property]Changed() and so on where [Property] is a mapped member of the class. These are placed in the setters of the property accessors. I assume it's OK if I don't include these when I do my hand coding, correct? They would be nice if I needed some sort of interception pattern but that's certainly not the case. I guess I just need to know if any of those methods are required by the entity framework to keep track of changes to the mapping types in order for things to work when updating the database. Thanks!

  • Join condition in SQL Server

    - by Pallavi
    After applying a join condition on two tables, I want the records which have the maximum value among the records of the left table. My query:

        SELECT a1.*, t.*,
               (a1.trnratefrom - t.trnratefrom) AS minrate,
               (a1.trnrateto - t.trnrateto) AS maxrate
        FROM (SELECT a.srno, trndate, b.trnsrno,
                     Upper(Rtrim(Ltrim(b.trnstate))) AS trnstate,
                     Upper(Rtrim(Ltrim(b.trnarea))) AS trnarea,
                     Upper(Rtrim(Ltrim(b.trnquality))) AS trnquality,
                     Upper(Rtrim(Ltrim(b.trnlength))) AS trnlength,
                     Upper(Rtrim(Ltrim(b.trnunit))) AS trnunit,
                     b.trnratefrom, b.trnrateto, a.remark, entdate
              FROM mstprodrates a
              INNER JOIN trnprodrates b ON a.srno = b.srno) a1
        INNER JOIN (SELECT c.srno, trndate, d.trnsrno,
                           Upper(Rtrim(Ltrim(d.trnstate))) AS trnstate,
                           Upper(Rtrim(Ltrim(d.trnarea))) AS trnarea,
                           Upper(Rtrim(Ltrim(d.trnquality))) AS trnquality,
                           Upper(Rtrim(Ltrim(d.trnlength))) AS trnlength,
                           Upper(Rtrim(Ltrim(d.trnunit))) AS trnunit,
                           d.trnratefrom, d.trnrateto, c.remark, entdate
                    FROM mstprodrates c
                    INNER JOIN trnprodrates d ON c.srno = d.srno) AS t
               ON  a1.trnstate = t.trnstate
               AND a1.trnquality = t.trnquality
               AND a1.trnunit = t.trnunit
               AND a1.trnlength = t.trnlength
               AND a1.trnarea = t.trnarea
               AND a1.remark = t.remark
        WHERE t.srno = (SELECT MAX(srno) FROM a1 WHERE srno < a1.srno)

  • History tables pros, cons and gotchas - using triggers, sproc or at application level.

    - by Nathan W
    I am currently playing around with the idea of having history tables for some of my tables in my database. Basically I have the main table and a copy of that table with a modified date and an action column to store what action was performed, e.g. Update, Delete or Insert.

    So far I can think of three different places where you can do the history table work:

    - Triggers on the main table for update, insert and delete (database).
    - Stored procedures (database).
    - Application layer (application).

    My main question is: what are the pros, cons and gotchas of doing the work in each of these layers? One advantage I can think of for the trigger approach is that integrity is always maintained no matter what program is implemented on top of the database.
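    As a point of reference for the trigger option, a minimal sketch of an AFTER UPDATE trigger that copies the previous row version into a history table (MySQL-flavoured syntax; table and column names are hypothetical):

        CREATE TABLE customers (
            id   INT PRIMARY KEY,
            name VARCHAR(100)
        );

        CREATE TABLE customers_history (
            id          INT          NOT NULL,
            name        VARCHAR(100),
            action      VARCHAR(10)  NOT NULL,
            modified_at DATETIME     NOT NULL
        );

        -- Whenever a row changes, record what it looked like before the change
        CREATE TRIGGER customers_au
        AFTER UPDATE ON customers
        FOR EACH ROW
            INSERT INTO customers_history (id, name, action, modified_at)
            VALUES (OLD.id, OLD.name, 'UPDATE', NOW());

    Corresponding AFTER INSERT and AFTER DELETE triggers would record the other two actions.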

  • 2-column table with two foreign keys. Performance/design question.

    - by Emanuel
    Hello everyone! I recently ran into a quite complex problem and after looking around a lot I couldn't find a solution to it. I've found answers to my questions many times before on stackoverflow.com, so I decided to post here.

    I'm making a user/group management system for a web-based project, and I'm storing all related data in a PostgreSQL database. This system relies on three tables: USERS, GROUPS and GROUP_USERS. The first two tables simply define all the users and all the groups on the site, and the last table, GROUP_USERS, stores the groups every user is part of. It only has two columns: USER_ID and GROUP_ID. Since every user can be a member of several groups, I decided to make a separate table for this purpose rather than storing a comma-separated column in the USERS table.

    Now, both columns are foreign keys, and I want to make them both primary keys as well, since each combination of USER_ID and GROUP_ID has to be unique, and if I give them the constraint UNIQUE, pgAdmin tells me that each table should have at least one primary key. But now I am stuck with what seems to be a lot of indexes and relations for a very small table only containing numbers. In the end, I want this table to be as fast as possible, even if it contains tens of thousands of rows. Size on disk shouldn't be a problem since it's just all numbers anyway, but it feels quite stupid to have a full-sized index referring to a smaller table.

    Should I stick with my current solution, store comma-separated values in a column in the USERS table, or is there any other solution I should be aware of?

    PS. I don't want to use an array column, even if they are supported by PostgreSQL. I want to be as generic as possible so I can switch databases later on, if necessary.

    EDIT: In other words, will using a compound primary key and two foreign keys in one table with only two columns have a negative impact on performance, rather than the opposite, due to the size of the generated index? Thank you!
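    For reference, the usual shape of such a join table in PostgreSQL is a composite primary key whose index already serves lookups by user, plus one extra index for lookups by group; despite the apparent overhead this stays small and fast. A minimal sketch (it assumes the referenced tables key on an id column):

        CREATE TABLE group_users (
            user_id  integer NOT NULL REFERENCES users (id),
            group_id integer NOT NULL REFERENCES groups (id),
            PRIMARY KEY (user_id, group_id)   -- also covers "groups of a user" lookups
        );

        -- Extra index for "users of a group" lookups
        CREATE INDEX group_users_group_id_idx ON group_users (group_id);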

  • Query crashes MS Access

    - by user284651
    THE TASK: I am in the process of migrating a DB from MS Access to Maximizer. In order to do this I must take 64 tables in MS Access and merge them into one. The output must be in the form of a TAB or CSV file, which will then be imported into Maximizer.

    THE PROBLEM: Access is unable to perform a query that is so complex, it seems, as it crashes any time I run the query.

    ALTERNATIVES: I have thought about a few alternatives, and would like to do the least time-consuming one out of these, while also taking advantage of any opportunities to learn something new.

    - Export each table into CSVs, import them into SQLite and then write a query with it to do the same as what Access fails to do (merge 64 tables).
    - Export each table into CSVs and write a script to access each one and merge the CSVs into a single CSV.
    - Somehow connect to the MS Access DB (API), and write a script to pull data from each table and merge them into a CSV file.

    QUESTION: What do you recommend?
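    If the merge amounts to stacking like-shaped tables on top of each other, the query (whether run in Access or in SQLite after importing the CSVs) is a chain of UNION ALL selects; a minimal sketch with hypothetical table and column names:

        SELECT contact_name, phone, city FROM table01
        UNION ALL
        SELECT contact_name, phone, city FROM table02
        UNION ALL
        SELECT contact_name, phone, city FROM table03;
        -- ...and so on for the remaining tables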

  • How do you write a recursive stored procedure

    - by Grayson Mitchell
    I simply want a stored procedure that calculates a unique id and inserts it. If it fails, it just calls itself to regenerate said id. I have been looking for an example but can't find one, and am not sure how I should get the sp to call itself and set the appropriate output parameter. I would also appreciate someone pointing out how to test this sp.

        ALTER PROCEDURE [dbo].[DataContainer_Insert]
            @SomeData varchar(max),
            @DataContainerId int out
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;
            BEGIN TRY
                SELECT @UserId = MAX(UserId) From DataContainer
                INSERT INTO DataContainer (UserId, SomeData) VALUES (@UserId, SomeData)
                SELECT @DataContainerId = scope_identity()
            END TRY
            BEGIN CATCH
                --try again
                exec DataContainer_Insert @DataContainerId, @SomeData
            END CATCH
        END
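    For comparison, a sketch of how the recursive call and the output parameter are usually wired together in T-SQL: @UserId is declared locally, parameters are passed in their declared order, and OUTPUT is repeated on the recursive EXEC so the generated id flows back to the original caller. The MAX(UserId) + 1 line is only a placeholder for whatever "calculate a unique id" should really do:

        CREATE PROCEDURE dbo.DataContainer_Insert
            @SomeData        varchar(max),
            @DataContainerId int OUTPUT
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @UserId int;

            BEGIN TRY
                SELECT @UserId = MAX(UserId) + 1 FROM DataContainer;   -- placeholder id calculation
                INSERT INTO DataContainer (UserId, SomeData) VALUES (@UserId, @SomeData);
                SELECT @DataContainerId = SCOPE_IDENTITY();
            END TRY
            BEGIN CATCH
                -- Retry by calling ourselves; OUTPUT keeps the id flowing back up the call chain
                EXEC dbo.DataContainer_Insert @SomeData, @DataContainerId OUTPUT;
            END CATCH
        END

    A quick way to exercise it:

        DECLARE @NewId int;
        EXEC dbo.DataContainer_Insert @SomeData = 'test row', @DataContainerId = @NewId OUTPUT;
        SELECT @NewId AS NewDataContainerId;

    Note that SQL Server caps nesting at 32 levels, so an unconditional retry-on-error can still fail if the error is not transient.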

  • Best .NET Solution for Frequently Changed Database

    - by sestocker
    I am currently architecting a small CRUD application. Their database is a huge mess and will be changing frequently over the course of the next 6 months to a year. What would you recommend for my data layer:

    1) ORM (if so, which one?)
    2) Linq2Sql
    3) Stored Procedures
    4) Parametrized Queries

    I really need a solution that will be dynamic enough (both fast and easy) where I can replace tables and add/delete columns frequently.

    Note: I do not have much experience with ORM (only a little SubSonic) and generally tend to use stored procedures, so maybe that would be the way to go. I would love to learn Linq2Sql or NHibernate if either would allow for the situation I've described above.
