Search Results

Search found 30398 results on 1216 pages for 'embedded sql'.


  • how to find end of quarter given a date in the quarter

    - by Ramy
    If I'm given a date (say @d = '11-25-2010'), how can I determine the end of the quarter from that date? I'd like a timestamp one second before midnight. I can get the quarter start with:

        SELECT DATEADD(qq, DATEDIFF(qq, 0, GETDATE()), 0) AS quarterStart

    which gives me '10-1-2010', and I use this for one second before midnight of a given day:

        SELECT DATEADD(second, -1, DATEADD(day, DATEDIFF(day, 0, @d) + 1, 0)) AS DayEnd

    In the end, a quarterEnd method would give me '12-31-2010 23:59:59'.
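
    Combining the two ideas gives a quarter-end expression; a minimal sketch, assuming the same SQL Server DATEADD/DATEDIFF idiom used in the question:

        -- Advance to the start of the NEXT quarter, then step back one second.
        SELECT DATEADD(second, -1, DATEADD(qq, DATEDIFF(qq, 0, @d) + 1, 0)) AS quarterEnd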

    Read the article

  • Optimize INSERT / UPDATE / DELETE operation

    - by clime
    I wonder if the following script can be optimized somehow. It writes a lot to disk because it deletes possibly up-to-date rows and reinserts them. I was thinking about applying something like "insert ... on duplicate key update" and found some possibilities for single-row updates, but I don't know how to apply it in the context of an INSERT INTO ... SELECT query.

        CREATE OR REPLACE FUNCTION update_member_search_index() RETURNS VOID AS $$
        DECLARE
            member_content_type_id INTEGER;
        BEGIN
            member_content_type_id := (SELECT id FROM django_content_type
                                       WHERE app_label = 'web' AND model = 'member');

            DELETE FROM watson_searchentry WHERE content_type_id = member_content_type_id;

            INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id,
                                            object_id_int, title, description,
                                            content, url, meta_encoded)
            SELECT 'default', member_content_type_id, web_member.id, web_member.id,
                   web_member.name, '',
                   web_user.email || ' ' || web_member.normalized_name || ' ' || web_country.name,
                   '', '{}'
            FROM web_member
            INNER JOIN web_user ON (web_member.user_id = web_user.id)
            INNER JOIN web_country ON (web_member.country_id = web_country.id)
            WHERE web_user.is_active = TRUE;
        END;
        $$ LANGUAGE plpgsql;

    EDIT: Schemas of web_member, watson_searchentry, web_user, web_country: http://pastebin.com/3tRVPPVi. (content_type_id, object_id_int) is a unique pair in watson_searchentry, but at the moment no index is present on it (there has been no use for one). This script should be run at most once a day for full rebuilds of the search index.
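
    An upsert would avoid rewriting unchanged rows. A minimal sketch, assuming PostgreSQL 9.5+ (for ON CONFLICT) and assuming a unique index is first added on the (content_type_id, object_id_int) pair the asker mentions; the index name is hypothetical:

        -- ON CONFLICT needs a unique index to arbitrate on.
        CREATE UNIQUE INDEX watson_searchentry_ct_obj
            ON watson_searchentry (content_type_id, object_id_int);

        -- Inside the function, replacing the DELETE + INSERT pair:
        INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id,
                                        object_id_int, title, description,
                                        content, url, meta_encoded)
        SELECT 'default', member_content_type_id, m.id, m.id, m.name, '',
               u.email || ' ' || m.normalized_name || ' ' || c.name, '', '{}'
        FROM web_member m
        INNER JOIN web_user u ON m.user_id = u.id
        INNER JOIN web_country c ON m.country_id = c.id
        WHERE u.is_active = TRUE
        ON CONFLICT (content_type_id, object_id_int)
        DO UPDATE SET title = EXCLUDED.title, content = EXCLUDED.content;

    Rows belonging to members that no longer exist would still need a separate DELETE, but unchanged rows are no longer rewritten.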

    Read the article

  • Simple aggregating query very slow in PostgreSQL, any way to improve?

    - by Ash
    Hi. I have a table which holds files and their types:

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties:

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ...  -- other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to run aggregating queries, for example to find the average size and standard deviation for all file types. But it's very slow: around 70 seconds for the query below. I understand it needs a sequential scan, but it still seems like too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files AS f, properties AS pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the explain:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow, or how to make it faster?
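
    One thing the plan itself suggests: the sequential scan over files accounts for roughly 62 of the 74 seconds, and the planner expects 1,140,941 rows in a table that actually returns 805,270, which often points to table bloat and stale statistics. A diagnostic sketch using stock PostgreSQL commands (no assumptions beyond the table name from the question):

        SELECT pg_size_pretty(pg_total_relation_size('files'));  -- bigger than ~800k rows should be?
        VACUUM FULL ANALYZE files;  -- rewrites the table and refreshes stats; takes an exclusive lock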

    Read the article

  • how to revert non-corrupted mssql database with stopat

    - by Yisman
    I have a good, working, valid, non-corrupted database in MSSQL that I want to revert to a point in time. How is that done? The standard RESTORE command requires a full backup as a starting point, and then log backups thereafter. I can't understand why this must be done from a backup: if my db is good and the logs are OK, why can't I just revert with a STOPAT from the live logs in the db? One DBA suggested that whenever I want to restore, I should then take a log backup and RESTORE with STOPAT. I believe that would work, but it sounds a little backwards. Any better ideas? Thank you very much.
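
    For reference, the DBA's suggestion looks something like this; a sketch only, with hypothetical database and file names, assuming a full backup already exists and the database uses the full recovery model:

        -- Capture the live log up to now (the "tail"), then restore the chain to a point in time.
        BACKUP LOG MyDb TO DISK = 'C:\backups\MyDb_tail.trn';
        RESTORE DATABASE MyDb FROM DISK = 'C:\backups\MyDb_full.bak' WITH NORECOVERY, REPLACE;
        RESTORE LOG MyDb FROM DISK = 'C:\backups\MyDb_tail.trn'
            WITH STOPAT = '2010-11-25T14:30:00', RECOVERY;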

    Read the article

  • Help converting subquery to query with joins

    - by Tim
    I'm stuck on a query with a join. The client's site is running MySQL 4, so a subquery isn't an option. My attempts to rewrite using a join aren't going too well. I need to select all of the contractors listed in the contractors table who are not in the contractors2label table with a given label ID and county ID. Yet, they might be listed in contractors2label with other label and county IDs.

    Table: contractors

        cID (primary, autonumber)
        company (varchar)
        ...etc...

    Table: contractors2label

        cID
        labelID
        countyID
        psID

    This query with a subquery works:

        SELECT company, contractors.cID
        FROM contractors
        WHERE contractors.complete = 1
          AND contractors.archived = 0
          AND contractors.cID NOT IN (
              SELECT contractors2label.cID
              FROM contractors2label
              WHERE labelID <> 1 AND countyID <> 1
          )

    I thought this query with a join would be the equivalent, but it returns no results. A manual scan of the data shows I should get 34 rows, which is what the subquery above returns.

        SELECT company, contractors.cID
        FROM contractors
        LEFT OUTER JOIN contractors2label ON contractors.cID = contractors2label.cID
        WHERE contractors.complete = 1
          AND contractors.archived = 0
          AND contractors2label.labelID <> 1
          AND contractors2label.countyID <> 1
          AND contractors2label.cID IS NULL
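
    The usual fix for this anti-join pattern is to move the filters on the outer table from WHERE into the ON clause, so that unmatched rows survive to the IS NULL test (in the version above, the WHERE conditions on contractors2label turn every NULL row into FALSE, hence zero results); a sketch:

        SELECT company, contractors.cID
        FROM contractors
        LEFT OUTER JOIN contractors2label
               ON contractors.cID = contractors2label.cID
              AND contractors2label.labelID <> 1
              AND contractors2label.countyID <> 1
        WHERE contractors.complete = 1
          AND contractors.archived = 0
          AND contractors2label.cID IS NULL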

    Read the article

  • History tables pros, cons and gotchas - using triggers, sproc or at application level.

    - by Nathan W
    I am currently playing around with the idea of having history tables for some of my tables in my database. Basically I have the main table and a copy of that table with a modified date and an action column to store what action was performed, e.g. Update, Delete or Insert.

    So far I can think of three different places where you can do the history-table work:

        1. Triggers on the main table for update, insert and delete. (Database)
        2. Stored procedures. (Database)
        3. Application layer. (Application)

    My main question is: what are the pros, cons and gotchas of doing the work in each of these layers? One advantage I can think of for the trigger approach is that integrity is always maintained no matter what program is implemented on top of the database.
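
    As a concrete illustration of the trigger option, here is a minimal sketch in SQL Server syntax with hypothetical table and column names (a real version would mirror every audited column):

        CREATE TRIGGER trg_Customer_History ON dbo.Customer
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- Rows only in `inserted` are inserts, rows in both pseudo-tables
            -- are updates, rows only in `deleted` are deletes.
            INSERT INTO dbo.Customer_History (CustomerId, Name, Action, ModifiedDate)
            SELECT i.CustomerId, i.Name,
                   CASE WHEN EXISTS (SELECT 1 FROM deleted d WHERE d.CustomerId = i.CustomerId)
                        THEN 'Update' ELSE 'Insert' END,
                   GETDATE()
            FROM inserted i
            UNION ALL
            SELECT d.CustomerId, d.Name, 'Delete', GETDATE()
            FROM deleted d
            WHERE NOT EXISTS (SELECT 1 FROM inserted i WHERE i.CustomerId = d.CustomerId);
        END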

    Read the article

  • Which MySQL line is faster?

    - by Camran
    I have a classified_id variable which matches one document in a MySQL table. I am currently fetching the information about that one record like this:

        SELECT * FROM table WHERE table.classified_id = $classified_id

    I wonder if there is a faster approach, for example like this:

        SELECT 1 FROM table WHERE table.classified_id = $classified_id

    Won't the last one only select 1 record, which is exactly what I need, so that it doesn't have to scan the entire table but instead stops searching for records after 1 is found? Or am I dreaming this? Thanks
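
    For what it's worth, SELECT 1 only changes what is returned for each matching row, not how many rows are examined. LIMIT is what stops the search after the first match, and an index on the column is what avoids the full scan in the first place; a sketch (the table name is hypothetical, since `table` is a placeholder in the question):

        ALTER TABLE classifieds ADD INDEX idx_classified_id (classified_id);
        SELECT * FROM classifieds WHERE classified_id = ? LIMIT 1;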

    Read the article

  • php while loop throwing an error over a $var < 10 statement

    - by William
    Okay, so for some reason this is giving me an error, as seen here: http://prime.programming-designs.com/test_forum/viewboard.php?board=0. However, if I take away the '&& $cur_row < 10' it works fine. Why is the '&& $cur_row < 10' causing me a problem?

        $sql_result = mysql_query("SELECT post, name, trip, Thread
                                   FROM (SELECT MIN(ID) AS min_id, MAX(ID) AS max_id, MAX(Date) AS max_date
                                         FROM test_posts GROUP BY Thread) t_min_max
                                   INNER JOIN test_posts ON test_posts.ID = t_min_max.min_id
                                   WHERE Board=".$board." ORDER BY max_date DESC", $db);
        $num_rows = mysql_num_rows($sql_result);
        $cur_row = 0;

        while($row = mysql_fetch_row($sql_result) && $cur_row < 10)
        {
            $sql_max_post_query = mysql_query("SELECT ID FROM test_posts WHERE Thread=".$row[3]."");
            $post_num = mysql_num_rows($sql_max_post_query);
            $post_num--;
            $cur_row++;
            echo ''.$cur_row.'<br/>';
            echo '<div class="postbox"><h4>'.$row[1].'['.$row[2].']</h4><hr />'
                 .$row[0].
                 '<br /><hr />[<a href="http://prime.programming-designs.com/test_forum/viewthread.php?thread='.$row[3].'">Reply</a>] '.$post_num.' posts omitted.</div>';
        }

    Read the article

  • 2-column table with two foreign keys. Performance/design question.

    - by Emanuel
    Hello everyone! I recently ran into a quite complex problem, and after looking around a lot I couldn't find a solution to it. I've found answers to my questions many times before on stackoverflow.com, so I decided to post here.

    I'm making a user/group management system for a web-based project, and I'm storing all related data in a PostgreSQL database. This system relies on three tables:

        USERS
        GROUPS
        GROUP_USERS

    The first two tables simply define all the users and all the groups on the site, and the last table, GROUP_USERS, stores the groups every user is part of. It only has two columns:

        USER_ID
        GROUP_ID

    Since every user can be a member of several groups, I decided to make a separate table for this purpose rather than storing a comma-separated column in the USERS table. Now, both columns are foreign keys, and I want to make them both primary keys as well, since each combination of USER_ID and GROUP_ID has to be unique; if I just give them the constraint UNIQUE, pgAdmin tells me that each table should have at least one primary key.

    But now I am stuck with what seems to be a lot of indexes and relations on a very small table containing only numbers. In the end, I want this table to be as fast as possible, even if it contains tens of thousands of rows. Size on disk shouldn't be a problem since it's all numbers anyway, but it feels quite stupid to have a full-sized index referring to a smaller table. Should I stick with my current solution, store comma-separated values in a column in the USERS table, or is there another solution I should be aware of?

    PS. I don't want to use an array column, even though they are supported by PostgreSQL. I want to be as generic as possible so I can switch databases later on, if necessary.

    EDIT: In other words, will using a compound primary key and two foreign keys in one table with only two columns have a negative impact on performance, rather than the opposite, due to the size of the generated index? Thank you!
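
    For concreteness, the standard junction-table shape under discussion looks like this; a sketch with hypothetical column names (the composite primary key doubles as the uniqueness constraint, and the second index covers lookups from the group side):

        CREATE TABLE group_users (
            user_id  INTEGER NOT NULL REFERENCES users(id),
            group_id INTEGER NOT NULL REFERENCES groups(id),
            PRIMARY KEY (user_id, group_id)
        );
        CREATE INDEX group_users_group_idx ON group_users (group_id, user_id);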

    Read the article

  • Is a primary key automatically an index?

    - by Lieven Cardoen
    If I run Profiler, then it suggests a lot of indexes like this one:

        CREATE CLUSTERED INDEX [_dta_index_Users_c_9_292912115__K1] ON [dbo].[Users]
        (
            [UserId] ASC
        ) WITH (SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF) ON [PRIMARY]

    UserId is the primary key of the table Users. Is this index better than the one already in the table:

        ALTER TABLE [dbo].[Users] ADD CONSTRAINT [PK_Users] PRIMARY KEY NONCLUSTERED
        (
            [UserId] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
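
    A note on what's actually being compared: a primary key is always enforced through a unique index, but not necessarily the clustered one, and PK_Users here is explicitly NONCLUSTERED. If the advice to cluster on UserId is taken, one way is to rebuild the constraint as clustered rather than adding a second index; a sketch (any foreign keys referencing PK_Users would have to be dropped and recreated around it):

        ALTER TABLE [dbo].[Users] DROP CONSTRAINT [PK_Users];
        ALTER TABLE [dbo].[Users] ADD CONSTRAINT [PK_Users] PRIMARY KEY CLUSTERED ([UserId] ASC);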

    Read the article

  • C# - Password Database

    - by user335932
    So I want to make a program that allows you to store and search user names/passwords for online sites they're signed up to. I know C# has some database options, but I don't know much about them. I also heard that it can read/write Excel files. What do you think is best for storing the data? ALSO, do databases need to be stored online on a server, or can they reside in the program files?

    Read the article

  • How to create unique user key

    - by Grayson Mitchell
    Scenario: I have a fairly generic table (Data) that has an identity column. The data in this table is grouped (let's say by city). The users need an identifier for printing on paper forms, etc. The users can only access their own city's data, so if they use the identity column for this purpose they will see odd numbers (e.g. a 'New York' user might see 1, 37, 2028... as the listed keys). Ideally they would see 1, 2, 3... (or something similar). The problem of course is concurrency; this being a web application, you can't just have something like:

        UserId = SELECT COUNT(*) + 1 FROM Data WHERE City = 'New York'

    Has anyone come up with any cunning ways around this problem?
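
    One common workaround is to keep the identity column as the real key and derive a per-city display number at read time rather than storing it; a sketch, assuming SQL Server 2005+ window functions and hypothetical column names:

        SELECT Id,
               ROW_NUMBER() OVER (PARTITION BY City ORDER BY Id) AS DisplayNumber
        FROM Data
        WHERE City = 'New York';

    The numbers are gap-free and concurrent inserts need no locking, though a row's display number can shift if earlier rows are ever deleted.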

    Read the article

  • Transaction within IF THEN ELSE doesn't commit

    - by boris callens
    In my T-SQL script I have an IF THEN ELSE structure that checks if a column already exists. If not, it creates the column and updates it.

        IF NOT EXISTS(
            SELECT 1
            FROM INFORMATION_SCHEMA.COLUMNS
            WHERE TABLE_NAME = 'tableName'
              AND COLUMN_NAME = 'columnName')
        BEGIN
            BEGIN TRANSACTION
                ALTER TABLE tableName
                ADD columnName int NULL
            COMMIT

            BEGIN TRANSACTION
                UPDATE tableName
                SET columnName = [something]
                FROM [subquery]
            COMMIT
        END

    This doesn't work because the column doesn't exist after the commit. Why doesn't the COMMIT commit?
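
    The failure here is usually not the transaction at all: the whole batch is parsed and bound before any of it runs, so the UPDATE referencing a not-yet-existing column fails at compile time. The common workaround is to push the UPDATE into a child batch that is only compiled when executed; a sketch (the SET expression is a placeholder, as in the question):

        IF NOT EXISTS(
            SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
            WHERE TABLE_NAME = 'tableName' AND COLUMN_NAME = 'columnName')
        BEGIN
            ALTER TABLE tableName ADD columnName int NULL;
            EXEC('UPDATE tableName SET columnName = 0');  -- 0 stands in for the question's [something] FROM [subquery]
        END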

    Read the article

  • A SELECT statement for MySQL

    - by Hossein
    I have this table:

        id, bookmarkID, tagID

    I want to fetch the top N bookmarkIDs for a given list of tags. Does anyone know a very fast solution for this? The table is quite large (12 million records). I am using MySQL.
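
    A common shape for this query, sketched with a hypothetical table name, ranking bookmarks by how many of the given tags they carry; with a composite index on (tagID, bookmarkID) it never has to touch the full 12M rows:

        SELECT bookmarkID, COUNT(DISTINCT tagID) AS matched
        FROM bookmark_tags
        WHERE tagID IN (1, 2, 3)
        GROUP BY bookmarkID
        ORDER BY matched DESC
        LIMIT 10;   -- N = 10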

    Read the article

  • Count total number of callers?

    - by Kristopher Ives
    I'm currently doing this query to find the guy who makes the most calls:

        SELECT `commenter_name`, COUNT(*) AS `calls`
        FROM `comments`
        GROUP BY `commenter_name`
        ORDER BY `calls` DESC
        LIMIT 1

    What I want now is to be able to find out how many total unique callers there are. I tried using DISTINCT but I didn't get anywhere.
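
    Counting distinct values is exactly what COUNT(DISTINCT ...) does; a minimal sketch against the same table:

        SELECT COUNT(DISTINCT `commenter_name`) AS unique_callers
        FROM `comments`;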

    Read the article

  • mysql join 3 tables and count

    - by air
    Please look at this image. There are 3 tables, and the output I want is:

        - uid from table1
        - industry from table3 for the same uid
        - count of fid from table2 for the same uid

    As in the sample example, the output will be 2 records. Thanks
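
    The image isn't available here, so this is only a sketch under assumed schemas, table1(uid, ...), table2(uid, fid), table3(uid, industry):

        SELECT t1.uid, t3.industry, COUNT(t2.fid) AS fid_count
        FROM table1 t1
        JOIN table3 t3 ON t3.uid = t1.uid
        LEFT JOIN table2 t2 ON t2.uid = t1.uid
        GROUP BY t1.uid, t3.industry;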

    Read the article

  • Monitoring Log Shipped Databases

    - by Registered User
    I need a consistent way to monitor databases that are read-only, log-shipped copies of production databases. In the past I have relied on the following methods:

        - Set the job that restores logs to the database to kick off another job as its last step.
        - Set the job that restores logs to the database to insert a record in a control table as its last step.
        - Query the msdb database to check the status of the job that restores logs to the database.
        - Query a control table inside the database itself that gets a value immediately before transaction logs are backed up.
        - Query MAX values from tables inside the database to see if it has recent changes.

    Although the above methods work, they can't be implemented for every log-shipped database that I query, for various reasons. What is the best method for monitoring the "data as of" date for a log-shipped database?
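
    One generic option that needs nothing inside the shipped database itself is to read the restore history SQL Server already keeps; a sketch using the stock msdb tables:

        SELECT rh.destination_database_name,
               MAX(rh.restore_date) AS last_log_restore
        FROM msdb.dbo.restorehistory rh
        WHERE rh.restore_type = 'L'   -- log restores only
        GROUP BY rh.destination_database_name;

    Joining backup_set_id to msdb.dbo.backupset and reading backup_finish_date gives the point in time the restored data actually represents, rather than when the restore job ran.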

    Read the article

  • .tpl files and website problem

    - by whitstone86
    Apologies if the title is in lowercase, but it's describing an extension format. I've started using Dwoo as my template engine for PHP, and am not sure how to convert my PHP files into .tpl templates. My site is similar to, but not the same as, http://library.digiguide.com/lib/programme/Medium-319648/Drama/ in its design (except that the colour scheme and site name are different, plus it's in PHP, so copyright issues are avoided here; the design arguably could be seen as parody even though the content is different).

    The database is called tvguide, and it has these tables.

    Programmes:

        House M.D.
        Medium
        Police Stop!
        American Dad!

    The table names of the above programmes are:

        housemdonair
        mediumonair
        policestopair
        americandad1

    Episodes: the table names for the above programmes' episode guides are:

        housemdepidata
        mediumepidata
        policestopepidata
        americandad1epidata

    All of them have the following columns:

        id (not an auto-increment, since I wish to dynamically generate a page from this)
        episodename
        seriesnumber
        episodenumber
        episodesynopsis (the above four after id do exactly as stated)

    I have a pagination script that works; it displays 20 records per page, as I want it to. This is called pmcPagination.php, but I won't post it in full since it would take up too much space. However, I'm trying to get pages filled in like this (OK, so the examples below are ASP.NET, but if there's a PHP/MySQL equivalent I would gratefully appreciate it!):

        http://library.digiguide.com/lib/episode/741168
        http://library.digiguide.com/lib/episode/714829

    with the episode detail and data. My site works, but it's fairly basic, and it's not online yet until my bugs are fixed. Mod_rewrite is enabled, so my site reads as:

        http://mytvguide.com/episode/123456
        http://mytvguide.com/programme/123456
        http://mytvguide.com/WorldInAction/123456/Documentary/

    I've tried looking on Google, but am not sure how to get this TV guide script to work at its best. I think .tpl and PHP/MySQL are the way to go. Any advice anyone has on making this project into a fully workable, ready-to-use site would be much appreciated; I've spent months refining this project!

    P.S. Apologies for the length of this; I hope it describes my project well.
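
    For the rewritten URLs, the lookup behind a page like /episode/123456 would just select the row by id; a sketch using one of the episode tables named above (the id value comes from the rewritten query string):

        SELECT episodename, seriesnumber, episodenumber, episodesynopsis
        FROM housemdepidata
        WHERE id = ?;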

    Read the article

  • Optimize SQL query (Facebook-like application)

    - by fabriciols
    My application is similar to Facebook, and I'm trying to optimize the query that gets user records. The user records are those where the user is src or dst. The src is in usermuralentry directly; the dst list is in usermuralentry_user. So an entry can have one src and many dst. I have these tables:

        mysql> desc usermuralentry;
        +-----------------+------------------+------+-----+---------+----------------+
        | Field           | Type             | Null | Key | Default | Extra          |
        +-----------------+------------------+------+-----+---------+----------------+
        | id              | int(11)          | NO   | PRI | NULL    | auto_increment |
        | user_src_id     | int(11)          | NO   | MUL | NULL    |                |
        | private         | tinyint(1)       | NO   |     | NULL    |                |
        | content         | longtext         | NO   |     | NULL    |                |
        | date            | datetime         | NO   |     | NULL    |                |
        | last_update     | datetime         | NO   |     | NULL    |                |
        +-----------------+------------------+------+-----+---------+----------------+
        10 rows in set (0.10 sec)

        mysql> desc usermuralentry_user;
        +-------------------+---------+------+-----+---------+----------------+
        | Field             | Type    | Null | Key | Default | Extra          |
        +-------------------+---------+------+-----+---------+----------------+
        | id                | int(11) | NO   | PRI | NULL    | auto_increment |
        | usermuralentry_id | int(11) | NO   | MUL | NULL    |                |
        | userinfo_id       | int(11) | NO   | MUL | NULL    |                |
        +-------------------+---------+------+-----+---------+----------------+
        3 rows in set (0.00 sec)

    And the following query to retrieve information for two users:

        mysql> explain SELECT *
               FROM usermuralentry AS a, usermuralentry_user AS b
               WHERE a.user_src_id IN (1, 2)
                  OR (a.id = b.usermuralentry_id AND b.userinfo_id IN (1, 2));

        +----+-------------+-------+------+------------------------------------------------------------------------------+------+---------+------+---------+------------------------------------------------+
        | id | select_type | table | type | possible_keys                                                                | key  | key_len | ref  | rows    | Extra                                          |
        +----+-------------+-------+------+------------------------------------------------------------------------------+------+---------+------+---------+------------------------------------------------+
        |  1 | SIMPLE      | b     | ALL  | usermuralentry_id,usermuralentry_user_bcd7114e,usermuralentry_user_6b192ca7 | NULL | NULL    | NULL |  147188 |                                                |
        |  1 | SIMPLE      | a     | ALL  | PRIMARY                                                                      | NULL | NULL    | NULL | 1371289 | Range checked for each record (index map: 0x1) |
        +----+-------------+-------+------+------------------------------------------------------------------------------+------+---------+------+---------+------------------------------------------------+
        2 rows in set (0.00 sec)

    but it is taking A LOT of time... Some tips to optimize? Can the table schema be better for my application?
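
    The OR spanning two tables is what forces the full-scan plan above. A common rewrite splits the condition into two separately indexable queries and UNIONs them; a sketch, returning the mural entries themselves:

        SELECT e.* FROM usermuralentry e
        WHERE e.user_src_id IN (1, 2)
        UNION
        SELECT e.* FROM usermuralentry e
        INNER JOIN usermuralentry_user eu ON eu.usermuralentry_id = e.id
        WHERE eu.userinfo_id IN (1, 2);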

    Read the article

  • MySQL Query Where Column Like Column

    - by shmeeps
    I'm working on a small project that involves grabbing a list of contacts which are stored for each group. Essentially, the database is set up so that each group has a primary and secondary contact stored as, unsurprisingly, Group.Primary and Group.Secondary. The objective is to pull every primary and secondary contact for each group and display them in a sortable table. I have the sortable table all worked out, but I have come across a small problem: each Primary and Secondary field can have more than one contact, separated by a comma. For instance, if Primary contained 123,256 it would need to pull both contacts with IDs 123 and 256. I had intended to use a query formatted like this:

        SELECT *
        FROM Group G, Contacts C
        WHERE G.Primary LIKE %C.ID%
           OR G.Secondary LIKE %C.ID%

    so that I could just skip the comma part, but I can't seem to find a working query for this. My question to you is: am I just overlooking something here? Is there a simple query that would let me do this? Or am I better off getting the groups and contacts separately and combining the two later? I think the former is a little easier to understand when read, which is a plus as this is a shared project, but if that is not possible I will do the latter. This code is simplified, but it gets the point across.
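
    MySQL's FIND_IN_SET is built for matching one value inside a comma-separated string, and unlike LIKE it won't match ID 12 inside 123; a sketch (GROUP and PRIMARY are reserved words, hence the backticks):

        SELECT G.*, C.*
        FROM `Group` G
        JOIN Contacts C
          ON FIND_IN_SET(C.ID, G.`Primary`) > 0
          OR FIND_IN_SET(C.ID, G.Secondary) > 0;

    Longer term, splitting the comma lists into a group_contacts junction table would make these joins indexable.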

    Read the article

  • How can I get all children from a parent row in the same table?

    - by Johnny Freeman
    Let's say I have a table called my_table that looks like this:

        id | name   | parent_id
        1  | Row 1  | NULL
        2  | Row 2  | NULL
        3  | Row 3  | 1
        4  | Row 4  | 1
        5  | Row 5  | NULL
        6  | Row 6  | NULL
        7  | Row 7  | 8
        8  | Row 8  | NULL
        9  | Row 9  | 4
        10 | Row 10 | 4

    Basically I want my final array in PHP to look like this:

        Array
        (
            [0] => Array
                (
                    [name] => Row 1
                    [children] => Array
                        (
                            [0] => Array ( [name] => Row 3, [children] => )
                            [1] => Array
                                (
                                    [name] => Row 4
                                    [children] => Array
                                        (
                                            [0] => Array ( [name] => Row 9,  [children] => )
                                            [1] => Array ( [name] => Row 10, [children] => )
                                        )
                                )
                        )
                )
            [1] => Array ( [name] => Row 2, [children] => )
            [2] => Array ( [name] => Row 5, [children] => )
            [3] => Array ( [name] => Row 6, [children] => )
            [4] => Array
                (
                    [name] => Row 8
                    [children] => Array
                        (
                            [0] => Array ( [name] => Row 7, [children] => )
                        )
                )
        )

    So, I want it to get all of the rows where parent_id is null, then find all nested children recursively. Now here's the part that I'm having trouble with: how can this be done with 1 call to the database? I'm sure I could do it with a simple select statement and then have PHP make the array look like this, but I'm hoping this can be done with some kind of fancy db joining or something like that. Any takers?
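
    If the database supports recursive CTEs (MySQL 8+, PostgreSQL), the whole tree can come back in one call, leaving only the nesting to PHP; a sketch:

        WITH RECURSIVE tree AS (
            SELECT id, name, parent_id, 0 AS depth
            FROM my_table
            WHERE parent_id IS NULL
            UNION ALL
            SELECT t.id, t.name, t.parent_id, tree.depth + 1
            FROM my_table t
            JOIN tree ON t.parent_id = tree.id
        )
        SELECT * FROM tree ORDER BY depth, id;

    On older MySQL the usual answer is the one the asker suspects: one plain SELECT of all rows, then build the nested array in PHP keyed by parent_id.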

    Read the article

  • How to apply an update after an insert or update in a PostgreSQL trigger

    - by user3718906
    How do I apply an update after an insert or update in PostgreSQL? I have a table which has a field lastupdate; I want that field to be set whenever the row is updated or inserted. I tried this trigger, but it is not working! HELP!!

        CREATE OR REPLACE FUNCTION fn_update_profile() RETURNS TRIGGER AS $update_profile$
        BEGIN
            IF (TG_OP = 'INSERT' OR TG_OP = 'UPDATE') THEN
                UPDATE profile SET lastupdate = now() WHERE oid = OLD.oid;
                RETURN NULL;
            ELSEIF (TG_OP = 'DELETE') THEN
                RETURN NULL;
            END IF;
            RETURN NULL;  -- result is ignored since this is an AFTER trigger
        END;
        $update_profile$ LANGUAGE plpgsql;
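
    The standard pattern avoids the self-UPDATE entirely (which would also fail on INSERT, where OLD does not exist): set the column on the row being written, in a BEFORE trigger. A minimal sketch with hypothetical function and trigger names:

        CREATE OR REPLACE FUNCTION fn_touch_lastupdate() RETURNS trigger AS $$
        BEGIN
            NEW.lastupdate := now();
            RETURN NEW;   -- a BEFORE ROW trigger must return the (modified) row
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER trg_profile_touch
        BEFORE INSERT OR UPDATE ON profile
        FOR EACH ROW EXECUTE PROCEDURE fn_touch_lastupdate();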

    Read the article

  • Linq2SQL or EntityFramework and databinding

    - by rene marxis
    Is there some way to do databinding with Linq2SQL or Entity Framework using "typed links" to the bound property?

        Public Class Form1
            Dim db As New MESDBEntities ' datacontext/ObjectContext
            Dim bs As New BindingSource

            Private Sub Form1_Load(sender As System.Object, e As System.EventArgs) Handles MyBase.Load
                bs.DataSource = (From m In db.PROB_GROUP Select m)
                grid.DataSource = bs
                TextBox1.DataBindings.Add("Text", bs, "PGR_NAME")
                TextBox1.DataBindings.Add("Text", bs, db.PROB_GROUP) ' <--- something like this
            End Sub
        End Class

    I'd like to get compile-time type checking when the model changes.

    Read the article
