Search Results

Search found 27905 results on 1117 pages for 'sql authority'.


  • How To Join Tables from Two Different Contexts with LINQ2SQL?

    - by RSolberg
    I have 2 data contexts in my application (different databases) and need to be able to query a table in context A with a right join on a table in context B. How do I go about doing this in LINQ2SQL?

    Why? We are using a SaaS product for tracking our time, projects, etc. and would like to send new service requests to this product to prevent our team from duplicating data entry.

    Context A: this DB stores service request information. It is a third-party DB and we are not able to make changes to its structure, as that could have unintended, non-supportable consequences downstream.

    Context B: this DB stores the "log" data of service requests that have been processed. My team and I have full control over this DB's structure. Unprocessed service requests should find their way into this DB, and another process will identify them as not yet processed and send the records to the SaaS product.

    This is the query that I am looking to modify. I was able to do a !list.Contains(c.swHDCaseId) initially, but that cannot handle more than 2100 items. Is there a way to add a join to the other context?

        var query = (from c in contextA.Cases
                     where monitoredInboxList.Contains(c.INBOXES.inboxName)
                     select new
                     {
                         //setup fields here...
                     });

    Read the article

  • Removal of table primary key in MySQL

    - by marionmaiden
    Hello, I've removed the primary key from one of the tables in my MySQL database, but now, when I use MySQL Administrator and try to edit data in this table, it doesn't allow me to. The Edit button at the bottom of the table grid stays visible, but it is disabled.
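
    MySQL Administrator only enables in-grid editing when the table has a primary key it can use to address rows, so a likely fix is to put one back. A minimal sketch, assuming a table and column named mytable/id (placeholder names):

        -- reinstate the primary key on an existing column
        ALTER TABLE mytable ADD PRIMARY KEY (id);

        -- or, if no natural key is left, add a surrogate key column
        ALTER TABLE mytable
            ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY;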

    Read the article

  • Does anybody have any suggestions on which of these two approaches is better for large delete?

    - by RPS
    Approach #1:

        DECLARE @count int
        SET @count = 2000

        DECLARE @rowcount int
        SET @rowcount = @count

        WHILE @rowcount = @count
        BEGIN
            DELETE TOP (@count) FROM ProductOrderInfo
            WHERE ProductId = @product_id
              AND bCopied = 1
              AND FileNameCRC = @localNameCrc

            SELECT @rowcount = @@ROWCOUNT

            WAITFOR DELAY '000:00:00.400'
        END

    Approach #2:

        DECLARE @count int
        SET @count = 2000

        DECLARE @rowcount int
        SET @rowcount = @count

        WHILE @rowcount = @count
        BEGIN
            DELETE FROM ProductOrderInfo
            WHERE ProductId = @product_id
              AND FileNameCRC IN (
                  SELECT TOP(@count) FileNameCRC
                  FROM ProductOrderInfo WITH (NOLOCK)
                  WHERE bCopied = 1
                    AND FileNameCRC = @localNameCrc
              )

            SELECT @rowcount = @@ROWCOUNT

            WAITFOR DELAY '000:00:00.400'
        END

    Read the article

  • Converting time/date from XML file into MYSQL format

    - by IconicDigital
    One of the affiliate networks provides a feed with the following time/date format:

        <startDate>1349992800000</startDate>
        <endDate>1355266799999</endDate>

    My problem is that I am trying to convert this to a MySQL format. I have tried mktime and strtotime with no luck; the date seems to come out wrong. I know this is the time since the Unix epoch, I am just not sure how to convert it to a MySQL format.
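
    The values are epoch times in milliseconds, which is why second-based conversions come out wrong. A minimal sketch of the conversion on the MySQL side, stripping the milliseconds before handing the value to FROM_UNIXTIME:

        -- DIV 1000 turns milliseconds into whole seconds
        SELECT FROM_UNIXTIME(1349992800000 DIV 1000) AS start_dt,
               FROM_UNIXTIME(1355266799999 DIV 1000) AS end_dt;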

    Read the article

  • Round date to 10 minutes interval

    - by Peter Lang
    I have a DATE column that I want to round to the next-lower 10-minute interval in a query (see example below). I managed to do it by truncating the seconds and then subtracting the last digit of minutes.

        WITH test_data AS (
          SELECT TO_DATE('2010-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual
          UNION SELECT TO_DATE('2010-01-01 10:05:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual
          UNION SELECT TO_DATE('2010-01-01 10:09:59', 'YYYY-MM-DD HH24:MI:SS') d FROM dual
          UNION SELECT TO_DATE('2010-01-01 10:10:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual
          UNION SELECT TO_DATE('2099-01-01 10:00:33', 'YYYY-MM-DD HH24:MI:SS') d FROM dual
        ) -- #end of test-data
        SELECT d, TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
        FROM test_data

    And here is the result:

        01.01.2010 10:00:00    01.01.2010 10:00:00
        01.01.2010 10:05:00    01.01.2010 10:00:00
        01.01.2010 10:09:59    01.01.2010 10:00:00
        01.01.2010 10:10:00    01.01.2010 10:10:00
        01.01.2099 10:00:33    01.01.2099 10:00:00

    Works as expected, but is there a better way?

    EDIT: I was curious about performance, so I did the following test with 500,000 rows and (not really) random dates. I am going to add the results as comments to the provided solutions.

        DECLARE
          t TIMESTAMP := SYSTIMESTAMP;
        BEGIN
          FOR i IN (
            WITH test_data AS (
              SELECT SYSDATE + ROWNUM / 5000 d FROM dual CONNECT BY ROWNUM <= 500000
            )
            SELECT TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
            FROM test_data
          ) LOOP
            NULL;
          END LOOP;
          dbms_output.put_line( SYSTIMESTAMP - t );
        END;

    This approach took 03.24 s.
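
    One possible alternative, sketched on the assumption that plain Oracle date arithmetic is acceptable, avoids the TO_CHAR/MOD round trip by counting whole 10-minute intervals (144 per day) since midnight:

        SELECT d,
               -- d - TRUNC(d) is the fraction of the day; *144 counts 10-minute steps
               TRUNC(d) + FLOOR((d - TRUNC(d)) * 144) / 144 AS rounded_d
        FROM test_data;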

    Read the article

  • Question about Transact SQL syntax

    - by Yousui
    Hi guys, the following code works like a charm:

        BEGIN TRY
            BEGIN TRANSACTION
            COMMIT TRANSACTION
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0
                ROLLBACK;

            DECLARE @ErrorMessage NVARCHAR(4000), @ErrorSeverity int;

            SELECT @ErrorMessage = ERROR_MESSAGE(),
                   @ErrorSeverity = ERROR_SEVERITY();

            RAISERROR(@ErrorMessage, @ErrorSeverity, 1);
        END CATCH

    But this code gives an error:

        BEGIN TRY
            BEGIN TRANSACTION
            COMMIT TRANSACTION
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0
                ROLLBACK;

            RAISERROR(ERROR_MESSAGE(), ERROR_SEVERITY(), 1);
        END CATCH

    Why?

    Read the article

  • Best way to model Customer <--> Address

    - by Jen
    Every Customer has a physical address and an optional mailing address. What is your preferred way to model this?

    Option 1. Customer has a foreign key to Address:

        Customer (id, phys_address_id, mail_address_id)
        Address (id, street, city, etc.)

    Option 2. Customer has a one-to-many relationship to Address, which contains a field to describe the address type:

        Customer (id)
        Address (id, customer_id, address_type, street, city, etc.)

    Option 3. Address information is de-normalized and stored in Customer:

        Customer (id, phys_street, phys_city, etc., mail_street, mail_city, etc.)

    One of my overriding goals is to simplify the object-relational mappings, so I'm leaning towards the first approach. What are your thoughts?
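
    A minimal DDL sketch of Option 1, with column types picked only for illustration (they are not from the original post):

        CREATE TABLE address (
            id     INT PRIMARY KEY,
            street VARCHAR(255),
            city   VARCHAR(255)
            -- etc.
        );

        CREATE TABLE customer (
            id              INT PRIMARY KEY,
            phys_address_id INT NOT NULL REFERENCES address(id),
            mail_address_id INT NULL     REFERENCES address(id)  -- NULL = no mailing address
        );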

    Read the article

  • Does normalization really hurt performance in high traffic sites?

    - by Luke101
    I am designing a database and I would like to normalize it. In one query I will be joining about 30-40 tables. Will this hurt the website's performance if it ever becomes extremely popular? This will be the main query and it will be called 50% of the time. In the other queries I will be joining about 2 tables. I have a choice right now to normalize or not to normalize, but if the normalization becomes a problem in the future I may have to rewrite 40% of the software, and that could take me a long time. Does normalization really hurt in this case? Should I denormalize now while I have the time?

    Read the article

  • sqlite3 date operations when joining two tables in a view?

    - by duncan
    In short: how do I add minutes to a datetime, where the minutes come from an integer column in another table, in one select statement, by joining the two tables?

    I have a table P(int id, ..., int minutes) and a table S(int id, int p_id, datetime start). I want to generate a view that gives me PS(S.id, P.id, S.start + P.minutes) by joining S.p_id = P.id.

    The problem is that if I were generating the query from the application, I could do something like:

        select datetime('2010-04-21 14:00', '+20 minutes');
        2010-04-21 14:20:00

    by creating the string '+20 minutes' in the application and then passing it to sqlite. However, I can't find a way to create this string in the select itself:

        select p.*, datetime(s.start_at, formatstring('+%s minutes', p.minutes))
        from p, s where s.p_id = p.id;

    because sqlite, as far as the documentation tells, does not provide any string format function, nor can I see any alternative way of expressing the date modifiers.
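
    SQLite can build the modifier string with its || concatenation operator, so one possible sketch (assuming the schema above, with the datetime column named start) is:

        select s.id as s_id,
               p.id as p_id,
               -- '+' || p.minutes || ' minutes' builds e.g. '+20 minutes' per row
               datetime(s.start, '+' || p.minutes || ' minutes') as start_plus_minutes
        from s join p on s.p_id = p.id;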

    Read the article

  • HQL 'parsename' equivalent

    - by jaume
    I've discovered the PARSENAME function as a good choice for ordering IP addresses stored in a database (here there is an example). My issue is that I'm using Hibernate with named queries in an XML mapping file, and I am trying to avoid using the session.createSQLQuery(..) function. I'm wondering whether any PARSENAME-equivalent function exists for HQL queries. I've searched for one and cannot find anything. Many thanks.
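
    For reference, the T-SQL idiom being replicated looks roughly like the sketch below (the hosts table and ip column are placeholder names); PARSENAME splits a dotted string into up to four parts, numbered from the right:

        SELECT ip
        FROM hosts
        ORDER BY CAST(PARSENAME(ip, 4) AS int),   -- first octet
                 CAST(PARSENAME(ip, 3) AS int),
                 CAST(PARSENAME(ip, 2) AS int),
                 CAST(PARSENAME(ip, 1) AS int);   -- last octet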

    Read the article

  • how to combine the related version in group by

    - by randeepsp
        select count(a), b, c
        from APPLE
        join MANGO  on (APPLE.link = MANGO.link)
        join ORANGE on (APPLE.link = ORANGE.link)
        where id = 'camel'
        group by b, c;

    The column b gives values like:

        1.0
        1.0,R
        1.0,B
        2.0
        2.0,B
        2.0,R
        3.0,C
        3.0,R

    Is there a way to modify the above query so that 1.0, 1.0,R and 1.0,B are all merged as 1.0, and 2.0 and 2.0,B are merged as 2.0, and the same for 3.0 and 4.0?
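
    One way to do this, assuming MySQL (SUBSTRING_INDEX keeps everything before the first comma; other engines have equivalent substring functions), would be roughly:

        select count(a),
               substring_index(b, ',', 1) as b_base,   -- '1.0,R' -> '1.0'
               c
        from APPLE
        join MANGO  on (APPLE.link = MANGO.link)
        join ORANGE on (APPLE.link = ORANGE.link)
        where id = 'camel'
        group by substring_index(b, ',', 1), c;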

    Read the article

  • Why is SQLite3 using covering indices instead of the indices I created?

    - by Geoff
    I have an extremely large database (contacts has ~3 billion entries, people has ~280 million entries, and the other tables have a negligible number of entries). Most other queries I've run are really fast. However, I've encountered a more complicated query that's really slow. I'm wondering if there's any way to speed this up. First of all, here is my schema:

        CREATE TABLE activities (id INTEGER PRIMARY KEY, name TEXT NOT NULL);

        CREATE TABLE contacts (
          id INTEGER PRIMARY KEY,
          person1_id INTEGER NOT NULL,
          person2_id INTEGER NOT NULL,
          duration REAL NOT NULL,      -- hours
          activity_id INTEGER NOT NULL
          -- FOREIGN_KEY(person1_id) REFERENCES people(id),
          -- FOREIGN_KEY(person2_id) REFERENCES people(id)
        );

        CREATE TABLE people (
          id INTEGER PRIMARY KEY,
          state_id INTEGER NOT NULL,
          county_id INTEGER NOT NULL,
          age INTEGER NOT NULL,
          gender TEXT NOT NULL,        -- M or F
          income INTEGER NOT NULL
          -- FOREIGN_KEY(state_id) REFERENCES states(id)
        );

        CREATE TABLE states (
          id INTEGER PRIMARY KEY,
          name TEXT NOT NULL,
          abbreviation TEXT NOT NULL
        );

        CREATE INDEX activities_name_index on activities(name);
        CREATE INDEX contacts_activity_id_index on contacts(activity_id);
        CREATE INDEX contacts_duration_index on contacts(duration);
        CREATE INDEX contacts_person1_id_index on contacts(person1_id);
        CREATE INDEX contacts_person2_id_index on contacts(person2_id);
        CREATE INDEX people_age_index on people(age);
        CREATE INDEX people_county_id_index on people(county_id);
        CREATE INDEX people_gender_index on people(gender);
        CREATE INDEX people_income_index on people(income);
        CREATE INDEX people_state_id_index on people(state_id);
        CREATE INDEX states_abbreviation_index on states(abbreviation);
        CREATE INDEX states_name_index on states(name);

    Note that I've created an index on every column in the database. I don't care about the size of the database; speed is all I care about.

    Here's an example of a query that, as expected, runs almost instantly:

        SELECT count(*) FROM people, states
        WHERE people.state_id=states.id and states.abbreviation='IA';

    Here's the troublesome query:

        SELECT * FROM contacts
        WHERE rowid IN (
          SELECT contacts.rowid FROM contacts, people, states
          WHERE contacts.person1_id=people.id
            AND people.state_id=states.id
            AND states.name='Kansas'
          INTERSECT
          SELECT contacts.rowid FROM contacts, people, states
          WHERE contacts.person2_id=people.id
            AND people.state_id=states.id
            AND states.name='Missouri');

    Now, what I think would happen is that each subquery would use each relevant index I've created to speed this up. However, when I show the query plan, I see this:

        sqlite> EXPLAIN QUERY PLAN SELECT * FROM contacts WHERE rowid IN
           ...>   (SELECT contacts.rowid FROM contacts, people, states
           ...>    WHERE contacts.person1_id=people.id AND people.state_id=states.id AND states.name='Kansas'
           ...>    INTERSECT
           ...>    SELECT contacts.rowid FROM contacts, people, states
           ...>    WHERE contacts.person2_id=people.id AND people.state_id=states.id AND states.name='Missouri');
        0|0|0|SEARCH TABLE contacts USING INTEGER PRIMARY KEY (rowid=?) (~25 rows)
        0|0|0|EXECUTE LIST SUBQUERY 1
        2|0|2|SEARCH TABLE states USING COVERING INDEX states_name_index (name=?) (~1 rows)
        2|1|1|SEARCH TABLE people USING COVERING INDEX people_state_id_index (state_id=?) (~5569556 rows)
        2|2|0|SEARCH TABLE contacts USING COVERING INDEX contacts_person1_id_index (person1_id=?) (~12 rows)
        3|0|2|SEARCH TABLE states USING COVERING INDEX states_name_index (name=?) (~1 rows)
        3|1|1|SEARCH TABLE people USING COVERING INDEX people_state_id_index (state_id=?) (~5569556 rows)
        3|2|0|SEARCH TABLE contacts USING COVERING INDEX contacts_person2_id_index (person2_id=?) (~12 rows)
        1|0|0|COMPOUND SUBQUERIES 2 AND 3 USING TEMP B-TREE (INTERSECT)

    In fact, if I show the query plan for the first query I posted, I get this:

        sqlite> EXPLAIN QUERY PLAN SELECT count(*) FROM people, states
           ...>   WHERE people.state_id=states.id and states.abbreviation='IA';
        0|0|1|SEARCH TABLE states USING COVERING INDEX states_abbreviation_index (abbreviation=?) (~1 rows)
        0|1|0|SEARCH TABLE people USING COVERING INDEX people_state_id_index (state_id=?) (~5569556 rows)

    Why is SQLite using covering indices instead of the indices I created? Shouldn't the search in the people table be able to happen in log(n) time given state_id, which in turn is found in log(n) time?

    Read the article

  • Stored Procedure: Reducing Table Data

    - by SumGuy
    Hi Guys, a simple question about stored procedures. I have one stored procedure collecting a whole bunch of data in a table. I then call this procedure from within another stored procedure. I can copy the data into a new table created in the calling procedure, but as far as I can see the tables have to be identical. Is this right? Or is there a way to insert only the data I want?

    For example, I have one procedure which returns this:

        SELECT @batch as Batch,
               @Count as Qty,
               pd.Location,
               cast(pd.GL as decimal(10,3)) as [Length],
               cast(pd.GW as decimal(10,3)) as Width,
               cast(pd.GT as decimal(10,3)) as Thickness
        FROM propertydata pd
        GROUP BY pd.Location, pd.GL, pd.GW, pd.GT

    I then call this procedure but only want the following data:

        DECLARE @BatchTable TABLE (
            Batch varchar(50),
            [Length] decimal(10,3),
            Width decimal(10,3),
            Thickness decimal(10,3)
        )

        INSERT @BatchTable (Batch, [Length], Width, Thickness)
        EXEC dbo.batch_drawings_NEW @batch

    So in the second command I don't want the Qty and Location values. However, the code above keeps returning the error:

        Insert Error: Column name or number of supplied values does not match table
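
    With INSERT ... EXEC the target's column list has to match the procedure's result set exactly, so one common workaround is to catch the full result set in a temp table and then copy only the wanted columns. A sketch, assuming Qty is an int and Location a varchar (the real types aren't shown in the post):

        CREATE TABLE #full_result (
            Batch     varchar(50),
            Qty       int,            -- assumed type
            Location  varchar(50),    -- assumed type
            [Length]  decimal(10,3),
            Width     decimal(10,3),
            Thickness decimal(10,3)
        );

        INSERT #full_result
        EXEC dbo.batch_drawings_NEW @batch;

        INSERT @BatchTable (Batch, [Length], Width, Thickness)
        SELECT Batch, [Length], Width, Thickness
        FROM #full_result;

        DROP TABLE #full_result;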

    Read the article

  • Select *, max(date) works in phpMyAdmin but not in my code

    - by kdobrev
    OK, my statement executes well in phpMyAdmin, but not how I expect it in my PHP page. This is my statement:

        SELECT egid, group_name, `limit`, MAX(date)
        FROM employee_groups
        GROUP BY egid
        ORDER BY egid DESC;

    This is my table:

        CREATE TABLE employee_groups (
          egid int(10) unsigned NOT NULL,
          date date NOT NULL,
          group_name varchar(50) NOT NULL,
          `limit` smallint(5) unsigned NOT NULL,
          PRIMARY KEY (egid, date)
        ) ENGINE=MyISAM DEFAULT CHARSET=cp1251;

    I want to extract the most recent list of groups, e.g. if a group has been changed I want to have only the last change. And I need it as a list (all groups).
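
    A common way to get the whole row belonging to each group's latest date, sketched here assuming MySQL, is to join against a per-group MAX(date) subquery instead of mixing MAX() with non-aggregated columns:

        SELECT eg.egid, eg.group_name, eg.`limit`, eg.date
        FROM employee_groups eg
        JOIN (
            SELECT egid, MAX(date) AS max_date
            FROM employee_groups
            GROUP BY egid
        ) latest ON latest.egid = eg.egid
                AND latest.max_date = eg.date
        ORDER BY eg.egid DESC;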

    Read the article

  • trouble connecting to MySql DB (PHP)

    - by user332817
    Hi, I have the following PHP code to connect to my DB:

        <?php
        ob_start();
        $host="localhost";     // Host name
        $username="root";      // Mysql username
        $password="";          // Mysql password
        $db_name="test";       // Database name
        $tbl_name="members";   // Table name

        // Connect to server and select database.
        mysql_connect("$host", "$username", "$password") or die("cannot connect");
        ?>

    However, I get the following errors:

        Warning: mysql_connect() [function.mysql-connect]: [2002] A connection attempt failed because the connected party did not (trying to connect via tcp://localhost:3306) in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11

        Warning: mysql_connect() [function.mysql-connect]: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11

        Fatal error: Maximum execution time of 30 seconds exceeded in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11

    I am able to add a DB and tables via phpMyAdmin, but I can't connect using PHP. Here is a screenshot of my phpMyAdmin page: http://img294.imageshack.us/img294/1589/sqls.jpg

    Any help would be appreciated, thanks in advance.

    Read the article

  • how to write this query using joins?

    - by aquero
    Hi, I have a table campaign which has details of campaign mails sent:

        campaign_table:
        campaign_id   campaign_name   flag
        1             test1           1
        2             test2           1
        3             test3           0

    and another table campaign_activity which has details of campaign activities:

        campaign_activity:
        campaign_id   is_clicked   is_opened
        1             0            1
        1             1            0
        2             0            1
        2             1            0

    I want to get, in a single query, all campaigns with flag value 1, together with the number of is_clicked columns with value 1 and the number of is_opened columns with value 1, i.e.:

        campaign_id   campaign_name   numberofclicks   numberofopens
        1             test1           1                1
        2             test2           1                1

    I did this using sub-queries:

        select c.campaign_id, c.campaign_name,
               (SELECT count(campaign_id) from campaign_activity
                WHERE campaign_id = c.campaign_id AND is_clicked = 1) as numberofclicks,
               (SELECT count(campaign_id) from campaign_activity
                WHERE campaign_id = c.campaign_id AND is_opened = 1) as numberofopens
        FROM campaign c
        WHERE c.flag = 1

    But people say that using sub-queries is not a good coding convention and that you should use a join instead. However, I don't know how to get the same result using a join. I consulted some of my colleagues and they say it is not possible to use a join in this situation. Is it possible to get the same result using joins? If yes, please tell me how.
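
    One way to express this as a join, sketched here assuming MySQL-style SQL and that is_clicked/is_opened only hold 0 or 1, is a single join plus aggregation:

        SELECT c.campaign_id,
               c.campaign_name,
               SUM(ca.is_clicked) AS numberofclicks,   -- counts rows where is_clicked = 1
               SUM(ca.is_opened)  AS numberofopens     -- counts rows where is_opened  = 1
        FROM campaign c
        JOIN campaign_activity ca ON ca.campaign_id = c.campaign_id
        WHERE c.flag = 1
        GROUP BY c.campaign_id, c.campaign_name;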

    Read the article

  • Stop invalid data in a attribute with foreign key constraint using triggers?

    - by Eternal Learner
    How do I specify a trigger which checks whether the data inserted into a table's foreign key attribute actually exists in the referenced table? If it exists, no action should be performed; else the trigger should delete the inserted tuple.

    Eg: consider 2 tables, R(A int Primary Key) and S(B int Primary Key, A int Foreign Key References R(A)). I have written a trigger like this:

        Create Trigger DelS
        BEFORE INSERT ON S
        FOR EACH ROW
        BEGIN
            Delete FROM S where New.A <> ( Select * from R );
        End;

    I am sure I am making a mistake with the inner sub-query within the Begin and End blocks of the trigger. My question is: how do I make such a trigger?
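
    A BEFORE INSERT trigger cannot delete the row it is about to insert, but it can refuse it, which keeps the invalid tuple out of S. A sketch assuming MySQL 5.5+ (where SIGNAL is available):

        -- run with a changed DELIMITER in the mysql client so the inner ; is kept
        CREATE TRIGGER DelS BEFORE INSERT ON S
        FOR EACH ROW
        BEGIN
            IF NOT EXISTS (SELECT 1 FROM R WHERE R.A = NEW.A) THEN
                SIGNAL SQLSTATE '45000'
                    SET MESSAGE_TEXT = 'S.A does not reference an existing R.A';
            END IF;
        END;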

    Read the article

  • Should I use DATE and TIME as fields rather than DATETIME

    - by whitstone86
    I have an ongoing project for a TV guide, called mytvguide in the database - I use phpMyAdmin. This is the structure of one table, which is called tvshow1:

        Field         Type           Null
        channel       varchar(255)
        date          date           No
        airdate       time           No
        expiration    time           No
        episode       varchar(255)
        setreminder   varchar(255)

    I am not sure how to get DATE and TIME to work with the pagination script (below is the script, which works for the version with DATETIME): http://pastebin.com/6S1ejAFJ

    The DATETIME version works - it shows programmes that air on the day itself like this:

        Programme 1 showing on Channel 1
        2:35pm "Episode 2" Set Reminder

        Programme 1 showing on Channel 1
        May 26th - 12:50pm "Episode 3" Set Reminder

        Programme 1 showing on Channel 1
        May 26th - 5:55pm "Episode 3" Set Reminder

    but I'm not quite sure how to replicate that for the fields that use DATE and TIME as seen above. Any advice on this is appreciated, thanks!
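
    One way to treat the separate DATE and TIME columns like the old DATETIME column, sketched here assuming MySQL (TIMESTAMP(date, time) combines a date and a time into one datetime), is to build the full air time inside the query the pagination script runs:

        SELECT channel, episode, setreminder,
               TIMESTAMP(`date`, airdate) AS airs_at
        FROM tvshow1
        WHERE TIMESTAMP(`date`, airdate) >= NOW()   -- upcoming programmes only
        ORDER BY airs_at;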

    Read the article

  • Deleting rows from different tables

    - by Ross
    Here is what I'm trying to do: delete the project from the projects table and all the images associated with that project in the images table. Let's say $del_id = 10.

        DELETE FROM projects, images
        WHERE projects.p_id = '$del_id'
          AND images.p_id = '$del_id'

    What is wrong with this query?
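
    MySQL's multi-table DELETE expects the target tables to be listed between DELETE and FROM, with the join spelled out. A sketch of that form, assuming MySQL and that both tables link on p_id (a LEFT JOIN keeps projects without images deletable):

        DELETE projects, images
        FROM projects
        LEFT JOIN images ON images.p_id = projects.p_id
        WHERE projects.p_id = '$del_id';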

    Read the article

  • update query with join on two tables

    - by dba_query
    I have customer and address tables.

    This query:

        select * from addresses a, customers b where a.id = b.id

    returns 474 records. For these records, I'd like to put the id from the customers table into the cid column of the addresses table. For example, if for the first record the id of the customer is 9 and the id of the address is also 9, then I'd like to set cid = 9 in the addresses table.

    I tried:

        update addresses a, customers b
        set a.cid = b.id
        where a.id = b.id

    but this does not seem to work.
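
    The same multi-table UPDATE can also be written with an explicit JOIN; a sketch assuming MySQL (other engines use UPDATE ... FROM or a correlated subquery instead):

        UPDATE addresses a
        JOIN customers b ON b.id = a.id
        SET a.cid = b.id;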

    Read the article

  • Which of these queries is preferable?

    - by bread
    I've written the same query as a subquery and as a self-join. Is there any obvious argument for one over the other here?

    Subquery:

        SELECT prod_id, prod_name
        FROM products
        WHERE vend_id = (SELECT vend_id
                         FROM products
                         WHERE prod_id = 'DTNTR');

    Self-join:

        SELECT p1.prod_id, p1.prod_name
        FROM products p1, products p2
        WHERE p1.vend_id = p2.vend_id
          AND p2.prod_id = 'DTNTR';

    Read the article
