Search Results

Search found 37012 results on 1481 pages for 'sql query'.

Page 309/1481 | < Previous Page | 305 306 307 308 309 310 311 312 313 314 315 316  | Next Page >

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects with the following relationships: it has many Contributions and has many Payments. In my result set, I need the following aggregate values: the number of unique contributors (DonorID on the Contribution table), the total contributed (SUM of Amount on the Contribution table), and the total paid (SUM of PaymentAmount on the Payment table). Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions in the GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options. Using subqueries:

        SELECT Project.ID AS PROJECT_ID,
            (SELECT SUM(PaymentAmount) FROM Payment WHERE ProjectID = PROJECT_ID) AS TotalPaidBack,
            (SELECT COUNT(DISTINCT DonorID) FROM Contribution WHERE RecipientID = PROJECT_ID) AS ContributorCount,
            (SELECT SUM(Amount) FROM Contribution WHERE RecipientID = PROJECT_ID) AS TotalReceived
        FROM Project;

    Using a temporary table:

        DROP TABLE IF EXISTS Project_Temp;
        CREATE TEMPORARY TABLE Project_Temp (
            project_id INT NOT NULL,
            total_payments INT,
            total_donors INT,
            total_received INT,
            PRIMARY KEY (project_id)
        ) ENGINE=MEMORY;
        INSERT INTO Project_Temp (project_id, total_payments)
            SELECT `Project`.ID, IFNULL(SUM(PaymentAmount), 0)
            FROM `Project`
            LEFT JOIN `Payment` ON ProjectID = `Project`.ID
            GROUP BY 1;
        INSERT INTO Project_Temp (project_id, total_donors, total_received)
            SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID), 0), IFNULL(SUM(Amount), 0)
            FROM `Project`
            LEFT JOIN `Contribution` ON RecipientID = `Project`.ID
            GROUP BY 1
            ON DUPLICATE KEY UPDATE
                total_donors = VALUES(total_donors),
                total_received = VALUES(total_received);
        SELECT * FROM Project_Temp;

    Tests for both are pretty comparable, in the 0.7 - 0.8 second range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?
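
    A third shape worth benchmarking alongside the two above: pre-aggregate each child table once in a derived table and join the results, so each base table is scanned a single time. A minimal sketch against the schema as described (MySQL syntax):

        SELECT p.ID AS PROJECT_ID,
               IFNULL(pay.TotalPaidBack, 0)    AS TotalPaidBack,
               IFNULL(con.ContributorCount, 0) AS ContributorCount,
               IFNULL(con.TotalReceived, 0)    AS TotalReceived
        FROM Project p
        LEFT JOIN (SELECT ProjectID, SUM(PaymentAmount) AS TotalPaidBack
                   FROM Payment
                   GROUP BY ProjectID) pay ON pay.ProjectID = p.ID
        LEFT JOIN (SELECT RecipientID,
                          COUNT(DISTINCT DonorID) AS ContributorCount,
                          SUM(Amount) AS TotalReceived
                   FROM Contribution
                   GROUP BY RecipientID) con ON con.RecipientID = p.ID;

    Because each aggregation happens before the join, the SUM and COUNT(DISTINCT) values cannot be inflated by join fan-out, and the result stays sortable and filterable like any other SELECT.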

    Read the article

  • SQL query question / count

    - by scheibenkleister
    I have houses that belong to streets. A user can buy several houses. How do I find out if the user owns an entire street?

        street table with columns (id, name)
        house table with columns (id, street_id)           -- street_id is a foreign key
        owner table with columns (id, house_id, user_id)   -- join table with foreign keys

    So far, I'm using a count, which returns this result:

        SELECT COUNT(*), street_id
        FROM owner
        LEFT JOIN house ON owner.house_id = house.id
        WHERE user_id = 1
        GROUP BY street_id

        count(*) | street_id
        3        | 1
        2        | 2

    A more general count:

        SELECT COUNT(*) FROM house GROUP BY street_id

    returns:

        count(*) | street_id
        3        | 1
        3        | 2

    How can I find out that user 1 owns the entire street 1, but not street 2? Thanks.
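
    One way to answer it directly, assuming the tables above: count the user's houses per street and compare against the total houses on that street, keeping only streets where the two counts match.

        SELECT h.street_id
        FROM house h
        JOIN owner o ON o.house_id = h.id AND o.user_id = 1
        GROUP BY h.street_id
        HAVING COUNT(*) = (SELECT COUNT(*)
                           FROM house h2
                           WHERE h2.street_id = h.street_id);

    For the data shown this returns street 1 only: user 1 owns 3 of 3 houses there, but only 2 of 3 on street 2.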

    Read the article

  • Insert array into MySQL database with PHP

    - by ganjan
    I want to add an array to my database. I have set up a function that checks whether a value in the database (e.g. health or money) has changed. If the value is different from the original, I add the new value to the $db array, like this: $db['money'] = $money_input + $money_db;

        function modify_user_info($conn, $money_input, $health_input) {
            // (...)
            if ($result = $conn->query($query)) {
                while ($user = $result->fetch_assoc()) {
                    $money_db  = $user["money"];
                    $health_db = $user["health"];
                }
                $result->close();
                // build the db array with the columns to fill in as its keys
                if ($user["money"] != $money_input) {
                    $db['money'] = $money_input + $money_db; // 0 - 20
                    if (!preg_match("/^[0-9]{0,20}$/i", $db['money'])) {
                        echo "error";
                        return false;
                    }
                }
                if ($user["health"] != $health_input) {
                    $db['health'] = $health_input + $health_db; // 0 - 4
                    if (!preg_match("/^[0-9]{0,4}$/i", $db['health'])) {
                        echo "error";
                        return false;
                    }
                    if (($db['health'] < 1) or ($db['health'] > 1000)) {
                        echo "error";
                        return false;
                    }
                }

    The keys in $db represent columns in my database. Now I want to make a function that takes the keys in the array $db and inserts them into the database. Something like this?

        $query = "INSERT INTO `main_log` ( `id` , ";
        foreach (range(0, x) as $num) {
            $query .= array_key . ", ";
        }
        $query = substr($query, 0, -3);
        $query .= " VALUES ('', ";
        foreach (range(0, x) as $num) {
            $query .= array_value . ", ";
        }
        $query = substr($query, 0, -3);
        $query .= ")";
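
    A sketch of that insert builder, assuming $db is the column => value map assembled above and $conn is the mysqli connection; implode() over array_keys()/array_values() removes the need for trimming trailing commas, though a prepared statement with bound parameters would be safer still than string escaping:

        // hypothetical continuation: $db maps column names to new values
        $columns = array_keys($db);
        $escaped = array_map(array($conn, 'real_escape_string'), array_values($db));
        $query  = "INSERT INTO `main_log` (`" . implode("`, `", $columns) . "`) ";
        $query .= "VALUES ('" . implode("', '", $escaped) . "')";
        $conn->query($query);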

    Read the article

  • LINQ to SQL: making a "double IN" query crashes

    - by Alex
    I need to do the following thing:

        var a = from c in DB.Customers
                where (from t1 in DB.Table1
                       where t1.Date >= DateTime.Now
                       select t1.ID).Contains(c.ID)
                   && (from t2 in DB.Table2
                       where t2.Date >= DateTime.Now
                       select t2.ID).Contains(c.ID)
                select c;

    It doesn't want to run. I get the following error: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." But when I try to run the same query with only the Table1 subquery, or only the Table2 subquery, it works! I'm sure that both IN queries contain some customer IDs.
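
    One rewrite worth trying, sketched with the same names as above: express each membership test with Any(), which LINQ to SQL translates to a correlated EXISTS rather than two IN lists, and hoist DateTime.Now into a local so it is evaluated once on the client:

        var now = DateTime.Now;
        var a = from c in DB.Customers
                where DB.Table1.Any(t1 => t1.Date >= now && t1.ID == c.ID)
                   && DB.Table2.Any(t2 => t2.Date >= now && t2.ID == c.ID)
                select c;

    If the EXISTS version still times out, comparing the generated SQL (via DB.Log) against the single-subquery versions usually shows where the plan differs; missing indexes on the Date/ID columns of Table1 and Table2 are the other usual suspect.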

    Read the article

  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. Just among the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day, and who knows how many updates. A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET, with some legacy ASP. We're thinking of making a simple console app that tests/times the same interactions between a flat file (stored on the network) and SQL over the network: large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

        - Security
        - Concurrent access
        - Performance with large amounts of data
        - Amount of time to do such a massive rewrite/switch
        - Lack of transactions
        - PITA to map relational data to flat files
        - NTFS doesn't support tons of files in a directory well

    I fear that this will make a great post on The Daily WTF someday if I can't stop it now.

    Read the article

  • SQL query for getting count on same table using left outer join

    - by Sasi
    I have a table from which I need to get a count grouped on two columns. The table has two columns: a datetime column and a success value (-1, 1, or 0). What I am looking for is the count of each success value for each month, something like this:

        month | success | count
        ------+---------+------
        11    |   -1    |  50
        11    |    1    |  50
        11    |    0    |  50
        12    |   -1    |  50
        12    |    1    |  50
        12    |    0    |  50

    If there is no success value for a month, then the count should be null or zero. I have tried with a left outer join as well, but to no avail; it gives the count incorrectly. Thanks in advance, Sasi
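
    Since a plain GROUP BY can only return combinations that actually occur, one approach is to build the full month x success grid first and left-join the data onto it. A sketch, assuming a table named results with columns event_date and success (the real names aren't given in the question):

        SELECT m.mon, s.success, COUNT(r.success) AS cnt
        FROM (SELECT DISTINCT MONTH(event_date) AS mon FROM results) m
        CROSS JOIN (SELECT -1 AS success UNION ALL SELECT 0 UNION ALL SELECT 1) s
        LEFT JOIN results r
               ON MONTH(r.event_date) = m.mon AND r.success = s.success
        GROUP BY m.mon, s.success
        ORDER BY m.mon, s.success;

    COUNT(r.success) counts only matched rows, so missing month/success combinations come back as 0 instead of disappearing.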

    Read the article

  • Advice Please: SQL Server Identity vs Unique Identifier keys when using Entity Framework

    - by c.batt
    I'm in the process of designing a fairly complex system. One of our primary concerns is supporting SQL Server peer-to-peer replication. The idea is to support several geographically separated nodes. A secondary concern has been using a modern ORM in the middle tier. Our first choice has always been Entity Framework, mainly because the developers like to work with it. (They love the LINQ support.) So here's the problem: with peer-to-peer replication in mind, I settled on using uniqueidentifier with a default value of newsequentialid() for the primary key of every table. This seemed to provide a good balance between avoiding key collisions and reducing index fragmentation. However, it turns out that the current version of Entity Framework has a very strange limitation: if an entity's key column is a uniqueidentifier (GUID), then it cannot be configured to use the default value (newsequentialid()) provided by the database. The application layer must generate the GUID and populate the key value. So here's the debate:

        1. Abandon Entity Framework and use another ORM: either NHibernate (and give up LINQ support) or linq2sql (and give up future support, not to mention being bound to SQL Server).
        2. Abandon GUIDs and go with another PK strategy.
        3. Devise a method to generate sequential GUIDs (COMBs?) at the application layer.

    I'm leaning towards option 1 with linq2sql (my developers really like linq2[stuff]) and option 3. That's mainly because I'm somewhat ignorant of alternate key strategies that support the replication scheme we're aiming for while also keeping things sane from a developer's perspective. Any insight or opinion would be greatly appreciated.
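
    If option 3 wins, one well-known recipe is the COMB technique: build a sequential-ish GUID by stamping the current time into the last six bytes of a random GUID, so values created close together also sort close together. A T-SQL sketch of the classic formulation, offered as an illustration rather than a recommendation:

        -- COMB-style GUID: random first 10 bytes, time-based last 6 bytes
        SELECT CAST(
                 CAST(NEWID() AS BINARY(10)) + CAST(GETDATE() AS BINARY(6))
               AS UNIQUEIDENTIFIER) AS CombGuid;

    Since Entity Framework forces the application layer to supply the key anyway, the same construction is typically ported to C# using Guid.NewGuid() plus the current UTC time, which would sidestep the newsequentialid() limitation without abandoning GUID keys.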

    Read the article

  • Need SQL Server Stored Procedure for This Query

    - by djshortbus
    I have an ASP.NET DataGrid, and I'm trying to use a SELECT ... LIKE 'X%' against a table that has one field called LOCATION. I'm trying to display the locations that start with a certain letter (for example wxxx, axxx, fxxx) in different columns in my data grid.

        SELECT DISTINCT LM.LOCATION AS '0 LOCATIONS', LM.COUNTLEVEL AS 'COUNTLEVEL'
        FROM SOH S WITH (NOLOCK)
        JOIN LOCATIONMASTER LM ON LM.LMID = S.LMID
        WHERE LM.COUNTLEVEL = 1
          AND LM.LOCATION NOT IN ('RECOU','PROBLEM','TOSTOCK','PYXVLOC')
          AND LM.LOCATION LIKE '0%'

        SELECT DISTINCT LM.LOCATION AS 'A LOCATIONS', LM.COUNTLEVEL AS 'COUNTLEVEL'
        FROM SOH S WITH (NOLOCK)
        JOIN LOCATIONMASTER LM ON LM.LMID = S.LMID
        WHERE LM.COUNTLEVEL = 1
          AND LM.LOCATION NOT IN ('RECOU','PROBLEM','TOSTOCK','PYXVLOC')
          AND LM.LOCATION LIKE 'A%'
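
    Since the two statements differ only in the leading character, one hedged sketch is a single stored procedure that takes the prefix as a parameter and is called once per grid column:

        CREATE PROCEDURE dbo.GetLocationsByPrefix
            @Prefix CHAR(1)
        AS
        BEGIN
            SET NOCOUNT ON;
            SELECT DISTINCT LM.LOCATION, LM.COUNTLEVEL
            FROM SOH S WITH (NOLOCK)
            JOIN LOCATIONMASTER LM ON LM.LMID = S.LMID
            WHERE LM.COUNTLEVEL = 1
              AND LM.LOCATION NOT IN ('RECOU','PROBLEM','TOSTOCK','PYXVLOC')
              AND LM.LOCATION LIKE @Prefix + '%';
        END

        -- usage: EXEC dbo.GetLocationsByPrefix @Prefix = 'A';

    The procedure name is hypothetical; bind each result set to its own grid column in the code-behind.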

    Read the article

  • LinqToSQL not updating database

    - by codegarten
    I created a database and DBML in Visual Studio 2010 using its wizards. Everything was working fine until I checked the table's data (also in Visual Studio's Server Explorer) and none of my updates were there.

        using (var context = new CenasDataContext())
        {
            context.Log = Console.Out;
            context.Cenas.InsertOnSubmit(new Cena() { id = 1 });
            context.SubmitChanges();
        }

    This is the code I am using to update my database. At this point my database has one table with one field (PK) named ID. This is the log from the execution (I printed the context log to the console):

        INSERT INTO [dbo].Cenas VALUES (@p0)
        -- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [1]
        -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.1

    The problem I'm having is that these updates are not persisted in the database. When I query my database (Visual Studio Server Explorer - New Query) the table is empty, every time. I am using a SQL Server database file (.mdf).
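
    A common cause with .mdf file databases, worth ruling out here: |DataDirectory| resolves to the build output folder (bin\Debug), and Visual Studio copies the project's .mdf there on each build, so SubmitChanges() writes to the copy while Server Explorer stays attached to the original in the project folder (and the copy is overwritten on the next build). One hedged way to confirm is to point the connection at an absolute path and re-check:

        // hypothetical path - substitute wherever the .mdf actually lives
        const string connString =
            @"Data Source=.\SQLEXPRESS;" +
            @"AttachDbFilename=C:\data\CenasDb.mdf;" +
            @"Integrated Security=True";

    If the rows show up there, setting the .mdf's "Copy to Output Directory" property to "Copy if newer" (or "Do not copy") is the usual fix.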

    Read the article

  • SQL Server 2008 spatial index and CPU utilization with MapGuide Open Source 2.1

    - by Antonio de la Peña
    I have a SQL Server table with hundreds of thousands of geometry-type parcels. I have made indexes on them, trying different combinations of density and objects-per-cell settings. So far I'm settling for LOW, LOW, MEDIUM, MEDIUM and 16 objects per cell, and I made a stored procedure that sets the bounding box according to the extents of the entities in the table. There is an incredible performance boost, from queries taking almost minutes without the index to less than seconds with it, and it gets faster when the zoom is closer and fewer objects are displayed. Yet CPU utilization reaches 100% when querying for features, even when the queries themselves are fast. I'm worried this will not fly in a production environment. I am using MapGuide Open Source 2.1 for this project, but I am positive the CPU load is caused by SQL Server. I wonder if my indexes are set up properly; I haven't found any clear documentation on how to tune them. Every article I've read basically says "it depends..." but nothing specific. Do you have any recommendations for me, including books or articles? Thank you.
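
    For reference, the settings described above correspond to DDL along these lines (SQL Server 2008 syntax; the table, column, and bounding-box numbers are placeholders for whatever the stored procedure computes):

        CREATE SPATIAL INDEX SIdx_Parcels_Geom
            ON dbo.Parcels (Geom)
            USING GEOMETRY_GRID
            WITH (
                BOUNDING_BOX = (0, 0, 500000, 500000),   -- xmin, ymin, xmax, ymax
                GRIDS = (LEVEL_1 = LOW, LEVEL_2 = LOW,
                         LEVEL_3 = MEDIUM, LEVEL_4 = MEDIUM),
                CELLS_PER_OBJECT = 16
            );

    One diagnostic worth running before changing anything: sp_help_spatial_geometry_index reports primary-filter efficiency for a sample query window. A low value means the grid passes too many candidate rows to the exact (CPU-heavy) secondary filter, which would match the symptom of fast but CPU-bound queries.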

    Read the article

  • "The data types text and nvarchar are incompatible in the equal to operator" in SQL Query

    - by kenom
    Why do I get this error: "The data types text and nvarchar are incompatible in the equal to operator"? The "username" field in the database is of type text. This is my source:

        <%@ Control Language="C#" AutoEventWireup="true" CodeFile="my_answers.ascx.cs" Inherits="kontrole_login_my_answers" %>
        <div style="margin-top:-1280px; float:left;">
            <p></p>
            <div id="question">Add question</div>
        </div>
        <asp:GridView ID="GridView1" runat="server" DataSourceID="SqlDataSource1">
        </asp:GridView>
        <asp:SqlDataSource ID="SqlDataSource1" runat="server"
            ConnectionString="<%$ ConnectionStrings:estudent_piooConnectionString %>"
            SelectCommand="SELECT * FROM [question] WHERE ([username] = @fafa)">
            <SelectParameters>
                <asp:QueryStringParameter Name="fafa" QueryStringField="user" Type="String"/>
            </SelectParameters>
        </asp:SqlDataSource>
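
    The legacy text type cannot be compared with = against an nvarchar parameter, which is exactly what the @fafa comparison does. Two hedged fixes, one at the query level and one at the schema level:

        -- option 1: cast inside the SelectCommand
        SELECT * FROM [question] WHERE CAST([username] AS NVARCHAR(MAX)) = @fafa

        -- option 2: migrate the column once and keep the original query
        -- (text is deprecated in favor of (n)varchar(max) anyway)
        ALTER TABLE [question] ALTER COLUMN [username] NVARCHAR(MAX);

    Option 2 is usually preferable, since otherwise every query touching the column needs the cast.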

    Read the article

  • Writing a dynamic query in SQL Server

    - by prince23
        DECLARE @sqlCommand varchar(1000)
        DECLARE @columnList varchar(75)
        DECLARE @city varchar(75)
        DECLARE @region varchar(75)
        SET @columnList = 'first_name, last_name, city'
        SET @city = '''London'''
        SET @region = '''South'''
        SET @sqlCommand = 'SELECT ' + @columnList + ' FROM dbo.employee WHERE City = ' + @city and 'region = '+@region
        --and 'region = '+@region
        print(@sqlCommand)
        EXEC (@sqlCommand)

    When I run this command I get an error:

        Msg 156, Level 15, State 1, Line 8
        Incorrect syntax near the keyword 'and'.

    Any help would be great. Thank you.
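
    The and has landed outside the string literal, so T-SQL parses it as a boolean operator between two strings instead of as part of the dynamic SQL text. The AND needs to be concatenated into the command string. A corrected sketch:

        SET @sqlCommand = 'SELECT ' + @columnList
                        + ' FROM dbo.employee WHERE City = ' + @city
                        + ' AND region = ' + @region;

    (With sp_executesql and real parameters, the embedded quote-doubling in @city and @region would not be needed at all.)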

    Read the article

  • Query next/previous record

    - by Rob
    I'm trying to find a better way to get the next or previous record from a table. Let's say I have a blog or news table:

        CREATE TABLE news (
            news_id INT UNSIGNED NOT NULL PRIMARY KEY AUTO_INCREMENT,
            news_datestamp DATETIME NOT NULL,
            news_author VARCHAR(100) NOT NULL,
            news_title VARCHAR(100) NOT NULL,
            news_text MEDIUMTEXT NOT NULL
        );

    Now on the front end I want navigation buttons for the next or previous records. If I'm sorting by news_id, I can do something rather simple like:

        SELECT MIN(news_id) AS next_news_id FROM news WHERE news_id > '$old_news_id' LIMIT 1
        SELECT MAX(news_id) AS prev_news_id FROM news WHERE news_id < '$old_news_id' LIMIT 1

    But the news can be sorted by any field, and I don't necessarily know which field is sorted on, so this won't work if the user sorts on news_author, for example. I've resorted to the rather ugly and inefficient method of sorting the entire table and looping through all records until I find the record I need:

        $res = mysql_query("SELECT news_id FROM news ORDER BY `$sort_column` $sort_way");
        $found = $prev = $next = 0;
        while (list($id) = mysql_fetch_row($res)) {
            if ($found) {
                $next = $id;
                break;
            }
            if ($id == $old_news_id) {
                $found = true;
                continue;
            }
            $prev = $id;
        }

    There's got to be a better way.
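
    There is: generalize the MIN/MAX trick to any sort column by using the primary key as a tie-breaker. A sketch for "next record while sorted ascending on news_author"; $old_author is a hypothetical variable holding the current record's value in the sort column (fetched first), and the sort column itself must be whitelisted by the application, never interpolated raw:

        SELECT news_id
        FROM news
        WHERE news_author > '$old_author'
           OR (news_author = '$old_author' AND news_id > '$old_news_id')
        ORDER BY news_author, news_id
        LIMIT 1;

    The previous record is the mirror image: flip both comparisons to < and order by news_author DESC, news_id DESC. The same pattern works for any sortable column, and an index on (sort_column, news_id) keeps it fast.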

    Read the article

  • I need a query to retrieve items matching the following constraint

    - by ANITHA
    I have the following tables and fields:

        +------------------+    +---------------+    +--------+
        | Request          |    | RequestItem   |    | Item   |
        +------------------+    +---------------+    +--------+
        | + Requester_Name |    | + Request_No  |    | + Item |
        | + Request_No     |    | + Item        |    +--------+
        +------------------+    +---------------+

    I would like to filter the items which are selected under a particular request number, along with a specific requester name. How might I go about doing this?
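
    One way, assuming Request_No links Request to RequestItem as drawn (the literal values below are placeholders):

        SELECT ri.Item
        FROM Request r
        JOIN RequestItem ri ON ri.Request_No = r.Request_No
        WHERE r.Request_No = 42
          AND r.Requester_Name = 'Alice';

    The Item table only needs joining in if it carries additional columns to display; RequestItem.Item is enough for the filter itself.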

    Read the article

  • SQL statement supposed to return 2 distinct rows, but only 1 is returned

    - by jello
    I have an SQL statement that is supposed to return 2 rows: the first with psychological_id = 1, and the second with psychological_id = 2. Here is the statement:

        SELECT * FROM psychological WHERE patient_id = 12 AND symptom = 'delire';

    But with this code, which populates an ArrayList with what is supposed to be 2 different rows, I get two rows with the same values: the second row's.

        OneSymptomClass oneSymp = new OneSymptomClass();
        ArrayList oneSympAll = new ArrayList();
        string connStrArrayList = "Data Source=.\\SQLEXPRESS;AttachDbFilename=|DataDirectory|\\PatientMonitoringDatabase.mdf; " +
                                  "Initial Catalog=PatientMonitoringDatabase; " +
                                  "Integrated Security=True";
        string queryStrArrayList = "select * from psychological where patient_id = " + patientID.patient_id +
                                   " and symptom = '" + SymptomComboBoxes[tag].SelectedItem + "';";
        using (var conn = new SqlConnection(connStrArrayList))
        using (var cmd = new SqlCommand(queryStrArrayList, conn))
        {
            conn.Open();
            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    oneSymp.psychological_id = Convert.ToInt32(rdr["psychological_id"]);
                    oneSymp.patient_history_date_psy = (DateTime)rdr["patient_history_date_psy"];
                    oneSymp.strength = Convert.ToInt32(rdr["strength"]);
                    oneSymp.psy_start_date = (DateTime)rdr["psy_start_date"];
                    oneSymp.psy_end_date = (DateTime)rdr["psy_end_date"];
                    oneSympAll.Add(oneSymp);
                }
            }
            conn.Close();
        }
        OneSymptomClass testSymp = oneSympAll[0] as OneSymptomClass;
        MessageBox.Show(testSymp.psychological_id.ToString());

    The message box outputs "2", while it's supposed to output "1". Anyone got an idea what's going on?
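
    The usual culprit in this pattern: oneSymp is created once, outside the loop, so every iteration overwrites the same object's fields, and the list ends up holding two references to one object carrying the last row's values. A minimal sketch of the fix, allocating inside the loop:

        while (rdr.Read())
        {
            // new instance per row, so each ArrayList entry is a distinct object
            OneSymptomClass oneSymp = new OneSymptomClass();
            oneSymp.psychological_id = Convert.ToInt32(rdr["psychological_id"]);
            // ... remaining field assignments as above ...
            oneSympAll.Add(oneSymp);
        }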

    Read the article

  • Remove redundant SQL code

    - by Dave Jarvis
    Code

    The following code calculates the slope and intercept for a linear regression against a slathering of data. It then applies the equation y = mx + b against the same result set to calculate the value of the regression line for each row. Can the two separate sub-selects be joined so that the data and its slope/intercept are calculated without executing the data-gathering part of the query twice?

        SELECT
          AVG(D.AMOUNT) as AMOUNT,
          Y.YEAR * ymxb.SLOPE + ymxb.INTERCEPT as REGRESSION_LINE,
          Y.YEAR as YEAR,
          MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
        FROM
          CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D,
          (SELECT
             ((avg(t.AMOUNT * t.YEAR)) - avg(t.AMOUNT) * avg(t.YEAR)) /
               (stddev( t.AMOUNT ) * stddev( t.YEAR )) as CORRELATION,
             ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
               (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
             ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
               (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
           FROM (
             SELECT
               AVG(D.AMOUNT) as AMOUNT,
               Y.YEAR as YEAR,
               MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
             FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
             WHERE
               $X{ IN, C.ID, CityCode } AND
               SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
               S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
               Y.YEAR BETWEEN 1900 AND 2009 AND
               M.YEAR_REF_ID = Y.ID AND
               M.CATEGORY_ID = $P{CategoryCode} AND
               M.ID = D.MONTH_REF_ID AND
               D.DAILY_FLAG_ID <> 'M'
             GROUP BY Y.YEAR
           ) t
          ) ymxb
        WHERE
          $X{ IN, C.ID, CityCode } AND
          SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
          S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
          Y.YEAR BETWEEN 1900 AND 2009 AND
          M.YEAR_REF_ID = Y.ID AND
          M.CATEGORY_ID = $P{CategoryCode} AND
          M.ID = D.MONTH_REF_ID AND
          D.DAILY_FLAG_ID <> 'M'
        GROUP BY Y.YEAR

    Question

    How do I execute the duplicate bits only once per query, instead of twice? The duplicate bit is the WHERE clause (the $X{ IN, C.ID, CityCode } ... D.DAILY_FLAG_ID <> 'M' block that appears in both the inner and the outer query).

    Related

    http://stackoverflow.com/questions/1595659/how-to-eliminate-duplicate-calculation-in-sql

    Thank you!
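
    One way to avoid running the data-gathering block twice: materialize the shared per-year aggregate once, then cross-join the one-row slope/intercept result back onto it. MAKEDATE suggests MySQL, which (before 8.0) has no WITH clause, so YEARLY_AMOUNTS below is a hypothetical view or temporary table built from the inner SELECT ... GROUP BY Y.YEAR shown above; on a database with CTEs the same shape becomes a single statement.

        -- YEARLY_AMOUNTS(YEAR, AMOUNT): the shared "data gathering" result, built once
        SELECT yr.AMOUNT,
               yr.YEAR * ymxb.SLOPE + ymxb.INTERCEPT AS REGRESSION_LINE,
               yr.YEAR,
               MAKEDATE(yr.YEAR, 1) AS AMOUNT_DATE
        FROM YEARLY_AMOUNTS yr
        CROSS JOIN (
            SELECT ((SUM(t.YEAR) * SUM(t.AMOUNT)) - (COUNT(1) * SUM(t.YEAR * t.AMOUNT))) /
                     (POWER(SUM(t.YEAR), 2) - COUNT(1) * SUM(POWER(t.YEAR, 2))) AS SLOPE,
                   ((SUM(t.YEAR) * SUM(t.YEAR * t.AMOUNT)) - (SUM(t.AMOUNT) * SUM(POWER(t.YEAR, 2)))) /
                     (POWER(SUM(t.YEAR), 2) - COUNT(1) * SUM(POWER(t.YEAR, 2))) AS INTERCEPT
            FROM YEARLY_AMOUNTS t
        ) ymxb;

    The outer AVG/GROUP BY disappears entirely, because YEARLY_AMOUNTS already holds one row per year.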

    Read the article

  • Oracle SQL: ROLLUP not summing correctly

    - by tommy-o-dell
    ROLLUP seems to be working correctly to count the number of units, but not the number of trains. Any idea what could be causing that? In the query output, the detail rows of the trains column add up to 53, but the rollup row shows 51; the units column adds up correctly. Here's the Oracle SQL query:

        select t.year, t.week,
               decode(t.mine_id, NULL, 'PF', t.mine_id) as mine_id,
               decode(t.product, Null, 'LF', t.product) as product,
               decode(t.mine_id||'-'||t.product, '-', 'PF', t.mine_id||'-'||t.product) as code,
               count(distinct t.tpps_train_id) as trains,
               count(1) as units
        from (
            select trn.mine_code as mine_id,
                   trn.train_tpps_id as tpps_train_id,
                   round((con.calibrated_weight_total - con.empty_weight_total), 2) as tonnes
            from widsys.train trn
            INNER JOIN widsys.consist con USING (train_record_id)
            where trn.direction = 'N'
              and (con.calibrated_weight_total - con.empty_weight_total) > 10
              and trn.num_cars > 10
              and con.consist_no not like '_L%'
        ) w,
        (
            select to_char(td.datetime_act_comp_dump - 7/24, 'IYYY') as year,
                   to_char(td.datetime_act_comp_dump - 7/24, 'IW') as week,
                   td.mine_code as mine_id,
                   td.train_id as tpps_train_id,
                   pt.product_type_code as product
            from tpps.train_details td
            inner join tpps.ore_products op using (ore_product_key)
            inner join tpps.product_types pt using (product_type_key)
            where to_char(td.datetime_act_comp_dump - 7/24, 'IYYY') = 2010
              and to_char(td.datetime_act_comp_dump - 7/24, 'IW') = 12
            order by td.datetime_act_comp_dump asc
        ) t
        where w.mine_id = t.mine_id
          and w.tpps_train_id = t.tpps_train_id
        having t.product is not null or t.mine_id is null
        group by t.year, t.week, rollup(t.mine_id, t.product)
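
    This is expected behavior for COUNT(DISTINCT ...) under ROLLUP: the subtotal row de-duplicates across all the groups it spans, so a train that appears under two different mine/product groups counts once per group in the detail rows but only once in the rollup. A 53-vs-51 discrepancy implies two such trains. A minimal demonstration:

        -- train 1 hauls both products, so the rollup total is 2, not 3
        SELECT product, COUNT(DISTINCT train_id) AS trains
        FROM (
            SELECT 'A' AS product, 1 AS train_id FROM dual UNION ALL
            SELECT 'B', 1 FROM dual UNION ALL
            SELECT 'B', 2 FROM dual
        )
        GROUP BY ROLLUP(product);
        -- A -> 1, B -> 2, (total) -> 2

    Whether 51 or 53 is "correct" depends on whether a train spanning two groups should count once or twice in the total.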

    Read the article

  • SQL insert query needed

    - by masfenix
    I have two tables: a master table, "all_reports", and a user table, "user_list". The master table may contain users that do not exist in the user list; I need to add them to the user list. The master table may also contain duplicates. And the master list does not contain all the information that the user list requires (no manager, no HR status, no department).
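
    Since the column names aren't shown in the question, here is a hedged sketch using a hypothetical username column as the matching key. DISTINCT collapses the master table's duplicates, and NOT EXISTS skips users already present; the columns the master table lacks (manager, HR status, department) are simply left NULL to fill in later:

        INSERT INTO user_list (username)
        SELECT DISTINCT a.username
        FROM all_reports a
        WHERE NOT EXISTS (SELECT 1
                          FROM user_list u
                          WHERE u.username = a.username);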

    Read the article

  • SQL Server 2008 query

    - by Prashant
    I am trying to implement versioning of data. I have two tables, Client and Address. I have to display in the UI the various updates in the order in which they were made, but with the correct client version. So:

        Client Table                  Address Table
        -------------------------     -------------------------
        Version    Modified Date      Version    Modified Date
        CV1        T1                 AV1        T2
        CV2        T4                 AV2        T3
        CV3        T5

    My result should be:

        CV1  AV1  (first version)
        CV1  AV2  (as AV1 was updated at T3)
        CV2  AV2  (as Client got updated to CV2 at T4)
        CV3  AV2  (as Client got updated at T5)
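
    One way to express this: build the combined timeline of modification times from both tables, then for each point in time pick the latest version of each table at or before it. A T-SQL sketch, assuming tables Client(Version, ModifiedDate) and Address(Version, ModifiedDate):

        SELECT ts.ModifiedDate,
               (SELECT TOP 1 c.Version FROM Client c
                WHERE c.ModifiedDate <= ts.ModifiedDate
                ORDER BY c.ModifiedDate DESC) AS ClientVersion,
               (SELECT TOP 1 a.Version FROM Address a
                WHERE a.ModifiedDate <= ts.ModifiedDate
                ORDER BY a.ModifiedDate DESC) AS AddressVersion
        FROM (SELECT ModifiedDate FROM Client
              UNION
              SELECT ModifiedDate FROM Address) ts
        ORDER BY ts.ModifiedDate;

    For the sample data this yields (CV1, NULL) at T1 followed by the four expected pairs; the T1 row's NULL appears because no address exists yet at that point, and can be filtered out if unwanted.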

    Read the article

  • MySQL SUM query for daily values of a week

    - by davykiash
    I am trying to return the sum for each day of a week in MySQL, but the query returns nothing, despite there being values for the third week of March 2010:

        SELECT SUM(expense_details_amount) AS total
        FROM expense_details
        WHERE YEAR(expense_details_date) = '2010'
          AND MONTH(expense_details_date) = '03'
          AND WEEK(expense_details_date) = '3'
        GROUP BY DAY(expense_details_date)

    How do I go about this?
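
    The likely catch: MySQL's WEEK() returns the week of the year (0-53), not the week of the month, so the third week of March 2010 is around week 11 (depending on the WEEK() mode) and WEEK(...) = 3 matches nothing. One hedged fix is to filter on an explicit date range for the week in question:

        SELECT DAY(expense_details_date) AS day_of_month,
               SUM(expense_details_amount) AS total
        FROM expense_details
        WHERE expense_details_date >= '2010-03-15'
          AND expense_details_date <  '2010-03-22'   -- third week of March 2010 (Mon 15th - Sun 21st)
        GROUP BY DAY(expense_details_date);

    A plain range comparison also lets MySQL use an index on expense_details_date, which wrapping the column in YEAR()/MONTH()/WEEK() prevents.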

    Read the article

  • Insert query results into table in MS Access 2010

    - by CodeMed
    I need to transform data from one schema into another in an MS Access database. This involves writing queries to select data from the old schema and then inserting the results of the queries into tables in the new schema. Below is an example of what I am trying to do. The SELECT component works fine, but the INSERT component does not. Can someone show me how to fix this so that it inserts the results of the SELECT statement into the destination table?

        INSERT INTO CompaniesTable (CompanyName)
        VALUES (
            SELECT DISTINCT IIF(a.FIRM_NAME IS NULL, b.SUBACCOUNT_COMPANY_NAME, a.FIRM_NAME) AS CompanyName
            FROM (SELECT ContactID, FIRM_NAME, SUBACCOUNT_COMPANY_NAME FROM qrySummaryData) AS a
            LEFT JOIN (SELECT ContactID, FIRM_NAME, SUBACCOUNT_COMPANY_NAME FROM qrySummaryData) AS b
            ON a.ContactID = b.ContactID
        );

    The definition of the target table (CompaniesTable) is:

        CompanyID     Autonumber
        CompanyName   Text
        Description   Text
        WebSite       Text
        Email         Text
        TypeNumber    Number
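
    VALUES takes literal values; to insert a query's result set, the SELECT follows INSERT INTO directly, with no VALUES wrapper. A fix keeping the rest of the statement intact:

        INSERT INTO CompaniesTable (CompanyName)
        SELECT DISTINCT IIF(a.FIRM_NAME IS NULL, b.SUBACCOUNT_COMPANY_NAME, a.FIRM_NAME) AS CompanyName
        FROM (SELECT ContactID, FIRM_NAME, SUBACCOUNT_COMPANY_NAME FROM qrySummaryData) AS a
        LEFT JOIN (SELECT ContactID, FIRM_NAME, SUBACCOUNT_COMPANY_NAME FROM qrySummaryData) AS b
        ON a.ContactID = b.ContactID;

    As an aside, both derived tables read the same query and join on ContactID, so the self-join may be unnecessary; a plain SELECT DISTINCT NZ(FIRM_NAME, SUBACCOUNT_COMPANY_NAME) FROM qrySummaryData might do the same job, but that's worth verifying against the data.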

    Read the article

  • Synchronizing in SQL Replication works when manually syncing, but not automatically

    - by Dominic Zukiewicz
    I'm using SQL Server 2005 to create replication copies of the main databases, so that the reports can point to the replication copies instead of locking out our main databases. I have set up the 3 databases as publications, with 3 subscribers moving the transactions over to the subscribers, instantaneously I hope! What seems to be happening is that, when using the "Insert Tracer" function, replication from publisher to distributor takes under 2 seconds, but replicating to the subscribers can take over 7 minutes (and these are local databases on a SAN). This could be for 2 reasons:

        1. The SQL statements used to query the database are obtaining locks which stop the transactions from updating the subscribers.
        2. The subscribers are just too busy for the replication to apply the changes.

    What troubles me more is that although the Replication Monitor / Insert Tracer show these statistics, if you use "View Subscription Details" and then click Start, it syncs within seconds. My goal is to have the data syncing continuously (ideally), or every minute; perhaps I should reduce the batch size of the transactions? What am I doing wrong? (Note that the -Continuous flag is set!)

    Read the article
