Search Results

Search found 84007 results on 3361 pages for 'sql system table'.

Page 330/3361

  • Change Data Capture or Change Tracking - Same as Traditional Audit Trail Table?

    - by HardCode
    Before I delve any deeper into the abyss of Microsoft documentation, I'd like to know whether someone experienced with Change Data Capture or Change Tracking can tell me if one or both of these can replace the traditional audit trail setup: an "audit trail table" copy of the real table (all of the fields of the original table, plus date/time, user ID, and a DML action field), populated by triggers, which is all manual work. The MSDN overview documentation explains at a high level what Change Data Capture and Change Tracking are, but it isn't clear enough to me, and doesn't state outright, that these tools can replace the traditional audit trail tables we've built so often. Can someone with experience using Change Data Capture or Change Tracking save me a lot of time, or confirm that I am looking at the right tool? The critical part of our audit trail is capturing all changes to a table's fields (on INSERT, UPDATE, DELETE), when each change happened, and who made it. These changes are commonly presented to an end user chronologically via an audit trail report. Which raises another question: whichever of Change Data Capture or Change Tracking is the solution, I assume its data can be queried just like data from a normal table? EDIT: I need a permanent audit trail, regardless of time. I see that Change Data Capture works off the transaction log, so its retention sounds finite to me.
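
    For what it's worth, here is a minimal sketch of what enabling Change Data Capture on one table looks like (the schema and table names are placeholders, and this assumes SQL Server 2008 Enterprise or Developer edition). Two caveats relevant to the question: the capture tables are trimmed by a cleanup job after a retention period (days, not forever), and CDC does not record which user made the change, so on its own it does not cover the "who did it" requirement.

        -- Enable CDC at the database level (requires sysadmin).
        EXEC sys.sp_cdc_enable_db;

        -- Enable CDC for one table; schema and table names here are placeholders.
        EXEC sys.sp_cdc_enable_table
            @source_schema = N'dbo',
            @source_name   = N'Orders',
            @role_name     = NULL;          -- NULL = no gating role

        -- Read the captured changes for the full valid LSN range.
        DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_Orders');
        DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

        SELECT *
        FROM cdc.fn_cdc_get_all_changes_dbo_Orders(@from_lsn, @to_lsn, N'all');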

    Read the article

  • LinqDataSource wizard table list is not refreshing after updating LinqToSql classes

    - by dotnet_learner
    I have changed my dbml file as follows: I deleted all the tables and stored procs, then added new tables and stored procs from a new database. In the code-behind, I can access the new tables and stored procs. However, when I try to configure the LinqDataSource (using the same DataContext), I still see all the old tables in the wizard drop-down. How can I refresh the wizard drop-down so that I can select the newly added tables? Deleting the old LinqDataSource and adding a new one does not work.

    Read the article

  • PHP PDO SQL Server Select statement not replacing question marks

    - by Metropolis
    A while ago I wrote a database class which uses PDO to connect to SQL Server databases and also to MySQL databases. It has always replaced the question marks fine when used with MySQL, but for the SQL Server database I had to create a workaround which replaces the question marks manually. Here is the code for that: if($this->getPDODriver() == 'odbc' && !empty($values_a) && substr_count($query_s, "?") > 0) { $query_s = preg_replace(array_fill(0, substr_count($query_s, "?"), '/\?/'), $values_a, $query_s, 1); $values_a = NULL; } Now, I understand that this completely defeats the purpose of the question marks and PDO, but it has been working fine for me. What I would like to do now, though, is find out why the question marks are not getting replaced in the first place, and remove this workaround. If I have a select statement like the following: SELECT * FROM database WHERE value = ? That is what the query looks like when I go to prepare it, but when I display the query results, I get a blank array. Just remember, this class works fine with MySQL, and it works fine with the workaround above, so I know it has something to do with the question marks.

    Read the article

  • sql exception when transferring project from usb to c:\

    - by jello
    I'm working on a C# Windows program with Visual Studio 2008. Usually I work at school, directly from my USB drive. But when I copy the folder to my hard drive at home, an SqlException goes unhandled whenever I try to write to the database. It is thrown at the conn.Open(); line. Here's the exception text: Database 'L:\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf' already exists. Choose a different database name. Cannot attach the file 'C:\Documents and Settings\Administrator\My Documents\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf' as database 'PatientMonitoringDatabase'. It's weird, because my connection string uses |DataDirectory|, so it should work on any drive. Here's my connection string: string connStr = "Data Source=.\\SQLEXPRESS;AttachDbFilename=|DataDirectory|\\PatientMonitoringDatabase.mdf; " + "Initial Catalog=PatientMonitoringDatabase; " + "Integrated Security=True"; What's going on here?
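
    A guess at the cause: this error usually means a database named PatientMonitoringDatabase is already attached to the local SQLEXPRESS instance from the old (L:\) path, so the new file under C:\ cannot be attached under the same name. A diagnostic sketch, assuming that is what happened:

        -- Run against the local .\SQLEXPRESS instance.
        -- See whether a database with that name is already attached, and from which file.
        SELECT d.name, mf.physical_name
        FROM sys.databases AS d
        JOIN sys.master_files AS mf ON mf.database_id = d.database_id
        WHERE d.name = 'PatientMonitoringDatabase';

        -- If it is attached from the old (USB) path, detach it so the
        -- AttachDbFilename connection string can attach the copy under C:\.
        EXEC sp_detach_db @dbname = N'PatientMonitoringDatabase';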

    Read the article

  • Problem with Linq query and date format

    - by Alan T
    Hi all. I have a C# console application written using Visual Studio 2008. My system culture is en-GB. I have a LINQ query that looks like this: var myDate = "19-May-2010"; var cus = from x in _dataContext.testTable where x.CreateDate == Convert.ToDateTime(myDate) select x; The resulting SQL query generates an error because it passes the date as "19/05/2010", which SQL Server rejects as an invalid date. For some reason, even though my system culture is set to en-GB, it looks like the date is being interpreted as en-US. Any ideas how I can get around this? Thanks. Alan T
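
    Not a fix for the LINQ provider itself, but a SQL-level sketch of the interpretation problem and how an explicit conversion style sidesteps the ambiguity (the first statement is expected to fail under a us_english session language):

        -- Under a us_english session language, dd/MM/yyyy strings fail:
        SET LANGUAGE us_english;
        SELECT CONVERT(datetime, '19/05/2010');        -- error: out-of-range value

        -- An explicit style (103 = British/French dd/mm/yyyy) is unambiguous:
        SELECT CONVERT(datetime, '19/05/2010', 103);   -- 2010-05-19

        -- Unseparated ISO-style literals parse the same way in any language:
        SELECT CONVERT(datetime, '20100519');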

    Read the article

  • One table, need multiple values from different rows/tuples

    - by WmasterJ
    I have tables like this:

        'profile_values'
        userID | fid | value
        -------+-----+--------------
        1      | 3   | [email protected]
        1      | 45  | 203-234-2345
        3      | 3   | [email protected]
        1      | 45  | 123-456-7890

    And:

        'users'
        userID | name
        -------+------
        1      | joe
        2      | jane
        3      | jake

    I want to join them and get one row per user with two of the values, like:

        userID | name | email           | phone
        -------+------+-----------------+--------------
        1      | joe  | [email protected] | 203-234-2345
        2      | jane | [email protected] | 123-456-7890

    I have solved it, but it feels clumsy and I want to know if there is a better way to do it: a solution that is more readable, faster (better optimized), or simply best practice. My current solution selects from multiple tables with many conditional statements:

        SELECT u.userID AS memberid, u.name AS first_name,
               pv1.value AS fname, pv2.value AS lname
        FROM users AS u, profile_values AS pv1, profile_values AS pv2
        WHERE u.userID = pv1.userID AND pv1.fid = 3
          AND u.userID = pv2.userID AND pv2.fid = 45;

    Thanks for the help!
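
    For comparison, a sketch of two common alternatives, assuming the column names shown above (fid 3 = email, fid 45 = phone): explicit joins per profile field, or a single pass with conditional aggregation.

        -- Explicit joins, one per profile field:
        SELECT u.userID, u.name, email.value AS email, phone.value AS phone
        FROM users AS u
        LEFT JOIN profile_values AS email ON email.userID = u.userID AND email.fid = 3
        LEFT JOIN profile_values AS phone ON phone.userID = u.userID AND phone.fid = 45;

        -- Or a single pass with conditional aggregation:
        SELECT u.userID, u.name,
               MAX(CASE WHEN pv.fid = 3  THEN pv.value END) AS email,
               MAX(CASE WHEN pv.fid = 45 THEN pv.value END) AS phone
        FROM users AS u
        LEFT JOIN profile_values AS pv ON pv.userID = u.userID
        GROUP BY u.userID, u.name;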

    Read the article

  • SQL GUID Vs Integer

    - by Dal
    Hi. I have recently started a new job and noticed that all the SQL tables use the GUID data type for the primary key. In my previous job we used integers (auto-increment) for the primary key, and in my opinion that was a lot easier to work with. For example, say you had two related tables, Product and ProductType: I could easily cross-check the ProductTypeID column of both tables for a particular row and quickly map the data in my head, because it's easy to hold a number (2, 4, 45, etc.) as opposed to E75B92A3-3299-4407-A913-C5CA196B3CAB. The extra frustration comes from wanting to understand how the tables are related; sadly there is no database diagram :( A lot of people say that GUIDs are better because you can generate the unique identifier in your C# code, for example using NewID(), without requiring SQL Server to do it, which also lets you know provisionally what the ID will be... but I've seen that it is possible to retrieve the 'next auto-incremented integer' too. A DBA contractor reported that our queries could be up to 30% faster if we used the integer type instead of GUIDs. Why does the GUID data type exist, and what advantages does it really provide? Even if it's a choice made by some professionals, there must be good reasons why it's used.
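
    Purely as an illustration of the two key styles (the table and column names here are made up): when a GUID is the clustered primary key it is usually paired with NEWSEQUENTIALID() rather than NEWID(), to avoid the index fragmentation that random values cause on insert.

        -- Auto-increment integer key:
        CREATE TABLE ProductInt (
            ProductID int IDENTITY(1,1) PRIMARY KEY,
            Name      nvarchar(100) NOT NULL
        );

        -- GUID key; NEWSEQUENTIALID() keeps new values roughly ordered,
        -- reducing the page splits that random NEWID() values cause:
        CREATE TABLE ProductGuid (
            ProductID uniqueidentifier NOT NULL
                CONSTRAINT DF_ProductGuid_ID DEFAULT NEWSEQUENTIALID()
                PRIMARY KEY,
            Name      nvarchar(100) NOT NULL
        );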

    Read the article

  • How to get max row from table

    - by Odette
    Hi. I have the following code and a massive problem:

        WITH CALC1 AS (
            SELECT OTQUOT, OTIT01 AS ITEMS, ROUND(OQCQ01 * OVRC01,2) AS COST FROM @[email protected] WHERE OTIT01 <> ''
            UNION ALL
            SELECT OTQUOT, OTIT02 AS ITEMS, ROUND(OQCQ02 * OVRC02,2) AS COST FROM @[email protected] WHERE OTIT02 <> ''
            UNION ALL
            SELECT OTQUOT, OTIT03 AS ITEMS, ROUND(OQCQ03 * OVRC03,2) AS COST FROM @[email protected] WHERE OTIT03 <> ''
            UNION ALL
            SELECT OTQUOT, OTIT04 AS ITEMS, ROUND(OQCQ04 * OVRC04,2) AS COST FROM @[email protected] WHERE OTIT04 <> ''
            UNION ALL
            SELECT OTQUOT, OTIT05 AS ITEMS, ROUND(OQCQ05 * OVRC05,2) AS COST FROM @[email protected] WHERE OTIT05 <> ''
            ORDER BY OTQUOT ASC
        )
        SELECT OTQUOT, ITEMS, MAX(COST)
        FROM CALC1
        WHERE OTQUOT = '04886471'
        GROUP BY OTQUOT, ITEMS

    Result:

        04886471  FEPO5050WCGA24   13.21
        04886471  GFRK1650SGL      36.21
        04886471  FRA7500GA        12.6
        04886471  CGIFESHAZ        11.02
        04886471  CGIFESHPDPR      11.79
        04886471  GFRK1350DBL      68.23
        04886471  RET1.63825GP     32.55
        04886471  FRSA              0.12
        04886471  GFRK1350SGL      55.94
        04886471  GFRK1650DBL      71.89
        04886471  FEPO6565WCGA24   16.6
        04886471  PCAP5050GA        0.28
        04886471  FEPO6565NCPAG24   0.000000

    How can I get only the row whose item code has the highest cost? In this case I need the result 04886471 GFRK1650DBL 71.89, but I don't know how to change my code to get that. Can anybody please help me?
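
    A sketch of one way to return just the top row, assuming the database supports window functions (ROW_NUMBER is available on SQL Server 2005+ and DB2 for i, among others); CALC1 is the same UNION ALL as in the question, and @[email protected] is the obfuscated library/file name from the question:

        WITH CALC1 AS (
            SELECT OTQUOT, OTIT01 AS ITEMS, ROUND(OQCQ01 * OVRC01,2) AS COST FROM @[email protected] WHERE OTIT01 <> ''
            UNION ALL
            SELECT OTQUOT, OTIT02 AS ITEMS, ROUND(OQCQ02 * OVRC02,2) AS COST FROM @[email protected] WHERE OTIT02 <> ''
            UNION ALL
            SELECT OTQUOT, OTIT03 AS ITEMS, ROUND(OQCQ03 * OVRC03,2) AS COST FROM @[email protected] WHERE OTIT03 <> ''
            UNION ALL
            SELECT OTQUOT, OTIT04 AS ITEMS, ROUND(OQCQ04 * OVRC04,2) AS COST FROM @[email protected] WHERE OTIT04 <> ''
            UNION ALL
            SELECT OTQUOT, OTIT05 AS ITEMS, ROUND(OQCQ05 * OVRC05,2) AS COST FROM @[email protected] WHERE OTIT05 <> ''
        )
        SELECT OTQUOT, ITEMS, COST
        FROM (
            SELECT OTQUOT, ITEMS, COST,
                   ROW_NUMBER() OVER (PARTITION BY OTQUOT ORDER BY COST DESC) AS RN
            FROM CALC1
            WHERE OTQUOT = '04886471'
        ) RANKED
        WHERE RN = 1;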

    Read the article

  • Sql Server Maintenance Plan Tasks & Completion

    - by Ben
    Hi all. I have a maintenance plan that looks like this:

        Client 1: Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client
        Client 2: Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client
        Client N: ...

    Import Data and Process Data call jobs, and Post Process is an Execute SQL task. If Import Data or Process Data fails, the plan moves on to the next client's Import Data. Both Import Data and Process Data are jobs containing SSIS packages that use the built-in SQL logging provider. My expectation with the configuration as it stands is:

        Client 1 Import Data runs:  Failure -> Client 2 Import Data | Success -> Process Data
        Process Data runs:          Failure -> Client 2 Import Data | Success -> Post Process
        Post Process runs:          Completion (success or failure) -> next client's Import Data

    That isn't what I'm seeing in my logs, though. I see several Client Import Data SSIS log entries, then several Post Process log entries, then back to Client Import Data! Argh! What am I doing wrong? I didn't think the "success" branch of Client 1 Import Data would kick off until it... well... succeeded, aka finished! The logs seem to indicate otherwise, though. I really need these tasks to run consecutively, not concurrently. Is this possible? Thanks!

    Read the article

  • Selecting a good SQL Server 2008 spatial index with large polygons

    - by andynormancx
    I'm having some fun trying to pick a decent SQL Server 2008 spatial index setup for a data set I am dealing with. The dataset is polygons representing contours over the whole globe. There are 106,000 rows in the table; the polygons are stored in a geometry field. The issue I have is that many of the polygons cover a large portion of the globe, which seems to make it very hard to get a spatial index that eliminates many rows in the primary filter. For example, look at the following query:

        SELECT "ID","CODE","geom".STAsBinary() as "geom" FROM "dbo"."ContA"
        WHERE "geom".Filter(
            geometry::STGeomFromText('POLYGON ((-142.03193662573682 59.53396984952896, -142.03193662573682 59.88928136451884, -141.32743833481925 59.88928136451884, -141.32743833481925 59.53396984952896, -142.03193662573682 59.53396984952896))', 4326)
        ) = 1

    This queries an area which intersects only two of the polygons in the table. No matter what combination of spatial index settings I choose, that Filter() always returns around 60,000 rows. Replacing Filter() with STIntersects() of course returns just the two polygons I want, but takes much longer (Filter() is 6 seconds, STIntersects() is 12 seconds). Can anyone give me any hints on whether there is a spatial index setup that is likely to improve on 60,000 rows, or is my dataset just not a good match for SQL Server's spatial indexing?
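
    In case it helps, a sketch of an explicit GEOMETRY_GRID index to experiment with; the bounding box covers lon/lat for the whole globe, and the grid densities and CELLS_PER_OBJECT value are starting points to tune, not known-good settings for this data:

        -- Geometry data needs an explicit bounding box; for lon/lat contours the
        -- whole globe is (-180, -90, 180, 90).
        CREATE SPATIAL INDEX SIdx_ContA_geom
        ON dbo.ContA (geom)
        USING GEOMETRY_GRID
        WITH (
            BOUNDING_BOX = (-180, -90, 180, 90),
            GRIDS = (LEVEL_1 = LOW, LEVEL_2 = LOW, LEVEL_3 = MEDIUM, LEVEL_4 = HIGH),
            CELLS_PER_OBJECT = 64
        );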

    Read the article

  • Database designing for a bug tracking system in SQL

    - by Yasser
    I am an ASP.NET developer and I really don't do much database work. But for my open source project on CodePlex I am required to set up a database schema for the project. Reading from here and there, I have managed to put a schema together. Being new to database schema design, I would like someone with a better grasp of this topic to help me identify any issues with the design. Most of the relationships are self-explanatory, I think, but I will still jot each one down. The two keys between UserProfile and Issues are for relationships between UserId and IssueCreatedBy and IssueClosedBy. Thanks
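
    A minimal DDL sketch of the UserProfile/Issues relationship described above, just to make the two foreign keys concrete; every column other than the keys is an assumption added to keep the example self-contained:

        CREATE TABLE UserProfile (
            UserId   int IDENTITY(1,1) PRIMARY KEY,
            UserName nvarchar(50) NOT NULL
        );

        CREATE TABLE Issues (
            IssueId        int IDENTITY(1,1) PRIMARY KEY,
            Title          nvarchar(200) NOT NULL,
            IssueCreatedBy int NOT NULL
                CONSTRAINT FK_Issues_CreatedBy REFERENCES UserProfile (UserId),
            IssueClosedBy  int NULL
                CONSTRAINT FK_Issues_ClosedBy  REFERENCES UserProfile (UserId)
        );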

    Read the article

  • attaching linq to sql datacontext to httpcontext in business layer

    - by rap-uvic
    Hello all, I need my linq to sql datacontext to be available across my business/data layer for all my repository objects to access. However since this is a web app, I want to create and destroy it per request. I'm wondering if having a singleton class that can lazily create and attach the datacontext to current HttpContext would work. My question is: would the datacontext get disposed automatically when the request ends? Below is the code for what I'm thinking. Would this accomplish my purpose: have a thread-safe datacontext instance that is lazily available and is automatically disposed when the request ends? public class SingletonDC { public static NorthwindDataContext Default { get { NorthwindDataContext defaultInstance = (NorthwindDataContext)System.Web.HttpContext.Current.Items["datacontext"]; if (defaultInstance == null) { defaultInstance = new NorthwindDataContext(); System.Web.HttpContext.Current.Items.Add("datacontext", defaultInstance); } return defaultInstance; } } }

    Read the article

  • Tool for generating flat files from SQL objects dynamically

    - by Fabio Gouw
    Hello! I'm looking for a tool or component that generates flat files from a SQL Server query result (a stored procedure or a SELECT * over a table or view). This will be a batch process which runs every day, and every day a new file is created. I could use SQL Server Integration Services (DTS), but I have a mandatory requirement: the layout of the file needs to be dynamic. If a new column is added to my query result, the file must include this new column too, without my having to modify the SSIS package. If a column is removed, then the flat file should no longer have it. I've tried to do this with SSIS, but when I create a new package I need to specify the number of columns. Another requirement is configuring the output format depending on the data type of the column. If it's a datetime, the format needs to be YYYY-MM-DD. If it's a float, then I need to use 2 decimal digits, and so on. Does anyone know a tool that does this job? Thanks
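
    Not a tool recommendation, but a sketch of the kind of dynamic SQL such a batch process can build for itself: the current column list is read from INFORMATION_SCHEMA at run time, and each column gets a per-type format. The view name and the exact formats are placeholders:

        DECLARE @cols nvarchar(max);

        -- Build a formatted column list for the current shape of dbo.MyView.
        SELECT @cols = STUFF((
            SELECT ', ' +
                CASE
                    WHEN DATA_TYPE IN ('datetime', 'smalldatetime', 'date')
                        THEN 'CONVERT(char(10), ' + QUOTENAME(COLUMN_NAME) + ', 120)'
                    WHEN DATA_TYPE IN ('float', 'real', 'decimal', 'numeric')
                        THEN 'CONVERT(varchar(32), CAST(' + QUOTENAME(COLUMN_NAME) + ' AS decimal(18,2)))'
                    ELSE 'CONVERT(varchar(255), ' + QUOTENAME(COLUMN_NAME) + ')'
                END
            FROM INFORMATION_SCHEMA.COLUMNS
            WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'MyView'
            ORDER BY ORDINAL_POSITION
            FOR XML PATH('')), 1, 2, '');

        -- The result set of this statement is what gets written to the flat file.
        EXEC (N'SELECT ' + @cols + N' FROM dbo.MyView');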

    Read the article

  • SQL Server Compact timed out waiting for a lock

    - by jankhana
    Hi all. I have an application in which I use SQL Server Compact 3.5 with VS2008. I run multiple threads in my application which contact the Compact database and access its rows. Each thread selects and deletes rows in one step, i.e. it selects five rows, hands them to the application, and deletes those rows from the table. It works great with a single thread, but if I use multiple threads (3 or more) I very often get the timeout error. I have increased the timeout property in the connection string, but it didn't give the expected result. The error log is as follows: SQL Server Compact timed out waiting for a lock. The default lock time is 2000ms for devices and 5000ms for desktops. The default lock timeout can be increased in the connection string using the ssce: default lock timeout property. [ Session id = 5, Thread id = 4204, Process id = 4808, Table name = XXX, Conflict type = x lock (s blocks), Resource = TAB ] The query that I use to retrieve the rows is: select Top(5) * from TableName order by id; delete from TableName where id in(select top(5) id from TableName order by id); Is there any way to avoid this timeout exception? I run the above query as a transaction in VS2008, one statement using SqlCeCommand and the other using SqlCeDataAdapter. Any ideas?

    Read the article

  • Improving SQL Code

    - by jeremib
    I'm using Pervasive SQL. I have the following UNION of multiple SQL statements. Is there a way to clean this up, especially the Pay_Date and Loc_No values that are selected in each statement? Is there a way to pull those out so there is only one place where I need to change them?

        ( SELECT '23400' as Gl_Number, y.Plan as Description, 0 as Hours, ROUND(SUM(Ee_Curr),2) as Debit, 0 as Credit
          FROM "PR_YLOC" y LEFT JOIN PR_SUMM s ON (s.Summ_No = y.Summ_No)
          WHERE y.Loc_No = 1041 AND s.Pay_Date = '2010-04-02' AND y.Code IN (100, 105, 110) AND y.Type = 3 GROUP BY y.Plan )
        UNION
        ( SELECT '72000' as Gl_Number, y.Plan, 0, ROUND(SUM(Er_Curr),2), 0
          FROM "PR_YLOC" y LEFT JOIN PR_SUMM s ON (s.Summ_No = y.Summ_No)
          WHERE y.Loc_No = 1041 AND s.Pay_Date = '2010-04-02' AND y.Code IN (100, 105, 110) AND y.Type = 3 GROUP BY y.Plan )
        UNION
        ( SELECT '24800', c.Plan, 0, ROUND(SUM(Ee_Amt),2), 0 FROM "PR_CDED" c WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 100 GROUP BY c.Plan )
        UNION
        ( SELECT '24800', c.Plan, 0, 0, ROUND(SUM(Ee_Amt),2) FROM "PR_CDED" c WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 115 GROUP BY c.Plan )
        UNION
        ( SELECT '24150', c.Plan, 0, 0, ROUND(SUM(Ee_Amt),2) FROM "PR_CDED" c WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 241 GROUP BY c.Plan )
        UNION
        ( SELECT '24150', c.Plan, 0, ROUND(SUM(Ee_Amt),2), 0 FROM "PR_CDED" c WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 239 GROUP BY c.Plan )
        UNION
        ( SELECT '24120', c.Plan, 0, ROUND(SUM(Ee_Amt),2), 0 FROM "PR_CDED" c WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 230 GROUP BY c.Plan )
        UNION
        ( SELECT '24100', c.Plan, 0, ROUND(SUM(Ee_Amt),2), 0 FROM "PR_CDED" c WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 225 GROUP BY c.Plan )
        UNION
        ( SELECT '23800', c.Plan, 0, ROUND(SUM(Ee_Amt),2), 0 FROM "PR_CDED" c WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 245 GROUP BY c.Plan )
        UNION
        ( select m.Def_Dept as Gl_Number, t.Short_Desc,
            (SELECT SUM(Hours) FROM pr_earn en WHERE en.Loc_No = e.Loc_No AND en.Emp_No = e.Emp_No AND en.Pay_Date = e.Pay_Date AND en.Pay_Code = e.Pay_Code) as Hours,
            (SELECT SUM(Pay_Amt) FROM pr_earn en WHERE en.Loc_No = e.Loc_No AND en.Emp_No = e.Emp_No AND en.Pay_Date = e.Pay_Date AND en.Pay_Code = e.Pay_Code) as Debit,
            0
          from pr_earn e
          left join pr_mast m on (e.Loc_No = m.Loc_No and e.Emp_No = m.Emp_No)
          left join pr_ptype t ON (t.Code = e.Pay_Code)
          where e.loc_no = 1041 and e.pay_date = '2010-04-02'
          group by m.Def_Dept, t.Short_Desc )

    Thanks
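
    One way to keep the date and location in a single place is to hoist them into variables, assuming this can live inside a Pervasive stored procedure (the colon-prefixed variable syntax shown here is the stored-procedure form and may need adjusting for your engine version); only the first two branches are shown, the rest change the same way:

        -- Declare the two values once and reference them in every branch.
        DECLARE :payDate DATE;
        DECLARE :locNo   INTEGER;
        SET :payDate = '2010-04-02';
        SET :locNo   = 1041;

        ( SELECT '23400' as Gl_Number, y.Plan as Description, 0 as Hours, ROUND(SUM(Ee_Curr),2) as Debit, 0 as Credit
          FROM "PR_YLOC" y LEFT JOIN PR_SUMM s ON (s.Summ_No = y.Summ_No)
          WHERE y.Loc_No = :locNo AND s.Pay_Date = :payDate AND y.Code IN (100, 105, 110) AND y.Type = 3
          GROUP BY y.Plan )
        UNION
        ( SELECT '24800', c.Plan, 0, ROUND(SUM(Ee_Amt),2), 0
          FROM "PR_CDED" c
          WHERE Pay_Date = :payDate AND Loc_No = :locNo AND Code = 100
          GROUP BY c.Plan )
        -- ... remaining branches unchanged apart from the two variables ...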

    Read the article

  • Sql Server - Cascade delete has multiple paths

    - by Anders Juul
    Hi all. I have two tables, Results and ComparedResults. ComparedResults has two columns which reference the primary key of the Results table. My problem is that if a record in Results is deleted, I wish to delete all records in ComparedResults which reference the deleted record, regardless of whether it's one column or the other (and the columns may reference the same Results row). A row in Results may be deleted directly or through a cascade delete caused by deleting in a third table. Googling this suggests that I need to disable cascade delete and rewrite all cascade deletes to use triggers instead. Is that REALLY necessary? I'd be prepared to do a lot of restructuring of the database to avoid this, as my main area is OO programming and databases should 'just work'. It is hard to see, however, how restructuring would help, as I would just be moving the problem around... Or am I missing something? I am also a bit at a loss as to why my initial construct should even be a problem for SQL Server! Any comments are welcome and much appreciated! Anders, Denmark
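
    If restructuring doesn't remove the multiple-path problem, the usual workaround is indeed the trigger route. A sketch, assuming ComparedResults has two FK columns named ResultAId and ResultBId (names invented here); note that if Results is itself the target of an ON DELETE CASCADE from the third table, that cascade also has to move into a trigger, since it cannot coexist with an INSTEAD OF DELETE trigger on Results:

        -- Delete dependent ComparedResults rows, then the Results rows themselves.
        CREATE TRIGGER trg_Results_Delete
        ON Results
        INSTEAD OF DELETE
        AS
        BEGIN
            SET NOCOUNT ON;

            DELETE cr
            FROM ComparedResults AS cr
            WHERE cr.ResultAId IN (SELECT Id FROM deleted)
               OR cr.ResultBId IN (SELECT Id FROM deleted);

            DELETE r
            FROM Results AS r
            WHERE r.Id IN (SELECT Id FROM deleted);
        END;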

    Read the article

  • Re-indexing table; update with from

    - by David Thorisson
    The query says it all: I can't figure out the right syntax without resorting to a for..next loop.

        UPDATE Webtree
        SET Webtree.Sorting = w2.Sorting
        FROM (
            SELECT BranchID,
                   CASE WHEN @Index >= ROW_NUMBER() OVER(ORDER BY Sorting ASC)
                        THEN ROW_NUMBER() OVER(ORDER BY Sorting ASC)
                        ELSE ROW_NUMBER() OVER(ORDER BY Sorting ASC) + 1
                   END AS Sorting
            FROM Webtree w2
            WHERE w2.ParentID = @ParentID
        )
        WHERE Webtree.BranchID = w2.BranchID
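
    A sketch of what is probably intended: the derived table needs its own alias, and the outer statement has to reference that alias (the w2 alias inside the subquery is not visible outside it). @Index and @ParentID are assumed to be declared as in the question:

        UPDATE Webtree
        SET Sorting = ranked.Sorting
        FROM Webtree
        JOIN (
            SELECT BranchID,
                   CASE WHEN @Index >= ROW_NUMBER() OVER (ORDER BY Sorting ASC)
                        THEN ROW_NUMBER() OVER (ORDER BY Sorting ASC)
                        ELSE ROW_NUMBER() OVER (ORDER BY Sorting ASC) + 1
                   END AS Sorting
            FROM Webtree
            WHERE ParentID = @ParentID
        ) AS ranked ON ranked.BranchID = Webtree.BranchID;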

    Read the article

  • SQL command to get field of a maximum value, without making two select

    - by António Capelo
    I'm starting to learn SQL and I'm working on this exercise: I have a "books" table which holds the info on every book (including price and genre ID). I need to get the name of the genre which has the highest average price. I suppose I first need to group the prices by genre and then retrieve the name of the highest. I know that I can get a genre-versus-cost result with the following:

        select b.genre, round(avg(b.price),2) as cost
        from books b
        group by b.genre;

    My question is: to get the genre with the highest average price from that result, do I have to write this?

        select aux.genre
        from ( select b.genre, round(avg(b.price),2) as cost
               from books b
               group by b.genre ) aux
        where aux.cost = (select max(aux.cost)
                          from ( select b.genre, round(avg(b.price),2) as cost
                                 from books b
                                 group by b.genre ) aux);

    Is this bad practice, or isn't there another way? I get the correct result, but I'm not comfortable with writing the same selection twice. I'm not using PL/SQL, so I can't use variables or anything like that. Any help will be appreciated. Thanks in advance!
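
    If the database supports window functions (Oracle, SQL Server 2005+, PostgreSQL and others do), the aggregate only has to be written once; a sketch:

        select genre, cost
        from ( select b.genre,
                      round(avg(b.price), 2) as cost,
                      rank() over (order by avg(b.price) desc) as rnk
               from books b
               group by b.genre ) ranked
        where rnk = 1;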

    Read the article

  • Correct way to generate order numbers in SQL Server

    - by Anton Gogolev
    This question certainly applies to a much broader scope, but here it is. I have a basic ecommerce app where users can, naturally enough, place orders. Said orders need to have a unique number, which I'm trying to generate right now. Each order is vendor-specific. Basically, I have an OrderNumberInfo (VendorID, OrderNumber) table. Now, whenever a customer places an order I need to increment OrderNumber for a particular vendor and return that value. Naturally, I don't want other processes to interfere with me, so I need to exclusively lock this row somehow:

        begin transaction
        declare @n int
        select @n = OrderNumber
        from OrderNumberInfo
        where VendorID = @vendorID
        update OrderNumberInfo
        set OrderNumber = @n + 1
        where OrderNumber = @n and VendorID = @vendorID
        commit transaction

    Now, I've read about select ... with (updlock rowlock), pessimistic locking, etc., but just cannot fit all this into a coherent picture: How do these hints play with SQL Server 2008's snapshot isolation? Do they perform row-level, page-level or even table-level locks? How does this tolerate multiple users trying to generate numbers for a single vendor? What isolation levels are appropriate here? And generally, what is the way to do such things?
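
    One way to sidestep the hint and isolation-level questions is to do the read and the increment in a single atomic statement; a sketch, assuming SQL Server 2005 or later for the OUTPUT clause:

        -- Increment and return the new number in one atomic statement; the row
        -- is locked for the duration of the UPDATE, so concurrent callers for the
        -- same vendor simply queue up and each gets a distinct value.
        DECLARE @numbers TABLE (OrderNumber int);

        UPDATE OrderNumberInfo
        SET OrderNumber = OrderNumber + 1
        OUTPUT inserted.OrderNumber INTO @numbers
        WHERE VendorID = @vendorID;

        SELECT OrderNumber FROM @numbers;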

    Read the article

  • DataGridView not displaying a row after it is created

    - by joslinm
    Hi. I'm using Visual Studio 10 and I just created a database using SQL Server CE. Within it, I made a table CSLDataTable, and that automatically created a CSLDataSet & CSLDataTableTableAdapter. Three variables were automatically created in my MainWindow.cs class:

        cSLDataSet
        cSLDataTableTableAdapter
        cSLDataTableBindingSource

    I added a DataGridView to my form called dataGridView, with data source cSLDataTableBindingSource. In MainWindow(), I tried adding a row as a test:

        public MainWindow()
        {
            InitializeComponent();
            CSLDataSet.CSLDataTableRow row = cSLDataSet.CSLDataTable.NewCSLDataTableRow();
            row.File_ = "file";
            row.Artist = "artist11";
            row.Album = "album";
            row.Save_Structure = "save";
            row.Sent = false;
            row.Error = true;
            row.Release_Format = "release";
            row.Bit_Rate = "bitrate..";
            row.Year = "year";
            row.Physical_Format = "format";
            row.Bit_Format = "bitformat";
            row.File_Path = "File!!path";
            row.Site_Origin = "what";
            cSLDataSet.CSLDataTable.Rows.Add(row);
            cSLDataSet.AcceptChanges();
            cSLDataTableTableAdapter.Fill(cSLDataSet.CSLDataTable);
            cSLDataTableTableAdapter.Update(cSLDataSet);
            dataGridView.Refresh();
            dataGridView.Update();
        }

    Regarding the DataSet methods I call at the end, I had been trying to find a "correct" way to interact with the adapter, dataset, and datatable so that the row actually shows, but to no avail. I'm rather new to using a SQL Server CE database; I read a lot of the MSDN pages and thought I was on the right track, but I've had no luck. The DataGridView shows the headers correctly, but the new row does not show up.

    Read the article

  • Optimal two variable linear regression SQL statement (censoring outliers)

    - by Dave Jarvis
    Problem: I am looking to apply the y = mx + b equation (where m is SLOPE and b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code below. The values from the (MySQL) query are:

        SLOPE     = 0.0276653965651912
        INTERCEPT = -57.2338357550468

    SQL code:

        SELECT
          ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
            (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
          ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
            (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
        FROM (
          SELECT D.AMOUNT, Y.YEAR
          FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
          WHERE
            -- For a specific city ... --
            C.ID = 8590 AND
            -- Find all the stations within a 15 unit radius ... --
            SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < 15 AND
            -- Gather all known years for that station ... --
            S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
            -- The data before 1900 is shaky; insufficient after 2009. --
            Y.YEAR BETWEEN 1900 AND 2009 AND
            -- Filtered by all known months ... --
            M.YEAR_REF_ID = Y.ID AND
            -- Whittled down by category ... --
            M.CATEGORY_ID = '001' AND
            -- Into the valid daily climate data. --
            M.ID = D.MONTH_REF_ID AND
            D.DAILY_FLAG_ID <> 'M'
          GROUP BY Y.YEAR
          ORDER BY Y.YEAR
        ) t

    Data: the data is visualized in a chart (with five outliers highlighted).

    Questions: How do I return the y value against all rows without repeating the same query to collect and collate the data? That is, how do I "reuse" the list of t values? How would you change the query to eliminate outliers (at an 85% confidence interval)? The following results (to calculate the start and end points of the line) appear incorrect; why are the results off by ~10 degrees (e.g., outliers skewing the data)?

        (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228
        (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406

    I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65). Thank you!
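
    For the first question only (reusing the list of t values), a sketch using a temporary table and user variables, assuming a MySQL version without CTEs; it computes the coefficients once and then applies y = mx + b to every year without re-running the big join. Outlier elimination is not addressed here.

        -- Collect the per-year data once into a temporary table.
        CREATE TEMPORARY TABLE year_amount AS
          SELECT D.AMOUNT, Y.YEAR
          FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
          WHERE C.ID = 8590
            AND SQRT(POW(C.LATITUDE - S.LATITUDE, 2) + POW(C.LONGITUDE - S.LONGITUDE, 2)) < 15
            AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID
            AND Y.YEAR BETWEEN 1900 AND 2009
            AND M.YEAR_REF_ID = Y.ID
            AND M.CATEGORY_ID = '001'
            AND M.ID = D.MONTH_REF_ID
            AND D.DAILY_FLAG_ID <> 'M'
          GROUP BY Y.YEAR;

        -- Compute the coefficients once into user variables ...
        SELECT
          ((SUM(YEAR) * SUM(AMOUNT)) - (COUNT(1) * SUM(YEAR * AMOUNT))) /
            (POW(SUM(YEAR), 2) - COUNT(1) * SUM(POW(YEAR, 2))),
          ((SUM(YEAR) * SUM(YEAR * AMOUNT)) - (SUM(AMOUNT) * SUM(POW(YEAR, 2)))) /
            (POW(SUM(YEAR), 2) - COUNT(1) * SUM(POW(YEAR, 2)))
        INTO @slope, @intercept
        FROM year_amount;

        -- ... and apply y = mx + b to every year without re-running the big join.
        SELECT YEAR, AMOUNT, (YEAR * @slope) + @intercept AS FITTED_AMOUNT
        FROM year_amount;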

    Read the article

  • SQL: Need help with query construction.

    - by Geeknidas
    Hi guys. I am relatively new to SQL and I need some help with basic query construction. Problem: retrieve the number of orders and the customer ID from a table, based on a set of parameters. I want to write a query that figures out the number of orders under each customer (column: CustomerId), along with the CustomerId, where the number of orders is greater than or equal to 10 and the status of the order is Active. Moreover, I also want to know the first transaction date of an order belonging to each CustomerId.

    Table description: product_orders

        Orderid  CustomerId  Transaction_date  Status
        -------  ----------  ----------------  -------
        1        23          2-2-10            Active
        2        22          2-3-10            Active
        3        23          2-3-10            Deleted
        4        23          2-3-10            Active

    Query that I have written:

        select count(*), customerid
        from product_orders
        where status = 'Active'
        GROUP BY customerid
        ORDER BY customerid;

    The above statement gives me the count of all orders under a customer ID, but does not enforce the condition of at least 10 orders. I also do not know how to display the first transaction date along with the orders under a CustomerId (its status could be Active or Deleted, it doesn't matter). The ideal result should look like:

        Total Orders  CustomerID  Transaction Date (the first transaction date)
        ------------  ----------  ----------------
        11            23          1-2-10

    Thanks in advance. I hope you guys would be kind enough to stop by and help me out. Cheers, Leonidas
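
    A sketch built directly on the table above: HAVING enforces the minimum count of Active orders, while MIN() over all of the customer's rows gives the first transaction date regardless of status:

        -- Count only Active orders per customer, but take the first transaction
        -- date across all of that customer's orders (any status).
        SELECT COUNT(CASE WHEN Status = 'Active' THEN 1 END) AS total_orders,
               CustomerId,
               MIN(Transaction_date)                         AS first_transaction_date
        FROM product_orders
        GROUP BY CustomerId
        HAVING COUNT(CASE WHEN Status = 'Active' THEN 1 END) >= 10
        ORDER BY CustomerId;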

    Read the article

  • Efficient way to update SQL 'relationship' table

    - by AmbroseChapel
    Say I have three properly normalised tables: one of people, one of qualifications, and one mapping people to qualifications.

        People:
        id | Name
        ---+------
        1  | Alice
        2  | Bob

        Degrees:
        id | Name
        ---+------
        1  | PhD
        2  | MA

        People-to-degrees:
        person_id | degree_id
        ----------+----------
        1         | 2          # Alice has an MA
        2         | 1          # Bob has a PhD

    So then I have to update this mapping via my web interface. (I made a mistake: Bob has a BA, not a PhD, and Alice just got her B Eng.) There are four possible states of these many-to-many relationship mappings: was true before and should now be false; was false before and should now be true; was true before and should remain true; was false before and should remain false. What I don't want to do is read the values from four checkboxes and then hit the database four times to say "Did Bob have a BA before? Well, he does now", "Did Bob have a PhD before? Because he doesn't any more", and so on. How do other people address this issue? I'm curious to see if someone else arrives at the same solution I did.
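
    One common pattern is to treat the submitted checkbox set as the new truth and synchronise it in one transaction: delete what is no longer ticked, insert what is newly ticked, leave the rest alone. In this sketch @person_id and the selected_degrees table are stand-ins for whatever the web layer passes in:

        -- Assume the checked degree ids arrive in a (temporary) table selected_degrees(degree_id).
        BEGIN TRANSACTION;

        -- Remove mappings the user has un-ticked ...
        DELETE FROM people_to_degrees
        WHERE person_id = @person_id
          AND degree_id NOT IN (SELECT degree_id FROM selected_degrees);

        -- ... and add the ones that were ticked but are not stored yet.
        INSERT INTO people_to_degrees (person_id, degree_id)
        SELECT @person_id, s.degree_id
        FROM selected_degrees AS s
        WHERE NOT EXISTS (
            SELECT 1 FROM people_to_degrees AS p
            WHERE p.person_id = @person_id AND p.degree_id = s.degree_id
        );

        COMMIT;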

    Read the article
