Search Results

Search found 38660 results on 1547 pages for 'sql index'.


  • TSQL 'Invalid column name' error on value of sproc parameter

    - by Daria
    Here's my code: DECLARE @SQL varchar(600) SET @SQL = 'SELECT CategoryID, SubCategoryID, ReportedNumber FROM tblStatistics WHERE UnitCode = ' + @unitCode + ' AND FiscYear = ' + @currYEAR EXEC (@SQL) When I run this sproc with unitCode = 'COB' and currYEAR = '10', I get the following error: Invalid column name 'COB'. Does anyone know why? Thanks!
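
    The error happens because the concatenated values are pasted into the SQL text unquoted, so 'COB' is parsed as a column name rather than a string literal. A minimal C# sketch of the same query issued with parameters instead of concatenation, which sidesteps the quoting problem entirely (table and column names come from the post; the connection string handling and parameter sizes are assumptions):

      using System.Data;
      using System.Data.SqlClient;

      class StatisticsQuery
      {
          static DataTable GetStatistics(string connectionString, string unitCode, string fiscYear)
          {
              const string sql =
                  "SELECT CategoryID, SubCategoryID, ReportedNumber " +
                  "FROM tblStatistics " +
                  "WHERE UnitCode = @unitCode AND FiscYear = @fiscYear";

              using (var connection = new SqlConnection(connectionString))
              using (var command = new SqlCommand(sql, connection))
              {
                  // Parameters are sent as typed values, so no quoting is needed
                  // and 'COB' can never be read as a column name.
                  command.Parameters.Add("@unitCode", SqlDbType.VarChar, 10).Value = unitCode;
                  command.Parameters.Add("@fiscYear", SqlDbType.VarChar, 4).Value = fiscYear;

                  var table = new DataTable();
                  using (var adapter = new SqlDataAdapter(command))
                  {
                      adapter.Fill(table);   // Fill opens and closes the connection itself
                  }
                  return table;
              }
          }
      }

    If the dynamic SQL has to stay in T-SQL, the equivalent fix is to pass the values as parameters to sp_executesql rather than concatenating them into @SQL.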

    Read the article

  • Converting a Linq expression tree that relies on SqlMethods.Like() for use with the Entity Framework

    - by JohnnyO
    I recently switched from using Linq to Sql to the Entity Framework. One of the things that I've been really struggling with is getting a general purpose IQueryable extension method that was built for Linq to Sql to work with the Entity Framework. This extension method has a dependency on the Like() method of SqlMethods, which is Linq to Sql specific. What I really like about this extension method is that it allows me to dynamically construct a Sql Like statement on any object at runtime, by simply passing in a property name (as string) and a query clause (also as string). Such an extension method is very convenient for using grids like flexigrid or jqgrid. Here is the Linq to Sql version (taken from this tutorial: http://www.codeproject.com/KB/aspnet/MVCFlexigrid.aspx): public static IQueryable<T> Like<T>(this IQueryable<T> source, string propertyName, string keyword) { var type = typeof(T); var property = type.GetProperty(propertyName); var parameter = Expression.Parameter(type, "p"); var propertyAccess = Expression.MakeMemberAccess(parameter, property); var constant = Expression.Constant("%" + keyword + "%"); var like = typeof(SqlMethods).GetMethod("Like", new Type[] { typeof(string), typeof(string) }); MethodCallExpression methodExp = Expression.Call(null, like, propertyAccess, constant); Expression<Func<T, bool>> lambda = Expression.Lambda<Func<T, bool>>(methodExp, parameter); return source.Where(lambda); } With this extension method, I can simply do the following: someList.Like("FirstName", "mike"); or anotherList.Like("ProductName", "widget"); Is there an equivalent way to do this with Entity Framework? Thanks in advance.
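
    One widely used workaround, sketched below, is to build the expression around String.Contains instead of SqlMethods.Like: both LINQ to SQL and the Entity Framework translate Contains on a string property into a '%keyword%' style pattern match, so the extension keeps the same shape while losing the LINQ to SQL dependency. (It only covers the contains case; patterns with explicit wildcards would need a different approach.)

      using System;
      using System.Linq;
      using System.Linq.Expressions;

      public static class QueryableExtensions
      {
          // Provider-agnostic variant of the Like() helper shown above.
          public static IQueryable<T> Like<T>(this IQueryable<T> source, string propertyName, string keyword)
          {
              var type = typeof(T);
              var property = type.GetProperty(propertyName);
              var parameter = Expression.Parameter(type, "p");
              var propertyAccess = Expression.MakeMemberAccess(parameter, property);
              var constant = Expression.Constant(keyword);

              // p.Property.Contains(keyword) -- translated to a LIKE/CHARINDEX match by the provider.
              var contains = typeof(string).GetMethod("Contains", new[] { typeof(string) });
              var body = Expression.Call(propertyAccess, contains, constant);

              var lambda = Expression.Lambda<Func<T, bool>>(body, parameter);
              return source.Where(lambda);
          }
      }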

    Read the article

  • SQL Server backward compatibility in Entity Framework?

    - by shake
    Is there any backward compatibility in the Entity Framework between SQL Server 2008 and 2005? It seems the framework forces you to use the same provider for all the .edmx files in a solution. If you use the 2008 provider, data types like DateTime2 and functions like SysDateTime are emitted by the framework into the underlying SQL query, which makes the generated queries fail against a SQL Server 2005 instance. Is there any way around this?

    Read the article

  • Need help with ConnectionString value in C#

    - by SzamDev
    Hi. I have an idea and I want to apply it to my application (C# .NET). When we connect to a DB (MS SQL Server 2008) in VS 2008, the connection string is saved in the application settings and it's a static value (no one can edit it unless you edit it inside VS 2008). I want a way to let my application search for the MS SQL Server instance, save it to the application settings, and use it to connect to my DB programmatically. When my application starts, the first thing to do is to check the connection string: that it is valid, not empty, and that a test connection to MS SQL Server succeeds. If there is a problem, I am thinking of showing a Windows Form to let the user enter some data like the username and password for MS SQL Server 2008. Is there any way to do it?
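
    A small sketch of the start-up check described above: it treats the stored connection string as usable only if it is non-empty, well formed, and a connection can actually be opened, and otherwise falls back to asking the user. The class and method names are illustrative, not an existing API.

      using System;
      using System.Data.SqlClient;

      static class ConnectionChecker
      {
          // True when the saved connection string is non-empty, parses correctly
          // and a connection to the server can actually be opened.
          public static bool IsUsable(string connectionString)
          {
              if (string.IsNullOrEmpty(connectionString))
                  return false;

              try
              {
                  var builder = new SqlConnectionStringBuilder(connectionString); // throws on a malformed string
                  using (var connection = new SqlConnection(builder.ConnectionString))
                  {
                      connection.Open();   // throws if the server or credentials are wrong
                      return true;
                  }
              }
              catch (Exception)
              {
                  return false;   // caller shows the login form and rebuilds the string from user input
              }
          }
      }

    For discovering SQL Server instances on the network, System.Data.Sql.SqlDataSourceEnumerator.Instance.GetDataSources() returns the visible instances (it depends on the SQL Server Browser service being reachable), and the chosen server plus the user's credentials can then be written back into the application settings.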

    Read the article

  • Oracle Data Pump import to a sql file error: ORA-31655 no data or metadata objects

    - by Francisco Quiñones
    Hello, I'm using Data Pump to export/import data; one requirement is to import the data to a sql file. The OS is Windows. I made the following export: expdp system/password directory=dpump_dir dumpfile=tablesdump.dmp content=DATA_ONLY tables=user.tablename and it works, I can see the file TABLESDUMP.DMP in the directory path. Then when I tried to import it to a sql file: impdp system/password directory=dpump_dir dumpfile=tablesdump.dmp sqlfile=tables_export.sql the log shows: ..... ORA-31655 no data or metadata objects selected for job ..... and the sql file is created empty in the directory path. I'm not a DBA, I'm a Java developer. Can you help me? Thanks

    Read the article

  • Unless I start Management Studio with "Run as administrator", I get a Login Failure (Error 18456)

    - by MedicineMan
    I cannot connect to the SQL Server instance if I do not start Management Studio as an administrator. I am running Windows 7, SQL Server 2008, and Management Studio 10.0. If I run as a normal user, the error I get is: Cannot connect to .. Additional information: login failed for user 'COMPUTERNAME\MyUserName'. (Microsoft SQL Server, Error 18456) For the server name I have tried the following: . localhost COMPUTERNAME

    Read the article

  • Which DB? Linq to SQL classes

    - by gadirzade
    Hi. I want to develop a multi-user accounting management application in C#, and I want to use LINQ to SQL classes. LINQ to SQL only supports SQL Server/Compact; however, it is possible the SQLite people have written their own LINQ provider, given the name of the assembly. I have to use a DB that is free. Which DBMS would you recommend to me?

    Read the article

  • Paging, sorting and filtering in a stored procedure (SQL Server)

    - by Fruitbat
    I was looking at different ways of writing a stored procedure to return a "page" of data. This was for use with the ASP.NET ObjectDataSource, but it could be considered a more general problem. The requirement is to return a subset of the data based on the usual paging parameters, startPageIndex and maximumRows, but also a sortBy parameter to allow the data to be sorted. Also there are some parameters passed in to filter the data on various conditions. One common way to do this seems to be something like this: [Method 1] ;WITH stuff AS ( SELECT CASE WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name) WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC) WHEN @SortBy = ... ELSE ROW_NUMBER() OVER (ORDER BY whatever) END AS Row, ., ., ., FROM Table1 INNER JOIN Table2 ... LEFT JOIN Table3 ... WHERE ... (lots of things to check) ) SELECT * FROM stuff WHERE (Row > @startRowIndex) AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0) ORDER BY Row One problem with this is that it doesn't give the total count and generally we need another stored procedure for that. This second stored procedure has to replicate the parameter list and the complex WHERE clause. Not nice. One solution is to append an extra column to the final select list, (SELECT COUNT(*) FROM stuff) AS TotalRows. This gives us the total but repeats it for every row in the result set, which is not ideal. [Method 2] An interesting alternative is given here (http://www.4guysfromrolla.com/articles/032206-1.aspx) using dynamic SQL. He reckons that the performance is better because the CASE statement in the first solution drags things down. Fair enough, and this solution makes it easy to get the totalRows and slap it into an output parameter. But I hate coding dynamic SQL. All that 'bit of SQL ' + STR(@parm1) + ' bit more SQL' gubbins. [Method 3] The only way I can find to get what I want, without repeating code which would have to be synchronised, and keeping things reasonably readable, is to go back to the "old way" of using a table variable: DECLARE @stuff TABLE (Row INT, ...) INSERT INTO @stuff SELECT CASE WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name) WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC) WHEN @SortBy = ... ELSE ROW_NUMBER() OVER (ORDER BY whatever) END AS Row, ., ., ., FROM Table1 INNER JOIN Table2 ... LEFT JOIN Table3 ... WHERE ... (lots of things to check) SELECT * FROM @stuff WHERE (Row > @startRowIndex) AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0) ORDER BY Row (Or a similar method using an IDENTITY column on the table variable). Here I can just add a SELECT COUNT on the table variable to get the totalRows and put it into an output parameter. I did some tests and with a fairly simple version of the query (no sortBy and no filter), method 1 seems to come out on top (almost twice as quick as the other 2). Then I tested with something closer to the complexity I would probably need, with the SQL in stored procedures. With this, method 1 takes nearly twice as long as the other 2 methods. Which seems strange. Is there any good reason why I shouldn't spurn CTEs and stick with method 3? UPDATE - 15 March 2012 I tried adapting Method 1 to dump the page from the CTE into a temporary table so that I could extract the TotalRows and then select just the relevant columns for the resultset. This seemed to add significantly to the time (more than I expected).
    I should add that I'm running this on a laptop with SQL Server Express 2008 (all that I have available), but the comparison should still be valid. I looked again at the dynamic SQL method. It turns out I wasn't really doing it properly (just concatenating strings together). I set it up as in the documentation for sp_executesql (with a parameter description string and parameter list) and it's much more readable. Also, this method runs fastest in my environment. Why that should be still baffles me, but I guess the answer is hinted at in Hogan's comment.
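
    For what it's worth, whichever method wins, the consuming code looks the same. A hedged C# sketch of calling such a procedure from the ObjectDataSource side, with the total row count coming back in an output parameter (the procedure and parameter names here are assumptions chosen to match the shape discussed above):

      using System.Data;
      using System.Data.SqlClient;

      static class PagedData
      {
          // Fetches one page plus the total row count from a paging stored procedure.
          public static DataTable GetPage(string connectionString, int startRowIndex, int maximumRows,
                                          string sortBy, out int totalRows)
          {
              using (var connection = new SqlConnection(connectionString))
              using (var command = new SqlCommand("dbo.usp_GetStuffPage", connection))
              {
                  command.CommandType = CommandType.StoredProcedure;
                  command.Parameters.AddWithValue("@startRowIndex", startRowIndex);
                  command.Parameters.AddWithValue("@maximumRows", maximumRows);
                  command.Parameters.AddWithValue("@SortBy", sortBy);

                  var total = command.Parameters.Add("@totalRows", SqlDbType.Int);
                  total.Direction = ParameterDirection.Output;

                  var page = new DataTable();
                  using (var adapter = new SqlDataAdapter(command))
                  {
                      adapter.Fill(page);   // output parameters are populated once Fill has completed
                  }
                  totalRows = (int)total.Value;
                  return page;
              }
          }
      }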

    Read the article

  • Complex derived attributes in Django models

    - by rabidpebble
    What I want to do is implement submission scoring for a site with users voting on the content, much like in e.g. reddit (see the 'hot' function in http://code.reddit.com/browser/sql/functions.sql). My submission model currently keeps track of up and down vote totals. Currently, when a user votes I create and save a related Vote object and then use F() expressions to update the Submission object's voting totals. The problem is that I want to update the score for the submission at the same time, but F() expressions are limited to only simple operations (it's missing support for log(), date_part(), sign() etc.) From my limited experience with Django I can see 4 options here: extend F() somehow (haven't looked at the code yet) to support the missing SQL functions; this is my preferred option and seems to fit within the Django framework the best define a scoring function (much like reddit's 'hot' function) in my database, and have Django use the value of that function for the value of the score field; as far as I can tell, #2 is not possible wrap my two step voting process in a suitably isolated transaction so that I can calculate the voting totals in Python and then update the Submission's voting totals without fear that another vote against the submission could be added/changed in the meantime; I'm hesitant to take this route because it seems overly complex - what is a "suitably isolated transaction" in this case anyway? use raw SQL; I would prefer to avoid this entirely -- what's the point of an ORM if I have to revert to SQL for such a common use case as this! (Note that this coming from somebody who loves sprocs, but is using Django for ease of development.) Before I embark on this mission to extend F() (which I'm not sure is even possible), am I about to reinvent the wheel? Is there a more standard way to do this? It seems like such a common use case and yet in an hour of searching I have yet to find a common solution...

    Read the article

  • storing HTML5 webdatabase table data to sql server periodically

    - by DotnetSparrow
    I want to read data from an HTML5 web database and post it to SQL Server using VB.NET server side. I have used the following link as reference: http://www.html5rocks.com/en/tutorials/webdatabase/todo/ I have created a web database, created a table and stored some data in it. Now I want to store this data to SQL Server. How do I get the data from the HTML5 web database and post it to SQL Server each hour (using some JS timer)? Please suggest a solution.
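
    One possible shape for the server half, sketched here in C# rather than VB.NET: a plain HTTP endpoint that the page posts each row to from its JavaScript timer (setInterval plus a web database transaction that reads the rows and sends them with XMLHttpRequest). The handler name, table and column names below are assumptions for illustration only.

      using System.Data.SqlClient;
      using System.Web;

      // Hypothetical endpoint (e.g. todosync.ashx) receiving one row per POST.
      public class TodoSyncHandler : IHttpHandler
      {
          public bool IsReusable { get { return true; } }

          public void ProcessRequest(HttpContext context)
          {
              string id = context.Request.Form["id"];
              string text = context.Request.Form["text"];
              string connectionString = "...";   // placeholder

              // Insert the row only if it has not been synced before.
              const string sql =
                  "IF NOT EXISTS (SELECT 1 FROM dbo.Todo WHERE ClientId = @id) " +
                  "INSERT INTO dbo.Todo (ClientId, TodoText) VALUES (@id, @text)";

              using (var connection = new SqlConnection(connectionString))
              using (var command = new SqlCommand(sql, connection))
              {
                  command.Parameters.AddWithValue("@id", id);
                  command.Parameters.AddWithValue("@text", text);
                  connection.Open();
                  command.ExecuteNonQuery();
              }
              context.Response.StatusCode = 200;
          }
      }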

    Read the article

  • MS Access 2003 - Save button enabling on form open on different tabs

    - by Justin
    I have a tab control on a form, and a couple of different tabs have save buttons on them. Once the user saves data (via SQL statements in VBA), I set .Enabled = False so that they cannot use this button again until moving to a brand new record (which is a button click on the overall form). So when my form opens I was going to reference a sub that enables all these save buttons, because the open event would mean a new record. Though I get an error that says it either does not exist, or is closed. Any ideas? Thanks EDIT: Sub Example() 'error handling Dim db As DAO.Database Dim rs As DAO.Recordset Dim sql As String sql = "INSERT INTO tblMain (Name, Address, CITY) VALUES (" If Not IsNull(Me.Name) Then sql = sql & """" & Me.Name & """," Else sql = sql & " NULL," End If If Not IsNull(Me.Address) Then sql = sql & " """ & Me.Address & """," Else sql = sql & " NULL," End If If Not IsNull(Me.City) Then sql = sql & " """ & Me.City & """)" Else sql = sql & " NULL)" End If 'Debug.Print (sql) Set db = CurrentDb db.Execute (sql) MsgBox "Changes were successfully saved" Me.MyTabCtl.Pages.Item("SecondPage").SetFocus Me.cmdSaveInfo.Enabled = False and then the cmdSave needs to get re-enabled on a new record (which, by the way, this form is unbound), so it all happens when the form is re-opened. I tried this: Sub Form_Open() Me.cmdSaveInfo.Enabled = True End Sub and this is where I get the error stated above. So this is also not the tab that has focus when the form opens. Is that why I get this error? Can I not enable or disable a control when the tab is not showing?

    Read the article

  • Error with varchar(max) column when using net.sourceforge.jtds.jdbc.Driver

    - by Rihan Meij
    Hi, I have an MS SQL database running (MS SQL 2005) and am connecting to it via net.sourceforge.jtds.jdbc.Driver. The query works fine for all the columns except one that is a varchar(max). Any ideas how to get around this issue? I am using the JDBC driver to run a data index into a SOLR implementation. (I do not control the database, so the first-prize solution would be one where I can tweak the SQL command to get the desired results.) Thanks

    Read the article

  • Restoring multiple database backups in a transaction

    - by Raghu Dodda
    I wrote a stored procedure that restores a set of database backups. It takes two parameters - a source directory and a restore directory. The procedure looks for all .bak files in the source directory (recursively) and restores all the databases. The stored procedure works as expected, but it has one issue - if I uncomment the try-catch statements, the procedure terminates with the following error: error_number = 3013 error_severity = 16 error_state = 1 error_message = DATABASE is terminating abnormally. The weird part is sometimes (it is not consistent) the restore is done even if the error occurs. The procedure: create proc usp_restore_databases ( @source_directory varchar(1000), @restore_directory varchar(1000) ) as begin declare @number_of_backup_files int -- begin transaction -- begin try -- step 0: Initial validation if(right(@source_directory, 1) <> '\') set @source_directory = @source_directory + '\' if(right(@restore_directory, 1) <> '\') set @restore_directory = @restore_directory + '\' -- step 1: Put all the backup files in the specified directory in a table -- declare @backup_files table ( file_path varchar(1000)) declare @dos_command varchar(1000) set @dos_command = 'dir ' + '"' + @source_directory + '*.bak" /s/b' /* DEBUG */ print @dos_command insert into @backup_files(file_path) exec xp_cmdshell @dos_command delete from @backup_files where file_path IS NULL select @number_of_backup_files = count(1) from @backup_files /* DEBUG */ select * from @backup_files /* DEBUG */ print @number_of_backup_files -- step 2: restore each backup file -- declare backup_file_cursor cursor for select file_path from @backup_files open backup_file_cursor declare @index int; set @index = 0 while(@index < @number_of_backup_files) begin declare @backup_file_path varchar(1000) fetch next from backup_file_cursor into @backup_file_path /* DEBUG */ print @backup_file_path -- step 2a: parse the full backup file name to get the DB file name.
declare @db_name varchar(100) set @db_name = right(@backup_file_path, charindex('\', reverse(@backup_file_path)) -1) -- still has the .bak extension /* DEBUG */ print @db_name set @db_name = left(@db_name, charindex('.', @db_name) -1) /* DEBUG */ print @db_name set @db_name = lower(@db_name) /* DEBUG */ print @db_name -- step 2b: find out the logical names of the mdf and ldf files declare @mdf_logical_name varchar(100), @ldf_logical_name varchar(100) declare @backup_file_contents table ( LogicalName nvarchar(128), PhysicalName nvarchar(260), [Type] char(1), FileGroupName nvarchar(128), [Size] numeric(20,0), [MaxSize] numeric(20,0), FileID bigint, CreateLSN numeric(25,0), DropLSN numeric(25,0) NULL, UniqueID uniqueidentifier, ReadOnlyLSN numeric(25,0) NULL, ReadWriteLSN numeric(25,0) NULL, BackupSizeInBytes bigint, SourceBlockSize int, FileGroupID int, LogGroupGUID uniqueidentifier NULL, DifferentialBaseLSN numeric(25,0) NULL, DifferentialBaseGUID uniqueidentifier, IsReadOnly bit, IsPresent bit ) insert into @backup_file_contents exec ('restore filelistonly from disk=' + '''' + @backup_file_path + '''') select @mdf_logical_name = LogicalName from @backup_file_contents where [Type] = 'D' select @ldf_logical_name = LogicalName from @backup_file_contents where [Type] = 'L' /* DEBUG */ print @mdf_logical_name + ', ' + @ldf_logical_name -- step 2c: restore declare @mdf_file_name varchar(1000), @ldf_file_name varchar(1000) set @mdf_file_name = @restore_directory + @db_name + '.mdf' set @ldf_file_name = @restore_directory + @db_name + '.ldf' /* DEBUG */ print 'mdf_logical_name = ' + @mdf_logical_name + '|' + 'ldf_logical_name = ' + @ldf_logical_name + '|' + 'db_name = ' + @db_name + '|' + 'backup_file_path = ' + @backup_file_path + '|' + 'restore_directory = ' + @restore_directory + '|' + 'mdf_file_name = ' + @mdf_file_name + '|' + 'ldf_file_name = ' + @ldf_file_name restore database @db_name from disk = @backup_file_path with move @mdf_logical_name to @mdf_file_name, move @ldf_logical_name to @ldf_file_name -- step 2d: iterate set @index = @index + 1 end close backup_file_cursor deallocate backup_file_cursor -- end try -- begin catch -- print error_message() -- rollback transaction -- return -- end catch -- -- commit transaction end Does anybody have any ideas why this might be happening? Another question: is the transaction code useful ? i.e., if there are 2 databases to be restored, will SQL Server undo the restore of one database if the second restore fails?
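
    On the transaction question: RESTORE DATABASE is not allowed inside a user transaction, which is what the 3013 error from the uncommented begin transaction / begin try block is reporting, and SQL Server will not undo an already completed restore if a later one fails. A hedged C# sketch of the same loop done client-side, restoring each backup independently and simply collecting failures (paths and naming are simplified compared with the procedure, and the connection string is assumed to point at master):

      using System;
      using System.Collections.Generic;
      using System.Data.SqlClient;
      using System.IO;

      static class BackupRestorer
      {
          // Restores every .bak under sourceDirectory, one database at a time.
          public static IList<string> RestoreAll(string connectionString, string sourceDirectory, string restoreDirectory)
          {
              var failures = new List<string>();

              using (var connection = new SqlConnection(connectionString))
              {
                  connection.Open();

                  foreach (var backupFile in Directory.GetFiles(sourceDirectory, "*.bak", SearchOption.AllDirectories))
                  {
                      var dbName = Path.GetFileNameWithoutExtension(backupFile).ToLowerInvariant();
                      try
                      {
                          string mdfLogical = null, ldfLogical = null;

                          // Same idea as the RESTORE FILELISTONLY step in the procedure above.
                          var fileListSql = "RESTORE FILELISTONLY FROM DISK = N'" + backupFile.Replace("'", "''") + "'";
                          using (var fileList = new SqlCommand(fileListSql, connection))
                          using (var reader = fileList.ExecuteReader())
                          {
                              while (reader.Read())
                              {
                                  if ((string)reader["Type"] == "D") mdfLogical = (string)reader["LogicalName"];
                                  if ((string)reader["Type"] == "L") ldfLogical = (string)reader["LogicalName"];
                              }
                          }

                          var restoreSql =
                              "RESTORE DATABASE [" + dbName.Replace("]", "]]") + "]" +
                              " FROM DISK = N'" + backupFile.Replace("'", "''") + "' WITH REPLACE," +
                              " MOVE N'" + mdfLogical.Replace("'", "''") + "' TO N'" + Path.Combine(restoreDirectory, dbName + ".mdf").Replace("'", "''") + "'," +
                              " MOVE N'" + ldfLogical.Replace("'", "''") + "' TO N'" + Path.Combine(restoreDirectory, dbName + ".ldf").Replace("'", "''") + "'";

                          // No surrounding transaction: each restore stands or fails on its own.
                          using (var restore = new SqlCommand(restoreSql, connection) { CommandTimeout = 0 })
                          {
                              restore.ExecuteNonQuery();
                          }
                      }
                      catch (SqlException ex)
                      {
                          failures.Add(dbName + ": " + ex.Message);
                      }
                  }
              }
              return failures;
          }
      }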

    Read the article

  • Full text searching in SQL Server 2008 Express Advanced

    - by Iain Macleod
    Hi, I have recently installed SQL Server 2008 Express Edition with Advanced Services on XP Pro but am having trouble getting full-text searching to work with a restored database. The database was originally created in SQL Server 2005. When I call a stored proc that uses the full-text index, I get the following error: Full-Text Search is not installed, or a full-text component cannot be loaded. This is my db version: Microsoft SQL Server 2008 (RTM) - 10.0.1600.22 (Intel X86) Jul 9 2008 14:43:34 Copyright (c) 1988-2008 Microsoft Corporation Express Edition with Advanced Services on Windows NT 5.1 (Build 2600: Service Pack 3) When I run: SELECT DATABASEPROPERTY('DBNAME','ISFULLTEXTENABLED') I get: 1 Also, when I look in the advanced properties for the db server in Management Studio I see both the "Default Full-Text Language" and "Full-Text Upgrade Option" properties. However, when I go to SQL Server Configuration Manager I don't see the "MSSQLFDLauncher" service. Does anyone know how to get this working? Cheers, Iain
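
    The database-level ISFULLTEXTENABLED flag can be 1 even when the engine component itself is missing, which is what the error message and the absent MSSQLFDLauncher service suggest here. A small sketch, assuming you want to verify this from code, that asks the instance whether the Full-Text component was actually installed; if it returns false, the Full-Text Search feature has to be added through SQL Server 2008 setup.

      using System.Data.SqlClient;

      static class FullTextCheck
      {
          // True when the Full-Text component is installed on the instance.
          public static bool IsFullTextInstalled(string connectionString)
          {
              using (var connection = new SqlConnection(connectionString))
              using (var command = new SqlCommand(
                  "SELECT CAST(SERVERPROPERTY('IsFullTextInstalled') AS int)", connection))
              {
                  connection.Open();
                  return (int)command.ExecuteScalar() == 1;
              }
          }
      }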

    Read the article

  • What do these .NET auto-generated table adapter commands do? e.g. UPDATE/INSERT followed by a SELECT

    - by RickL
    I'm working with a legacy application which I'm trying to change so that it can work with SQL CE, whilst it was originally written against SQL Server. The problem I am getting now is that when I try to do dataAdapter.Update, SQL CE complains that it is not expecting the SELECT keyword in the command text. I believe this is because SQL CE does not support batch SELECT statements. The auto-generated table adapter command looks like this... this._adapter.InsertCommand.CommandText = @"INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2); SELECT Field1, Field2 FROM Table WHERE (Field1 = @Value1)"; What is it doing? It looks like it is inserting new records from the datatable into the database, and then reading that record back from the database into the datatable? What's the point of that? Can I just go through the code and remove all these SELECT statements? Or is there an easier way to solve my problem of wanting to use these data adapters with SQL CE? I cannot regenerate these table adapters, as the people who knew how to have long since left.
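
    The batched SELECT is there to read back whatever the database generated for the new row (identity values, defaults, timestamps) and refresh the DataTable with it. SQL CE executes only one statement per command, so the usual workaround is to drop the SELECT and, if anything does need refreshing, fetch it separately. A hedged sketch of a CE-friendly insert command, reusing the table and column names from the snippet above (the parameter types and sizes are assumptions):

      using System.Data;
      using System.Data.SqlServerCe;

      static class CeAdapterSetup
      {
          // Single-statement insert command for SQL Server CE.
          public static void Configure(SqlCeDataAdapter adapter, SqlCeConnection connection)
          {
              var insert = new SqlCeCommand(
                  "INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2)", connection);
              insert.Parameters.Add("@Value1", SqlDbType.NVarChar, 50).SourceColumn = "Field1";
              insert.Parameters.Add("@Value2", SqlDbType.NVarChar, 50).SourceColumn = "Field2";
              adapter.InsertCommand = insert;
          }
      }

    If the rows really do carry server-generated values that the old SELECT was pulling back, they can be re-read afterwards (for example with a separate SELECT @@IDENTITY issued from a RowUpdated handler); if not, the trailing SELECT statements can simply be removed.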

    Read the article

  • why can't I get more than two values from 3 different tables in one query

    - by zurna
    This is strange. In the news details page, I want to take a few different values from different tables with one query. However, for some strange reason, I only get two values back. So the outcome is like: <rows> <row id="4"> <FullName>Efe Tuncel</FullName> <CategoryName/> <Title>Runway Report</Title> <ShortDesc/> <Desc></Desc> <Date/> </row> </rows> If I disable fullname, then I get shortdesc but not the others. The same thing happens with the others. NewsID = Request.QueryString("NEWSID") SQL = "SELECT N.NewsID, N.MembersID, N.CategoriesID, N.ImagesID, N.NewsTitle, N.NewsShortDesc, N.NewsDesc, N.NewsActive, N.NewsDateEntered, C.CategoriesID, C.CategoriesName, M.MembersID, M.MembersFullName" SQL = SQL & " FROM News N, Categories C, Members M" SQL = SQL & " WHERE N.NewsID = "& NewsID &" AND N.NewsActive = 1 AND N.MembersID = M.MembersID AND N.CategoriesID = C.CategoriesID" Set objViewNews = objConn.Execute(SQL) With Response .Write "<?xml version='1.0' encoding='windows-1254' ?>" .Write "<rows>" End With With Response .Write "<row id='"& objViewNews("NewsID") &"'>" .Write "<FullName>"& objViewNews("MembersFullName") &"</FullName>" .Write "<CategoryName>"& objViewNews("CategoriesName") &"</CategoryName>" .Write "<Title>"& objViewNews("NewsTitle") &"</Title>" .Write "<ShortDesc>"& objViewNews("NewsShortDesc") &"</ShortDesc>" .Write "<Desc><![CDATA["& objViewNews("NewsDesc") &"]]></Desc>" .Write "<Date>"& objViewNews("NewsDateEntered") &"</Date>" .Write "</row>" End With With Response .Write "</rows>" End With objViewNews.Close Set objViewNews = Nothing

    Read the article

  • Run SQL file in MySQL phpMyAdmin

    - by Dev
    Hi All, I have written a SQL file, but on executing it with mysql @"C:\Documents and Settings\Hemant\Desktop\create_tables.sql"; I get the following error: ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '@"C:\ Documents and Settings\Hemant\Desktop\create_tables.sql"' at line 1 The code on line 1 is: CREATE DATABASE IF NOT EXISTS test; DEFAULT CHARACTER SET latin1 COLLATE latin1_swedish_ci; USE test; Please let me know if I am missing something.

    Read the article

  • Optimize date query for large child tables: GiST or GIN?

    - by Dave Jarvis
    Problem 72 child tables, each having a year index and a station index, are defined as follows: CREATE TABLE climate.measurement_12_013 ( -- Inherited from table climate.measurement_12_013: id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass), -- Inherited from table climate.measurement_12_013: station_id integer NOT NULL, -- Inherited from table climate.measurement_12_013: taken date NOT NULL, -- Inherited from table climate.measurement_12_013: amount numeric(8,2) NOT NULL, -- Inherited from table climate.measurement_12_013: category_id smallint NOT NULL, -- Inherited from table climate.measurement_12_013: flag character varying(1) NOT NULL DEFAULT ' '::character varying, CONSTRAINT measurement_12_013_category_id_check CHECK (category_id = 7), CONSTRAINT measurement_12_013_taken_check CHECK (date_part('month'::text, taken)::integer = 12) ) INHERITS (climate.measurement) CREATE INDEX measurement_12_013_s_idx ON climate.measurement_12_013 USING btree (station_id); CREATE INDEX measurement_12_013_y_idx ON climate.measurement_12_013 USING btree (date_part('year'::text, taken)); (Foreign key constraints to be added later.) The following query runs abysmally slow due to a full table scan: SELECT count(1) AS measurements, avg(m.amount) AS amount FROM climate.measurement m WHERE m.station_id IN ( SELECT s.id FROM climate.station s, climate.city c WHERE -- For one city ... -- c.id = 5182 AND -- Where stations are within an elevation range ... -- s.elevation BETWEEN 0 AND 3000 AND 6371.009 * SQRT( POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) + (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) * POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2)) ) <= 50 ) AND -- -- Begin extracting the data from the database. -- -- The data before 1900 is shaky; insufficient after 2009. -- extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND -- Whittled down by category ... -- m.category_id = 1 AND m.taken BETWEEN -- Start date. (extract( YEAR FROM m.taken )||'-01-01')::date AND -- End date. Calculated by checking to see if the end date wraps -- into the next year. If it does, then add 1 to the current year. -- (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign( (extract( YEAR FROM m.taken )||'-12-31')::date - (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 ) AS text)||'-12-31')::date GROUP BY extract( YEAR FROM m.taken ) The sluggishness comes from this part of the query: m.taken BETWEEN /* Start date. */ (extract( YEAR FROM m.taken )||'-01-01')::date AND /* End date. Calculated by checking to see if the end date wraps into the next year. If it does, then add 1 to the current year. */ (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign( (extract( YEAR FROM m.taken )||'-12-31')::date - (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 ) AS text)||'-12-31')::date The HashAggregate from the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. There is a full table scan on the measurement table (itself having neither data nor indexes) being performed. The table aggregates 237 million rows from its child tables. Question What is the proper way to index the dates to avoid full table scans? Options I have considered: GIN GiST Rewrite the WHERE clause Separate year_taken, month_taken, and day_taken columns to the tables What are your thoughts? Thank you!

    Read the article

  • What do these C# auto-generated table adapter commands do? e.g. UPDATE/INSERT followed by a SELECT

    - by RickL
    I'm working with a legacy application which I'm trying to change so that it can work with SQL CE, whilst it was originally written against SQL Server. The problem I am getting now is that when I try to do dataAdapter.Update, SQL CE complains that it is not expecting the SELECT keyword in the command text. I believe this is because SQL CE does not support batch SELECT statements. The auto-generated table adapter command looks like this... this._adapter.InsertCommand.CommandText = @"INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2); SELECT Field1, Field2 FROM Table WHERE (Field1 = @Value1)"; What is it doing? It looks like it is inserting new records from the datatable into the database, and then reading that record back from the database into the datatable? What's the point of that? Can I just go through the code and remove all these SELECT statements? Or is there an easier way to solve my problem of wanting to use these data adapters with SQL CE? I cannot regenerate these table adapters, as the people who knew how to have long since left.

    Read the article

  • Is SQL used by PDO database independent?

    - by Pheter
    Different databases have slight variations in their implementations of SQL. Does PDO handle this? If I write an SQL query that I use with PDO to access a MySQL database, and later tell PDO to start using a different type of database, will the query stop working? Or will PDO 'convert' the query so that it continues to work? If PDO does not do this, are there any PHP libraries that allow me to write SQL according to a particular syntax, and then the library will handle converting the SQL so that it will run on different databases?

    Read the article

  • Is it a good OOP design?

    - by remi bourgarel
    Hi all, I'd like to know what you think about how this part of our program is realized. We have in our database a list of campsites. Partners call us to get all the campsites near a GPS location, or all the campsites which provide a bar (we call it a service). So how did I realize it? Here is our database: Campsite - ID - NAME - GPS_latitude - GPS_longitude CampsiteServices - Campsite_ID - Services_ID So my code (C#, but it's not relevant, let's say it's an OO language) looks like this: public class SqlCodeCampsiteFilter{ public string SqlCode; public Dictionary<string, object> Parameters; } interface ISQLCampsiteFilter{ SqlCodeCampsiteFilter CreateSQLCode(); } public class GpsLocationFilter : ISQLCampsiteFilter{ public float? GpsLatitude; public float? GpsLongitude; public SqlCodeCampsiteFilter CreateSQLCode() { // return the sql code to filter on the GPS location, like dbo.getDistance(@gpsLat,@gpsLong,campsite.GPS_latitude,campsite.GPS_longitude), with the parameters } } public class ServiceFilter : ISQLCampsiteFilter{ public int[] RequiredServicesID; public SqlCodeCampsiteFilter CreateSQLCode() { // return the sql code to filter on the services: "where ID IN (select CampsiteServices.Campsite_ID FROM CampsiteServices WHERE Service_ID in ...)" } } So in my webservice code: List<ISQLCampsiteFilter> filters = new List<ISQLCampsiteFilter>(); if(gps_latitude.HasValue && gps_longitude.HasValue){ filters.Add(new GpsLocationFilter(gps_latitude.Value, gps_longitude.Value)); } if(required_services_id != null){ filters.Add(new ServiceFilter(required_services_id)); } string sql = "SELECT ID,NAME FROM campsite where 1=1"; foreach(ISQLCampsiteFilter aFilter in filters){ SqlCodeCampsiteFilter code = aFilter.CreateSQLCode(); sql += code.SqlCode; mySqlCommand.AddParameters(code.Parameters); // add all the parameters to the sql command } return mySqlCommand.GetResults(); 1/ I don't use an ORM for the simple reason that the system has existed for 10 years and the only dev who has been here since the beginning is just starting to learn the difference between public and private. 2/ I don't like SPs because we can't do overriding, and T-SQL is not so fun to use :) So what do you think? Is it clear? Do you have any pattern that I should have a look at? If something is not clear, please ask.
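
    The overall shape (small filter objects that each emit a SQL fragment plus its parameters) reads fine; the main thing worth tightening is that every piece of user input stays in a parameter and that each filter uses parameter names that cannot collide with another filter's. A sketch of the composing side under that assumption, reusing the ISQLCampsiteFilter and SqlCodeCampsiteFilter types defined above:

      using System.Collections.Generic;
      using System.Data.SqlClient;
      using System.Linq;

      static class CampsiteSearch
      {
          // Builds one command from any number of filters; each filter is ANDed in
          // and contributes its own uniquely named parameters.
          public static SqlCommand BuildCommand(SqlConnection connection, IEnumerable<ISQLCampsiteFilter> filters)
          {
              var command = new SqlCommand { Connection = connection };
              var sql = "SELECT ID, NAME FROM campsite WHERE 1 = 1";

              foreach (var code in filters.Select(f => f.CreateSQLCode()))
              {
                  sql += " AND (" + code.SqlCode + ")";
                  foreach (var p in code.Parameters)
                      command.Parameters.AddWithValue(p.Key, p.Value);
              }

              command.CommandText = sql;
              return command;
          }
      }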

    Read the article
