Search Results

Search found 27723 results on 1109 pages for 'sql puzzle'.


  • JSON documents and SQL database tables

    - by Sharmi
    Do JSON documents in RavenDB cost more than SQL Server tables in terms of storage and query costs? And for centralized access, which one is better? What are the disadvantages of non-SQL databases like RavenDB, CouchDB, MongoDB, etc.? I can see that some of these are open source and support more data types like enums, objects, etc., but otherwise I don't see any big advantage. Currently we have a problem storing a huge amount of logs from various locations. I am planning to suggest one of these to my manager, so I just need a clear idea.

    Read the article

  • Need help in SQL and Sequel involving inner join and where/filter

    - by mhd
    I need help translating this SQL into Sequel. The SQL:

        SELECT table_t.curr_id
        FROM table_t
        INNER JOIN table_c ON table_c.curr_id = table_t.curr_id
        INNER JOIN table_b ON table_b.bic = table_t.bic
        WHERE table_c.alpha_id = 'XXX' AND table_b.name = 'Foo';

    I'm stuck on the Sequel side; I don't know how to filter. So far I have:

        cid = table_t.select(:curr_id).
              join(:table_c, :curr_id=>:curr_id).
              join(:table_b, :bic=>:bic).
              filter( ????? )

    An answer with a better idiom than the above is appreciated as well. Thanks.
    UPDATE: I had to modify it a little to make it work:

        cid = DB[:table_t].select(:table_t__curr_id).
              join(:table_c, :curr_id=>:curr_id).
              join(:table_b, :bic=>:table_t__bic). # add table_t, or else ERROR: column table_c.bic does not exist
              filter(:table_c__alpha_id => 'XXX', :table_b__name => 'Foo')

    and without filter:

        cid = DB[:table_t].select(:table_t__curr_id).
              join(:table_c, :curr_id=>:curr_id, :alpha_id=>'XXX').
              join(:table_b, :bic=>:table_t__bic, :name=>'Foo')

    BTW, I'm using PostgreSQL 9.0.

    Read the article

  • Oracle SQL: How to use more than 1000 items inside an IN statement

    - by Mehper C. Palavuzlar
    I have an SQL statement where I would like to get data for 1200 ep_codes by making use of IN. When I include more than 1000 ep_codes inside the IN list, Oracle says I'm not allowed to do that. To overcome this, I tried to change the SQL code as follows:

        SELECT ... FROM ... WHERE ...
        AND ep_codes IN (...1000 ep_codes...)
        OR ep_codes IN (...200 ep_codes...)

    The code executed successfully, but the results are strange. Is it appropriate to do this using OR between INs, or should I execute two separate queries, one with 1000 and the other with 200 ep_codes?
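
    The strange results are most likely an operator-precedence issue: AND binds more tightly than OR, so the second IN list escapes the other WHERE conditions. A minimal sketch of the parenthesized form (table and column placeholders kept from the question):

        SELECT ... FROM ... WHERE ...
          AND (ep_codes IN (/* ...1000 ep_codes... */)
            OR ep_codes IN (/* ...200 ep_codes... */))

    Loading the 1200 codes into a temporary table and joining against it would avoid the 1000-item limit entirely.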

    Read the article

  • SQL Query in NHibernate diction

    - by Jan-Frederik Carl
    I have a SQL query which works in SQL Server Management Studio:

        Select Id
        From table t
        Where t.Date = (Select Max(Date)
                        From (Select * From table where ReferenceId = xy) u)

    The reason is that, from all entries with a certain foreign key, I want to get the one with the highest date. I tried to rework this query for use in NHibernate, and I got:

        IQuery query = session.CreateQuery(String.Format(
            @"Select t.Id From table t Where t.Date =
              (Select Max(Date) From (Select * From table t where t.ReferenceItem.Id = " +
            item.ReferenceItem.Id + ") u)"));

    I get the error message "In expected". How do I have to form the NHibernate query, and what does the "In" mean?
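
    HQL does not support subqueries in the FROM clause, which is the likely source of the "In expected" error; the maximum can be taken directly in the WHERE subquery instead. A sketch (entity and property names follow the question and are assumptions):

        select t.Id
        from table t
        where t.Date = (select max(u.Date)
                        from table u
                        where u.ReferenceItem.Id = :referenceItemId)

    Passing the id as a named parameter (SetParameter) also avoids concatenating it into the query string.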

    Read the article

  • How do I list all non-system stored procedures?

    - by bubbassauro
    I want to create a query to list all user-defined stored procedures, excluding the ones that are system stored procedures, considering that:
    Checking for names like "sp_" doesn't work, because there are user stored procedures that start with "sp_".
    Checking the is_ms_shipped property doesn't work, because there are system stored procedures that have that flag = 0, for example sp_alterdiagram (it is not MS-shipped but appears under System Stored Procedures in SQL Server Management Studio).
    There must be a property or a flag somewhere, since you can see the "System Stored Procedures" in a separate folder in SQL 2005. Does anyone know?
    Edit: A combination of the suggestions below worked for me:

        select *
        from sys.objects O
        LEFT OUTER JOIN sys.extended_properties E ON O.object_id = E.major_id
        WHERE O.name IS NOT NULL
          AND ISNULL(O.is_ms_shipped, 0) = 0
          AND ISNULL(E.name, '') <> 'microsoft_database_tools_support'
          AND O.type_desc = 'SQL_STORED_PROCEDURE'
        ORDER BY O.name

    Read the article

  • How to optimize an SQL query with many thousands of WHERE clauses

    - by bugaboo
    I have a series of queries against a very large database, with hundreds of thousands of ORs in the WHERE clauses. What is the best and easiest way to optimize such SQL queries? I found some articles about creating temporary tables and using joins, but I am unsure. I'm new to serious SQL, and have been cutting and pasting the results of one query into the next.

        SELECT doc_id, language, author, title FROM doc_text
        WHERE language='fr' OR language='es'

        SELECT doc_id, ref_id FROM doc_ref
        WHERE doc_id=1234567 OR doc_id=1234570 OR doc_id=1234572 OR doc_id=1234596 OR ...

        SELECT ref_id, location_id FROM ref_master
        WHERE ref_id=098765 OR ref_id=987654 OR ref_id=876543 OR ...

        SELECT location_id, location_display_name FROM location

        SELECT doc_id, index_code, FROM doc_index
        WHERE doc_id=1234567 OR doc_id=1234570 OR doc_id=1234572 OR doc_id=1234596 OR ... x100,000

    These unoptimized queries can take over 24 hours each. Cheers.
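
    One common rewrite is to stop copying id lists between queries and let the database do the chaining with joins, so the filter is applied once. A sketch, under the assumption that the tables relate through doc_id, ref_id and location_id as the queries above imply:

        SELECT dt.doc_id, dt.language, dt.author, dt.title,
               dr.ref_id, rm.location_id, l.location_display_name, di.index_code
        FROM doc_text dt
        JOIN doc_ref dr    ON dr.doc_id = dt.doc_id
        JOIN ref_master rm ON rm.ref_id = dr.ref_id
        JOIN location l    ON l.location_id = rm.location_id
        JOIN doc_index di  ON di.doc_id = dt.doc_id
        WHERE dt.language IN ('fr', 'es')

    If the id lists come from outside the database, inserting them into an indexed temporary table and joining to it usually beats a WHERE clause with hundreds of thousands of ORs.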

    Read the article

  • Valid Email Addresses - XSS and SQL Injection

    - by PAAMAYIM_NEKUDOTAYIM
    Since there are so many valid characters for email addresses, are there any valid email addresses that can in themselves be XSS attacks or SQL injections? I couldn't find any information on this on the web. The local-part of the e-mail address may use any of these ASCII characters:
    Uppercase and lowercase English letters (a–z, A–Z)
    Digits 0 to 9
    Characters ! # $ % & ' * + - / = ? ^ _ ` { | } ~
    Character . (dot, period, full stop), provided that it is not the last character, and provided also that it does not appear two or more times consecutively (e.g. [email protected]).
    http://en.wikipedia.org/wiki/E-mail_address#RFC_specification
    I'm not asking how to prevent these attacks (I'm already using parameterized queries and HTML Purifier); this is more a proof of concept. The first thing that came to mind was 'OR [email protected], except that spaces are not allowed. Do all SQL injections require spaces?
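
    On the last question: no, not every injection needs literal spaces; in MySQL, for example, inline comments can stand in for whitespace, so a space-free statement is syntactically possible. An illustrative fragment only (not a claim that it also satisfies the address grammar):

        SELECT/**/*/**/FROM/**/users/**/WHERE/**/name='a'/**/OR/**/'1'='1'

    Which is one more reason parameterized queries remain the right defense regardless of input validation.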

    Read the article

  • Full-Text Search in SQL Server Express Won't Recognize Latest IFilters

    - by Brandon King
    I'm having difficulty getting full-text search working in SQL Server 2008 Express with Advanced Services. I have a table loaded with .DOCX files as varbinary(MAX) data that I want to use for a full-text catalog, but it doesn't seem to recognize the .DOCX format. Here are the steps that I've taken:

        Installed the latest Filter Pack 2.0
        Exec sp_fulltext_service 'load_os_resources', 1
        Exec sys.sp_help_fulltext_system_components 'all'   (NOTE: .DOCX is not shown as a filter)
        Building the full-text catalog fails to identify any key words

    I initially thought there might be a conflict between x86 SQL Express and the x64 Filter Pack on my Windows 7 machine, but I just tried it with everything x86 in a Windows XP virtual machine and got the same result.

    Read the article

  • Database schema publishing with SQL Server 2005/2008

    - by Marconline
    Hi everybody, I have a question for you. We have built software that uses a single database for each customer. These databases are managed by SQL Server 2008. The problem is that when we build our software we sometimes need to change something in the schema (like adding tables or modifying existing ones) and migrate these updates to all of the customers' databases. Right now this task is done by hand: we generate update scripts and then, using T-SQL, we update each database. This is fine for a small set of customers, but we are becoming bigger and bigger and we really don't know how to handle it. We found Wizardby and it seems interesting, but it is quite difficult for us to learn right now. Do you have any other tricks? Thanks a lot, Marco
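
    A lightweight alternative to a full migration framework is to make each update script idempotent and keep a version table, so the same script can be replayed safely against every customer database. A minimal T-SQL sketch, where SchemaVersion and the example column are assumptions:

        IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE Version = 42)
        BEGIN
            -- example change: add a column only if it is not already there
            IF COL_LENGTH('dbo.Customer', 'LoyaltyCode') IS NULL
                ALTER TABLE dbo.Customer ADD LoyaltyCode nvarchar(20) NULL;

            INSERT INTO dbo.SchemaVersion (Version, AppliedAt) VALUES (42, GETDATE());
        END

    The same script can then be pushed to each database with sqlcmd or a small loop over the customer list.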

    Read the article

  • Best way to auto-restore a database every hour

    - by aron
    I have a demo site where anyone can log in and test a management interface. Every hour I would like to flush all the data in the SQL 2008 database and restore it from the original. Red Gate Software has some awesome tools for this, but they are beyond my budget right now. Could I simply make a backup copy of the database's data file, then have a C# console app that deletes it and copies over the original? Then I could have a Windows scheduled task run the .exe every hour. It's simple and free... would this work? I'm using SQL Server 2008 R2 Web edition. I understand that the Red Gate tooling is technically better because I can set it to analyze the db and only update the records that were altered, whereas the approach above is more of a "sledge hammer".
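
    Copying the data file of a live database is usually blocked because SQL Server keeps it locked, but a plain BACKUP/RESTORE pair does the same job with no third-party tools. A sketch with hypothetical database name and paths:

        -- taken once, from the pristine demo database
        BACKUP DATABASE DemoDb TO DISK = N'C:\Backups\DemoDb_clean.bak' WITH INIT;

        -- run every hour (SQL Agent job or a scheduled sqlcmd call), from the master database
        USE master;
        ALTER DATABASE DemoDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        RESTORE DATABASE DemoDb FROM DISK = N'C:\Backups\DemoDb_clean.bak' WITH REPLACE;
        ALTER DATABASE DemoDb SET MULTI_USER;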

    Read the article

  • Custom SQL Server driver

    - by hoodoos
    I had a crazy thought about writing my own SQL Server driver that would work something like a non-blocking HTTP client, so it wouldn't be thread-hungry and could handle lots of db queries within one thread. I searched Google for guidelines on implementing the SQL Server client protocol but found none, really. Where do people get information about it when they write their own implementations for PHP or Python? I need the protocol documented at a really low level so I can implement all phases of working with a connection through sockets. It would also be really nice to have an example in C#. :)

    Read the article

  • Help Forming An SQL Query That Selects The Max Difference Of Two Fields

    - by Frank
    I'm trying to select the record with the most effective votes. Each record has an id, the number of upvotes (int) and the number of downvotes (int) in a MySQL database. I know basic update, select, and insert queries, but I'm unsure how to form a query that looks something like:

        SELECT * FROM topics WHERE MAX(topic.upvotes - topic.downvotes)

    Please excuse my made-up SQL. The SQL tutorials I find on the internet cover very basic stuff. Can anyone recommend a good book on this subject?
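
    MAX() cannot be used directly in a WHERE clause like that; since only the single best row is wanted, ordering by the difference and taking the first row is the simplest MySQL form. A sketch using the column names from the question:

        SELECT *
        FROM topics
        ORDER BY (upvotes - downvotes) DESC
        LIMIT 1;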

    Read the article

  • Linq To Sql Left outer join - filtering null results

    - by Harry
    I'd like to reproduce the following SQL in C# LINQ to SQL:

        SELECT TOP(10) Keywords.*
        FROM Keywords
        LEFT OUTER JOIN IgnoreWords ON Keywords.WordID = IgnoreWords.ID
        WHERE (DomainID = 16673) AND (IgnoreWords.Name IS NULL)
        ORDER BY [Score] DESC

    The following C# LINQ gives the right answer, but I can't help thinking I'm missing something (a better way of doing it?):

        var query = (from keyword in context.Keywords
                     join ignore in context.IgnoreWords
                       on keyword.WordID equals ignore.ID into ignored
                     from i in ignored.DefaultIfEmpty()
                     where i == null
                     where keyword.DomainID == ID
                     orderby keyword.Score descending
                     select keyword).Take(10);

    The SQL produced looks something like this:

        SELECT TOP (10) [t0].[DomainID], [t0].[WordID], [t0].[Score], [t0].[Count]
        FROM [dbo].[Keywords] AS [t0]
        LEFT OUTER JOIN (SELECT 1 AS [test], [t1].[ID]
                         FROM [dbo].[IgnoreWords] AS [t1]) AS [t2]
          ON [t0].[WordID] = [t2].[ID]
        WHERE ([t0].[DomainID] = 16673) AND ([t2].[test] IS NULL)
        ORDER BY [t0].[Score] DESC

    How can I get rid of the redundant inner selection? It's only slightly more expensive, but every bit helps!

    Read the article

  • Approach for altering Primary Key from GUID to BigInt in SQL Server related tables

    - by Tom
    I have two tables with 10-20 million rows that have GUID primary keys, and at least 12 tables related via foreign key. The base tables have 10-20 indexes each. We are moving from GUID to BigInt primary keys. I'm wondering if anyone has suggestions on an approach. Right now this is the approach I'm pondering:
    1. Drop all indexes and fkeys on all the tables involved.
    2. Add a 'NewPrimaryKey' column to each table.
    3. Make the key an identity on the two base tables.
    4. Script the data change: "update table x set NewPrimaryKey = y where OldPrimaryKey = z".
    5. Rename the original primary key to 'OldPrimaryKey'.
    6. Rename the 'NewPrimaryKey' column to 'PrimaryKey'.
    7. Script back all the indexes and fkeys.
    Does this seem like a good approach? Does anyone know of a tool or script that would help with this? Edited per additional information: see this blog post that addresses an approach when the GUID is the primary key: http://www.sqlmag.com/blogs/sql-server-questions-answered/sql-server-questions-answered/tabid/1977/entryid/12749/Default.aspx
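
    For step 4, the related tables can be back-filled in bulk by joining on the old GUID key rather than updating row by row. A sketch with hypothetical table and column names (Parent, Child, OldPrimaryKey, OldParentKey):

        -- new identity key on a base table
        ALTER TABLE dbo.Parent ADD NewPrimaryKey bigint IDENTITY(1,1) NOT NULL;

        -- matching column on a referencing table, back-filled via the old GUID
        ALTER TABLE dbo.Child ADD NewParentKey bigint NULL;

        UPDATE c
        SET    c.NewParentKey = p.NewPrimaryKey
        FROM   dbo.Child AS c
        JOIN   dbo.Parent AS p ON p.OldPrimaryKey = c.OldParentKey;

    On 10-20 million rows it is worth running the UPDATE in batches to keep the transaction log manageable.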

    Read the article

  • Breaking the SQL Compact 8K Limit?

    - by David Veeneman
    I am creating a desktop application that stores rich text documents to a SQL Compact database. Documents are converted to a byte array and stored as a Binary column, and I am running into SQL Compact's 8K limit for Binary field length. Is there a simple way to get around the 8K limit? I can come up with lots of complicated ways to do it, such as parsing into 8K chunks for storage and reassembling on fetch. But before I get into something that complex, I would like to make sure I can't solve the problem more simply, such as by changing the data type. If there is no simple way of getting around the 8K limit, is there a best practice for storing documents greater than 8K? Thanks for your help.
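
    If changing the data type is acceptable, a commonly suggested workaround in SQL Server Compact is the image type, which allows much larger values than an 8K varbinary (worth verifying against the Compact edition in use). A sketch with a hypothetical table:

        -- hypothetical document table for SQL Server Compact
        CREATE TABLE Documents (
            DocumentId int IDENTITY(1,1) PRIMARY KEY,
            Title      nvarchar(200) NOT NULL,
            Content    image NOT NULL   -- large binary, beyond the 8K varbinary limit
        );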

    Read the article

  • Loading .sql files from within PHP

    - by Josh Smeaton
    I'm creating an installation script for an application that I'm developing and need to create databases dynamically from within PHP. I've got it to create the database, but now I need to load in several .sql files. I had planned to open the file and mysql_query it a line at a time, until I looked at the schema files and realised they aren't just one query per line. So, how do I load an .sql file from within PHP (as phpMyAdmin does with its import command)?

    Read the article

  • Filter by virtual column?

    - by user329957
    I have the following database structure:

        [Order]   OrderId, Total
        [Payment] OrderId, Amount

    Every Order can have X payment rows. I want to get only the orders where the sum of all the payments is less than the order Total. I have the following SQL, but it returns all the orders, paid and unpaid:

        SELECT o.OrderId, o.UserId, o.Total, o.DateCreated, COALESCE(SUM(p.Amount),0) AS Paid
        FROM [Order] o
        LEFT JOIN Payment p ON p.OrderId = o.OrderId
        GROUP BY o.OrderId, o.Total, o.UserId, o.DateCreated

    I have tried adding WHERE (Paid < o.Total) but it does not work. Any idea? BTW, I'm using SQL CE 3.5.
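
    A column alias from the SELECT list cannot be referenced in WHERE, and since the comparison involves an aggregate it belongs in HAVING anyway. A sketch based on the query above (whether SQL CE 3.5 accepts this exact form is worth verifying):

        SELECT o.OrderId, o.UserId, o.Total, o.DateCreated,
               COALESCE(SUM(p.Amount), 0) AS Paid
        FROM [Order] o
        LEFT JOIN Payment p ON p.OrderId = o.OrderId
        GROUP BY o.OrderId, o.Total, o.UserId, o.DateCreated
        HAVING COALESCE(SUM(p.Amount), 0) < o.Total;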

    Read the article

  • Updating or inserting high scores in SQL

    - by Roger Gilbrat
    I've been racking my brain over this for the past few days and I'm not sure it's possible, but I figured I'd ask here. Is it possible for a single SQL statement to update a high score if your score is greater, or insert it if it is your first score? My Score table has UserID, Level and Score columns, and I'd like it to follow this logic:
    If your new score is greater than your last score for this Level, replace it.
    If you don't have a score for this Level, add it.
    If your score for this Level is less than your highest score for this Level, do nothing.
    Is this possible in a single SQL statement, or do I have to use two: one to see if you have a new high score and, if so, another to replace it? Each UserID would have only one score in the table for each Level. I'm using MySQL.
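
    With a unique key on (UserID, Level), MySQL can do this in a single statement via INSERT ... ON DUPLICATE KEY UPDATE. A sketch (the unique index and the literal values are assumptions):

        -- assumes: ALTER TABLE Score ADD UNIQUE KEY uq_user_level (UserID, Level);
        INSERT INTO Score (UserID, Level, Score)
        VALUES (42, 3, 1250)
        ON DUPLICATE KEY UPDATE Score = GREATEST(Score, VALUES(Score));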

    Read the article

  • Double authentication issue on IIS / Report Server (SQL Server 2008)

    - by Vinzz
    Hi, on a 2003 server box with SQL Server 2008 installed (ReportServer deployed in IIS mode), I've got a virtual directory within IIS with its security set to 'Windows authentication', with the following HTML code:

        <body>
          <h1>test</h1>
          <iframe src="/reportserver" width="50%" height="50%" />
        </body>

    From the outside, I get a first login/password box to access the HTML page, then a second one to display the content of the iframe. On the same type of server, but with SQL Server 2005, I don't have this issue (i.e. only one login box). My thought is that the first token should give access to both the page and the iframe, shouldn't it? Any hints on how to set up the report server to fix this? Thanks.

    Read the article

  • SQL Server replication algorithm

    - by reggie
    Does anyone know how the underlying replication model in SQL Server works? Does it essentially depend on UTC datetime values to determine whether something is new, or does it keep a table of all the changes (like a table of tableID + rowid pairs that have changed)? I am building my own "replication" system and was planning on using the dates to know what to replicate. Then I started wondering what would happen if the computer's clock drifted for some reason. The obvious choice is to keep a log of the changes as you go and, once you replicate those changes, remove them from the log. But that's a lot of extra work compared to just checking dates. I figure that if SQL Server replication works by just checking dates, then that should be good enough for me. Any wisdom here? Thanks.

    Read the article

  • Aggregate functions in ANSI SQL

    - by morpheous
    I want to use multiple aggregate functions in a query. All the examples I have seen of aggregate functions, however, are trivial. Typically, they are of the form:

        SELECT field1, agg_func1, agg_func2
        GROUP BY SOME_COLUMNS
        HAVING agg_func1 OP SOME_SCALAR

    Where:
    OP is a boolean operator (e.g. <, =, etc.)
    SOME_SCALAR is a scalar (i.e. a constant number)
    What I want to know is whether it is possible to write (in ANSI SQL) queries like:

        SELECT field1, agg_func1, agg_func2, agg_func3
        GROUP BY SOME_COLUMNS
        HAVING (agg_func1 OP1 agg_func2) OP2 (agg_func2 OP3 agg_func3)

    Where OP[N] are boolean operators or ANSI SQL clause operators like 'BETWEEN', 'LIKE', 'IN', etc. Also, assuming this is possible (I have not seen any documentation saying otherwise), are there any efficiency/performance considerations (i.e. penalties) when the HAVING clause is a boolean expression combining the output of aggregate functions, instead of the usual comparison of an aggregate's output with a constant number (e.g. min(salary) > 100), which is what the most banal examples involving aggregate functions use?
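
    ANSI SQL does allow the HAVING predicate to compare aggregates with each other; each aggregate is computed once per group either way, so there is no inherent extra cost beyond evaluating the additional aggregates. An illustrative sketch with assumed table and column names:

        SELECT department, MIN(salary), MAX(salary), AVG(salary)
        FROM employee
        GROUP BY department
        HAVING MAX(salary) - MIN(salary) > AVG(salary)
           AND COUNT(*) BETWEEN 5 AND 50;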

    Read the article

  • SQL Server uncorrelated subquery very slow

    - by brianberns
    I have a simple, uncorrelated subquery that performs very poorly on SQL Server. I'm not very experienced at reading execution plans, but it looks like the inner query is being executed once for every row in the outer query, even though the results are the same each time. What can I do to tell SQL Server to execute the inner query only once? The query looks like this:

        select *
        from Record record0_
        where record0_.RecordTypeFK = 'c2a0ffa5-d23b-11db-9ea3-000e7f30d6a2'
          and (record0_.EntityFK in (
                select record1_.EntityFK
                from Record record1_
                join RecordTextValue textvalues2_
                  on record1_.PK = textvalues2_.RecordFK
                 and textvalues2_.FieldFK = '0d323c22-0ec2-11e0-a148-0018f3dde540'
                 and (textvalues2_.Value like 'O%' escape '~')
              ))
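
    One way to make the intent explicit to the optimizer is to turn the IN subquery into a derived table (or a #temp table populated once) and join to it. A sketch using the names from the query above:

        SELECT r.*
        FROM Record AS r
        JOIN (
            SELECT DISTINCT rec.EntityFK
            FROM Record AS rec
            JOIN RecordTextValue AS tv
              ON tv.RecordFK = rec.PK
             AND tv.FieldFK = '0d323c22-0ec2-11e0-a148-0018f3dde540'
             AND tv.Value LIKE 'O%' ESCAPE '~'
        ) AS matched ON matched.EntityFK = r.EntityFK
        WHERE r.RecordTypeFK = 'c2a0ffa5-d23b-11db-9ea3-000e7f30d6a2';

    The DISTINCT keeps the semantics of IN by preventing duplicate outer rows.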

    Read the article

  • How to Import data from Excel 2010 to SQL table

    - by user2950101
    I am using this:

        Insert into smst(id,mobile,day,month,year,time,model,imie1,imie2,FullMessage)
        select * FROM OPENROWSET('Microsoft.Ace.OLEDB.14.0','Excel 14.0;Database=L:\SMS.xlsx;HDR=YES',
            'SELECT id,mobile,day,month,year,time,model,imie1,imie2,FullMessage FROM [Sheet2]')

    Could you please help me find the error? SQL error:

        1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '('Microsoft.Ace.OLEDB.14.0','Excel 14.0;Database=L:\SMS.xlsx;HDR=YES', 'SELECT i' at line 1

    I am using Excel 2010.
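
    Error 1064 comes from MySQL, while OPENROWSET with the ACE OLE DB provider is SQL Server (Transact-SQL) syntax, so MySQL will never parse it. If the target really is MySQL, saving the sheet as CSV and loading it is the usual route; a sketch (file path, delimiters and column order are assumptions):

        LOAD DATA LOCAL INFILE 'L:/SMS.csv'
        INTO TABLE smst
        FIELDS TERMINATED BY ',' ENCLOSED BY '"'
        LINES TERMINATED BY '\r\n'
        IGNORE 1 LINES
        (id, mobile, day, month, year, time, model, imie1, imie2, FullMessage);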

    Read the article
