Search Results

Search found 43145 results on 1726 pages for 'sql select'.

Page 274 of 1726

  • Valid Email Addresses - XSS and SQL Injection

    - by PAAMAYIM_NEKUDOTAYIM
    Since there are so many valid characters for email addresses, are there any valid email addresses that can in themselves be XSS attacks or SQL injections? I couldn't find any information on this on the web. The local-part of the e-mail address may use any of these ASCII characters: uppercase and lowercase English letters (a–z, A–Z); digits 0 to 9; the characters ! # $ % & ' * + - / = ? ^ _ ` { | } ~; and the character . (dot, period, full stop), provided that it is not the last character and does not appear two or more times consecutively (e.g. [email protected]). http://en.wikipedia.org/wiki/E-mail_address#RFC_specification I'm not asking how to prevent these attacks (I'm already using parameterized queries and HTML Purifier); this is more a proof of concept. The first thing that came to mind was 'OR [email protected], except that spaces are not allowed. Do all SQL injections require spaces?
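    A quick sketch of why the no-spaces rule is not much of a barrier on its own: MySQL treats C-style /**/ comments as whitespace and # starts a line comment, so a space-free fragment can still complete a tautology if it were ever concatenated into a query. The users table and the vulnerable pattern below are hypothetical, and a real address would still need an @domain part:

      -- hypothetical vulnerable pattern: SELECT * FROM users WHERE email = '<input>';
      -- with the space-free input  '/**/OR/**/1=1#  the concatenated statement becomes:
      SELECT * FROM users WHERE email = ''/**/OR/**/1=1#';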

    Read the article

  • Full-Text Search in SQL Server Express Won't Recognize Latest IFilters

    - by Brandon King
    I'm having difficulty getting full-text search working in SQL Server 2008 Express with Advanced Services. I have a table loaded with .DOCX files as varbinary(MAX) data that I want to use for a full-text catalog, but it doesn't seem to recognize the .DOCX format. Here are the steps I've taken: (1) installed the latest Filter Pack 2.0; (2) Exec sp_fulltext_service 'load_os_resources', 1; (3) Exec sys.sp_help_fulltext_system_components 'all' (NOTE: .DOCX is not shown as a filter); (4) building the full-text catalog fails to identify any keywords. I initially thought there might be a conflict between x86 SQL Express and the x64 Filter Pack on my Windows 7 machine, but I just tried it with everything x86 in a Windows XP virtual machine and got the same result.
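    For comparison, a hedged sketch of the registration sequence commonly suggested for third-party and Office filter packs on SQL Server 2008; the signature-verification step and the service restart are assumptions about what may be missing here:

      EXEC sp_fulltext_service 'load_os_resources', 1;
      EXEC sp_fulltext_service 'verify_signature', 0;   -- allow filters not signed by Microsoft
      -- restart the SQL Full-text Filter Daemon Launcher service, then re-check:
      EXEC sys.sp_help_fulltext_system_components 'filter';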

    Read the article

  • Updating or inserting high scores in SQL

    - by Roger Gilbrat
    I've been racking my brain over this for the past few days and I'm not sure it's possible, but I figured I'd ask here. Is it possible for a single SQL statement to update a high score if your new score is greater, or insert it if it's your first score? My Score table has UserID, Level and Score columns, and I'd like it to follow this logic: if your new score is greater than your last score for this Level, replace it; if you don't have a score for this Level, add it; if your score for this Level is less than your highest score for this Level, do nothing. Is this possible in a single SQL statement, or do I have to use two: one to see if you have a new high score and, if so, one to replace it? Each UserID has only one score in the table for each Level. I'm using MySQL.
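    As a sketch of one single-statement approach in MySQL, assuming the Score table has (or can be given) a UNIQUE key on (UserID, Level), an INSERT ... ON DUPLICATE KEY UPDATE keeps the higher of the old and new scores; the literal values are just examples:

      INSERT INTO Score (UserID, Level, Score)
      VALUES (42, 3, 1500)
      ON DUPLICATE KEY UPDATE Score = GREATEST(Score, VALUES(Score));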

    Read the article

  • Loading .sql files from within PHP

    - by Josh Smeaton
    I'm creating an installation script for an application that I'm developing and need to create databases dynamically from within PHP. I've got it to create the database, but now I need to load in several .sql files. I had planned to open the file and mysql_query it a line at a time, until I looked at the schema files and realised they aren't just one query per line. So, please: how do I load an SQL file from within PHP (as phpMyAdmin does with its import command)?

    Read the article

  • Filter by virtual column?

    - by user329957
    I have the following database structure: [Order] OrderId, Total; [Payment] OrderId, Amount. Every Order can have any number of payment rows. I want to get only the orders where the sum of all the payments is less than the order Total. I have the following SQL, but it returns all the orders, paid and unpaid: SELECT o.OrderId, o.UserId, o.Total, o.DateCreated, COALESCE(SUM(p.Amount),0) AS Paid FROM [Order] o LEFT JOIN Payment p ON p.OrderId = o.OrderId GROUP BY o.OrderId, o.Total, o.UserId, o.DateCreated. I have tried adding WHERE (Paid < o.Total) but it does not work; any idea? BTW, I'm using SQL CE 3.5.
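    A possible rewrite sketch, reusing the query above: the alias Paid cannot be referenced in WHERE, but the aggregate itself can be repeated in a HAVING clause (comparing against o.Total is legal there because it appears in the GROUP BY):

      SELECT o.OrderId, o.UserId, o.Total, o.DateCreated,
             COALESCE(SUM(p.Amount), 0) AS Paid
      FROM [Order] o
      LEFT JOIN Payment p ON p.OrderId = o.OrderId
      GROUP BY o.OrderId, o.Total, o.UserId, o.DateCreated
      HAVING COALESCE(SUM(p.Amount), 0) < o.Total;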

    Read the article

  • Custom SQL Server driver

    - by hoodoos
    I had a crazy thought about writing my own SQL Server driver to make it work something like a non-blocking HTTP client, so it won't be thread-hungry and could handle lots of DB queries within one thread. I tried searching Google for guidelines on implementing the SQL Server client protocol, but found none really; where do those guys get information about it when they write their own implementations for PHP or Python? I need the really low-level details documented so I can implement all phases of working with a connection through sockets. An example in C# would be really nice too. :)

    Read the article

  • Nesting queries in SQL

    - by ZAX
    The goal of my query is to return the country name and its head of state, if its head of state has a name starting with A and the capital of the country has greater than 100,000 people, using a nested query. Here is my query: SELECT country.name as country, (SELECT country.headofstate from country where country.headofstate like 'A%') from country, city where city.population > 100000; I've tried reversing it, placing it in the where clause, etc. I don't get nested queries. I'm just getting errors back, like "subquery returns more than one row" and such. If someone could help me out with how to structure it, and explain why it needs to be a certain way, that'd be great.
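    A hedged sketch of one way to express the intent, written against the layout of the common "world" sample schema; the assumption that country.capital holds the id of the capital city (and that city has id and population columns) may not match the exact tables here:

      SELECT c.name, c.headofstate
      FROM country c
      WHERE c.headofstate LIKE 'A%'
        AND (SELECT ci.population
             FROM city ci
             WHERE ci.id = c.capital) > 100000;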

    Read the article

  • How do I list all non-system stored procedures?

    - by bubbassauro
    I want to create a query that lists all user-defined stored procedures, excluding the system stored procedures, considering that: checking for names like "sp_" doesn't work, because there are user stored procedures that start with "sp_"; checking the is_ms_shipped property doesn't work, because there are system stored procedures that have that flag = 0, for example sp_alterdiagram (it is not MS-shipped but appears under System Stored Procedures in SQL Server Management Studio). There must be a property or a flag somewhere, since Management Studio shows "System Stored Procedures" in a separate folder in SQL 2005. Does anyone know? Edit: a combination of the suggestions below worked for me: select * from sys.objects O LEFT OUTER JOIN sys.extended_properties E ON O.object_id = E.major_id WHERE O.name IS NOT NULL AND ISNULL(O.is_ms_shipped, 0) = 0 AND ISNULL(E.name, '') <> 'microsoft_database_tools_support' AND O.type_desc = 'SQL_STORED_PROCEDURE' ORDER BY O.name

    Read the article

  • Aggregate functions in ANSI SQL

    - by morpheous
    I want to use multiple aggregate functions in a query. All the examples I have seen of aggregate functions, however, are trivial. Typically, they are of the form: SELECT field1, agg_func1, agg_func2 GROUP BY SOME_COLUMNS HAVING agg_func1 OP SOME_SCALAR, where OP is a boolean operator (e.g. <, = etc.) and SOME_SCALAR is a scalar (i.e. a constant number). What I want to know is whether it is possible to write (in ANSI SQL) queries like: SELECT field1, agg_func1, agg_func2, agg_func3 GROUP BY SOME_COLUMNS HAVING (agg_func1 OP1 agg_func2) OP2 (agg_func2 OP3 agg_func3), where OP[N] are boolean operators or ANSI SQL clause operators like BETWEEN, LIKE, IN etc. Also, assuming this is possible (I have not seen any documentation saying otherwise), are there any efficiency/performance considerations (i.e. penalties) when the HAVING clause is a boolean expression combining the output of several aggregate functions, instead of the usual comparison of an aggregate with a constant (e.g. min('salary') > 100) that the most banal examples involving aggregate functions use?
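    A minimal sketch (hypothetical employees table and columns) showing that standard SQL does allow aggregates to be compared with other aggregates, or combined with BETWEEN, inside HAVING:

      SELECT department, MIN(salary), AVG(salary), MAX(salary)
      FROM employees
      GROUP BY department
      HAVING MAX(salary) < 2 * AVG(salary)
         AND AVG(salary) BETWEEN MIN(salary) AND MAX(salary);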

    Read the article

  • How to update the following rows after the sum of the previous rows reaches a threshold? MySQL

    - by Paulo Faria
    I want to update the following rows after the sum of the previous rows reaches a defined threshold. I'm using MySQL, and trying to think of a way to solve this using SQL only. Here's an example with a threshold of 100: iterating through the rows, once the sum of the previous rows' amounts reaches 100, the following rows are set to checked.
    Before the operation:
    | id | amount | checked |
    | 1  | 50     | false   |
    | 2  | 50     | false   |
    | 3  | 20     | false   |
    | 4  | 30     | false   |
    After the operation:
    | id | amount | checked |
    | 1  | 50     | false   |
    | 2  | 50     | false   | <- threshold reached (50 + 50 = 100)
    | 3  | 20     | true*   |
    | 4  | 30     | true*   |
    Is it possible to do this with just a SQL query? Do I need a stored procedure? How could I implement it with either approach?
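    One possible single-statement sketch for MySQL, assuming the table is called t and that id defines the row order (both assumptions); the inner derived table is the usual workaround for MySQL's restriction on selecting from the table being updated:

      UPDATE t
      SET checked = TRUE
      WHERE (SELECT COALESCE(SUM(amount), 0)
             FROM (SELECT id, amount FROM t) AS prev
             WHERE prev.id < t.id) >= 100;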

    Read the article

  • SQL query duration is longer for smaller dataset?

    - by entens
    I received reports that my report-generating application was not working. After my initial investigation, I found that the SQL transaction was timing out. I'm mystified as to why the query for a smaller selection of items would take so much longer to return results. Quick query (averages 4 seconds to return): SELECT * FROM Payroll WHERE LINEDATE >= '04-17-2010' AND LINEDATE <= '04-24-2010' ORDER BY 'EMPLYEE_NUM' ASC, 'OP_CODE' ASC, 'LINEDATE' ASC Long query (averages 1 minute 20 seconds to return): SELECT * FROM Payroll WHERE LINEDATE >= '04-18-2010' AND LINEDATE <= '04-24-2010' ORDER BY 'EMPLYEE_NUM' ASC, 'OP_CODE' ASC, 'LINEDATE' ASC I could simply increase the timeout on the SqlCommand, but that doesn't change the fact that the query is taking longer than it should. Why would requesting a subset of the items take longer than the query that returns more data? How can I optimize this query?
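    A hedged sketch of how the two runs could be compared on the server: turning on I/O and time statistics (and looking at the actual execution plans) will usually show whether the narrower range picked a different, worse index. Table and column names are taken from the question; the quotes around the ORDER BY names are dropped on the assumption that real columns, not string literals, were meant:

      SET STATISTICS IO ON;
      SET STATISTICS TIME ON;
      SELECT * FROM Payroll
      WHERE LINEDATE >= '2010-04-18' AND LINEDATE <= '2010-04-24'
      ORDER BY EMPLYEE_NUM, OP_CODE, LINEDATE;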

    Read the article

  • Breaking the SQL Compact 8K Limit?

    - by David Veeneman
    I am creating a desktop application that stores rich-text documents in a SQL Compact database. Documents are converted to a byte array and stored in a binary column, and I am running into SQL Compact's 8K limit for binary field length. Is there a simple way to get around the 8K limit? I can come up with lots of complicated ways to do it, such as parsing into 8K chunks for storage and reassembling on fetch. But before I get into something that complex, I would like to make sure I can't solve the problem more simply, such as by changing the data type. If there is no simple way of getting around the 8K limit, is there a best practice for storing documents greater than 8K? Thanks for your help.
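    A possible data-type sketch: in SQL Server Compact the varbinary type is capped at 8000 bytes, but the image type holds much larger blobs, so re-declaring the document column may avoid chunking entirely (the table and column names below are made up):

      CREATE TABLE Documents (
          Id INT IDENTITY(1,1) PRIMARY KEY,
          Content IMAGE NOT NULL
      );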

    Read the article

  • PL/SQL does not work with %ROWTYPE

    - by Manolo
    I want to write a simple PL/SQL program in the Oracle 10g internet environment. The program is: DECLARE stud_rec students%ROWTYPE; last_name VARCHAR2:='Clinton'; BEGIN SELECT * INTO stud_rec FROM students WHERE student_id=100; END; I have a table called students with data in it. The issue is that when I run this in the SQL command window I get this message: ORA-06550: line 3, column 11: PLS-00215: String length constraints must be in range (1 .. 32767) I have checked the syntax and cannot find the error. Any help? Thanks in advance
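    For reference, a sketch of the corrected block: PLS-00215 is raised by the unconstrained VARCHAR2 in the declaration, which needs an explicit length (the value 30 below is only an assumption) or a %TYPE anchor:

      DECLARE
        stud_rec  students%ROWTYPE;
        last_name VARCHAR2(30) := 'Clinton';
      BEGIN
        SELECT * INTO stud_rec FROM students WHERE student_id = 100;
      END;
      /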

    Read the article

  • SQL Server replication algorithm

    - by reggie
    Does anyone know how the underlying replication model in SQL Server works? Does it essentially depend on UTC datetime values to determine whether something is new, or does it keep a table of all the changes (like a table of tableID+rowid pairs that have changed)? I am building my own "replication" system and was planning on using the dates to know what to replicate. Then I started wondering what would happen if the date got off on the computer for some reason. The obvious choice is to keep a log of the changes as you go and, once you replicate those changes, remove them from the log. But that's a lot of extra work compared with just checking dates. I figure if SQL Server replication works by just checking the dates, then that should be good enough for me. Any wisdom here? Thanks

    Read the article

  • Approach for altering Primary Key from GUID to BigInt in SQL Server related tables

    - by Tom
    I have two tables with 10-20 million rows that have GUID primary keys, and at least 12 tables related via foreign key. The base tables have 10-20 indexes each. We are moving from GUID to BigInt primary keys. I'm wondering if anyone has any suggestions on an approach. Right now this is the approach I'm pondering: (1) drop all indexes and foreign keys on all the tables involved; (2) add a 'NewPrimaryKey' column to each table; (3) make the key an identity on the two base tables; (4) script the data change ("update table x set NewPrimaryKey = y where OldPrimaryKey = z"); (5) rename the original primary key to 'OldPrimaryKey'; (6) rename the 'NewPrimaryKey' column to 'PrimaryKey'; (7) script back all the indexes and foreign keys. Does this seem like a good approach? Does anyone know of a tool or script that would help with this? TD: Edited per additional information. See this blog post that addresses an approach when the GUID is the primary key: http://www.sqlmag.com/blogs/sql-server-questions-answered/sql-server-questions-answered/tabid/1977/entryid/12749/Default.aspx
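    For the dependent tables, a hedged sketch of how the mapping in step (4) might be scripted by joining through the old GUID; all table and column names here are hypothetical:

      UPDATE c
      SET c.NewParentKey = p.NewPrimaryKey
      FROM ChildTable c
      JOIN ParentTable p ON p.OldPrimaryKey = c.ParentGuid;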

    Read the article

  • SQL statement HAVING MAX(some+thing)=some+thing

    - by Andreas
    I'm having trouble with Microsoft Access 2003; it's complaining about this statement: select cardnr from change where year(date)<2009 group by cardnr having max(time+date) = (time+date) and cardto='VIP' What I want to do is, for every distinct cardnr in the table change, find the row with the latest (time+date) that is before the year 2009, and then keep just the rows with cardto='VIP'. This validator says it's OK, but Access says it's not. This is the message I get: "you tried to execute a query that does not include the specified expression 'max(time+date)=time+date and cardto='VIP' and cardnr=' as part of an aggregate function." Could someone please explain what I'm doing wrong and the right way to do it? Thanks
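    A hedged rewrite sketch in Access/Jet syntax: instead of mixing non-aggregated columns into HAVING, a correlated subquery can pick, per card, the latest pre-2009 date+time, with the VIP filter applied outside. Column meanings are taken from the question as-is; brackets are used because date, time and change are reserved-ish words in Access:

      SELECT c.cardnr
      FROM [change] AS c
      WHERE c.cardto = 'VIP'
        AND YEAR(c.[date]) < 2009
        AND c.[date] + c.[time] = (SELECT MAX(c2.[date] + c2.[time])
                                   FROM [change] AS c2
                                   WHERE c2.cardnr = c.cardnr
                                     AND YEAR(c2.[date]) < 2009);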

    Read the article

  • Can I do this with just SQL?

    - by Josh
    At the moment I have two tables, products and options. Products contains: id, title, description. Options contains: id, product_id, sku, title. Sample data may be:
    Products:  id: 1, title: 'test', description: 'my description'
    Options:   id: 1, product_id: 1, sku: 1001, title: 'red'
               id: 2, product_id: 1, sku: 1002, title: 'blue'
    I need to display each item, with each different option. At the moment, I select the rows in products and iterate through them, and for each one select the appropriate rows from options. I then create an array, similar to: [product_title] = 'test'; [description] = 'my description'; [options][] = 1, 1001, 'red'; [options][] = 2, 1002, 'blue'; Is there a better way to do this with just sql (I'm using codeigniter, and would ideally like to use the Active Record class)?
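    A minimal sketch of fetching everything in one round trip; the application (for example a loop over CodeIgniter's Active Record result) then groups the rows by product id while iterating:

      SELECT p.id, p.title, p.description,
             o.id AS option_id, o.sku, o.title AS option_title
      FROM products p
      LEFT JOIN options o ON o.product_id = p.id
      ORDER BY p.id, o.id;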

    Read the article

  • Passing variables from SQL procedure to PHP

    - by Sam Corbet
    I am trying to create an SQL procedure that will return its results to the PHP page. I want to be able to call the procedure from PHP as procedure_name($var1), which will run this script: -- --------------------------------------------------------------------------------- -- pUIGetCliStmtGenFlag -- -- This procedure returns the status of the Trading Period: -- -- --------------------------------------------------------------------------------- drop procedure if exists pUIGetCliStmtGenFlag; delimiter // create procedure pUIGetCliStmtGenFlag( IN pTradingPeriodMonth DATE ) MODIFIES SQL DATA COMMENT 'Checks if the TP has been closed' begin SELECT trading_period_month, dt_end, amt_traded_system_ccy FROM ca_trading_period WHERE trading_period_month=$var1 -- If amt_traded_system_ccy is NULL give the TP an open status otherwise mark as closed IF amt_traded_system_ccy is NULL $tpstatus='open' ELSE $tpstatus='closed' end; // delimiter ; I then want to be able to use $tpstatus in the rest of the PHP script. I know this is simple, but it's completely new to me and I can't find the correct method.
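    A hedged sketch of one way to restructure the procedure so PHP can read the flag: compute the status inside SQL and hand it back through an OUT parameter (or simply SELECT it as a result set); the PHP-style $ variables in the original body are not valid inside MySQL:

      DROP PROCEDURE IF EXISTS pUIGetCliStmtGenFlag;
      DELIMITER //
      CREATE PROCEDURE pUIGetCliStmtGenFlag(
          IN  pTradingPeriodMonth DATE,
          OUT pTpStatus VARCHAR(10)
      )
      BEGIN
          SELECT IF(amt_traded_system_ccy IS NULL, 'open', 'closed')
            INTO pTpStatus
            FROM ca_trading_period
           WHERE trading_period_month = pTradingPeriodMonth;
      END //
      DELIMITER ;
      -- from PHP:  CALL pUIGetCliStmtGenFlag('2010-04-01', @status); SELECT @status;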

    Read the article

  • Split table and insert with identity link

    - by The King
    Hi. I have 3 tables similar to the structure below: CREATE TABLE [dbo].[EmpBasic]( [EmpID] [int] IDENTITY(1,1) NOT NULL Primary Key, [Name] [varchar](50), [Address] [varchar](50) ) CREATE TABLE [dbo].[EmpProject]( [EmpID] [int] NOT NULL primary key, // referencing column with EmpBasic [EmpProject] [varchar](50) ) CREATE TABLE [dbo].[EmpFull_Temp]( [ObjectID] [int] IDENTITY(1,1) NOT NULL Primary Key, [T1Name] [varchar](50), [T1Address] [varchar](50), [T1EmpProject] [varchar](50) ) The EmpFull_Temp table has the records, with a dummy ObjectID column. I want to populate the first 2 tables with the records in this table, but with EmpID as a reference between the first 2 tables. I tried this in a stored procedure: Create Table #IDSS (EmpID bigint, objID bigint) Insert into EmpBasic output Inserted.EmpID, EmpFull_Temp.ObjectID into #IDSS Select T1Name, T1Address from EmpFull_Temp Where ObjectID < 106 Insert into EmpProject Select A.EmpID, B.T1EmpProject from #IDSS as A, EmpFull_Temp as B Where A.ObjID = B.ObjectID But it says: The multi-part identifier "EmpFull_Temp.ObjectID" could not be bound. Could you please help me in achieving this?
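    The binding error occurs because the OUTPUT clause of an INSERT cannot reference columns of the source table. A hedged sketch of the usual workaround on SQL Server 2008 is MERGE, whose OUTPUT clause can see both inserted.EmpID and the source ObjectID; this reuses the tables above:

      DECLARE @IDSS TABLE (EmpID BIGINT, ObjID BIGINT);

      MERGE EmpBasic AS tgt
      USING (SELECT ObjectID, T1Name, T1Address
             FROM EmpFull_Temp WHERE ObjectID < 106) AS src
         ON 1 = 0                                  -- never matches, so every row is inserted
       WHEN NOT MATCHED THEN
            INSERT (Name, Address) VALUES (src.T1Name, src.T1Address)
      OUTPUT inserted.EmpID, src.ObjectID INTO @IDSS (EmpID, ObjID);

      INSERT INTO EmpProject (EmpID, EmpProject)
      SELECT i.EmpID, f.T1EmpProject
      FROM @IDSS AS i
      JOIN EmpFull_Temp AS f ON f.ObjectID = i.ObjID;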

    Read the article

  • Double authentication issue on IIS / Report Server (SQL Server 2008)

    - by Vinzz
    On a Windows Server 2003 box with SQL Server 2008 installed (Report Server deployed in IIS mode), I've got a virtual directory within IIS with its security set to 'Windows authentication', containing the following HTML: <body> <h1>test</h1> <iframe src="/reportserver" width="50%" height="50%" /> </body> From the outside, I get a first login/password box to access the HTML page, then a second one to display the content of the iframe. On the same type of server, but with SQL Server 2005, I don't have this issue (i.e. only one login box). My thought is that the first token should give access to both the page and the iframe, shouldn't it? Any hints on how to set up the report server to fix this? Thanks.

    Read the article

  • Getting SQL records

    - by droidus
    When I run this code, it returns the topic fine: $query = mysql_query("SELECT topic FROM question WHERE id = '$id'"); if(mysql_num_rows($query) > 0) { $row = mysql_fetch_array($query) or die(mysql_error()); $topic = $row['topic']; } But when I change it to this, it doesn't run at all. Why is this happening? $query = mysql_query("SELECT topic, lock FROM question WHERE id = '$id'"); if(mysql_num_rows($query) > 0) { $row = mysql_fetch_array($query) or die(mysql_error()); $topic = $row['topic']; $lockedThread = $row['lock']; echo "here: " . $lockedThread; }
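    A likely-cause sketch: LOCK is a reserved word in MySQL, so it has to be backtick-quoted (or the column renamed) to be selected by name; the id value below is just a placeholder:

      SELECT topic, `lock` FROM question WHERE id = 42;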

    Read the article

  • BULK INSERT from one table to another all on the server

    - by steve_d
    I have to copy a bunch of data from one database table into another. I can't use SELECT ... INTO because one of the columns is an identity column. Also, I have some changes to make to the schema. I was able to use the export data wizard to create an SSIS package, which I then edited in Visual Studio 2005 to make the desired changes and whatnot. It's certainly faster than an INSERT INTO, but it seems silly to me to download the data to a different computer just to upload it back again (assuming that I am correct that that's what the SSIS package is doing). Is there an equivalent to BULK INSERT that runs directly on the server, allows keeping identity values, and pulls data from a table? (As far as I can tell, BULK INSERT can only pull data from a file.) Edit: I do know about IDENTITY_INSERT, but because there is a fair amount of data involved, INSERT INTO ... SELECT is kind of slow. SSIS/BULK INSERT dumps the data into the table without regard to indexes, logging and whatnot, so it's faster. (Of course creating the clustered index on the table once it's populated is not fast, but it's still faster than the INSERT INTO ... SELECT that I tried in my first attempt.) Edit 2: The schema changes include (but are not limited to) the following: (1) splitting one table into two new tables; in the future each will have its own IDENTITY column, but for the migration I think it will be simplest to use the identity from the original table as the identity for both new tables, and once the migration is over one of the tables will have a one-to-many relationship to the other; (2) moving columns from one table to another; (3) deleting some cross-reference tables that only cross-referenced 1-to-1; instead, the reference will be a foreign key in one of the two tables; (4) some new columns will be created with default values; (5) some tables aren't changing at all, but I have to copy them over due to the "put it all in a new DB" request.
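    A hedged sketch of a purely server-side copy that keeps identity values; with a TABLOCK hint, an empty heap target and SIMPLE or BULK_LOGGED recovery, SQL Server can minimally log the insert, which narrows the gap with SSIS. The table and column names below are placeholders:

      SET IDENTITY_INSERT dbo.TargetTable ON;

      INSERT INTO dbo.TargetTable WITH (TABLOCK) (Id, Col1, Col2)
      SELECT Id, Col1, Col2
      FROM dbo.SourceTable;

      SET IDENTITY_INSERT dbo.TargetTable OFF;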

    Read the article

  • Check if a SQL table exists.

    - by Carra
    What's the best way to check if a table exists in a SQL database in a database-independent way? I came up with: bool exists; const string sqlStatement = @"SELECT COUNT(*) FROM my_table"; try { using (OdbcCommand cmd = new OdbcCommand(sqlStatement, myOdbcConnection)) { cmd.ExecuteScalar(); exists = true; } } catch (Exception ex) { exists = false; } Is there a better way to do this? This method will not work when the connection to the database fails. I've found ways for Sybase, SQL Server and Oracle, but nothing that works for all databases.
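    A more portable sketch: most engines that implement the standard's INFORMATION_SCHEMA (SQL Server, MySQL, PostgreSQL, among others) can answer this without touching the table itself, though Oracle and some older engines do not expose it, so it is still not truly universal:

      SELECT COUNT(*)
      FROM INFORMATION_SCHEMA.TABLES
      WHERE TABLE_NAME = 'my_table';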

    Read the article
