Search Results

Search found 28297 results on 1132 pages for 'sql azure'.

Page 672 of 1132

  • Stored Procedure: Reducing Table Data

    - by SumGuy
    Hi guys, a simple question about stored procedures. I have one stored procedure collecting a whole bunch of data in a table. I then call this procedure from within another stored procedure. I can copy the data into a new table created in the calling procedure, but as far as I can see the tables have to be identical. Is this right? Or is there a way to insert only the data I want? For example, I have one procedure which returns this: SELECT @batch as Batch, @Count as Qty, pd.Location, cast(pd.GL as decimal(10,3)) as [Length], cast(pd.GW as decimal(10,3)) as Width, cast(pd.GT as decimal(10,3)) as Thickness FROM propertydata pd GROUP BY pd.Location, pd.GL, pd.GW, pd.GT I then call this procedure but only want the following data: DECLARE @BatchTable TABLE ( Batch varchar(50), [Length] decimal(10,3), Width decimal(10,3), Thickness decimal(10,3) ) INSERT @BatchTable (Batch, [Length], Width, Thickness) EXEC dbo.batch_drawings_NEW @batch So in the second command I don't want the Qty and Location values. However, the code above keeps returning the error: "Insert Error: Column name or number of supplied values does not match table"
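    A common workaround, sketched below on the assumption that dbo.batch_drawings_NEW returns exactly the six columns shown above: INSERT ... EXEC needs a target whose columns match the procedure's result set one-for-one, so capture the full result set first and then copy across only the columns you want.

        DECLARE @AllResults TABLE (
            Batch varchar(50), Qty int, Location varchar(50),               -- Qty/Location types are assumptions
            [Length] decimal(10,3), Width decimal(10,3), Thickness decimal(10,3)
        );
        INSERT @AllResults EXEC dbo.batch_drawings_NEW @batch;              -- column counts now match

        INSERT @BatchTable (Batch, [Length], Width, Thickness)
        SELECT Batch, [Length], Width, Thickness FROM @AllResults;          -- keep only the wanted columns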

    Read the article

  • HQL equivalent of SQL query

    - by kash
    String SQL_QUERY = "SELECT count(*) FROM (SELECT * FROM Url as U where U.pageType=" + 1 + " group by U.pageId having count(U.pageId) = 1)"; query = session.createQuery(SQL_QUERY); I am getting an error org.hibernate.hql.ast.QuerySyntaxException: unexpected token: ( near line 1, column 23 [ SELECT count() FROM (SELECT * FROM Url as U where U.pageType = 2 group by U.pageId having count(U.pageId) = 1)]
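    HQL does not support a subquery in the FROM clause, which is why the parser rejects the opening parenthesis. One option (an assumption about the intent, and assuming the mapped table is also named Url) is to run the statement as native SQL via session.createSQLQuery instead, remembering that most databases require the derived table to be aliased:

        SELECT COUNT(*)
        FROM (SELECT U.pageId
              FROM Url U
              WHERE U.pageType = 1
              GROUP BY U.pageId
              HAVING COUNT(U.pageId) = 1) AS single_use_pages;   -- alias name is illustrative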

    Read the article

  • Should I use DATE and TIME as fields rather than DATETIME

    - by whitstone86
    I have an ongoing project for a TV guide; the database is called mytvguide and I use phpMyAdmin. This is the structure for one table, called tvshow1:
        Field        Type          Null
        channel      varchar(255)
        date         date          No
        airdate      time          No
        expiration   time          No
        episode      varchar(255)
        setreminder  varchar(255)
    I am not sure how to get the DATE and TIME columns to work with the pagination script (the script below works for the version that uses DATETIME): http://pastebin.com/6S1ejAFJ The DATETIME version works and shows programmes that air on the day itself like this: Programme 1 showing on Channel 1 2:35pm "Episode 2" Set Reminder; Programme 1 showing on Channel 1 May 26th - 12:50pm "Episode 3" Set Reminder; Programme 1 showing on Channel 1 May 26th - 5:55pm "Episode 3" Set Reminder. I'm not quite sure how to replicate that for the fields that use DATE and TIME as seen above. Any advice on this is appreciated, thanks!
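    A minimal sketch of how separate DATE and TIME columns can be filtered and ordered in MySQL, assuming the tvshow1 structure above (the pastebin pagination script itself is not reproduced here):

        SELECT channel, episode, setreminder,
               TIMESTAMP(`date`, airdate) AS airs_at        -- combine DATE + TIME into one DATETIME value
        FROM tvshow1
        WHERE TIMESTAMP(`date`, expiration) >= NOW()        -- drop programmes that have already expired
        ORDER BY `date`, airdate;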

    Read the article

  • Does anybody have any suggestions on which of these two approaches is better for a large delete?

    - by RPS
    Approach #1: DECLARE @count int SET @count = 2000 DECLARE @rowcount int SET @rowcount = @count WHILE @rowcount = @count BEGIN DELETE TOP (@count) FROM ProductOrderInfo WHERE ProductId = @product_id AND bCopied = 1 AND FileNameCRC = @localNameCrc SELECT @rowcount = @@ROWCOUNT WAITFOR DELAY '000:00:00.400' END Approach #2: DECLARE @count int SET @count = 2000 DECLARE @rowcount int SET @rowcount = @count WHILE @rowcount = @count BEGIN DELETE FROM ProductOrderInfo WHERE ProductId = @product_id AND FileNameCRC IN ( SELECT TOP(@count) FileNameCRC FROM ProductOrderInfo WITH (NOLOCK) WHERE bCopied = 1 AND FileNameCRC = @localNameCrc ) SELECT @rowcount = @@ROWCOUNT WAITFOR DELAY '000:00:00.400' END

    Read the article

  • Updating database row from model

    - by Jamie Dixon
    Hey everyone, I'm having a few problems updating a row in my database using Linq2Sql. Inside my model I have two methods for updating and saving from my controller, which in turn receives an updated model from my view. My model methods look like: public void Update(Activity activity) { _db.Activities.InsertOnSubmit(activity); } public void Save() { _db.SubmitChanges(); } and the code in my controller looks like: [HttpPost] public ActionResult Edit(Activity activity) { if (ModelState.IsValid) { UpdateModel<Activity>(activity); _activitiesModel.Update(activity); _activitiesModel.Save(); } return View(activity); } The problem I'm having is that this code inserts a new entry into the database, even though the model item I'm inserting-on-submit contains a primary key field. I've also tried re-attaching the model object back to the data source, but this throws an error because the item already exists. Any pointers in the right direction will be greatly appreciated. UPDATE: I'm using dependency injection to instantiate my data context object as follows: IMyDataContext _db; public ActivitiesModel(IMyDataContext db) { _db = db; }

    Read the article

  • How to return a message from my repository class to my controller and then to my view in ASP.NET MVC

    - by Pandiya Chendur
    I use this for checking for an existing EmailId in my table before inserting: it works fine, but how do I show a message to the user when they try to register with an existing email ID? if (!taxidb.Registrations.Where(u => u.EmailId == reg.EmailId).Any()) { taxidb.Registrations.InsertOnSubmit(reg); taxidb.SubmitChanges(); } and my controller has this: RegistrationBO reg = new RegistrationBO(); reg.UserName = collection["UserName"]; reg.OrgName = collection["OrgName"]; reg.Address = collection["Address"]; reg.EmailId = collection["EmailId"]; reg.Password = collection["Password"]; reg.CreatedDate = System.DateTime.Now; reg.IsDeleted = Convert.ToByte(0); regrep.registerUser(reg); Any suggestion on how to show "EmailID already exists" to the user with ASP.NET MVC?

    Read the article

  • Run SSIS Package from T-SQL

    - by Dr. Zim
    I noticed you can use the following stored procedures (in order) to schedule an SSIS package: msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]' msdb.dbo.sp_add_job ... msdb.dbo.sp_add_jobstep ... msdb.dbo.sp_update_job ... msdb.dbo.sp_add_jobschedule ... msdb.dbo.sp_add_jobserver ... (You can see an example by right-clicking a scheduled job and selecting "Script Job as > Create To".) AND you can use sp_start_job to execute the job immediately, effectively running SSIS packages on demand. Question: does anyone know of any msdb.dbo.[...] stored procedures that simply allow you to run SSIS packages on the fly without using xp_cmdshell directly, or some easier approach?
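    As far as I know there is no msdb procedure that executes a package directly from T-SQL without going through the Agent (or xp_cmdshell), so the pattern above is the usual answer: keep one Agent job per package and fire it on demand. A minimal sketch, assuming a job named 'Run MyPackage' already wraps the SSIS step:

        EXEC msdb.dbo.sp_start_job @job_name = N'Run MyPackage';   -- returns immediately; the job runs asynchronously

        -- check on it afterwards if needed
        EXEC msdb.dbo.sp_help_job @job_name = N'Run MyPackage', @job_aspect = N'JOB';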

    Read the article

  • Any way to optimize this MySQL query?

    - by manyxcxi
    My table looks like this: `MyDB`.`Details` ( `id` bigint(20) NOT NULL, `run_id` int(11) NOT NULL, `element_name` varchar(255) NOT NULL, `value` text, `line_order` int(11) default NULL, `column_order` int(11) default NULL ); I have the following SELECT statement in a stored procedure SELECT RULE ,TITLE ,SUM(IF(t.PASSED='Y',1,0)) AS PASS ,SUM(IF(t.PASSED='N',1,0)) AS FAIL FROM ( SELECT a.line_order ,MAX(CASE WHEN a.element_name = 'PASSED' THEN a.`value` END) AS PASSED ,MAX(CASE WHEN a.element_name = 'RULE' THEN a.`value` END) AS RULE ,MAX(CASE WHEN a.element_name = 'TITLE' THEN a.`value` END) AS TITLE FROM Details a WHERE run_id = runId GROUP BY line_order ) t GROUP BY RULE, TITLE; *runId is an input parameter to the stored procedure. This query takes about 14 seconds to run. The table has 214856 rows, and the particular run_id I am filtering on has 162204 records. It's not on a super high power machine, but I feel like I could be doing this more efficiently. My main goal is to summarize by Rule and Title and show Pass and Fail count columns.
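    One avenue worth testing, offered as an assumption since the table's existing indexes aren't shown: a composite index that covers the run_id filter and the line_order grouping lets the inner derived query avoid scanning and sorting the whole table.

        ALTER TABLE Details
            ADD INDEX idx_details_run_line (run_id, line_order, element_name);

        -- re-check the plan for the inner query afterwards (1234 is a placeholder run_id)
        EXPLAIN SELECT line_order FROM Details WHERE run_id = 1234 GROUP BY line_order;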

    Read the article

  • Service Broker not working after database restore

    - by roryok
    I have a working Service Broker set up on a server. We're in the process of moving to a new server, but I can't seem to get Service Broker set up on the new box. I have done the obvious (to me) things like enabling the broker on the DB, dropping the route, services, contract, queues and even the message type and re-adding them, and setting ALTER QUEUE with STATUS ON. SELECT * FROM sys.service_queues gives me a list of the queues, including my own two, which show as activation_enabled, receive_enabled etc. Needless to say the queues aren't working. When I drop messages into them, nothing goes in and nothing comes out. Any ideas? I'm sure there's something really obvious I've missed...
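    Restoring a database onto another server typically brings it back with the broker disabled and carrying its old broker GUID, and simply re-enabling it can leave conversations bound to that stale identity. A sketch of the usual check and fix; MyDatabase is a placeholder, and NEW_BROKER discards existing conversations, so use it only if that is acceptable:

        SELECT name, is_broker_enabled, service_broker_guid
        FROM sys.databases
        WHERE name = N'MyDatabase';                        -- placeholder database name

        ALTER DATABASE MyDatabase
            SET NEW_BROKER WITH ROLLBACK IMMEDIATE;        -- issues a fresh broker GUID and enables the broker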

    Read the article

  • Select 2 rows from a table based on a COUNT in another table

    - by Marcus
    Here is the code that I currently have: SELECT `A`.* FROM `A` LEFT JOIN `B` ON `A`.`A_id` = `B`.`value_1` WHERE `B`.`value_2` IS NULL AND `B`.`userid` IS NULL ORDER BY RAND() LIMIT 2 What it is currently supposed to do is select 2 rows from A whose A_id is not in value_1 or value_2 in B, where the rows in B are specific to individual users via userid. What I need to do is make it also check whether there are already N rows in B matching an A_id (either in value_1 or value_2) and userid, and if there are more than N rows, not select that A row.
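    One way to sketch the extra condition, assuming N is a fixed limit and that a "match" means the A_id appears in either value_1 or value_2 for that user, is a correlated count in the WHERE clause (the user id and the value 5 standing in for N are placeholders):

        SELECT A.*
        FROM A
        WHERE (SELECT COUNT(*)
               FROM B
               WHERE B.userid = 123                                     -- placeholder user id
                 AND (B.value_1 = A.A_id OR B.value_2 = A.A_id)) < 5    -- 5 stands in for N
        ORDER BY RAND()
        LIMIT 2;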

    Read the article

  • Passing an outside variable into an <asp:sqldatasource> tag (ASP.NET 2.0)

    - by MadMAxJr
    I'm designing some VB-based ASP.NET 2.0, and I am trying to make more use of the various ASP tags that Visual Studio provides, rather than hand-writing everything in the code-behind. I want to pass in an outside variable from the Session to identify who the user is for the query. <asp:sqldatasource id="DataStores" runat="server" connectionstring="<%$ ConnectionStrings:MY_CONNECTION %>" providername="<%$ ConnectionStrings:MY_CONNECTION.ProviderName %>" selectcommand="SELECT THING1, THING2 FROM DATA_TABLE WHERE (THING2 IN (SELECT THING2 FROM RELATED_DATA_TABLE WHERE (USERNAME = @user)))" onselecting="Data_Stores_Selecting"> <SelectParameters> <asp:parameter name="user" defaultvalue="" /> </SelectParameters> </asp:sqldatasource> And in my code-behind I have: Protected Sub Data_Stores_Selecting(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.SqlDataSourceSelectingEventArgs) Handles Data_Stores.Selecting e.Command.Parameters("user").Value = Session("userid") End Sub Oracle squawks at me with ORA-01036, illegal variable name. Am I declaring the variable wrongly in the query? I thought external variables share the same name with a @ prefixed. From what I understand, this should be placing the value I want into the query when it executes the select. EDIT: Okay, thanks for the advice so far; the first error was corrected, I need to use : and not @ for the variable declaration in the query. Now it generates an ORA-01745 invalid host/bind variable name. EDIT AGAIN: Okay, it looks like user was a reserved word. It works now! Thanks for the other points of view on this one. I hadn't thought of that approach.

    Read the article

  • How to speed up a slow UPDATE query

    - by Mike Christensen
    I have the following UPDATE query: UPDATE Indexer.Pages SET LastError=NULL where LastError is not null; Right now, this query takes about 93 minutes to complete. I'd like to find ways to make this a bit faster. The Indexer.Pages table has around 506,000 rows, and about 490,000 of them contain a value for LastError, so I doubt I can take advantage of any indexes here. The table (when uncompressed) has about 46 gigs of data in it, however the majority of that data is in a text field called html. I believe simply loading and unloading that many pages is causing the slowdown. One idea would be to make a new table with just the Id and the html field, and keep Indexer.Pages as small as possible. However, testing this theory would be a decent amount of work since I actually don't have the hard disk space to create a copy of the table. I'd have to copy it over to another machine, drop the table, then copy the data back which would probably take all evening. Ideas? I'm using Postgres 9.0.0. UPDATE: Here's the schema: CREATE TABLE indexer.pages ( id uuid NOT NULL, url character varying(1024) NOT NULL, firstcrawled timestamp with time zone NOT NULL, lastcrawled timestamp with time zone NOT NULL, recipeid uuid, html text NOT NULL, lasterror character varying(1024), missingings smallint, CONSTRAINT pages_pkey PRIMARY KEY (id ), CONSTRAINT indexer_pages_uniqueurl UNIQUE (url ) ); I also have two indexes: CREATE INDEX idx_indexer_pages_missingings ON indexer.pages USING btree (missingings ) WHERE missingings > 0; and CREATE INDEX idx_indexer_pages_null ON indexer.pages USING btree (recipeid ) WHERE NULL::boolean; There are no triggers on this table, and there is one other table that has a FK constraint on Pages.PageId.
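    Since nearly every row is rewritten, an index won't help much here; what usually does is splitting the update into smaller batches so the dead row versions can be reclaimed between them rather than bloating one huge transaction. A sketch, assuming it is acceptable to re-run the statement until it reports 0 rows:

        -- repeat until the UPDATE reports 0 rows affected
        UPDATE indexer.pages
        SET lasterror = NULL
        WHERE id IN (SELECT id
                     FROM indexer.pages
                     WHERE lasterror IS NOT NULL
                     LIMIT 20000);                 -- batch size is arbitrary

        VACUUM indexer.pages;                      -- reclaim the dead row versions between batches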

    Read the article

  • PL/SQL: How to return all attributes in a ROW

    - by kunkanwan
    Hi, I don't know how I can return all attributes with RETURNING. I want something like this: DECLARE v_user USER%ROWTYPE; BEGIN INSERT INTO User VALUES (1,'Bill','QWERTY') RETURNING * INTO v_user; END; RETURNING * INTO gives an error; how can I replace the *? Do you have any ideas? Thanks for your time ;)
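    PL/SQL has no RETURNING * shorthand; the columns have to be listed explicitly (or fetched back with a separate SELECT). A sketch, where app_user stands in for the User table and the column names id, name and password are assumptions:

        DECLARE
          v_user app_user%ROWTYPE;                         -- app_user stands in for the User table
        BEGIN
          INSERT INTO app_user (id, name, password)        -- column names are assumptions
          VALUES (1, 'Bill', 'QWERTY')
          RETURNING id, name, password
            INTO v_user.id, v_user.name, v_user.password;  -- each column listed explicitly
        END;
        /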

    Read the article

  • Removal of table primary key in MySQL

    - by marionmaiden
    Hello, I've removed the primary key from one table in my MySQL database, but now, when I use MySQL Administrator and try to edit some data in this table, it doesn't allow me to. The edit button that appears at the bottom of the table remains visible, but is disabled.
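    MySQL Administrator (and Query Browser) generally only allow in-grid editing when the result set has a primary key it can use to address individual rows, so the likely fix is to add a key back. A sketch; the table and column names are placeholders:

        ALTER TABLE mytable
            ADD PRIMARY KEY (id);      -- or ADD UNIQUE KEY (id) if a non-primary unique key is preferred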

    Read the article

  • How to write my own global lock / unlock functions for PostgreSQL

    - by rafalmag
    I have a PostgreSQL (plperlu) function getTravelTime(integer, timestamp), which tries to select data for a specified ID and timestamp. If there are no data, or if the data are old, it downloads them from an external server (download time ~300ms). Multiple processes use this database and this function. There is an error when two processes both fail to find data, download them, and then try to insert into the travel_time table (the id and timestamp pair has to be unique). I thought about locks. Locking the whole table would block all processes and allow only one to proceed; I need to lock only on id and timestamp. pg_advisory_lock seems to lock only in the "current session", but my processes use their own sessions. I tried to write my own lock/unlock functions. Am I doing it right? I use active waiting; how can I avoid this? Maybe there is a way to use pg_advisory_lock() as a global lock? My code: CREATE TABLE travel_time_locks ( id_key integer NOT NULL, time_key timestamp without time zone NOT NULL, UNIQUE (id_key, time_key) ); ------------ -- Function: mylock(integer, timestamp) DROP FUNCTION IF EXISTS mylock(integer, timestamp) CASCADE; -- Usage: SELECT mylock(1, '2010-03-28T19:45'); -- function tries to do a global lock similar to pg_advisory_lock(key, key) CREATE OR REPLACE FUNCTION mylock(id_input integer, time_input timestamp) RETURNS void AS $BODY$ DECLARE rows int; BEGIN LOOP BEGIN -- active waiting here !!!! :( INSERT INTO travel_time_locks (id_key, time_key) VALUES (id_input, time_input); EXCEPTION WHEN unique_violation THEN CONTINUE; END; EXIT; END LOOP; END; $BODY$ LANGUAGE 'plpgsql' VOLATILE COST 1; ------------ -- Function: myunlock(integer, timestamp) DROP FUNCTION IF EXISTS myunlock(integer, timestamp) CASCADE; -- Usage: SELECT myunlock(1, '2010-03-28T19:45'); -- function tries to do a global unlock similar to pg_advisory_unlock(key, key) CREATE OR REPLACE FUNCTION myunlock(id_input integer, time_input timestamp) RETURNS integer AS $BODY$ DECLARE BEGIN DELETE FROM ONLY travel_time_locks WHERE id_key=id_input AND time_key=time_input; RETURN 1; END; $BODY$ LANGUAGE 'plpgsql' VOLATILE COST 1;
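    Advisory locks are in fact server-wide rather than per-connection: "session" only describes which connection holds the lock, and other sessions calling pg_advisory_lock with the same keys will block until it is released, which removes the need for the busy-wait table. A sketch using the function's parameters; hashtext is used here as an assumed way to fold the timestamp into the second integer key:

        -- take the lock (blocks instead of polling)
        SELECT pg_advisory_lock(id_input, hashtext(time_input::text));

        -- ... check for / download / insert the travel time here ...

        -- release it when done
        SELECT pg_advisory_unlock(id_input, hashtext(time_input::text));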

    Read the article

  • Filling a LOV in Oracle Apex based on data in another text box

    - by Martin Pugh
    I am fairly new to Oracle Apex, and have a problem. Our application currently has a method of entering data, with several text boxes and Optional List of Values. I would like to have an LOV based on information in another text box like so: select APPOINTMENT_ID PATIENT_ID from APPOINTMENT where PATIENT_ID = :P9_PAT_NUM where P9_PAT_NUM is a patient number in a text box. However, this would apparently only work if the text box has already been submitted, else it assumes the text box is null. Is there any way to get this working with an LOV, or perhaps some other method?

    Read the article

  • Strange use of the index in MySQL

    - by user309067
    explain SELECT feed_objects.* FROM feed_objects WHERE (feed_objects.feed_id IN (165,160,159,158,157,153,152,151,150,149,148,147,129,128,127,126,125,124,122,121,120,119,118,117,116,115,114,113,111,110));
        +----+-------------+--------------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table        | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+--------------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | feed_objects | ALL  | by_feed_id    | NULL | NULL    | NULL |  188 | Using where |
        +----+-------------+--------------+------+---------------+------+---------+------+------+-------------+
    The 'by_feed_id' index is not used. But when I list fewer values in the WHERE clause, everything works as expected:
    explain SELECT feed_objects.* FROM feed_objects WHERE (feed_objects.feed_id IN (165,160,159,158,157,153,152,151,150,149,148,147,129,128,127,125,124));
        +----+-------------+--------------+-------+---------------+------------+---------+------+------+-------------+
        | id | select_type | table        | type  | possible_keys | key        | key_len | ref  | rows | Extra       |
        +----+-------------+--------------+-------+---------------+------------+---------+------+------+-------------+
        |  1 | SIMPLE      | feed_objects | range | by_feed_id    | by_feed_id | 9       | NULL |   18 | Using where |
        +----+-------------+--------------+-------+---------------+------------+---------+------+------+-------------+
    Here the 'by_feed_id' index is used. What is the problem?
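    This is normal optimizer behaviour rather than a bug: once the IN list covers a large enough share of the table (here the range would touch a big fraction of the 188 rows), MySQL estimates that a full scan is cheaper than many separate index lookups. If the index really is faster in practice, it can be forced, as in this sketch:

        SELECT feed_objects.*
        FROM feed_objects FORCE INDEX (by_feed_id)
        WHERE feed_objects.feed_id IN (165,160,159,158,157,153,152,151,150,149,
                                       148,147,129,128,127,126,125,124,122,121,
                                       120,119,118,117,116,115,114,113,111,110);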

    Read the article

  • Does normalization really hurt performance in high traffic sites?

    - by Luke101
    I am designing a database and I would like to normalize it. In one query I will be joining about 30-40 tables. Will this hurt the website's performance if it ever becomes extremely popular? This will be the main query and it will be called 50% of the time. In the other queries I will be joining about 2 tables. I have a choice right now to normalize or not to normalize, but if normalization becomes a problem in the future I may have to rewrite 40% of the software, and that could take me a long time. Does normalization really hurt in this case? Should I denormalize now while I have the time?

    Read the article

  • Check for active connection in NHibernate

    - by Dofs
    I have a system with a few different databases, and I would like to check if a certain database is down, and if so display a message to the user. Is it possible in NHibernate to check if there is an active connection to the database, without having to request data and then catch the exception?

    Read the article

  • Export MySQL Data as Insert Statements

    - by gav
    Hi all, I'm working in Ubuntu with MySQL, and I also have Query Browser and Administrator installed; I'm not afraid of the command line either if it helps. I simply want to be able to run a query and see a result set, but then convert that result set into a series of commands that could be used to create the same rows in a table with an identical schema. I hope the question makes sense; it's quite a simple problem and one that must have been solved, but I can't for the life of me work out where this kind of conversion is made available. Thanks in advance, Gav

    Read the article

  • SQL boolean truth test: zero OR NULL

    - by AK
    Is there way to test for both 0 and NULL with one equality operator? I realize I could do this: WHERE field = 0 OR field IS NULL But my life would be a hundred times easier if this would work: WHERE field IN (0, NULL) (btw, why doesn't that work?) I've also read about converting NULL to 0 in the SELECT statement (with COALESCE). The framework I'm using would also make this unpleasant. Realize this is oddly specific, but is there any way to test for 0 and NULL with one WHERE predicate?

    Read the article

  • How to save HTML to a database field

    - by ooo
    I have a tiny editor web page where my users can use this editor, and I am saving the HTML into my database. I am having issues saving this HTML. For example, if there is a name with a "'", or other HTML characters such as "<" or quotation marks, my code seems to blow up on the insert. Are there any best practices for taking arbitrary HTML and having it persist fully to a DB field without worrying about any specific characters?
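    The usual cause is building the INSERT by concatenating the HTML into the SQL string, so quotes and angle brackets break the statement (and open it to SQL injection). Passing the HTML as a parameter removes the need to worry about any particular character; a sketch in T-SQL form, with the table and column names as placeholders:

        DECLARE @html nvarchar(max);
        SET @html = N'<p>It''s arbitrary HTML with "quotes" & <tags></p>';   -- would come from the editor

        EXEC sp_executesql
             N'INSERT INTO Pages (Body) VALUES (@body)',    -- Pages/Body are placeholder names
             N'@body nvarchar(max)',
             @body = @html;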

    Read the article

  • No operations allowed after statement closed issue

    - by Washu
    I have the following methods in my singleton to handle the JDBC connection: public void openDB() throws ClassNotFoundException, IllegalAccessException, InstantiationException, SQLException { Class.forName("com.mysql.jdbc.Driver").newInstance(); String url = "jdbc:mysql://localhost/mbpe_peru";//mydb conn = DriverManager.getConnection(url, "root", "admin"); st = conn.createStatement(); } public void sendQuery(String query) throws SQLException { st.executeUpdate(query); } public void closeDB() throws SQLException { st.close(); conn.close(); } And I'm having a problem in a method where I have to call this twice: private void jButton1ActionPerformed(ActionEvent evt) { Main.getInstance().openDB(); Main.getInstance().sendQuery("call insertEntry('"+EntryID()+"','"+SupplierID()+"');"); Main.getInstance().closeDB(); Main.getInstance().openDB(); for(int i=0;i<dataBox.length;i++){ Main.getInstance().sendQuery("call insertCount('"+EntryID()+"','"+SupplierID()+"','"+BoxID()+"');"); Main.getInstance().closeDB(); } } I have already tried keeping the connection open and sending the two queries before closing it, and it didn't work. The only way it worked was to not use these methods and instead declare the connection commands with different variables for the connection and the statement. I thought that if I closed the Connection and the Statement I could use the variables again, since openDB re-creates them, but I'm not able to. Is there any way to solve this using my methods for the JDBC connection?

    Read the article
