Search Results


  • Accessing both stored procedure output parameters AND the result set in Entity Framework?

    - by MS.
    Is there any way of accessing both a result set and output parameters from a stored procedure added as a function import in an Entity Framework model? I am finding that if I set the return type to "None", such that the designer-generated code ends up calling base.ExecuteFunction(...), I can access the output parameters fine after calling the function (but of course not the result set). Conversely, if I set the return type in the designer to a collection of complex types, the designer-generated code calls base.ExecuteFunction<T>(...) and the result set is returned as ObjectResult<T>, but then the Value property of the ObjectParameter instances is null rather than containing the proper value that I can see being passed back in Profiler. I speculate the second method is perhaps holding a DataReader open and not closing it. Is this a known issue? Any workarounds or alternative approaches?

    Edit: my code currently looks like this:

        public IEnumerable<FooBar> GetFooBars(int? param1, string param2,
            DateTime from, DateTime to,
            out DateTime? createdDate, out DateTime? deletedDate)
        {
            var createdDateParam = new ObjectParameter("CreatedDate", typeof(DateTime));
            var deletedDateParam = new ObjectParameter("DeletedDate", typeof(DateTime));

            var fooBars = MyContext.GetFooBars(param1, param2, from, to,
                createdDateParam, deletedDateParam);

            createdDate = (DateTime?)(createdDateParam.Value == DBNull.Value ? null : createdDateParam.Value);
            deletedDate = (DateTime?)(deletedDateParam.Value == DBNull.Value ? null : deletedDateParam.Value);

            return fooBars;
        }

    Read the article

  • Is count(*) really expensive?

    - by Anil Namde
    I have a page with 4 tabs displaying 4 different reports based on different tables. I obtain the row count of each table using a select count(*) from <table> query and display the number of rows available in each table on the tabs. As a result, each page postback causes 5 count(*) queries to be executed (4 to get the counts and 1 for pagination) plus 1 query for the report content. Now my question: are count(*) queries really so expensive that I should keep the row counts (at least those displayed on the tabs) in the page's view state instead of querying multiple times?
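
    One common alternative, if exact counts are needed on every postback, is to maintain a small summary table and keep it current with triggers, so each tab costs a primary-key lookup instead of a COUNT(*) scan. A minimal sketch, assuming one of the report tables is named report1 (the row_counts table and the trigger names are hypothetical, not from the question):

        CREATE TABLE row_counts (
            table_name VARCHAR(64) PRIMARY KEY,
            row_count  INT NOT NULL
        );

        -- keep the counter in step with inserts and deletes on report1
        CREATE TRIGGER report1_ins AFTER INSERT ON report1 FOR EACH ROW
            UPDATE row_counts SET row_count = row_count + 1 WHERE table_name = 'report1';

        CREATE TRIGGER report1_del AFTER DELETE ON report1 FOR EACH ROW
            UPDATE row_counts SET row_count = row_count - 1 WHERE table_name = 'report1';

        -- each tab then reads its count with a cheap indexed lookup
        SELECT row_count FROM row_counts WHERE table_name = 'report1';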

    Read the article

  • Query to add a column depending on the outcome of other columns

    - by Tam
    I have a users table 'users' with fields like: id, first_name, last_name, ... and another table that determines relationships: user_id, friend_id, user_accepted, friend_accepted, ... I would like to generate a query that selects all the users but also adds another field/column, say 'network_status', whose value depends on the values of user_accepted and friend_accepted. For example, if user_accepted is true and friend_accepted is false, I want the 'network_status' field to say 'request sent'. Can I possibly do this in one query? (I would prefer not to use if/else inside the query, but if that's the only way, so be it.)
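
    For reference, the one-query version of this is a CASE expression. A minimal sketch, assuming the relationships table is named relationships and joins to users on friend_id (the labels other than 'request sent' are made-up placeholders):

        SELECT u.*,
               CASE
                   WHEN r.user_accepted = 1 AND r.friend_accepted = 0 THEN 'request sent'
                   WHEN r.user_accepted = 0 AND r.friend_accepted = 1 THEN 'request received'
                   WHEN r.user_accepted = 1 AND r.friend_accepted = 1 THEN 'friends'
                   ELSE 'none'
               END AS network_status
        FROM users u
        LEFT JOIN relationships r ON r.friend_id = u.id;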

    Read the article

  • How to avoid geometric slowdown with large Linq transactions?

    - by Shaul
    I've written some really nice, funky libraries for use in LinqToSql. (Some day when I have time to think about it I might make it open source... :) ) Anyway, I'm not sure if this is related to my libraries or not, but I've discovered that when I have a large number of changed objects in one transaction, and then call DataContext.GetChangeSet(), things start getting reaalllly slooowwwww. When I break into the code, I find that my program is spinning its wheels doing an awful lot of Equals() comparisons between the objects in the change set. I can't guarantee this is true, but I suspect that if there are n objects in the change set, then the call to GetChangeSet() is causing every object to be compared to every other object for equivalence, i.e. at best (n^2-n)/2 calls to Equals()... Yes, of course I could commit each object separately, but that kinda defeats the purpose of transactions. And in the program I'm writing, I could have a batch job containing 100,000 separate items, that all need to be committed together. Around 5 billion comparisons there. So the question is: (1) is my assessment of the situation correct? Do you get this behavior in pure, textbook LinqToSql, or is this something my libraries are doing? And (2) is there a standard/reasonable workaround so that I can create my batch without making the program geometrically slower with every extra object in the change set?

    Read the article

  • Subsonic SELECT FROM msdb

    - by Lukasz Lysik
    Hi, I want to execute the following query using SubSonic:

        SELECT MAX([restore_date]) FROM [msdb].[dbo].[restorehistory]

    While the aggregate part is easy for me, the problem is the name of the table. How can I force SubSonic to select from a database other than the default one?

    Read the article

  • How do I select a random record efficiently in MySQL?

    - by user198729
        mysql> EXPLAIN SELECT * FROM urls ORDER BY RAND() LIMIT 1;
        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows  | Extra                           |
        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
        |  1 | SIMPLE      | urls  | ALL  | NULL          | NULL | NULL    | NULL | 62228 | Using temporary; Using filesort |
        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+

    The above doesn't qualify as efficient; how should I do it properly?
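
    A common workaround is to pick a random id first and then fetch a single row by the primary key, which avoids the full-table temporary/filesort pass. A sketch assuming urls has an integer primary key id (gaps in the id sequence skew the distribution slightly, since rows after a large gap are more likely to be picked):

        SELECT u.*
        FROM urls u
        JOIN (SELECT FLOOR(RAND() * (SELECT MAX(id) FROM urls)) AS rand_id) r
          ON u.id >= r.rand_id
        ORDER BY u.id
        LIMIT 1;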

    Read the article

  • Databinding in combo box

    - by muralekarthick
    Hi, I have two forms, a class, and queries returned by a stored procedure.

    Stored procedure:

        ALTER PROCEDURE [dbo].[Payment_Join]
            @reference nvarchar(20)
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            -- Insert statements for procedure here
            SELECT p.iPaymentID, p.nvReference, pt.nvPaymentType, p.iAmount,
                   m.nvMethod, u.nvUsers, p.tUpdateTime
            FROM Payment p, tblPaymentType pt, tblPaymentMethod m, tblUsers u
            WHERE p.nvReference = @reference
              AND p.iPaymentTypeID = pt.iPaymentTypeID
              AND p.iMethodID = m.iMethodID
              AND p.iUsersID = u.iUsersID
        END

    payment.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Data;
        using System.Data.SqlClient;
        using System.Windows.Forms;

        namespace Finance
        {
            class payment
            {
                string connection = global::Finance.Properties.Settings.Default.PaymentConnectionString;

                #region Fields
                int _paymentid = 0;
                string _reference = string.Empty;
                string _paymenttype;
                double _amount = 0;
                string _paymentmethod;
                string _employeename;
                DateTime _updatetime = DateTime.Now;
                #endregion

                #region Properties
                public int paymentid { get { return _paymentid; } set { _paymentid = value; } }
                public string reference { get { return _reference; } set { _reference = value; } }
                public string paymenttype { get { return _paymenttype; } set { _paymenttype = value; } }
                public string paymentmethod { get { return _paymentmethod; } set { _paymentmethod = value; } }
                public double amount { get { return _amount; } set { _amount = value; } }
                public string employeename { get { return _employeename; } set { _employeename = value; } }
                public DateTime updatetime { get { return _updatetime; } set { _updatetime = value; } }
                #endregion

                #region Constructor
                public payment() { }

                public payment(string refer)
                {
                    reference = refer;
                }

                public payment(int paymentID, string Reference, string Paymenttype, double Amount,
                               string Paymentmethod, string Employeename, DateTime Time)
                {
                    paymentid = paymentID;
                    reference = Reference;
                    paymenttype = Paymenttype;
                    amount = Amount;
                    paymentmethod = Paymentmethod;
                    employeename = Employeename;
                    updatetime = Time;
                }
                #endregion

                #region Methods
                public void Save()
                {
                    try
                    {
                        SqlConnection connect = new SqlConnection(connection);
                        SqlCommand command = new SqlCommand("payment_create", connect);
                        command.CommandType = CommandType.StoredProcedure;
                        command.Parameters.Add(new SqlParameter("@reference", reference));
                        command.Parameters.Add(new SqlParameter("@paymenttype", paymenttype));
                        command.Parameters.Add(new SqlParameter("@amount", amount));
                        command.Parameters.Add(new SqlParameter("@paymentmethod", paymentmethod));
                        command.Parameters.Add(new SqlParameter("@employeename", employeename));
                        command.Parameters.Add(new SqlParameter("@updatetime", updatetime));
                        connect.Open();
                        command.ExecuteScalar();
                        connect.Close();
                    }
                    catch { }
                }

                public void Load(string reference)
                {
                    try
                    {
                        SqlConnection connect = new SqlConnection(connection);
                        SqlCommand command = new SqlCommand("Payment_Join", connect);
                        command.CommandType = CommandType.StoredProcedure;
                        command.Parameters.Add(new SqlParameter("@Reference", reference));
                        //MessageBox.Show("ref = " + reference);
                        connect.Open();
                        SqlDataReader reader = command.ExecuteReader();
                        while (reader.Read())
                        {
                            this.reference = Convert.ToString(reader["nvReference"]);
                            //MessageBox.Show(reference);
                            //MessageBox.Show("here");
                            //MessageBox.Show("payment type id = " + reader["nvPaymentType"]);
                            //MessageBox.Show("here1");
                            this.paymenttype = Convert.ToString(reader["nvPaymentType"]);
                            //MessageBox.Show(paymenttype.ToString());
                            this.amount = Convert.ToDouble(reader["iAmount"]);
                            this.paymentmethod = Convert.ToString(reader["nvMethod"]);
                            this.employeename = Convert.ToString(reader["nvUsers"]);
                            this.updatetime = Convert.ToDateTime(reader["tUpdateTime"]);
                        }
                        reader.Close();
                    }
                    catch (Exception ex)
                    {
                        MessageBox.Show("Check it again" + ex);
                    }
                }
                #endregion
            }
        }

    I have already bound the combo box items through the designer. When I run the application, the reference gets populated on form 2 and the combo box is just populated with its designer items, not the particular value that was fetched. I'm new to C#, so please help me get familiar with this.

    Read the article

  • MySQL: Limit rows linked to each joined row

    - by SolidSnakeGTI
    Hello. Specifications: MySQL 4.1+. I have a situation that requires a certain result set from a MySQL query; let's look at the current query first and then I'll ask my question:

        SELECT thread.dateline AS tdateline, post.dateline AS pdateline, MIN(post.dateline)
        FROM thread AS thread
        LEFT JOIN post AS post ON (thread.threadid = post.threadid)
        LEFT JOIN forum AS forum ON (thread.forumid = forum.forumid)
        WHERE post.postid != thread.firstpostid
          AND thread.open = 1
          AND thread.visible = 1
          AND thread.replycount >= 1
          AND post.visible = 1
          AND (forum.options & 1)
          AND (forum.options & 2)
          AND (forum.options & 4)
          AND forum.forumid IN (1,2,3)
        GROUP BY post.threadid
        ORDER BY tdateline DESC, pdateline ASC

    As you can see, I mainly need to select the dateline of threads from the 'thread' table, in addition to the dateline of the second post of each thread, all under the conditions you see in the WHERE clause. Since each thread has many posts and I need only one result per thread, I've used the GROUP BY clause for that purpose; this query returns only one post dateline per unique thread. My questions are:

    1. How do I limit the returned threads per forum? Suppose I need a maximum of 5 threads to be returned for each forum declared in the WHERE clause 'forum.forumid IN(1,2,3)'. How can this be achieved?
    2. Are there any recommendations for optimizing this query (after solving the first point, of course)?

    Notes: I prefer not to use sub-queries, but if that's the only solution available I'll accept it. Double queries are not recommended. I'm sure there's a smart solution for this situation. Appreciate your advice in advance :)
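
    For question 1, without window functions (MySQL 4.1 has none) the usual trick is a correlated count that keeps a thread only if fewer than 5 threads in the same forum are newer than it. It is technically a sub-query, but it stays a single statement. A simplified sketch on the thread table alone (the same predicate would be folded into the full query's WHERE clause):

        SELECT t.*
        FROM thread AS t
        WHERE t.forumid IN (1,2,3)
          AND (SELECT COUNT(*)
               FROM thread AS t2
               WHERE t2.forumid = t.forumid
                 AND t2.dateline > t.dateline) < 5
        ORDER BY t.forumid, t.dateline DESC;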

    Read the article

  • multi-row update table with "different" data

    - by kralco626
    I think the best way to explain this is to tell you what I have. I have two tables, A and B, and both have columns Field1 and Field2. However, Field2 is not populated in table B. I want to populate Field2 of table B with Field2 of table A wherever Field1 of table A matches Field1 of table B. Something like:

        UPDATE tableB
        SET Field2 = tableA.Field2
        WHERE tableA.Field1 = tableB.Field1

    The reason this may seem so odd and obscure is that I'm trying to do an initial data load from an old database to a new one. Please let me know if you need clarification.
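
    The pseudo-SQL above only needs the join spelled out inside the UPDATE; sketches for the two most common dialects, whichever matches the target server:

        -- MySQL multi-table UPDATE
        UPDATE tableB b
        JOIN tableA a ON a.Field1 = b.Field1
        SET b.Field2 = a.Field2;

        -- SQL Server UPDATE ... FROM
        UPDATE b
        SET b.Field2 = a.Field2
        FROM tableB b
        JOIN tableA a ON a.Field1 = b.Field1;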

    Read the article

  • getting complete sql query in jython

    - by kdev
    I have this Jython code:

        result = sqlstring.executeQuery("select distinct table_name, owner from all_tables")
        rs.append(str(i) + ' , ' + result.getString("table_name") + ' , ' + result.getString("owner"))

    If I want to display the query 'select * from all_tables' or 'select count(*) from all_tables', how can I get the output to display? Please suggest. Thanks.

    Read the article

  • Do I need to drop index on temp table?

    - by Phil
    Hi, Fairly simple question, but I don't see it anywhere else on SO: Do indexes (indices?) on a temporary table get automatically deleted with the temporary table? I'd imagine they do but I don't really know how to check to make sure. Thanks, Phil
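
    Yes: an index has no life of its own apart from its table, so dropping the temporary table (explicitly, or automatically at the end of the session) removes its indexes too. A quick way to check for yourself, sketched for SQL Server #temp tables (the table and index names here are illustrative):

        CREATE TABLE #staging (id INT, val VARCHAR(50));
        CREATE INDEX ix_staging_id ON #staging (id);

        -- the index is visible while the table exists
        SELECT name FROM tempdb.sys.indexes
        WHERE object_id = OBJECT_ID('tempdb..#staging');

        DROP TABLE #staging;  -- the index disappears along with the table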

    Read the article

  • Inexplicably slow query in MySQL

    - by Brandon M.
    Given this result set:

        mysql> EXPLAIN SELECT c.cust_name, SUM(l.line_subtotal) FROM customer c
            -> JOIN slip s ON s.cust_id = c.cust_id
            -> JOIN line l ON l.slip_id = s.slip_id
            -> JOIN vendor v ON v.vend_id = l.vend_id WHERE v.vend_name = 'blahblah'
            -> GROUP BY c.cust_name
            -> HAVING SUM(l.line_subtotal) > 49999
            -> ORDER BY c.cust_name;
        +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
        | id | select_type | table | type   | possible_keys                   | key           | key_len | ref                  | rows | Extra                                        |
        +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
        |  1 | SIMPLE      | v     | ref    | PRIMARY,idx_vend_name           | idx_vend_name | 12      | const                |    1 | Using where; Using temporary; Using filesort |
        |  1 | SIMPLE      | l     | ref    | idx_vend_id                     | idx_vend_id   | 4       | csv_import.v.vend_id |  446 |                                              |
        |  1 | SIMPLE      | s     | eq_ref | PRIMARY,idx_cust_id,idx_slip_id | PRIMARY       | 4       | csv_import.l.slip_id |    1 |                                              |
        |  1 | SIMPLE      | c     | eq_ref | PRIMARY,cIndex                  | PRIMARY       | 4       | csv_import.s.cust_id |    1 |                                              |
        +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
        4 rows in set (0.04 sec)

    I'm a bit baffled as to why the query referenced by this EXPLAIN statement still takes about a minute to execute. Isn't it true that this query only has to examine about 446 rows? Does anyone have any idea what could be slowing it down so much?

    Read the article

  • Extract time part from TimeStamp column in ORACLE

    - by RRUZ
    I'm currently using MyTimeStampField - TRUNC(MyTimeStampField) to extract the time part from a timestamp column in Oracle:

        SELECT CURRENT_TIMESTAMP - TRUNC(CURRENT_TIMESTAMP) FROM DUAL

    This returns +00 13:12:07.100729. It works OK for me for extracting the time part from a timestamp field, but I'm wondering if a better way exists (maybe a built-in Oracle function)? Thanks.
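
    TO_CHAR with a datetime format model is the usual built-in for this; whether it is "better" depends on whether a string result is acceptable. A sketch (HH24, MI, SS and FF are standard Oracle format elements):

        -- time of day as text, with fractional seconds
        SELECT TO_CHAR(CURRENT_TIMESTAMP, 'HH24:MI:SS.FF6') FROM DUAL;

        -- the subtraction trick, by contrast, yields an INTERVAL DAY TO SECOND
        SELECT CURRENT_TIMESTAMP - TRUNC(CURRENT_TIMESTAMP) FROM DUAL;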

    Read the article

  • Database design advice needed.

    - by user346271
    Hi all, I'm a lone developer for a telecoms company, and am after some database design advice from anyone with a bit of time to answer.

    I am inserting ~2 million rows each day into one table; these tables then get archived and compressed on a monthly basis. Each monthly table contains ~15,000,000 rows, and this is increasing month on month. For every insert above I am combining the data from rows which belong together and creating another "correlated" table. This table is currently not being archived, as I need to make sure I never miss an update to the correlated table (hope that makes sense), although in general this information should remain fairly static after a couple of days of processing.

    All of the above is working perfectly. However, my company now wishes to perform some stats against this data, and these tables are getting too large to provide the results in what would be deemed a reasonable time, even with the appropriate indexes set. So I guess, after all the above, my question is quite simple: should I write a script which groups the data from my correlated table into smaller tables, or should I store the query result sets in something like memcached? I'm already using MySQL's query cache, but due to having limited control over how long the data is stored for, it's not working ideally.

    The main advantages I can see of using something like memcached:

    - No blocking on my correlated table after the query has been cached.
    - Greater flexibility in sharing the collected data between the backend collector and the front-end processor (i.e. custom reports could be written in the backend and their results stored in the cache under a key which then gets shared with anyone who would want to see the data of this report).
    - Redundancy and scalability if we start sharing this data with a large number of customers.

    The main disadvantages of using something like memcached:

    - Data is not persistent if the machine is rebooted / the cache is flushed.

    The main advantages of using MySQL:

    - Persistent data.
    - Fewer code changes (although adding something like memcached is trivial anyway).

    The main disadvantages of using MySQL:

    - Have to define table templates every time I want to store a new set of grouped data.
    - Have to write a program which loops through the correlated data and fills these new tables.
    - Will potentially still grow slower as the data continues to be filled.

    Apologies for quite a long question. It's helped me to write down these thoughts here anyway, and any advice/help/experience with dealing with this sort of problem would be greatly appreciated. Many thanks. Alan
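
    If the MySQL route wins, the "smaller tables" script usually amounts to maintaining pre-aggregated summary tables that the stats queries hit instead of the correlated table. A minimal sketch of the idea (the table, column and metric names here are hypothetical, not from the question):

        CREATE TABLE daily_stats (
            stat_date DATE        NOT NULL,
            metric    VARCHAR(64) NOT NULL,
            value     BIGINT      NOT NULL,
            PRIMARY KEY (stat_date, metric)
        );

        -- refreshed periodically by the backend collector
        REPLACE INTO daily_stats (stat_date, metric, value)
        SELECT DATE(created_at), 'events', COUNT(*)
        FROM correlated
        GROUP BY DATE(created_at);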

    Read the article

  • PHP MSSQL: How to display output when the query returns no rows

    - by vamps
    I have a problem with my PHP-MSSQL query. I have a join query that needs to produce a result something like this:

        Department | Group A           | Group B           | Total A+B
                   | WORKHOUR | OTHOUR | WORKHOUR | OTHOUR | WORKHOUR | OTHOUR
        HR         | 10       | 15    | 25       | 0      | 35       | 15
        IT         | 5        | 5     |          |        | 5        | 5
        Admin      |          |       | 12       | 12     | 12       | 12

    The query counts hours per employee group as per a given date range (an admin enters the data and, once submitted, the query gives the above result). The problem is that the final output is a mess when there is no row to be displayed: the columns get shifted to the right (for example, when there is only Group A in IT, or only Group B in Admin, the lone group's figures end up under the wrong columns). My question is: how do I prevent this from happening, and how do I display "0" when there are no rows to return? This is my query:

        SELECT DD.DEPT_ID, DPT.DEPARTMENT_NAME, TU.EMP_GROUP,
               SUM(DD.WORK_HOUR) AS WORK_HOUR,
               SUM(DD.OT_HOUR) AS OT_HOUR
        FROM DEPARTMENT_DETAIL DD
        LEFT JOIN DEPARTMENT DPT ON (DD.DEPT_ID = DPT.DEPT_ID)
        LEFT JOIN TBL_USERS TU ON (TU.EMP_ID = DD.EMP_ID)
        WHERE DD_DATE >= '2012-01-01' AND DD_DATE <= '2012-01-31'
          AND TU.EMP_GROUP != 2
        GROUP BY DD.DEPT_ID, DPT.DEPARTMENT_NAME, TU.EMP_GROUP
        ORDER BY DPT.DEPARTMENT_NAME

    And this is one of the approaches I've tried (while ... if/else), but the result is still the same:

        while ($row = mssql_fetch_array($displayResult)) {
            if ((!$row["WORK_HOUR"]) && (!$row["OT_HOUR"])) {
                echo "<td>empty&nbsp;</td>";
                echo "<td>empty&nbsp;</td>";
            } else {
                echo "<td>" . $row["WORK_HOUR"] . "&nbsp;</td>";
                echo "<td>" . $row["OT_HOUR"] . "&nbsp;</td>";
            }
        }

    Please help, I've been doing this for 2 days. @__@
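
    One way to make the column shift impossible is to pivot in SQL, so every department row always carries both groups' figures and a missing group arrives as 0 rather than as an absent row. A sketch built on the question's tables (the EMP_GROUP codes for Group A and Group B are not given, so the 1 and 3 below are assumptions):

        SELECT DPT.DEPARTMENT_NAME,
               SUM(CASE WHEN TU.EMP_GROUP = 1 THEN DD.WORK_HOUR ELSE 0 END) AS WORKHOUR_A,
               SUM(CASE WHEN TU.EMP_GROUP = 1 THEN DD.OT_HOUR   ELSE 0 END) AS OTHOUR_A,
               SUM(CASE WHEN TU.EMP_GROUP = 3 THEN DD.WORK_HOUR ELSE 0 END) AS WORKHOUR_B,
               SUM(CASE WHEN TU.EMP_GROUP = 3 THEN DD.OT_HOUR   ELSE 0 END) AS OTHOUR_B,
               SUM(DD.WORK_HOUR) AS WORKHOUR_TOTAL,
               SUM(DD.OT_HOUR)   AS OTHOUR_TOTAL
        FROM DEPARTMENT_DETAIL DD
        LEFT JOIN DEPARTMENT DPT ON DD.DEPT_ID = DPT.DEPT_ID
        LEFT JOIN TBL_USERS TU ON TU.EMP_ID = DD.EMP_ID
        WHERE DD_DATE >= '2012-01-01' AND DD_DATE <= '2012-01-31'
          AND TU.EMP_GROUP != 2
        GROUP BY DPT.DEPARTMENT_NAME
        ORDER BY DPT.DEPARTMENT_NAME

    The PHP loop can then echo the six columns unconditionally, with no per-row if/else.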

    Read the article

  • how do I integrate the aspnet_users table (asp.net membership) into my existing database

    - by ooo
    I have a database that already has a users table with columns:

        userID    - int
        loginName - string
        First     - string
        Last      - string

    I just installed the ASP.NET membership tables. Right now all of my tables are joined to my users table, foreign-keyed into the "userID" field. How do I integrate the aspnet_Users table into my schema? Here are the ideas I thought of:

    1. Add a membership_id field to my users table and, on new inserts, include that new field in my users table. This seems like the cleanest way, as I don't need to break any existing relationships.
    2. Break all existing relationships and move all of the fields in my users table into the aspnet_Users table. This seems like a pain, but ultimately will lead to the most simple, normalized solution.

    Any thoughts?
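
    Option 1 is typically a single new column plus a foreign key; the standard membership schema keys aspnet_Users on a UserId uniqueidentifier, so a sketch for SQL Server looks like this (the constraint name is made up):

        ALTER TABLE users ADD membership_id UNIQUEIDENTIFIER NULL;

        ALTER TABLE users
            ADD CONSTRAINT FK_users_aspnet_Users
            FOREIGN KEY (membership_id) REFERENCES dbo.aspnet_Users (UserId);

    The column starts out nullable so that existing rows can be backfilled as they are linked to membership accounts.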

    Read the article

  • I want to do a SQL update loop statement, using do-while in PHP

    - by Jean
    Hello, I want to loop the update statement, but it only loops once. Here is the code I am using:

        do {
            mysql_select_db($database_ll, $ll);
            $query_query = "update table set ex='$71[1]' where field='val'";
            $query = mysql_query($query_query, $ll) or die(mysql_error());
            $row_domain_all = mysql_fetch_assoc($query);
        } while ($row_query = mysql_fetch_assoc($query));

    Thanks, Jean

    Read the article

  • Select rows where column LIKE dictionary word

    - by Gerve
    I have 2 tables.

    Dictionary - contains roughly 36,000 words:

        CREATE TABLE IF NOT EXISTS `dictionary` (
          `word` varchar(255) NOT NULL,
          PRIMARY KEY (`word`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    Datas - contains roughly 100,000 rows:

        CREATE TABLE IF NOT EXISTS `datas` (
          `ID` int(11) NOT NULL AUTO_INCREMENT,
          `hash` varchar(32) NOT NULL,
          `data` varchar(255) NOT NULL,
          `length` int(11) NOT NULL,
          `time` int(11) NOT NULL,
          PRIMARY KEY (`ID`),
          UNIQUE KEY `hash` (`hash`),
          KEY `data` (`data`),
          KEY `length` (`length`),
          KEY `time` (`time`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=105316 ;

    I would like to somehow select all the rows from datas where the column data contains 1 or more dictionary words. I understand this is a big ask: it would need to match all of these rows together in every combination possible, so it needs the best optimization. I have tried the query below, but it just hangs for ages:

        SELECT `datas`.*, `dictionary`.`word`
        FROM `datas`, `dictionary`
        WHERE `datas`.`data` LIKE CONCAT('%', `dictionary`.`word`, '%')
          AND LENGTH(`dictionary`.`word`) > 3
        ORDER BY `length` ASC
        LIMIT 15

    I have also tried something similar to the above with a LEFT JOIN and an ON clause containing the LIKE statement.
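
    A commonly suggested alternative to LIKE '%word%' for this kind of matching is a FULLTEXT index, which works on word boundaries and avoids the 36,000 x 100,000 wildcard scans. Three caveats on the sketch below: FULLTEXT on InnoDB needs MySQL 5.6+ (earlier versions would need the table as MyISAM); AGAINST() requires a constant, so each dictionary word has to be bound from application code rather than joined in; and the default ft_min_word_len already excludes words of 3 characters or less:

        ALTER TABLE `datas` ADD FULLTEXT INDEX `ft_data` (`data`);

        -- run once per dictionary word, binding the word from application code
        SELECT * FROM `datas`
        WHERE MATCH(`data`) AGAINST ('+someword' IN BOOLEAN MODE)
        ORDER BY `length` ASC;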

    Read the article

  • How can I add an extra select to this query?

    - by BulgedSnowy
    I have three related tables.

    images:

        id | filename | filesize | ...

    nodes:

        image_id | tag_id

    tags:

        id | name

    And I'm using this query to search for images containing x tags:

        SELECT images.*
        FROM images
        INNER JOIN nodes ON images.id = nodes.image_id
        WHERE tag_id IN (SELECT tags.id FROM tags WHERE tags.tag IN ("tag1","tag2"))
        GROUP BY images.id
        HAVING COUNT(*) = 2

    The problem is that I also need to retrieve all the tags contained by each retrieved image, and I need this in the same query. This is the actual query which retrieves all tags contained by an image:

        SELECT tag
        FROM nodes
        JOIN tags ON nodes.tag_id = tags.id
        WHERE image_id = images.id AND nodes.private = images.private
        ORDER BY tag

    How can I mix these two so as to have only one query?
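
    One way to fold the second query into the first is GROUP_CONCAT, which collects every tag of each matching image into a single column, one row per image. A sketch based on the question's tables (the schema lists the tag column as name while the queries call it tag; the sketch follows the queries, and carries over the nodes.private = images.private condition):

        SELECT images.*,
               GROUP_CONCAT(DISTINCT all_t.tag ORDER BY all_t.tag) AS all_tags
        FROM images
        INNER JOIN nodes all_n ON images.id = all_n.image_id
                              AND all_n.private = images.private
        INNER JOIN tags all_t ON all_n.tag_id = all_t.id
        WHERE images.id IN (
            SELECT n.image_id
            FROM nodes n
            JOIN tags t ON n.tag_id = t.id
            WHERE t.tag IN ('tag1','tag2')
            GROUP BY n.image_id
            HAVING COUNT(DISTINCT t.tag) = 2
        )
        GROUP BY images.id;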

    Read the article

  • how to do multi insert and obtain ids

    - by liysd
    Hi, I want to insert some data into a table (id PK auto-increment, val) using a multi-row insert:

        INSERT INTO tab (val) VALUES (1), (2), (3)

    Is it possible to obtain the list of last inserted ids? I'm asking because I'm not sure they will all come back in the form (n, n+1, n+2). I use MySQL InnoDB.
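
    With InnoDB's default auto-increment lock modes (innodb_autoinc_lock_mode = 0 or 1), a single multi-row INSERT is guaranteed consecutive values, so the whole list can be derived from LAST_INSERT_ID(), which returns the id generated for the first inserted row, plus the row count. A sketch (the guarantee does not hold under lock mode 2, "interleaved"):

        INSERT INTO tab (val) VALUES (1), (2), (3);

        -- first generated id and, since the ids are consecutive, the last one
        SELECT LAST_INSERT_ID() AS first_id,
               LAST_INSERT_ID() + ROW_COUNT() - 1 AS last_id;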

    Read the article

  • How can I use an array within a SQL query

    - by ThinkingInBits
    So I'm trying to take a search string (which could be any number of words) and turn each value into a list to use in the following IN statement; in addition, I need a count of all these values to use with my HAVING COUNT filter:

        $search_array = explode(" ", $this->search_string);
        $tag_count = count($search_array);

        $db = Connect::connect();
        $query = "select p.id from photographs p
                  left join photograph_tags c
                    on p.id = c.photograph_id and c.value IN ($search_array)
                  group by p.id
                  having count(c.value) >= $tag_count";

    This currently returns no results. Any ideas?
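
    Interpolating a PHP array into a double-quoted string produces the literal word "Array", so the IN list never gets built; the SQL that actually needs to reach the server looks like the sketch below (values shown as quoted literals for a three-word search; in real code each term would be escaped or bound). Moving the IN test into the WHERE clause of a plain join, rather than the LEFT JOIN's ON clause, also keeps non-matching rows out, and COUNT(DISTINCT ...) guards against duplicate tag rows:

        select p.id
        from photographs p
        join photograph_tags c on p.id = c.photograph_id
        where c.value in ('tag1', 'tag2', 'tag3')
        group by p.id
        having count(distinct c.value) >= 3;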

    Read the article

  • mysql: inserting data and autoincrement

    - by every_answer_gets_a_point
    I am converting from Access to MySQL. I have a table in Access where one of the columns is an AutoNumber. When I transfer the data into the MySQL database (where I also have a column that is AUTO_INCREMENT), should I be transferring the AutoNumber data into the AUTO_INCREMENT column, or will it auto-increment itself? How do I ensure that, if I do not transfer the AutoNumber data from Access, the column auto-increments properly?
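
    MySQL accepts explicit values for an AUTO_INCREMENT column and moves its internal counter past the largest value inserted, so transferring the Access AutoNumber values keeps any ids referenced elsewhere intact and still leaves auto-generation working afterwards. A minimal sketch (the table and column names are hypothetical):

        CREATE TABLE customers (
            id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(100)
        );

        -- explicit ids carried over from Access are stored as-is
        INSERT INTO customers (id, name) VALUES (17, 'Alice'), (42, 'Bob');

        -- omitting the id resumes generation above the current maximum
        INSERT INTO customers (name) VALUES ('Carol');  -- gets id 43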

    Read the article
