Search Results

Search found 38336 results on 1534 pages for 'sql wait types'.


  • Insert multiple records from an XML string differing on one parameter in SQL Server 2008

    - by Rohit
    Below is a query that inserts records into the SimpleDictationProfileMapping table after reading them from an XML string. This query currently inserts a single record in which DictationCaptureProfileID is @dictationCaptureProfileId. Now I want to insert multiple rows in which @dictationCaptureProfileId is different and the other two values are the same. What I want to achieve is that when the parent changes, all child values change as well.

      INSERT INTO SimpleDictationProfileMapping
          (DictationCaptureProfileID,
           DictationProfileMappingAttributeID,
           DictationProfileMappingAttributeValue)
      SELECT @dictationCaptureProfileId,
             row.value('@attrId', 'varchar(max)'),
             row.value('@value', 'varchar(max)')
      FROM @simpleDictationCaptureProfileMappings.nodes('/simpleMappingAtribute/attribute') AS d (row);

    What I want is roughly:

      INSERT INTO SimpleDictationProfileMapping
          (DictationCaptureProfileID OR (SELECT DictationCaptureProfileID
                                         FROM DictationCaptureProfile
                                         WHERE SystemDictationCaptureProfileID = @systemDictationCaptureProfileID),
           DictationProfileMappingAttributeID,
           DictationProfileMappingAttributeValue)
      SELECT @dictationCaptureProfileId,
             row.value('@attrId', 'varchar(max)'),
             row.value('@value', 'varchar(max)')
      FROM @simpleDictationCaptureProfileMappings.nodes('/simpleMappingAtribute/attribute') AS d (row);

    Please tell me how to achieve this.
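
    One hedged way to get a row per capture profile, using only names that appear in the question: cross-join the XML rows with the set of profile IDs belonging to the parent, so each attribute row is inserted once per matching profile. A sketch:

      -- Sketch: one inserted row per (XML attribute, capture profile) pair.
      INSERT INTO SimpleDictationProfileMapping
          (DictationCaptureProfileID,
           DictationProfileMappingAttributeID,
           DictationProfileMappingAttributeValue)
      SELECT dcp.DictationCaptureProfileID,
             d.row.value('@attrId', 'varchar(max)'),
             d.row.value('@value', 'varchar(max)')
      FROM @simpleDictationCaptureProfileMappings.nodes('/simpleMappingAtribute/attribute') AS d (row)
      CROSS JOIN (SELECT DictationCaptureProfileID
                  FROM DictationCaptureProfile
                  WHERE SystemDictationCaptureProfileID = @systemDictationCaptureProfileID) AS dcp;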


  • How Can I Generate Random Unique Numbers in C#

    - by peace
      public int GenPurchaseOrderNum()
      {
          Random random = new Random();
          _uniqueNum = random.Next(13287, 21439);
          return UniqueNum;
      }

    I removed the unique constraint from the PONumber column in the database because an employee should only generate a P.O. number when the deal is set; otherwise the P.O. number should be 0. When PONumber had the unique constraint, employees were forced to generate a P.O. in all cases so the database wouldn't throw a unique-constraint error. Since I removed the constraint, any quote without a P.O. carries the value 0; otherwise a unique value is generated for the P.O. number. However, without a unique constraint in the database it is hard for me to know whether an application-generated P.O. number is unique or not. What should I do? I hope my question is clear enough.
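
    If the uniqueness check is pushed into the database instead of the C# code, one hedged option is to let SQL Server generate and verify the candidate in one place. A T-SQL sketch; the PurchaseOrders/PONumber names are assumptions, not from the question:

      -- Pick a random candidate in the same range the C# code uses
      -- and retry until no existing order carries that number.
      DECLARE @po int = ABS(CHECKSUM(NEWID())) % (21439 - 13287) + 13287;
      WHILE EXISTS (SELECT 1 FROM PurchaseOrders WHERE PONumber = @po)
          SET @po = ABS(CHECKSUM(NEWID())) % (21439 - 13287) + 13287;
      SELECT @po AS NewPONumber;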


  • Placing PHP array values into a JavaScript array?

    - by Michael Harringon
    Is there a way I can loop through a PHP array and have the data output into a JavaScript array? For example, the JS script below will not work:

      var mon_Loop = <?php echo $rowCount_Mon ?>;
      var mon_Events = new Array();
      for (i = 0; i < mon_Loop; i++) {
          mon_Events[i] = <?php $divMon[i] ?>
      }

    I know it fails because the "i" is not a PHP variable and is therefore invalid inside the PHP section, but it shows what I would like to achieve. The $rowCount variable counts the number of rows and is then used for the loop. Let's say, for example, that I want to place the contents of the PHP array $divMon[0] into the JavaScript array mon_Events[0]. I know that I can do it manually, like below:

      mon_Events[0] = <?php echo $divMon[0] ?>

    But I have lots of these and therefore need the loop. Is there some JS or PHP that could do this? Cheers.


  • Question About TransactionScope in .NET

    - by peace
      using (TransactionScope scope = new TransactionScope())
      {
          int updatedRows1 = custPh.Update(cust.CustomerID, tempPh1, 0);
          int updatedRows2 = custPh.Update(cust.CustomerID, tempPh2, 1);
          int updatedRows3 = cust.Update();
          if (updatedRows1 > 0 && updatedRows2 > 0 && updatedRows3 > 0)
          {
              scope.Complete();
          }
      }

    Is the above TransactionScope code structured correctly? This is my first time using it, so I'm trying to keep it as simple as I can.


  • Selecting all but one field?

    - by gsquare567
    Instead of SELECT * FROM mytable, I would like to select all fields EXCEPT one (namely the 'serialized' field, which stores a serialized object). I think dropping that field will speed up my query a lot. However, I have many fields and am quite the lazy guy. Is there a way to say `SELECT ALL_ROWS_EXCEPT(serialized) FROM mytable`? Thanks!
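
    No such syntax exists in standard SQL, but one hedged workaround (shown in MySQL's information_schema dialect; the schema name 'mydb' is an assumption) is to generate the column list once and paste it into the real query:

      -- Build a comma-separated list of every column except 'serialized'.
      SELECT GROUP_CONCAT(COLUMN_NAME)
      FROM information_schema.COLUMNS
      WHERE TABLE_SCHEMA = 'mydb'
        AND TABLE_NAME = 'mytable'
        AND COLUMN_NAME <> 'serialized';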


  • Building a Stored Procedure to group data into ranges with roughly equal results in each bucket

    - by Len
    I am trying to build one procedure that takes a large amount of data and creates 5 range buckets to display it; the bucket ranges have to be set according to the results. Here is my existing stored procedure:

      GO
      /****** Object: StoredProcedure [dbo].[sp_GetRangeCounts] Script Date: 03/28/2010 19:50:45 ******/
      SET ANSI_NULLS ON
      GO
      SET QUOTED_IDENTIFIER ON
      GO
      ALTER PROCEDURE [dbo].[sp_GetRangeCounts]
          @idMenu int
      AS
      DECLARE @myMin decimal(19,2), @myMax decimal(19,2), @myDif decimal(19,2),
              @range1 decimal(19,2), @range2 decimal(19,2), @range3 decimal(19,2),
              @range4 decimal(19,2), @range5 decimal(19,2), @range6 decimal(19,2)

      SELECT @myMin = MIN(modelpropvalue), @myMax = MAX(modelpropvalue)
      FROM xmodelpropertyvalues
      WHERE modelPropUnitDescriptionID = @idMenu

      SET @myDif = (@myMax - @myMin) / 5
      SET @range1 = @myMin
      SET @range2 = @myMin + @myDif
      SET @range3 = @range2 + @myDif
      SET @range4 = @range3 + @myDif
      SET @range5 = @range4 + @myDif
      SET @range6 = @range5 + @myDif

      SELECT @myMin, @myMax, @myDif, @range1, @range2, @range3, @range4, @range5, @range6

      SELECT t.range AS myRange, COUNT(*) AS myCount
      FROM (SELECT CASE
                       WHEN modelpropvalue BETWEEN @range1 AND @range2 THEN 'range1'
                       WHEN modelpropvalue BETWEEN @range2 AND @range3 THEN 'range2'
                       WHEN modelpropvalue BETWEEN @range3 AND @range4 THEN 'range3'
                       WHEN modelpropvalue BETWEEN @range4 AND @range5 THEN 'range4'
                       WHEN modelpropvalue BETWEEN @range5 AND @range6 THEN 'range5'
                   END AS range
            FROM xmodelpropertyvalues
            WHERE modelpropunitDescriptionID = @idmenu) t
      GROUP BY t.range
      ORDER BY t.range

    This calculates the min and max values from my table, works out the difference between the two, and creates 5 buckets. The problem is that a small number of very high (or very low) values makes the buckets appear very distorted, as in these results:

      range1  2806
      range2   296
      range3    75
      range5     1

    Basically I want to rebuild the procedure so it creates buckets with equal numbers of results in each. I have played around with some of the following approaches without quite nailing it:

      SELECT modelpropvalue, NTILE(5) OVER (ORDER BY modelpropvalue)
      FROM xmodelpropertyvalues
      -- this creates a new column containing 1, 2, 3, 4 or 5

      ROW_NUMBER() OVER (ORDER BY modelpropvalue) BETWEEN @range1 AND @range2
      ROW_NUMBER() OVER (ORDER BY modelpropvalue) BETWEEN @range2 AND @range3

    Or maybe I could allocate every record a row number and then divide it into ranges from there?
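
    The NTILE idea can be carried the rest of the way: assign each row to a bucket, then aggregate per bucket, which yields five buckets of (nearly) equal row counts regardless of skew. A sketch against the question's table:

      -- Equal-count buckets: NTILE(5) splits the ordered rows into five
      -- groups of nearly equal size; the outer query reports each
      -- bucket's boundaries and row count.
      SELECT t.bucket,
             MIN(t.modelpropvalue) AS bucketMin,
             MAX(t.modelpropvalue) AS bucketMax,
             COUNT(*) AS myCount
      FROM (SELECT modelpropvalue,
                   NTILE(5) OVER (ORDER BY modelpropvalue) AS bucket
            FROM xmodelpropertyvalues
            WHERE modelPropUnitDescriptionID = @idMenu) t
      GROUP BY t.bucket
      ORDER BY t.bucket;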


  • Are GUID primary keys bad in theory, or just practice?

    - by Yarin
    Whenever I design a database I automatically start with an auto-generating GUID primary key for each of my tables (excepting look-up tables). I know I'll never lose sleep over duplicate keys, merging tables, etc. To me it just makes sense philosophically that any given record should be unique across all domains, and that that uniqueness should be represented in a consistent way from table to table. I realize it will never be the most performant option, but putting performance aside, I'd like to know if there are philosophical arguments against this practice.


  • Bulk inserting and updating with Entity Framework (Probably a better alternative?)

    - by Dave
    I have a data set of devices, addresses, and companies that I need to import into our database, with the catch that the database may already include a specific device/address/company from the new data set. If that is the case, I need to update that entry with the new information in the data set, excluding addresses: for those we check whether an exact copy of the address exists, and otherwise make a new entry. My issue is that it is very slow to fetch a device/company in EF, update it if it exists, and insert it otherwise. To fix this I tried loading all the companies, devices, and addresses into respective hashmaps and checking whether the identifier of the new data exists in the hashmap, but this hasn't led to any performance increase. I've included my code below. Typically I would do a batch insert; I'm not sure what I would do for a batch update, though. Can someone advise a different route?

      var context = ObjectContextHelper.CurrentObjectContext;
      var oldDevices = context.Devices;
      var companies = context.Companies;
      var addresses = context.Addresses;
      Dictionary<string, Company> companyMap = new Dictionary<string, Company>(StringComparer.OrdinalIgnoreCase);
      Dictionary<string, Device> deviceMap = new Dictionary<string, Device>(StringComparer.OrdinalIgnoreCase);
      Dictionary<string, Address> addressMap = new Dictionary<string, Address>(StringComparer.OrdinalIgnoreCase);

      foreach (Company c in companies)
      {
          if (c.CompanyAccountID != null && !companyMap.ContainsKey(c.CompanyAccountID))
              companyMap.Add(c.CompanyAccountID, c);
      }
      foreach (Device d in oldDevices)
      {
          if (d.SerialNumber != null && !deviceMap.ContainsKey(d.SerialNumber))
              deviceMap.Add(d.SerialNumber, d);
      }
      foreach (Address a in addresses)
      {
          string identifier = GetAddressIdentifier(a);
          if (!addressMap.ContainsKey(identifier))
              addressMap.Add(identifier, a);
      }

      foreach (DeviceData.TabsDevice device in devices)
      {
          Company tempCompany;
          Address tempAddress;
          Device currentDevice;

          if (deviceMap.ContainsKey(device.SerialNumber)) // update a device
              deviceMap.TryGetValue(device.SerialNumber, out currentDevice);
          else // insert a new device
              currentDevice = new Device();

          currentDevice.SerialNumber = device.SerialNumber;
          currentDevice.SerialNumberTABS = device.SerialNumberTabs;
          currentDevice.Model = device.Model;

          if (device.CustomerAccountID != null && device.CustomerAccountID != "")
          {
              companyMap.TryGetValue(device.CustomerAccountID, out tempCompany);
              currentDevice.CustomerID = tempCompany.CompanyID;
              currentDevice.CustomerName = tempCompany.CompanyName;
          }
          if (companyMap.TryGetValue(device.ServicingDealerAccountID, out tempCompany))
              currentDevice.CompanyID = tempCompany.CompanyID;

          currentDevice.StatusID = 1;
          currentDevice.Retries = 0;
          currentDevice.ControllerFamilyID = 1;

          if (currentDevice.EWBFrontPanelMsgOption == null) // set the panel option to the default if it isn't set already
              currentDevice.EWBFrontPanelMsgOption = context.EWBFrontPanelMsgOptions.Where(
                  i => i.OptionDescription.Contains("default")).Single();

          // link the device to the existing address as long as it is actually an address
          if (addressMap.TryGetValue(GetAddressIdentifier(device.address), out tempAddress))
          {
              if (GetAddressIdentifier(device.address) != "")
                  currentDevice.Address = tempAddress;
              else
                  currentDevice.Address = null;
          }
          else // insert a new address and link the device to it (if not null)
          {
              if (GetAddressIdentifier(device.address) == "")
                  currentDevice.Address = null;
              else
              {
                  tempAddress = new Address();
                  tempAddress.Address1 = device.address.Address1;
                  tempAddress.Address2 = device.address.Address2;
                  tempAddress.Address3 = device.address.Address3;
                  tempAddress.Address4 = device.address.Address4;
                  tempAddress.City = device.address.City;
                  tempAddress.Country = device.address.Country;
                  tempAddress.PostalCode = device.address.PostalCode;
                  tempAddress.State = device.address.State;
                  addresses.AddObject(tempAddress);
                  addressMap.Add(GetAddressIdentifier(tempAddress), tempAddress);
                  currentDevice.Address = tempAddress;
              }
          }
          if (!deviceMap.ContainsKey(device.SerialNumber)) // if inserting, add to context
          {
              oldDevices.AddObject(currentDevice);
              deviceMap.Add(device.SerialNumber, currentDevice);
          }
      }
      context.SaveChanges();


  • Speeding up PostgreSQL query where data is between two dates

    - by Roger
    I have a large table (50m rows) which has some data with an ID and a timestamp. I need to query the table to select all rows with a certain ID where the timestamp is between two dates, but this currently takes over 2 minutes on a high-end machine. I'd really like to speed it up. I found a tip recommending a spatial index, but the example it gives is for IP addresses. However, the speed increase (436s to 3s) is impressive. How can I use this with timestamps?
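
    Before anything exotic, the usual first step for this access pattern is a composite btree index, equality column first and range column second. A hedged sketch; the column names are assumptions, since the question doesn't give them:

      -- The index can seek to the ID and scan only the requested time window.
      CREATE INDEX idx_mytable_id_ts ON mytable (device_id, created_at);

      SELECT *
      FROM mytable
      WHERE device_id = 42
        AND created_at BETWEEN '2010-01-01' AND '2010-02-01';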


  • IN statement performance in PostgreSQL (and in general)

    - by Vasil
    I know this has probably been asked before, but I can't find it with SO's search. Let's say I have TABLE1 and TABLE2: how should I expect the performance of a query such as this to behave

      SELECT * FROM TABLE1 WHERE id IN (SUBQUERY_ON_TABLE2);

    as the number of rows in TABLE1 and TABLE2 grows, where id is a primary key on TABLE1? Yes, I know using IN is such a n00b mistake, but TABLE2 has a generic relation (a Django generic relation) to multiple other tables, so I can't think of another way to filter the data. At what (approximate) number of rows in TABLE1 and TABLE2 should I expect to notice performance issues because of this? Will performance degrade linearly, exponentially, etc., depending on the number of rows?
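
    For what it's worth, PostgreSQL can usually plan IN (subquery) as a hashed semi-join rather than re-running the subquery per row, so inspecting the plan is the first step. A sketch with placeholder table and column names:

      -- See how the planner handles the IN:
      EXPLAIN ANALYZE
      SELECT * FROM table1
      WHERE id IN (SELECT object_id FROM table2 WHERE content_type_id = 7);

      -- An equivalent EXISTS form, sometimes easier for the planner:
      SELECT * FROM table1 t1
      WHERE EXISTS (SELECT 1 FROM table2 t2
                    WHERE t2.content_type_id = 7 AND t2.object_id = t1.id);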


  • PostgreSQL database ignoring created index?!

    - by drasto
    I have a PostgreSQL database and a table called my_table. There are 4 columns in that table (id, column1, column2, column3). The id column is the primary key; there are no other constraints or indexes on the columns. The table has about 200000 rows. I want to print out all rows whose column2 value equals (case-insensitively) 'value12'. I use this:

      SELECT * FROM my_table WHERE column2 = lower('value12')

    Here is the execution plan for this statement (the result of SET enable_seqscan = on; EXPLAIN SELECT * FROM my_table WHERE column2 = lower('value12')):

      Seq Scan on my_table  (cost=0.00..4676.00 rows=10000 width=55)
        Filter: ((column2)::text = 'value12'::text)

    I consider this too slow, so I create an index on column2 for better search performance:

      CREATE INDEX my_index ON my_table (lower(column2))

    Now I run the same SELECT and expect it to be much faster because it can use the index. However, it is as slow as before. So I check the execution plan, and it is the same as above: it still uses a sequential scan and ignores the index! Where is the problem?
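
    A hedged observation about the query itself: the index is built on the expression lower(column2), but the WHERE clause applies lower() to the literal rather than to the column, so there is no indexed expression for the planner to match. A sketch of the matching form:

      -- Apply the same expression to the column that the index was built on:
      SELECT * FROM my_table WHERE lower(column2) = lower('value12');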


  • Is closing/disposing an SqlDataReader needed if you are already closing the SqlConnection?

    - by Brian
    I noticed this question, but my question is a bit more specific. Is there any advantage to using

      using (SqlConnection conn = new SqlConnection(conStr))
      {
          using (SqlCommand command = new SqlCommand())
          {
              // do stuff
          }
      }

    instead of

      using (SqlConnection conn = new SqlConnection(conStr))
      {
          SqlCommand command = new SqlCommand();
          // do stuff
      }

    Obviously it does matter if you run more than one command with the same connection, since closing an SqlDataReader is more efficient than closing and reopening a connection (calling conn.Close(); conn.Open(); will also free up the connection). I see many people insist that failure to close the DataReader means leaving open connection resources around, but doesn't that only apply if you don't close the connection?


  • Extending Zend DB Table to include BETWEEN and LIMIT.

    - by davykiash
    I am looking for a way to extend the Zend_Db_Table below to accommodate a BETWEEN-two-dates clause and a LIMIT clause. My current construct is:

      class Model_DbTable_Tablelist extends Zend_Db_Table_Abstract
      {
          protected $_name = 'mytable';

          $select = $this->select()
                         ->setIntegrityCheck(false)
                         ->from('mytable', array('MyCol1', 'MyDate'));
      }

    I want it extended to be equivalent to the query below:

      SELECT MyCol1, MyDate
      FROM mytable
      WHERE MyDate BETWEEN '2008-04-03' AND '2009-01-02'
      LIMIT 0, 20

    Any ideas?


  • collation in stored procedure

    - by Sharique
    I have a table that contains data in different languages. All fields are nvarchar(max). I created a stored procedure that trims the values of all the fields:

      Create Proc [dbo].[TrimValues]
      as
      update testdata
      set city = dbo.trim(city),
          state = dbo.trim(state),
          country = dbo.trim(country),
          schoolname = dbo.trim(schoolname)

    After the trim, all non-English text becomes ?????
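
    A hedged guess at the cause: question marks usually mean the text passed through a non-Unicode (varchar) type somewhere, and the user-defined dbo.trim function is the likeliest suspect since the columns are already nvarchar(max). A sketch, assuming dbo.trim currently declares varchar parameters:

      -- If dbo.trim takes or returns varchar, the Unicode text is
      -- converted to '?' on the way through; keep nvarchar end to end.
      ALTER FUNCTION dbo.trim (@s nvarchar(max))
      RETURNS nvarchar(max)
      AS
      BEGIN
          RETURN LTRIM(RTRIM(@s));
      END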


  • Little Employee/Shift timetable HELP!!!

    - by DAVID
    Morning guys. I have the following tables:

      operator(ope_id, ope_name)
      ope_shift(ope_id, shift_id, shift_date)
      shift(shift_id, shift_start, shift_end)

    Here is a better view of the data: http://latinunit.net/emp_shift.txt and here is a screenshot of a select statement against the tables: http://img256.imageshack.us/img256/4013/opeshift.jpg I am using this code to view the current total shifts per operator, and it works:

      SELECT OPE_ID, COUNT(OPE_ID) AS Total_shifts
      FROM operator_shift
      GROUP BY ope_id;

    BUT if there were 500 more rows it would count them all as well. THE QUESTION is: does anyone have a better way of making my database work, or how can I tell the system that those rows are a whole month? I remember a friend said something about counting and then dividing by 30, but I'm not sure; what if the month isn't finished and you want to show the employee with the highest shifts to date?
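
    If the goal is "shifts per operator for one month", one hedged approach is to filter on shift_date before grouping. A sketch; the date-literal syntax may need adjusting for your database, and the table name follows the question's own query:

      -- Count only shifts that fall inside one calendar month.
      SELECT ope_id, COUNT(*) AS total_shifts
      FROM operator_shift
      WHERE shift_date >= '2010-03-01'
        AND shift_date <  '2010-04-01'
      GROUP BY ope_id
      ORDER BY total_shifts DESC;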


  • Avoiding Nested Queries

    - by Midhat
    How important is it to avoid nested queries? I have always been taught to avoid them like the plague, but they are the most natural thing to me. When I am designing a query, the first thing I write is a nested query. Then I convert it to joins, which sometimes takes a lot of time to get right and rarely gives a big performance improvement (sometimes it does). So are they really so bad? Is there a way to use nested queries without temp tables and filesort?
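
    For reference, a minimal sketch of the rewrite under discussion, on hypothetical orders/customers tables; the two forms return the same rows when customers.id is unique:

      -- Nested form:
      SELECT o.*
      FROM orders o
      WHERE o.customer_id IN (SELECT c.id FROM customers c WHERE c.region = 'EU');

      -- Join form:
      SELECT o.*
      FROM orders o
      JOIN customers c ON c.id = o.customer_id
      WHERE c.region = 'EU';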


  • What is wrong with this connection string?

    - by Hakan
    Can anyone help me with this connection string? I can't figure out how to fix it.

      Dim constring As String
      Dim con As SqlCeConnection
      Dim cmd As SqlCeCommand

      constring = "(System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase) + \\database.sdf;Password=pswrd;File Mode=shared read"
      con = New SqlCeConnection()
      con.Open()

    Thanks


  • Algorithm for finding similar users through a join table

    - by Gdeglin
    I have an application where users can select a variety of interests from around 300 possible interests. Each selected interest is stored in a join table containing the columns user_id and interest_id. Typical users select around 50 interests out of the 300. I would like to build a system where users can find the top 20 users who have the most interests in common with them. Right now I am able to accomplish this using the following query:

      SELECT i2.user_id, COUNT(i2.interest_id) AS count
      FROM interests_users AS i1, interests_users AS i2
      WHERE i1.interest_id = i2.interest_id
        AND i1.user_id = 35
      GROUP BY i2.user_id
      ORDER BY count DESC
      LIMIT 20;

    However, this query takes approximately 500 milliseconds to execute with 10,000 users and 500,000 rows in the join table. All indexes and database configuration settings have been tuned to the best of my ability. I have also tried avoiding joins altogether using the following query:

      SELECT user_id, COUNT(interest_id) AS count
      FROM interests_users
      WHERE interest_id IN (13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,
                            31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,
                            49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,
                            68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,
                            86,87,88,89,90,91,92,93,94,95,96,97,98,508)
      GROUP BY user_id
      ORDER BY count DESC
      LIMIT 20;

    But this one is even slower (~800 milliseconds). How could I best lower the time to gather this kind of data to below 100 milliseconds? I have considered putting this data into a graph database like Neo4j, but I am not sure that is the easiest solution or that it would even be faster than what I am currently doing.
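
    One hedged avenue worth trying on top of the question's setup: covering composite indexes in both column orders, so the self-join can be resolved entirely from the indexes (index names are placeholders):

      -- Lets i1 be probed by user_id and i2 by interest_id without
      -- touching the base table.
      CREATE INDEX idx_iu_user_interest ON interests_users (user_id, interest_id);
      CREATE INDEX idx_iu_interest_user ON interests_users (interest_id, user_id);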


  • How to select only the first row for each unique value of a column

    - by nuit9
    Let's say I have a table of customer addresses:

      CName      | AddressLine
      ------------------------------
      John Smith | 123 Nowheresville
      Jane Doe   | 456 Evergreen Terrace
      John Smith | 999 Somewhereelse
      Joe Bloggs | 1 Second Ave

    In the table, one customer like John Smith can have multiple addresses. I need the select query for this table to return only the first row found where there are duplicates in CName. For this table it should return all rows except the 3rd (or the 1st; either of those two addresses is okay, but only one can be returned). Is there a keyword I can add to the SELECT query to filter based on whether the server has already seen the column value before?
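
    One common approach uses a window function to number each customer's rows and keep the first. A hedged sketch; the table name customer_addresses is a placeholder, and the syntax assumes a database with ROW_NUMBER support (e.g. SQL Server or PostgreSQL):

      -- Number each customer's rows and keep only the first per CName.
      SELECT CName, AddressLine
      FROM (SELECT CName, AddressLine,
                   ROW_NUMBER() OVER (PARTITION BY CName
                                      ORDER BY AddressLine) AS rn
            FROM customer_addresses) t
      WHERE rn = 1;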


  • How to use a getter with a nullable?

    - by Desmond Lost
    I am reading a bunch of queries from a database. I had an issue with the queries not closing, so I added a CommandTimeout. Now the individual queries read the timeout from the config file each time they are run. How would I make the code cache the int from the config file only once, using a static nullable and a getter? I was thinking of doing something along the lines of:

      static int? var;
      get
      {
          var = null;
          if (var.HasValue) ... (I don't know how to complete the rest)

    My actual code:

      private object ExecuteQuery(string dbConnStr, bool fixIt)
      {
          object result = false;
          using (SqlConnection connection = new SqlConnection(dbConnStr))
          {
              connection.Open();
              using (SqlCommand sqlCmd = new SqlCommand())
              {
                  AddSQLParms(sqlCmd);
                  sqlCmd.CommandTimeout = 30;
                  sqlCmd.CommandText = _cmdText;
                  sqlCmd.Connection = connection;
                  sqlCmd.CommandType = System.Data.CommandType.Text;
                  sqlCmd.ExecuteNonQuery();
              }
              connection.Close();
          }
          return result;
      }


  • Using 'in' in Join

    - by Ruslan
    I have two selects, a and b, and I join them like this:

      select *
      from (
          select n.id_b || ',' || s.id_b ids, n.name, s.surname
          from names n, surnames s
          where n.id_a = s.id_a
      ) a
      left join (
          select sn.id, sn.second_name
      ) b on b.id in (a.ids)

    In this case the join doesn't work :( The problem is in b.id in (a.ids). But why? It looks like 12 in (12,24) and yet there is no result :(
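
    A hedged note on why this fails: a.ids is a single string such as '12,24', so b.id in (a.ids) compares b.id against one string value, not a two-element list. One sketch of a fix keeps the two ids as separate columns and lists both in the IN; the FROM clause on the b side is filled with a placeholder table name, since the question omits it:

      select *
      from (
          select n.id_b as n_id, s.id_b as s_id, n.name, s.surname
          from names n, surnames s
          where n.id_a = s.id_a
      ) a
      left join (
          select sn.id, sn.second_name
          from second_names sn
      ) b on b.id in (a.n_id, a.s_id)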

