Search Results

Search found 36788 results on 1472 pages for 'sql 2008'.

Page 629 of 1472

  • Reproducing a Conversion Deadlock

    - by Alexander Kuznetsov
    Even if two processes compete on only one resource, they still can embrace in a deadlock. The following scripts reproduce such a scenario. In one tab, run this: CREATE TABLE dbo.Test ( i INT ) ; GO INSERT INTO dbo.Test ( i ) VALUES ( 1 ) ; GO SET TRANSACTION ISOLATION LEVEL SERIALIZABLE ; BEGIN TRAN SELECT i FROM dbo.Test ; --UPDATE dbo.Test SET i=2 ; After this script has completed, we have an outstanding transaction holding a shared lock. In another tab, let us have that another connection have...(read more)
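
    A hedged sketch of how the scenario might be completed (the excerpt above is truncated, so this follows the article's setup rather than quoting it): the second connection also takes a shared lock under SERIALIZABLE, and then each session runs the commented-out UPDATE, forcing both to convert their shared locks to exclusive ones.

      -- Second tab: acquire a shared (range) lock on the same data.
      SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
      BEGIN TRAN;
      SELECT i FROM dbo.Test;

      -- Both sessions now hold shared locks. Run this UPDATE here, then run the
      -- commented-out UPDATE in the first tab; each waits on the other's shared
      -- lock, and one connection is chosen as the deadlock victim (error 1205).
      UPDATE dbo.Test SET i = 2;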

  • OT: March Madness 2011

    - by RickHeiges
    This past fall, I decided to take a break from Fantasy Football. Did I miss it? Yes to some extent. Fantasy Football can really eat up a lot of time. But - I still love March Madness (NCAA Men's Basketball Tourney). It doesn't take much time to pick out teams. Since you can't make any changes after the deadline and the computer keeps track of scoring/scenarios/etc, it is a fun thing that really takes only a little time and can help you enjoy the games a bit more. Let's see how good you are at picking...(read more)

  • Data Education: Great Classes Coming to a City Near You

    - by Adam Machanic
    In case you haven't noticed, Data Education (the training company I started a couple of years ago) has expanded beyond the US northeast; we're currently offering courses with top trainers in both St. Louis and Chicago, as well as the Boston area. The courses are starting to fill up fast—not surprising when you consider we’re talking about experienced instructors like Kalen Delaney, Rob Farley, and Allan Hirt—but we still have some room. We’re very excited about bringing the highest quality...(read more)

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone who has an interesting situation. He has 15,000 customers, and he asks whether he should have a separate database per customer for their data. Without a LOT more data it’s impossible to say, of course, but there are some general concepts to keep in mind. Whenever you’re segmenting data, it’s all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information) and – very important – what are the security requirements? From the answers to these types of questions, you now have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead – it needs a certain amount of memory for locking and so on. But it has a very clean boundary – everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a Schema. But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier – I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it’s a mix – perhaps this person could segment customers into larger regions or districts or products, in a database. Within that database might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.
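
    As a purely illustrative sketch of the "multiple schemas in one database" option mentioned above (the schema, table, and user names are invented; the post itself contains no code):

      -- One schema per customer keeps objects and permissions separated
      -- without the per-database overhead.
      CREATE SCHEMA Customer0042 AUTHORIZATION dbo;
      GO
      CREATE TABLE Customer0042.Orders
      (
          OrderId   INT IDENTITY(1,1) PRIMARY KEY,
          OrderDate DATETIME NOT NULL,
          Amount    MONEY NOT NULL
      );
      GO
      -- Permissions can then be granted at the schema boundary.
      GRANT SELECT, INSERT, UPDATE ON SCHEMA::Customer0042 TO Customer0042User;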

  • Slow in the Application but Fast in SQL Server Management Studio - from Erland

    - by Greg Low
    Our MVP buddy Erland Sommarskog doesn't post articles that often but when he does, you should read them. His latest post is here: http://www.sommarskog.se/query-plan-mysteries.html It talks about why a query might be slow when sent from an application but fast when you execute it in SSMS. But it covers way more than that. There is a great deal of good info on how queries are executed and query plans generated. Highly recommended!...(read more)

  • Powershell, SMO and Database Files

    - by dbaduck
    In response to some questions about renaming a physical file for a database, I have 2 versions of Powershell scripts that do this for you, including taking the database offline and then online to make the physical change match the meta-data. First, there is an article about this at http://msdn.microsoft.com/en-us/library/ms345483.aspx . This explains that you start by setting the database offline, then alter the database and modify the filename then set it back online. This particular article does...(read more)
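
    For reference, a hedged T-SQL sketch of the steps the article automates with PowerShell and SMO (the database and file names here are made up):

      ALTER DATABASE MyDb SET OFFLINE WITH ROLLBACK IMMEDIATE;

      -- Point the metadata at the new physical file name...
      ALTER DATABASE MyDb
      MODIFY FILE (NAME = MyDb_Data, FILENAME = N'D:\Data\MyDb_Data_new.mdf');

      -- ...rename or move the file in the operating system, then bring the
      -- database back online so the metadata and the physical file match.
      ALTER DATABASE MyDb SET ONLINE;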

  • SQL: How do I INSERT primary key values from two tables INTO a master table?

    - by Stefan
    Hello, I would appreciate some help with an SQL statement I really can't get my head around. What I want to do is fairly simple: I need to take the values from two different tables and copy them into a master table when a new row is inserted into one of the two tables. The problem is perhaps best explained like this. I have three tables: productcategories, regioncategories and mastertable.

    TABLE: PRODUCTCATEGORIES
    CODE  | DESCRIPTION
    BOOKS | Books

    TABLE: REGIONCATEGORIES
    CODE | DESCRIPTION
    EU   | European Union

    TABLE: MASTERTABLE
    REGION | PRODUCT
    EU     | BOOKS

    I want the values to be inserted like this when a new row is created in either productcategories or regioncategories. A new row is created:

    TABLE: PRODUCTCATEGORIES
    CODE  | DESCRIPTION
    BOOKS | Books
    DVD   | DVDs

    ...and an SQL statement copies the new values into the mastertable:

    TABLE: MASTERTABLE
    REGION | PRODUCT
    EU     | BOOKS
    EU     | DVD

    The same goes if a row is created in regioncategories. New row:

    TABLE: REGIONCATEGORIES
    CODE | DESCRIPTION
    EU   | European Union
    US   | United States

    Copied to the mastertable:

    TABLE: MASTERTABLE
    REGION | PRODUCT
    EU     | BOOKS
    EU     | DVD
    US     | BOOKS
    US     | DVD

    I hope it makes sense. Thanks, Stefan
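
    One possible way to approach this (not from the original thread; it reuses the table and column names above): an AFTER INSERT trigger on each category table that cross joins the newly inserted rows with the other table.

      CREATE TRIGGER trg_productcategories_insert
      ON productcategories
      AFTER INSERT
      AS
      BEGIN
          -- Pair every newly inserted product with every existing region.
          INSERT INTO mastertable (REGION, PRODUCT)
          SELECT r.CODE, i.CODE
          FROM inserted AS i
          CROSS JOIN regioncategories AS r;
      END;

    A mirror-image trigger on regioncategories would pair each new region with every existing product in the same way.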

  • SQL statement supposed to return 2 distinct rows, but only 1 is returned (C# Windows)

    - by jello
    yeah so I have an sql statement that is supposed to return 2 rows. the first with psychological_id = 1, and the second, psychological_id = 2. here is the sql statement select * from psychological where patient_id = 12 and symptom = 'delire'; But with this code, with which I populate an array list with what is supposed to be 2 different rows, two rows exist, but with the same values: the second row. OneSymptomClass oneSymp = new OneSymptomClass(); ArrayList oneSympAll = new ArrayList(); string connStrArrayList = "Data Source=.\\SQLEXPRESS;AttachDbFilename=|DataDirectory|\\PatientMonitoringDatabase.mdf; " + "Initial Catalog=PatientMonitoringDatabase; " + "Integrated Security=True"; string queryStrArrayList = "select * from psychological where patient_id = " + patientID.patient_id + " and symptom = '" + SymptomComboBoxes[tag].SelectedItem + "';"; using (var conn = new SqlConnection(connStrArrayList)) using (var cmd = new SqlCommand(queryStrArrayList, conn)) { conn.Open(); using (SqlDataReader rdr = cmd.ExecuteReader()) { while (rdr.Read()) { oneSymp.psychological_id = Convert.ToInt32(rdr["psychological_id"]); oneSymp.patient_history_date_psy = (DateTime)rdr["patient_history_date_psy"]; oneSymp.strength = Convert.ToInt32(rdr["strength"]); oneSymp.psy_start_date = (DateTime)rdr["psy_start_date"]; oneSymp.psy_end_date = (DateTime)rdr["psy_end_date"]; oneSympAll.Add(oneSymp); } } conn.Close(); } OneSymptomClass testSymp = oneSympAll[0] as OneSymptomClass; MessageBox.Show(testSymp.psychological_id.ToString()); the message box outputs "2", while it's supposed to output "1". anyone got an idea what's going on?

  • Find Duplicate Fields in a Table

    - by Derek Dieter
    A common scenario when querying tables is the need to find duplicate fields within the same table. Doing this is simple: it requires utilizing the GROUP BY clause and counting the number of recurrences. For example, let's take a customers table. Within the customers table, we want to find all the [...]
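
    A minimal sketch of the pattern the excerpt describes, using an assumed customers table with assumed column names:

      SELECT FirstName, LastName, COUNT(*) AS Occurrences
      FROM customers
      GROUP BY FirstName, LastName
      HAVING COUNT(*) > 1;   -- keep only the values that appear more than once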

  • Great Example of a Simple Cost-Benefit Analysis

    - by BuckWoody
    I saw a post the other day that you should definitely go check out. It’s a cost/benefit decision, and although the author gives it a quick treatment and doesn’t take all points in the decision into account, you should focus on the process he follows. It’s a quick and simple example of the kind of thought process we should have as data professionals when we pick a server, a process, an application, or even platform software. The key is to include more than just the price of a piece of software or hardware. You need to think about the “other” costs in the decision, and then make the right one. Sometimes the cheapest option really is the cheapest, and other times, well, it isn’t. I’ve seen this played out not only in the decision to go with a certain selection, but in the options or editions it comes in. You have to put all of the decision points in the analysis to come up with the right answer, and you have to be able to explain your logic to your team and your company. This is the way you become a data professional, not just a DBA. You can check out the post here – it deals with Azure, but the point is the process, not Azure itself: http://blogs.msdn.com/eugeniop/archive/2010/03/19/windows-azure-guidance-a-simplistic-economic-analysis-of-a-expense-migration.aspx

  • Find Duplicate Items in a Table

    - by Derek Dieter
    A very common scenario when querying tables is the need to find duplicate items within the same table. Doing this is simple: it requires utilizing the GROUP BY clause and counting the number of recurrences. For example, let's take a customers table. Within the customers table, we want to find all [...]

  • Dynamic Number Table

    - by Derek D.
    Using a numbers table is helpful for many things. Like finding gaps in a supposed sequence of primary keys, or generating date ranges or any numerical range. In some cases, you will be in a production system that does not already contain a numbers table and you will also be unable to add [...]
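
    The article is truncated, so as an assumption-labelled sketch, one common way to generate a numbers list on the fly without a permanent table is a recursive CTE:

      WITH Numbers AS
      (
          SELECT 1 AS n
          UNION ALL
          SELECT n + 1 FROM Numbers WHERE n < 1000
      )
      SELECT n
      FROM Numbers
      OPTION (MAXRECURSION 1000);   -- allow enough recursion for 1..1000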

  • Backup those keys, citizen

    - by BuckWoody
    Periodically I back up the keys within my servers and databases, and when I do, I blog a reminder here. This should be part of your standard backup rotation – the keys should be backed up often enough to have at hand and again when they change. The first key you need to back up is the Service Master Key, which each Instance already has built-in. You do that with the BACKUP SERVICE MASTER KEY command, which you can read more about here. The second set of keys are the Database Master Keys, stored per database, if you’ve created one. You can back those up with the BACKUP MASTER KEY command, which you can read more about here. Finally, you can use the keys to create certificates and other keys – those should also be backed up. Read more about those here. Anyway, the important part here is the backup. Make sure you keep those keys safe!
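
    The commands the post refers to look roughly like this (file paths, passwords, and the database and certificate names are placeholders):

      -- Instance level: the Service Master Key.
      BACKUP SERVICE MASTER KEY
          TO FILE = 'C:\KeyBackups\ServiceMasterKey.smk'
          ENCRYPTION BY PASSWORD = '<strong password>';

      -- Per database: the Database Master Key, if one has been created.
      USE MyDatabase;
      BACKUP MASTER KEY
          TO FILE = 'C:\KeyBackups\MyDatabase_MasterKey.dmk'
          ENCRYPTION BY PASSWORD = '<strong password>';

      -- Certificates built on those keys, including their private keys.
      BACKUP CERTIFICATE MyCertificate
          TO FILE = 'C:\KeyBackups\MyCertificate.cer'
          WITH PRIVATE KEY (
              FILE = 'C:\KeyBackups\MyCertificate.pvk',
              ENCRYPTION BY PASSWORD = '<strong password>');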

  • Yet another use of OUTER APPLY in defensive programming

    - by Alexander Kuznetsov
    When a SELECT is used to populate variables from a subquery, it fails to change them if the subquery returns nothing - and that can lead to subtle bugs. We shall use OUTER APPLY to eliminate this problem. Prerequisites All we need is the following mock function that imitates a subquery: CREATE FUNCTION dbo.BoxById ( @BoxId INT ) RETURNS TABLE AS RETURN ( SELECT CAST ( 1 AS INT ) AS [Length] , CAST ( 2 AS INT ) AS [Width] , CAST ( 3 AS INT ) AS [Height] WHERE @BoxId = 1 ) ; Let us assume that this...(read more)
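
    The excerpt is cut off, so here is a hedged sketch of the pattern it builds toward, reusing the dbo.BoxById mock function defined above:

      DECLARE @Length INT = -1, @Width INT = -1, @Height INT = -1;

      -- Plain assignment: dbo.BoxById(0) returns no rows, so the assignments never
      -- run and the variables silently keep their stale -1 values.
      SELECT @Length = [Length], @Width = [Width], @Height = [Height]
      FROM dbo.BoxById(0);

      -- With OUTER APPLY from a one-row anchor the SELECT always processes exactly
      -- one row, so the variables are reset to NULL when no box is found.
      SELECT @Length = b.[Length], @Width = b.[Width], @Height = b.[Height]
      FROM (SELECT 1 AS OneRow) AS anchor
      OUTER APPLY dbo.BoxById(0) AS b;

      SELECT @Length AS [Length], @Width AS [Width], @Height AS [Height];   -- all NULL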

  • SQL Server v.Next (Denali): Breaking change to fn_virtualfilestats

    - by AaronBertrand
    Yesterday I posted a general warning about changes to Denali that will potentially break your existing code base, with a strong suggestion to grab the summer CTP as soon as it is available and start testing. I posted an example of a breaking change that will not be documented since it affects a commonly-used but undocumented DBCC command (DBCC LOGINFO), and also mentioned a couple of other changes in passing. Today it occurred to me that it may be more useful if, when I come across a potential...(read more)

  • How to represent and insert into an ordered list in SQL?

    - by Travis
    I want to represent the list "hi", "hello", "goodbye", "good day", "howdy" (in that order) in a SQL table:

    pk | i | val
    ---+---+----------
    1  | 0 | hi
    0  | 2 | hello
    2  | 3 | goodbye
    3  | 4 | good day
    5  | 6 | howdy

    'pk' is the primary key column. Disregard its values. 'i' is the "index" that defines the order of the values in the 'val' column. It is only used to establish the order and the values are otherwise unimportant. The problem I'm having is with inserting values into the list while maintaining the order. For example, if I want to insert "hey" and I want it to appear between "hello" and "goodbye", then I have to shift the 'i' values of "goodbye" and "good day" (but preferably not "howdy") to make room for the new entry. So, is there a standard SQL pattern to do the shift operation, but only shift the elements that are necessary? (Note that a simple "UPDATE table SET i=i+1 WHERE i>=3" doesn't work, because it violates the uniqueness constraint on 'i', and also it updates the "howdy" row unnecessarily.) Or, is there a better way to represent the ordered list? I suppose you could make 'i' a floating point value and choose values in between, but then you have to have a separate rebalancing operation when no such value exists. Or, is there some standard algorithm for generating string values between arbitrary other strings, if I were to make 'i' a varchar? Or should I just represent it as a linked list? I was avoiding that because I'd like to also be able to do a SELECT .. ORDER BY to get all the elements in order.
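
    One hedged way to do the narrow shift (the table is called OrderedList here purely for illustration): update only the rows between the insertion point and the next gap, then insert. SQL Server evaluates the unique constraint at the end of the statement, so a single UPDATE over that range works; engines that check row by row may need the update issued in descending 'i' order or via a temporary offset.

      BEGIN TRAN;

      -- Shift "goodbye" (3 -> 4) and "good day" (4 -> 5); "howdy" (i = 6)
      -- stays put because the range stops before it.
      UPDATE OrderedList
      SET i = i + 1
      WHERE i >= 3
        AND i < 6;

      INSERT INTO OrderedList (i, val) VALUES (3, 'hey');

      COMMIT;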

  • Where are TFS Alerts stored in the TFS Databases? Receiving duplicate alerts after upgrade 2008 to

    - by MJ Hufford
    I recently performed a migration-upgrade from TFS 2008 to TFS 2010. Almost everything is working properly now. However, our team is now getting duplicate emails. I'm guessing this is because I used the TFS 2008 power tools to set up alerts. After the upgrade, I installed the TFS 2010 power tools and noticed that there were no alerts configured. I set up new alerts and now we get duplicates. Is it possible the old alerts configuration is floating around in the db somewhere?

  • Regular expression for finding non-breaking string names in code and then breaking them up for SQL q

    - by Rob Segal
    I am trying to develop a regex for finding camel case strings in several code files I am working with so I can break them up into separate words for use in a SQL query. I have strings of the form... EmailAddress FirstName MyNameIs And I want them like this... Email Address First Name My Name Is An example SQL query which I currently have is... select FirstName, MyNameIs from MyTables I need the queries in the form... select FirstName as 'First Name', MyNameIs as 'My Name Is' from MyTables Any time a new capital letter appears, that should be a new grouping which I can pick out of the matched string. I currently have the following regex... ([A-Z][a-z]+)+ Which does match the cases I have shown above, but when I want to perform a replace I need to define groups. Currently I have tried... (([A-Z])([a-z]+))+ Which sort of works. It will pick out "Address" as the first grouping from "EmailAddress" as opposed to "Email", which is what I was expecting. No doubt there is something I'm misunderstanding here so any help is greatly appreciated.

  • 80% off for SQL Azure!

    - by Hugo Kornelis
    I have spent the last three days at SQLBits X in London – a truly great experience! There were lots of quality sessions, but I also enjoyed meeting new people and catching up with old friends. One of these friends (and I hope he’s still a friend after I post this) is Buck Woody . Not only a great and humorous speaker, but also a very nice fellow – for those who don’t mind being teased every now and then. When we were chatting, he told me that he was planning to announce a special access code to allow...(read more)

  • Approaching events #mstc11 #ppws #sqlbits

    - by Marco Russo (SQLBI)
    The spring season is always full of events and I’m just preparing for a number of them. First of all, we are getting very good interest in the PowerPivot Workshop in Copenhagen on 21-22 March 2011. Tomorrow (Friday March 4) will be the last day to take advantage of the Early Bird rate for this date. We will also participate in an evening meeting of local user groups on March 21 in Copenhagen; more news about this in the next few days. Other scheduled dates are in Dublin (28-29 March 2011) and in...(read more)

  • SQL: Is it possible to set up a column that will contain a value dependent on another column?

    - by Wesley
    I have a table (A) that lists all bundles created off a machine in a day. It lists the date created and the weight of the bundle. I have an ID column, a date column, and a weight column. I also have a table (B) that holds the details related to that machine for the day. In that table (B), I want a column that lists a sum of weights from the other table (A) that the dates match on. So if the machine runs 30 bundles in a day, I'll have 30 rows in table (A) all dated the same day. In table (B) I'll have 1 row detailing other information about the machine for the day plus the column that holds the total bundle weight created for the day. Is there a way to make the total column in table (B) automatically adjust itself whenever a row is added to table (A)? Is this possible to do in the table schema itself rather than in an SQL statement each time a bundle is added? If it's not, what sort of SQL statement do I need? Wes
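
    One hedged way to approach this (all object and column names are invented, since the question gives no schema): keep the total out of table (B) and expose it through a view that aggregates table (A) per day, so it never needs manual adjustment. If the total must physically live in table (B), an AFTER INSERT trigger on table (A) that updates the matching row is the usual alternative.

      CREATE VIEW dbo.MachineDaySummary
      AS
      SELECT b.MachineId,
             b.RunDate,
             (SELECT SUM(a.BundleWeight)
              FROM dbo.Bundles AS a            -- "table A": one row per bundle
              WHERE a.CreatedDate = b.RunDate) AS TotalBundleWeight
      FROM dbo.MachineDays AS b;               -- "table B": one row per machine-day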

  • How can I map stored procedure result into a custom class with linq-to-sql?

    - by Remnant
    I have a stored procedure that returns a result set (4 columns x n Rows). The data is based on multiple tables within my database and provides a summary for each department within a corporate. Here is sample: usp_GetDepartmentSummary DeptName EmployeeCount Male Female HR 12 5 7 etc... I am using linq-to-sql to retrieve data from my database (nb - have to use sproc as it is something I have inherited). I would like to call the above sproc and map into a department class: public class Department { public string DeptName {get; set;} public int EmployeeCount {get; set;} public int MaleCount {get; set;} public int FemaleCount {get; set;} } In VS2008, I can drag and drop my sproc onto the methods pane of the linq-to-sql designer. When I examine the designer.cs the return type for this sproc is defined as: ISingleResult<usp_GetDepartmentSummaryResult> What I would like to do is amend this somehow so that it returns a Department type so that I can pass the results of the sproc as a strongly typed view: <% foreach (var dept in Model) { %> <ul> <li class="deptname"><%= dept.DeptName %></li> <li class="deptname"><%= dept.EmployeeCount %></li> etc... Any ideas how to achieve this? NB - I have tried amending the designer.cs and dbml xml file directly but with limited success. I admit to being a little out of my depth when it comes to updating those files directly and I am not sure it is best practice? Would be good to get some diretion. Thanks much
