Search Results

Search found 37647 results on 1506 pages for 'sql performance'.

  • Speeding up ROW_NUMBER in SQL Server

    - by BlueRaja
    We have a number of machines which record data into a database at sporadic intervals. For each record, I'd like to obtain the time period between this recording and the previous recording. I can do this using ROW_NUMBER as follows:

        WITH TempTable AS (
            SELECT *,
                   ROW_NUMBER() OVER (PARTITION BY Machine_ID ORDER BY Date_Time) AS Ordering
            FROM dbo.DataTable
        )
        SELECT [Current].*, Previous.Date_Time AS PreviousDateTime
        FROM TempTable AS [Current]
        INNER JOIN TempTable AS Previous
            ON [Current].Machine_ID = Previous.Machine_ID
           AND Previous.Ordering = [Current].Ordering + 1

    The problem is that it runs really slowly (several minutes on a table with about 10k entries). I tried creating separate indexes on Machine_ID and Date_Time, and a single combined index, but nothing helps. Is there any way to rewrite this query to go faster?
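
    A hedged sketch of one common rewrite: on SQL Server 2012 or later, the LAG window function computes the previous row's value in a single pass and avoids materializing the CTE twice. The table and columns are taken from the question; the approach assumes an upgrade is an option.

        -- Sketch, assuming SQL Server 2012+ where LAG is available.
        -- One scan, no self-join: LAG pulls the previous Date_Time per machine.
        SELECT d.*,
               LAG(d.Date_Time) OVER (PARTITION BY d.Machine_ID
                                      ORDER BY d.Date_Time) AS PreviousDateTime
        FROM dbo.DataTable AS d;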

    Read the article

  • How to find virtualization performance bottlenecks?

    - by Martin
    We have recently started moving our C++ build server(s) from real machines into VMs (MS Hyper-V). We have some performance issues that I currently have no idea how to address. Our setup:

    Test-Box - desktop workstation hardware my co-worker used to set up the VM before we moved it to the actual server hardware
    Srv-Box - the server hardware
    Test-Box-Real - Windows running directly on the Test-Box HW
    Test-Box-VM - Windows in a Hyper-V VM on the Test-Box HW
    Srv-Box-Real - Server2008R2 running directly on the Srv-Box HW
    Srv-Box-VM - Windows running in a Hyper-V VM on the Srv-Box HW, i.e. on Srv-Box-Real

    Now, the problem: we compared build times between Test-Box-Real and Test-Box-VM and they were basically equal (within about 2%). Then we moved the VM to the Srv-Box machine, and there we see a significant performance degradation between Srv-Box-Real and Srv-Box-VM. That is, where we saw no difference on the test HW, we now see major differences on the actual server HW (builds about 50% slower inside the VM). I should add that both the Test-Box and the Srv-Box are running only this one single VM and doing nothing else. I should also note that the "Real" OS is Win2008R2 (64-bit) and the VM-hosted OS is Win2003R2 (32-bit).

    Hardware specs:

    Srv-Box: Intel Xeon E5640 @ 2.67GHz (this means 8 cores with hyperthreading on the Real system and "only" 4 cores on the VM, since Hyper-V doesn't allow for hyperthreading, but the number of cores doesn't seem to explain the problem here); 16GB RAM (4GB assigned to the VM); virtual DELL RAID 1 (2x 450GB HUS156045VLS600 Hitachi 15k SAS drives)

    Test-Box: Intel Xeon E3-1245 @ 3.3GHz; 16GB RAM; WD VelociRaptor 600GB 10k RPM SATA

    Note again that I'm only concerned with the difference between Srv-Box-Real and Srv-Box-VM (high) vs. the difference between Test-Box-Real and Test-Box-VM (low). Why would one machine have parity between VM and real performance while the other (server-grade HW, no less) has a large disparity? (Both being Xeon CPUs...)

    Read the article

  • SSIS Catalog, Windows updates and deployment failures due to System.Core mismatch

    - by jamiet
    This is a heads-up for anyone doing development on SSIS. On my current project, where we are implementing a SQL Server Integration Services (SSIS) 2012 solution, we recently encountered a situation where we were unable to deploy any of our projects even though we had deployed successfully in the past. Any attempt to use the deployment wizard resulted in an error dialog. The text of the error (for all you search engine crawlers out there) was:

        A .NET Framework error occurred during execution of user-defined routine or aggregate "create_key_information": System.IO.FileLoadException: Could not load file or assembly 'System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) ---> System.IO.FileLoadException: The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)
           at Microsoft.SqlServer.IntegrationServices.Server.Security.CryptoGraphy.CreateSymmetricKey(String algorithm)
           at Microsoft.SqlServer.IntegrationServices.Server.Security.CryptoGraphy.CreateKeyInformation(SqlString algorithmName, SqlBytes& key, SqlBytes& IV)
        (Microsoft SQL Server, Error: 6522)

    After some investigation, and a bit of back and forth with some very helpful members of the SSIS product team (hey Matt, Wee Hyong), it transpired that this was due to a .NET Framework fix that had been delivered via Windows Update; a look at the server update history confirmed some recently applied .NET Framework updates. This fix had (in the words of Matt Masson) "somehow caused a mismatch on System.Core for SQLCLR" and, as you may know, SQLCLR is used heavily within the SSIS Catalog. The fix was pretty simple - restart SQL Server. This causes the assemblies to be upgraded automatically. If you are using Data Quality Services (DQS) you may have experienced similar problems, which are documented at Upgrade SQLCLR Assemblies After .NET Framework Update. I am hoping the SSIS team will follow up with a more thorough explanation on their blog soon. You DBAs out there may be questioning why Windows Update is set to automatically apply updates on our production servers; we're checking that out with our hosting provider right now. You have been warned! @Jamiet
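
    A hedged aside, not a step from Jamie's post: if you want to see which CLR assemblies an instance has registered and the versions baked into their identities, the standard sys.assemblies catalog view shows them.

        -- Sketch: list registered CLR assemblies; clr_name embeds each
        -- assembly's version, culture and public key token.
        SELECT name, clr_name, permission_set_desc, create_date
        FROM sys.assemblies;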

    Read the article

  • SQL Server PowerShell Provider And PowerShell Version 2 Get-Command Issue

    - by BuckWoody
    The other day I blogged that the version of the SQL Server PowerShell provider (sqlps) follows the version of PowerShell. That's all goodness, but it has appeared to cause an issue for PowerShell 2.0: the Get-Command PowerShell cmdlet returns an error (Object reference not set to an instance of an object) if you are using PowerShell 2.0 and sqlps. It's a known bug, and I'm happy to report that it is fixed in SP2 for SQL Server 2008, which will be released soon. You can read more about this issue here: http://connect.microsoft.com/SQLServer/feedback/details/484732/sqlps-and-powershell-v2-issues
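
    A hedged aside, general SQL Server practice rather than something from Buck's post: to confirm whether an instance already carries the service pack that fixes this, SERVERPROPERTY reports the build and patch level.

        -- Sketch: report version, service-pack level and edition.
        SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
               SERVERPROPERTY('ProductLevel')   AS ProductLevel,
               SERVERPROPERTY('Edition')        AS Edition;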

    Read the article

  • Linux Server Performance Monitoring

    - by Jon
    I'm looking to monitor performance on my Linux servers (which happen to be CentOS). What are the best tools for monitoring things in real time, such as disk performance (I/O, swapping, etc.) and CPU performance? I'm looking for low-level tools rather than web-based tools such as Nagios, Ganglia, etc. N.B. I'd like to know exactly what each tool does, rather than just getting a list of random tool names, if possible. An explanation of why a tool is a better option than the others would be good as well.

    Read the article

  • Can't restore backup from SQL Server 2008 R2 to SQL Server 2005 or 2008

    - by Erick
    Hi everyone, I'm trying to get a backup from SQL Server 2008 R2 restored to SQL Server 2008, but when we try to do the restore we get this:

        The database was backed up on a server running version 10.50.1092. That version is incompatible with this server, which is running version 10.00.2531. Either restore the database on a server that supports the backup, or use a backup that is compatible with this server.

    I can use the script wizard to generate a script, but that takes over an hour to run. I also tried just exporting the data from server to server, but it had issues with the primary keys/identity columns. I will be running into this issue with several other clients, so any help you could offer about how to get around this would be great. Thanks for your help!
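
    A hedged diagnostic sketch, not from the question itself: RESTORE HEADERONLY reads a backup's metadata without restoring it, so you can confirm which engine version wrote the file before attempting a restore. The disk path below is hypothetical.

        -- Sketch: inspect a backup file's originating engine version.
        -- SoftwareVersionMajor/Minor/Build in the output identify the build
        -- that produced the backup; restoring to an older version
        -- (e.g. 10.50 -> 10.00) is not supported, which is the error above.
        RESTORE HEADERONLY
        FROM DISK = N'C:\Backups\ClientDb.bak';  -- hypothetical path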

    Read the article

  • SQL Server 2000 tables

    - by user40766
    We currently have a SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, which is an integer field, and the table has a clustered index on memberid. The table is now about 200 million rows, and indexing and maintenance are becoming issues. We are debating splitting the table into one table per user. This would imply that we would end up with a very large number of tables, potentially up to 2,147,483,647 considering just positive values. My questions: Does anyone have any experience with a SQL Server (2000/2005) installation with millions of tables? What are the implications of this architecture with regard to maintenance and access using Query Analyzer, Enterprise Manager, etc.? What are the implications of having such a large number of indexes in a database instance? All comments are appreciated. Thanks
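
    A hedged alternative sketch: rather than one physical table per user, the SQL Server 2000-era answer to this shape of problem is usually a partitioned view over a modest number of range-partitioned member tables. The table names, ranges and columns below are hypothetical.

        -- Sketch: a local partitioned view over memberid ranges. The CHECK
        -- constraints let the optimizer touch only the relevant base table
        -- when a query filters on memberid.
        CREATE TABLE dbo.MemberData_0000001_1000000 (
            memberid INT NOT NULL CHECK (memberid BETWEEN 1 AND 1000000),
            recorded DATETIME NOT NULL,
            payload  VARCHAR(100) NULL,
            PRIMARY KEY (memberid, recorded)
        );
        CREATE TABLE dbo.MemberData_1000001_2000000 (
            memberid INT NOT NULL CHECK (memberid BETWEEN 1000001 AND 2000000),
            recorded DATETIME NOT NULL,
            payload  VARCHAR(100) NULL,
            PRIMARY KEY (memberid, recorded)
        );
        GO
        CREATE VIEW dbo.MemberData AS
            SELECT memberid, recorded, payload FROM dbo.MemberData_0000001_1000000
            UNION ALL
            SELECT memberid, recorded, payload FROM dbo.MemberData_1000001_2000000;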

    Read the article

  • Replication with SQL Server 2005 Express Edition and SQL Compact Edition 3.5

    - by Andy Gable
    I need some information on SQL Server 2005 Express Edition. What I want to do is have my central database serving the local machine databases, i.e. a back office central database feeding the shop floor terminals:

    Central database
    |------------------- Shop Floor Terminal 1
    |------------------- Shop Floor Terminal 2
    |------------------- Shop Floor Terminal 3
    |------------------- Shop Floor Terminal 4
    |------------------- Shop Floor Terminal 5
    |------------------- Shop Floor Terminal 6

    I want the shop floor terminals to PULL down any changes to the database as and when they happen (only selected changes are needed: adding a new item, or editing item info that is used by the shop floor terminals, i.e. price, description, sale group). Is this possible with SQL 2005? I have the ability to write my own sync application, but I would need to know what to look for in the database that signals an update. Many thanks for any advice you can give. Andy
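
    A hedged sketch of one way to answer "what do I look for that signals an update" without full replication: add a rowversion column, which the engine bumps automatically on every insert and update, and have each terminal remember the highest value it has pulled. The table and column names are hypothetical.

        -- Sketch: change detection via an auto-updated row version.
        -- (On SQL Server 2005 the type is spelled timestamp; rowversion is
        -- the later synonym.)
        ALTER TABLE dbo.Item ADD LastChange timestamp;
        GO
        -- Each terminal persists the max LastChange from its previous pull
        -- and fetches only rows changed since then.
        DECLARE @LastSync binary(8);
        SET @LastSync = 0x0;
        SELECT ItemID, Price, Description, SaleGroup, LastChange
        FROM dbo.Item
        WHERE LastChange > @LastSync;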

    Read the article

  • Advantages of multiple SQL Server files with a single RAID array

    - by Dr Giles M
    Originally posted on Stack Overflow, but re-worded. Imagine the scenario: for a database I have RAID arrays R: (MDF), T: (transaction log), and of course shared transparent usage of X: (tempdb). I've been reading around and get the impression that if you are using RAID, then adding multiple SQL Server NDF files sitting on R: within a filegroup won't yield any further improvement. Of course, adding another RAID array S: and putting an NDF file on that would. However, being a reasonably savvy software person, it's not unthinkable to hypothesise that, even for smaller MDFs sitting on one RAID array, SQL Server performs growth and locking operations (for writes) on the MDF, so adding NDFs to the filegroup, even if they sat on R:, would distribute the locking and growth operations, allowing more throughput. Or does the time taken to reconstruct the data from distributed filegroups outweigh the benefits of reduced locking? I'm also aware that the behaviour and benefits may be different for tables/indexes/logs. Is there a good site that distinguishes the benefits of multiple files when RAID is already in place?
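
    For reference, a hedged sketch of what "adding an NDF to the filegroup" looks like; the database name, path and sizes are hypothetical.

        -- Sketch: add a second data file to the PRIMARY filegroup on the
        -- same R: array; proportional fill then spreads allocations across
        -- the files.
        ALTER DATABASE MyDb
        ADD FILE (
            NAME = N'MyDb_data2',
            FILENAME = N'R:\Data\MyDb_data2.ndf',  -- hypothetical path
            SIZE = 10GB,
            FILEGROWTH = 1GB
        ) TO FILEGROUP [PRIMARY];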

    Read the article

  • SQL Server Column Level Encryption - Rotating Keys

    - by BarDev
    We are thinking about using SQL Server column (cell) level encryption for sensitive data. There should be no problem when we initially encrypt the column, but we have a requirement that the encryption key be changed every year, and it seems that this requirement may be a problem. Assumption: the table that includes the sensitive column will have 500 million records. Below are the steps we have thought about implementing: initially encrypt the column; each new year, decrypt the column; then re-encrypt the column with the new key. Questions: While the column is being decrypted/re-encrypted, is the data online (available to be queried), and how long would this process take? Does SQL Server provide a feature that allows for key changes while the data is online? BarDev
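
    A hedged sketch of what one rotation pass looks like with the built-in cell-level primitives; the key, certificate, table and column names are hypothetical, and on 500 million rows this would normally be done in batches rather than a single UPDATE.

        -- Sketch: re-encrypt a column from an old symmetric key to a new one.
        OPEN SYMMETRIC KEY OldYearKey DECRYPTION BY CERTIFICATE KeyProtectionCert;
        OPEN SYMMETRIC KEY NewYearKey DECRYPTION BY CERTIFICATE KeyProtectionCert;

        -- DecryptByKey locates the correct (old) key from the ciphertext
        -- header among the keys currently open in the session.
        UPDATE dbo.SensitiveData
        SET SecretValue = EncryptByKey(Key_GUID('NewYearKey'),
                                       DecryptByKey(SecretValue));

        CLOSE SYMMETRIC KEY OldYearKey;
        CLOSE SYMMETRIC KEY NewYearKey;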

    Read the article

  • Having problems with connecting to/seeing the local SQL Server with Microsoft SQL Server Management Studio

    - by Hans-Henrik
    I'm having some difficulties trying to connect to my local SQL Server. I'm pretty sure the server is running (many of the other topics on this subject suggest that the services might not be running, so I looked into it, and they do seem to be running). But when I try to access it through Microsoft SQL Server Management Studio, it doesn't seem to be able to find the server. My connection settings:

    Server type: Database Engine
    Server name: ILIZANESQL (when I use "Browse for more..." to find my server, it doesn't show up)
    Authentication: Windows Authentication

    Read the article

  • SQL Server 2008 Create Foreign Key Manually

    - by tgriffiths
    I have inherited an old database which wasn't designed very well. It is a SQL Server 2008 database which is missing quite a lot of foreign key relationships. Two of the tables involved are dbo.app_status and dbo.app_additional_info, and I am trying to manually create a FK relationship between dbo.app_status.status_id and dbo.app_additional_info.application_id. I am using SQL Server Management Studio and trying to create the relationship with the query below:

        USE myDatabase;
        GO
        ALTER TABLE dbo.app_additional_info
        ADD CONSTRAINT FK_AddInfo_AppStatus FOREIGN KEY (application_id)
            REFERENCES dbo.app_status (status_id)
            ON DELETE CASCADE
            ON UPDATE CASCADE;
        GO

    However, I receive this error when I run the query:

        The ALTER TABLE statement conflicted with the FOREIGN KEY constraint "FK_AddInfo_AppStatus". The conflict occurred in database "myDatabase", table "dbo.app_status", column 'status_id'.

    I am wondering if the query is failing because each table already contains approximately 130,000 records? Please help. Thanks.
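
    A hedged diagnostic sketch: this error usually means existing rows violate the new constraint, rather than anything about row counts, so the first step is to look for application_id values with no matching status_id. The table and column names are taken from the question.

        -- Sketch: find orphaned rows that block the FK from being created.
        SELECT ai.application_id, COUNT(*) AS orphan_rows
        FROM dbo.app_additional_info AS ai
        LEFT JOIN dbo.app_status AS s
            ON s.status_id = ai.application_id
        WHERE s.status_id IS NULL
        GROUP BY ai.application_id;

        -- Once the orphans are fixed (or if they must stay), the constraint
        -- can be created without validating existing rows, at the cost of it
        -- being marked untrusted:
        -- ALTER TABLE dbo.app_additional_info WITH NOCHECK
        --     ADD CONSTRAINT FK_AddInfo_AppStatus FOREIGN KEY (application_id)
        --     REFERENCES dbo.app_status (status_id);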

    Read the article

  • Automating XNA Performance Testing?

    - by Grofit
    I was wondering what people's approaches or thoughts were on automating performance testing in XNA. Currently I am only working in 2D, but that still poses many areas where performance can be improved by different implementations. An example would be two different implementations of spatial partitioning: one may be faster than the other, but without doing some actual performance testing you wouldn't be able to tell which for sure (unless the code was blatantly slow in certain parts). You could write a unit test which, for a given time frame, kept adding/updating/removing entities under both implementations and counted how many operations each completed; the higher count marks the faster one (in this given example).

    Another, higher-level example: you might want to see roughly how many entities you can have on screen without dropping below 60fps. The problem with automating this is that you would need to use the hidden-form trick, or something similar, to kick off a mock game that tests purely the parts you care about, with everything else disabled.

    I know this isn't a simple affair: even if you can automate the tests, it is really up to a human to interpret whether the results are performant enough. But as part of a build step you could run these tests and publish the results somewhere for comparison. That way, going from version 1.1 to 1.2, having changed a few underlying algorithms, you may notice that the overall performance score has gone up, meaning you have improved the application's overall performance; and from 1.2 to 1.3 you may notice that you have dropped overall performance a bit.

    So, has anyone automated this sort of thing in their projects? If so, how do you measure your performance comparisons at a high level, and what frameworks do you use to test? Provided you have written your code to be testable/mockable for most parts, you can just use your tests as a mechanism for getting performance results...

    === Edit ===

    Just for clarity: I am interested in the best way to use automated tests within XNA to track your performance, not in play testing or guessing by manually running the game on a machine. This is completely different from checking whether your game is playable on X hardware; it is about tracking the change in performance as your game engine/framework changes. As mentioned in one of the comments, you could easily test "how many nodes can I insert/remove/update within QuadTreeA within 2 seconds", but you have to physically look at those results every time to see if anything has changed, which may be fine and is still better than just playing the game to see if you notice any difference between versions. However, if you put in an Assert to flag a failure when the count drops below, let's say, 5000 in 2 seconds, you have a brittle test, as the threshold is contextual to the hardware, not just the implementation. That being said, these sorts of automated tests are only really of any use if you run them in some sort of build pipeline, e.g.: Checkout - Run Unit Tests - Run Integration Tests - Run Performance Tests - Package. Then you can easily compare the stats from one build to another on the CI server in a report of some sort, though again that may not mean much to anyone not used to Continuous Integration.

    The main crux of this question is to see how people manage this between builds and how they find it best to report upon. As I said, it can be subjective, but as knowledge will be gained from the answers it seems a worthwhile question.

    Read the article

  • New Cumulative Updates for SQL Server 2008 SP1 & R2!

    - by AaronBertrand
    Well, this is the first time in a long time that I've blogged about cumulative updates for two different versions of SQL Server on the same day. Yesterday Microsoft released a cumulative update for SQL Server 2008 SP1 (bringing you to 10.0.2775), and a corresponding cumulative update for SQL Server 2008 R2 RTM (bringing you from 10.50.1600 to 10.50.1702). You can read more about these updates here: Cumulative Update #1 for SQL Server 2008 R2 RTM ( KB #981355 ) Cumulative Update #8 for SQL Server...(read more)

    Read the article

  • INNER JOIN vs LEFT JOIN performance in SQL Server

    - by Ekkapop
    I've created a SQL command that uses INNER JOIN across 9 tables; this command takes a very long time (more than five minutes). A colleague suggested I change INNER JOIN to LEFT JOIN, because LEFT JOIN supposedly performs better, which at first contradicted what I knew. After the change, the speed of the query improved significantly. I want to know why LEFT JOIN would be faster than INNER JOIN. My SQL command looks like this:

        SELECT * FROM A
        INNER JOIN B ON ...
        INNER JOIN C ON ...
        INNER JOIN D ON ...
        -- and so on
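
    A hedged caution to go with this question: INNER JOIN and LEFT JOIN are not interchangeable. A LEFT JOIN preserves unmatched rows from the left table, so a faster LEFT JOIN version may be returning different results, or may simply be getting a different plan. Before accepting the rewrite, it's worth comparing both the row counts and the measured cost; the tables and join columns below are hypothetical.

        -- Sketch: verify the rewrite is safe, then compare the work done.
        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        SELECT COUNT(*) FROM A INNER JOIN B ON A.ID = B.ARef;  -- original form
        SELECT COUNT(*) FROM A LEFT JOIN B ON A.ID = B.ARef;   -- suggested form
        -- Equal counts suggest (but don't prove) equivalent results; the
        -- logical reads and CPU time in the Messages tab show the real cost.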

    Read the article

  • SQL Native Client 10 Performance miserable (due to server-side cursors)

    - by namezero
    We have an application that uses ODBC via CDatabase/CRecordset in MFC (VS2010), with two backends implemented: MSSQL and MySQL. When we use MSSQL (with Native Client 10.0), retrieving records with SELECT is dramatically slow over slow links (VPN, for example). The MySQL ODBC driver does not exhibit this nasty behavior. For example:

        CRecordset r(&m_db);
        r.Open(CRecordset::snapshot,
               L"SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID = b.Ref");
        r.MoveFirst();
        while (!r.IsEOF())
        {
            // Retrieve a field from the current row, then advance.
            CString strData;
            r.GetFieldValue(L"a.something", strData);
            r.MoveNext();
        }

    With the MySQL driver, everything runs as it should: the query returns, and everything is lightning fast. With the MSSQL Native Client, however, things slow down, because on every MoveNext() the driver communicates with the server. I think this is due to server-side cursors, but I haven't found a way to disable them. I have tried:

        ::SQLSetConnectAttr(m_db.m_hdbc, SQL_ATTR_ODBC_CURSORS, SQL_CUR_USE_ODBC, SQL_IS_INTEGER);

    but this didn't help either; there are still long-running calls to sp_cursorfetch() et al. in SQL Profiler. I have also tried a small reference project with SQLAPI and bulk fetch, but that hangs in FetchNext() for a long time too (even if there is only one record in the result set). This, however, only happens on queries with LEFT JOINs, table-valued functions, etc. Note that the query itself doesn't take that long - executing the same SQL via SQL Studio over the same connection returns in a reasonable time.

    Question 1: Is it possible to somehow get the Native Client to cache all results locally, using local cursors in a similar fashion to what the MySQL driver seems to do? Maybe this is the wrong approach altogether, but I'm not sure how else to do this. All we want is to retrieve all data at once from a SELECT and then never talk to the server again until the next query. We don't care about recordset updates, deletes, etc., or any of that nonsense. We only want to retrieve data: we take the recordset, read all the data, and delete it.

    Question 2: Is there a more efficient way to just retrieve data in MFC with ODBC?

    Read the article

  • Whether to use UNION or OR in SQL Server Queries

    - by Dinesh Asanka
    Recently I came across an article on DB2 about using UNION instead of OR, so I thought I would carry out similar research on SQL Server: in which scenarios is UNION optimal, and in which would OR be best? I will analyze a few scenarios using samples taken from the AdventureWorks database's Sales.SalesOrderDetail table.

    Scenario 1: Selecting all columns

    We select all columns, with a non-clustered index on the ProductID column.

        --Query 1 : OR
        SELECT * FROM Sales.SalesOrderDetail
        WHERE ProductID = 714 OR ProductID = 709 OR ProductID = 998
           OR ProductID = 875 OR ProductID = 976 OR ProductID = 874

        --Query 2 : UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 709
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 998
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 875
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 976
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 874

    Query 1 uses OR and Query 2 uses UNION. As expected, Query 1's plan is a Clustered Index Scan, but Query 2's plan uses all sorts of operators; since it uses multiple CPUs, you might see CX_PACKET waits as well. The profiler results for the two queries:

        Query   CPU   Reads   Duration   Row Count
        OR       78    1252        389        3854
        UNION   250    7495        660        3854

    You can see from the table that the UNION query does not perform as well as the OR query, though both return the same number of rows (3854). These results indicate that, for this scenario, OR should be used.

    Scenario 2: Non-clustered and clustered index columns only

    This time we select only indexed columns, which means the queries avoid a data-page lookup.

        --Query 1 : OR
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail
        WHERE ProductID = 714 OR ProductID = 709 OR ProductID = 998
           OR ProductID = 875 OR ProductID = 976 OR ProductID = 874
        GO
        --Query 2 : UNION
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 714
        UNION
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 709
        UNION
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 998
        UNION
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 875
        UNION
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 976
        UNION
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 874
        GO

    As in the previous case, Query 2's plan is more complex than Query 1's. The profiler results:

        Query   CPU   Reads   Duration   Row Count
        OR        0      24        208        3854
        UNION     0      38        193        3854

    In this analysis there is only a slight difference between OR and UNION.

    Scenario 3: Selecting all columns for different fields

    Up to now we used only one column (ProductID) in the WHERE clause. What if we have two columns in the WHERE clause, assuming both are covered by non-clustered indexes?

        --Query 1 : OR
        SELECT * FROM Sales.SalesOrderDetail
        WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%'

        --Query 2 : UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber LIKE 'D0B8%'

    Here the plan for the second query has improved. The profiler results:

        Query   CPU   Reads   Duration   Row Count
        OR       47    1278        443        1228
        UNION    31    1334        400        1228

    So in this case too, there is little difference between OR and UNION.

    Scenario 4: Selecting clustered index columns for different fields

    Now let us go only with clustered indexes:

        --Query 1 : OR
        SELECT * FROM Sales.SalesOrderDetail
        WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%'

        --Query 2 : UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber LIKE 'D0B8%'

    Both execution plans are now almost identical, except that an additional Stream Aggregate is used in the first query. This means UNION has an advantage over OR in this scenario. The profiler results:

        Query   CPU   Reads   Duration   Row Count
        OR        0     319        366        1228
        UNION     0      50        193        1228

    Notice the difference: in this scenario UNION has a clear advantage over OR.

    Conclusion

    Whether to use UNION or OR depends on the scenario you are faced with, so you need to do your own analysis before selecting a method. Also, the four scenarios above are not an exhaustive list; I selected them for broad descriptive purposes only.
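
    A hedged footnote to the scenarios above: UNION implies a duplicate-eliminating step (hence operators such as the Stream Aggregate), so when the predicates cannot overlap, as with the distinct ProductID values here, UNION ALL expresses the same result without that extra work.

        -- Sketch: non-overlapping predicates allow UNION ALL, skipping the
        -- duplicate-elimination step that plain UNION pays for.
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
        UNION ALL
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 709;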

    Read the article

  • SQL Server: How to shrink FileStream files?

    - by J4N
    For a project I'm using SQL Server 2008 R2. One table has a FILESTREAM column. I've run some load tests, and now the database uses ~20GB. I've since emptied the tables, except for several configuration tables, but the database was still using a lot of space. So I used Tasks -> Shrink -> Database / Files, but the database still uses something like 16GB. I found that the filestream file is still occupying a lot of space. The problem is that I need to back up this database to move it to the final production server, and even if I tell it to compress the backup I get a file of more than 3.5GB, which is not convenient to store and upload. And I'm planning much bigger tests, so I want to know how to shrink that empty space. When I try, I get this exception:

        The properties SIZE, MAXSIZE, or FILEGROWTH cannot be specified for the FILESTREAM data file 'FileStreamFile'. (Microsoft SQL Server, Error: 5509)

    So what should I do? I found several topics with this error, but they were about removing the FILESTREAM column.
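
    A hedged sketch, not from the question itself: FILESTREAM containers are not shrunk the way ordinary files are. The space is reclaimed by a background garbage collector that only removes files once the log records covering them have been processed (backed up, under the FULL recovery model). One commonly suggested sequence, with a hypothetical database name and path:

        -- Sketch: nudge FILESTREAM garbage collection along.
        -- Under the FULL recovery model the GC waits for log truncation,
        -- so checkpoint + log backup (sometimes more than once) lets it run.
        USE MyDatabase;  -- hypothetical name
        CHECKPOINT;
        BACKUP LOG MyDatabase TO DISK = N'C:\Backups\MyDatabase_log.trn';  -- hypothetical path
        CHECKPOINT;
        -- On SQL Server 2012 and later (not the asker's 2008 R2) there is also:
        -- EXEC sp_filestream_force_garbage_collection @dbname = N'MyDatabase';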

    Read the article

  • An XEvent a Day (15 of 31) – Tracking Ghost Cleanup

    - by Jonathan Kehayias
    If you don't know anything about Ghost Cleanup, I highly recommend that you go read Paul Randal's blog posts Inside the Storage Engine: Ghost cleanup in depth, Ghost cleanup redux, and Turning off the ghost cleanup task for a performance gain. To my knowledge Paul's posts are the only things that cover Ghost Cleanup at any level online. In this post we'll look at how you can use Extended Events to track the activity of Ghost Cleanup inside your SQL Server. To do this, we'll first...(read more)
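
    A hedged sketch of the kind of session such tracking involves; the ghost_cleanup event and ring_buffer target are standard 2008-era Extended Events objects, but treat this exact definition as an illustration rather than the article's own code.

        -- Sketch: capture ghost-cleanup activity with Extended Events.
        CREATE EVENT SESSION [TrackGhostCleanup] ON SERVER
        ADD EVENT sqlserver.ghost_cleanup
        ADD TARGET package0.ring_buffer
        WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);
        GO
        ALTER EVENT SESSION [TrackGhostCleanup] ON SERVER STATE = START;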

    Read the article

  • SQL Solstice

    - by andyleonard
    Introduction: My friends in North Carolina have decided to create a new event called SQL Solstice. Details:

    18 - 20 Aug 2011
    Holiday Inn Brownstone & Conference Center
    1707 Hillsborough Street - Raleigh, NC 27605
    Toll Free 800-331-7919

    18 Aug - A Day of Deep Dives ($259): day-long presentations delivered by folks with real-world, hands-on experience.

    Louis Davidson on Database Design
    Andrew Kelly on Performance Tuning
    Jessica M. Moss on Reporting Services
    Ed Wilson on PowerShell
    (me) on SSIS

    19...(read more)

    Read the article

  • An XEvent a Day (26 of 31) – Configuring Session Options

    - by Jonathan Kehayias
    There are 7 session-level options that can be configured in Extended Events that affect the way an Event Session operates. These options can impact performance and should be considered when configuring an Event Session. I have made use of a few of these periodically throughout this month's blog posts, and in today's blog post I'll cover each of the options separately and provide further information about their usage. Mike Wachal from the Extended Events team at Microsoft, talked...(read more)
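
    A hedged illustration of what those seven session-level options look like in DDL; the event choice and the values assigned are hypothetical, though the option names are the engine's own.

        -- Sketch: an event session spelling out all seven session-level options.
        CREATE EVENT SESSION [OptionsDemo] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
        ADD TARGET package0.ring_buffer
        WITH (
            MAX_MEMORY = 4 MB,                               -- buffer memory for the session
            EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,  -- drop events under pressure
            MAX_DISPATCH_LATENCY = 30 SECONDS,               -- how long events may sit in buffers
            MAX_EVENT_SIZE = 0 KB,                           -- 0 = bounded by MAX_MEMORY
            MEMORY_PARTITION_MODE = NONE,                    -- one buffer set for the instance
            TRACK_CAUSALITY = OFF,                           -- no activity-ID correlation
            STARTUP_STATE = OFF                              -- don't auto-start with the server
        );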

    Read the article

  • SQL Saturday #274 Slovenia

    - by Dejan Sarka
    Yes, here it is: SQL Saturday #274 is coming to Slovenia (#sqlsatSlovenia). The event will take place on Saturday, December 21st, at the company pixi* labs, Informacijske tehnologije, d.o.o.

    Poslovna cona A 2
    SI-4208 Šencur

    This company generously offered to host the event. We, the whole Slovenian SQL Server community, are very grateful for this. A call for speakers has gone out, and we are already getting the first proposals. We are especially happy that we will get the chance to show the foreign speakers how beautiful Slovenia, and especially the capital Ljubljana, is in December. Expect a lot of partying right on the streets, no matter the weather. Be prepared: we have slightly weird customs when it comes to drinks. For example, our regular special discount offer is not three drinks for the price of two; it is six drinks for the price of five. If you are a speaker or want to become one, consider sending a proposal. Most of the sessions will be held in English, so even if you don't want to speak, consider coming as a visitor. Or maybe you would be interested in becoming a sponsor. Although we are targeting a low-budget event, any kind of sponsorship is very welcome. Please feel free to contact the organizers if you are interested in sponsoring: Matija Lah - [email protected], Mladen Prajdic - [email protected], or Dejan Sarka - [email protected]. Looking forward to seeing you all!

    Read the article
