Search Results

Search found 93962 results on 3759 pages for 'server configuration'.

  • SQL SERVER – 2014 CTP1 Available for Download – SQL SERVER 2014 Community Technology Preview 1

    - by Pinal Dave
    Microsoft announced at TechEd Europe that SQL Server 2014 CTP1 is available for download. You can download SQL Server 2014 CTP1 from here. Additionally, there is in-depth documentation of the product in the Product Guide over here. In this blog post I discuss in depth the salient features I was looking forward to in the new version:
      - AlwaysOn now supports 8 secondaries instead of 4
      - Online indexing at the partition level – this is a good thing, as index rebuilds can now be done per partition
      - Statistics at the partition level – this will be a huge improvement in performance
      - In-Memory OLTP provides in-memory storage for the most often used tables in SQL Server
      - Columnstore indexes can be updated – I just can’t wait for this feature
      - Resource Governor can control IO along with CPU and memory
      - Increased performance by extending the SQL Server in-memory buffer pool to SSDs
      - Backup to Azure Storage
    You can read about the new features of SQL Server 2014 in the following links: What’s New (Database Engine), What’s New in Analysis Services and Business Intelligence, What’s New (Integration Services), What’s New (Replication), What’s New (Reporting Services). Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Service Pack, SQL Tips and Tricks, T SQL, Technology Tagged: CTP
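    The buffer pool extension feature mentioned above is worth a quick illustration. The following is a minimal sketch of the SQL Server 2014 syntax; the file path and size are hypothetical placeholders, so adjust them for your own SSD volume before trying it.

        -- Point the buffer pool extension at a file on an SSD volume (path and size are placeholders)
        ALTER SERVER CONFIGURATION
        SET BUFFER POOL EXTENSION ON
            (FILENAME = 'F:\SSDCACHE\BufferPoolExtension.BPE', SIZE = 32 GB);

        -- Turn the extension off again when it is no longer needed
        ALTER SERVER CONFIGURATION
        SET BUFFER POOL EXTENSION OFF;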

    Read the article

  • SQL SERVER – Out of the Box – Activity and Performance Reports from SSMS

    - by pinaldave
    SQL Server Management Studio 2008 is a wonderful tool with many different features. Many times an average user does not use them simply because they are not aware of them. Today, we will learn one such feature. SSMS comes with many built-in performance and activity reports, but we do not use them to their full potential. Let us see how we can access these standard reports: Connect to the SQL Server node >> Right click on it >> Go to Reports >> Click on Standard Reports >> Pick any report. You can see that many reports an average user needs right away are available there. Let me list all the reports available:
      - Server Dashboard
      - Configuration Changes History
      - Schema Changes History
      - Scheduler Health
      - Memory Consumption
      - Activity – All Blocking Transactions
      - Activity – All Cursors
      - Activity – All Sessions
      - Activity – Top Sessions
      - Activity – Dormant Sessions
      - Activity – Top Connections
      - Top Transactions by Age
      - Top Transactions by Blocked Transactions Count
      - Top Transactions by Locks Count
      - Performance – Batch Execution Statistics
      - Performance – Object Execution Statistics
      - Performance – Top Queries by Average CPU Time
      - Performance – Top Queries by Average IO
      - Performance – Top Queries by Total CPU Time
      - Performance – Top Queries by Total IO
      - Service Broker Statistics
      - Transactions
      - Log Shipping Status
    In fact, when you look at the above list, it is fairly clear that these are well thought out, commonly needed reports that are available in SQL Server 2008. Let us run a couple of them and observe their results: Performance – Top Queries by Total CPU Time, and Memory Consumption. There are options for custom reports as well, which we can configure; we will learn about them in some other post. Additionally, you can right click on a report and export it to Excel or PDF. I think this tool can really help those who are just looking for some quick details. Do any of you use this feature? Does it have limitations, and would you like to see more features? Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
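    If you prefer T-SQL over the graphical report, something close to the “Performance – Top Queries by Total CPU Time” report can be approximated from the plan cache DMVs. This is only a rough sketch of what the report surfaces, not the exact query SSMS runs behind the scenes.

        -- Top 10 cached query statements by total CPU time (worker time is reported in microseconds)
        SELECT TOP (10)
               qs.total_worker_time / 1000 AS total_cpu_ms,
               qs.execution_count,
               SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                         ((CASE qs.statement_end_offset
                               WHEN -1 THEN DATALENGTH(st.text)
                               ELSE qs.statement_end_offset
                           END - qs.statement_start_offset) / 2) + 1) AS query_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_worker_time DESC;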

    Read the article

  • SQL SERVER – Securing TRUNCATE Permissions in SQL Server

    - by pinaldave
    Download the script of this article from here. On December 11, 2010, Vinod Kumar, a Databases & BI technology evangelist from Microsoft Corporation, graced Ahmedabad by spending some time with the community during the Community Tech Days (CTD) event. As he was running through a few demos, Vinod asked the audience one of the most fundamental and common interview questions – “What is the difference between a DELETE and TRUNCATE?“ Ahmedabad SQL Server User Group expert Nakul Vachhrajani has come up with an excellent solution to the same. I must congratulate Nakul for this excellent solution, and as an encouragement to User Group members, I am publishing the article here. Nakul Vachhrajani is a Software Specialist and systems development professional with Patni Computer Systems Limited. He has functional experience spanning legacy code deprecation, system design, documentation, development, implementation, testing, maintenance and support of complex systems, providing business intelligence solutions, database administration, performance tuning, optimization, product management, release engineering, process definition and implementation. He has a comprehensive grasp of database administration, development and implementation with MS SQL Server and C, C++, Visual C++/C#. He has about 6 years of total experience in information technology. Nakul is a member of the Ahmedabad and Gandhinagar SQL Server User Groups, and contributes to the community by actively participating in multiple forums and websites such as SQLAuthority.com, BeyondRelational.com, SQLServerCentral.com and many others. Please note: The opinions expressed herein are Nakul’s own personal opinions and do not represent his employer’s views in any way. All data everywhere on Earth goes through a series of four distinct operations, identified by the words CREATE, READ, UPDATE and DELETE – or simply, CRUD. Putting it in Microsoft SQL Server terms, the process goes like this: INSERT, SELECT, UPDATE and DELETE/TRUNCATE. Quite a few interesting responses were received and evaluated live during the session. To summarize them, the most important similarity that came out was that both DELETE and TRUNCATE participate in transactions. The major differences (not all) that came out of the exercise were:
      DELETE:
      - DELETE supports a WHERE clause
      - DELETE removes rows from a table, row-by-row
      - Because DELETE moves row-by-row, it acquires row-level locks
      - Depending upon the recovery model of the database, DELETE is a fully-logged operation
      - Because DELETE moves row-by-row, it can fire off triggers
      TRUNCATE:
      - TRUNCATE does not support a WHERE clause
      - TRUNCATE works by directly deallocating the data pages of a table
      - TRUNCATE acquires a table-level lock (because a lock is acquired, and because TRUNCATE can also participate in a transaction, it has to be a logged operation)
      - TRUNCATE is, therefore, a minimally-logged operation; again, this depends upon the recovery model of the database
      - Triggers are not fired when TRUNCATE is used (because individual row deletions are not logged)
    Finally, Vinod popped the big homework question that must be critically analyzed: “We know that we can restrict a DELETE operation to a particular user, but how can we restrict the TRUNCATE operation to a particular user?” After returning home and having a nice cup of coffee, I noticed that my gray cells immediately started to work. Below is the result of my research. As is always said, the devil is in the details.
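    Before moving on to the permissions angle, here is a small, hedged demo of the behavioral differences listed above. The table name is made up for the example, and the identity reseed behavior shown in the comments is a well-known difference that is not part of the list above.

        -- DELETE vs TRUNCATE on a throwaway table
        CREATE TABLE dbo.CrudDemo (Id INT IDENTITY(1,1), Val CHAR(1));
        INSERT INTO dbo.CrudDemo (Val) VALUES ('a'), ('b');

        DELETE FROM dbo.CrudDemo;                     -- removes rows one by one; identity seed is NOT reset
        INSERT INTO dbo.CrudDemo (Val) VALUES ('c');  -- this row gets Id = 3

        TRUNCATE TABLE dbo.CrudDemo;                  -- deallocates the data pages; identity seed IS reset
        INSERT INTO dbo.CrudDemo (Val) VALUES ('d');  -- this row gets Id = 1

        SELECT Id, Val FROM dbo.CrudDemo;
        DROP TABLE dbo.CrudDemo;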
    Upon looking at the Permissions section for the TRUNCATE statement in Books Online, the following jumps right out: “The minimum permission required is ALTER on table_name. TRUNCATE TABLE permissions default to the table owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles, and are not transferable. However, you can incorporate the TRUNCATE TABLE statement within a module, such as a stored procedure, and grant appropriate permissions to the module using the EXECUTE AS clause.“ Now, what does this mean? Unlike DELETE, one cannot directly assign permissions to a user or set of users allowing or revoking TRUNCATE rights. However, there is a way to circumvent this. It is important to recall that in Microsoft SQL Server, database engine security surrounds the concept of a “securable”, which is any object like a table, stored procedure, trigger, etc. Rights are assigned to a principal on a securable. Refer to the image below (taken from the SQL Server Books Online). SETTING UP THE ENVIRONMENT – (01A_Truncate Table Permissions.sql) Script provided at the end of the article. By the end of this demo, one user will be able to do all the CRUD operations except TRUNCATE, and the other will only be able to execute the TRUNCATE. All you will need for this test is any edition of SQL Server 2008. (With minor changes, these scripts can be made to work with SQL 2005.) We begin by creating the following:
      1. A test database
      2. Two database roles, with associated logins and users
      3. A test table (after switching over to the test database), with some data added into it. I am using row constructors, which are new to SQL 2008.
    Creating the modules that will be used to enforce permissions:
      1. We have already created one of the modules that we will be assigning permissions to. That module is the table: TruncatePermissionsTest
      2. We will now create two stored procedures; one is for the DELETE operation and the other for the TRUNCATE operation. Please note that for all practical purposes, the end result is the same – all data from the table TruncatePermissionsTest is removed
    Assigning the permissions: Now comes the most important part of the demonstration – assigning permissions. A permissions matrix can be worked out as under. To apply the security rights, we use the GRANT and DENY clauses, as under. That’s it! We are now ready for our big test! THE TEST (01B_Truncate Table Test Queries.sql) Script provided at the end of the article. I will now need two separate SSMS connections, one with the login AllowedTruncate and the other with the login RestrictedTruncate. Running the test is simple; all that’s required is to run through the script – 01B_Truncate Table Test Queries.sql. What I will demonstrate here via screen-shots is the behavior of SQL Server when logged in as the AllowedTruncate user. There are a few other combinations than what are highlighted here. I will leave it to the reader to explore the behavior of the RestrictedTruncate user and these additional scenarios, as a form of self-study.
      1. Testing SELECT permissions
      2. Testing TRUNCATE permissions (Remember, “deny by default”?)
      3.
Trying to circumvent security by trying to TRUNCATE the table using the stored procedure Hence, we have now proved that a user can indeed be assigned permissions to specifically assign TRUNCATE permissions. I also hope that the above has sparked curiosity towards putting some security around the probably “destructive” operations of DELETE and TRUNCATE. I would like to wish each and every one of the readers a very happy and secure time with Microsoft SQL Server. (Please find the scripts – 01A_Truncate Table Permissions.sql and 01B_Truncate Table Test Queries.sql that have been used in this demonstration. Please note that these scripts contain purely test-level code only. These scripts must not, at any cost, be used in the reader’s production environments). 01A_Truncate Table Permissions.sql /* ***************************************************************************************************************** Developed By          : Nakul Vachhrajani Functionality         : This demo is focused on how to allow only TRUNCATE permissions to a particular user How to Use            : 1. Run through, step-by-step through the sequence till Step 08 to create a test database 2. Switch over to the "Truncate Table Test Queries.sql" and execute it step-by-step in two different SSMS windows, one where you have logged in as 'RestrictedTruncate', and the other as 'AllowedTruncate' 3. Come back to "Truncate Table Permissions.sql" 4. Execute Step 10 to cleanup! Modifications         : December 13, 2010 - NAV - Updated to add a security matrix and improve code readability when applying security December 12, 2010 - NAV - Created ***************************************************************************************************************** */ -- Step 01: Create a new test database CREATE DATABASE TruncateTestDB GO USE TruncateTestDB GO -- Step 02: Add roles and users to demonstrate the security of the Truncate operation -- 2a. Create the new roles CREATE ROLE AllowedTruncateRole; GO CREATE ROLE RestrictedTruncateRole; GO -- 2b. Create new logins CREATE LOGIN AllowedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON GO CREATE LOGIN RestrictedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON GO -- 2c. Create new Users using the roles and logins created aboave CREATE USER TruncateUser FOR LOGIN AllowedTruncate WITH DEFAULT_SCHEMA = dbo GO CREATE USER NoTruncateUser FOR LOGIN RestrictedTruncate WITH DEFAULT_SCHEMA = dbo GO -- 2d. 
Add the newly created login to the newly created role sp_addrolemember 'AllowedTruncateRole','TruncateUser' GO sp_addrolemember 'RestrictedTruncateRole','NoTruncateUser' GO -- Step 03: Change over to the test database USE TruncateTestDB GO -- Step 04: Create a test table within the test databse CREATE TABLE TruncatePermissionsTest (Id INT IDENTITY(1,1), Name NVARCHAR(50)) GO -- Step 05: Populate the required data INSERT INTO TruncatePermissionsTest VALUES (N'Delhi'), (N'Mumbai'), (N'Ahmedabad') GO -- Step 06: Encapsulate the DELETE within another module CREATE PROCEDURE proc_DeleteMyTable WITH EXECUTE AS SELF AS DELETE FROM TruncateTestDB..TruncatePermissionsTest GO -- Step 07: Encapsulate the TRUNCATE within another module CREATE PROCEDURE proc_TruncateMyTable WITH EXECUTE AS SELF AS TRUNCATE TABLE TruncateTestDB..TruncatePermissionsTest GO -- Step 08: Apply Security /* *****************************SECURITY MATRIX*************************************** =================================================================================== Object                   | Permissions |                 Login |             | AllowedTruncate   |   RestrictedTruncate |             |User:NoTruncateUser|   User:TruncateUser =================================================================================== TruncatePermissionsTest  | SELECT,     |      GRANT        |      (Default) | INSERT,     |                   | | UPDATE,     |                   | | DELETE      |                   | -------------------------+-------------+-------------------+----------------------- TruncatePermissionsTest  | ALTER       |      DENY         |      (Default) -------------------------+-------------+----*/----------------+----------------------- proc_DeleteMyTable | EXECUTE | GRANT | DENY -------------------------+-------------+-------------------+----------------------- proc_TruncateMyTable | EXECUTE | DENY | GRANT -------------------------+-------------+-------------------+----------------------- *****************************SECURITY MATRIX*************************************** */ /* Table: TruncatePermissionsTest*/ GRANT SELECT, INSERT, UPDATE, DELETE ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser GO DENY ALTER ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser GO /* Procedure: proc_DeleteMyTable*/ GRANT EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO NoTruncateUser GO DENY EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO TruncateUser GO /* Procedure: proc_TruncateMyTable*/ DENY EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO NoTruncateUser GO GRANT EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO TruncateUser GO -- Step 09: Test --Switch over to the "Truncate Table Test Queries.sql" and execute it step-by-step in two different SSMS windows: --    1. one where you have logged in as 'RestrictedTruncate', and --    2. 
the other as 'AllowedTruncate' -- Step 10: Cleanup sp_droprolemember 'AllowedTruncateRole','TruncateUser' GO sp_droprolemember 'RestrictedTruncateRole','NoTruncateUser' GO DROP USER TruncateUser GO DROP USER NoTruncateUser GO DROP LOGIN AllowedTruncate GO DROP LOGIN RestrictedTruncate GO DROP ROLE AllowedTruncateRole GO DROP ROLE RestrictedTruncateRole GO USE MASTER GO DROP DATABASE TruncateTestDB GO 01B_Truncate Table Test Queries.sql /* ***************************************************************************************************************** Developed By          : Nakul Vachhrajani Functionality         : This demo is focused on how to allow only TRUNCATE permissions to a particular user How to Use            : 1. Switch over to this from "Truncate Table Permissions.sql", Step #09 2. Execute this step-by-step in two different SSMS windows a. One where you have logged in as 'RestrictedTruncate', and b. The other as 'AllowedTruncate' 3. Return back to "Truncate Table Permissions.sql" 4. Execute Step 10 to cleanup! Modifications         : December 12, 2010 - NAV - Created ***************************************************************************************************************** */ -- Step 09A: Switch to the test database USE TruncateTestDB GO -- Step 09B: Ensure that we have valid data SELECT * FROM TruncatePermissionsTest GO -- (Expected: Following error will occur if logged in as "AllowedTruncate") -- Msg 229, Level 14, State 5, Line 1 -- The SELECT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'. --Step 09C: Attempt to Truncate Data from the table without using the stored procedure TRUNCATE TABLE TruncatePermissionsTest GO -- (Expected: Following error will occur) --  Msg 1088, Level 16, State 7, Line 2 --  Cannot find the object "TruncatePermissionsTest" because it does not exist or you do not have permissions. -- Step 09D:Regenerate Test Data INSERT INTO TruncatePermissionsTest VALUES (N'London'), (N'Paris'), (N'Berlin') GO -- (Expected: Following error will occur if logged in as "AllowedTruncate") -- Msg 229, Level 14, State 5, Line 1 -- The INSERT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'. --Step 09E: Attempt to Truncate Data from the table using the stored procedure EXEC proc_TruncateMyTable GO -- (Expected: Will execute successfully with 'AllowedTruncate' user, will error out as under with 'RestrictedTruncate') -- Msg 229, Level 14, State 5, Procedure proc_TruncateMyTable, Line 1 -- The EXECUTE permission was denied on the object 'proc_TruncateMyTable', database 'TruncateTestDB', schema 'dbo'. -- Step 09F:Regenerate Test Data INSERT INTO TruncatePermissionsTest VALUES (N'Madrid'), (N'Rome'), (N'Athens') GO --Step 09G: Attempt to Delete Data from the table without using the stored procedure DELETE FROM TruncatePermissionsTest GO -- (Expected: Following error will occur if logged in as "AllowedTruncate") -- Msg 229, Level 14, State 5, Line 2 -- The DELETE permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'. 
-- Step 09H:Regenerate Test Data INSERT INTO TruncatePermissionsTest VALUES (N'Spain'), (N'Italy'), (N'Greece') GO --Step 09I: Attempt to Delete Data from the table using the stored procedure EXEC proc_DeleteMyTable GO -- (Expected: Following error will occur if logged in as "AllowedTruncate") -- Msg 229, Level 14, State 5, Procedure proc_DeleteMyTable, Line 1 -- The EXECUTE permission was denied on the object 'proc_DeleteMyTable', database 'TruncateTestDB', schema 'dbo'. --Step 09J: Close this SSMS window and return back to "Truncate Table Permissions.sql" Thank you Nakul to take up the challenge and prove that Ahmedabad and Gandhinagar SQL Server User Group has talent to solve difficult problems. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Change Database Access to Single User Mode Using SSMS

    - by pinaldave
    I have previously written about how, using a T-SQL script, we can convert database access to single user mode before a backup. I was recently asked if the same can be done using SQL Server Management Studio. Yes! You can do it from the Database Properties dialog (right click on the database and select Properties) and follow the image. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
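    For reference, this is the T-SQL route the post refers to – a minimal sketch with a placeholder database name; WITH ROLLBACK IMMEDIATE disconnects other sessions rather than waiting for them.

        -- Switch a database to single user mode, do the work, then switch back
        ALTER DATABASE [YourDatabase] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        -- ... take the backup or perform the maintenance here ...
        ALTER DATABASE [YourDatabase] SET MULTI_USER;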

    Read the article

  • MMC and Server Manager Authentication Errors - Access Denied

    - by Vazgen
    I'm trying to connect remotely from my Windows 8 client to manage my Hyper-V Server 2012. I have done everything I can find to configure remote management of the server, including:
      - Added a net user on the server
      - Enabled anonymous DCOM access on the server and client
      - Added firewall rules for "Windows Firewall Remote Management" and "Windows Management Instrumentation (WMI)" on the server
      - Added a firewall exception on the server for the client IP
      - Added a cmdkey entry on the client
      - Added the server to the TrustedHosts list on the client
      - Added the LocalAccountTokenFilterPolicy registry entry on the server
      - Added the client IP to the server's hosts file
      - Added the server IP to the client's hosts file
    I cannot believe I am still getting these errors. What's even more strange is that I can connect in Hyper-V Manager and create VMs, but not in MMC and Server Manager. I also get Access Denied trying to open the Authorization Store on my server from my client using Authorization Manager. I'm providing all the errors because I have a feeling they stem from the same problem. Does anybody see anything I missed?
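    For readers hitting the same wall, here is a hedged sketch of the client/server plumbing the question describes, consolidated into commands. The server name and credentials are placeholders; run the netsh/reg lines on the server and the cmdkey/Set-Item lines on the client. The "Remote Event Log Management" firewall group is a commonly missed prerequisite for Server Manager and several MMC snap-ins, so it is included here as an assumption worth checking.

        :: On the server (elevated prompt): firewall rule groups and the token-filter policy for local accounts
        netsh advfirewall firewall set rule group="Windows Management Instrumentation (WMI)" new enable=yes
        netsh advfirewall firewall set rule group="Remote Event Log Management" new enable=yes
        reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f

        :: On the client (elevated prompt): stored credentials and TrustedHosts (placeholder name and password)
        cmdkey /add:HVSERVER /user:HVSERVER\Administrator /pass:YourPasswordHere
        powershell -Command "Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'HVSERVER' -Concatenate -Force"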

    Read the article

  • SQL SERVER – 2011 – Clipboard Ring – CTRL+SHIFT+V

    - by pinaldave
    While I was writing my earlier post SQL SERVER – 2011 – Multi-Monitor SSMS Windows, I found out that one more feature which existed in Visual Studio is now also part of SQL Server 2011 (Denali). The feature is called the clipboard ring. This is how it works: cut (or copy) multiple objects one by one using the regular CTRL + X. Now, instead of pasting using CTRL+V, use CTRL+SHIFT+V. You will see that the pasted value rotates through what you cut earlier. I was really happy, as I think this is one of the features of VS that I really wanted SSMS to have. Try it and let me know what you think of it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to SQL Server Security

    - by pinaldave
    This blog post is inspired from Beginning SQL Joes 2 Pros: The SQL Hands-On Guide for Beginners – SQL Exam Prep Series 70-433 – Volume 1. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza] This is a follow-up to my earlier blog post on the same subject – SQL SERVER – Introduction to SQL Server Security – A Primer. In that article we discussed the basic terminology of security. The article further covers the following important concepts of security: granting permissions, denying permissions, and revoking permissions. These three are the most important concepts related to security and SQL Server. There are many more things one has to learn, but without the beginner's fundamentals one can't learn the advanced concepts. Let us have a small quiz and check how many of you get the fundamentals right. Quiz:
      1) If you granted Phil control to the server, but denied his ability to create databases, what would his effective permissions be?
         - Phil can do everything.
         - Phil can do nothing.
         - Phil can do everything except create databases.
      2) If you granted Phil control to the server and revoked his ability to create databases, what would his effective permissions be?
         - Phil can do everything.
         - Phil can do nothing.
         - Phil can do everything except create databases.
      3) You have a login named James who has Control Server permission. You want to eliminate his ability to create databases without affecting any other permissions. What SQL statement would you use?
         - ALTER LOGIN James DISABLE
         - DROP LOGIN James
         - DENY CREATE DATABASE To James
         - REVOKE CREATE DATABASE To James
         - GRANT CREATE DATABASE To James
    Now make sure that you write down all the answers on a piece of paper. Watch the following video and read the earlier article over here. If you want to change your answers, you still have a chance. Solution: 1) 3, 2) 1, 3) 3. Now let us check the answers; compare your answers to these. I am very confident you will get them correct. Available at USA: Amazon India: Flipkart | IndiaPlaza Volume: 1, 2, 3, 4, 5 Please leave your feedback in the comment area for the quiz and video. Did you know all the answers of the quiz? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
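    To make the quiz concrete, here is a small T-SQL sketch of the three statements being tested. The login name is hypothetical, and note that at the server scope the permission is spelled CREATE ANY DATABASE.

        -- A broad grant, narrowed by a DENY: Phil can do everything except create databases
        GRANT CONTROL SERVER TO Phil;
        DENY CREATE ANY DATABASE TO Phil;

        -- REVOKE only removes a previously issued GRANT or DENY entry for that permission;
        -- after this, Phil falls back to the full rights implied by CONTROL SERVER
        REVOKE CREATE ANY DATABASE FROM Phil;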

    Read the article

  • SQL SERVER – SQL Server High Availability Options – Notes from the Field #032

    - by Pinal Dave
    [Notes from Pinal]: When it comes to High Availability or Disaster Recovery, I often see people getting confused. There are so many options available that when users have to select the most optimal solution for their organization, they are often confused. Most people even know the salient features of the various options, but when they have to pick one single option to use, they are often not sure which one. I like to ask my dear friend Tim all these kinds of complicated questions. He has a skill for making a complex subject very simple and easy to understand. Linchpin People are database coaches and wellness experts for a data driven world. In this episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains in very simple words the best High Availability option for your SQL Server. Working with SQL Server, a common challenge we are faced with is providing the maximum uptime possible. To meet these demands we have to design a solution to provide High Availability (HA). Microsoft SQL Server, depending on your edition, provides you with several options. This could be database mirroring, log shipping, failover clusters, availability groups or replication. Each possible solution comes with pros and cons. No single solution fits all scenarios, so understanding which solution meets which need is important. As with anything IT related, you need to fully understand your requirements before trying to solve the problem. When it comes to building an HA solution, you need to understand the risk your organization needs to mitigate the most. I have found that most are concerned about hardware failures and OS failures. Other common concerns are data corruption or storage issues. For data corruption or storage issues you can mitigate those concerns by having a second copy of the databases. That can be accomplished with database mirroring, log shipping, replication or availability groups with a secondary replica. Failover clustering and virtualization with shared storage do not provide redundancy of the data. I recently created a chart outlining some pros and cons of each of the technologies, which I posted on my blog. I like to use this chart to help illustrate how each technology provides a certain number of benefits. Each of these solutions carries with it some level of cost and complexity. As database professionals we should all be familiar with these technologies so we can make the best possible choice for our organization. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Note: Tim has also written an excellent book on SQL Backup and Recovery, a must have for everyone. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Shrinking Database

    Read the article

  • SQL SERVER – Denali Feature – Zoom Query Editor

    - by pinaldave
    The next version of SQL Server, ‘Denali’, comes with a very neat feature which can be used during presentations and group discussions, or by people who prefer large fonts: you can zoom the Query Editor. I have increased the zoom level to 400 percent, and for that reason the fonts shown are very large. You can adjust the zoom to whatever size is convenient for you. One more reason to go for the next version of SQL Server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017 – Video

    - by pinaldave
    You might have heard many times that one should not use SELECT *, as there are many disadvantages to its usage. I also believe that there are only rare occasions when we need every single column of a query. In most cases we only need a few columns, and we should retrieve only those columns. SELECT * has many disadvantages. Let me list a few, and you can add the remaining ones as a comment:
      - Retrieves unnecessary columns and increases network traffic
      - When new columns are added, views need to be refreshed manually
      - Leads to usage of a sub-optimal execution plan
      - Uses the clustered index in most cases instead of the optimal index
      - It is difficult to debug
    There are two quick tricks I have discussed in the video which explain how users can avoid SELECT * and instead list the column names:
      1) Drag the Columns folder from SQL Server Management Studio to the Query Editor
      2) Right click on the table name >> Script Table As >> SELECT To… >> select the option
    It is extremely easy to list the column names of a table. In today’s sixty seconds video, you will notice that I was able to demonstrate both methods very quickly. From now on there should be no excuse for not listing column names. Let me ask a question back – is there ever a reason to SELECT *? If yes, would you please share that as a comment. More on SELECT *: SQL SERVER – Solution – Puzzle – SELECT * vs SELECT COUNT(*) SQL SERVER – Puzzle – SELECT * vs SELECT COUNT(*) SQL SERVER – SELECT vs. SET Performance Comparison I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share with you educational material. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video
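    A third trick, not covered in the video, is to let the catalog views build the list for you. This is just a sketch; the table name is only an example and assumes the AdventureWorks sample database.

        -- Build a comma-separated, quoted column list for a given table
        SELECT STUFF((
                   SELECT ', ' + QUOTENAME(c.name)
                   FROM sys.columns AS c
                   WHERE c.object_id = OBJECT_ID('Sales.SalesOrderDetail')
                   ORDER BY c.column_id
                   FOR XML PATH('')), 1, 2, '') AS ColumnList;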

    Read the article

  • SQL SERVER – The Story of a Lesser Known Startup Parameter in SQL Server – Guest Post by Balmukund Lakhani

    - by Pinal Dave
    This is a fantastic blog post from my dear friend Balmukund ( blog | twitter | facebook ). He presented a fantastic session at our last UG meeting and there were lots of requests from attendees that he blog about it. Well, here is the blog post about that very popular UG session. Let us read the entire blog post in Balmukund’s own voice. During my last session at the SQL Bangalore User Group (Facebook) meeting, I was lucky enough to deliver a session on SQL Server startup issues. The name of the session was “SQL Engine Starting Trouble – How to start?” From the feedback, I realized that one of the “not well known” startup parameters is “-m”. Okay, you might say “I know that this is used to start SQL in single user mode”. But what you might not know is that you can pass a string with -m which has special meaning and use. I have used this parameter in my blog here, but it looks like not many of you have seen that. It happens most of the time that when we want to start SQL Server in single user mode, someone else makes a connection before we can. The only choice we have is to repeat the same process again till we succeed. Some smart DBAs may disable the remote network protocols (TCP/IP and Named Pipes) of the SQL instance and allow only local connections to SQL. Once the activity is complete, our dear smart DBA has to remember to re-enable the network protocols. Sometimes, it may be a local service or application getting a connection to SQL before we can. There is a better way to deal with it. Yes, you have guessed it correctly: the -m parameter followed by a string. Since I work with the SQL Product Support team, I may know a few more undocumented commands and parameters, but this is not undocumented stuff. It’s already documented in Books Online. So in this blog, I am going to show a demo of its usage. As the documentation says, “Do not use this option as a security feature.” So please read this blog as a knowledge enhancer and troubleshooting aid, not a security feature. On my laptop, I have a default instance of SQL Server 2012 and here is what we see in Configuration Manager. Now, I go ahead and stop the SQL service by selecting SQL Server (MSSQLServer) > Right Click > Stop. There are multiple ways to start SQL with a startup parameter:
      1) Use the Net Start command from a command prompt: Net Start MSSQLServer /mSQLCMD. This is the simplest way to add a startup parameter to SQL. The parameter is cleared once we stop and start SQL again.
      2) Add the startup parameter via Configuration Manager (the steps are already listed here); we need to add -mSQLCMD. If we compare 1 and 2, it is clear that this one stays in effect until we modify the startup parameters and remove -m.
      3) Start the SQL service via the command line: SQLServr.exe –mSQLCMD –s<InstanceName>.
    Wait, what does SQLCMD mean with -m? It is the instruction to start SQL Server in single user mode and allow only the application whose name is SQLCMD. Any other application would fail with a Login Failed for User error message. It is important to note that the string is case sensitive. This value should be picked up from the program_name column in sys.dm_exec_sessions. I have made a connection using SQLCMD and, as we can see, it comes through as upper case “SQLCMD”. If we want only Management Studio query windows to connect, then we need to give -m”Microsoft SQL Server Management Studio – Query” as the startup parameter. In the example below, I have given it as SQLCMd (lower case d at the end) and we notice that we are not able to connect to the SQL instance.
    The above proves that the parameter works as expected and that it is case sensitive. The error log shows the information below. How do you get the error log location? I have already blogged about it. Hope you have learned something new. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
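    If you want to check exactly which application-name string to pass after -m, a quick look at the sessions DMV shows the value SQL Server records for each connection. This is a small sketch; when the string contains spaces, remember to quote it, for example Net Start MSSQLServer /m"Microsoft SQL Server Management Studio - Query", taking the exact string from the column below.

        -- The (case sensitive) string to use with -m comes from program_name
        SELECT session_id, login_name, program_name
        FROM sys.dm_exec_sessions
        WHERE is_user_process = 1;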

    Read the article

  • SQL Server 2012 content on Channel 9

    - by jamiet
    A mountain of SQL Server 2012 video content featuring Greg Low, Jonathan Kehayias, Joe Sack and Roger Doherty has just been released on Channel 9. Channel 9 has great support for tags and RSS feeds, so if you want to automatically download all of that content you can simply add the following RSS feed: http://channel9.msdn.com/Tags/sql+server+2012/RSS to your podcast reader of choice and have fun learning about all the new features in SQL Server 2012 such as: AlwaysOn, Power View, SSDT, SSRS Data Alerts, SSAS Tabular Modelling, DAX Improvements, MDS improvements, SSIS improvements, DQS, StreamInsight improvements, Data-Tier Apps (DACs), LocalDB, FileTable, Spatial improvements, T-SQL paging, Distributed Replay, XEvents improvements, ADO.Net Code-first, T-SQL improvements, Server roles, Partitioning improvements, ColumnStore. Whew, quite a list! @jamiet

    Read the article

  • SQL SERVER – Difference Between CURRENT_TIMESTAMP and GETDATE() – CURRENT_TIMESTAMP Equivalent in SQL Server

    - by pinaldave
    A common question I often get from Oracle/MySQL professionals: “What is the equivalent of CURRENT_TIMESTAMP in SQL Server?” Here is a common question I often get from SQL Server professionals: “What are the differences between CURRENT_TIMESTAMP and GETDATE()?” Very simple questions, but they have shown up so frequently that I felt like writing about them. Well, in SQL Server GETDATE() is equivalent to CURRENT_TIMESTAMP, and if you use CURRENT_TIMESTAMP in your SELECT statement it will work just fine. You can see in the above example – both of them return the same value. Now let us go to the next question regarding the difference between GETDATE and CURRENT_TIMESTAMP. Well, as a matter of fact, there is no difference between them in SQL Server (Reference Link). CURRENT_TIMESTAMP is an ANSI SQL function, whereas GETDATE is the T-SQL implementation of the same function. Both of them derive their value from the operating system of the computer on which the SQL Server instance is running. The above discussion prompts another question – in this case, what should one use, GETDATE or CURRENT_TIMESTAMP? Well, this is indeed a tricky and interesting question. I am very comfortable using GETDATE(), so I will continue to use it, but as a matter of fact there is no right or wrong answer. If you want to follow the ancient saying “When in Rome, do as the Romans do”, I suggest using GETDATE(), or continue using CURRENT_TIMESTAMP. With that said, there is one very important property we all need to keep in mind: if you use CURRENT_TIMESTAMP while creating an object, it is automatically converted to GETDATE() and stored internally. To illustrate what I am suggesting, here is the example – create a table using the following script: CREATE TABLE [dbo].[TestTable]( [Cold2] [datetime] NULL ) ON [PRIMARY] GO ALTER TABLE [dbo].[TestTable] ADD DEFAULT (CURRENT_TIMESTAMP) FOR [Cold2] GO Now go to SSMS and generate the script for the table and you will notice the following syntax: CREATE TABLE [dbo].[TestTable]( [Cold2] [datetime] NULL ) ON [PRIMARY] GO ALTER TABLE [dbo].[TestTable] ADD DEFAULT (GETDATE()) FOR [Cold2] GO You can notice that SQL Server has automatically converted CURRENT_TIMESTAMP to GETDATE(). I guess this gives us an idea of how they behave. Now go ahead and make your choice! Do let me know which one you will use, CURRENT_TIMESTAMP or GETDATE(), in the comments area. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
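    A quick sketch of the side-by-side comparison the post describes – both functions read the same clock, so the two columns below come back identical.

        SELECT CURRENT_TIMESTAMP AS [Current_TimeStamp],
               GETDATE()         AS [GetDate];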

    Read the article

  • HP E200i Controller RAID Configuration for Win2008 Ent, Sql Server, IIS Apps etc – need opinion

    - by mn
    Hello, I currently run RAID 5 (4 x SAS drives) with Win 2008 Ent (1x host), Win 2008 Ent (3x guests), SQL Server 2005 Std (on a guest) and 3 x ASP.NET applications (on guests). I bought 3 x drives to create an additional array (on the same E200i controller; I am now waiting for confirmation whether it is possible to have 3 RAID arrays on the same controller). I am planning to have 2 x RAID 5 (if it is possible): the first RAID 5 with all VHD files, systems, etc., and the second RAID 5 with all data files and transaction logs. I am looking for opinions on how to optimize the data layer (seven drives, one controller).

    Read the article

  • Separate 2 networks with 1 Windows Server

    - by SamuGG
    The situation is: I have 1 router 192.168.1.1, 1 switch, 1 windows server and a basic LAN of devices accessing it. I need to split into 2 separate LANs with full Internet access each, but isolated from each other. Given that, the server is a Windows Server 2008 R2 with 2 NICs: NIC1: 192.168.1.2 NIC2: 192.168.2.2 The router has no dhcp configuration. Please, can anyone explain gracefully, step by step, what do I need to do? What would be the 2 NICs full configuration? What services do I need to install? I don't want devices on either network to see devices on the other network, they must be completely separate. I guess I'm missing the routing procedure step, but I have no idea how is that done. For example: tell the server that devices with gateway 192.168.2.2 must send traffic for internet to 192.168.1.1 router. Thanks in advance.
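    A hedged sketch of the NIC addressing described above, using made-up interface names; note this only sets the addresses. Windows Server does not forward traffic between its NICs unless routing is enabled, which is what keeps the two LANs isolated, but it also means the 192.168.2.x side still needs its own path to the Internet (for example NAT/RRAS on the server, or a router interface on that subnet) – that is the routing step the question mentions.

        :: Run from an elevated prompt on the server; interface names are placeholders
        netsh interface ipv4 set address name="NIC1" static 192.168.1.2 255.255.255.0 192.168.1.1
        netsh interface ipv4 set address name="NIC2" static 192.168.2.2 255.255.255.0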

    Read the article

  • SQL SERVER – Database Dynamic Caching by Automatic SQL Server Performance Acceleration

    - by pinaldave
    My second look at SafePeak’s new version (2.1) revealed a few additional interesting features. For those of you who haven’t read my previous reviews of SafePeak and are not familiar with it, here is a quick brief: SafePeak is in the business of accelerating the performance of SQL Server applications, as well as their scalability, without making code changes to the applications or to the databases. SafePeak performs database dynamic caching, by caching in memory the result sets of queries and stored procedures while keeping all those cached results correct and up to date. Cached queries are retrieved from SafePeak’s RAM at microsecond speed and are not sent to the SQL Server. The application gets much faster results (100-500 microseconds), the load on the SQL Server is reduced (less CPU and IO) and the application or the infrastructure gets better scalability. The SafePeak solution is hosted either within your cloud servers, hosted servers or your enterprise servers, as part of the application architecture. The application is connected via a change of connection strings or by adding a reroute line in the c:\windows\system32\drivers\etc\hosts file on all application servers (an example of such a reroute line appears after this excerpt). For those who would like to learn more about SafePeak’s architecture and how it works, I suggest reading the vendor’s webpage: SafePeak Architecture. More interesting new features in SafePeak 2.1: In my previous review of the new SafePeak I covered the first 4 things I noticed (check out my article “SQLAuthority News – SafePeak Releases a Major Update: SafePeak version 2.1 for SQL Server Performance Acceleration”): cache setup and fine-tuning – a critical part of getting good caching results; database templates; choosing which database to cache; monitoring and analysis options in SafePeak. Since then I have had a chance to play with SafePeak some more and here is what I found. 5. Analysis of SQL performance (present and history): In SafePeak v2.1 the tools for understanding performance became more comprehensive. Every 15 minutes SafePeak creates and updates various performance statistics. Every query (or procedure execution) that arrives at SafePeak gets a SQL pattern, and once it is used again there are statistics for that pattern. An important part of this product is that it understands the dependencies of every pattern (list of tables, views, user defined functions and procs). From this understanding SafePeak creates important analysis information on the performance of every object: response time from the database, response time from the SafePeak cache, average response time, percentage of traffic and a breakdown of behavior. One of the interesting things this behavior column shows is how often the object is actually updated. The breakdown analysis provides the above information for queries and procedures, tables, views, databases and even at the instance level. The data is now shown for all arriving queries, both read queries (that can be cached) and all types of updates like DMLs, DDLs, DCLs, and even session settings queries. The stats are updated every 15 minutes and the SafePeak dashboard allows going back in time and investigating what happened within any time frame. 6.
    Logon trigger, for making sure nothing corrupts SafePeak’s cached data: If you have an application with many parts, many servers, or many possible locations that can update the database, or the SQL Server is accessible to many DBAs or software engineers, each can access some database directly and make changes without going through SafePeak – this can create a potential corruption of the data stored in the SafePeak cache. To make sure the SafePeak cache is correct, all updates need to arrive through SafePeak; if a DBA accesses the database directly and makes changes, for example, SafePeak will simply not know about it and will not clean its cache. In the new version, SafePeak brought a new feature called “Logon Trigger” to solve the above challenge. With a special click of a button SafePeak can deploy a server logon trigger (with a CLR object) on your SQL Server that monitors all connections and informs SafePeak of any connection that does not come from SafePeak. In the SafePeak dashboard there is an interface that allows control over which logins can be ignored, based on login names and IPs, while the rest will invoke a SafePeak cache cleanup and actually lock the SafePeak cache until that connection is closed. It is important to note that this does not interrupt any logins; it only informs SafePeak of such connections. On the Dashboard screen in SafePeak you will be able to see those connections and then decide what to do with them. Configuration of this feature in the SafePeak dashboard can be done here: Settings -> SQL instances management -> click on instance -> Logon Trigger tab. Other features: 7. User management: the ability to grant permissions to someone without changing the configuration, using SafePeak only as a performance analysis tool. 8. Better reports for analysis of performance using 15 minute resolution charts. 9. Caching of client cursors. 10. Support for IPv6. Summary: SafePeak is a great SQL Server performance acceleration solution for users who want immediate results for sites with performance, scalability and peak-spike challenges, especially if your apps are packaged or 3rd party, since no code changes are made. SafePeak can significantly improve response times by reducing network round trips to the database, decreasing CPU resource usage, and eliminating I/O and storage access. The SafePeak team provides a free fully functional trial www.safepeak.com/download and actually provides one-on-one assistance during the trial. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
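    As mentioned above, here is what a reroute line in the hosts file might look like; the IP address and host name are entirely hypothetical – the idea is simply that the SQL Server's host name, as used in the application connection strings, resolves to the SafePeak machine instead.

        # c:\windows\system32\drivers\etc\hosts on each application server (hypothetical values)
        10.10.10.25    sqlprod01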

    Read the article

  • SQL SERVER – Solution to Puzzle – Simulate LEAD() and LAG() without Using SQL Server 2012 Analytic Function

    - by pinaldave
    Earlier I wrote a series on SQL Server Analytic Functions of SQL Server 2012. During the series to keep the learning maximum and having fun, we had few puzzles. One of the puzzle was simulating LEAD() and LAG() without using SQL Server 2012 Analytic Function. Please read the puzzle here first before reading the solution : Write T-SQL Self Join Without Using LEAD and LAG. When I was originally wrote the puzzle I had done small blunder and the question was a bit confusing which I corrected later on but wrote a follow up blog post on over here where I describe the give-away. Quick Recap: Generate following results without using SQL Server 2012 analytic functions. I had received so many valid answers. Some answers were similar to other and some were very innovative. Some answers were very adaptive and some did not work when I changed where condition. After selecting all the valid answer, I put them in table and ran RANDOM function on the same and selected winners. Here are the valid answers. No Joins and No Analytic Functions Excellent Solution by Geri Reshef – Winner of SQL Server Interview Questions and Answers (India | USA) WITH T1 AS (SELECT Row_Number() OVER(ORDER BY SalesOrderDetailID) N, s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663)) SELECT SalesOrderID,SalesOrderDetailID,OrderQty, CASE WHEN N%2=1 THEN MAX(CASE WHEN N%2=0 THEN SalesOrderDetailID END) OVER (Partition BY (N+1)/2) ELSE MAX(CASE WHEN N%2=1 THEN SalesOrderDetailID END) OVER (Partition BY N/2) END LeadVal, CASE WHEN N%2=1 THEN MAX(CASE WHEN N%2=0 THEN SalesOrderDetailID END) OVER (Partition BY N/2) ELSE MAX(CASE WHEN N%2=1 THEN SalesOrderDetailID END) OVER (Partition BY (N+1)/2) END LagVal FROM T1 ORDER BY SalesOrderID, SalesOrderDetailID, OrderQty; GO No Analytic Function and Early Bird Excellent Solution by DHall – Winner of Pluralsight 30 days Subscription -- a query to emulate LEAD() and LAG() ;WITH s AS ( SELECT 1 AS ldOffset, -- equiv to 2nd param of LEAD 1 AS lgOffset, -- equiv to 2nd param of LAG NULL AS ldDefVal, -- equiv to 3rd param of LEAD NULL AS lgDefVal, -- equiv to 3rd param of LAG ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS row, SalesOrderID, SalesOrderDetailID, OrderQty FROM Sales.SalesOrderDetail WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ) SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty, ISNULL( sLd.SalesOrderDetailID, s.ldDefVal) AS LeadValue, ISNULL( sLg.SalesOrderDetailID, s.lgDefVal) AS LagValue FROM s LEFT OUTER JOIN s AS sLd ON s.row = sLd.row - s.ldOffset LEFT OUTER JOIN s AS sLg ON s.row = sLg.row + s.lgOffset ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty No Analytic Function and Partition By Excellent Solution by DHall – Winner of Pluralsight 30 days Subscription /* a query to emulate LEAD() and LAG() */ ;WITH s AS ( SELECT 1 AS LeadOffset, /* equiv to 2nd param of LEAD */ 1 AS LagOffset, /* equiv to 2nd param of LAG */ NULL AS LeadDefVal, /* equiv to 3rd param of LEAD */ NULL AS LagDefVal, /* equiv to 3rd param of LAG */ /* Try changing the values of the 4 integer values above to see their effect on the results */ /* The values given above of 0, 0, null and null behave the same as the default 2nd and 3rd parameters to LEAD() and LAG() */ ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS row, SalesOrderID, SalesOrderDetailID, OrderQty FROM Sales.SalesOrderDetail WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ) SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty, 
ISNULL( sLead.SalesOrderDetailID, s.LeadDefVal) AS LeadValue, ISNULL( sLag.SalesOrderDetailID, s.LagDefVal) AS LagValue FROM s LEFT OUTER JOIN s AS sLead ON s.row = sLead.row - s.LeadOffset /* Try commenting out this next line when LeadOffset != 0 */ AND s.SalesOrderID = sLead.SalesOrderID /* The additional join criteria on SalesOrderID above is equivalent to PARTITION BY SalesOrderID in the OVER clause of the LEAD() function */ LEFT OUTER JOIN s AS sLag ON s.row = sLag.row + s.LagOffset /* Try commenting out this next line when LagOffset != 0 */ AND s.SalesOrderID = sLag.SalesOrderID /* The additional join criteria on SalesOrderID above is equivalent to PARTITION BY SalesOrderID in the OVER clause of the LAG() function */ ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty No Analytic Function and CTE Usage Excellent Solution by Pravin Patel - Winner of SQL Server Interview Questions and Answers (India | USA) --CTE based solution ; WITH cteMain AS ( SELECT SalesOrderID, SalesOrderDetailID, OrderQty, ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS sn FROM Sales.SalesOrderDetail WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ) SELECT m.SalesOrderID, m.SalesOrderDetailID, m.OrderQty, sLead.SalesOrderDetailID AS leadvalue, sLeg.SalesOrderDetailID AS leagvalue FROM cteMain AS m LEFT OUTER JOIN cteMain AS sLead ON sLead.sn = m.sn+1 LEFT OUTER JOIN cteMain AS sLeg ON sLeg.sn = m.sn-1 ORDER BY m.SalesOrderID, m.SalesOrderDetailID, m.OrderQty No Analytic Function and Co-Related Subquery Usage Excellent Solution by Pravin Patel – Winner of SQL Server Interview Questions and Answers (India | USA) -- Co-Related subquery SELECT m.SalesOrderID, m.SalesOrderDetailID, m.OrderQty, ( SELECT MIN(SalesOrderDetailID) FROM Sales.SalesOrderDetail AS l WHERE l.SalesOrderID IN (43670, 43669, 43667, 43663) AND l.SalesOrderID >= m.SalesOrderID AND l.SalesOrderDetailID > m.SalesOrderDetailID ) AS lead, ( SELECT MAX(SalesOrderDetailID) FROM Sales.SalesOrderDetail AS l WHERE l.SalesOrderID IN (43670, 43669, 43667, 43663) AND l.SalesOrderID <= m.SalesOrderID AND l.SalesOrderDetailID < m.SalesOrderDetailID ) AS leag FROM Sales.SalesOrderDetail AS m WHERE m.SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY m.SalesOrderID, m.SalesOrderDetailID, m.OrderQty This was one of the most interesting Puzzle on this blog. Giveaway Winners will get following giveaways. Geri Reshef and Pravin Patel SQL Server Interview Questions and Answers (India | USA) DHall Pluralsight 30 days Subscription Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Function, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014 Microsoft has introduced a new database engine component called In-Memory OLTP aka project “Hekaton” which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory resident data. In-memory OLTP helps us create memory optimized tables which in turn offer significant performance improvement for our typical OLTP workload. The main objective of memory optimized table is to ensure that highly transactional tables could live in memory and remain in memory forever without even losing out a single record. The most significant part is that it still supports majority of our Transact-SQL statement. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking. In-Memory OLTP alleviates the issue of locking, using a new type of multi-version optimistic concurrency control. It also substantially reduces waiting for log writes by generating far less log data and needing fewer log writes. Points to remember Memory-optimized tables refer to tables using the new data structures and key words added as part of In-Memory OLTP. Disk-based tables refer to your normal tables which we used to create in SQL Server since its inception. These tables use a fixed size 8 KB pages that need to be read from and written to disk as a unit. Natively compiled stored procedures refer to an object Type which is new and is supported by in-memory OLTP engine which convert it into machine code, which can further improve the data access performance for memory –optimized tables. Natively compiled stored procedures can only reference memory-optimized tables, they can’t be used to reference any disk –based table. Interpreted Transact-SQL stored procedures, which is what SQL Server has always used. Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables. Interop refers to interpreted Transact-SQL that references memory-optimized tables. Using In-Memory OLTP In-Memory OLTP engine has been available as part of SQL Server 2014 since June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014 hence they are not available with 32-bit editions. Creating Databases Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA. 
Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables: CREATE DATABASE InMemoryDB ON PRIMARY(NAME = [InMemoryDB_data], FILENAME = 'D:\data\InMemoryDB_data.mdf', size=500MB), FILEGROUP [InMemoryDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA (NAME = [InMemoryDB_mod_dir1], FILENAME = 'S:\data\InMemoryDB_mod_dir'), (NAME = [InMemoryDB_mod_dir2], FILENAME = 'R:\data\InMemoryDB_mod_dir') LOG ON (name = [InMemoryDB_log], Filename='L:\log\InMemoryDB_log.ldf', size=500MB) COLLATE Latin1_General_100_BIN2; The above example code creates files on three different drives (D:, S: and R:) for the data files and in-memory storage, so if you would like to run this code, kindly change the drive and folder locations to suit your environment. Note that each logical file name must be unique, so the two memory-optimized containers are named InMemoryDB_mod_dir1 and InMemoryDB_mod_dir2. Also notice that a Windows (non-SQL) binary collation was specified; a BIN2 collation is the only collation supported at this point for any indexes on memory-optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the command below to achieve the same. ALTER DATABASE AdventureWorks2012 ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA; GO ALTER DATABASE AdventureWorks2012 ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod') TO FILEGROUP hekaton_mod; GO Creating Tables There is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE query example. DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY) A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY = SCHEMA_ONLY will not persist its data to disk, which means data durability is compromised, whereas DURABILITY = SCHEMA_AND_DATA ensures that data is persisted along with the schema. Indexing Memory Optimized Table A memory-optimized table created with DURABILITY = SCHEMA_AND_DATA must always have an index, and this can be achieved by declaring a PRIMARY KEY constraint at the time of creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified. CREATE TABLE Mem_Table ( [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000), [City] VARCHAR(32) NULL, [State_Province] VARCHAR(32) NULL, [LastModified] DATETIME NOT NULL ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA); As you can see in the above query example, we have used the clause MEMORY_OPTIMIZED = ON to make sure that it is considered a memory-optimized table and not just a normal table, and we have also used the DURABILITY = SCHEMA_AND_DATA clause, which means it will persist data along with the metadata. You can also notice that this table has a PRIMARY KEY declared upfront, which is mandatory for memory-optimized tables. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus more on row and index storage for memory-optimized tables, so stay tuned for that as well. Now that we have covered the basics of memory-optimized tables and understood the key things to remember while using them, let's explore some examples to understand the performance gains from memory-optimized tables. 
I will be using the database which I created earlier in this article, i.e. InMemoryDB, in the demo exercise below. USE InMemoryDB GO -- Creating a disk based table CREATE TABLE dbo.Disktable ( Id INT IDENTITY, Name CHAR(40) ) GO CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id) GO -- Creating a memory optimized table with similar structure and DURABILITY = SCHEMA_AND_DATA CREATE TABLE dbo.Memorytable_durable ( Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000), Name CHAR(40) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA) GO -- Creating another memory optimized table with similar structure but DURABILITY = SCHEMA_ONLY CREATE TABLE dbo.Memorytable_nondurable ( Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000), Name CHAR(40) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY) GO -- Now insert 100000 records in dbo.Disktable and observe the time taken DECLARE @i_t BIGINT SET @i_t = 1 WHILE @i_t <= 100000 BEGIN INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t += 1 END GO -- Do the same inserts for memory table dbo.Memorytable_durable and observe the time taken DECLARE @i_t BIGINT SET @i_t = 1 WHILE @i_t <= 100000 BEGIN INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t += 1 END GO -- Now finally do the same inserts for memory table dbo.Memorytable_nondurable and observe the time taken DECLARE @i_t BIGINT SET @i_t = 1 WHILE @i_t <= 100000 BEGIN INSERT INTO dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t += 1 END GO The above 3 inserts took 1.20 minutes, 54 seconds, and 2 seconds respectively to insert 100000 records on my machine with 8 GB RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional business tables, and memory-optimized tables with DURABILITY = SCHEMA_ONLY are even faster as they do not bother persisting data to disk, which makes them supremely fast. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now, I leave the decision on using memory-optimized tables to you. I hope you liked this article and that it helped you understand the fundamentals of In-Memory OLTP. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Koenig
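Natively compiled stored procedures are mentioned above but not demonstrated, so here is a minimal sketch of one written against the dbo.Memorytable_durable table from the demo; the procedure name and the parameter values in the call are illustrative assumptions, while the options shown (NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER and the ATOMIC block) are what SQL Server 2014 expects for this object type.
-- A natively compiled procedure can reference only memory-optimized tables
CREATE PROCEDURE dbo.usp_InsertMemoryRow
    @Id INT,
    @Name CHAR(40)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.Memorytable_durable (Id, Name) VALUES (@Id, @Name);
END
GO
-- Illustrative call with a key value beyond the 100000 rows inserted in the demo
EXEC dbo.usp_InsertMemoryRow @Id = 100001, @Name = 'sachin100001';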

    Read the article

  • How to deal with configuration style warnings occurring from TexLive 2012 installation?

    - by JJD
    I followed the advice of izx on how to install TexLive 2012 using the texlive-backports PPA. Before I started I removed all TexLive-related packages. The installation finished and everything seems to work fine. The only thing I noticed are some warnings in the output of the installer. Here is an excerpt of the output: Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg There are more of that kind in the rest of the output: $ sudo apt-get install texlive Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: latex-beamer latex-xcolor libgraphite3 libkpathsea6 libptexenc1 lmodern pgf prosper ps2eps tex-common tex-gyre texlive-base texlive-binaries texlive-common texlive-doc-base texlive-extra-utils texlive-font-utils texlive-fonts-recommended texlive-fonts-recommended-doc texlive-generic-recommended texlive-latex-base texlive-latex-base-doc texlive-latex-recommended texlive-latex-recommended-doc texlive-pstricks texlive-pstricks-doc tipa ttf-marvosym Suggested packages: texlive-doc-en purifyeps chktex latexmk dvipng xindy dvidvi fragmaster lacheck latexdiff t1utils The following NEW packages will be installed: latex-beamer latex-xcolor libgraphite3 libkpathsea6 libptexenc1 lmodern pgf prosper ps2eps tex-common tex-gyre texlive texlive-base texlive-binaries texlive-common texlive-doc-base texlive-extra-utils texlive-font-utils texlive-fonts-recommended texlive-fonts-recommended-doc texlive-generic-recommended texlive-latex-base texlive-latex-base-doc texlive-latex-recommended texlive-latex-recommended-doc texlive-pstricks texlive-pstricks-doc tipa ttf-marvosym 0 upgraded, 29 newly installed, 0 to remove and 17 not upgraded. Need to get 0 B/274 MB of archives. After this operation, 450 MB of additional disk space will be used. Do you want to continue [Y/n]? Preconfiguring packages ... Selecting previously unselected package tex-common. (Reading database ... 290206 files and directories currently installed.) Unpacking tex-common (from .../tex-common_3.13~ubuntu12.04.1_all.deb) ... Selecting previously unselected package lmodern. Unpacking lmodern (from .../lmodern_2.004.1-5~precise1_all.deb) ... Selecting previously unselected package tex-gyre. Unpacking tex-gyre (from .../tex-gyre_2.004.1-4~precise1_all.deb) ... Selecting previously unselected package libgraphite3. Unpacking libgraphite3 (from .../libgraphite3_1%3a2.3.1-0.2build1_amd64.deb) ... Selecting previously unselected package libkpathsea6. Unpacking libkpathsea6 (from .../libkpathsea6_2012.20120628-1~ubuntu12.04.1_amd64.deb) ... Selecting previously unselected package libptexenc1. Unpacking libptexenc1 (from .../libptexenc1_2012.20120628-1~ubuntu12.04.1_amd64.deb) ... Selecting previously unselected package texlive-common. Unpacking texlive-common (from .../texlive-common_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-binaries. Unpacking texlive-binaries (from .../texlive-binaries_2012.20120628-1~ubuntu12.04.1_amd64.deb) ... Selecting previously unselected package texlive-doc-base. Unpacking texlive-doc-base (from .../texlive-doc-base_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-base. 
Unpacking texlive-base (from .../texlive-base_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-base. Unpacking texlive-latex-base (from .../texlive-latex-base_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-recommended. Unpacking texlive-latex-recommended (from .../texlive-latex-recommended_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package latex-xcolor. Unpacking latex-xcolor (from .../latex-xcolor_2.11-1_all.deb) ... Selecting previously unselected package pgf. Unpacking pgf (from .../archives/pgf_2.10-1_all.deb) ... Selecting previously unselected package latex-beamer. Unpacking latex-beamer (from .../latex-beamer_3.10-1_all.deb) ... Selecting previously unselected package texlive-generic-recommended. Unpacking texlive-generic-recommended (from .../texlive-generic-recommended_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-pstricks. Unpacking texlive-pstricks (from .../texlive-pstricks_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package prosper. Unpacking prosper (from .../prosper_1.00.4+cvs.2007.05.01-4_all.deb) ... Selecting previously unselected package ps2eps. Unpacking ps2eps (from .../ps2eps_1.68-1_amd64.deb) ... Selecting previously unselected package ttf-marvosym. Unpacking ttf-marvosym (from .../ttf-marvosym_0.1+dfsg-2_all.deb) ... Selecting previously unselected package texlive-fonts-recommended. Unpacking texlive-fonts-recommended (from .../texlive-fonts-recommended_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive. Unpacking texlive (from .../texlive_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-extra-utils. Unpacking texlive-extra-utils (from .../texlive-extra-utils_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-font-utils. Unpacking texlive-font-utils (from .../texlive-font-utils_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-fonts-recommended-doc. Unpacking texlive-fonts-recommended-doc (from .../texlive-fonts-recommended-doc_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-base-doc. Unpacking texlive-latex-base-doc (from .../texlive-latex-base-doc_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-recommended-doc. Unpacking texlive-latex-recommended-doc (from .../texlive-latex-recommended-doc_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-pstricks-doc. Unpacking texlive-pstricks-doc (from .../texlive-pstricks-doc_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package tipa. Unpacking tipa (from .../tipa_2%3a1.3-17~precise1_all.deb) ... Processing triggers for doc-base ... Processing 5 added doc-base files... Registering documents with scrollkeeper... Processing triggers for man-db ... Processing triggers for fontconfig ... Processing triggers for install-info ... Setting up tex-common (3.13~ubuntu12.04.1) ... Running mktexlsr. This may take some time... done. texlive-base is not ready, delaying updmap-sys call texlive-base is not ready, skipping fmtutil-sys --all call Setting up lmodern (2.004.1-5~precise1) ... 
Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up tex-gyre (2.004.1-4~precise1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up libgraphite3 (1:2.3.1-0.2build1) ... Setting up libkpathsea6 (2012.20120628-1~ubuntu12.04.1) ... Setting up libptexenc1 (2012.20120628-1~ubuntu12.04.1) ... Setting up texlive-common (2012.20120611-3~ubuntu12.04.1) ... Setting up texlive-binaries (2012.20120628-1~ubuntu12.04.1) ... update-alternatives: using /usr/bin/xdvi-xaw to provide /usr/bin/xdvi.bin (xdvi.bin) in auto mode. update-alternatives: using /usr/bin/bibtex.original to provide /usr/bin/bibtex (bibtex) in auto mode. mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEDIST... mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN... mktexlsr: Updating /var/lib/texmf/ls-R... mktexlsr: Done. Building format(s) --refresh. This may take some time... done. Setting up texlive-doc-base (2012.20120611-1~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up ps2eps (1.68-1) ... Setting up ttf-marvosym (0.1+dfsg-2) ... Setting up texlive-fonts-recommended-doc (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-latex-base-doc (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-latex-recommended-doc (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-pstricks-doc (2012.20120611-1~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Processing triggers for tex-common ... 
Running mktexlsr. This may take some time... done. texlive-base is not ready, delaying updmap-sys call Setting up texlive-base (2012.20120611-3~ubuntu12.04.1) ... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN... mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN... mktexlsr: Updating /var/lib/texmf/ls-R... mktexlsr: Done. /usr/bin/tl-paper: setting paper size for dvips to a4. /usr/bin/tl-paper: setting paper size for dvipdfmx to a4. /usr/bin/tl-paper: setting paper size for xdvi to a4. /usr/bin/tl-paper: setting paper size for pdftex to a4. Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Running mktexlsr. This may take some time... done. Building format(s) --all. This may take some time... done. Processing triggers for tex-common ... Running updmap-sys. This may take some time... done. Running mktexlsr /var/lib/texmf ... done. Setting up texlive-generic-recommended (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-fonts-recommended (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-extra-utils (2012.20120611-1~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-font-utils (2012.20120611-1~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-latex-base (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Running mktexlsr. This may take some time... done. Building format(s) --all --cnffile /etc/texmf/fmt.d/10texlive-latex-base.cnf. This may take some time... done. Processing triggers for tex-common ... Running mktexlsr. This may take some time... done. Running updmap-sys. This may take some time... done. Running mktexlsr /var/lib/texmf ... done. Setting up texlive-pstricks (2012.20120611-1~ubuntu12.04.1) ... 
Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up tipa (2:1.3-17~precise1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-latex-recommended (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Processing triggers for tex-common ... Running mktexlsr. This may take some time... done. Running updmap-sys. This may take some time... done. Running mktexlsr /var/lib/texmf ... done. Setting up prosper (1.00.4+cvs.2007.05.01-4) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Running mktexlsr. This may take some time... done. Setting up texlive (2012.20120611-3~ubuntu12.04.1) ... Setting up latex-xcolor (2.11-1) ... mktexlsr: Updating /usr/local/share/texmf/ls-R... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEDIST... mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN... mktexlsr: Updating /var/lib/texmf/ls-R... mktexlsr: Done. Setting up pgf (2.10-1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Processing triggers for tex-common ... Running mktexlsr. This may take some time... done. Setting up latex-beamer (3.10-1) ... mktexlsr: Updating /usr/local/share/texmf/ls-R... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEDIST... mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN... mktexlsr: Updating /var/lib/texmf/ls-R... mktexlsr: Done. Processing triggers for libc-bin ... ldconfig deferred processing now taking place What exactly is 10lmodern.cfg good for? How can I prevent this warnings? Here is the output of sudo update-updmap: $ sudo update-updmap Regenerating '/var/lib/texmf/updmap.cfg-DEBIAN'... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg done. Regenerating '/var/lib/texmf/updmap.cfg-TEXLIVEDIST'... done. 
update-updmap has updated the following file(s): /var/lib/texmf/updmap.cfg-DEBIAN /var/lib/texmf/updmap.cfg-TEXLIVEDIST If you want to enable the map files with this new file, you should run updmap-sys or updmap.

    Read the article

  • OLTP SQL Server RAID configuration with 10 disks, allocation unit and disk stripe size

    - by Chris Wood
    On a new DB server I only have 10 disks to play with. The usage is about a booking every 3-5 seconds, so not high volume. I know compromises have to be made, but my initial thoughts are: DISKS 1 & 2 - RAID 1 - OS; DISKS 3,4,5,6 - RAID 10 - Data, Indexes & TempDB; DISKS 7,8,9,10 - RAID 10 - Logs & Backup. Full backups will take place when there is virtually no traffic on the website, so I am not bothered about contention with the logs. Disks 3-10 - 8 KB NTFS allocation unit size; disks 3-10 - 64 KB disk stripe size. Does this seem sensible? Are there any other considerations I have omitted? Thanks
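    Not an answer to the RAID layout itself, but as a rough sketch of how such a layout often translates into SQL Server file placement once the volumes exist; the database name, sizes and drive letters below are purely illustrative assumptions (D: standing in for the data RAID 10 set and L: for the log RAID 10 set).
    -- Hypothetical placement: data on the data volume, log on the log volume
    CREATE DATABASE BookingDB
    ON PRIMARY
        (NAME = BookingDB_data, FILENAME = 'D:\SQLData\BookingDB_data.mdf', SIZE = 10GB, FILEGROWTH = 1GB)
    LOG ON
        (NAME = BookingDB_log, FILENAME = 'L:\SQLLogs\BookingDB_log.ldf', SIZE = 4GB, FILEGROWTH = 512MB);
    GO
    -- tempdb can be moved to the data volume as well; the change takes effect after a service restart
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'D:\SQLData\tempdb.mdf');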

    Read the article

  • SQL SERVER – SSMS: Disk Usage Report

    - by Pinal Dave
    Let us start with humor! I think with this series on various reports, we have come to a logical point. We have covered all the reports at the server level. This means the reports we saw were targeted towards activities that are related to instance-level operations. These are mostly like how a doctor diagnoses a patient. At this point I am reminded of a dialog which I read somewhere: Patient: Doc, it hurts when I touch my head. Doc: Ok, go on. What else have you experienced? Patient: It hurts even when I touch my eye, it hurts when I touch my arms, it even hurts when I touch my feet, etc. Doc: Hmmm … Patient: I feel it hurts when I touch anywhere in my body. Doc: Ahh … now I get it. You need a plaster on your finger, John. Sometimes the server level gives an indicator of what is happening in the system, but we need to get to the root cause for a specific database. So, this is the first blog in the series where we start discussing database-level reports. To launch database-level reports, expand the selected server in Object Explorer, expand the Databases folder, and then right-click any database for which we want to look at reports. From the menu, select Reports, then Standard Reports, and then any of the database-level reports. In this blog, we will talk about four “disk” reports because they are similar: Disk Usage, Disk Usage by Top Tables, Disk Usage by Table, and Disk Usage by Partition. Disk Usage This report shows multiple pieces of information about the database. Let us discuss them one by one.  We have divided the output into 5 different sections. Section 1 shows the high-level summary of the database. It shows the space used by the database files (mdf and ldf). Under the hood, the report uses various DMVs and DBCC commands; specifically, it uses sys.data_spaces and DBCC SHOWFILESTATS. Sections 2 and 3 are pie charts: one for data file allocation and another for the transaction log file. The pie chart for “Data Files Space Usage (%)” shows the space consumed by data, the space consumed by indexes, and unallocated space which is allocated to the SQL Server database but not yet filled with anything. “Transaction Log Space Usage (%)” uses DBCC SQLPERF (LOGSPACE) and shows how much empty space we have in the physical transaction log file. Section 4 shows data from the default trace and looks at Event IDs 92, 93, 94 and 95, which are for “Data File Auto Grow”, “Log File Auto Grow”, “Data File Auto Shrink” and “Log File Auto Shrink” respectively. Here is an expanded view for that section. If the default trace is not enabled, then this section would be replaced by the message “Trace Log is disabled” as highlighted below. Section 5 of the report uses DBCC SHOWFILESTATS to get information. Here is the enhanced version of that section. This shows the physical layout of the file. In case you have In-Memory objects in the database (from SQL Server 2014), the report would show information about those as well. Here is the screenshot taken for a different database, which has an In-Memory table. I have highlighted the new things which are only shown for an in-memory database. The new sections which are highlighted above are using sys.dm_db_xtp_checkpoint_files, sys.database_files and sys.data_spaces. The new type for In-Memory OLTP is ‘FX’ in sys.data_spaces. The next set of reports is targeted at getting information about a table and its storage. These reports can answer questions like: Which is the biggest table in the database? How many rows do we have in a table? Is there any table which has a lot of reserved space but it's unused? 
Which partition of the table holds more data? Disk Usage by Top Tables This report provides detailed data on the utilization of disk space by the top 1000 tables within the database. The report does not provide data for memory-optimized tables. Disk Usage by Table This report is the same as the earlier report, with a few differences: the first report shows only 1000 rows, and the first report orders by values in the DMV sys.dm_db_partition_stats whereas the second one orders by the name of the table. Both of the reports have an interactive sort facility. We can click on any column header and change the sorting order of the data. Disk Usage by Partition This report shows the distribution of the data in a table based on the partitions in the table. This is similar to the previous output, now with the partition details. Here is the query taken from Profiler. SELECT row_number() OVER (ORDER BY a1.used_page_count DESC, a1.index_id) AS row_number, (dense_rank() OVER (ORDER BY a5.name, a2.name))%2 AS l1, a1.OBJECT_ID, a5.name AS [schema], a2.name, a1.index_id, a3.name AS index_name, a3.type_desc, a1.partition_number, a1.used_page_count * 8 AS total_used_pages, a1.reserved_page_count * 8 AS total_reserved_pages, a1.row_count FROM sys.dm_db_partition_stats a1 INNER JOIN sys.all_objects a2 ON (a1.OBJECT_ID = a2.OBJECT_ID) AND a1.OBJECT_ID NOT IN (SELECT OBJECT_ID FROM sys.tables WHERE is_memory_optimized = 1) INNER JOIN sys.schemas a5 ON (a5.schema_id = a2.schema_id) LEFT OUTER JOIN sys.indexes a3 ON ((a1.OBJECT_ID = a3.OBJECT_ID) AND (a1.index_id = a3.index_id)) WHERE (SELECT MAX(DISTINCT partition_number) FROM sys.dm_db_partition_stats a4 WHERE (a4.OBJECT_ID = a1.OBJECT_ID)) >= 1 AND a2.TYPE <> N'S' AND a2.TYPE <> N'IT' ORDER BY a5.name ASC, a2.name ASC, a1.index_id, a1.used_page_count DESC, a1.partition_number Using all of the above reports, you should be able to get the usage of database files and also the space used by tables. I think this is too much disk information for a single blog, and I hope you have used these reports in the past to get data. Do let me know if you found anything interesting using these reports in your environments. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports
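As a rough equivalent of what section 4 of the Disk Usage report reads from the default trace, here is a hedged sketch of a query that pulls the same auto grow and auto shrink events (Event IDs 92-95) directly; it assumes the default trace is enabled on the instance, and the column selection is illustrative rather than the report's exact query.
-- Read data/log file auto grow and auto shrink events from the default trace
DECLARE @TracePath NVARCHAR(260);
SELECT @TracePath = [path] FROM sys.traces WHERE is_default = 1;
SELECT te.name AS EventName,
       t.DatabaseName,
       t.FileName,
       t.StartTime,
       t.EndTime
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS t
INNER JOIN sys.trace_events AS te ON t.EventClass = te.trace_event_id
WHERE t.EventClass IN (92, 93, 94, 95)
ORDER BY t.StartTime DESC;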

    Read the article

  • SQL SERVER – SSMS: Top Object and Batch Execution Statistics Reports

    - by Pinal Dave
    The period from June till mid-July has been full of sports fever. First, it was Wimbledon tennis and then the soccer fever was all over. There is a huge number of fan followers and it is great to see the level at which people sometimes worship these sports. Being an Indian, I cannot forget to mention the India tour of England in the later part of July. Following these sports, and as the events unfold towards the finals, there are a number of ways the statisticians can slice and dice the numbers. Taking a cue from soccer, I can surely say there is a team's performance against another team, and then there is how an individual member fares against a particular opponent. Such statistics give us a fair idea of how teams have fared against each other in the past or the recent past: head-to-head stats during the World Cup and during other neutral-venue games. All these statistics are just pointers. In reality, they don't reflect the calibre of the current team because the individuals who performed in each of these games are totally different (a typical example being the Brazil vs Germany semi-final match in the 2014 FIFA World Cup). So at times these numbers are misleading. It is worth investigating further to get the next level of information. Similar to these statistics, SQL Server Management Studio is also equipped with a number of reports, like a) the Object Execution Statistics report and b) the Batch Execution Statistics report. As discussed in the example, the team scorecard is like the Batch Execution Statistics and the individual stats are like the Object-level statistics. The analogy can be taken only this far; trust me, there is no correlation between SQL Server functioning and playing sports – it is like I think about diet all the time except while I am eating. Performance – Batch Execution Statistics Let us view the first report, which can be invoked from Server Node -> Reports -> Standard Reports -> Performance – Batch Execution Statistics. Most of the values that are displayed in this report come from the DMVs sys.dm_exec_query_stats and sys.dm_exec_sql_text(sql_handle). This report contains 3 distinctive sections, as outlined below.   Section 1: This is a graphical bar graph representation of Average CPU Time, Average Logical Reads and Average Logical Writes for individual batches. The batch numbers are indicative, and the details of individual batches are available in section 3 (detailed below). Section 2: This represents a pie chart of all the batches by Total CPU Time (%) and Total Logical IO (%). This graphical representation tells us which batch consumed the highest CPU and IO since the server started, provided the plan is available in the cache. Section 3: This is the section where we can find the SQL statements associated with each of the batch numbers. This also gives us the details of Average CPU, Average Logical Reads and Average Logical Writes in the system for the given batch, with object details. Expanding the rows, I will also get the # Executions and # Plans Generated for each of the queries. Performance – Object Execution Statistics The second report worth a look is Object Execution Statistics. This is a similar report to the previous one, but turned on its head by SQL Server objects. The report has 3 areas to look at, as above. Section 1 gives the Average CPU and Average IO bar charts for specific objects. Section 2 is a graphical representation of Total CPU by objects and Total Logical IO by objects. The final section details the various objects with the Avg. CPU, IO and other details, which are self-explanatory. 
At a high level, both reports are based on queries over two DMVs (sys.dm_exec_query_stats and sys.dm_exec_sql_text), and they build values based on calculations using columns in them: SELECT * FROM sys.dm_exec_query_stats s1 CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2 WHERE s2.objectid IS NOT NULL AND DB_NAME(s2.dbid) IS NOT NULL ORDER BY s1.sql_handle; These are among the simplest forms of reports, and in future blogs we will look at more complex reports. I truly hope that these reports can give DBAs and developers a hint about possible performance tuning areas. As a closing point I must emphasize that all the above reports pick up data from the plan cache. If a particular query consumed a lot of resources earlier, but its plan is not available in the cache, none of the above reports would show that bad query. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports
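To make that idea concrete, here is a hedged sketch of the kind of aggregation the reports layer on top of that base query, ranking cached objects by average CPU and logical I/O; the exact arithmetic the built-in reports use is an assumption here, and results only cover whatever plans are currently cached.
-- Top cached objects by average CPU time, derived from sys.dm_exec_query_stats
SELECT TOP (10)
       DB_NAME(st.dbid) AS database_name,
       OBJECT_NAME(st.objectid, st.dbid) AS object_name,
       qs.execution_count,
       qs.total_worker_time / qs.execution_count AS avg_cpu_time_microseconds,
       (qs.total_logical_reads + qs.total_logical_writes) / qs.execution_count AS avg_logical_io
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.objectid IS NOT NULL
  AND DB_NAME(st.dbid) IS NOT NULL
ORDER BY avg_cpu_time_microseconds DESC;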

    Read the article

  • Will this increase my Virtual Private Server failure rate?

    - by Spencer Lim
    Will this increase my Virtual Private Server failure rate if I: install Microsoft Windows Server 2008 Enterprise; install SQL Server 2008 Enterprise; install IIS 7.5; install ASP.NET MVC 2; install Microsoft Exchange (should it live inside Windows Server 2008, or standalone?); install Team Foundation Server (should it live inside Windows Server 2008, or standalone?); all of this on one mini VPS? The specification is a DELL PowerEdge R710 shared plan: 16 GB DDR3 ECC RAM (1 GB for this VPS), a DELL PERC 6i RAID controller (this thing alone is about 1.5k-2k), and 15K RPM SAS HDDs (146 GB, of which 33 GB goes to this VPS); each HDD is very fast, with over 300 MB read/write possible with proper tuning. The motherboard is a DELL and it has twin redundant PSUs (870 W, 85% efficiency); it runs two Intel Xeon 5502 (quad core) CPUs, so about 8 physical processors (fairly shared). Is there any rule of thumb for how many services one VPS can host, since my resources are limited? If it is installed this way, maybe it defeats the purpose of a "VPS"; what will happen? Are there any guidelines on this issue (fully configuring Windows Server 2008 R2)? Thanks for the reply.

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >