Search Results

Search found 10442 results on 418 pages for 'blog'.


  • SQL Down Under Podcast 50 - Guest Louis Davidson now online

    - by Greg Low
    Hi Folks, I've recorded an interview today with SQL Server MVP Louis Davidson. In it, Louis discusses some of his thoughts on database design and his latest book. You'll find the podcast here: http://www.sqldownunder.com/Resources/Podcast.aspx And you'll find his latest book (Pro SQL Server 2012 Relational Database Design and Implementation) here: http://www.amazon.com/Server-Relational-Database-Implementation-Professional/dp/1430236957/ref=sr_1_2?ie=UTF8&qid=1344997477&sr=8-2&keywords=louis+davidson Enjoy!

    Read the article

  • Wordpress Installation (on IIS and SQL Server)

    - by Davide Mauri
    To install Wordpress on SQL Server and IIS, first do the following (a T-SQL sketch of the first two steps appears below):
    1. Create a database on SQL Server that will be used by Wordpress.
    2. Create a login that can access the newly created database, and add that user to the db_ddladmin, db_datareader and db_datawriter roles.
    3. Download and unpack the Wordpress 3.3.2 (latest version as of 27 May 2012) zip file into a directory of your choice.
    4. Download the wp-db-abstraction 1.1.4 (latest version as of 27 May 2012) plugin from the wordpress.org website.
    Now that the basics are done, you can set up and configure your Wordpress installation. Unpack the plugin and follow the instructions in its README.TXT file to install the Database Abstraction Layer. Mainly you have to upload wp-db-abstraction.php and the wp-db-abstraction directory to wp-content/mu-plugins (this should be parallel to your regular plugins directory; if the mu-plugins directory does not exist, you must create it), and copy the db.php file from inside the wp-db-abstraction directory to wp-content/db.php. Next, create an application pool in IIS, and a website, using that application pool, that points to the folder where you unpacked the Wordpress files. Be sure to give the "Write" permission to the IIS account, as pointed out in this (old, but still quite valid) installation manual: http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#iis
    Now you're ready to go. Point your browser at the configured website and the Wordpress installation screen will be there for you. When you're asked for the information to connect to the MySQL database, simply skip that page, leaving the default values. If you have installed the Database Abstraction Layer, another database installation screen will appear after the MySQL one, and there you can enter the configuration needed to connect to SQL Server. After finishing the installation steps, you should be able to access and navigate your Wordpress site. A final touch, and it's done: just add the needed rewrite rules http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#urlrewrite and that's it!
    Well, not really. Unfortunately the current (as of 27 May 2012) version of the Database Abstraction Layer (1.1.4) has some bugs. Luckily they can be fixed quickly:
    - Backslash fix: http://wordpress.org/support/topic/plugin-wp-db-abstraction-fix-problems-with-backslash-usage
    - SELECT TOP 0 fix: make the change to the file ".\wp-content\mu-plugins\wp-db-abstraction\translations\sqlsrv\translations.php" suggested by "debettap": http://sourceforge.net/tracker/?func=detail&aid=3485384&group_id=315685&atid=1328061
    And now you have a 100% working Wordpress installation on SQL Server! Since I also wanted to take advantage of SQL Server Full Text Search, I've created a very simple Wordpress plugin to set up full-text search and use it as the website search engine: http://wpfts.codeplex.com/ Enjoy!
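    A minimal sketch of the database and login steps above, assuming hypothetical names (WordpressDB, WordpressUser) and an illustrative password:

        -- Hypothetical database, login and password; substitute your own.
        CREATE DATABASE WordpressDB;
        GO
        CREATE LOGIN WordpressUser WITH PASSWORD = N'UseAStrongPasswordHere1!';
        GO
        USE WordpressDB;
        GO
        CREATE USER WordpressUser FOR LOGIN WordpressUser;
        -- Role memberships the Database Abstraction Layer needs.
        EXEC sp_addrolemember N'db_ddladmin',   N'WordpressUser';
        EXEC sp_addrolemember N'db_datareader', N'WordpressUser';
        EXEC sp_addrolemember N'db_datawriter', N'WordpressUser';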

    Read the article

  • How does the number of indexes built on a table impact performance?

    - by Davide Mauri
    We all know that putting too many indexes (I’m talking about non-clustered indexes only, of course) on a table can produce performance problems, due to the overhead that each index brings to all insert/update/delete operations on that table. But how much? I mean, we all agree – I think – that, generally speaking, having many indexes on a table is “bad”. But how bad can it be? How much will performance degrade? And on a concurrent system, how much can this situation also hurt SELECT performance? If SQL Server takes more time to update a row in a table because of the number of indexes it also has to update, locks will be held longer, slowing down the perceived performance of all queries involved. I was quite curious to measure this, also because when teaching it’s far more impressive and effective to show attendees a chart with the measured impact, so that they can really “feel” what it means! To run the tests, I’ve created a script that creates a table (with a clustered index on the primary key, which is an identity column), loads 1000 rows into the table (inserting all 1000 rows with a single INSERT, instead of issuing 1000 single-row inserts, in order to minimize transaction-handling overhead), and measures the time taken to do it. The process is then repeated 16 times, each time adding a new index to the table, using columns from the table in round-robin fashion. Tests are done against different row sizes, so that it’s possible to check whether performance changes depending on row size. The results are interesting, although expected. This is the chart showing how much time it takes to insert 1000 rows into a table that has from 0 to 16 non-clustered indexes. Each test has been run 20 times in order to have an average value, and the values have been cleaned of outliers caused by unpredictable fluctuations in machine activity. The test shows that in a table with a row size of 80 bytes, 1000 rows can be inserted in 9.05 msec if no indexes are present on the table, and the value grows to 88 (!!!) msec when you have 16 indexes on it. That’s a 975% impact on performance – *huge*! Now, what happens if we have a bigger row size? Say we have a table with a row size of 1520 bytes. Here’s the data, from 0 to 16 indexes on that table: in this case we need nearly 22 msec to insert 1000 rows into a table with no indexes, but we need more than 500 msec if the table has 16 active indexes! Now we’re talking about a 2410% impact on performance! Now we have a tangible idea of the impact of having (too?) many indexes on a table, and also of how row size impacts performance. That’s why the golden rule of OLTP databases, “few indexes, but good”, is so true! (And in fact last week I saw a database with 1700-byte row sizes and 23 (!!!) indexes on its tables!) This also means that overly heavy denormalization is really not a good idea (we’re always talking about OLTP systems, keep that in mind), since performance gets worse as row size increases. So, be careful out there, and keep in mind that “equilibrium” is the key word for a database professional: equilibrium between read and write performance, between normalization and denormalization, between too few and too many indexes. PS Tests were done in a VMware Workstation 7 VM with 2 CPUs and 4 GB of memory. The host machine is a Dell Precision M6500 with an i7 Extreme X920 quad-core HT 2.0 GHz and 16 GB of RAM.
    The database is stored on an Intel X25-E SSD, Simple Recovery Model, running SQL Server 2008 R2. If you also want to run the tests on your own, you can download the test script here: Open TestIndexPerformance.sql
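    Not the author's script (that is downloadable above), but a minimal sketch of the approach, with assumed table and column names:

        -- Time a single 1000-row insert as non-clustered indexes are added
        -- one at a time (round-robin over the four data columns).
        IF OBJECT_ID(N'dbo.IndexTest', N'U') IS NOT NULL
            DROP TABLE dbo.IndexTest;
        CREATE TABLE dbo.IndexTest
        (
            id INT IDENTITY NOT NULL PRIMARY KEY CLUSTERED,
            c1 CHAR(20) NOT NULL, c2 CHAR(20) NOT NULL,
            c3 CHAR(20) NOT NULL, c4 CHAR(20) NOT NULL
        );
        GO
        DECLARE @i INT = 0, @sql NVARCHAR(200), @t0 DATETIME2;
        WHILE @i <= 16
        BEGIN
            IF @i > 0
            BEGIN
                SET @sql = N'CREATE NONCLUSTERED INDEX ix_'
                         + CAST(@i AS NVARCHAR(2))
                         + N' ON dbo.IndexTest (c'
                         + CAST(1 + (@i - 1) % 4 AS NVARCHAR(1)) + N');';
                EXEC (@sql);
            END;
            TRUNCATE TABLE dbo.IndexTest;
            SET @t0 = SYSDATETIME();
            INSERT dbo.IndexTest (c1, c2, c3, c4)
            SELECT TOP (1000) 'a', 'b', 'c', 'd' FROM sys.all_columns;
            PRINT CAST(@i AS VARCHAR(2)) + ' indexes: '
                + CAST(DATEDIFF(MICROSECOND, @t0, SYSDATETIME()) / 1000.0 AS VARCHAR(20))
                + ' ms';
            SET @i += 1;
        END;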

    Read the article

  • Book: Pro SQL Server 2008 Service Broker: Klaus Aschenbrenner

    - by Greg Low
    I've met Klaus a number of times now and attended a few of his sessions at conferences. Klaus is doing a great job of evangelising Service Broker. I wish the SQL Server team would give it as much love. Service Broker is a wonderful technology, let down by poor resourcing. Microsoft did an excellent job of building the plumbing for this product in SQL Server 2005 but then provided no management tools and no prescriptive guidance. Everyone then seemed surprised that its uptake was slow. I even...(read more)
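    To give a flavour of that plumbing, here is a minimal Service Broker setup and send, a sketch with hypothetical names (assumes Service Broker is enabled on the database):

        -- Message type, contract, queues and services (hypothetical names).
        CREATE MESSAGE TYPE [//Demo/Msg] VALIDATION = WELL_FORMED_XML;
        CREATE CONTRACT [//Demo/Contract] ([//Demo/Msg] SENT BY INITIATOR);
        CREATE QUEUE dbo.InitiatorQueue;
        CREATE QUEUE dbo.TargetQueue;
        CREATE SERVICE [//Demo/InitiatorService] ON QUEUE dbo.InitiatorQueue;
        CREATE SERVICE [//Demo/TargetService] ON QUEUE dbo.TargetQueue ([//Demo/Contract]);
        GO
        -- Open a conversation and send one message.
        DECLARE @h UNIQUEIDENTIFIER;
        BEGIN DIALOG CONVERSATION @h
            FROM SERVICE [//Demo/InitiatorService]
            TO SERVICE '//Demo/TargetService'
            ON CONTRACT [//Demo/Contract]
            WITH ENCRYPTION = OFF;
        SEND ON CONVERSATION @h MESSAGE TYPE [//Demo/Msg] (N'<hello/>');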

    Read the article

  • Summit reflections

    - by Rob Farley
    So far, my three PASS Summit experiences have been notably different to each other. My first, I wasn’t on the board and I gave two regular sessions and a Lightning Talk in which I told jokes. My second, I was a board advisor, and I delivered a precon, a spotlight and a Lightning Talk in which I sang. My third (last week), I was a full board director, and I didn’t present at all. Let’s not talk about next year. I’m not sure there are many options left. This year, I noticed that a lot more people recognised me and said hello. I guess that’s potentially because of the singing last year, but could also be because board elections can bring a fair bit of attention, and because of the effort I’ve put in through things like 24HOP... Yeah, ok. It’d be the singing. My approach was very different though. I was watching things through different eyes. I looked for the things that seemed to be working and the things that didn’t. I had staff there again, and was curious to know how their things were working out. I knew a lot more about what was going on behind the scenes to make various things happen, and although very little about the Summit was actually my responsibility (based on not having that portfolio), my perspective had moved considerably. Before the Summit started, Board Members had been given notebooks – an idea Tom (who heads up PASS’ marketing) had come up with after being inspired by seeing Bill walk around with a notebook. The plan was to take notes about feedback we got from people. It was a good thing, and the notebook forms a nice pair with the SQLBits one I got a couple of years ago when I last spoke there. I think one of the biggest impacts of this was that during the first keynote, Bill told everyone present about the notebooks. This set a tone of “we’re listening”, and a number of people were definitely keen to tell us things that would cause us to pull out our notebooks. PASSTV was a new thing this year. Justin, the host, featured on the couch and talked to a lot of people about a lot of things, including me (he talked to me about a lot of things; I don’t think he talked to a lot of people about me). Reaching people through online methods is something which interests me a lot – it has huge potential, and I love the idea of being able to broadcast to people who are unable to attend in person. I’m keen to see how this medium can be developed over time. People who know me will know that I’m a keen advocate of certification – I've been SQL certified since version 6.5, and have even been involved in creating exams. However, I don’t believe in studying for exams. I think training is worthwhile for learning new skills, but the goal should be learning those skills, not passing an exam. Exams should be for proving that the skills are there, not a goal in themselves. The PASS Summit is an excellent place to take exams though, and with an attitude of professional development throughout the event, why not? So I did. I wasn’t expecting to take one, but I was persuaded and took the MCM Knowledge Exam. I hadn’t even looked at the syllabus, but tried it anyway. I was very tired, and even fell asleep at one point during it. I’ll find out my result at some point in the future – the Prometric site just says “Tested” at the moment. As I said, it wasn’t something I was expecting to do, but it was good to have something unexpected during the week. Of course it was good to catch up with old friends and make new ones.
I feel like every time I’m in the US I see things develop a bit more, with more and more people knowing who I am, who my staff are, and recognising the LobsterPot brand. I missed being a presenter, but I definitely enjoyed seeing many friends on the list of presenters. I won’t try to list them, because there are so many these days that people might feel sad if I don’t mention them. For those that I managed to see, I was pleased to see that the majority of them have lifted their presentation skills since I last saw them, and I happily told them as much. One person who I will mention was Paul White, who travelled from New Zealand to his first PASS Summit. He gave two sessions (a regular session and a half-day), packed large rooms of people, and had everyone buzzing with enthusiasm. I spoke to him after the event, and he told me that his expectations were blown away. Paul isn’t normally a fan of crowds, and the thought of 4000 people would have been scary. But he told me he had no idea that people would welcome him so well, be so friendly and so down to earth. He’s seen the significance of the SQL Server community, and says he’ll be back. It’ll be good to see him there. Will you be there too?

    Read the article

  • Editing service for blogger with terrible English grammar

    - by Josh Moore
    I would like to write a technical blog. However, the biggest thing holding me back is my poor spelling, punctuation, and grammar (I have all these problems even though I am a native English speaker). I am thinking about using a professional editing/proofreading service to fix my blog posts before I post them. However, given that the content will be technical in nature (some articles will get into the details of programming) and that I would like to write the posts in markdown, I am not sure whether the general online services will be a good fit. Can you recommend an editor (or company) that you like that can provide this service?

    Read the article

  • Table or index that goes nowhere

    - by Linchi Shea
    SQL Server allows you to create a table or an index on a filegroup that has no file assigned to it. Because there is no data file to hold anything, the table or the index thus created cannot be used. This may not be a problem, because often you would use the table or the index 'immediately' and would realize the problem. Well, you wouldn't be able to go anywhere. But there are cases, especially with an index, where the problem may not be discovered until some time later, and that could cause...(read more)
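    A minimal sketch of the situation described, with hypothetical database, filegroup and table names; creation succeeds, but population fails until a file is added:

        -- Add a filegroup but no file to it (MyTestDB is a placeholder name).
        ALTER DATABASE MyTestDB ADD FILEGROUP NoFileFG;
        GO
        USE MyTestDB;
        GO
        -- Creating the table on the empty filegroup is allowed...
        CREATE TABLE dbo.GoesNowhere (id INT NOT NULL) ON NoFileFG;
        GO
        -- ...but inserting fails: there is no file to allocate pages from.
        INSERT dbo.GoesNowhere (id) VALUES (1);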

    Read the article

  • Connect Spotlight: Rename Instance Name

    - by Lara Rubbelke
    Every now and then customers ask me how they can suggest changes or behavior changes to SQL Server. Many of us are aware of Connect, where you can add feature recommendations and vote on other people’s suggestions. There are a LOT of recommendations, and I know Microsoft values your feedback and suggestions. Sometimes these recommendations are grand, and others are small – in either case, your votes do make a difference in how Microsoft prioritizes features and changes in future releases. Recently,...(read more)

    Read the article

  • Rules of Holes #5: Seek Help to Get Out of the Hole

    - by ArnieRowland
    You are moving along, doing good work, maintaining a steady pace. All seems to be going well for you. Then BAM!, a Hole just grabbed you. How the heck did that happen? What went wrong? How did you fall into a Hole? Definitely, you will want to do a post-mortem and try to tease out what missteps led you into the Hole. Certainly you will want to use this opportunity to enhance your Hole avoidance skills. But your first priority is to get out of this Hole right NOW. Consider the Fifth Rule of Holes...(read more)

    Read the article

  • links for 2011-02-10

    - by Bob Rhubart
    Manish Devgan: Extending WebCenter Spaces Using JDeveloper – In addition to being able to customize WebCenter Spaces using the browser-based tools, you can now also customize and “extend” WebCenter Spaces in many ways in JDeveloper. (tags: oracle enterprise2.0 webcenter jdeveloper)
    Oracle University: New Personalized Training Catalog – "Searching for training classes just got easier with Oracle University's new Personalized Training Catalog. View upcoming course schedules for the topics that you select in your preferred locations. Browse courses when you need to or request your personalized catalog to be emailed to you." (tags: oracle oracleuniversity)
    René van Wijk: Hibernate and Coherence « Middleware Magic – "A major justification for the claim that applications using an object/relational persistence layer are expected to outperform applications built using direct JDBC is the potential for caching." - René van Wijk (tags: oracle coherence middleware)
    Sten Vesterli on Fusion Applications: "It’s (almost) here!" – Speaking of Fusion Applications, Oracle ACE Director Sten Vesterli says: "The usability revolution has finally caught up with enterprise applications; they will no longer be built based on the capabilities of the database, but on the needs of users." (tags: oracle otn oracleace fusionapplications)
    The Myth of Oracle Fusion | The ORACLE-BASE Blog – "I can totally understand when people on the outside of our little goldfish bowl have a really bad and confused impression of anything containing the term “Fusion”, because it does have a very long and sordid history." Oracle ACE Director Tim Hall (tags: oracle otn oracleace fusionapplications)
    The Other Side of XBRL (Enterprise Performance Management Blog) – With the United States SEC's mandate for XBRL filings entering its third year, and impacting over 7000 additional companies in 2011, there's a lot of buzz in the industry about how companies should address the new reporting requirements. (tags: oracle xbrl compliance)
    Database Vault integration available (The Shorten Spot) – Anthony Shorten shares information on the Database Vault solution included in the Oracle Utilities Application Framework. (tags: oracle database)
    SOASuite 11.1.1.4: Error Logging into BPM11g Composer? (Angelo Santagata's Blog) – Angelo Santagata shares simple solutions to a few minor SOA Suite 11.1.1.4 issues. (tags: oracle soa soasuite bpm)
    Thierry Vergult: No electricity, but the application is up – "Dakar is having more trouble than normal with electricity. Never thought that the SaaS model would be that useful when the light goes out. And the extra battery in the office dies, and the router goes down. But you still can access the application over your smartphone and finish your payroll run." (tags: oracle cloud saas)

    Read the article

  • The current state of a MERGE Destination for SSIS

    - by jamiet
    Hugo Tap asked me on Twitter earlier today whether or not there exists a SSIS Dataflow Destination component that enables one to MERGE data into a table rather than INSERT it. It's a common request, so I thought it might be useful to summarise the current state of play as regards a MERGE destination for SSIS. Firstly, there is no MERGE destination component in the box; that is, when you install SSIS no MERGE Destination will be available. That being said, the SSIS team have made available a MERGE destination component via Codeplex which you can get from http://sqlsrvintegrationsrv.codeplex.com/releases/view/19048. I have never used it so cannot vouch for its usefulness, although judging by some of the reviews you might not want to set your expectations too high. Your mileage may vary. In the past it has occurred to me that a built-in way to MERGE from the SSIS pipeline would be highly valuable. I assume this would have to be provided by the database into which you were merging, hence in March 2010 I submitted the following two requests to Connect: BULK MERGE (111 votes at the time of writing) and [SSIS] BULK MERGE Destination (15 votes). If you think these would be useful, feel free to vote them up and add a comment. Lastly, this one is nothing to do with SSIS, but if you want to perform a minimally logged MERGE using T-SQL, Sunil Agarwal has explained how at Minimal logging and MERGE statement. @Jamiet
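    For reference, the T-SQL pattern being asked for looks like this, a minimal sketch with hypothetical staging and target tables:

        -- Upsert staged rows into the target (hypothetical names).
        MERGE dbo.TargetTable AS t
        USING dbo.StagingTable AS s
            ON t.BusinessKey = s.BusinessKey
        WHEN MATCHED THEN
            UPDATE SET t.SomeValue = s.SomeValue
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (BusinessKey, SomeValue)
            VALUES (s.BusinessKey, s.SomeValue);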

    Read the article

  • Embracing Community

    - by Chris Williams
    I just put the finishing touches on another article for my Code Magazine column: Embracing Community. You won't see this one until around July, but it focuses on a subject near and dear to my heart: Code Camps! At the end of the article, I mention that I'm interested in hearing some of your war stories about community and what you do to be a part of it. I'll be talking to people at Tech Ed 2010 and Codestock, but I would also like to hear from some of you that read this blog. If you have an interesting story to share, drop me a line (via this blog) and tell me about it. You never know, it just might end up in my column.

    Read the article

  • An XEvent a Day (28 of 31) – Tracking Page Compression Operations

    - by Jonathan Kehayias
    The Database Compression feature in SQL Server 2008 Enterprise Edition can provide some significant reductions in storage requirements for SQL Server databases, and in the right implementations and scenarios performance improvements as well.  There isn’t really a whole lot of information about the operations of database compression documented as being available in the DMVs or SQL Trace.  Paul Randal pointed out on Twitter today that sys.dm_db_index_operational_stats() provides...(read more)
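    The DMV mentioned does expose page-compression counters; a minimal query using standard SQL Server 2008+ columns (not taken from the post):

        -- Page compression attempts vs. successes, per index/partition.
        SELECT OBJECT_NAME(s.[object_id]) AS table_name,
               s.index_id,
               s.partition_number,
               s.page_compression_attempt_count,
               s.page_compression_success_count
        FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS s
        WHERE s.page_compression_attempt_count > 0;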

    Read the article

  • A proposal for #DAX Code Formatting #ssas #powerpivot #tabular

    - by Marco Russo (SQLBI)
    I recently published a set of rules for DAX code formatting. The following is an example of what I obtain:
    CALCULATE (
        SUMX (
            Orders,
            Orders[Amount]
        ),
        FILTER (
            ALL ( Customers ),
            CALCULATE (
                COUNTROWS ( Sales ),
                ALL ( Calendar[Date] )
            ) > 42 + 8 - 25 * ( 3 - 1 )
                + 2 - 1 + 2 - 1
                + CALCULATE (
                      2 + 2 - 2
                      + 2 - 2
                  )
                - CALCULATE ( 4 )
        )
    )
    The goal is to improve code readability, and I look forward to implementing a code formatting feature in DAX Studio. The DAX Editor already supports the rules described in the article. I am also considering whether to add a rule specific to ADDCOLUMNS / SUMMARIZE, because I would like to see the “pairs” of arguments that define a column either on the same row or with a special indentation rule (the DAX expression for a column indented on the line following the column name).
    EVALUATE
    CALCULATETABLE (
        CALCULATETABLE (
            SUMMARIZE (
                Audience,
                'Date'[Year],
                Individuals[Gender],
                Individuals[AgeRange],
                "Num of Rows", FORMAT ( COUNTROWS ( Audience ), "#,#" ),
                "Weighted Mean Age",
                    SUMX ( Audience, Audience[Weight] * Audience[Age] ) / SUM ( Audience[Weight] )
            ),
            SUMMARIZE (
                BridgeIndividualsTargets,
                Individuals[ID_Individual]
            ),
            Audience[Weight] > 0
        ),
        Targets[Target] = "Maschi",
        'Date'[Year] = 2010,
        'Date'[MonthName] = "January"
    )
    I would like to get feedback on this – you can use comments here or comments on the original article. Thanks!

    Read the article

  • Two BULK INSERT issues I worked around recently

    - by AaronBertrand
    Since I am still afraid of SSIS, and because I am dealing mostly with CSV files and table structures that are relatively simple and require only one of the three letters in the acronym "ETL," I find myself using BULK INSERT a lot. I have been meaning to switch to using CLR, since I am doing a lot of file system querying using xp_cmdshell, but I haven't had the chance to really explore it yet. I know, a lot of you are probably thinking, wow, look at all those bad habits. But for every person thinking...(read more)
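    As a point of reference, a minimal BULK INSERT of a CSV file, with a hypothetical table and file path (not from the post):

        -- Hypothetical target table and file path, for illustration only.
        CREATE TABLE dbo.ImportTarget
        (
            id   INT          NOT NULL,
            name VARCHAR(100) NOT NULL
        );
        GO
        BULK INSERT dbo.ImportTarget
        FROM 'C:\data\import.csv'
        WITH
        (
            FIELDTERMINATOR = ',',
            ROWTERMINATOR   = '\n',
            FIRSTROW        = 2,    -- skip the header row
            TABLOCK                 -- allows a minimally logged bulk load
        );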

    Read the article

  • Outstanding SQL Saturday

    - by merrillaldrich
    I had the privilege to attend the SQL Saturday held in Redmond today, and it was really outstanding. Among the many sessions, I especially enjoyed and took a lot of useful information away from Greg Larsen’s Dynamic Management Views session, Kalen Delaney’s Compression Session – I am planning to implement 2008 Enterprise compression on my company’s data warehouse later this year – and Remus Rusanu’s session on Service Broker to process NAP data. I want to send out heartfelt thanks to the generous...(read more)

    Read the article

  • I see no LOBs!

    - by Paul White
    Is it possible to see LOB (large object) logical reads from STATISTICS IO output on a table with no LOB columns? I was asked this question today by someone who had spent a good fraction of their afternoon trying to work out why this was occurring – even going so far as to re-run DBCC CHECKDB to see if any corruption had taken place.  The table in question wasn’t particularly pretty – it had grown somewhat organically over time, with new columns being added every so often as the need arose.  Nevertheless, it remained a simple structure with no LOB columns – no TEXT or IMAGE, no XML, no MAX types – nothing aside from ordinary INT, MONEY, VARCHAR, and DATETIME types.  To add to the air of mystery, not every query that ran against the table would report LOB logical reads – just sometimes – but when it did, the query often took much longer to execute.
    Ok, enough of the pre-amble.  I can’t reproduce the exact structure here, but the following script creates a table that will serve to demonstrate the effect:
    IF OBJECT_ID(N'dbo.Test', N'U') IS NOT NULL
        DROP TABLE dbo.Test
    GO
    CREATE TABLE dbo.Test
    (
        row_id NUMERIC IDENTITY NOT NULL,
        col01 NVARCHAR(450) NOT NULL,
        col02 NVARCHAR(450) NOT NULL,
        col03 NVARCHAR(450) NOT NULL,
        col04 NVARCHAR(450) NOT NULL,
        col05 NVARCHAR(450) NOT NULL,
        col06 NVARCHAR(450) NOT NULL,
        col07 NVARCHAR(450) NOT NULL,
        col08 NVARCHAR(450) NOT NULL,
        col09 NVARCHAR(450) NOT NULL,
        col10 NVARCHAR(450) NOT NULL,
        CONSTRAINT [PK dbo.Test row_id]
            PRIMARY KEY CLUSTERED (row_id)
    );
    The next script loads the ten variable-length character columns with one-character strings in the first row, two-character strings in the second row, and so on down to the 450th row:
    WITH Numbers AS
    (
        -- Generates numbers 1 - 450 inclusive
        SELECT TOP (450)
            n = ROW_NUMBER() OVER (ORDER BY (SELECT 0))
        FROM master.sys.columns C1,
             master.sys.columns C2,
             master.sys.columns C3
        ORDER BY n ASC
    )
    INSERT dbo.Test WITH (TABLOCKX)
    SELECT
        REPLICATE(N'A', N.n),
        REPLICATE(N'B', N.n),
        REPLICATE(N'C', N.n),
        REPLICATE(N'D', N.n),
        REPLICATE(N'E', N.n),
        REPLICATE(N'F', N.n),
        REPLICATE(N'G', N.n),
        REPLICATE(N'H', N.n),
        REPLICATE(N'I', N.n),
        REPLICATE(N'J', N.n)
    FROM Numbers AS N
    ORDER BY N.n ASC;
    Once those two scripts have run, the table contains 450 rows and 10 columns of data. Most of the time, when we query data from this table, we don’t see any LOB logical reads, for example:
    -- Find the maximum length of the data in
    -- column 5 for a range of rows
    SELECT result = MAX(DATALENGTH(T.col05))
    FROM dbo.Test AS T
    WHERE row_id BETWEEN 50 AND 100;
    But with a different query…
    -- Read all the data in column 1
    SELECT result = MAX(DATALENGTH(T.col01))
    FROM dbo.Test AS T;
    …suddenly we have 49 LOB logical reads, as well as the ‘normal’ logical reads we would expect.
    The Explanation
    If we had tried to create this table in SQL Server 2000, we would have received a warning message to say that future INSERT or UPDATE operations on the table might fail if the resulting row exceeded the in-row storage limit of 8060 bytes.  If we needed to store more data than would fit in an 8060 byte row (including internal overhead) we had to use a LOB column – TEXT, NTEXT, or IMAGE.  These special data types store the large data values in a separate structure, with just a small pointer left in the original row.
    Row Overflow
    SQL Server 2005 introduced a feature called row overflow, which allows one or more variable-length columns in a row to move to off-row storage if the data in a particular row would otherwise exceed 8060 bytes.  You no longer receive a warning when creating (or altering) a table that might need more than 8060 bytes of in-row storage; if SQL Server finds that it can no longer fit a variable-length column in a particular row, it will silently move one or more of these columns off the row into a separate allocation unit. Only variable-length columns can be moved in this way (for example the (N)VARCHAR, VARBINARY, and SQL_VARIANT types).  Fixed-length columns (like INTEGER and DATETIME, for example) never move into ‘row overflow’ storage.  The decision to move a column off-row is made on a row-by-row basis – so data in a particular column might be stored in-row for some table records, and off-row for others. In general, if SQL Server finds that it needs to move a column into row-overflow storage, it moves the largest variable-length column record for that row.  Note that in the case of an UPDATE statement that results in the 8060 byte limit being exceeded, it might not be the column that grew that is moved!
    Sneaky LOBs
    Anyway, that’s all very interesting, but I don’t want to get too carried away with the intricacies of row-overflow storage internals.  The point is that it is now possible to define a table with non-LOB columns that will silently exceed the old row-size limit and result in ordinary variable-length columns being moved to off-row storage.  Adding new columns to a table, expanding an existing column definition, or simply storing more data in a column than you used to – all these things can result in one or more variable-length columns being moved off the row. Note that row-overflow storage is logically quite different from old-style LOB and new-style MAX data type storage – individual variable-length columns are still limited to 8000 bytes each – you can just have more of them now.  Having said that, the physical mechanisms involved are very similar to full LOB storage – a column moved to row-overflow leaves a 24-byte pointer record in the row, and the ‘separate storage’ I have been talking about is structured very similarly to both old-style LOBs and new-style MAX types.  The disadvantages are also the same: when SQL Server needs a row-overflow column value it has to follow the in-row pointer and navigate another chain of pages, just like retrieving a traditional LOB.
    And Finally…
    In the example script presented above, the rows with row_id values from 402 to 450 inclusive all exceed the total in-row storage limit of 8060 bytes.  A SELECT that references a column in one of those rows that has moved to off-row storage will incur one or more lob logical reads as the storage engine locates the data.  The results on your system might vary slightly depending on your settings, of course; but in my tests only column 1 in rows 402-450 moved off-row.  You might like to play around with the script – updating columns, changing data type lengths, and so on – to see the effect on lob logical reads and which columns get moved when.  You might even see row-overflow columns moving back in-row if they are updated to be smaller (hint: reduce the size of a column entry by at least 1000 bytes if you hope to see this). Be aware that SQL Server will not warn you when it moves ‘ordinary’ variable-length columns into overflow storage, and it can have dramatic effects on performance.  It makes more sense than ever to choose column data types sensibly.
    If you make every column a VARCHAR(8000) or NVARCHAR(4000), and someone stores data that results in a row needing more than 8060 bytes, SQL Server might turn some of your column data into pseudo-LOBs – all without saying a word. Finally, some people make a distinction between ordinary LOBs (those that can hold up to 2GB of data) and the LOB-like structures created by row-overflow (where columns are still limited to 8000 bytes) by referring to row-overflow LOBs as SLOBs.  I find that quite appealing, but the ‘S’ stands for ‘small’, which makes expanding the whole acronym a little daft-sounding… small large objects, anyone? © Paul White 2011 email: [email protected] twitter: @SQL_Kiwi
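    As a hedged addition (not from the original post), you can see where the columns went by inspecting the table's allocation units:

        -- In-row vs. row-overflow allocation units for dbo.Test.
        -- (LOB_DATA units, if any, join via p.partition_id instead.)
        SELECT au.type_desc, au.total_pages, au.used_pages
        FROM sys.partitions AS p
        JOIN sys.allocation_units AS au
            ON au.container_id = p.hobt_id
        WHERE p.[object_id] = OBJECT_ID(N'dbo.Test');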

    Read the article

  • T-SQL Tuesday : Reflections on the PASS Summit and our community

    - by AaronBertrand
    Last week I attended the PASS Summit in Seattle. I blogged from both keynotes (Keynote #1 and Keynote #2), as well as the WIT Luncheon - which SQL Sentry sponsored. I had a fantastic time at the conference, even though these days I attend far fewer sessions than I used to. As a company, we were overwhelmed by the positive energy in the Expo Hall. I really liked the notebook idea, where board members were assigned notebooks to carry around and take ideas from attendees. I took full advantage when...(read more)

    Read the article

  • Geek City: Preparing for the SQL Server Master Exam

    - by Kalen Delaney
    I was amazed at the results when I just did a search of SQLBlog, and realized no one had really blogged here about the changes to the Microsoft Certified Master (MCM) program. Greg Low described the MCM program when he decided to pursue the MCM at the end of 2008, but two years later, at the end of 2010, Microsoft completely changed the requirements. Microsoft published the new requirements here. The three week intensive course is no longer required, but that doesn't mean you can just buy an exam...(read more)

    Read the article

  • SQL Server 2014 Cumulative Update #3 is Available

    - by AaronBertrand
    Microsoft has released Cumulative Update #3 for SQL Server 2014. Important! This Cumulative Update includes MS14-044, which I blogged about here and also mentioned here. KB Article: KB #2984923. 32 fixes listed publicly at time of publication. Build number is 12.0.2402. Relevant for @@VERSION 12.0.2000 through 12.0.2401. (And no, they still haven't fixed the license terms screen; it still makes it seem like an update for SQL Server 2014 Service Pack 1, which doesn't exist yet.)...(read more)
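    To check whether an instance is in the relevant build range, a quick standard query (not from the post):

        -- CU3 applies to builds 12.0.2000 through 12.0.2401 and takes you to 12.0.2402.
        SELECT @@VERSION AS version_string,
               SERVERPROPERTY('ProductVersion') AS product_version;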

    Read the article

  • 32-bit ODBC on Windows Server 2008 R2

    - by John Paul Cook
    Heterogeneous data access requires having the right drivers. If you have to use 32-bit ODBC drivers, you won’t find them when you start the Microsoft ODBC Administrator, because it is 64-bit. The 32-bit ODBC Administrator is found here: C:\Windows\SysWOW64\odbcad32.exe You might want to make a shortcut for it to make it easy to find. You’ll need to use it when making 32-bit ODBC data connections....(read more)

    Read the article

  • The PASS Board of Directors Q&A Session

    - by andyleonard
    Friday afternoon (18 Oct 2013), the PASS Board of Directors met with interested members of the SQL Server Community to answer questions. Paraphrases of some questions and notes I collected during the session follow (Please note: this is not a transcript): Elections Kendall Van Dyke asked about duplicate voting. The Board responded that they had looked into the matter and identified duplicate memberships based on names and addresses, but with different email addresses. After filtering for duplicate...(read more)

    Read the article

  • In-Memory OLTP Sample for SQL Server 2014 RTM

    - by Damian
    I have just found a very good resource about Hekaton (the In-Memory OLTP feature in SQL Server 2014). On the Codeplex site you can find the newest Hekaton samples - https://msftdbprodsamples.codeplex.com/releases/view/114491. The latest samples we had were related to the CTP2 version, but the newest ones work with the RTM version. Some issues you might have hit running the previous samples on RTM have been fixed: Update (Apr 28, 2014): Fixed an issue where the isolation level for sample stored procedures demonstrating integrity checks was too low. The transaction isolation level for the following stored procedures was updated: Sales.uspInsertSpecialOfferProductinmem, Sales.uspDeleteSpecialOfferinmem, Production.uspInsertProductinmem, and Production.uspDeleteProductinmem.
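    For context, the isolation level of a natively compiled procedure is declared in its ATOMIC block. A minimal sketch with hypothetical object names (not taken from the samples):

        -- Assumes a database with a MEMORY_OPTIMIZED_DATA filegroup.
        CREATE TABLE dbo.DemoInMem
        (
            id INT NOT NULL
               PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
            v  INT NOT NULL
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
        GO
        CREATE PROCEDURE dbo.usp_DemoInsert @id INT, @v INT
        WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
        AS
        BEGIN ATOMIC WITH
        (
            -- A raised isolation level, as in the integrity-check fix described above.
            TRANSACTION ISOLATION LEVEL = REPEATABLEREAD,
            LANGUAGE = N'us_english'
        )
            INSERT dbo.DemoInMem (id, v) VALUES (@id, @v);
        END;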

    Read the article

  • Live from the #summit13 keynote : 2013-10-17

    - by AaronBertrand
    Douglas McDowell (EVP Finance) takes the stage (no kilt), and talks numbers. PASS has an impressive $1MM in reserves as a "rainy day" fund. Last fiscal year they spent $7.6MM on community; 30% of that internationally. Bill Graziano comes on (no kilt) to say goodbye and thanks to the outgoing board members, Douglas McDowell, Rob Farley and Rushabh Mehta. Thomas LaRock comes on. No kilt, but he did tuck his shirt in. He introduces the incoming executive team. The 2014 PASS Business Analytics Conference...(read more)

    Read the article

  • SSMS hanging without error when connecting to SQL

    - by Rob Farley
    Scary day for me last Thursday. I had gone up to Brisbane, and was due to speak at the Queensland SQL User Group on Thursday night. Unfortunately, disaster struck about an hour beforehand. Nothing to do with the recent floods (although we were meeting in a different location because of them). It was actually down to the fact that I’d been fiddling with my machine to get Virtual Server running on Windows 7, and SQL had finally picked up a setting from then. I could run Management Studio, but it couldn’t connect at all. No error; it just seemed to hang. One of the things you have to do to get Virtual Server installed is to tweak the Group Policy settings. I’d used gpupdate /force to get Windows to pick up the new setting, which allowed me to get Virtual Server running properly, but at the time SQL was still using the previous settings. Finally, when in Brisbane, my machine picked up the new settings, and that caused me pain. Dan Benediktson describes the situation. If the SQL client picks up the wrong value out of the GetOverlappedResult API (which is required for various changes in Windows 7 behaviour), then Virtual Server can be installed, but SQL Server won’t allow connections. Yay. Luckily, it’s easy enough to change back using the Group Policy editor (gpedit.msc). Then I restarted the machine (again! gpupdate /force didn’t cut it either, because SQL had already picked up the value), and finally I could reconnect. On Thursday I simply borrowed another machine for my talk. Today, one of my guys had seen and remembered Dan’s post. Thanks, both of you.

    Read the article
