Search Results

Search found 41053 results on 1643 pages for 'database unit testing'.

  • Free Book from Microsoft - Testing for Continuous Delivery with Visual Studio 2012

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/10/16/free-book-from-microsoft---testing-for-continuous-delivery-with.aspx

    At http://msdn.microsoft.com/en-us/library/jj159345.aspx, Microsoft has made available a free e-book, Testing for Continuous Delivery with Visual Studio 2012: "As more software projects adopt a continuous delivery cycle, testing threatens to be the bottleneck in the process. Agile development frequently revisits each part of the source code, but every change requires a re-test of the product. While the skills of the manual tester are vital, purely manual testing can't keep up. Visual Studio 2012 provides many features that remove roadblocks in the testing and debugging process and also help speed up and automate re-testing."

    Read the article

  • Database vs Networking

    - by user16258
    I have completed my diploma in IT and am now pursuing a degree; I am in the last semester of my B.E. (I.T.). I want to specialize either in databases (Oracle) or in networking (Cisco). Which of the two will be in more demand in the near future? I know it's largely about interest, but I would still like to know your opinion. Most people say that a network engineer is never paid as well as a programmer or a DBA, while a few say they do get paid well. What would be the scope if I cleared my CCNA and CCNP exams, or my OCA and OCP exams? Which would be more rewarding? I have also read somewhere that most DBA tasks will be automated, so demand for DBAs will fall in the future. I would also like to hear from network engineers about the scenario out there in India. Thanks

    Read the article

  • TDE Tablespace Encryption 11.2.0.1 Certified with EBS 11i

    - by Steven Chan
    Oracle Advanced Security is an optionally licensed Oracle 11g Database add-on. Oracle Advanced Security Transparent Data Encryption (TDE) offers two different features: column encryption and tablespace encryption. TDE Tablespace Encryption 11.2.0.1 is now certified with Oracle E-Business Suite Release 11i. What is Transparent Data Encryption (TDE)? Oracle Advanced Security Transparent Data Encryption (TDE) allows you to protect data at rest. TDE helps address privacy and PCI requirements by encrypting personally identifiable information (PII) such as Social Security numbers and credit card numbers. TDE is completely transparent to existing applications, with no triggers, views or other application changes required. Data is transparently encrypted when written to disk and transparently decrypted after an application user has successfully authenticated and passed all authorization checks. Authorization checks include verifying that the user has the necessary select and update privileges on the application table and checking Database Vault, Label Security and Virtual Private Database enforcement policies.

    Read the article

  • Performing a clean database creation using msbuild

    - by Robert May
    So I’m taking a break from writing about other Agile stuff for a post. :) I’m still going to get back to the other subjects, but this is fun too. Something I’ve done quite a bit of is MSBuild and CI work. I’m experimenting with ways to improve what I’ve done in the past, particularly around database CI.

    Today, I developed a mechanism for starting from scratch with your database. By scratch, I mean blowing away the existing database and creating it again from a single command line call. I’m a firm believer that developers should be able to get to a known clean state at the database level with a single command, and that they should be operating off of their own isolated database to improve productivity. These scripts will help with that.

    Here’s how I did it. First, we have to disconnect users. I did so with the help of a script from SQL Server Central. Note that I’m using sqlcmd variable replacement.

    -- kills all the users in a particular database
    -- dlhatheway/3M, 11-Jun-2000
    declare @arg_dbname sysname
    declare @a_spid smallint
    declare @msg varchar(255)
    declare @a_dbid int

    set @arg_dbname = '$(DatabaseName)'

    select @a_dbid = sdb.dbid
    from master..sysdatabases sdb
    where sdb.name = @arg_dbname

    declare db_users insensitive cursor for
        select sp.spid
        from master..sysprocesses sp
        where sp.dbid = @a_dbid

    open db_users

    fetch next from db_users into @a_spid
    while @@fetch_status = 0
    begin
        select @msg = 'kill ' + convert(char(5), @a_spid)
        print @msg
        execute (@msg)
        fetch next from db_users into @a_spid
    end

    close db_users
    deallocate db_users
    GO

    Once all users are booted from the database, we can commence with recreating it. I generated the script that creates a database from SQL Server Management Studio, so I’m only going to show the bits that weren’t generated but are important. There are a bunch of Alter Database statements that aren’t shown.

    First, I had to find the default location of the database files in the install, since they can be in many different locations. I used Method 1 from a TechNet blog and then modified it a bit to do what I needed to do. I ended up using dynamic SQL because, for the life of me, I couldn’t get the “Filename” property to not return an error when I used anything besides a string. I’m dropping the database first, if it exists. Here’s the code:

    IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = N'$(DatabaseName)')
    BEGIN
        DROP DATABASE $(DatabaseName)
    END;
    GO

    IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath')
    BEGIN
        DROP DATABASE zzTempDBForDefaultPath
    END;

    -- Create temp database. Because no options are given, the default data and
    -- log path locations are used
    CREATE DATABASE zzTempDBForDefaultPath;

    DECLARE @Default_Data_Path VARCHAR(512),
            @Default_Log_Path VARCHAR(512);

    --Get the default data path
    SELECT @Default_Data_Path =
    (
        SELECT LEFT(physical_name, LEN(physical_name) - CHARINDEX('\', REVERSE(physical_name)) + 1)
        FROM sys.master_files mf
        INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id]
        WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 0
    );

    --Get the default log path
    SELECT @Default_Log_Path =
    (
        SELECT LEFT(physical_name, LEN(physical_name) - CHARINDEX('\', REVERSE(physical_name)) + 1)
        FROM sys.master_files mf
        INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id]
        WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 1
    );

    --Clean up
    IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath')
    BEGIN
        DROP DATABASE zzTempDBForDefaultPath
    END;

    DECLARE @SQL nvarchar(max)
    SET @SQL =
        'CREATE DATABASE $(DatabaseName) ON PRIMARY
        ( NAME = N''$(DatabaseName)'',
          FILENAME = N''' + @Default_Data_Path + N'$(DatabaseName)' + '.mdf' + ''',
          SIZE = 2048KB, FILEGROWTH = 1024KB )
        LOG ON
        ( NAME = N''$(DatabaseName)Log'',
          FILENAME = N''' + @Default_Log_Path + N'$(DatabaseName)' + '.ldf' + ''',
          SIZE = 1024KB, FILEGROWTH = 10% )'
    exec (@SQL)
    GO

    And with that, your database is created. You can run these scripts on any server and with any database name. To do that, I created an MSBuild script that looks like this:

    <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0">
      <PropertyGroup>
        <DatabaseName>MyDatabase</DatabaseName>
        <Server>localhost</Server>
        <SqlCmd>sqlcmd -v DatabaseName=$(DatabaseName) -S $(Server) -i </SqlCmd>
        <ScriptDirectory>.\Scripts</ScriptDirectory>
      </PropertyGroup>
      <Target Name="Rebuild">
        <ItemGroup>
          <ScriptFiles Include="$(ScriptDirectory)\*.sql"/>
        </ItemGroup>
        <Exec Command="$(SqlCmd) &quot;%(ScriptFiles.Identity)&quot;" ContinueOnError="false"/>
      </Target>
    </Project>

    Note that the Scripts directory is underneath the directory where I’m running the msbuild command and is relative to that directory. Note also that the target uses batching to run each script in the Scripts subdirectory, one after the other. Each script is passed to the sqlcmd command line using the .Identity property on the item group that is created. This target file is saved as “Database.target”.

    To make this work, you’ll need msbuild in your path; then run the following command:

    msbuild database.target /target:Rebuild

    Once you’ve got your virgin database set up, you’d then need a tool like dbdeploy.net to detect that it is a virgin database, build a change script from the accumulated change scripts, and make another sqlcmd call to update the database with the appropriate scripts. I’m doing that next, so I’ll post a blog update when I’ve got it working.

    Read the article

  • Designing persistence schema for BigTable on AppEngine

    - by Vitalij Zadneprovskij
    I have tried to design the datastore schema for a very small application. That schema would have been very simple, if not trivial, using a relational database with foreign keys, many-to-many relations, joins, etc. But the problem was that my application targeted Google App Engine, and I had to design for a database that is not relational. In the end I gave up. Is there a book or an article that describes design principles for applications that are meant for such databases? The books I have found are about programming for App Engine, and they don't spend many words on database design principles.

    Read the article

  • Review: Backbone.js Testing

    - by george_v_reilly
    Title: Backbone.js Testing
    Author: Ryan Roemer
    Rating: 4.5/5
    Publisher: Packt
    Copyright: 2013
    ISBN: 178216524X
    Pages: 168
    Keywords: programming, testing, javascript, backbone, mocha, chai, sinon
    Reading period: October 2013

    Backbone.js Testing is a short, dense introduction to testing JavaScript applications with three testing libraries: Mocha, Chai, and Sinon.JS. Although the author uses a sample application of a personal note manager written with Backbone.js throughout the book, much of the material would apply to any JavaScript client or server framework. Mocha is a test framework that runs your tests, either in the browser or under Node.js. Chai is a framework-agnostic TDD/BDD assertion library. Sinon.JS provides standalone test spies, stubs and mocks for JavaScript. They complement each other, and the author does a good job of explaining when and how to use each. I've written a lot of tests in Python (unittest and mock, primarily) and C# (NUnit), but my experience with JavaScript unit testing was both limited and years out of date. The JavaScript ecosystem continues to evolve rapidly, with new browser frameworks and Node packages springing up everywhere. JavaScript has some particular challenges in testing, notably asynchrony and callbacks. Mocha, Chai, and Sinon meet those challenges, though they can't take away all the pain. The author describes how to test Backbone models, views, and collections and how to deal with asynchrony; he provides useful testing heuristics, including isolating components to reduce dependencies; and he covers when to use stubs, mocks and fake servers, as well as test automation with PhantomJS. He does not, however, teach you Backbone.js itself; for that, you'll need another book. There are a few areas which I thought were dealt with too lightly. There's no real discussion of test-driven development or behavior-driven development, which provide the intellectual foundations of much of the book. Nor does he have much to say about testability and how to make legacy code more testable. The sample Notes app has plenty of testing seams (much of this falls naturally out of the architecture of Backbone); other apps are not so lucky. The chapter on automation is extremely terse (it could be expanded into a very large book!), but it does provide useful pointers to many areas for exploration. I learned a lot from this book and I have no hesitation in recommending it. Disclosure: Thanks to Ryan Roemer and Packt for a review copy of this book.

    Read the article

  • Oracle Database Machine: customer case at OOW2010

    - by rene.kundersma
    I am proud to announce that at OpenWorld 2010, together with TUI, I will be co-presenting the customer case on their Database Machine implementation. Our session number is S314935. The session will cover the business case, the choices made for the setup, how we did the migration to v1 and then to v2, and how we implemented the backup/restore and disaster recovery solutions. It will be a very interesting case for everyone interested in customer implementations of the DBM! Hope to see you there. Rene Kundersma, Technical Architect, Oracle Technology Services

    Read the article

  • Using SMO to drop a SQL Database

    - by ybbest
    SQL Server Management Objects (SMO) is the API you can use to manipulate SQL Server, for example to create and delete databases. For more details, see the MSDN documentation. There are two ways you can drop a database:

    1. Create a Database object and call its Drop method:

    Dim database As Database = New Database(Your database name)
    database.Drop()

    2. However, if there are existing connections to the database, attempting to drop it using the above method will fail. Recall that when you drop a database from Management Studio, you can tick a check box to close all connections before the drop. It is not so obvious, but you can do exactly the same thing using SMO:

    Dim server As Server = New Server(ServerConn)
    server.KillAllProcesses(Your database name)
    server.KillDatabase(Your database name)

    Read the article

  • Do you write common pre-conditions for a large number of unit test cases ?

    - by Vinoth Kumar
    I have heard/read that writing common pre-conditions for a large number of test cases is a bad thing, since this shared dependency may cause a large number of test cases to fail when something changes. What are your thoughts on that? If it is so, then what exactly is the purpose of the setUp() method in JUnit that runs before each test case? And if the same code inside setUp() runs before each test case, why can't it run only once before running all the test cases together?
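    JUnit itself offers both lifecycles, which makes the trade-off concrete: @Before (the setUp() of older JUnit versions) re-runs before every test, while @BeforeClass runs only once per fixture. A minimal, self-contained sketch (class and field names are my own, for illustration):

    import org.junit.Before;
    import org.junit.BeforeClass;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import java.util.ArrayList;
    import java.util.List;

    public class SetUpLifecycleTest {
        private static List<String> shared;  // initialized once for the whole class
        private List<String> fresh;          // re-initialized before every test

        @BeforeClass
        public static void initOnce() {
            // Runs ONCE before all tests in the class.
            shared = new ArrayList<>();
        }

        @Before
        public void setUp() {
            // Runs before EACH test: every test starts from a known clean state.
            fresh = new ArrayList<>();
        }

        @Test
        public void firstTestMutatesBothLists() {
            shared.add("x");
            fresh.add("x");
            assertEquals(1, shared.size());
            assertEquals(1, fresh.size());
        }

        @Test
        public void secondTestStartsClean() {
            // 'fresh' was rebuilt by setUp(), but 'shared' still holds whatever
            // earlier tests put into it -- exactly the isolation problem the
            // question is about (JUnit does not even guarantee test order).
            assertEquals(0, fresh.size());
        }
    }

    The per-test setUp() exists precisely so that one test's state cannot ripple into the next; the price you pay is re-running the setup code for every test.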

    Read the article

  • Database IDs

    - by fatherjack
    Just a quick post, mainly to test out the new blog format, but related to a question on the #sqlhelp hashtag. The question came from Justin Dearing (@zippy1981): "So I take it database_id isn't an ever incrementing value. #sqlhelp" When a new database is created it is given the lowest available ID. This is either an ID in a gap left where a database has been dropped or, if there are no gaps to fill, one more than the highest current ID. To see this in action, connect to your sandbox server and try this:

    USE MASTER
    GO
    CREATE DATABASE cherry
    GO
    USE cherry
    GO
    SELECT DB_ID()
    GO
    CREATE DATABASE grape
    GO
    USE grape
    GO
    SELECT DB_ID()
    GO
    CREATE DATABASE melon
    GO
    USE melon
    GO
    SELECT DB_ID()
    GO
    USE MASTER
    GO
    DROP DATABASE grape
    GO
    CREATE DATABASE kiwi
    GO
    USE kiwi
    GO
    SELECT DB_ID()
    GO
    USE MASTER
    GO
    DROP DATABASE cherry
    DROP DATABASE melon
    DROP DATABASE kiwi

    You should get an incrementing series of database IDs as the databases are created, until the last one, where the new database is allocated the ID that went missing when a database was dropped.

    Read the article

  • Do database tables need to have IDs?

    - by Arturas M
    Is an ID field always needed in database tables? In my case I have a user with firstName, lastName and email fields. email is unique and not null, so it could be used as the ID, right? So in that case, could/should I remove the ID column? Also, I want another table that extends this one. Let's say it's called patient; it has its own field additionalData, and I would like to link it through the email of the user I mentioned. The relationship would be 1 to 1, right? And I wouldn't need the IDs? Somehow MySQL Workbench wants me to use the IDs. What do you think? Any suggestions on this topic?

    Read the article

  • .mdf Database Filetype

    - by James Izzard
    Would somebody be kind enough to correct my understanding of the following (if incorrect)? Microsoft's .mdf file type can be used by both the LocalDB and the full SQL Server database engines (apologies if "engine" is not the correct word). The .mdf file does not care which of these two options is accessing it, so you could use either to access any given .mdf file, provided you had the permissions and password etc. In other words, LocalDB and SQL Server are interchangeable options for accessing .mdf files, chosen according to the application requirements. I would appreciate any clarification. Thanks

    Read the article

  • What if I can't make my unit test fail in "Red, Green, Refactor" of TDD?

    - by Joshua Harris
    So let's say that I have a test:

    @Test
    public void MoveY_MoveZero_DoesNotMove() {
        Point p = new Point(50.0, 50.0);
        p.MoveY(0.0);
        Assert.assertAreEqual(50.0, p.Y);
    }

    This test then causes me to create the class Point:

    public class Point {
        double X;
        double Y;

        public void MoveY(double yDisplace) {
            throw new NotYetImplementedException();
        }
    }

    OK. It fails. Good. Then I remove the exception and I get green. Great, but of course I need to test that it changes value, so I write a test that calls p.MoveY(10.0) and checks whether p.Y is equal to 60.0. It fails, so then I change the function to look like this:

    public void MoveY(double yDisplace) {
        Y += yDisplace;
    }

    Great, now I have green again and I can move on. I've tested not moving and moving in the positive direction, so naturally I should test a negative value. The only problem with this test is that, if I wrote it correctly, it doesn't fail at first. That means I didn't follow the principle of "Red, Green, Refactor." Of course, this is a first-world problem of TDD, but getting a failure at first is helpful in that it shows that your test can fail. Otherwise this seemingly innocent test, passing for incorrect reasons, could fail later because it was written wrong. That might not be a problem if it happened 5 minutes later, but what if it happens to the poor sap who inherited your code two years later? What he knows is that MoveY does not work with negative values, because that is what the test is telling him. But it really could work, and the failure could just be a bug in the test. I don't think that would happen in this particular case, because the code sample is so simple, but in a large, complicated system that might not be the case. It seems crazy to say that I want my tests to fail, but that is an important step in TDD, for good reasons.
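    One pragmatic way out of this trap (my suggestion, not part of the original question) is to prove the new test can fail before trusting it: temporarily sabotage the implementation, watch the test go red, then restore the code. Sketched in the question's own style:

    // Temporarily sabotage MoveY before trusting the new negative-value test.
    // If the sign of the displacement is ignored, the test below MUST fail;
    // if it still passes, the test itself is broken.
    public void MoveY(double yDisplace) {
        Y += Math.abs(yDisplace); // deliberate bug: direction is lost
    }

    @Test
    public void MoveY_MoveNegative_MovesDown() {
        Point p = new Point(50.0, 50.0);
        p.MoveY(-10.0);
        Assert.assertAreEqual(40.0, p.Y); // red against the sabotaged code, green once restored
    }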

    Read the article

  • Login - check database if user exists... (c#)

    - by SAMIR BHOGAYTA
    I have managed to do the following...

    string connectionString = "datasource=localhost;username=xxx;password=xxx;database=xxx";
    MySqlConnection mySqlConnection = new MySqlConnection(connectionString);

    // Note: building SQL by concatenating user input like this is open to SQL
    // injection; parameterized queries are the safer option.
    string selectString =
        "SELECT username, password " +
        "FROM forum_members " +
        "WHERE username = '" + frmUsername.Text + "' " +
        "AND password = '" + frmPassword.Text + "'";

    MySqlCommand mySqlCommand = new MySqlCommand(selectString, mySqlConnection);
    mySqlConnection.Open();
    // ExecuteScalar() returns null when no row matches, so test for null
    // rather than calling .Length on a possibly-null string.
    object result = mySqlCommand.ExecuteScalar();
    mySqlConnection.Close();

    if (result == null)
    {
        Label1.Text = "INCORRECT USER/PASS!"; // could redirect to register page
    }
    else
    {
        Label1.Text = "YOU ARE LOGGED IN!"; // set logged-in session variables
    }

    Read the article

  • Webhosting with custom database choice [closed]

    - by churchill614
    Possible Duplicate: How to find web hosting that meets my requirements?

    I am trying to find somewhere to host a website which uses OrientDB as its database. My budget doesn't stretch to a dedicated server where I can configure everything as I need it. Rather, I am hoping to find somewhere, ideally UK based, that will allow me to install OrientDB (or install it for me) on a server of the normal shared variety. Is anybody able to point me in a good direction for this, please? (Whilst UK is preferable, it is not essential.)

    Read the article

  • How to define a natural id in database?

    - by gcc
    There are a lot of manuals, and I am trying to create a database to hold information about these documents. But there is a small problem: how can I give meaningful IDs to the manuals? Is there any standard or logic for assigning meaningful IDs to documents? If there is no standard, can you tell me how I should do it?

    Example:

    table: manual
    id | manual name

    EDIT: IDs that are not meaningful to me:

    (i) 1, 2, 3, ... - not meaningful because if some item is deleted there can be gaps, e.g. 1, 33, 100
    (ii) M1, C2, P123, ... - random characters can be confusing when one tries to assign an ID to a new manual
    (iii) foo, bar, name, ... - plain names are not preferred because finding a name to use as the ID is hard after 500 manuals

    A meaningful new ID:
    * can be produced easily, even after 1000 manuals
    * should not be too complicated

    Read the article

  • What Exactly Does the Wattage Rating on a Power Supply Unit Mean?

    - by Jason Fitzpatrick
    Your PSU is rated 80 Plus Bronze and for 650 watts, but what exactly does that mean? Read on to see how wattage and power efficiency ratings translate to real world use. Today's Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    Read the article

  • Free E-Book - Testing for Continuous Delivery with Visual Studio 2012

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/11/05/free-e-book---testing-for-continuous-delivery-with-visual-studio.aspx

    At http://msdn.microsoft.com/en-us/library/jj159345.aspx, Microsoft Press are offering the free e-book Testing for Continuous Delivery with Visual Studio 2012. "As more software projects adopt a continuous delivery cycle, testing threatens to be the bottleneck in the process. Agile development frequently revisits each part of the source code, but every change requires a re-test of the product. While the skills of the manual tester are vital, purely manual testing can't keep up. Visual Studio 2012 provides many features that remove roadblocks in the testing and debugging process and also help speed up and automate re-testing." (Please ignore the "click to look inside"!)

    Read the article

  • How are design-by-contract and property-based testing (QuickCheck) related?

    - by Todd Owen
    Is their only similarity the fact that they are not xUnit (or, more precisely, not based on enumerating specific test cases), or does it go deeper than that? Property-based testing (using QuickCheck, ScalaCheck, etc.) seems well suited to a functional programming style where side effects are avoided. On the other hand, design by contract (as implemented in Eiffel) is better suited to OOP languages: you can express post-conditions about the effects of methods, not just their return values. But both of them involve testing assertions that are true in general (rather than assertions that should be true for a specific test case). And both can be tested using randomly generated inputs (with QuickCheck this is the only way, whereas with Eiffel I believe it is an optional feature of the AutoTest tool). Is there an umbrella term that encompasses both approaches? Or am I imagining a relationship that doesn't really exist?
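    For readers who have not seen property-based testing, the core idea fits in a few lines of plain Java with no library at all (the property, the generator, and all names here are my own illustration, not QuickCheck's API): state an invariant that must hold for all inputs, then check it against many randomly generated inputs.

    import java.util.Random;

    public class ReversePropertyTest {
        public static void main(String[] args) {
            Random rng = new Random(42); // fixed seed for reproducibility
            // Property: reversing a string twice yields the original string.
            for (int i = 0; i < 1000; i++) {
                String s = randomString(rng);
                String twice = new StringBuilder(
                        new StringBuilder(s).reverse().toString()).reverse().toString();
                if (!s.equals(twice)) {
                    throw new AssertionError("Property failed for input: " + s);
                }
            }
            System.out.println("Property held for 1000 random inputs.");
        }

        private static String randomString(Random rng) {
            int len = rng.nextInt(20);
            StringBuilder sb = new StringBuilder(len);
            for (int i = 0; i < len; i++) {
                sb.append((char) ('a' + rng.nextInt(26)));
            }
            return sb.toString();
        }
    }

    A real QuickCheck or ScalaCheck adds generator combinators and shrinking of failing cases, but the shape (a universally quantified assertion checked against random inputs) is the same, and it is essentially the shape of a design-by-contract post-condition exercised by a tool like AutoTest.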

    Read the article

  • Any pre-rolled System.IO abstraction libraries out there for Unit Testing?

    - by Binary Worrier
    To test methods that use the file system, we basically need to put System.IO behind a set of interfaces that we can then mock; I do this with a DiskIO class and interface. As my DiskIO code gets larger (and the grumblings from the "we're unconvinced about this TDD thing" crowd here at work get louder), I went looking for a comprehensive open source library that already does this and found... nothing. I may be looking in the wrong place, or I may have approached this problem in completely the wrong way. I can't be the only idiot in this position; do these libraries exist, and if so, where are they? Any you've used and would recommend? Thanks. P.S. I'm happy with my current approach, i.e. starting with what we need and adding only when the need arises. Unfortunately, the "we're unconvinced about this TDD thing" crowd remain unconvinced, and think that I can't be right.
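    The seam the poster describes is language-agnostic; here is a minimal sketch of the DiskIO idea (all names are invented, and it is shown in Java rather than .NET purely for illustration): hide the file-system calls behind an interface, give production code the real implementation, and give tests an in-memory fake.

    import java.util.HashMap;
    import java.util.Map;

    // The seam: an interface over the file-system calls the code under test needs.
    interface FileSystem {
        String readAllText(String path);
        void writeAllText(String path, String contents);
        boolean exists(String path);
    }

    // Production implementation delegating to the real file system.
    class RealFileSystem implements FileSystem {
        public String readAllText(String path) {
            try {
                return new String(java.nio.file.Files.readAllBytes(java.nio.file.Paths.get(path)));
            } catch (java.io.IOException e) {
                throw new RuntimeException(e);
            }
        }
        public void writeAllText(String path, String contents) {
            try {
                java.nio.file.Files.write(java.nio.file.Paths.get(path), contents.getBytes());
            } catch (java.io.IOException e) {
                throw new RuntimeException(e);
            }
        }
        public boolean exists(String path) {
            return java.nio.file.Files.exists(java.nio.file.Paths.get(path));
        }
    }

    // In-memory fake for unit tests: no disk access, no cleanup, fully deterministic.
    class FakeFileSystem implements FileSystem {
        private final Map<String, String> files = new HashMap<>();
        public String readAllText(String path) { return files.get(path); }
        public void writeAllText(String path, String contents) { files.put(path, contents); }
        public boolean exists(String path) { return files.containsKey(path); }
    }

    Tests construct the code under test with FakeFileSystem, so they never touch the disk; production wiring passes RealFileSystem instead.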

    Read the article

  • Best Way to Handle Meta Information in a SQL Database

    - by danielhanly.com
    I've got a database where I want to store user information and user_meta information. The reason for setting it up this way was that the user_meta side may change over time, and I would like to handle that without disrupting the master user table. If possible, I would like some advice on how best to set up this meta data table. I can either set it up like this:

    +----+---------+----------+--------------------+
    | id | user_id | key      | value              |
    +----+---------+----------+--------------------+
    | 1  | 1       | email    | [email protected]    |
    | 2  | 1       | name     | user name          |
    | 3  | 1       | address  | test address       |
    ...

    Or like this:

    +----+---------+--------------------+-----------+--------------+
    | id | user_id | email              | name      | address      |
    +----+---------+--------------------+-----------+--------------+
    | 1  | 1       | [email protected]    | user name | test address |

    Obviously, the top version is more flexible, but the bottom version is space-saving and perhaps more efficient, returning all the data as a single record. Which is the best way to go about this? Or am I going about this completely wrong, and is there another way I've not thought of?

    Read the article

  • Oracle Database (OTN Japan) [Japanese title unrecoverable due to encoding damage]

    - by OTN-J Master
    [Japanese-language post, largely lost to character-encoding damage. The surviving fragments indicate an OTN Japan article roundup pointing to, among others, part 28 of a series covering LOB (Large OBject) data types and part 19 of a DBA column on Oracle Maximum Availability Architecture (MAA).]

    Read the article

  • How to do a database backup in DB2 in Vista?

    - by Daziplqa
    How do I take a database backup in DB2 on Vista? Whenever I issue this command (logged in to Vista as Administrator):

    restore database myDB from D: taken at 20081013134446

    the command line processor returns the following error message:

    SQL1092N "ADMINISTRATOR" does not have the authority to perform the requested command. SQLSTATE=00000

    So, how can I solve this problem?

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let's take a simple query: "What is the most expensive air-mail order we have received to date?"

    SELECT Max(lo_ordtotalprice) most_expensive_order
    FROM   lineorder
    WHERE  lo_shipmode = 5;

    The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views), the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords, "IN MEMORY", in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked for INMEMORY and we may use the IM column store in this query. What do I mean by "may use"? There are a small number of cases where we won't use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used in Exadata environments. You can confirm that the IM column store was actually used by examining the session level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why, for analytical queries, it's faster to access the data in the new column format than in the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient.

    1. Access only the column data needed. The IM column store only has to scan two columns, lo_shipmode and lo_ordtotalprice, to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns.

    2. Scan and filter data in its compressed format. When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where the data is scanned in its uncompressed form, which could be 20X larger.

    3. Prune out any unnecessary data within each column. The fastest read you can execute is the read you don't do. In the IM column store a further reduction in the amount of data accessed is possible thanks to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine whether our specified column value, 5, exists in any IMCU by comparing it to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU. Since we have an equality predicate, we can easily determine whether 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs.

    4. Use SIMD to apply filter predicates. For the IMCUs that need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core that can be achieved in the buffer cache.

    I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session level statistics. You can monitor the session level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory column store begin with IM. You can see the full list of these statistics by typing:

    column display_name format a30

    SELECT display_name
    FROM   v$statname
    WHERE  display_name LIKE 'IM%';

    If we check the session statistics after we execute our query, the results are as follows:

    SELECT Max(lo_ordtotalprice) most_expensive_order
    FROM   lineorder
    WHERE  lo_shipmode = 5;

    SELECT display_name
    FROM   v$statname
    WHERE  display_name IN ('IM scan CUs columns accessed',
                            'IM scan segments minmax eligible',
                            'IM scan CUs pruned');

    As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan

    Read the article
