Search Results

Search found 37647 results on 1506 pages for 'sql performance'.


  • writing becomes slow after a few writes

    - by user1566277
    I am running embedded Linux on ARM with an SD card. While writing large amounts of data I see bizarre effects. For example, when I dd a 15 MB file a few times, it normally writes the file in less than 2 seconds. But after, say, 3-4 repetitions it sometimes takes 15 to 30 seconds to write the same file. If I sync after writing the file this does not happen, but the sync itself takes a long time. If there is enough of a gap between writing two files, then presumably the kernel syncs on its own. How can I tune the whole thing so that a write always finishes within 2 seconds? The file system I am using is ext3. Any pointers?

    Read the article

  • Enterprise Level Monitoring Solution

    - by Garthmeister J.
    My company is currently looking to replace the solution we use for monitoring our web-based enterprise applications for both up-time and performance. Please note this is not intended to be a network monitoring-type solution (internally we currently use Nagios). If anyone has a provider they have had a positive experience with, it would be much appreciated. Here is a list of our requirements:
      • Must have a large number of probes/agents around the globe to be representative of our customer base
      • Must have a flexible scripting capability to automate multi-step user actions
      • 24-hour-a-day monitoring
      • Flexible alerting system
      • Report generation capability
      • Mimic browser-specific monitoring (optional, not a must-have)

    Read the article

  • Is virtual machine slower than the underlying physical machine?

    - by Michal Illich
    This question is quite general, but specifically I'm interested in knowing whether a virtual machine running Ubuntu Enterprise Cloud will be any slower than the same physical machine without any virtualization. How much (1%, 5%, 10%)? Has anyone measured the performance difference of a web server or database server (virtual vs. physical)? If it depends on configuration, let's imagine two quad-core processors, 12 GB of memory and a bunch of SSD disks, running 64-bit Ubuntu Enterprise Server. On top of that, just one virtual machine is allowed to use all available resources.

    Read the article

  • Specific apache + mysql settings for a light-weight site

    - by Good Person
    I have a small website with a Joomla and a Moodle set up, and both of them seem very slow. The server (CentOS release 5.5 (Final)) is a virtual dedicated server with about 2 GB of RAM. I don't expect to ever get more than 10-15 people on at the same time (and even that is high). What settings could I change in Apache, MySQL, or even the OS to increase the performance of my site? I'm not concerned about running out of resources if I get too many visitors. If you need more specific data, leave a comment and I'll edit the question.
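
    Before touching config files, a quick, hedged MySQL health check (variable names vary slightly between MySQL versions, so verify them against your install) can show whether the database side is the bottleneck:

      -- Rough diagnostics for a small Joomla/Moodle box; the thresholds are judgement calls, not rules.
      SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';      -- MyISAM index cache, often tiny by default
      SHOW GLOBAL VARIABLES LIKE 'query_cache%';         -- is the query cache enabled, and how large?
      SHOW GLOBAL STATUS LIKE 'Qcache_hits';             -- compare cache hits...
      SHOW GLOBAL STATUS LIKE 'Com_select';              -- ...against total SELECT statements
      SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables'; -- high values suggest tmp_table_size is too small
      SHOW GLOBAL STATUS LIKE 'Threads_connected';       -- should be tiny for 10-15 concurrent visitors
      SHOW FULL PROCESSLIST;                             -- anything stuck in 'Copying to tmp table' or 'Locked'?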

    Read the article

  • How do I stop my IIS App Pool making a request to wpad.mydomain.com?

    - by Programming Hero
    As part of some performance troubleshooting, I've monitored the slow startup of a "cold" App Pool (one without an active worker process) in IIS. When using a built-in account, the App Pool starts in sub-second time. When using a custom local account the App Pool takes 30+ seconds to start processing requests. The service appears to be making requests to wpad.mydomain.com, an address it does not have access to, which causes it to wait 30 seconds for a response before eventually timing out. As a workaround, I've added the hostname to the server's hosts file, to direct the traffic to the local machine, which returns much faster (1-2 seconds). What do I need to do to stop IIS making this request when this identity is used for the App Pool?

    Read the article

  • Relating ping to perceived browser GUI response

    - by cvsdave
    We periodically get complaints of poor GUI (browser page) response that we need to explore. I am looking for a quick and cheap first check to see if the issue is network latency, or server performance. Has anyone encountered any discussion of ping time and perceived GUI response? I understand that GUI response is complicated, but it would be nice if we could find or develop a rule of thumb along the lines of "Hmmmm, ping is over 200, it might be network problems". Ideally, this lives in a script on the user's machine so that we can see the latency that they are seeing... (BASH, Linux). A reference to a good discussion page would be a fine answer, as would any recommendation of other source material.

    Read the article

  • Why are browsers so heavy?

    - by Kaivosukeltaja
    Back in 1998 I had a computer with 233MHz Pentium MMX CPU and a GFX card with no 3D acceleration. It was able to run games like Quake II at a decent FPS rate. My current computer has tons more performance and a mid-class GPU, yet struggles to reach 20 FPS when rendering a single model inside a skybox with WebGL. Even regular pages with lots of 2D CSS animations bring many modern computers to their metaphorical knees. As a web developer I understand there's a lot going on in a web page but not what makes it that heavy. Modern browsers compile JavaScript to CPU native machine code before running it and rendering into a canvas element shouldn't trigger DOM rebuilds so theoretically it should be a lot faster than it is. What am I missing here and is it possible to avoid or minimize whatever is making the browsers slow to build more efficient websites?

    Read the article

  • Monitoring disk block access in Linux

    - by VoidPointer
    Is there a way to gather statistics about blocks being accessed on a disk? I have a scenario where a task is both memory and I/O intensive and I need to find a good balance as to how much of the available RAM I can assign to the process and how much I should leave for the system for building its I/O cache for the block device being used. I suspect that most of the I/O that is currently happening is accessing a rather small subset of the device and that performance could be optimized by increasing the RAM that is available for I/O buffering. Ideally, I would be able to create something like a "heat-map" that shows me which parts of the disk are accessed most of the time.

    Read the article

  • Computers with Small Capacity SSD - For caching?

    - by RXC
    Recently, in newsletters from websites, I have been seeing computers for sale that include both an HDD and an SSD, but the SSD has a small capacity like 24 GB. I don't know if this still holds true, but I learned that when building a computer you want to install your OS on your fastest drive. I do a lot of PC gaming, so I install my OS and games on my SSD, because games and many applications make lots of system calls to the OS and performance can only be as fast as the slowest piece. Why do these computers come with small-capacity SSDs? Most OSes take up around 20 to 30 GB of space, so what is the benefit of such a small SSD? Are these small SSDs meant for caching? And what exactly does caching mean (what does it do and how does it help)?

    Read the article

  • How does Google manage to serve results so fast?

    - by Quintin Par
    I am building autocomplete functionality for my site and Google Instant results are my benchmark. When I look at Google, the 50-60 ms response time baffles me; it looks insane. In comparison, here’s how mine looks. To give you an idea, my results are cached on the load balancer and served from a machine that has httpd slow-start and initcwnd tuned. My site is also behind CloudFlare. From a server-side perspective I don’t think I can do anything more. Can someone help me take this 500 ms response time down to 60 ms? What more should I be doing to achieve Google-level performance?

    Read the article

  • Discount Codes Galore

    - by Cassandra Clark
    Saving money is at the top of everyone's list right now. With this in mind, the Oracle Technology Network team has compiled a list of discounts available at the Oracle Store. We are also introducing an Oracle Technology Network member discount from O'Reilly Media. If you subscribe to any of the Oracle Technology newsletters you also saw special discounts from CRC Press, Packt Publishing and Apress. We are going to do our best to bring you more offers like this every month. Now on to the discounts. Oracle Store offers - all below expire May 31st, 2010. Don't miss out!
      • Expand your productivity with Oracle Open Office and save 15%. Enter OTNOffice at checkout. Buy now!
      • Drive business agility and performance with industry-leading Oracle Database Management Packs. Save 10% when you purchase them at the Oracle Store. Enter OTNDBMP at checkout. Buy now!
      • Save 15% on Oracle Virtualization and Unbreakable Linux Support at the Oracle Store. Enter code OTNLinuxVM at checkout. Buy now!
      • Save 20% on Oracle SQL Developer Data Modeler. Use OTNSQL at checkout. Buy now!
    O'Reilly Oracle Technology Network member offer: O'Reilly is generously offering Oracle Technology Network members 35% off print books and 40% off eBooks. Browse Oracle titles at http://oreilly.com/pub/topic/oracle and use discount code TECNT at checkout.

    Read the article

  • Best approach to accessing multiple data source in a web application

    - by ced
    I have a base web application developed with .NET technologies (ASP.NET), used on our LAN by 30 simultaneous users. From this web application I've developed two verticalizations used by online users, and in the future I expect hundreds of simultaneous users. Our company has different locations and each site uses its own database. The web application needs to retrieve information from all existing databases. Currently there are 3 databases, but future expansion to new offices is not excluded. My question then is: what is the best strategy for a web application to retrieve information from different databases (which share the same schema), where the main objectives are data-access performance and high fault tolerance? Are there case studies in the literature that I can take as an example? Do you know some good documents to study? Do you have any tips for implementing this efficiently? Intuitively I would say there are two possible strategies: query the different sources in real time and aggregate the data on the fly, or create a repository that contains the union of the entities of interest and query the repository directly.
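
    For the first strategy (query in real time and aggregate on the fly), one minimal sketch on SQL Server is a view that unions the remote tables over linked servers; the server, database, table and column names below are placeholders, not part of the original question:

      -- Assumes linked servers SITE1, SITE2 and SITE3 already exist and expose the same schema.
      CREATE VIEW dbo.AllOrders
      AS
      SELECT 'Site1' AS SourceSite, o.* FROM SITE1.SalesDb.dbo.Orders AS o
      UNION ALL
      SELECT 'Site2' AS SourceSite, o.* FROM SITE2.SalesDb.dbo.Orders AS o
      UNION ALL
      SELECT 'Site3' AS SourceSite, o.* FROM SITE3.SalesDb.dbo.Orders AS o;
      GO

      -- Callers query a single object; a new office means one more UNION ALL branch.
      SELECT SourceSite, COUNT(*) AS OrderCount
      FROM   dbo.AllOrders
      GROUP BY SourceSite;

    The second strategy trades freshness for speed and fault tolerance: a replicated repository still answers queries when one site is down, whereas the distributed view above fails for that branch.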

    Read the article

  • How to load a binary file(.bin) of size 6 MB in a varbinary(MAX) column of SQL Server 2005 database

    - by Feroz Khan
    How to load a binary file(.bin) of size 6 MB in a varbinary(MAX) column of SQL Server 2005 database using ADO in vc++ application. This is the code I am using to load the file which I used to load a .bmp file BOOL CSaveView::PutECGInDB(CString strFilePath, FieldPtr pFileData) { //Open File CFile fileImage; CFileStatus fileStatus; fileImage.Open(strFilePath,CFile::modeRead); fileImage.GetStatus(fileStatus); //Alocating memory for data ULONG nBytes = (ULONG)fileStatus.m_size; HGLOBAL hGlobal = GlobalAlloc(GPTR,nBytes); LPVOID lpData = GlobalLock(hGlobal); //Putting data into file fileImage.Read(lpData,nBytes); HRESULT hr; _variant_t varChunk; long lngOffset = 0; UCHAR chData; SAFEARRAY FAR *psa = NULL; SAFEARRAYBOUND rgsabound[1]; try { //Create a safe array to store the BYTES rgsabound[0].lLbound = 0; rgsabound[0].cElements = nBytes; psa = SafeArrayCreate(VT_UI1,1,rgsabound); while(lngOffset<(long)nBytes) { chData = ((UCHAR*)lpData)[lngOffset]; hr = SafeArrayPutElement(psa,&lngOffset,&chData); if(hr != S_OK) { return false; } lngOffset++; } lngOffset = 0; //Assign the safe array to a varient varChunk.vt = VT_ARRAY|VT_UI1; varChunk.parray = psa; hr = pFileData-AppendChunk(varChunk); if(hr != S_OK) { return false; } } catch(_com_error &e) { //get info from _com_error _bstr_t bstrSource(e.Source()); _bstr_t bstrDescription(e.Description()); _bstr_t bstrErrorMessage(e.ErrorMessage()); _bstr_t bstrErrorCode(e.Error()); TRACE("Exception thrown for classes generated by #import"); TRACE("\tCode= %08lx\n",(LPCSTR)bstrErrorCode); TRACE("\tCode Meaning = %s\n",(LPCSTR)bstrErrorMessage); TRACE("\tSource = %s\n",(LPCSTR)bstrSource); TRACE("\tDescription = %s\n",(LPCSTR)bstrDescription); } catch(...) { TRACE("Unhandle Exception"); } //Free Memory GlobalUnlock(lpData); return true; } But when I read the same file using Getchunk funcion it gives me all 0,s but the size of the file I get is same as the one uploaded. Your help will be highly appreciated. Thanks in advance.

    Read the article

  • What is causing this SQL 2005 Primary Key Deadlock between two real-time bulk upserts?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that has live updating stock prices. I've got one process called 'LiveFeed' which reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table then insert/update to MDC table.' (BulkUpsert) I've got another process which then reads this data, computes other data, and then saves the results back into the same table, using a similar BulkUpsert stored proc. Thirdly, there are a multitude of users running a C# Gui polling the MDC table and reading updates from it. Now, during the day when the data is changing rapidly, things run pretty smoothly, but then, after market hours, we've recently started seeing an increasing number of Deadlock exceptions coming out of the database, nowadays we see 10-20 a day. The imporant thing to note here is that these happen when the values are NOT changing. Here's all the relevant info: Table Def: CREATE TABLE [dbo].[MarketDataCurrent]( [MDID] [int] NOT NULL, [LastUpdate] [datetime] NOT NULL, [Value] [float] NOT NULL, [Source] [varchar](20) NULL, CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED ( [MDID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] - stackoverflow wont let me post images until my reputation goes up to 10, so i'll add them as soon as you bump me up, hopefully as a result of this question. ![alt text][1] [1]: http://farm5.static.flickr.com/4049/4690759452_6b94ff7b34.jpg I've got a Sql Profiler Trace Running, catching the deadlocks, and here's what all the graphs look like. stackoverflow wont let me post images until my reputation goes up to 10, so i'll add them as soon as you bump me up, hopefully as a result of this question. ![alt text][2] [2]: http://farm5.static.flickr.com/4035/4690125231_78d84c9e15_b.jpg Process 258 is called the following 'BulkUpsert' stored proc, repeatedly, while 73 is calling the next one: ALTER proc [dbo].[MarketDataCurrent_BulkUpload] @updateTime datetime, @source varchar(10) as begin transaction update c with (rowlock) set LastUpdate = getdate(), Value = t.Value, Source = @source from MarketDataCurrent c INNER JOIN #MDTUP t ON c.MDID = t.mdid where c.lastUpdate < @updateTime and c.mdid not in (select mdid from MarketData where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%') and c.value <> t.value insert into MarketDataCurrent with (rowlock) select MDID, getdate(), Value, @source from #MDTUP where mdid not in (select mdid from MarketDataCurrent with (nolock)) and mdid not in (select mdid from MarketData where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%') commit And the other one: ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload] AS begin transaction -- Update existing mdid UPDATE c WITH (ROWLOCK) SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source FROM MarketDataCurrent c INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid; -- Insert new MDID INSERT INTO MarketDataCurrent with (ROWLOCK) SELECT * FROM #TEMPTABLE2 WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK)) -- Clean up the temp table DELETE #TEMPTABLE2 commit To clarify, those Temp Tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a Unique Constraint instead but that increased the number of deadlocks 10-fold. 
I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!
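
    One pattern that is often suggested for this kind of check-then-insert race (a sketch only, with column names taken from the procedures above, not a guaranteed fix) is to take update locks up front and hold them for the whole transaction, so two concurrent upserts serialize instead of deadlocking on the primary key:

      BEGIN TRAN;

      -- UPDLOCK + HOLDLOCK makes the read take and keep update-range locks,
      -- so a concurrent upsert blocks here rather than deadlocking later.
      UPDATE c
      SET    LastUpdate = t.LastUpdate,
             Value      = t.Value,
             Source     = t.Source
      FROM   MarketDataCurrent AS c WITH (UPDLOCK, HOLDLOCK)
      JOIN   #TEMPTABLE2       AS t ON c.MDID = t.MDID;

      INSERT INTO MarketDataCurrent (MDID, LastUpdate, Value, Source)
      SELECT t.MDID, t.LastUpdate, t.Value, t.Source
      FROM   #TEMPTABLE2 AS t
      WHERE  NOT EXISTS (SELECT 1
                         FROM   MarketDataCurrent AS c WITH (UPDLOCK, HOLDLOCK)
                         WHERE  c.MDID = t.MDID);

      COMMIT;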

    Read the article

  • Load a 6 MB binary file in a SQL Server 2005 VARBINARY(MAX) column using ADO/VC++?

    - by Feroz Khan
    How to load a binary file(.bin) of size 6 MB in a varbinary(MAX) column of SQL Server 2005 database using ADO in a VC++ application. This is the code I am using to load the file which I used to load a .bmp file: BOOL CSaveView::PutECGInDB(CString strFilePath, FieldPtr pFileData) { //Open File CFile fileImage; CFileStatus fileStatus; fileImage.Open(strFilePath,CFile::modeRead); fileImage.GetStatus(fileStatus); //Alocating memory for data ULONG nBytes = (ULONG)fileStatus.m_size; HGLOBAL hGlobal = GlobalAlloc(GPTR,nBytes); LPVOID lpData = GlobalLock(hGlobal); //Putting data into file fileImage.Read(lpData,nBytes); HRESULT hr; _variant_t varChunk; long lngOffset = 0; UCHAR chData; SAFEARRAY FAR *psa = NULL; SAFEARRAYBOUND rgsabound[1]; try { //Create a safe array to store the BYTES rgsabound[0].lLbound = 0; rgsabound[0].cElements = nBytes; psa = SafeArrayCreate(VT_UI1,1,rgsabound); while(lngOffset<(long)nBytes) { chData = ((UCHAR*)lpData)[lngOffset]; hr = SafeArrayPutElement(psa,&lngOffset,&chData); if(hr != S_OK) { return false; } lngOffset++; } lngOffset = 0; //Assign the safe array to a varient varChunk.vt = VT_ARRAY|VT_UI1; varChunk.parray = psa; hr = pFileData->AppendChunk(varChunk); if(hr != S_OK) { return false; } } catch(_com_error &e) { //get info from _com_error _bstr_t bstrSource(e.Source()); _bstr_t bstrDescription(e.Description()); _bstr_t bstrErrorMessage(e.ErrorMessage()); _bstr_t bstrErrorCode(e.Error()); TRACE("Exception thrown for classes generated by #import"); TRACE("\tCode= %08lx\n",(LPCSTR)bstrErrorCode); TRACE("\tCode Meaning = %s\n",(LPCSTR)bstrErrorMessage); TRACE("\tSource = %s\n",(LPCSTR)bstrSource); TRACE("\tDescription = %s\n",(LPCSTR)bstrDescription); } catch(...) { TRACE("***Unhandle Exception***"); } //Free Memory GlobalUnlock(lpData); return true; } But when I read the same file using Getchunk function it gives me all 0s but the size of the file I get is same as the one uploaded. Your help will be highly appreciated.

    Read the article

  • Should I create a unique clustered index, or non-unique clustered index on this SQL 2005 table?

    - by Bremer
    I have a table storing millions of rows. It looks something like this: Table_Docs: ID, bigint (identity column); OutputFileID, int; Sequence, int; …(many other fields). We find ourselves in a situation where the developer who designed it made OutputFileID the clustered index. It is not unique - there can be thousands of records with this ID - and it has no benefit to any process using this table, so we plan to remove it. The question is what to change it to. I have two candidates. The ID identity column is a natural choice. However, we have a process which does a lot of UPDATE commands on this table, and it uses the Sequence to do so. The Sequence is non-unique: most values appear only once, but about 20% can have two or more records with the same Sequence. The INSERT app is a VB6 piece of crud throwing thousands of INSERT commands at the table. The inserted values are never in any particular order, so the Sequence of one insert may be 12345 and the next could be 12245. I know that this could cause SQL to move a lot of data to keep the clustered index in order. However, the Sequence of the inserts is generally close to being in order, and all inserts would take place at the end of the clustered table. E.g.: I have 5 million records with Sequence spanning 1 to 5 million; the INSERT app will be inserting Sequences at the end of that range at any given time. Reordering of the data should be minimal (tens of thousands of records at most). Now, the UPDATE app is our .NET star. It does all UPDATEs on the Sequence column - “UPDATE Table_Docs SET Field1=This, Field2=That… WHERE Sequence = 12345” - hundreds of thousands of these a day. The UPDATEs are completely and totally random, touching all points of the table. All other processes simply do SELECTs against this table (web pages), and regular indexes cover those. So my question is, what’s better: a unique clustered index on the ID column, benefiting the INSERT app, or a non-unique clustered index on the Sequence, benefiting the UPDATE app?
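
    A hedged sketch of the first option (the index names are assumed, and a table of this size should be re-indexed in a maintenance window):

      -- Drop the existing non-unique clustered index on OutputFileID (name assumed).
      DROP INDEX IX_TableDocs_OutputFileID ON dbo.Table_Docs;

      -- Cluster on the ever-increasing identity so new rows always append at the end...
      CREATE UNIQUE CLUSTERED INDEX CIX_TableDocs_ID
          ON dbo.Table_Docs (ID);

      -- ...and give the UPDATE app a narrow nonclustered index to seek on Sequence.
      CREATE NONCLUSTERED INDEX IX_TableDocs_Sequence
          ON dbo.Table_Docs (Sequence);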

    Read the article

  • A few problems with Delphi involving Mail Merge, SQL + Databases.

    - by Daniel
    My first problem is with mail merge. I have created a a Data File and a table, yet I am not able to fill my table with information from my Data File. The << just seems to be inserted after wherever the cursor is on the page, which is not where the table is. All that is entered into the actual table is a '59'. Therefore I think I either need to to change the code or be able to move the cursor. Here is the code I am currently using: wrdDoc.Tables.Add(wrdSelection.Range, ADOTable1.FieldCount, 3); wrdDoc.Tables.Item(1).Columns.Item(1).SetWidth(51,wdAdjustNone); wrdDoc.Tables.Item(1).Columns.Item(2).SetWidth(20,wdAdjustNone); wrdDoc.Tables.Item(1).Columns.Item(3).SetWidth(100,wdAdjustNone); // Set the shading on the first row to light gray wrdDoc.Tables.Item(1).Rows.Item(1).Cells .Shading.BackgroundPatternColorIndex := wdGray25; // BOLD the first row wrdDoc.Tables.Item(1).Rows.Item(1).Range.Bold := True; // Center the text in Cell (1,1) wrdDoc.Tables.Item(1).Cell(1,1).Range.Paragraphs.Alignment := wdAlignParagraphCenter; // Fill each row of the table with data wrdDoc.Tables.Item(1).Cell(1, 1).Range.InsertAfter('Time'); wrdDoc.Tables.Item(1).Cell(1, 2).Range.InsertAfter(''); wrdDoc.Tables.Item(1).Cell(1, 3).Range.InsertAfter('Teacher'); For Count := 1 to (ADOTable1.FieldCount - 1) do begin wrdDoc.Tables.Item(1).Cell((Count + 1), 1).Range.InsertAfter(wrdSelection.Range,'Time' + IntToStr(Count)); wrdDoc.Tables.Item(1).Cell((Count + 1), 2).Range.InsertAfter(wrdSelection.Range,'THonorific' + IntToStr(Count)); wrdDoc.Tables.Item(1).Cell((Count + 1), 3).Range.InsertAfter(wrdSelection.Range,'TSurname' + IntToStr(Count)); end; My second problem is that I do not know what the correct SQL syntax is for editing the name of a column in the database (I am using Delphi 7 and Microsoft Jet Engine if that makes a difference). The third problem is that when I add a new column to my database manually (which I need to do) I get a 'violation' error in one of my units when I activate an ADOTable. This only happens on one unit and it happens when I add a column with any name anywhere in the table. I know that is vague but I can't seem to narrow down the problem any further than that. If you could help with me with any of those it would be great. Thanks.
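
    A hedged aside on the second problem: Jet SQL has no direct RENAME COLUMN statement, so a common workaround (the table and column names below are invented for illustration, and each statement must be executed separately) is to add a new column, copy the data across, and drop the old column:

      ALTER TABLE Teachers ADD COLUMN TSurnameNew TEXT(50);
      UPDATE Teachers SET TSurnameNew = TSurname;
      ALTER TABLE Teachers DROP COLUMN TSurname;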

    Read the article

  • What is the best approach in SQL to store multi-level descriptions?

    - by gime
    I need a new perspective on how to design a reliable and efficient SQL database to store multi-level arrays of data. This problem applies to many situations, but I came up with this example: There are hundreds of products. Each product has an undefined number of parts. Each part is built from several elements. All products are described in the same way. All parts would require the same fields to describe them (let's say: price, weight, part name), and all elements of all parts also have a uniform design (for example: element code, manufacturer). Plain and simple. One element may be related to only one part, and each part is related to one product only. I came up with the idea of three tables: Products: -------------------------------------------- prod_id prod_name prod_price prod_desc 1 hoover 120 unused next Parts: ---------------------------------------------------- part_id part_name part_price part_weight prod_id 3 engine 10 20 1 and finally Elements: --------------------------------------- el_id el_code el_manufacturer part_id 1 BFG12 GE 3 Now, select a desired product, select all from PARTS where prod_id is the same, and then select all from ELEMENTS where part_id matches - after multiple queries you've got all the data. I'm just not sure if this is the right approach. I've also got another idea, without the ELEMENTS table. That would decrease the number of queries, but I'm a bit afraid it might be lame and bad practice. Instead of the ELEMENTS table there are two more fields in the PARTS table, so it looks like this: part_id, part_name, part_price, part_weight, prod_id, part_el_code, part_el_manufacturer. They would be text type, and for each part, information about elements would be stored as strings, this way: part_el_code | code_of_element1; code_of_element2; code_of_element3 part_el_manufacturer | manuf_of_element1; manuf_of_element2; manuf_of_element3. Then all we need is to explode() the data from those fields, and we get arrays, easy to display. Of course this is not perfect and has some limitations, but is this idea OK? Or should I just go with the first idea? Or maybe there is a better approach to this problem? It's really hard to describe it in a few words, and that means it's hard to search for an answer. Also, understanding the principles of designing databases is not as easy as it seems.
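
    A minimal sketch of the first (normalized) idea with explicit foreign keys, using the table and column names from the question; the data types are guesses:

      CREATE TABLE Products (
          prod_id    INT PRIMARY KEY,
          prod_name  VARCHAR(100),
          prod_price DECIMAL(10,2),
          prod_desc  VARCHAR(255)
      );

      CREATE TABLE Parts (
          part_id     INT PRIMARY KEY,
          part_name   VARCHAR(100),
          part_price  DECIMAL(10,2),
          part_weight DECIMAL(10,2),
          prod_id     INT NOT NULL,
          FOREIGN KEY (prod_id) REFERENCES Products (prod_id)
      );

      CREATE TABLE Elements (
          el_id           INT PRIMARY KEY,
          el_code         VARCHAR(50),
          el_manufacturer VARCHAR(100),
          part_id         INT NOT NULL,
          FOREIGN KEY (part_id) REFERENCES Parts (part_id)
      );

      -- A single JOIN replaces the multiple round trips described above.
      SELECT p.prod_name, pa.part_name, e.el_code, e.el_manufacturer
      FROM   Products p
      JOIN   Parts    pa ON pa.prod_id = p.prod_id
      JOIN   Elements e  ON e.part_id  = pa.part_id
      WHERE  p.prod_id = 1;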

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox.  It allows you to read 2005 and 2008 profiler traces stored as .trc files and read them into the Data Flow.  From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection this just uses the trace file. Example Usages Cache warming for SQL Server Analysis Services Reading the flight recorder Find out the longest running queries on a server Analyze statements for CPU, memory by user or some other criteria you choose Properties The Trace File Source adapter has two properties, both of which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio. Property Type Description AccessMode Enumeration This property determines how the Filename property is interpreted. The values available are: DirectInput Variable Filename String This property holds the path for trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property. Trace Column Definition Hopefully the majority of you can skip this section entirely, but if you encounter some problems processing a trace file this may explain it and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately API methods that expose the schema of a trace file have known issues and are unreliable, put simply the data often differs from what was specified. To overcome these limitations the component uses  some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string. TraceDataColumnsSQL.xml  - SQL Server Database Engine Trace Columns TraceDataColumnsAS.xml    - SQL Server Analysis Services Trace Columns The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g. "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml" "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml" If at runtime the component encounters a type conversion or sizing error it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files we have implemented specific exception traps to direct you to the files to enable you to fix any further issues due to different usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below. Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml". Installation The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft’s recommendations. 
You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service pack to your SQL Server servers and workstations. Please note that the Microsoft Trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available, and the way you execute your package is setup to use them, please see the help topic 64-bit Considerations for Integration Services for more details. Downloads Trace Sources for SQL Server 2005 -- Trace Sources for SQL Server 2008 Version History SQL Server 2008 Version 2.0.0.382 - SQL Sever 2008 public release. (9 Apr 2009) SQL Server 2005 Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008) -- Screenshots
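
    If the only goal is the "longest running queries" use case and a SQL Server connection is acceptable after all, a plain T-SQL alternative (the file path below is a placeholder) is to read the trace directly with fn_trace_gettable:

      -- Duration is reported in microseconds on SQL Server 2005 and later.
      SELECT TOP 20
             TextData,
             Duration / 1000 AS DurationMs,
             CPU,
             Reads,
             Writes,
             StartTime
      FROM   fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT)
      WHERE  TextData IS NOT NULL
      ORDER BY Duration DESC;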

    Read the article

  • SQL analytical mash-ups deliver real-time WOW! for big data

    - by KLaker
    One of the overlooked capabilities of SQL as an analysis engine, because we all just take it for granted, is that you can mix and match analytical features to create some amazing mash-ups. As we move into the exciting world of big data these mash-ups can really deliver those "wow, I never knew that" moments. While Java is an incredibly flexible and powerful framework for managing big data there are some significant challenges in using Java and MapReduce to drive your analysis to create these "wow" discoveries. One of these "wow" moments was demonstrated at this year's OpenWorld during Andy Mendelsohn's general keynote session.  Here is the scenario - we are looking for fraudulent activities in our big data stream and in this case we identifying potentially fraudulent activities by looking for specific patterns. We using geospatial tagging of each transaction so we can create a real-time fraud-map for our business users. Where we start to move towards a "wow" moment is to extend this basic use of spatial and pattern matching, as shown in the above dashboard screen, to incorporate spatial analytics within the SQL pattern matching clause. This will allow us to compute the distance between transactions. Apologies for the quality of this screenshot….hopefully below you see where we have extended our SQL pattern matching clause to use location of each transaction and to calculate the distance between each transaction: This allows us to compare the time of the last transaction with the time of the current transaction and see if the distance between the two points is possible given the time frame. Obviously if I buy something in Florida from my favourite bike store (may be a new carbon saddle for my Trek) and then 5 minutes later the system sees my credit card details being used in Arizona there is high probability that this transaction in Arizona is actually fraudulent (I am fast on my Trek but not that fast!) and we can flag this up in real-time on our dashboard: In this post I have used the term "real-time" a couple of times and this is an important point and one of the key reasons why SQL really is the only language to use if you want to analyse  big data. One of the most important questions that comes up in every big data project is: how do we do analysis? Many enlightened customers are now realising that using Java-MapReduce to deliver analysis does not result in "wow" moments. These "wow" moments only come with SQL because it is offers a much richer environment, it is simpler to use and it is faster - which makes it possible to deliver real-time "Wow!". Below is a slide from Andy's session showing the results of a comparison of Java-MapReduce vs. SQL pattern matching to deliver our "wow" moment during our live demo.  You can watch our analytical mash-up "Wow" demo that compares the power of 12c SQL pattern matching + spatial analytics vs. Java-MapReduce  here: You can get more information about SQL Pattern Matching on our SQL Analytics home page on OTN, see here http://www.oracle.com/technetwork/database/bi-datawarehousing/sql-analytics-index-1984365.html.  You can get more information about our spatial analytics here: http://www.oracle.com/technetwork/database-options/spatialandgraph/overview/index.html If you would like to watch the full Database 12c OOW presentation see here: http://medianetwork.oracle.com/video/player/2686974264001
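
    Since the screenshot is hard to read, here is a hedged reconstruction of the kind of 12c statement being described (the table, column names and thresholds are invented, and it assumes txn_time is a TIMESTAMP and txn_location an SDO_GEOMETRY), combining MATCH_RECOGNIZE with a spatial distance check:

      SELECT *
      FROM   card_transactions
      MATCH_RECOGNIZE (
          PARTITION BY card_no
          ORDER BY     txn_time
          MEASURES a.txn_time AS first_txn,
                   b.txn_time AS second_txn
          ONE ROW PER MATCH
          PATTERN (a b)
          DEFINE b AS
              -- two uses of the same card only minutes apart...
              b.txn_time - a.txn_time < INTERVAL '10' MINUTE
              -- ...but hundreds of kilometres apart: flag as potentially fraudulent
              AND sdo_geom.sdo_distance(a.txn_location, b.txn_location, 0.005, 'unit=KM') > 500
      );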

    Read the article

  • Checksum Transformation

    The Checksum Transformation computes a hash value, the checksum, across one or more columns, returning the result in the Checksum output column. The transformation provides functionality similar to the T-SQL CHECKSUM function, but is encapsulated within SQL Server Integration Services, for use within the pipeline without code or a SQL Server connection. As featured in The Microsoft Data Warehouse Toolkit by Joy Mundy and Warren Thornthwaite from the Kimbal Group. Have a look at the book samples especially Sample package for custom SCD handling. All input columns are passed through the transformation unaltered, those selected are used to generate the checksum which is passed out through a single output column, Checksum. This does not restrict the number of columns available downstream from the transformation, as columns will always flow through a transformation. The Checksum output column is in addition to all existing columns within the pipeline buffer. The Checksum Transformation uses an algorithm based on the .Net framework GetHashCode method, it is not consistent with the T-SQL CHECKSUM() or BINARY_CHECKSUM() functions. The transformation does not support the following Integration Services data types, DT_NTEXT, DT_IMAGE and DT_BYTES. ChecksumAlgorithm Property There ChecksumAlgorithm property is defined with an enumeration. It was first added in v1.3.0, when the FrameworkChecksum was added. All previous algorithms are still supported for backward compatibility as ChecksumAlgorithm.Original (0). Original - Orginal checksum function, with known issues around column separators and null columns. This was deprecated in the first SQL Server 2005 RTM release. FrameworkChecksum - The hash function is based on the .NET Framework GetHash method for object types. This is based on the .NET Object.GetHashCode() method, which unfortunately differs between x86 and x64 systems. For that reason we now default to the CRC32 option. CRC32 - Using a standard 32-bit cyclic redundancy check (CRC), this provides a more open implementation. The component is provided as an MSI file, however to complete the installation, you will have to add the transformation to the Visual Studio toolbox by hand. This process has been described in detail in the related FAQ entry for How do I install a task or transform component?, just select Checksum from the SSIS Data Flow Items list in the Choose Toolbox Items window. Downloads The Checksum Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed. Checksum Transformation for SQL Server 2005 Checksum Transformation for SQL Server 2008 Checksum Transformation for SQL Server 2012 Version History SQL Server 2012 Version 3.0.0.27 – SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2010) SQL Server 2008 Version 2.0.0.27 – Fix for CRC-32 algorithm that inadvertently made it sort dependent. Fix for race condition which sometimes lead to the error Item has already been added. Key in dictionary: '79764919' . Fix for upgrade mappings between 2005 and 2008. (19 Oct 2010) Version 2.0.0.24 - SQL Server 2008 release. Introduces the new CRC-32 algorithm, which is consistent across x86 and x64.. The default algorithm is now CRC32. (29 Oct 2008) Version 2.0.0.6 - SQL Server 2008 pre-release. 
This version was released by mistake as part of the site migration, and had known issues. (20 Oct 2008) SQL Server 2005 Version 1.5.0.43 – Fix for CRC-32 algorithm that inadvertently made it sort dependent. Fix for race condition which sometimes lead to the error Item has already been added. Key in dictionary: '79764919' . (19 Oct 2010) Version 1.5.0.16 - Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (20 Oct 2008) Version 1.4.0.0 - Installer refresh only. (22 Dec 2007) Version 1.4.0.0 - Refresh for minor UI enhancements. (5 Mar 2006) Version 1.3.0.0 - SQL Server 2005 RTM. The checksum algorithm has changed to improve cardinality when calculating multiple column checksums. The original algorithm is still available for backward compatibility. Fixed custom UI bug with Output column name not persisting. (10 Nov 2005) Version 1.2.0.1 - SQL Server 2005 IDW 15 June CTP. A user interface is provided, as well as the ability to change the checksum output column name. (29 Aug 2005) Version 1.0.0 - Public Release (Beta). (30 Oct 2004) Screenshot
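
    For contrast, the T-SQL functions the component is compared against (and deliberately does not replicate) look like this; the table and column names are illustrative only:

      -- CHECKSUM honours the column collation (usually case-insensitive) while BINARY_CHECKSUM
      -- works on the raw bytes; neither matches the component's FrameworkChecksum or CRC32 output.
      SELECT CustomerID,
             CHECKSUM(FirstName, LastName, City)        AS RowChecksum,
             BINARY_CHECKSUM(FirstName, LastName, City) AS RowBinaryChecksum
      FROM   dbo.Customers;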

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1

    - by rajbk
    The Open Data Protocol, referred to as OData, is a new data-sharing standard that breaks down silos and fosters an interoperative ecosystem for data consumers (clients) and producers (services) that is far more powerful than currently possible. It enables more applications to make sense of a broader set of data, and helps every data service and client add value to the whole ecosystem. WCF Data Services (previously known as ADO.NET Data Services), then, was the first Microsoft technology to support the Open Data Protocol in Visual Studio 2008 SP1. It provides developers with client libraries for .NET, Silverlight, AJAX, PHP and Java. Microsoft now also supports OData in SQL Server 2008 R2, Windows Azure Storage, Excel 2010 (through PowerPivot), and SharePoint 2010. Many other other applications in the works. * This post walks you through how to create an OData feed, define a shape for the data and pre-filter the data using Visual Studio 2010, WCF Data Services and the Entity Framework. A sample project is attached at the bottom of Part 2 of this post. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2 Create the Web Application File –› New –› Project, Select “ASP.NET Empty Web Application” Add the Entity Data Model Right click on the Web Application in the Solution Explorer and select “Add New Item..” Select “ADO.NET Entity Data Model” under "Data”. Name the Model “Northwind” and click “Add”.   In the “Choose Model Contents”, select “Generate Model From Database” and click “Next”   Define a connection to your database containing the Northwind database in the next screen. We are going to expose the Products table through our OData feed. Select “Products” in the “Choose your Database Object” screen.   Click “Finish”. We are done creating our Entity Data Model. Save the Northwind.edmx file created. Add the WCF Data Service Right click on the Web Application in the Solution Explorer and select “Add New Item..” Select “WCF Data Service” from the list and call the service “DataService” (creative, huh?). Click “Add”.   Enable Access to the Data Service Open the DataService.svc.cs class. The class is well commented and instructs us on the next steps. public class DataService : DataService< /* TODO: put your data source class name here */ > { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc. // Examples: // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead); // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } Replace the comment that starts with “/* TODO:” with “NorthwindEntities” (the entity container name of the Model we created earlier).  WCF Data Services is initially locked down by default, FTW! No data is exposed without you explicitly setting it. You have explicitly specify which Entity sets you wish to expose and what rights are allowed by using the SetEntitySetAccessRule. The SetServiceOperationAccessRule on the other hand sets rules for a specified operation. Let us define an access rule to expose the Products Entity we created earlier. We use the EnititySetRights.AllRead since we want to give read only access. Our modified code is shown below. 
public class DataService : DataService<NorthwindEntities> { public static void InitializeService(DataServiceConfiguration config) { config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } We are done setting up our ODataFeed! Compile your project. Right click on DataService.svc and select “View in Browser” to see the OData feed. To view the feed in IE, you must make sure that "Feed Reading View" is turned off. You set this under Tools -› Internet Options -› Content tab.   If you navigate to “Products”, you should see the Products feed. Note also that URIs are case sensitive. ie. Products work but products doesn’t.   Filtering our data OData has a set of system query operations you can use to perform common operations against data exposed by the model. For example, to see only Products in CategoryID 2, we can use the following request: /DataService.svc/Products?$filter=CategoryID eq 2 At the time of this writing, supported operations are $orderby, $top, $skip, $filter, $expand, $format†, $select, $inlinecount. Pre-filtering our data using Query Interceptors The Product feed currently returns all Products. We want to change that so that it contains only Products that have not been discontinued. WCF introduces the concept of interceptors which allows us to inject custom validation/policy logic into the request/response pipeline of a WCF data service. We will use a QueryInterceptor to pre-filter the data so that it returns only Products that are not discontinued. To create a QueryInterceptor, write a method that returns an Expression<Func<T, bool>> and mark it with the QueryInterceptor attribute as shown below. [QueryInterceptor("Products")] public Expression<Func<Product, bool>> OnReadProducts() { return o => o.Discontinued == false; } Viewing the feed after compilation will only show products that have not been discontinued. We also confirm this by looking at the WHERE clause in the SQL generated by the entity framework. SELECT [Extent1].[ProductID] AS [ProductID], ... ... [Extent1].[Discontinued] AS [Discontinued] FROM [dbo].[Products] AS [Extent1] WHERE 0 = [Extent1].[Discontinued] Other examples of Query/Change interceptors can be seen here including an example to filter data based on the identity of the authenticated user. We are done pre-filtering our data. In the next part of this post, we will see how to shape our data. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2 Foot Notes * http://msdn.microsoft.com/en-us/data/aa937697.aspx † $format did not work for me. The way to get a Json response is to include the following in the  request header “Accept: application/json, text/javascript, */*” when making the request. This is easily done with most JavaScript libraries.

    Read the article

  • Big Data&rsquo;s Killer App&hellip;

    - by jean-pierre.dijcks
    Recently Keith spent  some time talking about the cloud on this blog and I will spare you my thoughts on the whole thing. What I do want to write down is something about the Big Data movement and what I think is the killer app for Big Data... Where is this coming from, ok, I confess... I spent 3 days in cloud land at the Cloud Connect conference in Santa Clara and it was quite a lot of fun. One of the nice things at Cloud Connect was that there was a track dedicated to Big Data, which prompted me to some extend to write this post. What is Big Data anyways? The most valuable point made in the Big Data track was that Big Data in itself is not very cool. Doing something with Big Data is what makes all of this cool and interesting to a business user! The other good insight I got was that a lot of people think Big Data means a single gigantic monolithic system holding gazillions of bytes or documents or log files. Well turns out that most people in the Big Data track are talking about a lot of collections of smaller data sets. So rather than thinking "big = monolithic" you should be thinking "big = many data sets". This is more than just theoretical, it is actually relevant when thinking about big data and how to process it. It is important because it means that the platform that stores data will most likely consist out of multiple solutions. You may be storing logs on something like HDFS, you may store your customer information in Oracle and you may store distilled clickstream information in some distilled form in MySQL. The big question you will need to solve is not what lives where, but how to get it all together and get some value out of all that data. NoSQL and MapReduce Nope, sorry, this is not the killer app... and no I'm not saying this because my business card says Oracle and I'm therefore biased. I think language is important, but as with storage I think pragmatic is better. In other words, some questions can be answered with SQL very efficiently, others can be answered with PERL or TCL others with MR. History should teach us that anyone trying to solve a problem will use any and all tools around. For example, most data warehouses (Big Data 1.0?) get a lot of data in flat files. Everyone then runs a bunch of shell scripts to massage or verify those files and then shoves those files into the database. We've even built shell script support into external tables to allow for this. I think the Big Data projects will do the same. Some people will use MapReduce, although I would argue that things like Cascading are more interesting, some people will use Java. Some data is stored on HDFS making Cascading the way to go, some data is stored in Oracle and SQL does do a good job there. As with storage and with history, be pragmatic and use what fits and neither NoSQL nor MR will be the one and only. Also, a language, while important, does in itself not deliver business value. So while cool it is not a killer app... Vertical Behavioral Analytics This is the killer app! And you are now thinking: "what does that mean?" Let's decompose that heading. First of all, analytics. I would think you had guessed by now that this is really what I'm after, and of course you are right. But not just analytics, which has a very large scope and means many things to many people. I'm not just after Business Intelligence (analytics 1.0?) or data mining (analytics 2.0?) but I'm after something more interesting that you can only do after collecting large volumes of specific data. 
That all important data is about behavior. What do my customers do? More importantly why do they behave like that? If you can figure that out, you can tailor web sites, stores, products etc. to that behavior and figure out how to be successful. Today's behavior that is somewhat easily tracked is web site clicks, search patterns and all of those things that a web site or web server tracks. that is where the Big Data lives and where these patters are now emerging. Other examples however are emerging, and one of the examples used at the conference was about prediction churn for a telco based on the social network its members are a part of. That social network is not about LinkedIn or Facebook, but about who calls whom. I call you a lot, you switch provider, and I might/will switch too. And that just naturally brings me to the next word, vertical. Vertical in this context means per industry, e.g. communications or retail or government or any other vertical. The reason for being more specific than just behavioral analytics is that each industry has its own data sources, has its own quirky logic and has its own demands and priorities. Of course, the methods and some of the software will be common and some will have both retail and service industry analytics in place (your corner coffee store for example). But the gist of it all is that analytics that can predict customer behavior for a specific focused group of people in a specific industry is what makes Big Data interesting. Building a Vertical Behavioral Analysis System Well, that is going to be interesting. I have not seen much going on in that space and if I had to have some criticism on the cloud connect conference it would be the lack of concrete user cases on big data. The telco example, while a step into the vertical behavioral part is not really on big data. It used a sample of data from the customers' data warehouse. One thing I do think, and this is where I think parts of the NoSQL stuff come from, is that we will be doing this analysis where the data is. Over the past 10 years we at Oracle have called this in-database analytics. I guess we were (too) early? Now the entire market is going there including companies like SAS. In-place btw does not mean "no data movement at all", what it means that you will do this on data's permanent home. For SAS that is kind of the current problem. Most of the inputs live in a data warehouse. So why move it into SAS and back? That all worked with 1 TB data warehouses, but when we are looking at 100TB to 500 TB of distilled data... Comments? As it is still early days with these systems, I'm very interested in seeing reactions and thoughts to some of these thoughts...

    Read the article

  • SQL Server script commands to check if object exists and drop it

    - by deadlydog
Over the past couple of years I’ve been keeping track of common SQL Server script commands that I use so I don’t have to constantly Google them. Most of them check whether a SQL object exists before dropping it. I thought others might find it useful to have them all in one place, so here you go:

    --===============================
    -- Create a new table and add keys and constraints
    --===============================
    IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TableName' AND TABLE_SCHEMA = 'dbo')
    BEGIN
        CREATE TABLE [dbo].[TableName]
        (
            [ColumnName1] INT NOT NULL,    -- To have a field auto-increment add IDENTITY(1,1)
            [ColumnName2] INT NULL,
            [ColumnName3] VARCHAR(30) NOT NULL DEFAULT('')
        )

        -- Add the table's primary key
        ALTER TABLE [dbo].[TableName] ADD CONSTRAINT [PK_TableName] PRIMARY KEY NONCLUSTERED
        (
            [ColumnName1],
            [ColumnName2]
        )

        -- Add a foreign key constraint
        ALTER TABLE [dbo].[TableName] WITH CHECK ADD CONSTRAINT [FK_Name] FOREIGN KEY
        (
            [ColumnName1],
            [ColumnName2]
        )
        REFERENCES [dbo].[Table2Name]
        (
            [OtherColumnName1],
            [OtherColumnName2]
        )

        -- Add indexes on columns that are often used for retrieval
        CREATE INDEX IN_ColumnNames ON [dbo].[TableName]
        (
            [ColumnName2],
            [ColumnName3]
        )

        -- Add a check constraint
        ALTER TABLE [dbo].[TableName] WITH CHECK ADD CONSTRAINT [CH_Name] CHECK (([ColumnName] >= 0.0000))
    END

    --===============================
    -- Add a new column to an existing table
    --===============================
    IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'dbo'
        AND TABLE_NAME = 'TableName' AND COLUMN_NAME = 'ColumnName')
    BEGIN
        ALTER TABLE [dbo].[TableName] ADD [ColumnName] INT NOT NULL DEFAULT(0)

        -- Add a description extended property to the column to specify what its purpose is.
        EXEC sys.sp_addextendedproperty @name = N'MS_Description',
            @value = N'Add column comments here, describing what this column is for.',
            @level0type = N'SCHEMA', @level0name = N'dbo', @level1type = N'TABLE',
            @level1name = N'TableName', @level2type = N'COLUMN',
            @level2name = N'ColumnName'
    END

    --===============================
    -- Drop a table
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TableName' AND TABLE_SCHEMA = 'dbo')
    BEGIN
        DROP TABLE [dbo].[TableName]
    END

    --===============================
    -- Drop a view
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.VIEWS WHERE TABLE_NAME = 'ViewName' AND TABLE_SCHEMA = 'dbo')
    BEGIN
        DROP VIEW [dbo].[ViewName]
    END

    --===============================
    -- Drop a column
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'dbo'
        AND TABLE_NAME = 'TableName' AND COLUMN_NAME = 'ColumnName')
    BEGIN
        -- If the column has an extended property, drop it first.
        IF EXISTS (SELECT * FROM sys.fn_listExtendedProperty(N'MS_Description', N'SCHEMA', N'dbo', N'TABLE',
            N'TableName', N'COLUMN', N'ColumnName'))
        BEGIN
            EXEC sys.sp_dropextendedproperty @name = N'MS_Description',
                @level0type = N'SCHEMA', @level0name = N'dbo', @level1type = N'TABLE',
                @level1name = N'TableName', @level2type = N'COLUMN',
                @level2name = N'ColumnName'
        END

        ALTER TABLE [dbo].[TableName] DROP COLUMN [ColumnName]
    END

    --===============================
    -- Drop Primary key constraint
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'PRIMARY KEY' AND TABLE_SCHEMA = 'dbo'
        AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'PK_Name')
    BEGIN
        ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [PK_Name]
    END

    --===============================
    -- Drop Foreign key constraint
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'FOREIGN KEY' AND TABLE_SCHEMA = 'dbo'
        AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'FK_Name')
    BEGIN
        ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [FK_Name]
    END

    --===============================
    -- Drop Unique key constraint
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'UNIQUE' AND TABLE_SCHEMA = 'dbo'
        AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'UNI_Name')
    BEGIN
        ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [UNI_Name]
    END

    --===============================
    -- Drop Check constraint
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'CHECK' AND TABLE_SCHEMA = 'dbo'
        AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'CH_Name')
    BEGIN
        ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [CH_Name]
    END

    --===============================
    -- Drop a column's Default value constraint
    --===============================
    DECLARE @ConstraintName VARCHAR(100)
    SET @ConstraintName = (SELECT TOP 1 s.name FROM sys.sysobjects s JOIN sys.syscolumns c ON s.parent_obj = c.id
        WHERE s.xtype = 'd' AND c.cdefault = s.id
        AND s.parent_obj = OBJECT_ID('TableName') AND c.name = 'ColumnName')

    IF @ConstraintName IS NOT NULL
    BEGIN
        EXEC ('ALTER TABLE [dbo].[TableName] DROP CONSTRAINT ' + @ConstraintName)
    END

    --===============================
    -- Example of how to drop a dynamically named Unique constraint
    --===============================
    DECLARE @ConstraintName VARCHAR(100)
    SET @ConstraintName = (SELECT TOP 1 CONSTRAINT_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
        WHERE CONSTRAINT_TYPE = 'UNIQUE' AND TABLE_SCHEMA = 'dbo'
        AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME LIKE 'FirstPartOfConstraintName%')

    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'UNIQUE' AND TABLE_SCHEMA = 'dbo'
        AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = @ConstraintName)
    BEGIN
        EXEC ('ALTER TABLE [dbo].[TableName] DROP CONSTRAINT ' + @ConstraintName)
    END

    --===============================
    -- Check for and drop a temp table
    --===============================
    IF OBJECT_ID('tempdb..#TableName') IS NOT NULL DROP TABLE #TableName

    --===============================
    -- Drop a stored procedure
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_TYPE = 'PROCEDURE' AND ROUTINE_SCHEMA = 'dbo'
        AND ROUTINE_NAME = 'StoredProcedureName')
    BEGIN
        DROP PROCEDURE [dbo].[StoredProcedureName]
    END

    --===============================
    -- Drop a UDF
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_TYPE = 'FUNCTION' AND ROUTINE_SCHEMA = 'dbo'
        AND ROUTINE_NAME = 'UDFName')
    BEGIN
        DROP FUNCTION [dbo].[UDFName]
    END

    --===============================
    -- Drop an Index
    --===============================
    -- Scoped to the table so an index with the same name on another table doesn't match.
    IF EXISTS (SELECT * FROM sys.indexes WHERE name = 'IndexName' AND object_id = OBJECT_ID('dbo.TableName'))
    BEGIN
        DROP INDEX [IndexName] ON [dbo].[TableName]
    END

    --===============================
    -- Drop a Schema
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = 'SchemaName')
    BEGIN
        EXEC ('DROP SCHEMA SchemaName')
    END
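For what it's worth, on SQL Server 2016 and later most of these existence checks can be collapsed into the built-in DROP ... IF EXISTS syntax; the INFORMATION_SCHEMA checks above remain the way to do it on earlier versions. A brief sketch, reusing the same placeholder object names:

    -- SQL Server 2016+ only: DROP ... IF EXISTS performs the existence check for you.
    DROP TABLE IF EXISTS [dbo].[TableName]
    DROP VIEW IF EXISTS [dbo].[ViewName]
    DROP PROCEDURE IF EXISTS [dbo].[StoredProcedureName]
    DROP FUNCTION IF EXISTS [dbo].[UDFName]
    DROP INDEX IF EXISTS [IndexName] ON [dbo].[TableName]
    DROP SCHEMA IF EXISTS SchemaName

    -- The same form works for columns and constraints on an existing table.
    ALTER TABLE [dbo].[TableName] DROP COLUMN IF EXISTS [ColumnName]
    ALTER TABLE [dbo].[TableName] DROP CONSTRAINT IF EXISTS [CH_Name]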

    Read the article
