Search Results

Search found 38336 results on 1534 pages for 'sql wait types'.

  • How do I use SQL Server 2008 with Visual Studio?

    - by acidzombie24
    It's a two-part question: how do I use SQL Server 2008, and how do I use it with Visual Studio? I started up a dummy project, and in Server Explorer I tried both Create New SQL Server Database and Add Connection, using my computer name (it came from a dropdown) as the server location. When I tried to create the database 'TestDB1' I got an error, and I don't understand why. It's a fresh install and I have restarted the computer a few times since then. I haven't messed with Visual Studio, the servers, or even the control options to disable anything that would have been automatic. So what's going on? -edit- My goals are: 1) create a database; 2) be able to see all the databases that exist on the server; 3) execute SQL queries in the IDE; 4) be able to browse tables. I don't need all of these, but as many as possible would be nice.
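
    As a sanity check that is independent of the Visual Studio tooling, the first two goals can be tried directly in T-SQL from any query window once a connection succeeds. This is a minimal sketch; TestDB1 is the name from the question:

        -- Goal 1: create a database
        CREATE DATABASE TestDB1;

        -- Goal 2: list all databases that exist on the server
        SELECT name FROM sys.databases;

    If these also fail, the problem is with the connection or permissions rather than with Visual Studio itself.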

  • SQL Server 2008: Allow Remote Connections?

    - by George
    I have SQL Server 2000 and 2008 installed on a Windows XP Pro box. I can connect to both DB instances locally. From another box, a Windows 7 box, I can connect to the SQL 2000 instance on the first box, but I cannot connect to the 2008 instance using the same SQL Server authentication credentials that worked locally. Allow Remote Connections is set to TRUE for both the 2000 and 2008 database instances. What else can I look for to be able to connect to the remote 2008 instance from the Windows 7 box? I am trying to connect using Management Studio 2008.
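
    Beyond the Allow Remote Connections flag, the usual suspects are the Windows Firewall on the XP box and, for a named 2008 instance, the SQL Server Browser service (a named instance typically listens on a dynamic TCP port, so the Browser service must be running or the port given explicitly as server,port). A quick hedged check, run on the server itself, shows whether and where the 2008 instance is listening; xp_readerrorlog is undocumented but widely used:

        -- Search the current SQL Server error log for 'listening' entries,
        -- which name the protocols and the TCP port the instance accepts
        EXEC sys.xp_readerrorlog 0, 1, N'listening';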

  • Windows Backup failed as another backup or recovery is in progress.

    - by remunda
    I have set up backup schedules on our server: a SQL Server 2008 backup at 1:00 AM, and a Windows Server 2008 R2 system backup at 4:00 AM. The SQL backup runs well, but the system backup sometimes ends with an error. The error is: http://technet.microsoft.com/en-us/library/dd364768%28WS.10%29.aspx I believe it is caused by SQL Server, because it unexpectedly runs a backup. This is from the SQL Server log (it appears for every database):
    Date 4.2.2010 4:00:21 Log SQL Server (Current - 29.1.2010 23:25:00) Source spid72 Message I/O is frozen on database master. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.
    and
    Date 4.2.2010 4:00:24 Log SQL Server (Current - 29.1.2010 23:25:00) Source Backup Message Database backed up. Database: master, creation date(time): 2010/01/29(23:25:32), pages dumped: 370, first LSN: 859:56:37, last LSN: 859:80:1, number of dump devices: 1, device information: (FILE=1, TYPE=VIRTUAL_DEVICE: {'{17B91D5C-9968-4D11-A7F1-1A31523D32F0}25'}). This is an informational message only. No user action is required.
    My questions are: Why does a SQL backup run together with the Windows backup? How can I resolve these errors? Thank you.
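
    The TYPE=VIRTUAL_DEVICE entry is the giveaway: Windows Server Backup uses the Volume Shadow Copy Service, and the SQL Server VSS writer reacts to the snapshot by freezing I/O and recording a snapshot backup of every database, so those log messages are expected. The 4:00 AM failure is then most likely two backup/VSS operations overlapping. A hedged way to see exactly what ran and when is to query the backup history in msdb (device_type 7 means a virtual device, i.e. a VSS snapshot):

        -- All backups recorded in the last week, newest first
        SELECT bs.database_name,
               bs.backup_start_date,
               bs.type,          -- D = full, I = differential, L = log
               bs.is_snapshot,   -- 1 for VSS snapshot backups
               bmf.device_type   -- 7 = virtual device (VSS)
        FROM msdb.dbo.backupset AS bs
        JOIN msdb.dbo.backupmediafamily AS bmf
          ON bmf.media_set_id = bs.media_set_id
        WHERE bs.backup_start_date >= DATEADD(DAY, -7, GETDATE())
        ORDER BY bs.backup_start_date DESC;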

  • Difference between adding MIME types in IIS via Websites vs Local Computer?

    - by Alex Key
    What is the difference between adding MIME types in these two different situations in IIS 6 Manager?
    1. Right-click the computer name (local computer) > Properties > MIME Types
    2. Right-click the "Web Sites" folder > Properties > HTTP Headers > MIME Types
    I'm guessing that perhaps option 1 adds MIME types for FTP also? However, if that were true I'd expect to be able to add MIME types specifically in the properties of FTP (and not just websites). Thanks for your help.

  • SQL Server 2005 database lost. How to recover all records? MDF/LDF size is the same as it should be

    - by Shantanu Gupta
    A few months back, I installed SQL Server 2005 on one of my client's machines. I gave him a backup option to take backups regularly, but he never took any backup. Today he called me saying, "I am not able to see any of my records." I visited my client's system and saw that none of the records were present in the tables; there was not even a single row in any of them. Then I checked whether he had any backup file, which turned out to be absent. I asked him what the possible cause could be; he said it might be due to a virus. After this I checked the sizes of the MDF and LDF files and found them to be what they should be: when I created his server the MDF/LDF files held about 2 MB of database, and now they are 83 MB and 193 MB (MDF/LDF respectively). This suggests to me that the data is still present in the files but is not being displayed. What could be the possible cause, and how can I restore all data back to my tables?
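
    Before attempting any recovery, it is worth confirming whether the rows are really gone or merely not displayed; file size proves nothing either way, because deleting rows does not shrink the MDF/LDF files. A minimal sketch (SQL Server 2005+) that reports approximate row counts straight from the storage engine metadata:

        -- Approximate row count per user table
        SELECT o.name AS table_name, SUM(p.rows) AS approx_rows
        FROM sys.partitions AS p
        JOIN sys.objects AS o ON o.object_id = p.object_id
        WHERE o.type = 'U'              -- user tables only
          AND p.index_id IN (0, 1)      -- heap or clustered index
        GROUP BY o.name
        ORDER BY o.name;

    If this also shows zero rows, the data was deleted or the tables were truncated, and without a backup the options narrow to log-reading or page-level salvage tools.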

  • Diagnosing Microsoft SQL Server error 9001: The log for the database is not available.

    - by Scott Mitchell
    Over the weekend a website I run stopped functioning, recording the following error in the Event Viewer each time a request is made to the website: Event ID: 9001 The log for database 'database name' is not available. Check the event log for related error messages. Resolve any errors and restart the database. The website is hosted on a dedicated server, so I am able to RDP into the server and poke around. The LDF file for the database exists in the C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA folder, but attempting to do any work with the database from Management Studio results in a dialog box reporting the same error - 9001: The log for database is not available... This is the first time I've received this error, and I've been hosting this site (and others) on this dedicated web server for over two years now. It is my understanding that this error indicates a corrupt log file. I was able to get the website back online by Detaching the database and then restoring a backup from a couple days ago, but my concern is that this error is indicative of a more sinister problem, namely a hard drive failure. I emailed support at the web hosting company and this was their reply: There doesn't appear to be any other indications of the cause in the Event Log, so it's possible that the log was corrupted. Currently the memory's resources is at 87%, which also may have an impact but is unlikely. Can the log just "become corrupted?" My question: What are the next steps I should take to diagnose this problem? How can I determine if this is, indeed, a hardware problem? And if it is, are there any options beyond replacing the disk? Thanks
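
    A hedged sketch of first steps from the SQL Server side, with database_name as a placeholder. Checking the state column separates a one-off log glitch from a database that is still unhealthy, and an integrity check of the restored copy will surface I/O-level damage; genuine disk trouble usually also shows up as 823/824 errors in the SQL Server error log and as disk errors in the Windows System event log:

        -- Is any database in a bad state (SUSPECT, RECOVERY_PENDING, ...)?
        SELECT name, state_desc FROM sys.databases;

        -- Full integrity check of the restored database
        DBCC CHECKDB (N'database_name') WITH NO_INFOMSGS, ALL_ERRORMSGS;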

  • Trying to connect simple VB6 ADO to SQL Server 2008

    - by Henry
    We have a VB6 app that, for current purposes, is very basic ADO: Dim a new ADODB.Recordset, set some basic properties like CursorLocation and LockType, set a connection string and a .Source like "Select * from CustomerMaster", and .Open - nothing fancy here! Yet, on a new SBS installation with SQL Server 2008 across two servers (one for apps, the other dedicated to SQL!), it dies/hangs/crashes if you try to run such a query from anything but the SQL Server box. Initially, we were using the SQLOLEDB.1 provider, which would crash/hang the entire SQL Server after about 4 such queries (we built a simple 6-line app just for this purpose). Then we switched to the native SQL Server driver, which did allow us unlimited, happy queries - until you did the first change/update - THEN it would corrupt the SQL Server if you exited and tried to go back in. All this 'corruption' is happening from the 'app server' of the SBS pair, and I presume that the app server (also installed in tandem with the SQL server this week) has the latest MDAC, etc. And running it from a 'lowly XP workstation' is (obviously) no better. ANY ideas??? -Henry

  • "...then an error occurred during the login process" - Connection Error 233

    - by scott brunner
    We have SQL Server 2008 installed on 64-bit Windows Server 2003. When we try to connect to the local SQL Server using SQL Server Management Studio at the console, we get the error: A connection was successfully established with the server, but then an error occurred during the login process. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.) When we try TCP from the same local SSMS to the local server, we get the same error, but instead of the pipe message it's something like "connection forcibly closed". Now, here is the strange part - we CAN connect to this SQL Server from any other machine on the network using SSMS - AND - we CAN'T connect to ANY SQL Server from the problem server. So it seems the SQL Server instance is fine and accepting remote connections; however, SSMS on that machine will not connect to any SQL Server, even remotely. When we try an ADO.NET connection from C# remotely, we can connect; run that same code on the console of the trouble server and we get the same errors. How can this be solved?

  • Device CAL, User CAL or processor license needed for SQL 2008 (architecture explained inside)?

    - by nycgags
    So we have a number of servers in the Amazon cloud running SQL Server Standard Edition to aggregate data. For that purpose we are fine; the licensing is handled by our contract with Amazon, no problem there. For the beefier work, we want to install Enterprise Edition (EE) on our servers processing raw data so that we can take advantage of table partitioning. We currently have 3 servers aggregating data from about 40 node servers; all 43 of these servers are running Standard Edition, which is fine. We also have 4 servers running Standard processing the raw data, but I think we can get away with 2 (for redundancy) running Enterprise Edition. We have 2-3 DBAs that access these DW servers for maintenance (using the same Windows login via Remote Desktop). So visually:
    40 -- 3 -- [2] -- 2 -- 1
    nodes -- aggregators -- raw (which we want to run EE) -- calculators -- datawarehouse
    Nodes PUSH to aggregators, raws PULL from aggregators, calculators PULL from raw, calculators PUSH to the datawarehouse. I am specifying the push vs. pull in case that changes how the number of licenses is calculated.
    Q1) How many device (or user) CALs do we need?
    Q2) Do I need to speak with someone from MSFT to find out if it is OK to install in the Amazon cloud (Amazon said we need to verify it is OK in our license terms)?
    Q3) What happens if another device tries to access a server with the limited number of device CALs?
    Q4) Are the device CALs a simultaneous number of devices, or a total?
    Q5) Do device and user CALs cost the same, or is there a difference?
    Q6) Would we need to buy a processor license (we are hoping not to)?

  • Alternative method of viewing a database diagram in SQL Server to see what tables have gone missing?

    - by Triynko
    I have a database diagram for my database, but when I open it in SQL Server, I almost immediately get a message saying some permissions changed or tables in the diagram were dropped or renamed, and tables in the diagram vanish before I can even scroll over to see what or where they were. Basically, it's saying, "Hey, you know all that time you spent laying out tables in this diagram... half of them are going to vanish when you view it, and I'm not going to tell you which tables vanished or where they were in the diagram. You're just going to see a bunch of random empty spaces where tables used to be ;)" Ridiculous. So I thought that maybe if I looked in the dbo.sysdiagrams table, I could find some plain-text definition of the diagram and get a clue about the names of the tables that went missing (because their names were probably only changed slightly) or their coordinates in the diagram (because their spatial location would give me a clue as to what they were), so that I could re-add them. But I can't, because it's a binary definition. So, is there some other program I could use to view the existing database diagram that's not going to just drop and forget the missing tables without telling me what they were? Or is this information lost, at the mercy of a proprietary SSMS diagram format and viewer that refuses to cooperate with me?
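
    There is no supported way to decode the binary definition column, but a hedged workaround for the immediate question - which tables were renamed - is simply to list the current tables with their dates and scan for near-miss names, since the poster suspects the names only changed slightly:

        -- Alphabetical list of current tables with dates,
        -- to spot slightly renamed names and recent changes
        SELECT name, create_date, modify_date
        FROM sys.tables
        ORDER BY name;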

  • July, the 31 Days of SQL Server DMO’s – Day 20 (sys.dm_tran_locks)

    - by Tamarick Hill
    The sys.dm_tran_locks DMV is used to return active lock resources on your server. Locking is a mechanism used by SQL Server to protect the integrity of data when you have multiple users that may potentially access the same data at the same time. Let’s run a query against this DMV so we can analyze the results.
    SELECT * FROM sys.dm_tran_locks
    As we can see, there is a lot of lock information returned from this DMV. I will not go into detail about each of the columns returned, but I will touch on the ones that I feel are the most important. The first column in the output is the resource_type column, which tells you the type of lock a particular row represents. It could be a PAGE lock, RID, OBJECT, DATABASE, or several other lock types. The resource_database_id represents the ID of the database for a particular lock resource. The resource_lock_partition column represents the ID of a lock partition. When you have a table that is partitioned, locks can be escalated to the partition level before going to a table-level lock. The request_mode column gives us information about the type of lock that is being requested. In the output we see RangeS-S locks, which represent shared range locks, and IS locks, which represent intent shared locks. The request_status column displays whether the lock has been granted or whether the lock is waiting to be acquired. The request_session_id shows the session_id that is requesting the lock. This DMV is the best place to go when you need to identify the exact locks that are being held or pending for individual requests. You might need this information when you are troubleshooting severe blocking or deadlocking problems on your server. For more information on this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms190345.aspx Follow me on Twitter @PrimeTimeDBA
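
    Building on the post, a small hedged example of the typical troubleshooting query: restrict the output to lock requests that are waiting, and bring in the session's login so you can see who is blocked. Both DMVs below exist from SQL Server 2005 onwards:

        SELECT tl.resource_type,
               tl.resource_database_id,
               tl.request_mode,
               tl.request_status,
               tl.request_session_id,
               es.login_name
        FROM sys.dm_tran_locks AS tl
        JOIN sys.dm_exec_sessions AS es
          ON es.session_id = tl.request_session_id
        WHERE tl.request_status = 'WAIT'   -- only lock requests that are not granted
        ORDER BY tl.request_session_id;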

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone with an interesting situation. He has 15,000 customers, and he asks if he should have a database per customer for their data. Without a LOT more data it’s impossible to say, of course, but there are some general concepts to keep in mind. Whenever you’re segmenting data, it’s all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information?) and – very important – what are the security requirements? From the answers to these types of questions, you have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead – it needs a certain amount of memory for locking and so on. But it has a very clean boundary – everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a schema (a sketch of that option follows below). But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier – I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it’s a mix – perhaps this person could segment customers into larger regions or districts or products in a database, with multiple schemas for the customers within that database. Of course, if he needs to query across all customers, that becomes another requirement.
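
    For readers who want to see what the schema-per-customer option looks like, here is a minimal sketch; the customer schema and table names are hypothetical:

        -- One schema per customer inside a shared database
        CREATE SCHEMA Customer0001 AUTHORIZATION dbo;
        GO
        CREATE TABLE Customer0001.Orders
        (
            OrderID int PRIMARY KEY,
            Amount  money
        );

    The clean part is that permissions can be granted at the schema boundary; the challenge, as noted above, is doing this 15,000 times.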

  • Every time the user types , in my text box I want it to become ',' - or help me do it using a parameter

    - by MyHeadHurts
    I am using a VB.NET TextBox to supply part of the IN clause of the SQL statement in my program. I tried to use a parameter and it didn't work. Here is my code; this is the default value of my textbox (the user can edit the textbox, but they would need to type the ',' quoting themselves, and I would rather they just typed a plain comma):

        TextBox1.Text = "'Cruises','Caribbean and Mexico','CentralSouth America', 'Europe','Far East','France','Italy','London/UK','Middle East/Africa','South Pacific','Spain/Portugal','USA/Canada'"

    And my other code is:

        If RadioButtonList1.SelectedValue = "Sales" And CheckBox1.Checked = False Then
            'saocmd1.CommandText = "SELECT dbo.B605SaleAsOfAdvancedMaster.SDESCR, dbo.B605SaleAsOfAdvancedMaster.DYYYY, dbo.B605SaleAsOfAdvancedMaster.AsOFSales, dbo.B605SaleAsOfAdvancedMaster.ASOFPAX, dbo.B605SaleAsOfAdvancedMaster.YESales, dbo.B605SaleAsOfAdvancedMaster.YEPAX, dbo.B604SalesAsOfAdvanced.Sales AS CurrentSales, dbo.B604SalesAsOfAdvanced.PAX AS CurrentPAX FROM B604SalesAsOfAdvanced INNER JOIN B605SaleAsOfAdvancedMaster ON dbo.B605SaleAsOfAdvancedMaster.SDESCR = B604SalesAsOfAdvanced.SDESCR WHERE (B605SaleAsOfAdvancedMaster.DYYYY =" & DropDownList1.SelectedValue & ") AND (B604SalesAsOfAdvanced.DYYYY = (DatePart(year, GetDate()) +1)) order by B605SaleAsOfAdvancedMaster.SDESCR"
            saocmd1.CommandText = "SELECT dbo.B605SaleAsOfAdvancedMaster.SDESCR, dbo.B605SaleAsOfAdvancedMaster.DYYYY, dbo.B605SaleAsOfAdvancedMaster.AsOFSales, dbo.B605SaleAsOfAdvancedMaster.ASOFPAX, dbo.B605SaleAsOfAdvancedMaster.YESales, dbo.B605SaleAsOfAdvancedMaster.YEPAX, dbo.B604SalesAsOfAdvanced.Sales AS CurrentSales, dbo.B604SalesAsOfAdvanced.PAX AS CurrentPAX FROM B604SalesAsOfAdvanced INNER JOIN B605SaleAsOfAdvancedMaster ON dbo.B605SaleAsOfAdvancedMaster.SDESCR = B604SalesAsOfAdvanced.SDESCR WHERE (B605SaleAsOfAdvancedMaster.DYYYY =@Dyyyy) AND (B604SalesAsOfAdvanced.DYYYY = (DatePart(year, GetDate()) +1)) and dbo.B605SaleAsOfAdvancedMaster.SDESCR in ('Cruises','Caribbean and Mexico','CentralSouth America', 'Europe','Far East','France','Italy','London/UK','Middle East/Africa','South Pacific','Spain/Portugal','USA/Canada') order by B605SaleAsOfAdvancedMaster.SDESCR"
            Label2.Text = "Sales"
        ElseIf RadioButtonList1.SelectedValue = "NetSales" And CheckBox1.Checked = False Then
            saocmd1.CommandText = "SELECT dbo.B605SaleAsOfAdvancedMaster.SDESCR, dbo.B605SaleAsOfAdvancedMaster.DYYYY, (ISNULL(dbo.B605SaleAsOfAdvancedMaster.AsOFNET,0) + (ISNULL(dbo.B605SaleAsOfAdvancedMaster.AsOFOther,0))) as AsofSales, dbo.B605SaleAsOfAdvancedMaster.ASOFPAX, (ISNULL(dbo.B605SaleAsOfAdvancedMaster.YENET,0) + (ISNULL(dbo.B605SaleAsOfAdvancedMaster.YEOther,0))) as YESales, dbo.B605SaleAsOfAdvancedMaster.YEPAX, (ISNULL(dbo.B604SalesAsOfAdvanced.netSales,0) + (ISNULL(dbo.B604SalesAsOfAdvanced.OtherSales,0))) as CurrentSales, dbo.B604SalesAsOfAdvanced.PAX AS CurrentPAX FROM B604SalesAsOfAdvanced INNER JOIN B605SaleAsOfAdvancedMaster ON dbo.B605SaleAsOfAdvancedMaster.SDESCR = B604SalesAsOfAdvanced.SDESCR WHERE (B605SaleAsOfAdvancedMaster.DYYYY =@Dyyyy) AND (B604SalesAsOfAdvanced.DYYYY = (DatePart(year, GetDate()) +1)) and dbo.B605SaleAsOfAdvancedMaster.SDESCR in ('Cruises','Caribbean and Mexico','CentralSouth America', 'Europe','Far East','France','Italy','London/UK','Middle East/Africa','South Pacific','Spain/Portugal','USA/Canada') order by B605SaleAsOfAdvancedMaster.SDESCR"
            Label2.Text = "Net Sales"
        ElseIf RadioButtonList1.SelectedValue = "INSSales" And CheckBox1.Checked = False Then
            saocmd1.CommandText = "SELECT dbo.B605SaleAsOfAdvancedMaster.SDESCR, dbo.B605SaleAsOfAdvancedMaster.DYYYY, ISNULL(dbo.B605SaleAsOfAdvancedMaster.AsOFINS,0) as AsofSales, dbo.B605SaleAsOfAdvancedMaster.ASOFPAX, ISNULL(dbo.B605SaleAsOfAdvancedMaster.YEINS,0) as YESales, dbo.B605SaleAsOfAdvancedMaster.YEPAX, ISNULL(dbo.B604SalesAsOfAdvanced.INSSales,0) as CurrentSales, dbo.B604SalesAsOfAdvanced.PAX AS CurrentPAX FROM B604SalesAsOfAdvanced INNER JOIN B605SaleAsOfAdvancedMaster ON dbo.B605SaleAsOfAdvancedMaster.SDESCR = B604SalesAsOfAdvanced.SDESCR WHERE (B605SaleAsOfAdvancedMaster.DYYYY =@Dyyyy) AND (B604SalesAsOfAdvanced.DYYYY = (DatePart(year, GetDate()) +1)) and dbo.B605SaleAsOfAdvancedMaster.SDESCR in ('Cruises','Caribbean and Mexico','CentralSouth America', 'Europe','Far East','France','Italy','London/UK','Middle East/Africa','South Pacific','Spain/Portugal','USA/Canada') order by B605SaleAsOfAdvancedMaster.SDESCR"
            Label2.Text = "Insurance Sales"
        ElseIf RadioButtonList1.SelectedValue = "CXSales" And CheckBox1.Checked = False Then
            saocmd1.CommandText = "SELECT dbo.B605SaleAsOfAdvancedMaster.SDESCR, dbo.B605SaleAsOfAdvancedMaster.DYYYY, ISNULL(dbo.B605SaleAsOfAdvancedMaster.AsOFCX,0) as AsofSales, dbo.B605SaleAsOfAdvancedMaster.ASOFPAX, ISNULL(dbo.B605SaleAsOfAdvancedMaster.YECX,0) as YESales, dbo.B605SaleAsOfAdvancedMaster.YEPAX, ISNULL(dbo.B604SalesAsOfAdvanced.CXSales,0) as CurrentSales, dbo.B604SalesAsOfAdvanced.PAX AS CurrentPAX FROM B604SalesAsOfAdvanced INNER JOIN B605SaleAsOfAdvancedMaster ON dbo.B605SaleAsOfAdvancedMaster.SDESCR = B604SalesAsOfAdvanced.SDESCR WHERE (B605SaleAsOfAdvancedMaster.DYYYY =@Dyyyy) AND (B604SalesAsOfAdvanced.DYYYY = (DatePart(year, GetDate()) +1)) and dbo.B605SaleAsOfAdvancedMaster.SDESCR in ('Cruises','Caribbean and Mexico','CentralSouth America', 'Europe','Far East','France','Italy','London/UK','Middle East/Africa','South Pacific','Spain/Portugal','USA/Canada') order by B605SaleAsOfAdvancedMaster.SDESCR"
            Label2.Text = "Canceled Sales"
        ElseIf RadioButtonList1.SelectedValue = "Sales" And CheckBox1.Checked = True Then
            saocmd1.CommandText = "SELECT dbo.B605SaleAsOfAdvancedMaster.SDESCR, dbo.B605SaleAsOfAdvancedMaster.DYYYY, dbo.B605SaleAsOfAdvancedMaster.AsOFSales, dbo.B605SaleAsOfAdvancedMaster.ASOFPAX, dbo.B605SaleAsOfAdvancedMaster.YESales, dbo.B605SaleAsOfAdvancedMaster.YEPAX, dbo.B604SalesAsOfAdvanced.Sales AS CurrentSales, dbo.B604SalesAsOfAdvanced.PAX AS CurrentPAX FROM B604SalesAsOfAdvanced INNER JOIN B605SaleAsOfAdvancedMaster ON dbo.B605SaleAsOfAdvancedMaster.SDESCR = B604SalesAsOfAdvanced.SDESCR WHERE (B605SaleAsOfAdvancedMaster.DYYYY =@Dyyyy) AND (B604SalesAsOfAdvanced.DYYYY = (DatePart(year, GetDate()) +1)) and dbo.B605SaleAsOfAdvancedMaster.SDESCR in (" & TextBox1.Text & ") order by B605SaleAsOfAdvancedMaster.SDESCR"
            Label2.Text = "Sales"
        ElseIf RadioButtonList1.SelectedValue = "NetSales" And CheckBox1.Checked = True Then
            saocmd1.CommandText = "SELECT dbo.B605SaleAsOfAdvancedMaster.SDESCR, dbo.B605SaleAsOfAdvancedMaster.DYYYY, (ISNULL(dbo.B605SaleAsOfAdvancedMaster.AsOFNET,0) + (ISNULL(dbo.B605SaleAsOfAdvancedMaster.AsOFOther,0))) as AsofSales, dbo.B605SaleAsOfAdvancedMaster.ASOFPAX, (ISNULL(dbo.B605SaleAsOfAdvancedMaster.YENET,0) + (ISNULL(dbo.B605SaleAsOfAdvancedMaster.YEOther,0))) as YESales, dbo.B605SaleAsOfAdvancedMaster.YEPAX, (ISNULL(dbo.B604SalesAsOfAdvanced.netSales,0) + (ISNULL(dbo.B604SalesAsOfAdvanced.OtherSales,0))) as CurrentSales, dbo.B604SalesAsOfAdvanced.PAX AS CurrentPAX FROM B604SalesAsOfAdvanced INNER JOIN B605SaleAsOfAdvancedMaster ON dbo.B605SaleAsOfAdvancedMaster.SDESCR = B604SalesAsOfAdvanced.SDESCR WHERE (B605SaleAsOfAdvancedMaster.DYYYY =@Dyyyy) AND (B604SalesAsOfAdvanced.DYYYY = (DatePart(year, GetDate()) +1)) and dbo.B605SaleAsOfAdvancedMaster.SDESCR in (" & TextBox1.Text & ") order by B605SaleAsOfAdvancedMaster.SDESCR"
            Label2.Text = "Net Sales"
        ElseIf RadioButtonList1.SelectedValue = "INSSales" And CheckBox1.Checked = True Then
            saocmd1.CommandText = "SELECT dbo.B605SaleAsOfAdvancedMaster.SDESCR, dbo.B605SaleAsOfAdvancedMaster.DYYYY, ISNULL(dbo.B605SaleAsOfAdvancedMaster.AsOFINS,0) as AsofSales, dbo.B605SaleAsOfAdvancedMaster.ASOFPAX, ISNULL(dbo.B605SaleAsOfAdvancedMaster.YEINS,0) as YESales, dbo.B605SaleAsOfAdvancedMaster.YEPAX, ISNULL(dbo.B604SalesAsOfAdvanced.INSSales,0) as CurrentSales, dbo.B604SalesAsOfAdvanced.PAX AS CurrentPAX FROM B604SalesAsOfAdvanced INNER JOIN B605SaleAsOfAdvancedMaster ON dbo.B605SaleAsOfAdvancedMaster.SDESCR = B604SalesAsOfAdvanced.SDESCR WHERE (B605SaleAsOfAdvancedMaster.DYYYY =@Dyyyy) AND (B604SalesAsOfAdvanced.DYYYY = (DatePart(year, GetDate()) +1)) and dbo.B605SaleAsOfAdvancedMaster.SDESCR in (" & TextBox1.Text & ") order by B605SaleAsOfAdvancedMaster.SDESCR"
            Label2.Text = "Insurance Sales"
        ElseIf RadioButtonList1.SelectedValue = "CXSales" And CheckBox1.Checked = True Then
            saocmd1.CommandText = "SELECT dbo.B605SaleAsOfAdvancedMaster.SDESCR, dbo.B605SaleAsOfAdvancedMaster.DYYYY, ISNULL(dbo.B605SaleAsOfAdvancedMaster.AsOFCX,0) as AsofSales, dbo.B605SaleAsOfAdvancedMaster.ASOFPAX, ISNULL(dbo.B605SaleAsOfAdvancedMaster.YECX,0) as YESales, dbo.B605SaleAsOfAdvancedMaster.YEPAX, ISNULL(dbo.B604SalesAsOfAdvanced.CXSales,0) as CurrentSales, dbo.B604SalesAsOfAdvanced.PAX AS CurrentPAX FROM B604SalesAsOfAdvanced INNER JOIN B605SaleAsOfAdvancedMaster ON dbo.B605SaleAsOfAdvancedMaster.SDESCR = B604SalesAsOfAdvanced.SDESCR WHERE (B605SaleAsOfAdvancedMaster.DYYYY =@Dyyyy) AND (B604SalesAsOfAdvanced.DYYYY = (DatePart(year, GetDate()) +1)) and dbo.B605SaleAsOfAdvancedMaster.SDESCR in (" & TextBox1.Text & ") order by B605SaleAsOfAdvancedMaster.SDESCR"
            Label2.Text = "Canceled Sales"
        End If

    Basically what is happening is this: if a certain radio button is selected and the user didn't check the checkbox, the default regions are included, and they are hard-coded because the query runs much faster that way. If the user did check the checkbox, then the textbox where they type the specific regions shows up, and the code runs the query that includes dbo.B605SaleAsOfAdvancedMaster.SDESCR in (" & TextBox1.Text & "). If you can somehow do this using parameters and not with TextBox1.Text concatenated into the query, it will run much faster for me (see the parameter-based sketch below). Thanks for your help.
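
    One hedged way to get rid of the string concatenation entirely is to pass the whole region list as a single parameter and split it inside T-SQL, so the user types plain commas and no quoting is needed. The function name dbo.SplitList and the parameter @Regions are illustrative, not part of the original code (on SQL Server 2016 and later the built-in STRING_SPLIT does the same job):

        -- Splits a delimited list into a table of trimmed items
        CREATE FUNCTION dbo.SplitList (@list nvarchar(max), @delim nchar(1))
        RETURNS @items TABLE (item nvarchar(200))
        AS
        BEGIN
            DECLARE @pos int;
            SET @pos = CHARINDEX(@delim, @list);
            WHILE @pos > 0
            BEGIN
                INSERT @items VALUES (LTRIM(RTRIM(LEFT(@list, @pos - 1))));
                SET @list = SUBSTRING(@list, @pos + 1, LEN(@list));
                SET @pos = CHARINDEX(@delim, @list);
            END;
            INSERT @items VALUES (LTRIM(RTRIM(@list)));
            RETURN;
        END;

    The WHERE clause then becomes parameter-friendly, with @Regions bound from TextBox1.Text via saocmd1.Parameters:

        ... AND dbo.B605SaleAsOfAdvancedMaster.SDESCR IN
                (SELECT item FROM dbo.SplitList(@Regions, N','))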

  • Interview with Geoff Bones, developer on SQL Storage Compress

    - by red(at)work
    How did you come to be working at Red Gate?
    I've been working at Red Gate for nine months; before that I had been at a multinational engineering company. A number of my colleagues had left to work at Red Gate and spoke very highly of it, but I was happy in my role and thought, 'It can't be that great there, surely? They'll be back!' Then one day I visited to catch up with them over lunch in the Red Gate canteen. I was so impressed with what I found there that, three days later, I'd applied for a role as a developer.
    And how did you get into software development?
    My first job out of university was working as a systems programmer on IBM mainframes. This was quite a while ago: there was a lot of assembler and loading programs from tape drives and that kind of stuff. I learned a lot about how computers work, and this stood me in good stead when I moved over to development in the '90s.
    What's the best thing about working as a developer at Red Gate?
    Where should I start? One of the great things as a developer at Red Gate is the useful feedback and close contact we have with the people who use our products, either directly at trade shows and other events or through information coming through the product managers. The company's whole ethos is built around assisting the user, and this is in big contrast to my previous development roles. We aim to produce tools that people really want to use, that they enjoy using, and, as a developer, this is a great thing to aim for and a great feeling when we get it right. At Red Gate we also try to cut out the things that distract and stop us doing our jobs. As a developer, this means that I can focus on the code and the product I'm working on, knowing that others are doing a first-class job of making sure that the builds are running smoothly and that I'm getting great feedback from the testers. We keep our process light and effective, as we want to produce great software more than we want to produce great audit trails.
    Tell us a bit about the products you are currently working on.
    You mean HyperBac? First let me explain a bit about what HyperBac is. At heart it's a compression and encryption technology, but with a few added features that open up a wealth of really exciting possibilities. Right now we have the HyperBac technology in just three products: SQL HyperBac, SQL Virtual Restore and SQL Storage Compress, but we're only starting to develop what it can do. My personal favourite is SQL Virtual Restore; for example, I love the way you can use it to run independent test databases that are all backed by a single compressed backup. I don't think the market yet realises the kind of things you can do once you are using these products. On the other hand, the benefits of SQL Storage Compress are straightforward: run your databases but use only 20% of the disk space. Databases are getting larger and larger, and, as they do, so does your ROI.
    What's a typical day for you?
    My days are pretty varied. We have our daily team stand-up meeting, and then sometimes I will work alone on a current issue, or I'll be pair programming with one of my colleagues. From time to time we give half a day up to future planning with the team, when we look at the long and short term aims for the product and work out the development priorities. I also get to go to conferences and events, which is unusual for a development role and gives me the chance to meet and talk to our customers directly.
Have you noticed anything different about developing tools for DBAs rather than other IT kinds of user? It seems to me that DBAs are quite independent minded; they know exactly what the problem they are facing is, and often have a solution in mind before they begin to look for what's on the market. This means that they're likely to cherry-pick tools from a range of vendors, picking the ones that are the best fit for them and that disrupt their environments the least. When I've met with DBAs, I've often been very impressed at their ability to summarise their set up, the issues, the obstacles they face when implementing a tool and their plans for their environment. It's easier to develop products for this audience as they give such a detailed overview of their needs, and I feel I understand their problems.

  • Haskell Linear Algebra Matrix Library for Arbitrary Element Types

    - by Johannes Weiß
    I'm looking for a Haskell linear algebra library that has the following features:
    - Matrix multiplication
    - Matrix addition
    - Matrix transposition
    - Rank calculation
    - Matrix inversion is a plus
    and has the following properties:
    - Arbitrary element (scalar) types (in particular element types that are not Storable instances). My elements are instances of Num; additionally, the multiplicative inverse can be calculated. The elements mathematically form a finite field (GF(2^256)). That should be enough to implement the features mentioned above.
    - Arbitrary matrix sizes (I'll probably need something like 100x100, but the matrix sizes will depend on the user's input, so it should not be limited by anything else but the memory or the computational power available).
    - As fast as possible, but I'm aware that a library for arbitrary elements will probably not perform like a C/Fortran library that does the work (interfaced via FFI), because of the indirection of arbitrary (non-Int, non-Double or similar) types. At least one pointer gets dereferenced when an element is touched.
    - Written in Haskell (this is not a real requirement for me, but since my elements are not Storable instances, the library has to be written in Haskell).
    I already tried very hard and evaluated everything that looked promising (most of the libraries on Hackage directly state that they won't work for me). In particular I wrote test code using:
    - hmatrix, which assumes Storable elements
    - Vec, but the documentation states: "Low Dimension: Although the dimensionality is limited only by what GHC will handle, the library is meant for 2, 3 and 4 dimensions. For general linear algebra, check out the excellent hmatrix library and blas bindings."
    I looked into the code and the documentation of many more libraries, but nothing seems to suit my needs :-(.
    Update: Since there seems to be nothing, I started a project on GitHub which aims to develop such a library. The current state is very minimalistic, not optimized for speed at all, and only the most basic functions have tests and therefore should work. But should you be interested in using it or helping out developing it: contact me (you'll find my mail address on my web site) or send pull requests.

  • Lua Alien Module - Trouble using WriteProcessMemory function, unsure on types (uint32)

    - by jefferysanders
    require "alien" --the address im trying to edit in the Mahjong game on Win7 local SCOREREF = 0x0744D554 --this should give me full access to the process local ACCESS = 0x001F0FFF --this is my process ID for my open window of Mahjong local PID = 1136 --function to open proc local op = alien.Kernel32.OpenProcess op:types{ ret = "pointer", abi = "stdcall"; "int", "int", "int"} --function to write to proc mem local wm = alien.Kernel32.WriteProcessMemory wm:types{ ret = "long", abi = "stdcall"; "pointer", "pointer", "pointer", "long", "pointer" } local pRef = op(ACCESS, true, PID) local buf = alien.buffer("99") -- ptr,uint32,byte arr (no idea what to make this),int, ptr print( wm( pRef, SCOREREF, buf, 4, nil)) --prints 1 if success, 0 if failed So that is my code. I am not even sure if I have the types set correctly. I am completely lost and need some guidance. I really wish there was more online help/documentation for alien, it confuses my poor brain. What utterly baffles me is that it WriteProcessMemory will sometimes complete successfully (though it does nothing at all, to my knowledge) and will also sometimes fail to complete successfully. As I've stated, my brain hurts. Any help appreciated.

  • Silverlight WCF service consuming inherited types in datacontract

    - by RemotecUk
    Hi, I'm trying to consume a WCF service in Silverlight. What I have done is to create two separate assemblies for my data contracts:
    1. An assembly that contains all of my types marked with data contracts, built against .NET 3.5.
    2. A Silverlight assembly which links to the files in the first assembly.
    This means my .NET app can reference assembly 1 and my Silverlight app assembly 2. This works fine and I can communicate across the service. The problems occur when I try to transfer inherited classes. I have the following class structure:
    IFlight - an interface for all types of flights.
    BaseFlight : IFlight - a base flight implements IFlight.
    AdhocFlight : BaseFlight, IFlight - an adhoc flight inherits from BaseFlight and also implements IFlight.
    I can successfully transfer base flights across the service. However, I really need to be able to transfer objects of IFlight across the interface, as I want one operation contract that can transfer many types of flight...
        public IFlight GetFlightBooking()
        {
            AdhocFlight af = new AdhocFlight();
            return af;
        }
    ... should work, I think? However, I get the error: "The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error." Any ideas would be appreciated.

  • Combining multiple content types into a single search result with Drupal 6 and Views 2

    - by Chaulky
    Hi all, I need to create somewhat advanced search functionality for my Drupal 6 site. I have a one-to-many relationship between two content types and need to search them, respecting that relationship. To make things more clear: I have content types TypeX and TypeY. TypeY has a node reference CCK field that relates it to a single node of TypeX, so many nodes of TypeY reference the same node of TypeX. I want to use Views 2 to create a search page for these nodes. I want each search result to be a node of TypeX, along with all the nodes of TypeY that reference it. I know I could just theme the individual results and use a view to add the nodes of TypeY to the single node of TypeX... but that won't allow users to actually search TypeY; it would only search TypeX and merely display some nodes of TypeY along with it. Is there any way to get the search to account for content in nodes of both content types, but merge the TypeY results into the "parent" node of TypeX? In database terms, it seems like I need to do a join, then filter by the search terms. But I can't figure out how to do this in Views. Thanks for any help I can get!

  • Service reference not generating client types

    - by Cranialsurge
    I am trying to consume a WCF service in a class library by adding a service reference to it. In one of the class libraries it gets consumed properly and I can access the client types in order to generate a proxy off of them. However, in my second class library (or even in a console test app), when I add the same service reference, it only exposes the types that are involved in the contract operations and not the client type for me to generate a proxy against. E.g. the endpoint has 2 services exposed - ISvc1 and ISvc2. When I add a service reference to this endpoint in the first class library, I get ISvc1Client and ISvc2Client to generate proxies off of in order to use the operations exposed via those 2 contracts. In addition to these clients, the service reference also exposes the types involved in the operations, like Type 1, Type 2, etc. This is what I need. However, when I try to add a service reference to the same endpoint in another console application or class library, only Type 1, Type 2, etc. are exposed and not ISvc1Client and ISvc2Client, because of which I cannot generate a proxy to access the operations I need. I am unable to determine why the service reference gets properly generated in one class library but not in the other or in the test console app.

  • Fluently.Configure without explicitly entering types.

    - by user86431
    I'm trying to take my fluent mapping past the basic stuff that I've found here: http://wiki.fluentnhibernate.org/Fluent_configuration - where they explicitly add each type like this:
        ISessionFactory localFactory = Fluently.Configure()
            .Database( ObjectFactory.GetInstance<SybaseConfiguration>().GetSybaseDialect( "BLAH" ) )
            .Mappings( m =>
                {
                    m.HbmMappings.AddFromAssemblyOf<StudTestEO>();
                    m.FluentMappings.AddFromAssemblyOf<StudTestEO>().AddFromAssemblyOf<StudTestEOMap>();
                } )
            .BuildSessionFactory();
    I am trying to be able to take the assembly, get a list of its types and pass that in instead, kind of like this:
        string FullEoAssemblyFName = webAccessHdl.GetMapPath(EoAssemblyFName);
        string FullMapAssemblyFName = webAccessHdl.GetMapPath(MapAssemblyFName);
        string FullConfigFileName = webAccessHdl.GetMapPath("~/" + NHibernateConfigFileName);
        if (!File.Exists(FullEoAssemblyFName)) throw new Exception("GetFactoryByConfigFile, EoAssemblyFName does not exist>" + FullEoAssemblyFName + "<");
        if (!File.Exists(FullMapAssemblyFName)) throw new Exception("GetFactoryByConfigFile, MapAssemblyFName does not exist>" + FullMapAssemblyFName + "<");
        if (!File.Exists(FullConfigFileName)) throw new Exception("GetFactoryByConfigFile, ConfigFile does not exist>" + FullConfigFileName + "<");
        Configuration configuration = new Configuration();
        Assembly EoAssembly = Assembly.LoadFrom(webAccessHdl.GetMapPath(EoAssemblyFName));
        Assembly MapAssembly = Assembly.LoadFrom(webAccessHdl.GetMapPath(MapAssemblyFName));
        Type[] EoType = EoAssembly.GetTypes();
        Type[] MapType = MapAssembly.GetTypes();
        ISessionFactory localFactory = fluent.Mappings( m =>
            {
                // how do I add all the types from the type array here?
                m.FluentMappings.Add(MapAssembly).AddFromAssembly(EoAssembly);
            } )
            .BuildSessionFactory();
    The aim is to get it to load the types generically instead of explicitly. Has anyone done this, or seen any good links to articles I should look at? Thanks, E-

  • WPML is not detecting WordPress custom post types

    - by janoChen
    I learned how to make custom post types with the following tutorial: http://sixrevisions.com/wordpress/wordpress-custom-post-types-guide/ It basically consists of adding this code to functions.php:
        add_action( 'init', 'create_events' );
        function create_events() {
            $labels = array(
                'name' => _x('Events', 'post type general name'),
                'singular_name' => _x('Event', 'post type singular name'),
                'add_new' => _x('Add New', 'Event'),
                'add_new_item' => __('Add New Event'),
                'edit_item' => __('Edit Event'),
                'new_item' => __('New Event'),
                'view_item' => __('View Event'),
                'search_items' => __('Search Events'),
                'not_found' => __('No Events found'),
                'not_found_in_trash' => __('No Events found in Trash'),
                'parent_item_colon' => ''
            );
            $supports = array('title', 'editor', 'custom-fields', 'revisions', 'excerpt');
            register_post_type( 'event',
                array(
                    'labels' => $labels,
                    'public' => true,
                    'supports' => $supports
                )
            );
        }
    The WPML plugin helps you translate posts into different languages, but this option doesn't appear for the custom post types. Any suggestions?

  • List of Lists of different types

    - by themarshal
    One of the data structures in my current project requires that I store lists of various types (String, int, float, etc.). I need to be able to dynamically store any number of lists without knowing what types they'll be. I tried storing each list as an object, but I ran into problems trying to cast back into the appropriate type (it kept recognizing everything as a List<String>). For example:
        static List<object> myLists = new List<object>();

        public static void Main(string[] args)
        {
            // Create some lists...
            // Populate the lists...
            // Add the lists to myLists...
            for (int i = 0; i < myLists.Count; i++)
            {
                Console.WriteLine("{0} elements in list {1}", GetNumElements(i), i);
            }
        }

        public static int GetNumElements(int index)
        {
            object o = myLists[index];
            if (o is List<int>)
                return (o as List<int>).Count;
            if (o is List<String>) // <-- Always true!?
                return (o as List<String>).Count; // <-- Returning 0 for non-String Lists
            return -1;
        }
    Am I doing something incorrectly? Is there a better way to store a list of lists of various types, or is there a better way to determine if something is a list of a certain type?

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
    At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly, and finally to outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved.

    What is an OLAP data mart?
    In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically created and loaded with the subset of data, and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools.

    Why build OLAP data marts?
    Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs, and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever popular Microsoft Excel.

    Extension Attributes: The Challenge
    One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs, but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge, as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem with the second approach is that name/value pairs are useless to an OLAP cube, so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be part of a future blog entry.

    What is involved in building an OLAP data mart?
    There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order.
    The only permanent database is the metadata database (shown in orange), which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client-specific requirements and attributes, etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP data mart has been populated. In the diagram below, the OLAP data mart comprises the two blue components: the Data Mart, which is a relational database, and the OLAP Cube, which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together, depending on their reporting requirements. So, in broad terms, the steps required to fulfil a client order are as follows:

    Step 1: Prepare metadata
    - Create a set of database names unique to the client's order
    - Modify all package connection strings to be used by SSIS to point to the new databases and file locations

    Step 2: Create relational databases
    - Create the staging and data mart relational databases using dynamic SQL, and set the database recovery mode to SIMPLE as we do not need the overhead of logging anything
    - Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases

    Step 3: Load staging database
    - Use SSIS to load all data files into the staging database in a parallel operation
    - Load extension files containing name/value pairs; these will provide client-specific attributes in the OLAP cube

    Step 4: Load data mart relational database
    - Load the data from staging into the data mart relational database, again in parallel where possible
    - Allocate surrogate keys and use SSIS to perform surrogate key lookup during the load of fact tables

    Step 5: Load extension tables & attributes
    - Pivot the extension attributes from their native name/value pairs into proper relational tables
    - Add the extension attributes to the views used by the OLAP cube

    Step 6: Deploy & process OLAP cube
    - Deploy the OLAP database directly to the server using a C# script task in SSIS
    - Modify the connection string used by the OLAP cube to point to the data mart relational database
    - Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions
    - Remove any standard attributes that are not required
    - Process the OLAP cube

    Step 7: Back up and drop databases
    - Drop the staging database as it is no longer required
    - Back up the data mart relational and OLAP databases and ship these to the client's infrastructure
    - Drop the data mart relational and OLAP databases from the build server
    - Mark the order complete
    - Start processing the next order, ad infinitum

    So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly, such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.
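
    As a hedged illustration of Step 2, the order-specific databases can be created with dynamic SQL driven by the metadata database; the database name below is illustrative, not from the original post:

        -- Build one order-specific database and switch it to SIMPLE recovery
        DECLARE @StagingDb sysname;
        SET @StagingDb = N'Staging_Order_12345';   -- would come from the metadata database

        DECLARE @sql nvarchar(max);
        SET @sql = N'CREATE DATABASE ' + QUOTENAME(@StagingDb) + N';'
                 + N' ALTER DATABASE ' + QUOTENAME(@StagingDb) + N' SET RECOVERY SIMPLE;';
        EXEC sys.sp_executesql @sql;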

  • SQL University: What and why of database refactoring

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It’s a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine, and we’ll be talking about database testing and refactoring. In 3 posts we’ll cover: SQLU part 1 - What and why of database testing; SQLU part 2 - What and why of database refactoring; SQLU part 3 - Tools of the trade. This is the second part of the series, and in it we’ll take a look at what database refactoring is and why we do it.

    Why refactor a database
    To know why we refactor, we first have to know what refactoring actually is. Code refactoring is a process where we change module internals in a way that does not change that module’s input/output behavior. For successful refactoring there is one crucial thing we absolutely must have: tests. Automated unit tests are the only guarantee we have that we haven’t broken the input/output behavior before refactoring. If you haven’t, go back and read my post on the matter. Then start writing them. The next thing you need is a code module. Those are views, UDFs and stored procedures. With direct table access we can kiss fast and sweet refactoring goodbye. One more point in favor of having a database abstraction layer. And no, ORMs don’t fall into that category. But also know that refactoring is NOT adding new functionality to your code. Many have fallen into this trap. Don’t be one of them and resist the lure of the dark side. And it’s a strong lure. We developers in general love to add new stuff to our code, but hate fixing our own mistakes or changing existing code for no apparent reason. To be a good refactorer one needs discipline and focus. Now we know that refactoring is all about changing the inner workings of existing code. This can be due to performance optimizations, changing internal code workflows or some other reason. This is a typical black box scenario to the outside world. If we upgrade the car engine it still has to drive on the road (preferably faster) and not fly (no matter how cool that would be). Also be aware that white box tests will break when we refactor.

    What to refactor in a database
    Refactoring databases doesn’t happen that often, but when it does it can include a lot of stuff. Let us look at a few common cases.

    Adding or removing database schema objects
    Adding, removing or changing table columns in any way, adding constraints, keys, etc… All of these can be counted as internal changes not visible to the data consumer. But each of these carries a potential input/output behavior change. Dropping a column can result in views not working anymore or stored procedure logic crashing. Adding a unique constraint shows duplicated data that shouldn’t exist. Foreign keys break a truncate table command executed from an application that runs once a month. All these scenarios are very real and can happen. With the proper database abstraction layer fully covered with black box tests we can make sure something like that does not happen (hopefully at all).

    Changing physical structures
    Physical structures include heaps, indexes and partitions. We can pretty much add or remove those without changing the data returned by the database. But the performance can be affected. So here we use our performance tests. We do have them, right? Just by adding a single index we can achieve orders of magnitude performance improvement. Won’t that make users happy?
    But what if that index causes our write operations to crawl to a stop? Again, we have to test this. There are a lot of things to think about and have tests for. Without tests we can’t do successful refactoring!

    Fixing bad code
    We all have some bad code in our systems. We usually refer to that code as code smell, as it violates good coding practices. Examples of such code smells are SQL injection, use of SELECT *, scalar UDFs or cursors, etc… Each of those is a huge code smell and can result in major code changes. Take SELECT * for example. If we remove a column from a table, the client using that SELECT * statement won’t have a clue about it until it runs. Then it will gracefully crash and burn. Not to mention the widely unknown SELECT * view refresh problem that Thomas LaRock (@SQLRockstar on Twitter) and Colin Stasiuk (@BenchmarkIT on Twitter) talk about in detail. Go read about it; it’s informative. Refactoring this includes replacing the * with column names and most likely changes to the application using the database.

    Breaking apart huge stored procedures
    Have you ever seen a stored procedure that was 2000 lines long? I have. It’s not pretty. It hurts the eyes and sucks the will to live for the next 10 minutes. They are a maintenance nightmare and turn into things no one dares to touch. I’m willing to bet that 100% of the time they don’t have a single test on them. Large stored procedures (and functions) are a clear sign that they contain business logic. General opinion on good database coding practices says that business logic has no business in the database. That’s the application’s part. Refactoring such behemoths requires writing lots of edge case tests for the stored procedure’s input/output behavior and then starting to refactor it. First we split the logic inside into smaller parts like new stored procedures and UDFs. Those then get called from the master stored procedure. Once we’ve successfully modularized the database code, it’s best to transfer that logic into the applications consuming it. This leaves the stored procedure with only common data manipulation logic. Of course this isn’t always possible, so having a plethora of performance and behavior unit tests is absolutely necessary to confirm we’ve actually improved the codebase in some way.

    Refactoring is not a popular chore amongst developers or managers. The former don’t like fixing old code, the latter can’t see the financial benefit. Remember how we talked about being lousy at estimating future costs in the previous post? But there comes a time when it must be done. Hopefully I’ve given you some ideas on how to get started. In the last post of the series we’ll take a look at the tools to use and an example of testing and refactoring.
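
    The SELECT * view refresh problem mentioned above is easy to reproduce, and makes a nice candidate for a first refactoring test. A minimal sketch:

        CREATE TABLE dbo.Demo (A int);
        GO
        CREATE VIEW dbo.vDemo AS SELECT * FROM dbo.Demo;
        GO
        ALTER TABLE dbo.Demo ADD B int;
        GO
        SELECT * FROM dbo.vDemo;  -- still returns only column A: the view's metadata is stale
        EXEC sys.sp_refreshview N'dbo.vDemo';
        SELECT * FROM dbo.vDemo;  -- now returns A and B

    Replacing the * with an explicit column list removes the surprise for both the view and its callers.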
