Search Results

Search found 118 results on 5 pages for 'varbinary'.


  • How to Store and Retrieve Images Using SQL Server (Server Management Studio)

    - by Joe Majewski
    I am having difficulties when trying to insert files into a SQL Server database. I'll try to break this down as best as I can: What data type should I be using to store image files (jpeg/png/gif/etc)? Right now my table is using the image data type, but I am curious if varbinary would be a better option. How would I go about inserting the image into the database? Does Microsoft SQL Server Management Studio have any built in functions that allow insertions of files into tables? If so, how is that done? Also, how could this be done through the use of an HTML form with PHP handling the input data and placing it into the table? How would I fetch the image from the table and display it on the page? I understand how to SELECT the cell's contents, but how would I go about translating that into a picture. Would I have to have a header(Content type: image/jpeg)? I have no problem doing any of these things with MySQL, but the SQL Server environment is still new to me, and I am working on a project for my job that requires the use of stored procedures to grab various data. Any and all help is appreciated.
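
    For what it's worth, a minimal T-SQL sketch of the approach usually suggested here: prefer varbinary(max) over the deprecated image type, and load files server-side with OPENROWSET ... SINGLE_BLOB. The table, column, and file names below are made up for illustration.

        -- Illustrative table: varbinary(max) rather than the deprecated image type.
        CREATE TABLE dbo.Pictures
        (
            PictureID int IDENTITY(1,1) PRIMARY KEY,
            FileName  nvarchar(260)  NOT NULL,
            Content   varbinary(max) NOT NULL
        );

        -- Insert a file that is readable from the SQL Server machine.
        INSERT dbo.Pictures (FileName, Content)
        SELECT N'photo.jpg', BulkColumn
        FROM OPENROWSET(BULK N'C:\temp\photo.jpg', SINGLE_BLOB) AS img;

        -- Read it back; a PHP page would fetch Content and echo it after
        -- sending a header('Content-Type: image/jpeg').
        SELECT Content FROM dbo.Pictures WHERE FileName = N'photo.jpg';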

    Read the article

  • PHP is truncating MSSQL Blob data (4096b), even after setting INI values. Am I missing one?

    - by Dutchie432
    I am writing a PHP script that goes through a table and extracts the varbinary(max) blob data from each record into an external file. The code is working perfectly, except when a file is over 4096b - the data is truncated at exactly 4096. I've modified the values for mssql.textlimit, mssql.textsize, and odbc.defaultlrl without any success. Am I missing something here? <?php ini_set("mssql.textlimit" , "2147483647"); ini_set("mssql.textsize" , "2147483647"); ini_set("odbc.defaultlrl", "0"); include_once('common.php'); $id=$_REQUEST['i']; $q = odbc_exec($connect, "Select id,filename,documentBin from Projectdocuments where id = $id"); if (odbc_fetch_row($q)){ echo "Trying $filename ... "; $fileName="projectPhotos/docs/".odbc_result($q,"filename"); if (file_exists($fileName)){ unlink($fileName); } if($fh = fopen($fileName, "wb")) { $binData=odbc_result($q,"documentBin"); fwrite($fh, $binData) ; fclose($fh); $size = filesize($fileName); echo ("$fileName<br />Done ($size)<br><br>"); }else { echo ("$fileName Failed<br>"); } } ?> OUTPUT Trying ... projectPhotos/docs/file1.pdf Done (4096) Trying ... projectPhotos/docs/file2.zip Done (4096) Trying ... projectPhotos/docsv3.pdf Done (4096) etc..
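
    One thing worth checking, as an assumption on my part rather than something confirmed in the thread: a hard stop at exactly 4096 bytes is the classic symptom of the connection's TEXTSIZE / ODBC long-read defaults rather than the php.ini values, so issuing SET TEXTSIZE on the same connection before the SELECT (and looking at PHP's odbc_longreadlen()/odbc_binmode() on the result) is a common workaround.

        -- Hypothetical fix: raise TEXTSIZE for this connection before reading the blob.
        SET TEXTSIZE 2147483647;

        SELECT id, filename, documentBin
        FROM Projectdocuments
        WHERE id = 42;   -- example id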

    Read the article

  • Passing huge amounts of data as a hexadecimal (0x123AB...) parameter of a CLR stored procedure in s

    - by user193655
    I am posting this question as a follow-up to this question, since the thread is not receiving more answers. I'm trying to understand whether it is possible to pass a large amount of data to a CLR stored procedure as a hexadecimal literal ("0x5352532F..."). The idea is to send the data directly to the CLR stored procedure, rather than writing it to a temporary DB field and passing it from there as a varbinary(max) parameter. I have a triple question: 1) Is it possible, and if so, how? Let's say I want to pass a PDF file to the CLR stored procedure (not the path, the full bits that make up the file). Something like: exec MyCLRStoredProcs.dbo.insertfile @file_remote_path ='c:\temp\test_file.txt' , @file_contents=0x4D5A90000300000004000.... --(this long list is the file content) where insertfile is a stored proc that writes the binary data I pass as @file_contents to the server path given by @file_remote_path. 2) Is there a corruption risk in adopting this approach (or is it the same approach that SQL Server uses behind the scenes)? 3) How do I convert the content of a file into the "0x23423..." hexadecimal representation?
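
    On question 3, a hedged T-SQL sketch (it assumes the file is readable from the SQL Server machine and that you are on SQL Server 2008, whose CONVERT styles for binary data produce the literal text form): read the file with OPENROWSET ... SINGLE_BLOB and convert the varbinary with style 1.

        DECLARE @bin varbinary(max), @hex varchar(max);

        -- Pull the file into a varbinary(max) value.
        SELECT @bin = BulkColumn
        FROM OPENROWSET(BULK N'c:\temp\test_file.txt', SINGLE_BLOB) AS f;

        -- Style 1 keeps the leading 0x (SQL Server 2008 and later).
        SET @hex = CONVERT(varchar(max), @bin, 1);

        SELECT @hex;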

    Read the article

  • SQL Server 2008: FileStream Insertion Failure w/ .NET 3.5SP1

    - by James Alexander
    I've configured a db w/ a FileStream group and have a table w/ File type on it. When attempting to insert a streamed file and after I create the table row, my query to read the filepath out and the buffer returns a null file path. I can't seem to figure out why though. Here is the table creation script: /****** Object: Table [dbo].[JobInstanceFile] Script Date: 03/22/2010 18:05:36 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO SET ANSI_PADDING ON GO CREATE TABLE [dbo].[JobInstanceFile]( [JobInstanceFileId] [int] IDENTITY(1,1) NOT NULL, [JobInstanceId] [int] NOT NULL, [File] [varbinary](max) FILESTREAM NULL, [FileId] [uniqueidentifier] ROWGUIDCOL NOT NULL, [Created] [datetime] NOT NULL, CONSTRAINT [PK_JobInstanceFile] PRIMARY KEY CLUSTERED ( [JobInstanceFileId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] FILESTREAM_ON [JobInstanceFilesGroup], UNIQUE NONCLUSTERED ( [FileId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] FILESTREAM_ON [JobInstanceFilesGroup] GO SET ANSI_PADDING OFF GO ALTER TABLE [dbo].[JobInstanceFile] ADD DEFAULT (newid()) FOR [FileId] GO Here's my proc I call to create the row before streaming the file: /****** Object: StoredProcedure [dbo].[JobInstanceFileCreate] Script Date: 03/22/2010 18:06:23 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO create proc [dbo].[JobInstanceFileCreate] @JobInstanceId int, @Created datetime as insert into JobInstanceFile (JobInstanceId, FileId, Created) values (@JobInstanceId, newid(), @Created) select scope_identity() GO And lastly, here's the code I'm using: public int CreateJobInstanceFile(int jobInstanceId, string filePath) { using (var connection = new SqlConnection(ConfigurationManager.ConnectionStrings["ConsumerMarketingStoreFiles"].ConnectionString)) using (var fileStream = new FileStream(filePath, FileMode.Open)) { connection.Open(); var tran = connection.BeginTransaction(IsolationLevel.ReadCommitted); try { //create the JobInstanceFile instance var command = new SqlCommand("JobInstanceFileCreate", connection) { Transaction = tran }; command.CommandType = CommandType.StoredProcedure; command.Parameters.AddWithValue("@JobInstanceId", jobInstanceId); command.Parameters.AddWithValue("@Created", DateTime.Now); int jobInstanceFileId = Convert.ToInt32(command.ExecuteScalar()); //read out the filestream transaction context to stream the file for storage command.CommandText = "select [File].PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() from JobInstanceFile where JobInstanceFileId = @JobInstanceFileId"; command.CommandType = CommandType.Text; command.Parameters.AddWithValue("@JobInstanceFileId", jobInstanceFileId); using (SqlDataReader dr = command.ExecuteReader()) { dr.Read(); //get the file path we're writing out to string writePath = dr.GetString(0); using (var writeStream = new SqlFileStream(writePath, (byte[])dr.GetValue(1), FileAccess.ReadWrite)) { //copy from one stream to another byte[] bytes = new byte[65536]; int numBytes; while ((numBytes = fileStream.Read(bytes, 0, 65536)) > 0) writeStream.Write(bytes, 0, numBytes); } } tran.Commit(); return jobInstanceFileId; } catch (Exception e) { tran.Rollback(); throw e; } } } Can someone please let me know what I'm doing wrong.
In the code, the following expression is returning null for the file path and shouldn't be: //get the file path we're writing out to string writePath = dr.GetString(0); The server is different from the computer the code is running on, but the necessary shares appear to be in order and I have also run the following: EXEC sp_configure filestream_access_level, 2 Any help would be greatly appreciated. Thanks!
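
    For what it's worth, the usual suspect here (an assumption, since the data isn't shown) is that [File].PathName() returns NULL whenever the FILESTREAM column itself is NULL, and the create proc above never touches [File]. Initialising the column with a zero-length value gives the row a path that SqlFileStream can open:

        -- Sketch: give the FILESTREAM column an empty (non-NULL) value at insert time.
        alter proc [dbo].[JobInstanceFileCreate]
            @JobInstanceId int,
            @Created datetime
        as
        -- 0x is an empty varbinary, not NULL, so [File].PathName() has something to return.
        insert into JobInstanceFile (JobInstanceId, FileId, [File], Created)
        values (@JobInstanceId, newid(), 0x, @Created)

        select scope_identity()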

    Read the article

  • What is the best way to archive data in a relational database?

    - by GenericTypeTea
    I have a bit of an issue with a particular aspect of a program I'm working on. I need the ability to archive (fix) a table so that a change anywhere in the system will not affect the results it returns. This is the basic structure of what I need to fix: Recipe --> Recipe (as sub recipe) Recipe --> Ingredients So, if I fix a Recipe, I need to ensure all the sub recipes (including all the sub recipes' sub recipes and so forth) are fixed and all its ingredients are fixed. The problem is that the sub recipe and ingredients still need to be modifiable, as they are used by other recipes that are not fixed. I came up with a solution whereby I serialize (with protobuf-net) a master object that deals with the recipe and all the sub recipes and ingredients, and save the archive data to a table as follows: Archive{ ReferenceId, (i.e. RecipeId) ReferenceTypeId, (i.e. Recipe) ArchiveData varbinary(max) } Now, this works great and is almost perfect... however I totally forgot (I'd love to blame the agile development mentality, however this was just short-sighted) that this information needs to be reported on. I can't think of a way to inflate the serialized data back into my Recipe object and use it in a report. I'm using the standard SQL 2005 Reporting Services at the moment. Alternatively, I guess I could do the following: Duplicate every table and tag the word "Archive" onto the end of the table name. This would then give me an area of specific archive data... but ignoring my simplified example, there'd actually be about 15 tables duplicated. Add a nullable, non-foreign-key property called "CopiedFromId" to every table that contains fixed data and duplicate every record that the recipe (and all its sub recipes and all their sub recipes) touches. Create some sort of denormalised structure that could be restored at a later date to the original, unfixed recipe. Although I think this would be like option 1 and involve a lot of extra tables. Anyway, I'm at a total loss and do not particularly like any of the ideas. Can anyone please advise the best course of action? EDIT: Or 4) Create tables specific to what the report requires and populate them with the data when the user clicks the report button? This would cause about 4 extra tables for the report in question.
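
    For reference, a minimal sketch of the archive table described above (names and column types are guesses based on the description):

        CREATE TABLE dbo.Archive
        (
            ArchiveId       int IDENTITY(1,1) PRIMARY KEY,
            ReferenceId     int            NOT NULL,  -- e.g. the RecipeId being fixed
            ReferenceTypeId int            NOT NULL,  -- e.g. 1 = Recipe
            ArchiveData     varbinary(max) NOT NULL   -- protobuf-net serialised object graph
        );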

    Read the article

  • How to get compatibility between C# and SQL2k8 AES Encryption?

    - by Victor Rodrigues
    I have AES encryption being applied to two columns: one of these columns is stored in a SQL Server 2000 database; the other is stored in a SQL Server 2008 database. As the first column's database (2000) doesn't have native functionality for encryption / decryption, we've decided to do the cryptography logic at application level, with .NET classes, for both. But as the second column's database (2008) allows this kind of functionality, we'd like to perform that data migration using the database functions, since it is much larger than the SQL 2000 one and would take more than 50 hours if done at the application level. My problem started at this point: using the same key, I didn't get the same result when encrypting a value, nor the same result size. Below is the full logic on both sides. Of course I'm not showing the key, but everything else is the same: private byte[] RijndaelEncrypt(byte[] clearData, byte[] Key) { var memoryStream = new MemoryStream(); Rijndael algorithm = Rijndael.Create(); algorithm.Key = Key; algorithm.IV = InitializationVector; var criptoStream = new CryptoStream(memoryStream, algorithm.CreateEncryptor(), CryptoStreamMode.Write); criptoStream.Write(clearData, 0, clearData.Length); criptoStream.Close(); byte[] encryptedData = memoryStream.ToArray(); return encryptedData; } private byte[] RijndaelDecrypt(byte[] cipherData, byte[] Key) { var memoryStream = new MemoryStream(); Rijndael algorithm = Rijndael.Create(); algorithm.Key = Key; algorithm.IV = InitializationVector; var criptoStream = new CryptoStream(memoryStream, algorithm.CreateDecryptor(), CryptoStreamMode.Write); criptoStream.Write(cipherData, 0, cipherData.Length); criptoStream.Close(); byte[] decryptedData = memoryStream.ToArray(); return decryptedData; } This is the SQL code sample: open symmetric key columnKey decryption by password = N'{pwd!!i_ll_not_show_it_here}' declare @enc varchar(max) set @enc = dbo.VarBinarytoBase64(EncryptByKey(Key_GUID('columnKey'), 'blablabla')) select LEN(@enc), @enc This varbinaryToBase64 is a tested SQL function we use to convert varbinary to the same format we use to store strings in the .NET application. The result in C# is: eg0wgTeR3noWYgvdmpzTKijkdtTsdvnvKzh+uhyN3Lo= The same result in SQL2k8 is: AI0zI7D77EmqgTQrdgMBHAEAAACyACXb+P3HvctA0yBduAuwPS4Ah3AB4Dbdj2KBGC1Dk4b8GEbtXs5fINzvusp8FRBknF15Br2xI1CqP0Qb/M4w I just haven't figured out what I'm doing wrong yet. Do you have any ideas? EDIT: One point I think is crucial: I have one initialization vector in my C# code, 16 bytes. This IV is not set on the SQL symmetric key; could I do this? But even without setting the IV in C#, I get very different results, both in content and length.
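
    As far as I understand it, EncryptByKey output is not raw AES: it prepends a header containing the symmetric key's GUID plus a random IV, so the ciphertext is longer than a .NET Rijndael result and even differs between two calls on the same plaintext. Comparing the two ciphertexts will therefore never match; the round trip below (a sketch using the key from the question, password redacted) is a quick way to confirm the SQL side is otherwise healthy:

        OPEN SYMMETRIC KEY columnKey DECRYPTION BY PASSWORD = N'{pwd}';

        DECLARE @cipher varbinary(max) = EncryptByKey(Key_GUID('columnKey'), 'blablabla');

        -- DecryptByKey reads the key GUID and IV back out of the ciphertext header itself.
        SELECT CONVERT(varchar(100), DecryptByKey(@cipher)) AS roundtrip;  -- 'blablabla'

        CLOSE SYMMETRIC KEY columnKey;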

    Read the article

  • Problem with Executing Mysql stored procedure

    - by karthik
    The stored procedure builds without any problem. The purpose of this is to take backup of selected tables to a script file. when the SP returns a value {Insert statements}. I am using the below MySql stored procedure, created by SQLWAYS [Tool to convert MsSql to MySql]. The actual MsSql SP is from http://www.codeproject.com/KB/database/InsertGeneratorPack.aspx When i execute the SP in MySql Query Browser, It says "Unknown column 'tbl_users' in 'field list'" What would be the problem ? Because there was no error when i build-ed this Converted MySql SP. Help.. DELIMITER $$ DROP PROCEDURE IF EXISTS `demo`.`InsertGenerator` $$ CREATE DEFINER=`root`@`localhost` PROCEDURE `InsertGenerator`(v_tableName VARCHAR(100)) SWL_return: BEGIN -- SQLWAYS_EVAL# to retrieve column specific information -- SQLWAYS_EVAL# table DECLARE v_string NATIONAL VARCHAR(3000); -- SQLWAYS_EVAL# first half -- SQLWAYS_EVAL# tement DECLARE v_stringData NATIONAL VARCHAR(3000); -- SQLWAYS_EVAL# data -- SQLWAYS_EVAL# statement DECLARE v_dataType NATIONAL VARCHAR(1000); -- SQLWAYS_EVAL# -- SQLWAYS_EVAL# columns DECLARE v_colName NATIONAL VARCHAR(50); DECLARE NO_DATA INT DEFAULT 0; DECLARE cursCol CURSOR FOR SELECT column_name,data_type FROM `columns` WHERE table_name = v_tableName; DECLARE CONTINUE HANDLER FOR SQLEXCEPTION BEGIN SET NO_DATA = -2; END; DECLARE CONTINUE HANDLER FOR NOT FOUND SET NO_DATA = -1; OPEN cursCol; SET v_string = CONCAT('INSERT ',v_tableName,'('); SET v_stringData = ''; SET NO_DATA = 0; FETCH cursCol INTO v_colName,v_dataType; IF NO_DATA <> 0 then -- NOT SUPPORTED print CONCAT('Table ',@tableName, ' not found, processing skipped.') close cursCol; LEAVE SWL_return; end if; WHILE NO_DATA = 0 DO IF v_dataType in('varchar','char','nchar','nvarchar') then SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(',v_colName,'SQLWAYS_EVAL# ''+'); ELSE if v_dataType in('text','ntext') then -- SQLWAYS_EVAL# -- SQLWAYS_EVAL# else SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(',v_colName,'SQLWAYS_EVAL# 00)),'''')+'''''',''+'); ELSE IF v_dataType = 'money' then -- SQLWAYS_EVAL# doesn't get converted -- SQLWAYS_EVAL# implicitly SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# y,''''''+ isnull(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0.0000'')+''''''),''+'); ELSE IF v_dataType = 'datetime' then SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# time,''''''+ isnull(cast(',v_colName, 'SQLWAYS_EVAL# 0)),''0'')+''''''),''+'); ELSE IF v_dataType = 'image' then SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(convert(varbinary,',v_colName, 'SQLWAYS_EVAL# 6)),''0'')+'''''',''+'); ELSE SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0'')+'''''',''+'); end if; end if; end if; end if; end if; SET v_string = CONCAT(v_string,v_colName,','); SET NO_DATA = 0; FETCH cursCol INTO v_colName,v_dataType; END WHILE; END $$ DELIMITER ;
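
    A guess, since the failing call itself isn't shown: "Unknown column 'tbl_users' in 'field list'" is what MySQL reports when the table name is passed as a bare identifier rather than a string, so quoting the argument (and, if the cursor is meant to read the catalogue, qualifying information_schema.columns) is worth trying first.

        -- Pass the table name as a string literal, not an identifier.
        CALL InsertGenerator('tbl_users');

        -- And the cursor probably wants the catalogue view rather than a local `columns` table:
        -- SELECT column_name, data_type FROM information_schema.columns WHERE table_name = v_tableName;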

    Read the article

  • How to fine tune FluentNHibernate's auto mapper?

    - by Venemo
    Okay, so yesterday I managed to get the latest trunk builds of NHibernate and FluentNHibernate to work with my latest little project. (I'm working on a bug tracking application.) I created a nice data access layer using the Repository pattern. I decided that my entities are nothing special, and also that with the current maturity of ORMs, I don't want to hand-craft the database. So, I chose to use FluentNHibernate's auto mapping feature with NHibernate's "hbm2ddl.auto" property set to "create". It really works like a charm. I put the NHibernate configuration in my app domain's config file, set it up, and started playing with it. (For the time being, I created some unit tests only.) It created all tables in the database, and everything I need for it. It even mapped my many-to-many relationships correctly. However, there are a few small glitches: All of the columns created in the DB allow null. I understand that it can't predict which properties should allow null and which shouldn't, but at least I'd like to tell it that it should allow null only for those types for which null makes sense in .NET (eg. non-nullable value types shouldn't allow null). All of the nvarchar and varbinary columns it created, have a default length of 255. I would prefer to have them on max instead of that. Is there a way to tell the auto mapper about the two simple rules above? If the answer is no, will it work correctly if I modify the tables it created? (So, if I set some columns not to allow null, and change the allowed length for some other, will it correctly work with them?) EDIT: I managed to achieve the above by using Fluent NHibernate's convention API. Thanks to everyone who helped! However, there is one more thing: after checking out the convention API, I really would like my IDs to be calld "ID", not "Id", but it seems to me that the PrimaryKey.Name.Is(x => "ID") is not working at all. If I add it to the conventions collection and rewrite my entities' properties to "ID" instead of "Id", it throws an exception that there is no primary key mapped. Any thoughts on this?

    Read the article

  • How to return the value from MySql Stored Proc ??

    - by karthik
    I am using the below storedproc to generate the Insert statements of a specified table It is build-ed without any errors. Now i want to return the result set in "V_string" as output of the SP DELIMITER $$ DROP PROCEDURE IF EXISTS `demo`.`InsertGenerator` $$ CREATE DEFINER=`root`@`localhost` PROCEDURE `InsertGenerator`() SWL_return: BEGIN -- SQLWAYS_EVAL# to retrieve column specific information -- SQLWAYS_EVAL# table DECLARE v_string NATIONAL VARCHAR(3000); -- SQLWAYS_EVAL# first half -- SQLWAYS_EVAL# tement DECLARE v_stringData NATIONAL VARCHAR(3000); -- SQLWAYS_EVAL# data -- SQLWAYS_EVAL# statement DECLARE v_dataType NATIONAL VARCHAR(1000); -- SQLWAYS_EVAL# -- SQLWAYS_EVAL# columns DECLARE v_colName NATIONAL VARCHAR(50); DECLARE NO_DATA INT DEFAULT 0; DECLARE cursCol CURSOR FOR SELECT column_name,data_type FROM `columns` -- WHERE table_name = v_tableName; WHERE table_name = 'tbl_users'; DECLARE CONTINUE HANDLER FOR SQLEXCEPTION BEGIN SET NO_DATA = -2; END; DECLARE CONTINUE HANDLER FOR NOT FOUND SET NO_DATA = -1; OPEN cursCol; SET v_string = CONCAT('INSERT ',v_tableName,'('); SET v_stringData = ''; SET NO_DATA = 0; FETCH cursCol INTO v_colName,v_dataType; IF NO_DATA <> 0 then -- NOT SUPPORTED print CONCAT('Table ',@tableName, ' not found, processing skipped.') close cursCol; LEAVE SWL_return; end if; WHILE NO_DATA = 0 DO IF v_dataType in('varchar','char','nchar','nvarchar') then SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(',v_colName,'SQLWAYS_EVAL# ''+'); ELSE if v_dataType in('text','ntext') then -- SQLWAYS_EVAL# -- SQLWAYS_EVAL# else SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(',v_colName,'SQLWAYS_EVAL# 00)),'''')+'''''',''+'); ELSE IF v_dataType = 'money' then -- SQLWAYS_EVAL# doesn't get converted -- SQLWAYS_EVAL# implicitly SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# y,''''''+ isnull(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0.0000'')+''''''),''+'); ELSE IF v_dataType = 'datetime' then SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# time,''''''+ isnull(cast(',v_colName, 'SQLWAYS_EVAL# 0)),''0'')+''''''),''+'); ELSE IF v_dataType = 'image' then SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(convert(varbinary,',v_colName, 'SQLWAYS_EVAL# 6)),''0'')+'''''',''+'); ELSE SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0'')+'''''',''+'); end if; end if; end if; end if; end if; SET v_string = CONCAT(v_string,v_colName,','); SET NO_DATA = 0; FETCH cursCol INTO v_colName,v_dataType; END WHILE; END $$ DELIMITER ;
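
    One way to hand the generated text back is an OUT parameter that the caller reads afterwards. This is only a sketch (the procedure name and placeholder body are mine); note also that the body above still references v_tableName even though the parameter was dropped, so it would need to be reinstated or hard-coded.

        DELIMITER $$
        CREATE PROCEDURE `InsertGeneratorOut`(IN v_tableName VARCHAR(100), OUT v_result TEXT)
        BEGIN
            DECLARE v_string TEXT;
            -- ... build the INSERT statements into v_string exactly as in the body above ...
            SET v_string = CONCAT('INSERT ', v_tableName, ' (...)');  -- placeholder
            SET v_result = v_string;
        END $$
        DELIMITER ;

        -- Caller:
        CALL InsertGeneratorOut('tbl_users', @sql);
        SELECT @sql;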

    Read the article

  • mysql changing delimiter

    - by jimsmith
    I'm trying to add this function using php myadmin, first off I get on error line 5, which is apparently because you need to change the delimiter from ; to something else so i tried this DELIMITER | CREATE FUNCTION LEVENSHTEIN (s1 VARCHAR(255), s2 VARCHAR(255)) RETURNS INT DETERMINISTIC BEGIN DECLARE s1_len, s2_len, i, j, c, c_temp, cost INT; DECLARE s1_char CHAR; DECLARE cv0, cv1 VARBINARY(256); SET s1_len = CHAR_LENGTH(s1), s2_len = CHAR_LENGTH(s2), cv1 = 0x00, j = 1, i = 1, c = 0; IF s1 = s2 THEN RETURN 0; ELSEIF s1_len = 0 THEN RETURN s2_len; ELSEIF s2_len = 0 THEN RETURN s1_len; ELSE WHILE j <= s2_len DO SET cv1 = CONCAT(cv1, UNHEX(HEX(j))), j = j + 1; END WHILE; WHILE i <= s1_len DO SET s1_char = SUBSTRING(s1, i, 1), c = i, cv0 = UNHEX(HEX(i)), j = 1; WHILE j <= s2_len DO SET c = c + 1; IF s1_char = SUBSTRING(s2, j, 1) THEN SET cost = 0; ELSE SET cost = 1; END IF; SET c_temp = CONV(HEX(SUBSTRING(cv1, j, 1)), 16, 10) + cost; IF c > c_temp THEN SET c = c_temp; END IF; SET c_temp = CONV(HEX(SUBSTRING(cv1, j+1, 1)), 16, 10) + 1; IF c > c_temp THEN SET c = c_temp; END IF; SET cv0 = CONCAT(cv0, UNHEX(HEX(c))), j = j + 1; END WHILE; SET cv1 = cv0, i = i + 1; END WHILE; END IF; RETURN c; END DELIMITER ; But I get this error: #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'delimiter | Please help !?
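
    Two things commonly trip this up (assumptions, since the phpMyAdmin version isn't given): phpMyAdmin does not execute DELIMITER as a SQL statement, it has its own "Delimiter" box under the query window; and the function above is never terminated with the new delimiter before switching back. From the mysql command-line client the working pattern looks like this:

        DELIMITER |

        CREATE FUNCTION LEVENSHTEIN (s1 VARCHAR(255), s2 VARCHAR(255))
        RETURNS INT DETERMINISTIC
        BEGIN
            -- ... body exactly as posted above ...
            RETURN 0;   -- placeholder so this sketch compiles on its own
        END|

        DELIMITER ;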

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #035

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 Row Overflow Data Explanation  In SQL Server 2005 one table row can contain more than one varchar(8000) fields. One more thing, the exclusions has exclusions also the limit of each individual column max width of 8000 bytes does not apply to varchar(max), nvarchar(max), varbinary(max), text, image or xml data type columns. Comparison Index Fragmentation, Index De-Fragmentation, Index Rebuild – SQL SERVER 2000 and SQL SERVER 2005 An old but like a gold article. Talks about lots of concepts related to Index and the difference from earlier version to the newer version. I strongly suggest that everyone should read this article just to understand how SQL Server has moved forward with the technology. Improvements in TempDB SQL Server 2005 had come up with quite a lots of improvements and this blog post describes them and explains the same. If you ask me what is my the most favorite article from early career. I must point out to this article as when I wrote this one I personally have learned a lot of new things. Recompile All The Stored Procedure on Specific TableI prefer to recompile all the stored procedure on the table, which has faced mass insert or update. sp_recompiles marks stored procedures to recompile when they execute next time. This blog post explains the same with the help of a script.  2008 SQLAuthority Download – SQL Server Cheatsheet You can download and print this cheat sheet and use it for your personal reference. If you have any suggestions, please let me know and I will see if I can update this SQL Server cheat sheet. Difference Between DBMS and RDBMS What is the difference between DBMS and RDBMS? DBMS – Data Base Management System RDBMS – Relational Data Base Management System or Relational DBMS High Availability – Hot Add Memory Hot Add CPU and Hot Add Memory are extremely interesting features of the SQL Server, however, personally I have not witness them heavily used. These features also have few restriction as well. I blogged about them in detail. 2009 Delete Duplicate Rows I have demonstrated in this blog post how one can identify and delete duplicate rows. Interesting Observation of Logon Trigger On All Servers – Solution The question I put forth in my previous article was – In single login why the trigger fires multiple times; it should be fired only once. I received numerous answers in thread as well as in my MVP private news group. Now, let us discuss the answer for the same. The answer is – It happens because multiple SQL Server services are running as well as intellisense is turned on. Blog post demonstrates how we can do the same with the help of SQL scripts. Management Studio New Features I have selected my favorite 5 features and blogged about it. IntelliSense for Query Editing Multi Server Query Query Editor Regions Object Explorer Enhancements Activity Monitors Maximum Number of Index per Table One of the questions I asked in my user group was – What is the maximum number of Index per table? I received lots of answers to this question but only two answers are correct. Let us now take a look at them in this blog post. 
2010 Default Statistics on Column – Automatic Statistics on Column The truth is, Statistics can be in a table even though there is no Index in it. If you have the auto- create and/or auto-update Statistics feature turned on for SQL Server database, Statistics will be automatically created on the Column based on a few conditions. Please read my previously posted article, SQL SERVER – When are Statistics Updated – What triggers Statistics to Update, for the specific conditions when Statistics is updated. 2011 T-SQL Scripts to Find Maximum between Two Numbers In this blog post there are two different scripts listed which demonstrates way to find the maximum number between two numbers. I need your help, which one of the script do you think is the most accurate way to find maximum number? Find Details for Statistics of Whole Database – DMV – T-SQL Script I was recently asked is there a single script which can provide all the necessary details about statistics for any database. This question made me write following script. I was initially planning to use sp_helpstats command but I remembered that this is marked to be deprecated in future. 2012 Introduction to Function SIGN SIGN Function is very fundamental function. It will return the value 1, -1 or 0. If your value is negative it will return you negative -1 and if it is positive it will return you positive +1. Let us start with a simple small example. Template Browser – A Very Important and Useful Feature of SSMS Templates are like a quick cheat sheet or quick reference. Templates are available to create objects like databases, tables, views, indexes, stored procedures, triggers, statistics, and functions. Templates are also available for Analysis Services as well. The template scripts contain parameters to help you customize the code. You can Replace Template Parameters dialog box to insert values into the script. An invalid floating point operation occurred If you run any of the above functions they will give you an error related to invalid floating point. Honestly there is no workaround except passing the function appropriate values. SQRT of a negative number will give you result in real numbers which is not supported at this point of time as well LOG of a negative number is not possible (because logarithm is the inverse function of an exponential function and the exponential function is NEVER negative). Validating Spatial Object with IsValidDetailed Function SQL Server 2012 has introduced the new function IsValidDetailed(). This function has made my life very easy. In simple words, this function will check if the spatial object passed is valid or not. If it is valid it will give information that it is valid. If the spatial object is not valid it will return the answer that it is not valid and the reason for the same. This makes it very easy to debug the issue and make the necessary correction. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • I see no LOBs!

    - by Paul White
    Is it possible to see LOB (large object) logical reads from STATISTICS IO output on a table with no LOB columns? I was asked this question today by someone who had spent a good fraction of their afternoon trying to work out why this was occurring – even going so far as to re-run DBCC CHECKDB to see if any corruption had taken place.  The table in question wasn’t particularly pretty – it had grown somewhat organically over time, with new columns being added every so often as the need arose.  Nevertheless, it remained a simple structure with no LOB columns – no TEXT or IMAGE, no XML, no MAX types – nothing aside from ordinary INT, MONEY, VARCHAR, and DATETIME types.  To add to the air of mystery, not every query that ran against the table would report LOB logical reads – just sometimes – but when it did, the query often took much longer to execute. Ok, enough of the pre-amble.  I can’t reproduce the exact structure here, but the following script creates a table that will serve to demonstrate the effect: IF OBJECT_ID(N'dbo.Test', N'U') IS NOT NULL DROP TABLE dbo.Test GO CREATE TABLE dbo.Test ( row_id NUMERIC IDENTITY NOT NULL,   col01 NVARCHAR(450) NOT NULL, col02 NVARCHAR(450) NOT NULL, col03 NVARCHAR(450) NOT NULL, col04 NVARCHAR(450) NOT NULL, col05 NVARCHAR(450) NOT NULL, col06 NVARCHAR(450) NOT NULL, col07 NVARCHAR(450) NOT NULL, col08 NVARCHAR(450) NOT NULL, col09 NVARCHAR(450) NOT NULL, col10 NVARCHAR(450) NOT NULL, CONSTRAINT [PK dbo.Test row_id] PRIMARY KEY CLUSTERED (row_id) ) ; The next script loads the ten variable-length character columns with one-character strings in the first row, two-character strings in the second row, and so on down to the 450th row: WITH Numbers AS ( -- Generates numbers 1 - 450 inclusive SELECT TOP (450) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) INSERT dbo.Test WITH (TABLOCKX) SELECT REPLICATE(N'A', N.n), REPLICATE(N'B', N.n), REPLICATE(N'C', N.n), REPLICATE(N'D', N.n), REPLICATE(N'E', N.n), REPLICATE(N'F', N.n), REPLICATE(N'G', N.n), REPLICATE(N'H', N.n), REPLICATE(N'I', N.n), REPLICATE(N'J', N.n) FROM Numbers AS N ORDER BY N.n ASC ; Once those two scripts have run, the table contains 450 rows and 10 columns of data like this: Most of the time, when we query data from this table, we don’t see any LOB logical reads, for example: -- Find the maximum length of the data in -- column 5 for a range of rows SELECT result = MAX(DATALENGTH(T.col05)) FROM dbo.Test AS T WHERE row_id BETWEEN 50 AND 100 ; But with a different query… -- Read all the data in column 1 SELECT result = MAX(DATALENGTH(T.col01)) FROM dbo.Test AS T ; …suddenly we have 49 LOB logical reads, as well as the ‘normal’ logical reads we would expect. The Explanation If we had tried to create this table in SQL Server 2000, we would have received a warning message to say that future INSERT or UPDATE operations on the table might fail if the resulting row exceeded the in-row storage limit of 8060 bytes.  If we needed to store more data than would fit in an 8060 byte row (including internal overhead) we had to use a LOB column – TEXT, NTEXT, or IMAGE.  These special data types store the large data values in a separate structure, with just a small pointer left in the original row. Row Overflow SQL Server 2005 introduced a feature called row overflow, which allows one or more variable-length columns in a row to move to off-row storage if the data in a particular row would otherwise exceed 8060 bytes.  
You no longer receive a warning when creating (or altering) a table that might need more than 8060 bytes of in-row storage; if SQL Server finds that it can no longer fit a variable-length column in a particular row, it will silently move one or more of these columns off the row into a separate allocation unit. Only variable-length columns can be moved in this way (for example the (N)VARCHAR, VARBINARY, and SQL_VARIANT types).  Fixed-length columns (like INTEGER and DATETIME for example) never move into ‘row overflow’ storage.  The decision to move a column off-row is done on a row-by-row basis – so data in a particular column might be stored in-row for some table records, and off-row for others. In general, if SQL Server finds that it needs to move a column into row-overflow storage, it moves the largest variable-length column record for that row.  Note that in the case of an UPDATE statement that results in the 8060 byte limit being exceeded, it might not be the column that grew that is moved! Sneaky LOBs Anyway, that’s all very interesting but I don’t want to get too carried away with the intricacies of row-overflow storage internals.  The point is that it is now possible to define a table with non-LOB columns that will silently exceed the old row-size limit and result in ordinary variable-length columns being moved to off-row storage.  Adding new columns to a table, expanding an existing column definition, or simply storing more data in a column than you used to – all these things can result in one or more variable-length columns being moved off the row. Note that row-overflow storage is logically quite different from old-style LOB and new-style MAX data type storage – individual variable-length columns are still limited to 8000 bytes each – you can just have more of them now.  Having said that, the physical mechanisms involved are very similar to full LOB storage – a column moved to row-overflow leaves a 24-byte pointer record in the row, and the ‘separate storage’ I have been talking about is structured very similarly to both old-style LOBs and new-style MAX types.  The disadvantages are also the same: when SQL Server needs a row-overflow column value it needs to follow the in-row pointer a navigate another chain of pages, just like retrieving a traditional LOB. And Finally… In the example script presented above, the rows with row_id values from 402 to 450 inclusive all exceed the total in-row storage limit of 8060 bytes.  A SELECT that references a column in one of those rows that has moved to off-row storage will incur one or more lob logical reads as the storage engine locates the data.  The results on your system might vary slightly depending on your settings, of course; but in my tests only column 1 in rows 402-450 moved off-row.  You might like to play around with the script – updating columns, changing data type lengths, and so on – to see the effect on lob logical reads and which columns get moved when.  You might even see row-overflow columns moving back in-row if they are updated to be smaller (hint: reduce the size of a column entry by at least 1000 bytes if you hope to see this). Be aware that SQL Server will not warn you when it moves ‘ordinary’ variable-length columns into overflow storage, and it can have dramatic effects on performance.  It makes more sense than ever to choose column data types sensibly.  
If you make every column a VARCHAR(8000) or NVARCHAR(4000), and someone stores data that results in a row needing more than 8060 bytes, SQL Server might turn some of your column data into pseudo-LOBs – all without saying a word. Finally, some people make a distinction between ordinary LOBs (those that can hold up to 2GB of data) and the LOB-like structures created by row-overflow (where columns are still limited to 8000 bytes) by referring to row-overflow LOBs as SLOBs.  I find that quite appealing, but the ‘S’ stands for ‘small’, which makes expanding the whole acronym a little daft-sounding…small large objects anyone? © Paul White 2011 email: [email protected] twitter: @SQL_Kiwi
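
    As a small follow-up (not part of Paul's original script, just a sketch), the row-overflow allocation unit responsible for those lob logical reads can be seen directly in the catalog views:

        -- Lists the allocation units (IN_ROW_DATA, ROW_OVERFLOW_DATA, ...) for dbo.Test.
        SELECT au.type_desc, au.total_pages, au.used_pages
        FROM sys.partitions AS p
        JOIN sys.allocation_units AS au
            ON au.container_id IN (p.hobt_id, p.partition_id)
        WHERE p.[object_id] = OBJECT_ID(N'dbo.Test');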

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #031

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 Find Table without Clustered Index – Find Table with no Primary Key Clustered index is very important concept for any table. They impact the performance very heavily. Here is a quick script to find tables without a clustered index. Replace TEXT with VARCHAR(MAX) – Stop using TEXT, NTEXT, IMAGE Data Types Question: “Is VARCHAR (MAX) big enough to store the TEXT field?” Answer: “Yes, VARCHAR(MAX) is big enough to accommodate TEXT field. TEXT, NTEXT and IMAGE data types of SQL Server 2000 will be deprecated in a future version of SQL Server, SQL Server 2005 provides backward compatibility to data types but it is recommended to use new data types which are VARHCAR (MAX), NVARCHAR (MAX) and VARBINARY (MAX).” Limiting Result Sets by Using TABLESAMPLE – Examples Introduced in SQL Server 2005, TABLESAMPLE allows you to extract a sampling of rows from a table in the FROM clause. The rows retrieved are random and they are are not in any order. This sampling can be based on a percentage of number of rows. You can use TABLESAMPLE when only a sampling of rows is necessary for the application instead of a full result set. User Defined Functions (UDF) Limitations UDF have its own advantage and usage but in this article we will see the limitation of UDF. Things UDF can not do and why Stored Procedure are considered as more flexible then UDFs. Stored Procedure are more flexibility then User Defined Functions(UDF). However, this blog post is a good read to know what are the limitations of UDF. Change Database Compatible Level – Backward Compatibility For a long time SQL Server stayed on the compatibility level of 80 which is of SQL Server 2000. However, as soon as SQL Server 2005 introduced the issue of compatibility was quite a major issue. Since that time MS has been releasing the versions at every 2-3 years, changing compatibility is a ever popular topic. In this blog post, we learn how we can do the same using T-SQL. We can also do the same using SSMS and here is the blog post for the same: Change Database Compatible Level – Backward Compatibility – Part 2 – Management Studio. Constraint on VARCHAR(MAX) Field To Limit It Certain Length How can I limit the VARCHAR(MAX) field with maximum length of 12500 characters only. His Question was valid as our application was allowed 12500 characters. First of all – this requirement is bit strange but if someone wants to do the same, they can do it as described in this blog post. 2008 UNPIVOT Table Example Understanding UNPIVOT can be very complicated at times. In this blog post, I have attempted to explain the same concept in very simple words. Create Default Constraint Over Table Column A simple straight to script blog post – I still use this blog quite many times for my own reference. UDF – Get the Day of the Week Function It took me 4 iteration to find this very simple function which can immediately get the day of the week in a single line. 2009 Find Hostname and Current Logged In User Name There are two tricks listed in this blog post where users can find out the hostname and current logged user name immediately and very easily. 
Interesting Observation of Logon Trigger On All Servers When I was doing a project, I made an interesting observation of executing a logon trigger multiple times. It was absolutely unexpected for me! As I was logging only once, naturally, I was expecting the entry only once. However, it did it multiple times on different threads – indeed an eccentric phenomenon at first sight! Difference Between Candidate Keys and Primary Key One needs to be very careful in selecting the Primary Key as an incorrect selection can adversely impact the database architect and future normalization. For a Candidate Key to qualify as a Primary Key, it should be Non-NULL and unique in any domain. I have observed quite often that Primary Keys are seldom changed. I would like to have your feedback on not changing a Primary Key. Create Multiple Filegroup For Single Database Why should one create multiple file group for any database and what are the advantages of the same. In this blog post, I explain the same in detail. List All Objects Created on All Filegroups in Database In this blog post we discuss the essential question – “How can I find which object belongs to which filegroup. Is there any way to know this?” 2010 DATE and TIME in SQL Server 2008 When DATE is converted to DATETIME it adds the of midnight. When TIME is converted to DATETIME it adds the date of 1900 and it is something one wants to consider if you are going to run scripts from SQL Server 2008 to earlier version with CONVERT. Disabled Index and Update Statistics If you do not need a nonclustered index, I suggest you to drop it as keeping them disabled is an overhead on your system. This is because every time the statistics are updated for system all the statistics for disabled indexes are also updated. Precision of SMALLDATETIME – A 1 Minute Precision The precision of the datatype SMALLDATETIME is 1 minute. It discards the seconds by rounding up or rounding down any seconds greater than zero. 2011 Getting Columns Headers without Result Data – SET FMTONLY ON SET FMTONLY ON returns only metadata to the client. It can be used to test the format of the response without actually running the query. When this setting is ON the resultset only have headers of the results but no data. Copy Database from Instance to Another Instance – Copy Paste in SQL Server SQL Server has a feature which copy database from one database to another database and it can be automated as well using SSIS. Make sure you have SQL Server Agent Turned on as this feature will create a job. Puzzle – SELECT * vs SELECT COUNT(*) If you have ever wondered SELECT * gives error when executed alone but SELECT COUNT(*) does not. Why? in that case, you should read this blog post. Creating All New Database with Full Recovery Model This blog post is very based on very interesting story where the user wants to do something by default for every single new database created. Model database is a secret weapon which should be used very carefully and with proper evalution. If used carefully this can be a very much beneficiary when we need a newly created database behave in certain fashion. 2012 In year 2012 I had two interesting series ran on the blog. If there is no fun in learning, the learning becomes a burden. For the same reason, I had decided to build a three part quiz around SEQUENCE. The quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz. 
Guess the Next Value – Puzzle 1 Guess the Next Value – Puzzle 2 Guess the Next Value – Puzzle 3 Can anyone remember their final day of schooling?  This is probably a silly question because – of course you can!  Many people mark this as the most exciting, happiest day of their life.  It marks the end of testing, the end of following rules set by teachers, and the beginning of finally being able to earn money and work in your chosen field. Read five part series on developer training subject Developer Training - Importance and Significance - Part 1 Developer Training – Employee Morals and Ethics – Part 2 Developer Training – Difficult Questions and Alternative Perspective - Part 3 Developer Training – Various Options for Developer Training – Part 4 Developer Training – A Conclusive Summary- Part 5 Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • CodePlex Daily Summary for Saturday, March 06, 2010

    CodePlex Daily Summary for Saturday, March 06, 2010New ProjectsAgr.CQRS: Agr.CQRS is a C# framework for DDD applications that use the Command Query Responsibility Segregation pattern (CQRS) and Event Sourcing. BigDays 2010: Big>Days 2010BizTalk - Controlled Admin: Hi .NET folks, I am planning to start project on a Controlled BizTalk Admin tool. This tool will be useful for the organizations which have "Sh...Blacklist of Providers: Blacklist of Providers - the application for department of warehouse logistics (warehouse) at firms.Career Vector: A job board software.Chargify Demo: This is a sample website for ChargifyConceptual: Concept description and animationEric Hexter: My publicly available source code and examplesFluentNHibernate.Search: A Fluent NHibernate.Search mapping interface for NHibernate provider implementation of Lucene.NET.FreelancePlanner: FreelancePlanner is a project tracking tool for freelance translators.HTMLx - JavaScript on the Server for .NET: HTMLx is a set of libraries based on ASP.NET engine to provide JavaScript programmability on the server side. It allows Web developers to use JavaS...IronMSBuild: IronMSBuild is a custom MSBuild Task, which allows you to execute IronRuby scripts. // have to provide some examples LINQ To Blippr: LINQ to Blippr is an open source LINQ Provider for the micro-reviewing service Blippr. LINQ to Blippr makes it easier and more efficent for develo...Luk@sh's HTML Parser: library that simplifies parsing of the HTML documents, for .NETMeta Choons: Unsure as yet but will be a kind of discogs type site but different..NetWork2: NetWork2Regular Expression Chooser: Simple gui for choosing the regular expressions that have become more than simple.See.Sharper: Hopefully useful C# extensions.SharePoint 2010 Toggle User Interface: Toggle the SharePoint 2010 user interface between the new SharePoint 2010 user interface and SharePoint 2007 user interface.Silverlight DiscussionBoard for SharePoint: This is a sharepoint 3.0 webpart that uses a silverlight treeview to display metadata about sharepoint discussions anduses the html bridge to show...Simple Sales Tracking CRM API Wrapper: The Simple Sales Tracking API Wrapper, enables easy extention development and integration with the hosted service at http://www.simplesalestracking...Syntax4Word: A syntax addin for word 2007.TortoiseHg installer builder: TortoiseHg and Mercurial installer builder for Windowsunbinder: Model un binding for route value dictionariesWindows Workflow Foundation on Codeplex: This site has previews of Workflow features which are released out of band for the purposes of adoption and feedback.XNA RSM Render State Manager: Render state management idea for XNA games. Enables isolation between draw calls whilst reducing DX9 SetRenderState calls to the minimum.New ReleasesAgr.CQRS: Sourcecode package: Agr.CQRS is a C# framework for DDD applications that use the Command Query Responsibility Segregation pattern (CQRS) and Event Sourcing. This dow...Book Cataloger: Preview 0.1.6a: New Features: Export to Word 2007 Bibliography format Dictionary list editors for Binding, Condition Improvements: Stability improved Content ...Braintree Client Library: Braintree-1.1.2: Includes minor enhancements to CreditCard and ValidationErrors to support upcoming example application.CassiniDev - Cassini 3.5 Developers Edition: CassiniDev v3.5.0.5: For usage see Readme.htm in download. 
New in CassiniDev v3.5.0.5 Reintroduced the Lib project and signed all Implemented the CassiniSqlFixture -...Composure: Calcium-64420-VS2010rc1.NET4.SL3: This is a simple conversion of Calcium (rev 64420) built in VS2010 RC1 against .NET4 and Silverlight 3. No source files were changed and ALL test...Composure: MS AJAX Library (46266) for VS2010 RC1 .NET4: This is a quick port of Microsoft's AJAX Library (rev 46266) for Visual Studio 2010 RC1 built against .NET 4.0. Since this conversion was thrown t...Composure: MS Web Test Lightweight for VS2010 RC1 .NET4: A simple conversion of Microsoft's Web Test Lightweight for Visual Studio 2010 RC1 .NET 4.0. This is part of a larger "special request" conversion...CoNatural Components: CoNatural Components 1.5: Supporting new data types: Added support for binary data types -> binary, varbinary, etc maps to byte[] Now supporting SQL Server 2008 new types ...Extensia: Extensia 2010-03-05: Extensia is a very large list of extension methods and a few helper types. Some extension methods are not practical (e.g. slow) whilst others are....Fluent Assertions: Fluent Assertions release 1.1: In this release, we've worked hard to add some important missing features that we really needed, and also improve resiliance against illegal argume...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 1.0 RC: Fluent Ribbon Control Suite 1.0 (Release Candidate)Includes: Fluent.dll (with .pdb and .xml, debug and release version) Showcase Application Sa...FluentNHibernate.Search: 0.1 Beta: First beta versionFolderSize: FolderSize.Win32.1.0.7.0: FolderSize.Win32.1.0.6.0 A simple utility intended to be used to scan harddrives for the folders that take most place and display this to the user...Free Silverlight & WPF Chart Control - Visifire: Silverlight and WPF Step Line Chart: Hi, With this release Visifire introduces Step Line Chart. This release also contains fix for the following issues: * In WPF, if AnimatedUpd...Html to OpenXml: HtmlToOpenXml 1.0: The dll library to include in your project. The dll is signed for GAC support. Compiled with .Net 3.5, Dependencies on System.Drawing.dll and Docu...Line Counter: 1.5.1: The Line Counter is a tool to calculate lines of your code files. The tool was written in .NET 2.0. Line Counter 1.5.1 Added outline icons and lin...Lokad Cloud - .NET O/C mapper (object to cloud) for Windows Azure: Lokad.Cloud v1.0.662.1: You can get the most recent release directly from the build server at http://build.lokad.com/distrib/Lokad.Cloud/Lost in Translation: LostInTranslation v0.2: Alpha release: function complete but not UX complete.MDownloader: MDownloader-0.15.7.56349: Supported large file resumption. Fixed minor bugs.Mini C# Lab: Mini CSharp Lab Ver 1.4: The primary new feature of Ver 1.4 is batch mode! Now you can run Mini C# Lab program as a scheduled task, no UI interactivity is needed. Here ar...Mobile Store: First drop: First droppatterns & practices SharePoint Guidance: SPG2010 Drop6: SharePoint Guidance Drop Notes Microsoft patterns and practices ****************************************** ***************************************...Picasa Downloader: PicasaDownloader (41446): Changelog: Replaced some exception messages by a Summary dialog shown after downloading if there have been problems. 
Corrected the Portable vers...Pod Thrower: Version 1: This is the first release, I'm sure there are bugs, the tool is fully functional and I'm using it currently.PowerShell Provider BizTalk: BizTalkFactory PowerShell Provider - 1.1-snapshot: This release constitutes the latest development snapshot for the Provider. Please, leave feedback and use the Issue Tracker to help improve this pr...Resharper Settings Manager: RSM 1.2.1: This is a bug fix release. Changes Fixed plug-in crash when shared settings file was modified externally.Reusable Library Demo: Reusable Library Demo v1.0.2: A demonstration of reusable abstractions for enterprise application developerSharePoint 2010 Toggle User Interface: SharePoint Toggle User Interface: Release 1.0.0.0Starter Kit Mytrip.Mvc.Entity: Mytrip.Mvc.Entity(net3.5 MySQL) 1.0 Beta: MySQL VS 2008 EF Membership UserManager FileManager Localization Captcha ClientValidation Theme CrossBrowserTortoiseHg: TortoiseHg 1.0: http://bitbucket.org/tortoisehg/stable/wiki/ReleaseNotes Please backup your user Mercurial.ini file and then uninstall any 0.9.X release before in...Visual Studio 2010 and Team Foundation Server 2010 VM Factory: Rangers Virtualization Guidance: Rangers Virtualization Guidance Focused guidance on creating a Rangers base image manually and introduction of PowerShell scripts to automate many ...Visual Studio DSite: Advanced Email Program (Visual Basic 2008): This email program can send email to any one using your email username and email credentials. The email program can also attatch attactments to you...WPF ShaderEffect Generator: WPF ShaderEffect Generator 1.6: Several improvements and bug fixes have gone into the comment parsing code for the registers. The plug-in should now correctly pay attention to th...WSDLGenerator: WSDLGenerator 0.0.0.3: - Fixed SharePoint generated *.wsdl.aspx file - Added commandline option -wsdl which does only generate the wsdl file.Most Popular ProjectsMetaSharpRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NETLiveUpload to FacebookMicrosoft SQL Server Community & SamplesMost Active ProjectsUmbraco CMSRawrSDS: Scientific DataSet library and toolsBlogEngine.NETjQuery Library for SharePoint Web Servicespatterns & practices – Enterprise LibraryIonics Isapi Rewrite FilterFluent AssertionsComposureDiffPlex - a .NET Diff Generator

    Read the article

  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month’s T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don’t write about LobsterPot client horror stories, so I’m writing about a situation that a fellow MVP friend asked me about recently instead. The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what’s changed. But of course, this isn’t always possible. And it’s much nicer to help someone with this kind of thing, rather than to be trying to fix it yourself when you’ve just deleted the wrong data source. Unfortunately, it lets you delete data sources, without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend’s colleague. So, suddenly there’s a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn’t the hard part (that’s just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again. I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don’t – they’re catalogue items just like the reports. Let’s have a look at the structure of these two tables (although if you’re reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn’t seem to be a Books Online page for this information, sorry about that. I’m also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there doesn’t seem to have been any changes here for quite a while. dbo.Catalog The Primary Key is ItemID. It’s a uniqueidentifier. I’m not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what’s what. But foreign keys are for that too… Path, Name and ParentID tell you where in the folder structure the item lives. Path isn’t actually required – you could’ve done recursive queries to get there. But as that would be quite painful, I’m more than happy for the Path column to be there. Path contains the Name as well, incidentally. Type tells you what kind of item it is. Some examples are 1 for a folder and 2 a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it). Content is an image field, remembering that image doesn’t necessarily store images – these days we’d rather use varbinary(max), but even in SQL Server 2012, this field is still image. 
    It stores the actual item definition in binary form, whether it’s actually an image, a report, whatever.

    LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID.

    Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn’t be a separate table, but I guess that’s just the way it goes. This field gets changed when the default parameters get changed in Report Manager.

    There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose to NOT replace the data sources. Anyway, they’re not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I’m not sure. They’re in dbo.DataSource instead.

    dbo.DataSource

    The Primary Key is DSID. Yes, it’s a uniqueidentifier...

    ItemID is a foreign key reference back to dbo.Catalog.

    Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question.

    Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You’d think this should be enforced by foreign key, but it’s not. It does allow NULLs though.

    Flags – this is an int, and I’ll come back to this.

    When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you’d be wrong. And not because of the lack of a foreign key either.

    Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedures that many people write, the kind of thing that means they don’t need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot. Except that it doesn’t quite do that. If it deleted everything on a cascading delete, we’d’ve lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don’t want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure.

        UPDATE [DataSource]
           SET [Flags] = [Flags] & 0x7FFFFFFD, -- broken link
               [Link] = NULL
        FROM [Catalog] AS C
             INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link]
        WHERE (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*')

    Unfortunately there’s no semi-colon on the end (but I’d rather they fix the ntext and image types first), and don’t get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what’s going on with the Flags field.

    What I’d LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that’s in the shared data source, the one that’s about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I’d say that’s probably still slightly better than LOSING THE INFORMATION COMPLETELY.
    Sorry, rant over. I should log a Connect item – I’ll put that on my todo list.

    So it sets the Link field to NULL, and marks the Flags to tell you they’re broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the ‘2’ bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10, get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we’ve just used. I’d also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of the broken reports – in case you get a surprise from a different data source that was broken in the past.

        SELECT c.Path, ds.Name
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

    When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you’d be forgetting something.

    So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR, which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you’re changing, just in case you need to revert something later after testing (doing it all in a transaction won’t help, because you’ll just lock out the table, stopping you from testing anything).

        UPDATE ds
        SET [Flags] = [Flags] | 2,
            [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /*Insert your own GUID*/
        OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

    But please be careful. Your mileage may vary. And there’s no reason why 400-odd broken reports need to be quite the nightmare that it could be. Really, it should be less than five minutes.

    @rob_farley
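
    As a footnote to the steps above, here is a minimal sketch of the lookup the post describes but doesn’t show – finding the ItemID of the replacement shared data source so it can be pasted into the repair UPDATE. The path '/Data Sources/MyNewSource' is a placeholder, not something from the post; substitute wherever the new shared data source was actually deployed.

        -- Hedged sketch: locate the recreated shared data source in the catalogue.
        SELECT c.ItemID, c.Path, c.Name
        FROM dbo.[Catalog] AS c
        WHERE c.[Type] = 5                              -- 5 = data source, per the Type values listed earlier
        AND c.[Path] = '/Data Sources/MyNewSource';     -- placeholder path

    The ItemID this returns is the GUID that goes into the [Link] assignment in the repair UPDATE above.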

    Read the article

  • Download binary file From SQL Server 2000

    - by kareemsaad
    I inserted binary files (images, PDF, videos..) and I want to retrieve this file to download it. I used generic handler page as this public void ProcessRequest (HttpContext context) { using (System.Data.SqlClient.SqlConnection con = Connection.GetConnection()) { String Sql = "Select BinaryData From ProductsDownload Where Product_Id = @Product_Id"; SqlCommand com = new SqlCommand(Sql, con); com.CommandType = System.Data.CommandType.Text; com.Parameters.Add(Parameter.NewInt("@Product_Id", context.Request.QueryString["Product_Id"].ToString())); SqlDataReader dr = com.ExecuteReader(); if (dr.Read() && dr != null) { Byte[] bytes; bytes = Encoding.UTF8.GetBytes(String.Empty); bytes = (Byte[])dr["BinaryData"]; context.Response.BinaryWrite(bytes); dr.Close(); } } } and this is my table CREATE TABLE [ProductsDownload] ( [ID] [bigint] IDENTITY (1, 1) NOT NULL , [Product_Id] [int] NULL , [Type_Id] [int] NULL , [Name] [nvarchar] (200) COLLATE Arabic_CI_AS NULL , [MIME] [varchar] (50) COLLATE Arabic_CI_AS NULL , [BinaryData] [varbinary] (4000) NULL , [Description] [nvarchar] (500) COLLATE Arabic_CI_AS NULL , [Add_Date] [datetime] NULL , CONSTRAINT [PK_ProductsDownload] PRIMARY KEY CLUSTERED ( [ID] ) ON [PRIMARY] , CONSTRAINT [FK_ProductsDownload_DownloadTypes] FOREIGN KEY ( [Type_Id] ) REFERENCES [DownloadTypes] ( [ID] ) ON DELETE CASCADE ON UPDATE CASCADE , CONSTRAINT [FK_ProductsDownload_Product] FOREIGN KEY ( [Product_Id] ) REFERENCES [Product] ( [Product_Id] ) ON DELETE CASCADE ON UPDATE CASCADE ) ON [PRIMARY] GO And use data list has label for file name and button to download file as <asp:DataList ID="DataList5" runat="server" DataSource='<%#GetData(Convert.ToString(Eval("Product_Id")))%>' RepeatColumns="1" RepeatLayout="Flow"> <ItemTemplate> <table width="100%" border="0" cellspacing="0" cellpadding="0"> <tr> <td class="spc_tab_hed_bg spc_hed_txt lm5 tm2 bm3"> <asp:Label ID="LblType" runat="server" Text='<%# Eval("TypeName", "{0}") %>'></asp:Label> </td> <td width="380" class="spc_tab_hed_bg"> &nbsp; </td> </tr> <tr> <td align="left" class="lm5 tm2 bm3"> <asp:Label ID="LblData" runat="server" Text='<%# Eval("Name", "{0}") %>'></asp:Label> </td> <td align="center" class=" tm2 bm3"> <a href='<%# "DownloadFile.aspx?Product_Id=" + DataBinder.Eval(Container.DataItem,"Product_Id") %>' > <img src="images/downloads_ht.jpg" width="11" height="11" border="0" /> </a> <%--<asp:ImageButton ID="ImageButton1" ImageUrl="images/downloads_ht.jpg" runat="server" OnClick="ImageButton1_Click1" />--%> </td> </tr> </table> </ItemTemplate> </asp:DataList> I tried more to solve this problem but I cannot please if any one has solve for this proplem please sent me thank you kareem saad programmer MCTS,MCPD Toshiba Company Egypt
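
    A few things stand out in the code above: the handler checks dr != null only after calling dr.Read(), it never sets a Content-Type, and the DataList links point at DownloadFile.aspx while the code shown is a generic handler. Below is a minimal sketch of how such a handler is commonly written – an illustration, not a verified fix for this setup. It assumes an .ashx generic handler and plain ADO.NET in place of the question’s Connection/Parameter helper classes (which aren’t shown); the "Main" connection string name is an assumption. Note also that BinaryData is declared varbinary(4000), so anything longer than 4000 bytes is already truncated at insert time regardless of how it is read back.

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Web;

        // Hedged sketch only: stream the stored bytes back with sensible headers.
        // The "Main" connection string name is an assumption for illustration.
        public class DownloadFile : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                int productId = int.Parse(context.Request.QueryString["Product_Id"]);

                string connStr = System.Configuration.ConfigurationManager
                    .ConnectionStrings["Main"].ConnectionString;

                using (SqlConnection con = new SqlConnection(connStr))
                using (SqlCommand com = new SqlCommand(
                    "SELECT Name, MIME, BinaryData FROM ProductsDownload WHERE Product_Id = @Product_Id", con))
                {
                    com.Parameters.Add("@Product_Id", SqlDbType.Int).Value = productId;
                    con.Open();

                    using (SqlDataReader dr = com.ExecuteReader())
                    {
                        if (dr.Read() && !dr.IsDBNull(2))
                        {
                            // Tell the browser what the bytes are and suggest a file name.
                            string fileName = dr.IsDBNull(0) ? "download" : dr.GetString(0);
                            context.Response.ContentType =
                                dr.IsDBNull(1) ? "application/octet-stream" : dr.GetString(1);
                            context.Response.AddHeader("Content-Disposition",
                                "attachment; filename=\"" + fileName + "\"");
                            context.Response.BinaryWrite((byte[])dr["BinaryData"]);
                        }
                        else
                        {
                            context.Response.StatusCode = 404;
                        }
                    }
                }
            }

            public bool IsReusable { get { return false; } }
        }

    With a handler like this, the DataList link would point at DownloadFile.ashx?Product_Id=... rather than the .aspx page.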

    Read the article

  • Prevent lazy loading in nHibernate

    - by Ciaran
    Hi, I'm storing some blobs in my database, so I have a Document table and a DocumentContent table. Document contains a filename, description etc and has a DocumentContent property. I have a Silverlight client, so I don't want to load up and send the DocumentContent to the client unless I explicity ask for it, but I'm having trouble doing this. I've read the blog post by Davy Brion. I have tried placing lazy=false in my config and removing the virtual access modifier but have had no luck with it as yet. Every time I do a Session.Get(id), the DocumentContent is retrieved via an outer join. I only want this property to be populated when I explicity join onto this table and ask for it. Any help is appreciated. My NHibernate mapping is as follows: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="Jrm.Model" namespace="Jrm.Model"> <class name="JrmDocument" lazy="false"> <id name="JrmDocumentID"> <generator class="native" /> </id> <property name="FileName"/> <property name="Description"/> <many-to-one name="DocumentContent" class="JrmDocumentContent" unique="true" column="JrmDocumentContentID" lazy="false"/> </class> <class name="JrmDocumentContent" lazy="false"> <id name="JrmDocumentContentID"> <generator class="native" /> </id> <property name="Content" type="BinaryBlob" lazy="false"> <column name="FileBytes" sql-type="varbinary(max)"/> </property> </class> </hibernate-mapping> and my classes are: [DataContract] public class JrmDocument : ModelBase { private int jrmDocumentID; private JrmDocumentContent documentContent; private long maxFileSize; private string fileName; private string description; public JrmDocument() { } public JrmDocument(string fileName, long maxFileSize) { DocumentContent = new JrmDocumentContent(File.ReadAllBytes(fileName)); FileName = new FileInfo(fileName).Name; } [DataMember] public virtual int JrmDocumentID { get { return jrmDocumentID; } set { jrmDocumentID = value; OnPropertyChanged("JrmDocumentID"); } } [DataMember] public JrmDocumentContent DocumentContent { get { return documentContent; } set { documentContent = value; OnPropertyChanged("DocumentContent"); } } [DataMember] public virtual long MaxFileSize { get { return maxFileSize; } set { maxFileSize = value; OnPropertyChanged("MaxFileSize"); } } [DataMember] public virtual string FileName { get { return fileName; } set { fileName = value; OnPropertyChanged("FileName"); } } [DataMember] public virtual string Description { get { return description; } set { description = value; OnPropertyChanged("Description"); } } } [DataContract] public class JrmDocumentContent : ModelBase { private int jrmDocumentContentID; private byte[] content; public JrmDocumentContent() { } public JrmDocumentContent(byte[] bytes) { Content = bytes; } [DataMember] public int JrmDocumentContentID { get { return jrmDocumentContentID; } set { jrmDocumentContentID = value; OnPropertyChanged("JrmDocumentContentID"); } } [DataMember] public byte[] Content { get { return content; } set { content = value; OnPropertyChanged("Content"); } } }

    Read the article

  • Problem converting MsSql to MySql Stored procedure

    - by karthik
    Original source of MsSql SP is here.. http://www.codeproject.com/KB/database/InsertGeneratorPack.aspx I am using the below MySql stored procedure, created by SQLWAYS [Tool to convert MsSql to MySql]. The purpose of this is to take backup of selected tables to a script file. when the SP returns a value {Insert statements}. When i Execute the Below SP, i am getting a weird Result Set : SQLWAYS_EVAL# ll(cast(UidSQLWAYS_EVAL# 0)),'0')+''','+SQLWAYS_EVAL# ll(UserNameSQLWAYS_EVAL# '+SQLWAYS_EVAL# ll(PasswordSQLWAYS_EVAL# '+ I see a lot of "SQLWAYS_EVAL#" in the code, which is produced in the result too. What values need to be passed instead of "SQLWAYS_EVAL#". So that i get the proper Insert statements for each record in the table. I am new to MySql. Please help me. Its Urgent. Thanks. DELIMITER $$ DROP PROCEDURE IF EXISTS `InsertGenerator` $$ CREATE DEFINER=`root`@`localhost` PROCEDURE `InsertGenerator`() SWL_return: BEGIN -- SQLWAYS_EVAL# to retrieve column specific information -- SQLWAYS_EVAL# table DECLARE v_string VARCHAR(3000); -- SQLWAYS_EVAL# first half -- SQLWAYS_EVAL# tement DECLARE v_stringData VARCHAR(3000); -- SQLWAYS_EVAL# data -- SQLWAYS_EVAL# statement DECLARE v_dataType VARCHAR(1000); -- SQLWAYS_EVAL# -- SQLWAYS_EVAL# columns DECLARE v_colName VARCHAR(50); DECLARE NO_DATA INT DEFAULT 0; DECLARE cursCol CURSOR FOR SELECT column_name,data_type FROM information_schema.`columns` -- WHERE table_name = v_tableName; WHERE table_name = 'tbl_users'; DECLARE CONTINUE HANDLER FOR SQLEXCEPTION BEGIN SET NO_DATA = -2; END; DECLARE CONTINUE HANDLER FOR NOT FOUND SET NO_DATA = -1; OPEN cursCol; SET v_string = CONCAT('INSERT ',v_tableName,'('); SET v_stringData = ''; SET NO_DATA = 0; FETCH cursCol INTO v_colName,v_dataType; IF NO_DATA <> 0 then -- NOT SUPPORTED print CONCAT('Table ',@tableName, ' not found, processing skipped.') close cursCol; LEAVE SWL_return; end if; WHILE NO_DATA = 0 DO IF v_dataType in('varchar','char','nchar','nvarchar') then SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(',v_colName,'SQLWAYS_EVAL# ''+'); ELSE if v_dataType in('text','ntext') then -- SQLWAYS_EVAL# -- SQLWAYS_EVAL# else SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(',v_colName,'SQLWAYS_EVAL# 00)),'''')+'''''',''+'); ELSE IF v_dataType = 'money' then -- SQLWAYS_EVAL# doesn't get converted -- SQLWAYS_EVAL# implicitly SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# y,''''''+ isnull(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0.0000'')+''''''),''+'); ELSE IF v_dataType = 'datetime' then SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# time,''''''+ isnull(cast(',v_colName, 'SQLWAYS_EVAL# 0)),''0'')+''''''),''+'); ELSE IF v_dataType = 'image' then SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(convert(varbinary,',v_colName, 'SQLWAYS_EVAL# 6)),''0'')+'''''',''+'); ELSE SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0'')+'''''',''+'); end if; end if; end if; end if; end if; SET v_string = CONCAT(v_string,v_colName,','); SET NO_DATA = 0; FETCH cursCol INTO v_colName,v_dataType; END WHILE; select v_stringData; END $$ DELIMITER ;
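
    For context: SQLWAYS_EVAL# appears to be the marker SQLWAYS leaves behind for expressions it could not translate, so there is no value to pass in – each of those fragments has to be rewritten by hand in MySQL terms (ISNULL becomes IFNULL, '+' string concatenation becomes CONCAT, and so on). Purely as an illustration of what the generator is ultimately trying to produce for a character column – not a reconstruction of what the tool intended – here is a standalone MySQL statement; UserName is a hypothetical column of tbl_users:

        -- QUOTE() wraps the value in single quotes, escapes embedded quotes and
        -- backslashes, and renders NULL as the literal word NULL.
        SELECT CONCAT('INSERT INTO tbl_users (UserName) VALUES (', QUOTE(UserName), ');')
        FROM tbl_users;

    Note that in MySQL, CONCAT returns NULL if any argument is NULL, which is one reason the converted string-building code needs IFNULL (or QUOTE) around each column.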

    Read the article
