Search Results

Search found 91 results on 4 pages for 'toan tran'.

Page 1/4 | 1 2 3 4  | Next Page >

  • What does really happen when we do a BEGIN TRAN in SQL Server 2005?

    - by Misnomer
    Hi all, I came across this issue, or maybe something I didn't realize: I did a BEGIN TRAN, had some code inside it, and never ran a COMMIT or ROLLBACK because I forgot about it. That caused many of the database queries, even a simple SELECT TOP 1000, to just sit there loading. It probably put some locks on the tables, I guess, since it did not let me query them, but I just wanted to know what exactly happened and what practices should be followed here?
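    A minimal sketch of how to investigate and clean this up, assuming SQL Server 2005: check whether the session still holds an open transaction, see who it is blocking, and then commit or roll back from the session that ran BEGIN TRAN. The DMV column list is illustrative.

      -- In the forgetful session (or any session, for the instance-wide views):
      SELECT @@TRANCOUNT;   -- greater than 0 means this session still has an open transaction
      DBCC OPENTRAN;        -- oldest active transaction in the current database

      -- Who is blocked, and by which session?
      SELECT session_id, blocking_session_id, wait_type, wait_time
      FROM sys.dm_exec_requests
      WHERE blocking_session_id <> 0;

      -- Then, from the session that issued BEGIN TRAN:
      COMMIT TRAN;      -- keep the work
      -- or
      ROLLBACK TRAN;    -- discard it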

    Read the article

  • How to avoid using duplicate savepoint names in nested transactions in nested stored procs?

    - by Gary McGill
    I have a pattern that I almost always follow, where if I need to wrap up an operation in a transaction, I do this: BEGIN TRANSACTION SAVE TRANSACTION TX -- Stuff IF @error <> 0 ROLLBACK TRANSACTION TX COMMIT TRANSACTION That's served me well enough in the past, but after years of using this pattern (and copy-pasting the above code), I've suddenly discovered a flaw which comes as a complete shock. Quite often, I'll have a stored procedure calling other stored procedures, all of which use this same pattern. What I've discovered (to my cost) is that because I'm using the same savepoint name everywhere, I can get into a situation where my outer transaction is partially committed - precisely the opposite of the atomicity that I'm trying to achieve. I've put together an example that exhibits the problem. This is a single batch (no nested stored procs), and so it looks a little odd in that you probably wouldn't use the same savepoint name twice in the same batch, but my real-world scenario would be too confusing to post. CREATE TABLE Test (test INTEGER NOT NULL) BEGIN TRAN SAVE TRAN TX BEGIN TRAN SAVE TRAN TX INSERT INTO Test(test) VALUES (1) COMMIT TRAN TX BEGIN TRAN SAVE TRAN TX INSERT INTO Test(test) VALUES (2) COMMIT TRAN TX DELETE FROM Test ROLLBACK TRAN TX COMMIT TRAN TX SELECT * FROM Test DROP TABLE Test When I execute this, it lists one record, with value "1". In other words, even though I rolled back my outer transaction, a record was added to the table. What's happening is that the ROLLBACK TRANSACTION TX at the outer level is rolling back as far as the last SAVE TRANSACTION TX at the inner level. Now that I write this all out, I can see the logic behind it: the server is looking back through the log file, treating it as a linear stream of transactions; it doesn't understand the nesting/hierarchy implied by either the nesting of the transactions (or, in my real-world scenario, by the calls to other stored procedures). So, clearly, I need to start using unique savepoint names instead of blindly using "TX" everywhere. But - and this is where I finally get to the point - is there a way to do this in a copy-pastable way so that I can still use the same code everywhere? Can I auto-generate the savepoint name on the fly somehow? Is there a convention or best-practice for doing this sort of thing? It's not exactly hard to come up with a unique name every time you start a transaction (could base it off the SP name, or somesuch), but I do worry that eventually there would be a conflict - and you wouldn't know about it because rather than causing an error it just silently destroys your data... :-(
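    One possible copy-pastable variation, shown here only as a sketch (the @savepoint variable and the 32-character truncation are mine, not from the question): build the savepoint name at runtime from the current procedure plus a NEWID(), so nested calls never reuse the same name.

      DECLARE @savepoint SYSNAME;
      SET @savepoint = LEFT(ISNULL(OBJECT_NAME(@@PROCID), 'adhoc') + '_'
                            + REPLACE(CAST(NEWID() AS CHAR(36)), '-', ''), 32);  -- savepoint names max out at 32 chars

      BEGIN TRANSACTION;
      SAVE TRANSACTION @savepoint;

      -- Stuff

      IF @error <> 0
          ROLLBACK TRANSACTION @savepoint;   -- rolls back only to this call's own savepoint
      COMMIT TRANSACTION;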

    Read the article

  • What is the scope of TRANSACTION in Sql server

    - by Shantanu Gupta
    I was creating a stored procedure and got stuck on a difference between my approach and my colleague's. I am using SQL Server 2005. I was writing the stored procedure like this: BEGIN TRAN BEGIN TRY INSERT INTO Tags.tblTopic (Topic, TopicCode, Description) VALUES(@Topic, @TopicCode, @Description) INSERT INTO Tags.tblSubjectTopic (SubjectId, TopicId) VALUES(@SubjectId, @@IDENTITY) COMMIT TRAN END TRY BEGIN CATCH DECLARE @Error VARCHAR(1000) SET @Error= 'ERROR NO : '+ERROR_NUMBER() + ', LINE NO : '+ ERROR_LINE() + ', ERROR MESSAGE : '+ERROR_MESSAGE() PRINT @Error ROLLBACK TRAN END CATCH And my colleague was writing it like the one below: BEGIN TRY BEGIN TRAN INSERT INTO Tags.tblTopic (Topic, TopicCode, Description) VALUES(@Topic, @TopicCode, @Description) INSERT INTO Tags.tblSubjectTopic (SubjectId, TopicId) VALUES(@SubjectId, @@IDENTITY) COMMIT TRAN END TRY BEGIN CATCH DECLARE @Error VARCHAR(1000) SET @Error= 'ERROR NO : '+ERROR_NUMBER() + ', LINE NO : '+ ERROR_LINE() + ', ERROR MESSAGE : '+ERROR_MESSAGE() PRINT @Error ROLLBACK TRAN END CATCH Here the only difference you will find is the position of BEGIN TRAN. According to me, my colleague's version should not work when an exception occurs, i.e. the ROLLBACK should not get executed because the TRAN doesn't have scope there. But when I tried to run both pieces of code, both worked in the same way. I am confused about how TRANSACTION scope works. Is it scope-free, or what?
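    For what it's worth, a sketch of a third variant that makes the scoping explicit: the transaction belongs to the session/connection, not to the TRY block, so the CATCH can test XACT_STATE() before rolling back. SCOPE_IDENTITY() is swapped in for @@IDENTITY as a side note, not something from the original question.

      BEGIN TRY
          BEGIN TRAN
          INSERT INTO Tags.tblTopic (Topic, TopicCode, Description)
          VALUES(@Topic, @TopicCode, @Description)
          INSERT INTO Tags.tblSubjectTopic (SubjectId, TopicId)
          VALUES(@SubjectId, SCOPE_IDENTITY())
          COMMIT TRAN
      END TRY
      BEGIN CATCH
          -- XACT_STATE() <> 0 means the session still has an open (possibly doomed) transaction,
          -- regardless of whether BEGIN TRAN sat inside or outside the TRY block.
          IF XACT_STATE() <> 0
              ROLLBACK TRAN
          PRINT 'ERROR NO : ' + CAST(ERROR_NUMBER() AS VARCHAR(10))
              + ', LINE NO : ' + CAST(ERROR_LINE() AS VARCHAR(10))
              + ', ERROR MESSAGE : ' + ERROR_MESSAGE()
      END CATCH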

    Read the article

  • Strange execution times in t-sql

    - by TonyP
    Hi All I have two stored procedures, the first one calls the second .. If I execute the second one alone it takes over 5 minutes to complete.. But when executed within the first one it takes little over 1 minute.. What is the reason ! Here is the first one ALTER procedure [dbo].[schRefreshPriceListItemGroups] as begin tran delete from PriceListItemGroups if @@error !=0 goto rolback Insert PriceListItemGroups(comno,t$cuno,t$cpls,t$cpgs,t$dsca,t$cpru) SELECT distinct c.comno,c.t$cuno, c.t$cpls,I.t$cpgs,g.t$dsca,g.t$cpru FROM TTCCOM010nnn C JOIN TTDSLS032nnn PL ON PL.comno = c.Comno and PL.t$cpls = c.t$cpls JOIN TTIITM001nnn I ON I.t$item = pl.t$item AND I.comno = pl.comNo JOIN TTCMCS024nnn G ON g.T$cprg = I.t$cpgs AND g.comno = I.Comno WHERE c.t$cpls !='' order by comno desc, t$cuno, t$cpgs if @@error !=0 goto rolback ----------------------------------------------------- Exec scrRefreshCustomersCatalogs ----------------------------------------------------- commit tran return rolback: Rollback tran And the second one Alter proc scrRefreshCustomersCatalogs as declare @baanIds table(id int identity(1,1),baanId varchar(12)) declare @baanId varchar(12),@i int, @n int Insert @baanIds(BaanId) select baanId from ftElBaanIds() SELECT @I=1,@n=max(id) from @baanIds select @i,@n Begin tran if @@error !=0 goto xRollBack WHILE @I <=@n Begin select @baanId=baanId from @baanIds where id=@i if @@error !=0 goto xRollBack Delete from customersCatalogs where comno+'-'+t$cuno=@baanId print Convert(varchar,@i)+' baanId='+@baanId Insert customersCatalogs exec customersCatalog @baanId if @@error !=0 goto xRollBack set @i=@i+1; end Commit Tran Update statistics customersCatalogs with fullscan Return xRollBack: Print '*****Rolling back*************' Rollback tran

    Read the article

  • ASP.NET, C#: timeout when trying to Transaction.Commit() to database; potential deadlock?

    - by user1843921
    I have a web page that has coding structured somewhat as follows: SqlConnection conX =new SqlConnection(blablabla); conX.Open(); SqlTransaction tran=conX.BeginTransaction(); try{ SqlCommand cmdInsert =new SqlCommand("INSERT INTO Table1(ColX,ColY) VALUES (@x,@y)",conX); cmdInsert.Transaction=tran; cmdInsert.ExecuteNonQuery(); SqlCommand cmdSelect=new SqlCommand("SELECT * FROM Table1",conX); cmdSelect.Transaction=tran; SqlDataReader dtr=cmdSelect.ExecuteReader(); //read stuff from dtr dtr.Close(); cmdInsert=new SqlCommand("UPDATE Table2 set ColA=@a",conX); cmdInsert.Transaction=tran; cmdInsert.ExecuteNonQuery(); //display MiscMessage tran.Commit(); //display SuccessMessage } catch(Exception x) { tran.Rollback(); //display x.Message } finally { conX.Close(); } So, everything seems to work until MiscMessage. Then, after a while (maybe 15-ish seconds?) x.Message pops up, saying: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." So, is something wrong with my tran.Commit()? The database is not updated, so I assume the tran.Rollback() works... I have read that deadlocks can cause timeouts... is this problem caused by my SELECT statement selecting from Table1, which is being used by the first INSERT statement? If so, what should I do? If that ain't the problem, what is?
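    While the page is hanging, a quick diagnostic sketch run from a separate query window can show whether another session holds the locks (the DMVs below are standard; nothing here is specific to this application):

      -- Requests currently waiting on another session
      SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, r.command
      FROM sys.dm_exec_requests AS r
      WHERE r.blocking_session_id <> 0;

      -- Idle sessions that still hold an open transaction (a common cause of stuck commits)
      SELECT s.session_id, s.status, s.last_request_end_time
      FROM sys.dm_exec_sessions AS s
      JOIN sys.dm_tran_session_transactions AS t ON t.session_id = s.session_id
      WHERE s.status = 'sleeping';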

    Read the article

  • Don't see job schedule added by sp_add_jobschedule in SQL Mgmt UI

    - by Ariel
    I'm running a script like the one below on a SQL Server box and, even though it finishes correctly, when I right-click the job in the SQL Management UI, open its properties, and go to Schedules, I cannot see the schedule that was just added... What am I missing? (I'm using the right job_name param, etc.) Thanks! BEGIN TRY BEGIN TRAN EXEC msdb.dbo.sp_add_jobschedule @job_name = 'Job name', @name=N'Job schedule name', @enabled = 0, @freq_type=1, @active_start_date=20100525, @active_start_time=60000 COMMIT TRAN END TRY BEGIN CATCH SELECT ERROR_Message(), ERROR_Line(); ROLLBACK TRAN END CATCH
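    One sanity check worth running (a sketch against the standard msdb catalog; substitute your real job name) is whether the schedule was actually attached to the job. If it shows up here, the SSMS dialog may simply need to be reopened or refreshed:

      SELECT j.name AS job_name, s.name AS schedule_name, s.enabled
      FROM msdb.dbo.sysjobs AS j
      JOIN msdb.dbo.sysjobschedules AS js ON js.job_id = j.job_id
      JOIN msdb.dbo.sysschedules AS s ON s.schedule_id = js.schedule_id
      WHERE j.name = 'Job name';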

    Read the article

  • Locking a table for getting MAX in LINQ

    - by Hossein Margani
    Hi everyone! I have a query in LINQ: I want to get the MAX of Code from my table, increase it, and insert a new record with the new Code, just like the IDENTITY feature of SQL Server, except that my Code column is char(5) and can be alphanumeric. My problem is that when inserting a new row, two concurrent processes get the same max and insert an equal Code. My command is: var maxCode = db.Customers.Select(c=>c.Code).Max(); var anotherCustomer = db.Customers.Where(...).SingleOrDefault(); anotherCustomer.Code = GenerateNextCode(maxCode); db.SubmitChanges(); I ran this command across 1000 threads, each updating 200 customers, and used a transaction with IsolationLevel.Serializable; after two or three executions an error occurred: using (var db = new DBModelDataContext()) { DbTransaction tran = null; try { db.Connection.Open(); tran = db.Connection.BeginTransaction(IsolationLevel.Serializable); db.Transaction = tran; . . . . tran.Commit(); } catch { tran.Rollback(); } finally { db.Connection.Close(); } } error: Transaction (Process ID 60) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. Other IsolationLevels generate this error: Row not found or changed. Please help me, thank you.
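    One option, sketched below in T-SQL rather than LINQ, is to serialize only the code-generation step instead of the whole workload; sp_getapplock is a real procedure, but the resource name and the "compute next code" step are placeholders for your own logic:

      BEGIN TRAN;

      -- Only one session at a time gets past this line for the named resource
      EXEC sp_getapplock @Resource = 'CustomerCodeGenerator',
                         @LockMode = 'Exclusive',
                         @LockTimeout = 5000;

      DECLARE @maxCode CHAR(5);
      SELECT @maxCode = MAX(Code) FROM dbo.Customers WITH (UPDLOCK, HOLDLOCK);

      -- ... compute the next code from @maxCode and update the customer row ...

      COMMIT TRAN;   -- the application lock (owned by the transaction) is released here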

    Read the article

  • Increasing deadlocks with NoLock

    - by Dave Ballantyne
    One of my personal pet issues is inappropriate use of the NOLOCK hint (and read uncommitted). Don't get me wrong, I have used it in exceptional circumstances, but as a general statement it is a bad thing. Mostly, when NOLOCK is used, the discussion is around a single statement, “it runs faster with nolock for XYZ reason”; however, IMO, this is quite a short-sighted view. What about the transaction? What about other concurrent users? What is good for one statement in isolation is not necessarily good for the system as a whole. I have seen on a number of occasions deadlocks happen when tasks that would have (and should have) been blocked continue to execute, only for a deadlock to occur at a later data-writing (INSERT, UPDATE, DELETE) statement. Writers will block writers regardless of isolation level. By way of a (fairly contrived) example, let's generate some dummy tables and populate them with some data drop table a go drop table b go Create Table a ( col1 integer ) go insert into a values(1) insert into a values(2) go Create Table b ( col1 integer ) go insert into b values(1) insert into b values(2) go Now make two connections. In connection one execute set transaction isolation level read committed BEGIN TRAN Select * from a Select * from b delete from a In connection two execute set transaction isolation level read committed BEGIN TRAN Select * from a Select * from b delete from b Right now the ‘select from a’ in connection two is being blocked by the ‘delete from a’ in connection one. This is, IMO, quite a healthy and natural thing to be happening; some see this as a ‘slow down’, a drop in performance. So, let's reach for our ‘NOLOCK’ magic pill. Cancel the blocked query and ROLLBACK both transactions, then in connection one execute set transaction isolation level read uncommitted BEGIN TRAN Select * from a Select * from b delete from b and then in connection two execute set transaction isolation level read uncommitted BEGIN TRAN Select * from a Select * from b delete from a We have now solved our performance problem, no more blocking. Let's finish the work required by the transaction; in connection one, execute delete from a Oh, ‘performance problem’ again, it's now being blocked. Still, let's complete the work in connection two…. delete from b DEADLOCK!! It is important to be clear about the role of the select statements. They do not participate within the deadlock, but they prevent code from executing that would have. Additionally, without the select readers to block, a deadlock would occur on the deletes even with READ COMMITTED. Naturally, other isolation levels will exhibit different behaviour as to where and when they will and won't block, and I would encourage you to read BOL and satisfy yourself that you really do NEED to NOLOCK.

    Read the article

  • SQL SERVER – Simple Example of Snapshot Isolation – Reduce the Blocking Transactions

    - by pinaldave
    To learn any technology and move to a more advanced level, it is very important to understand the fundamentals of the subject first. Today, we will be talking about something that was introduced quite a long time ago but is still not properly explored when it comes to isolation levels. Snapshot Isolation was introduced in SQL Server 2005. However, the reality is that there are still many software shops using SQL Server 2000, which therefore cannot use Snapshot Isolation. Many software shops have upgraded to a later version of SQL Server, but their developers have not spent enough time bringing themselves up to date with the newer technology. “It works!” is a very common answer when people are asked about utilizing the new features instead of backward-compatibility commands. In a recent consulting project, I had the same experience: the developers had “heard about it” but had no idea about snapshot isolation. They thought it was the same as Snapshot Replication – which is plainly wrong. The demo I am including here is the same one I created for them. In Snapshot Isolation, the updated row versions for each transaction are maintained in TempDB. Once a transaction has begun, it ignores all the newer rows inserted or updated in the table. Let us examine this simple demonstration. This isolation level works on an optimistic concurrency model: a reading transaction does not block a writing transaction, and a writing transaction does not block a reading transaction, which reduces blocking. First, enable the database to work with Snapshot Isolation. Additionally, check the existing values in the table HumanResources.Shift. ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON GO SELECT ModifiedDate FROM HumanResources.Shift GO Now, we will need two different sessions to prove this example. First session: set the transaction isolation level to snapshot and begin the transaction. Update the column “ModifiedDate” to today’s date. -- Session 1 SET TRANSACTION ISOLATION LEVEL SNAPSHOT BEGIN TRAN UPDATE HumanResources.Shift SET ModifiedDate = GETDATE() GO Please note that we have not yet committed the transaction. Now, open the second session, run the following “SELECT” statement, and check the values of the table. Note that we set the isolation level for the second session to Snapshot as well, and start its transaction with BEGIN TRAN. -- Session 2 SET TRANSACTION ISOLATION LEVEL SNAPSHOT BEGIN TRAN SELECT ModifiedDate FROM HumanResources.Shift GO You will notice that the values in the table are still the original values. They have not been modified yet. Once again, go back to session 1 and commit the transaction. -- Session 1 COMMIT After that, go back to Session 2 and see the values of the table. -- Session 2 SELECT ModifiedDate FROM HumanResources.Shift GO You will notice that the values are still not changed; they are the same old values that were there right at the beginning of the session. Now, let us commit the transaction in session 2. Once committed, run the same SELECT statement once more and see what the result is. -- Session 2 COMMIT SELECT ModifiedDate FROM HumanResources.Shift GO You will notice that it now reflects the new updated value. I hope this example is clear enough to give you a good idea of how the Snapshot Isolation level works. 
There is much more to write about an extra level, READ_COMMITTED_SNAPSHOT, which we will be discussing in another post soon. If you wish to use this transaction’s Isolation level in your production database, I would appreciate your comments about their performance on your servers. I have included here the complete script used in this example for your quick reference. ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON GO SELECT ModifiedDate FROM HumanResources.Shift GO -- Session 1 SET TRANSACTION ISOLATION LEVEL SNAPSHOT BEGIN TRAN UPDATE HumanResources.Shift SET ModifiedDate = GETDATE() GO -- Session 2 SET TRANSACTION ISOLATION LEVEL SNAPSHOT BEGIN TRAN SELECT ModifiedDate FROM HumanResources.Shift GO -- Session 1 COMMIT -- Session 2 SELECT ModifiedDate FROM HumanResources.Shift GO -- Session 2 COMMIT SELECT ModifiedDate FROM HumanResources.Shift GO Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Transaction Isolation
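    As a quick sanity check before running the two sessions (a small addition, not part of the original demo), you can confirm that the database really does allow snapshot isolation:

      SELECT name,
             snapshot_isolation_state_desc,
             is_read_committed_snapshot_on
      FROM sys.databases
      WHERE name = 'AdventureWorks';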

    Read the article

  • Xna GS 4 Animation Sample bone transforms not copying correctly

    - by annonymously
    I have a person model that is animated and a suitcase model that is not. The person can pick up the suitcase and it should move to the location of the hand bone of the person model. Unfortunately the suitcase doesn't follow the animation correctly. it moves with the hand's animation but its position is under the ground and way too far to the right. I haven't scaled any of the models myself. Thank you. The source code (forgive the rough prototype code): Matrix[] tran = new Matrix[man.model.Bones.Count];// The absolute transforms from the animation player man.model.CopyAbsoluteBoneTransformsTo(tran); Vector3 suitcasePos, suitcaseScale, tempSuitcasePos = new Vector3();// Place holders for the Matrix Decompose Quaternion suitcaseRot = new Quaternion(); // The transformation of the right hand bone is decomposed tran[man.model.Bones["HPF_RightHand"].Index].Decompose(out suitcaseScale, out suitcaseRot, out tempSuitcasePos); suitcasePos = new Vector3(); suitcasePos.X = tempSuitcasePos.Z;// The axes are inverted for some reason suitcasePos.Y = -tempSuitcasePos.Y; suitcasePos.Z = -tempSuitcasePos.X; suitcase.Position = man.Position + suitcasePos;// The actual Suitcase properties suitcase.Rotation = man.Rotation + new Vector3(suitcaseRot.X, suitcaseRot.Y, suitcaseRot.Z); I am also copying the bone transforms from the animation player in the Person class like so: // The transformations from the AnimationPlayer Matrix[] skinTrans = new Matrix[model.Bones.Count]; skinTrans = player.GetBoneTransforms(); // copy each transformation to its corresponding bone for (int i = 0; i < skinTrans.Length; i++) { model.Bones[i].Transform = skinTrans[i]; }

    Read the article

  • How to adjust wallpaper to make it fit the screen, for netbook using window 7 starter?

    - by Toan Tran
    I have a netbook with Windows 7 Starter. I was trying to change the wallpaper and found this application: John's Background Switcher, which says I can make a slide show of themes, I guess like setting the wallpaper in Windows 7 Ultimate. The default wallpaper was still fine until I downloaded that application. After trying it, an error occurred and I couldn't change the background, so I closed it; but then the default wallpaper changed its size, the central image dropped out of the screen, and I can see part of it in the bottom right-hand corner. I tried to move the image back to the center by right-clicking on the desktop - Graphic options - Panel fit - "Center image", but it didn't work. Can anybody help me adjust it back to the original?

    Read the article

  • Using old code on new version of Visual Studio [migrated]

    - by Tu Tran
    I have a project that was started in the 90s in C/C++. Therefore, it contains many old coding styles such as K&R-style function declarations, obsolete functions, ... The project works fine in Visual Studio 2008, but now I want to use it in a newer version of Visual Studio (specifically VS 2010) because we have other projects in Visual Studio 2010/2012, and I don't want to have too many versions of Visual Studio on my machine. When I try to compile the old project, Visual Studio throws too many errors. I can fix all of them, but I am scared to edit the source code, and I want other people to be able to open it in the old version of VS too; I want the project to remain backwards compatible with VS. My question is how to use the old code in Visual Studio 2010/2012 without changing the code. Or, if necessary, how do I fix just a few lines of code but make sure it won't cause an error if someone else opens that code in the older version of VS? Is there a way to tell newer Visual Studio versions to use older compiler flags or something like that?

    Read the article

  • Rewrite in Mediawiki, remove index.php, htaccess

    - by tran cuong
    I've just installed MediaWiki on Apache and I want the URLs to be localhost/Main_Page/ localhost/Special:Recent_Changes ... instead of localhost/index.php/Main_Page/ localhost/index.php/Special:Recent_Changes I've tried many times and in many ways, but it still doesn't work. Any suggestions for exactly what to do, step by step? The MediaWiki docs don't talk about .htaccess; they only cover nginx and lighttpd.

    Read the article

  • You don't have permission to access /index.php on this server

    - by Tran Dinh Thoai
    I made a 'login with OpenID' page and I get an error when the OpenID provider returns to my page: You don't have permission to access /index.php on this server. Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request. If I remove the parameters that are returned by the OpenID provider, the page runs fine. How can I fix this problem? The login page that causes the error is: http://bryox.com/login

    Read the article

  • Indexed view deadlocking

    - by Dave Ballantyne
    Deadlocks can be a really tricky thing to track down the root cause of. There are lots of articles on the subject of tracking down deadlocks, but seldom do I find that in a production system the cause is as straightforward. That being said, deadlocks are always caused by process A needing a resource that process B has locked while process B holds a resource that process A needs. There may be a longer chain of processes involved, but that is the basic premise. Here is one such (much simplified) scenario whose cause was at first non-obvious: The system has two tables, Product and Stock. The Product table holds the description and price of a product whilst Stock records the current stock level. USE tempdb GO CREATE TABLE Product ( ProductID INTEGER IDENTITY PRIMARY KEY, ProductName VARCHAR(255) NOT NULL, Price MONEY NOT NULL ) GO CREATE TABLE Stock ( ProductId INTEGER PRIMARY KEY, StockLevel INTEGER NOT NULL ) GO INSERT INTO Product SELECT TOP(1000) CAST(NEWID() AS VARCHAR(255)), ABS(CAST(CAST(NEWID() AS VARBINARY(255)) AS INTEGER))%100 FROM sys.columns a CROSS JOIN sys.columns b GO INSERT INTO Stock SELECT ProductID,ABS(CAST(CAST(NEWID() AS VARBINARY(255)) AS INTEGER))%100 FROM Product There is a single stored procedure, GetStock: Create Procedure GetStock as SELECT Product.ProductID,Product.ProductName FROM dbo.Product join dbo.Stock on Stock.ProductId = Product.ProductID where Stock.StockLevel <> 0 Analysis of the system showed that this procedure was causing a performance overhead and, as reads of this data were many times more frequent than writes, an indexed view was created to lower the overhead. CREATE VIEW vwActiveStock With schemabinding AS SELECT Product.ProductID,Product.ProductName FROM dbo.Product join dbo.Stock on Stock.ProductId = Product.ProductID where Stock.StockLevel <> 0 go CREATE UNIQUE CLUSTERED INDEX PKvwActiveStock on vwActiveStock(ProductID) This worked perfectly, performance was improved, the team was cheered to the rafters and beers all round. Then, after a while, something else happened… The system updating the data changed. The update pattern of both the Stock update and the Product update used to be: BEGIN TRAN UPDATE... COMMIT BEGIN TRAN UPDATE... COMMIT BEGIN TRAN UPDATE... COMMIT It changed to: BEGIN TRAN UPDATE... UPDATE... UPDATE... COMMIT Nothing that would raise an eyebrow in even the closest of code reviews. But after this change we saw deadlocks occurring. You can reproduce this by opening two sessions. In session 1 begin transaction Update Product set ProductName ='Test' where ProductID = 998 Then in session 2 begin transaction Update Stock set Stocklevel = 5 where ProductID = 999 Update Stock set Stocklevel = 5 where ProductID = 998 Hop back to session 1 and… Update Product set ProductName ='Test' where ProductID = 999 Looking at the deadlock graphs we could see the contention was between two processes, one updating Stock and the other updating Product, but we knew that all the processes do to the tables is update them. Period. There are separate processes that handle the update of stock and product and never the twain shall meet; there is no reason why one should require data from the other. Then it struck us: ah, the indexed view. Naturally, when you make an update to any table involved in an indexed view, the view has to be updated. When this happens, the data in all the underlying tables has to be read, so that explains our deadlocks. The data from Stock is read when you update Product and vice versa. 
    The fix, once you understand the problem fully, is pretty simple: the apps did not guarantee the order in which the data was updated. Luckily it was a relatively simple fix to make the updates happen in a consistent order, and the deadlocks went away. Note that there is still a *slight* risk of a deadlock occurring if both a stock update and a product update occur at *exactly* the same time.
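    To make the fix concrete, here is a minimal sketch of the idea (the values are only illustrative): every transaction that touches both tables agrees to update Product before Stock, so the indexed-view maintenance always acquires its locks in the same order.

      BEGIN TRAN;
          UPDATE dbo.Product SET ProductName = 'Test' WHERE ProductID = 998;
          UPDATE dbo.Stock   SET StockLevel  = 5      WHERE ProductID = 998;
      COMMIT TRAN;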

    Read the article

  • Return if remote stored procedure fails

    - by njk
    I am in the process of creating a stored procedure. This stored procedure runs local as well as remote stored procedures. For simplicity, I'll call the local server [LOCAL] and the remote server [REMOTE]. USE [LOCAL] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[monthlyRollUp] AS SET NOCOUNT, XACT_ABORT ON BEGIN TRY EXEC [REMOTE].[DB].[table].[sp] --This transaction should only begin if the remote procedure does not fail BEGIN TRAN EXEC [LOCAL].[DB].[table].[sp1] COMMIT BEGIN TRAN EXEC [LOCAL].[DB].[table].[sp2] COMMIT BEGIN TRAN EXEC [LOCAL].[DB].[table].[sp3] COMMIT BEGIN TRAN EXEC [LOCAL].[DB].[table].[sp4] COMMIT END TRY BEGIN CATCH -- Insert error into log table INSERT INTO [dbo].[log_table] (stamp, errorNumber, errorSeverity, errorState, errorProcedure, errorLine, errorMessage) SELECT GETDATE(), ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_STATE(), ERROR_PROCEDURE(), ERROR_LINE(), ERROR_MESSAGE() END CATCH GO When using a transaction on the remote procedure, it throws this error: OLE DB provider ... returned message "The partner transaction manager has disabled its support for remote/network transactions.". I get that I'm unable to run a transaction locally for a remote procedure. How can I ensure that this procedure will exit and roll back if any part of the procedure fails?
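    A sketch of one way to get that behavior, assuming the remote call simply cannot take part in the local transaction: run the remote procedure first, let any failure drop straight into CATCH, and wrap only the local steps in a single transaction. The four-part names below are shortened from the question, and the RETURN is illustrative.

      BEGIN TRY
          -- Remote work first; if it throws, we never start the local transaction
          EXEC [REMOTE].[DB].[dbo].[sp]

          -- One local transaction around all of the local steps
          BEGIN TRAN
              EXEC [LOCAL].[DB].[dbo].[sp1]
              EXEC [LOCAL].[DB].[dbo].[sp2]
              EXEC [LOCAL].[DB].[dbo].[sp3]
              EXEC [LOCAL].[DB].[dbo].[sp4]
          COMMIT TRAN
      END TRY
      BEGIN CATCH
          IF XACT_STATE() <> 0
              ROLLBACK TRAN
          INSERT INTO [dbo].[log_table] (stamp, errorNumber, errorSeverity, errorState, errorProcedure, errorLine, errorMessage)
          SELECT GETDATE(), ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_STATE(), ERROR_PROCEDURE(), ERROR_LINE(), ERROR_MESSAGE()
          RETURN   -- leave the procedure after logging
      END CATCH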

    Read the article

  • Regarding Unix Move Command

    - by user38993
    I need to write a Unix shell script tran.sh that moves the csv input files from the /exp/files folder to the /exp/ready directory. The csv input files are written to the /exp/files folder by an FTP server whose behavior I cannot trivially change. In the tran.sh script I need to ensure, before moving a csv input file out of the /exp/files directory, that no other process is still writing to the file. How can I do that?

    Read the article

  • SQl Server: serializable level not working

    - by Zé Carlos
    I have the following SP: CREATE PROCEDURE [dbo].[sp_LockReader] AS BEGIN SET NOCOUNT ON; begin try set transaction isolation level serializable begin tran select * from teste commit tran end try begin catch rollback tran set transaction isolation level READ COMMITTED end catch set transaction isolation level READ COMMITTED END The table "teste" has many rows, so "select * from teste" takes several seconds. I run sp_LockReader at the same time in two different query windows, and the second one starts showing the teste table contents before the first one terminates. Shouldn't the serializable level force the second query to wait? How do I get the described behaviour? Thanks
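    For context: SERIALIZABLE readers take shared locks, and shared locks are compatible with each other, so two sessions can scan teste at the same time. If the intent is really to let only one reader in at a time, a lock hint is one way to force it; a sketch follows (whether you actually want this behavior is a separate question):

      begin tran
          -- UPDLOCK takes update locks, which are incompatible with each other, so a second
          -- session running the same hinted select waits; HOLDLOCK keeps the locks until commit
          select * from teste with (updlock, holdlock)
      commit tran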

    Read the article

  • SQL SERVER – Stored Procedure and Transactions

    - by pinaldave
    I just overheard the following statement – “I do not use Transactions in SQL as I use Stored Procedures“. I just realized that there are so many misconceptions about this subject. Transactions have nothing to do with stored procedures. Let me demonstrate that with a simple example. USE tempdb GO -- Create 3 Test Tables CREATE TABLE TABLE1 (ID INT); CREATE TABLE TABLE2 (ID INT); CREATE TABLE TABLE3 (ID INT); GO -- Create SP CREATE PROCEDURE TestSP AS INSERT INTO TABLE1 (ID) VALUES (1) INSERT INTO TABLE2 (ID) VALUES ('a') INSERT INTO TABLE3 (ID) VALUES (3) GO -- Execute SP -- SP will error out EXEC TestSP GO -- Check the Values in Table SELECT * FROM TABLE1; SELECT * FROM TABLE2; SELECT * FROM TABLE3; GO Now, the main point is this: if a stored procedure were transactional, it would roll back the complete set of changes when it encounters any error. That does not happen in this case, which proves that a stored procedure does not, by itself, add transactional behavior to a batch of T-SQL. Let's see the result very quickly. It is very clear that there are entries in TABLE1 that are not matched by entries in the subsequent tables. If the SP were transactional in terms of T-SQL query batches, there would be no entries in any of the tables. If you want to use transactions with a stored procedure, wrap the code with BEGIN TRAN and COMMIT TRAN. The example is as follows. CREATE PROCEDURE TestSPTran AS BEGIN TRAN INSERT INTO TABLE1 (ID) VALUES (11) INSERT INTO TABLE2 (ID) VALUES ('b') INSERT INTO TABLE3 (ID) VALUES (33) COMMIT GO -- Execute SP EXEC TestSPTran GO -- Check the Values in Tables SELECT * FROM TABLE1; SELECT * FROM TABLE2; SELECT * FROM TABLE3; GO -- Clean up DROP TABLE Table1 DROP TABLE Table2 DROP TABLE Table3 GO In this case, there will be no new entries in any of the tables. What is your opinion about this blog post? Please leave your comments about it here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Stored Procedure, SQL Tips and Tricks, T SQL, Technology
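    Building on the example above, a commonly used wrapper (a sketch; the procedure name is new) combines the explicit transaction with SET XACT_ABORT ON and TRY/CATCH so that any failing statement rolls the whole unit of work back:

      CREATE PROCEDURE TestSPTran2
      AS
      SET XACT_ABORT ON;   -- any run-time error dooms the open transaction
      BEGIN TRY
          BEGIN TRAN
          INSERT INTO TABLE1 (ID) VALUES (11)
          INSERT INTO TABLE2 (ID) VALUES ('b')   -- fails: 'b' cannot convert to INT
          INSERT INTO TABLE3 (ID) VALUES (33)
          COMMIT TRAN
      END TRY
      BEGIN CATCH
          IF XACT_STATE() <> 0
              ROLLBACK TRAN
      END CATCH
      GO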

    Read the article

  • Rollback in Oracle and SQL Server

    - by CatherineRussell
    I have an Oracle background. It was interesting to see how rollback is handled in Oracle and SQL Server. There is no BEGIN TRAN in Oracle. What Oracle does is store the data in a temporary area called the rollback segments. Until you issue the commit command, the records are kept there, and you can even undo your update statement by issuing the rollback command. When you issue the commit command, the records in the rollback segments are written to the redo log files. The same logic applies to inserts, except that no mirror image of the record is kept. In SQL Server, if you want to be able to roll back a statement, you need to start your statement with a "begin tran". Then you can roll back the transaction if this is needed. begin tran update Person set FirstName = 'Arthur' where PersonId = 10 -- select firstname from Person rollback
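    If you want SQL Server to behave a little more like Oracle here, one option (a sketch; use it with care, because a forgotten COMMIT then holds locks open) is implicit transactions, which open a transaction automatically on the first modifying statement:

      SET IMPLICIT_TRANSACTIONS ON;

      update Person set FirstName = 'Arthur' where PersonId = 10;   -- a transaction is now open

      -- select firstname from Person
      ROLLBACK;   -- or COMMIT;

      SET IMPLICIT_TRANSACTIONS OFF;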

    Read the article

  • SQL Server 2008: FileStream Insertion Failure w/ .NET 3.5SP1

    - by James Alexander
    I've configured a db w/ a FileStream group and have a table w/ a File column on it. When attempting to insert a streamed file, after I create the table row, my query to read out the file path and the transaction context returns a null file path. I can't seem to figure out why, though. Here is the table creation script: /****** Object: Table [dbo].[JobInstanceFile] Script Date: 03/22/2010 18:05:36 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO SET ANSI_PADDING ON GO CREATE TABLE [dbo].[JobInstanceFile]( [JobInstanceFileId] [int] IDENTITY(1,1) NOT NULL, [JobInstanceId] [int] NOT NULL, [File] [varbinary](max) FILESTREAM NULL, [FileId] [uniqueidentifier] ROWGUIDCOL NOT NULL, [Created] [datetime] NOT NULL, CONSTRAINT [PK_JobInstanceFile] PRIMARY KEY CLUSTERED ( [JobInstanceFileId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] FILESTREAM_ON [JobInstanceFilesGroup], UNIQUE NONCLUSTERED ( [FileId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] FILESTREAM_ON [JobInstanceFilesGroup] GO SET ANSI_PADDING OFF GO ALTER TABLE [dbo].[JobInstanceFile] ADD DEFAULT (newid()) FOR [FileId] GO Here's the proc I call to create the row before streaming the file: /****** Object: StoredProcedure [dbo].[JobInstanceFileCreate] Script Date: 03/22/2010 18:06:23 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO create proc [dbo].[JobInstanceFileCreate] @JobInstanceId int, @Created datetime as insert into JobInstanceFile (JobInstanceId, FileId, Created) values (@JobInstanceId, newid(), @Created) select scope_identity() GO And lastly, here's the code I'm using: public int CreateJobInstanceFile(int jobInstanceId, string filePath) { using (var connection = new SqlConnection(ConfigurationManager.ConnectionStrings["ConsumerMarketingStoreFiles"].ConnectionString)) using (var fileStream = new FileStream(filePath, FileMode.Open)) { connection.Open(); var tran = connection.BeginTransaction(IsolationLevel.ReadCommitted); try { //create the JobInstanceFile instance var command = new SqlCommand("JobInstanceFileCreate", connection) { Transaction = tran }; command.CommandType = CommandType.StoredProcedure; command.Parameters.AddWithValue("@JobInstanceId", jobInstanceId); command.Parameters.AddWithValue("@Created", DateTime.Now); int jobInstanceFileId = Convert.ToInt32(command.ExecuteScalar()); //read out the filestream transaction context to stream the file for storage command.CommandText = "select [File].PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() from JobInstanceFile where JobInstanceFileId = @JobInstanceFileId"; command.CommandType = CommandType.Text; command.Parameters.AddWithValue("@JobInstanceFileId", jobInstanceFileId); using (SqlDataReader dr = command.ExecuteReader()) { dr.Read(); //get the file path we're writing out to string writePath = dr.GetString(0); using (var writeStream = new SqlFileStream(writePath, (byte[])dr.GetValue(1), FileAccess.ReadWrite)) { //copy from one stream to another byte[] bytes = new byte[65536]; int numBytes; while ((numBytes = fileStream.Read(bytes, 0, 65536)) > 0) writeStream.Write(bytes, 0, numBytes); } } tran.Commit(); return jobInstanceFileId; } catch (Exception e) { tran.Rollback(); throw e; } } } Can someone please let me know what I'm doing wrong?
    In the code, the following expression is returning null for the file path and shouldn't be: //get the file path we're writing out to string writePath = dr.GetString(0); The server is different than the computer the code is running on, but the necessary shares appear to be in order and I have also run the following: EXEC sp_configure filestream_access_level, 2 Any help would be greatly appreciated. Thanks!
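    One likely culprit, offered as a guess with a sketch: PathName() returns NULL when the FILESTREAM column itself is NULL, and the JobInstanceFileCreate proc above never sets [File]. Giving the column a zero-length value when the row is created makes PathName() return a real path to stream into:

      ALTER PROC [dbo].[JobInstanceFileCreate]
          @JobInstanceId int,
          @Created datetime
      AS
          -- 0x (an empty varbinary) creates an empty FILESTREAM file, so PathName() is no longer NULL
          INSERT INTO JobInstanceFile (JobInstanceId, FileId, [File], Created)
          VALUES (@JobInstanceId, NEWID(), 0x, @Created)

          SELECT SCOPE_IDENTITY()
      GO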

    Read the article

  • Where does a Server trigger save in SQL Server?

    - by Mostafa
    A couple of days ago, I was practicing and I wrote some triggers like these: create trigger trg_preventionDrop on all server for drop_database as print 'Can not Drop Database' rollback tran go create trigger trg_preventDeleteTable on database for drop_table as print 'you can not delete any table' rollback tran But the problem is I don't know where they are saved and how I can delete or edit them. Thank you
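    For reference, a short sketch of where to look: the first trigger is a server-scoped DDL trigger (in SSMS it lives under Server Objects > Triggers), the second is a database-scoped DDL trigger (under the database's Programmability > Database Triggers), and both can be listed or dropped directly:

      -- Server-level DDL triggers (e.g. trg_preventionDrop)
      SELECT name, is_disabled FROM sys.server_triggers;

      -- Database-level DDL triggers in the current database (e.g. trg_preventDeleteTable)
      SELECT name, is_disabled FROM sys.triggers WHERE parent_class_desc = 'DATABASE';

      -- To remove them
      DROP TRIGGER trg_preventionDrop ON ALL SERVER;
      DROP TRIGGER trg_preventDeleteTable ON DATABASE;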

    Read the article

  • Rollback SQL Server 2012 Sequence

    - by VAAA
    I have a SQL Server 2012 Sequence object: /****** Create Sequence Object ******/ CREATE SEQUENCE TestSeq START WITH 1 INCREMENT BY 1; I have an SP that runs some queries inside a transaction: BEGIN TRAN SELECT NEXT VALUE FOR dbo.TestSeq <here all the query update code......> ROLLBACK TRAN If the transaction fails, all the updates are rolled back without problem, but the sequence is not rolled back, I guess because it's out of the scope of the transaction. Any clue on a way to handle that? Thanks
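    This is by design: NEXT VALUE FOR is never rolled back, so sequences can leave gaps just like IDENTITY. A sketch of the usual workaround, assuming gaps are acceptable, is simply to fetch the value after the work that might fail; if a strictly gap-free number is required, a sequence is the wrong tool and a key table updated inside the transaction is the usual alternative.

      BEGIN TRAN;
          -- ... run the update code that might fail first ...

          -- only then consume the sequence value
          DECLARE @id BIGINT = NEXT VALUE FOR dbo.TestSeq;

          -- ... use @id ...
      COMMIT TRAN;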

    Read the article

1 2 3 4  | Next Page >