Search Results

Search found 1047 results on 42 pages for 'locking'.

Page 9/42 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • How to safely operate on parameters in threads, using C++ & Pthreads?

    - by ChrisCphDK
    Hi. I'm having some trouble with a program using pthreads, where occasional crashes occur that could be related to how the threads operate on data. So I have some basic questions about how to program using threads, and about memory layout: Assume that a public class function performs some operations on some strings, and returns the result as a string. The prototype of the function could be like this: std::string SomeClass::somefunc(const std::string &strOne, const std::string &strTwo) { //Error checking of strings has been omitted std::string result = strOne.substr(0,5) + strTwo.substr(0,5); return result; } Is it correct to assume that strings, being dynamic, are stored on the heap, but that a reference to the string is allocated on the stack at runtime? Stack: [Some mem addr] pointer address to where the string is on the heap. Heap: [Some mem addr] memory allocated for the initial string, which may grow or shrink. To make the function thread safe, the function is extended with the following mutex (which is declared as private in "SomeClass") locking: std::string SomeClass::somefunc(const std::string &strOne, const std::string &strTwo) { pthread_mutex_lock(&someclasslock); //Error checking of strings has been omitted std::string result = strOne.substr(0,5) + strTwo.substr(0,5); pthread_mutex_unlock(&someclasslock); return result; } Is this a safe way of locking down the operations being done on the strings (all three), or could a thread be stopped by the scheduler in the following cases, which I'd assume would mess up the intended logic: a. Right after the function is called, and the parameters strOne & strTwo have been set in the two reference pointers that the function has on the stack, the scheduler takes away processing time from the thread and lets a new thread in, which overwrites the reference pointers to the function, which then again gets stopped by the scheduler, letting the first thread back in? b. Can the same occur with the "result" string: the first thread builds the result, unlocks the mutex, but before returning the scheduler lets in another thread which performs all of its work, overwriting the result, etc.? Or are the reference parameters / result string pushed onto the stack while another thread is performing its task? Is the safe / correct way of doing this in threads, and "returning" a result, to pass a reference to a string that will be filled with the result instead: void SomeClass::somefunc(const std::string &strOne, const std::string &strTwo, std::string &result) { pthread_mutex_lock(&someclasslock); //Error checking of strings has been omitted result = strOne.substr(0,5) + strTwo.substr(0,5); pthread_mutex_unlock(&someclasslock); } The intended logic is that several objects of the "SomeClass" class create new threads, pass themselves as the parameter, and then call the function "somefunc": int SomeClass::startNewThread() { pthread_attr_t attr; pthread_t pThreadID; if(pthread_attr_init(&attr) != 0) return -1; if(pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED) != 0) return -2; if(pthread_create(&pThreadID, &attr, proxyThreadFunc, this) != 0) return -3; if(pthread_attr_destroy(&attr) != 0) return -4; return 0; } void* proxyThreadFunc(void* someClassObjPtr) { static_cast<SomeClass*>(someClassObjPtr)->somefunc("long string","long string"); return 0; } Sorry for the long description, but I hope the questions and intended purpose are clear; if not, let me know and I'll elaborate. Best regards, Chris
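
    To illustrate the point the question circles around: parameters, locals, and return values live on each thread's own stack, so another thread cannot overwrite them; only state shared between threads needs the mutex. Below is a minimal C# analogue (hedged: the question is C++/pthreads, but the per-thread stack model is the same, and all names here are hypothetical):

        using System;
        using System.Threading;

        class StackDemo
        {
            static readonly object Gate = new object();
            static int _sharedCalls;              // shared: must be guarded

            static string SomeFunc(string one, string two)
            {
                // one, two, and result live on THIS thread's stack;
                // no other thread can overwrite them.
                string result = one.Substring(0, 5) + two.Substring(0, 5);
                lock (Gate) { _sharedCalls++; }   // only the shared part is locked
                return result;
            }

            static void Main()
            {
                var threads = new Thread[4];
                for (int i = 0; i < threads.Length; i++)
                {
                    int n = i;
                    threads[i] = new Thread(() =>
                        Console.WriteLine(SomeFunc("long string " + n, "long string " + n)));
                    threads[i].Start();
                }
                foreach (var t in threads) t.Join();
                Console.WriteLine(_sharedCalls);  // always prints 4
            }
        }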

    Read the article

  • FileStream to save file then immediately unlock in C#?

    - by aron
    Hello, I have this code that saves a pdf file. FileStream fs = new FileStream(SaveLocation, FileMode.Create); fs.Write(result.DocumentBytes, 0, result.DocumentBytes.Length); fs.Flush(); fs.Close(); It works fine. However, sometimes it does not release the lock right away, and that causes file locking exceptions in functions that run after this one. Is there an ideal way to release the file lock right after fs.Close()? Thanks!
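
    A minimal sketch of the usual fix, reusing the SaveLocation and result.DocumentBytes names from the question: a using block guarantees the handle is flushed and released even if Write throws, and FileShare.None makes the stream's ownership explicit. If another process (antivirus, an indexer) grabs the file right after it is closed, a short retry loop around the next open is the common workaround.

        using (var fs = new FileStream(SaveLocation, FileMode.Create,
                                       FileAccess.Write, FileShare.None))
        {
            fs.Write(result.DocumentBytes, 0, result.DocumentBytes.Length);
        } // Dispose() flushes and closes the OS handle right here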

    Read the article

  • ZooKeeper and RabbitMQ/Qpid together - overkill or a good combination?

    - by Chris Sears
    Greetings, I'm evaluating some components for a multi-data center distributed system. We're going to be using message queues (via either RabbitMQ or Qpid) so agents can make asynchronous requests to other agents without worrying about addressing, routing, load balancing or retransmission. In many cases, the agents will be interacting with components that were not designed for highly concurrent access, so locking and cross-agent coordination will be needed to avoid race conditions. Also, we'd like the system to automatically respond to agent or data center failures. With the above use cases in mind, ZooKeeper seemed like it might be a good fit. But I'm wondering if trying to use both ZK and message queuing is overkill. It seems like what Zookeeper does could be accomplished by my own cluster manager using AMQP messaging, but that would be hard to get really right. On the other hand, I've seen some examples where ZooKeeper was used to implement message queuing, but I think RabbitMQ/Qpid are a more natural fit for that. Has anyone out there used a combination like this? Thanks in advance, -Chris

    Read the article

  • Is it safe to lock a static variable in a non-static class?

    - by Dario Solera
    I've got a class that manages a shared resource. Now, since access to the resource depends on many parameters, this class is instantiated and disposed several times during the normal execution of the program. The shared resource does not support concurrency, so some kind of locking is needed. The first thing that came to mind is having a static instance in the class, and acquiring locks on it, like this: // This thing is static! static readonly object MyLock = new object(); // This thing is NOT static! MyResource _resource = ...; public void DoSomeWork() { lock(MyLock) { _resource.Access(); } } Does that make sense, or would you use another approach?
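
    A sketch of that pattern in context (MyResource is the question's placeholder type and SharedResourceClient is a hypothetical name): because the lock object is static, every instance of the class, however short-lived, contends on the same monitor, which is exactly what a single non-concurrent shared resource needs.

        class SharedResourceClient
        {
            // One lock for the whole process: all instances serialize on it.
            private static readonly object MyLock = new object();

            // Per-instance handle to the shared resource (placeholder type).
            private readonly MyResource _resource = new MyResource();

            public void DoSomeWork()
            {
                lock (MyLock)     // instance lifetimes are irrelevant here;
                {                 // the static lock outlives them all
                    _resource.Access();
                }
            }
        }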

    Read the article

  • Are we asking too much of transactional memory?

    - by Carl Seleborg
    I've been reading up a lot about transactional memory lately. There is a bit of hype around TM, so a lot of people are enthusiastic about it, and it does provide solutions for painful problems with locking, but you regularly also see complaints: you can't do I/O; you have to write your atomic sections so they can run several times (be careful with your local variables!); software transactional memory offers poor performance; [insert your pet peeve here]. I understand these concerns: more often than not, you find articles about STMs that only run on some particular hardware that supports some really nifty atomic operation (like LL/SC), or that have to be supported by some imaginary compiler, or that require all accesses to memory to be transactional, or that introduce type constraints monad-style, etc. And above all: these are real problems. This has led me to ask myself: what speaks against local use of transactional memory as a replacement for locks? Would this already bring enough value, or must transactional memory be used all over the place if it is used at all?

    Read the article

  • Must all Concurrent Data Store (CDB) locks be explicitly released when closing a Berkeley DB?

    - by Steve Emmerson
    I have an application that comprises multiple processes, each accessing a single Berkeley DB Concurrent Data Store (CDB) database. Each process is single-threaded and does no explicit locking of the database. When each process terminates normally, it calls DB->close() and DB_ENV->close(). When all processes have terminated, there should be no locks on the database. Episodically, however, the database behaves as if some process were holding a write-lock on it even though all processes have terminated normally. Does each process need to explicitly release all locks before calling DB_ENV->close()? If so, how does the process obtain the "locker" parameter for the call to DB_ENV->lock_vec()?

    Read the article

  • How can two threads access a common array of buffers with minimal blocking? (C#)

    - by Jelly Amma
    Hello, I'm working on an image processing application where I have two threads on top of my main thread: 1 - CameraThread, which captures images from the webcam and writes them into a buffer; 2 - ImageProcessingThread, which takes the latest image from that buffer for filtering. The reason why this is multithreaded is that speed is critical, and I need CameraThread to keep grabbing pictures and making the latest capture ready to pick up by ImageProcessingThread while it's still processing the previous image. My problem is finding a fast and thread-safe way to access that common buffer, and I've figured that, ideally, it should be a triple buffer (image[3]) so that if ImageProcessingThread is slow, then CameraThread can keep on writing to the two other images and vice versa. What sort of locking mechanism would be the most appropriate for this to be thread-safe? I looked at the lock statement, but it seems like it would make a thread block while waiting for another one to finish, which would defeat the point of triple buffering. Thanks in advance for any idea or advice. J.
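
    A sketch of a lock-free triple buffer built on Interlocked.Exchange (the generic overload requires a reference type): the producer and the consumer each own a private buffer and trade it through a shared slot, so neither ever waits on the other. A real implementation would also carry a frame counter so the consumer can tell a fresh frame from one it has already processed; that bookkeeping is omitted here, and all names are hypothetical.

        using System.Threading;

        class TripleBuffer<T> where T : class
        {
            private T _latest;       // shared slot: most recently completed frame
            private T _writeBuffer;  // owned by the camera thread
            private T _readBuffer;   // owned by the processing thread

            public TripleBuffer(T a, T b, T c)
            {
                _writeBuffer = a;
                _latest = b;
                _readBuffer = c;
            }

            // Camera thread: fill the current write buffer, then call this to
            // publish it and receive a recycled buffer for the next frame.
            public T SwapWrite()
            {
                _writeBuffer = Interlocked.Exchange(ref _latest, _writeBuffer);
                return _writeBuffer;
            }

            // Processing thread: trade the frame just processed for the
            // newest one the camera has published.
            public T SwapRead()
            {
                _readBuffer = Interlocked.Exchange(ref _latest, _readBuffer);
                return _readBuffer;
            }
        }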

    Read the article

  • Hibernate versioning of a parent entity

    - by Priit
    Consider two entities, Parent and Child. Child is part of Parent's transient collection. Child has a ManyToOne mapping to Parent with FetchType.LAZY. Both are displayed on the same form to a user. When the user saves the data, we first update the Parent instance and then the Child collection (both using merge). Now comes the tricky part: when the user modifies only a Child property on the form, Hibernate's dirty checking does not update the Parent instance and thus does not increase the optimistic locking version number for that entity. I would like a situation where only Parent is versioned, and every time I call merge for Parent the version is always updated, even if no actual update is executed in the db.

    Read the article

  • Lock thread using something other than an object

    - by Scott Chamberlain
    When using a lock, does the thing you are locking on have to be an object? For example, is this legal: static DateTime NextCleanup = DateTime.Now; static readonly TimeSpan CleanupInterval = new TimeSpan(1, 0, 0); private static void DoCleanup() { lock ((object)NextCleanup) { if (NextCleanup < DateTime.Now) { NextCleanup = DateTime.Now.Add(CleanupInterval); System.Threading.ThreadPool.QueueUserWorkItem(new System.Threading.WaitCallback(cleanupThread)); } } return; } EDIT -- From reading SLaks' response I know the above code would not be valid, but would this be? static MyClass myClass = new MyClass(); private static void DoCleanup() { lock (myClass) { // } return; }
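
    For reference, a sketch of the conventional pattern (CleanupThread stands in for the question's callback): lock needs a reference type, and more importantly the same reference every time, while casting a DateTime (a value type) to object boxes it into a brand-new object on every lock statement, so no two threads would ever contend on it. Locking on myClass works, but a dedicated private lock object is safer because unrelated code cannot lock on it.

        private static readonly object CleanupLock = new object();
        private static DateTime _nextCleanup = DateTime.Now;
        private static readonly TimeSpan CleanupInterval = new TimeSpan(1, 0, 0);

        private static void DoCleanup()
        {
            lock (CleanupLock)   // same object for every caller, every time
            {
                if (_nextCleanup < DateTime.Now)
                {
                    _nextCleanup = DateTime.Now.Add(CleanupInterval);
                    System.Threading.ThreadPool.QueueUserWorkItem(CleanupThread);
                }
            }
        }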

    Read the article

  • Generating a set of files containing dumps of individual tables in a way that guarantees database consistency

    - by intuited
    I'd like to dump a MySQL database in such a way that a file is created for the definition of each table, and another file is created for the data in each table. I'd like this to be done in a way that guarantees database integrity by locking the entire database for the duration of the dump. What is the best way to do this? Similarly, what's the best way to lock the database while restoring a set of these dump files? Edit: I can't assume that MySQL will have permission to write to files.
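
    A sketch of one workable approach, with its assumptions flagged: mysqldump's --tab option would produce per-table files in a single pass but requires the FILE privilege the edit rules out, so instead a control connection (using the MySql.Data connector here) holds FLUSH TABLES WITH READ LOCK while one mysqldump process per table writes the schema and data files. DumpPerTable and Dump are hypothetical helpers, and mysqldump's authentication flags are omitted.

        using System.Collections.Generic;
        using System.Diagnostics;
        using System.IO;
        using MySql.Data.MySqlClient;

        static void DumpPerTable(string connStr, string db, IEnumerable<string> tables)
        {
            using (var conn = new MySqlConnection(connStr))
            {
                conn.Open();
                // Global read lock: the data cannot change between invocations.
                new MySqlCommand("FLUSH TABLES WITH READ LOCK", conn).ExecuteNonQuery();
                try
                {
                    foreach (var t in tables)
                    {
                        // --no-data: definition only; --no-create-info: data only.
                        Dump($"--skip-lock-tables --no-data {db} {t}", t + ".schema.sql");
                        Dump($"--skip-lock-tables --no-create-info {db} {t}", t + ".data.sql");
                    }
                }
                finally
                {
                    new MySqlCommand("UNLOCK TABLES", conn).ExecuteNonQuery();
                }
            }
        }

        static void Dump(string args, string outFile)
        {
            var psi = new ProcessStartInfo("mysqldump", args)
            {
                RedirectStandardOutput = true,
                UseShellExecute = false
            };
            using (var p = Process.Start(psi))
            {
                File.WriteAllText(outFile, p.StandardOutput.ReadToEnd());
                p.WaitForExit();
            }
        }

    For restores, the simplest equivalent guarantee is to stop writers (or take LOCK TABLES ... WRITE on the affected tables) while the files are replayed in a single session.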

    Read the article

  • Mixing synchronized() with ReentrantLock.lock()

    - by yarvin
    In Java, do ReentrantLock.lock() and ReentrantLock.unlock() use the same locking mechanism as synchronized()? My guess is "No," but I'm hoping to be wrong. Example: imagine that Thread 1 and Thread 2 both have access to: ReentrantLock lock = new ReentrantLock(); Thread 1 runs: synchronized (lock) { // blah } Thread 2 runs: lock.lock(); try { // blah } finally { lock.unlock(); } Assume Thread 1 reaches its part first, and Thread 2 arrives before Thread 1 is finished: will Thread 2 wait for Thread 1 to leave the synchronized() block, or will it go ahead and run?
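
    The question is about Java, but the same pitfall exists in .NET and is easy to demonstrate there; in this hedged C# analogue (all names hypothetical), taking an object's monitor and using the object's own locking API are two unrelated mechanisms, so the two threads never exclude each other: "t2 in" prints while t1 is still inside its lock.

        using System;
        using System.Threading;

        class MixedLocking
        {
            static readonly SemaphoreSlim Sem = new SemaphoreSlim(1, 1);

            static void Main()
            {
                var t1 = new Thread(() =>
                {
                    lock (Sem)                      // takes Sem's MONITOR
                    {
                        Console.WriteLine("t1 in");
                        Thread.Sleep(1000);
                        Console.WriteLine("t1 out");
                    }
                });
                var t2 = new Thread(() =>
                {
                    Sem.Wait();                     // takes Sem's internal count
                    try { Console.WriteLine("t2 in"); }  // runs while t1 holds the monitor
                    finally { Sem.Release(); }
                });
                t1.Start();
                Thread.Sleep(100);                  // let t1 grab the monitor first
                t2.Start();
                t1.Join(); t2.Join();
            }
        }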

    Read the article

  • MySQL SELECT FOR UPDATE - strange issue

    - by Michal Fronczyk
    Hi, I have a strange issue (at least for me :)) with MySQL's locking facility. I have a table: CREATE TABLE test ( id int(11) NOT NULL AUTO_INCREMENT, PRIMARY KEY (id) ) ENGINE=InnoDB AUTO_INCREMENT=13 DEFAULT CHARSET=latin1 With this data: +----+ | id | +----+ | 3 | | 4 | | 5 | | 6 | | 7 | | 8 | | 10 | | 11 | | 12 | +----+ Now I have 2 clients with these commands executed at the beginning: set autocommit=0; set session transaction isolation level serializable; begin; Now the most interesting part. The first client executes this query (making an intent to insert a row with id equal to 9): SELECT * from test where id = 9 FOR UPDATE; Empty set (0.00 sec) Then the second client does the same: SELECT * from test where id = 9 FOR UPDATE; Empty set (0.00 sec) My question is: why does the second client not block? An exclusive gap lock should have been set by the first query because FOR UPDATE was used, and the second client should block. If I am wrong, could somebody tell me how to do it correctly? The MySQL version I use is: 5.1.37-1ubuntu5.1 Thanks, Michal

    Read the article

  • Nature of locks on the child table during deletion (SQL Server)

    - by Mubashar Ahmad
    Dear devs, for a couple of days I have been thinking about the following scenario. Consider two tables with a parent-child relationship of the one-to-many kind. On removal of a parent row I have to delete the child rows that are related to the parent. Simple, right? I have to wrap the above operation in a transaction scope, which I can do as follows (it's pseudocode, but I am doing this in C# using an ODBC connection, and the database is SQL Server): begin transaction(read committed) Read all child where child.fk = p1 foreach(child) delete child where child.pk = cx delete parent where parent.pk = p1 commit trans OR begin transaction(read committed) delete all child where child.fk = p1 delete parent where parent.pk = p1 commit trans Now there are a couple of questions on my mind: Which of the above is better to use, especially considering a real-time system where thousands of other operations (select/update/delete/insert) are being performed within a span of seconds? Does it ensure that no new child with child.fk = p1 will be added until the transaction completes? If yes, how does it ensure that? Does it take table-level locks, or what? Is there any kind of index locking supported by SQL Server? If so, what does it do and how can it be used? A sketch of the second variant is shown below. Regards, Mubashar
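
    A minimal sketch of the second (set-based) variant over ODBC, with placeholder table and column names: one DELETE per table inside a single transaction, which means fewer round trips than the row-by-row loop. Note that read committed does not by itself prevent a new child with the same foreign key from being inserted mid-transaction; a foreign key constraint (which rejects children of a deleted parent) or serializable isolation with its key-range locks is what closes that window.

        using System.Data;
        using System.Data.Odbc;

        static void DeleteParent(OdbcConnection conn, int parentId)
        {
            using (OdbcTransaction tx = conn.BeginTransaction(IsolationLevel.ReadCommitted))
            {
                using (var delChildren = new OdbcCommand(
                    "DELETE FROM Child WHERE ParentId = ?", conn, tx))
                {
                    delChildren.Parameters.AddWithValue("@p1", parentId);
                    delChildren.ExecuteNonQuery();   // one set-based statement,
                }                                    // not one delete per row
                using (var delParent = new OdbcCommand(
                    "DELETE FROM Parent WHERE Id = ?", conn, tx))
                {
                    delParent.Parameters.AddWithValue("@p1", parentId);
                    delParent.ExecuteNonQuery();
                }
                tx.Commit();
            }
        }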

    Read the article

  • Java FileLock for Reading and Writing

    - by bobtheowl2
    I have a process that will be called rather frequently from cron to read a file that has certain move-related commands in it. My process needs to read and write to this data file - and keep it locked to prevent other processes from touching it during this time. A completely separate process can be executed by a user to (potentially) write/append to this same data file. I want these two processes to play nice and only access the file one at a time. The nio FileLock seemed to be what I needed (short of writing my own semaphore-type files), but I'm having trouble locking it for reading. I can lock and write just fine, but when attempting to create a lock when reading I get a NonWritableChannelException. Is it even possible to lock a file for reading? It seems like a RandomAccessFile is closer to what I need, but I don't see how to implement that. Here is the code that fails: FileInputStream fin = new FileInputStream(f); FileLock fl = fin.getChannel().tryLock(); if(fl != null) { System.out.println("Locked File"); BufferedReader in = new BufferedReader(new InputStreamReader(fin)); System.out.println(in.readLine()); ... The exception is thrown on the FileLock line. java.nio.channels.NonWritableChannelException at sun.nio.ch.FileChannelImpl.tryLock(Unknown Source) at java.nio.channels.FileChannel.tryLock(Unknown Source) at Mover.run(Mover.java:74) at java.lang.Thread.run(Unknown Source) Looking at the JavaDocs, it says: "Unchecked exception thrown when an attempt is made to write to a channel that was not originally opened for writing." But I don't necessarily need to write to it. When I try creating a FileOutputStream, etc. for writing purposes, it is happy until I try to open a FileInputStream on the same file.

    Read the article

  • SQL Server 2008: trigger running after insert/update locks the original table

    - by Polity
    Hi folks, I have a serious performance problem. I have a database with (related to this problem) 2 tables. One table contains strings with some global information. The second table contains the strings stripped down to each individual word; so each string is, in effect, indexed in the second table, word by word. The validity of the data in the second table is less important than the validity of the data in the first table. Since the first table can grow towards 1*10^6 records, and the second table, having an average of 10 words per string, can grow towards 1*10^7 records, I use NOLOCK to read the second table; this leaves me free to insert new records without locking it (expect many reads on both tables). I have a script which keeps adding and updating rows in the first table via a MERGE statement. On average, about 20 strings at a time are merged, and the script runs about once every 5 seconds. On the first table, I have a trigger which is invoked on insert or update; it takes the newly inserted or updated data and calls a stored procedure on it which makes sure the data is indexed in the second table. (This takes some significant time.) The problem is that with the trigger disabled, reading the first table takes a few ms. However, with the trigger enabled, if you are unlucky enough to read the first table while it is being updated, our webserver gives you a timeout after 10 seconds (which is way too long anyway). I can guess from this that while the trigger runs, the first table is kept (at least partially) locked until the trigger completes. What do you think? If I'm right, is there an easy way around this? Thanks in advance! Cheers, Koen

    Read the article

  • NHibernate flush should save only dirty objects

    - by Emilian
    Why does NHibernate fire an update on firstOrder when saving secondOrder in the code below? I'm using optimistic locking on Order. Is there a way to tell NHibernate to update firstOrder when saving secondOrder only if firstOrder was modified? // Configure var cfg = new Configuration(); var configFile = Path.Combine( AppDomain.CurrentDomain.BaseDirectory, "NHibernate.MySQL.config"); cfg.Configure(configFile); // Create session factory var sessionFactory = cfg.BuildSessionFactory(); // Create session var session = sessionFactory.OpenSession(); // Set session to flush on transaction commit session.FlushMode = FlushMode.Commit; // Create first order var firstOrder = new Order(); var firstOrder_OrderLine = new OrderLine { ProductName = "Bicycle", ProductPrice = 120.00M, Quantity = 1 }; firstOrder.Add(firstOrder_OrderLine); // Save first order using (var tx = session.BeginTransaction()) { try { session.Save(firstOrder); tx.Commit(); } catch { tx.Rollback(); } } // Create second order var secondOrder = new Order(); var secondOrder_OrderLine = new OrderLine { ProductName = "Hat", ProductPrice = 12.00M, Quantity = 1 }; secondOrder.Add(secondOrder_OrderLine); // Save second order using (var tx = session.BeginTransaction()) { try { session.Save(secondOrder); tx.Commit(); } catch { tx.Rollback(); } } session.Close(); sessionFactory.Close();

    Read the article

  • SQL Server lock/hang issue

    - by mattwoberts
    Hi, I'm using SQL Server 2008 on Windows Server 2008 R2, all sp'd up. I'm getting occasional issues with SQL Server hanging with the CPU usage on 100% on our live server. It seems all the wait time on SQL Server when this happens is given to SOS_SCHEDULER_YIELD. Here is the stored proc that causes the hang. I've added the "WITH (NOLOCK)" in an attempt to fix what seems to be a locking issue. ALTER PROCEDURE [dbo].[MostPopularRead] AS BEGIN SET NOCOUNT ON; SELECT c.ForeignId , ct.ContentSource as ContentSource , sum(ch.HitCount * hw.Weight) as Popularity , (sum(ch.HitCount * hw.Weight) * 100) / @Total as Percent , @Total as TotalHits from ContentHit ch WITH (NOLOCK) join [Content] c WITH (NOLOCK) on ch.ContentId = c.ContentId join HitWeight hw WITH (NOLOCK) on ch.HitWeightId = hw.HitWeightId join ContentType ct WITH (NOLOCK) on c.ContentTypeId = ct.ContentTypeId where ch.CreatedDate between @Then and @Now group by c.ForeignId , ct.ContentSource order by sum(ch.HitCount * hw.HitWeightMultiplier) desc END The stored proc reads from the table "ContentHit", which is a table that tracks when content on the site is clicked (it gets hit quite frequently - anything from 4 to 20 hits a minute). So it's pretty clear that this table is the source of the problem. There is a stored proc that is called to add hit tracks to the ContentHit table; it's pretty trivial, it just builds up a string from the params passed in, which involves a few selects from some lookup tables, followed by the main insert: BEGIN TRAN insert into [ContentHit] (ContentId, HitCount, HitWeightId, ContentHitComment) values (@ContentId, isnull(@HitCount,1), isnull(@HitWeightId,1), @ContentHitComment) COMMIT TRAN The ContentHit table has a clustered index on its ID column, and I've added another index on CreatedDate since that is used in the select. When I profile the issue, I see the stored proc executes for exactly 30 seconds, then the SQL timeout exception occurs. If it makes a difference, the web application using it is ASP.NET, and I'm using Subsonic (3) to execute these stored procs. Can someone please advise how best I can solve this problem? I don't care about reading dirty data... Thanks

    Read the article

  • Can I add a condition to CakePHP's update statement?

    - by Don Kirkby
    Since there doesn't seem to be any support for optimistic locking in CakePHP, I'm taking a stab at building a behaviour that implements it. After a little research into behaviours, I think I could run a query in the beforeSave event to check that the version field hasn't changed. However, I'd rather implement the check by changing the update statement's WHERE clause from WHERE id = ? to WHERE id = ? and version = ? This way I don't have to worry about other requests changing the database record between the time I read the version and the time I execute the update. It also means I can do one database call instead of two. I can see that the DboSource.update() method supports conditions, but Model.save() never passes any conditions to it. It seems like I have a couple of options: Do the check in beforeSave() and live with the fact that it's not bulletproof. Hack my local copy of CakePHP to check for a conditions key in the options array of Model.save() and pass it along to the DboSource.update() method. Right now, I'm leaning in favour of the second option, but that means I can't share my behaviour with other users unless they apply my hack to their framework. Have I missed an easier option?

    Read the article

  • Rails running multiple delayed_job - lock tables

    - by pepernik
    Hey. I use delayed_job for background processing. I have an 8-CPU server, MySQL, and I start 7 delayed_job processes: RAILS_ENV=production script/delayed_job -n 7 start Q1: Is it possible that 2 or more delayed_job processes start processing the same job (the same row in the delayed_jobs table)? I checked the code of the delayed_job plugin but cannot find the lock directive where I would expect it. I think each process should lock the table before executing an UPDATE on the locked_by column. They lock the record simply by updating the locked_by field (UPDATE delayed_jobs SET locked_by...). Is that really enough? No locking needed? Why? I know that UPDATE has higher priority than SELECT, but I think this does not have the effect in this case. My understanding of the multi-threaded situation is: Process1: Get waiting job X. [OK] Process2: Get waiting job X. [OK] Process1: Update locked_by field. [OK] Process2: Update locked_by field. [OK] Process1: Get waiting job X. [Already processed] Process2: Get waiting job X. [Already processed] I think in some cases more than one worker can pick up the same job and start processing it. Q2: Are 7 delayed_job processes a good number for an 8-CPU server? Why or why not? Thx 10x!

    Read the article

  • Correct way to generate order numbers in SQL Server

    - by Anton Gogolev
    This question certainly applies to a much broader scope, but here it is. I have a basic ecommerce app where users can, naturally enough, place orders. Said orders need to have a unique number, which I'm trying to generate right now. Each order is vendor-specific. Basically, I have an OrderNumberInfo (VendorID, OrderNumber) table. Now whenever a customer places an order I need to increment OrderNumber for a particular Vendor and return that value. Naturally, I don't want other processes to interfere with me, so I need to exclusively lock this row somehow: begin transaction declare @n int select @n = OrderNumber from OrderNumberInfo where VendorID = @vendorID update OrderNumberInfo set OrderNumber = @n + 1 where OrderNumber = @n and VendorID = @vendorID commit transaction Now, I've read about select ... with (updlock rowlock), pessimistic locking, etc., but just cannot fit all this into a coherent picture: How do these hints play with SQL Server 2008's snapshot isolation? Do they perform row-level, page-level or even table-level locks? How does this tolerate multiple users trying to generate numbers for a single Vendor? What isolation levels are appropriate here? And generally, what is the way to do such things?
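
    One way to sidestep the SELECT-then-UPDATE race entirely, sketched here from C# against the question's schema (names and the helper below are otherwise hypothetical): fold the read and the increment into a single UPDATE with an OUTPUT clause (available since SQL Server 2005), so the row is exclusively locked for exactly the duration of one atomic statement and no lock hints are required.

        using System.Data.SqlClient;

        static int NextOrderNumber(SqlConnection conn, int vendorId)
        {
            const string sql =
                @"UPDATE OrderNumberInfo
                     SET OrderNumber = OrderNumber + 1
                  OUTPUT inserted.OrderNumber
                   WHERE VendorID = @vendorID";
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@vendorID", vendorId);
                // Single statement: the read and the increment happen
                // under one exclusive row lock.
                return (int)cmd.ExecuteScalar();
            }
        }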

    Read the article

  • MFC/CCriticalSection: Simple lock situation hangs

    - by raph.amiard
    I have to write a simple threaded program with MFC/C++ for a uni assignment. I have a simple scenario in which I have a worker thread that executes a function along the lines of: UINT createSchedules(LPVOID param) { genProgThreadVal* v = (genProgThreadVal*) param; // v->searcherLock is of type CCriticalSection* while(1) { if(v->searcherLock->Lock()) { //do the stuff, access shared object, exit clause etc.. v->searcherLock->Unlock(); } } PostMessage(v->hwnd, WM_USER_THREAD_FINISHED, 0,0); delete v; return 0; } In my main UI class, I have a CListCtrl that I want to be able to access the shared object (of type std::list). Hence the locking stuff. So this list control has a handler function looking like this: void Ccreationprogramme::OnLvnItemchangedList5(NMHDR *pNMHDR, LRESULT *pResult) { LPNMLISTVIEW pNMLV = reinterpret_cast<LPNMLISTVIEW>(pNMHDR); if((pNMLV->uChanged & LVIF_STATE) && (pNMLV->uNewState & LVNI_SELECTED)) { searcherLock.Lock(); // do the stuff on shared object searcherLock.Unlock(); // do some more stuff } *pResult = 0; } The searcherLock in both functions is the same object; the worker thread function is passed a pointer to the CCriticalSection object, which is a member of my dialog class. Everything works, but as soon as I click on my list, and so trigger the handler function, the whole program hangs indefinitely. I tried using a CMutex. I tried using a CSingleLock wrapping over the critical section object, and none of this has worked. What am I missing?

    Read the article

  • C#, Can I check on a lock without trying to acquire it?

    - by Biff MaGriff
    Hello, I have a lock in my C# web app that prevents users from running the update script once it has started. I was thinking I would put a notification in my master page to let the user know that the data isn't all there yet. Currently I do my locking like so: protected void butRefreshData_Click(object sender, EventArgs e) { Thread t = new Thread(new ParameterizedThreadStart(UpdateDatabase)); t.Start(this); //sleep for a bit to ensure that javascript has a chance to get rendered Thread.Sleep(100); } public static void UpdateDatabase(object con) { if (Monitor.TryEnter(myLock)) { Updater.RepopulateDatabase(); Monitor.Exit(myLock); } else { Common.RegisterStartupScript(con, AlreadyLockedJavaScript); } } And I do not want to do if(Monitor.TryEnter(myLock)) Monitor.Exit(myLock); else //show processing label as I imagine there is a slight possibility that it might display the notification when the update isn't actually running. Is there an alternative I can use? Edit: Hi everyone, thanks a lot for your suggestions! Unfortunately I couldn't quite get them to work... However, I combined the ideas from 2 answers and came up with my own solution.
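
    A sketch of one alternative: publish the running state in a separate flag flipped with interlocked operations, so any page can query it cheaply and race-free without ever touching the lock. Updater, Common, and AlreadyLockedJavaScript are the names from the question; the Volatile class assumes .NET 4.5 (on older frameworks a volatile field serves the same purpose).

        using System.Threading;

        static class UpdateStatus
        {
            private static int _running;   // 0 = idle, 1 = update in progress

            // Cheap, race-free status check for the master page.
            public static bool IsRunning
            {
                get { return Volatile.Read(ref _running) == 1; }
            }

            public static void UpdateDatabase(object con)
            {
                // Only the first caller flips 0 -> 1; everyone else bails out.
                if (Interlocked.CompareExchange(ref _running, 1, 0) != 0)
                {
                    Common.RegisterStartupScript(con, AlreadyLockedJavaScript);
                    return;
                }
                try { Updater.RepopulateDatabase(); }
                finally { Volatile.Write(ref _running, 0); }
            }
        }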

    Read the article

  • How to efficiently use LOCK_ESCALATION in MS SQL 2008

    - by Avias
    I'm currently having trouble with frequent deadlocks on a specific user table in MS SQL 2008. Here are some facts about this particular table: it has a large number of rows (1 to 2 million); all the indexes used on this table have only "use row lock" ticked in their options; rows are frequently updated by multiple transactions but are unique (e.g. probably a thousand or more update statements are executed against different unique rows every hour); the table does not use partitions. Upon checking the table in sys.tables, I found that lock_escalation is set to TABLE. I'm very tempted to set lock_escalation for this table to DISABLE, but I'm not really sure what side effects this would incur. From what I understand, using DISABLE will minimize escalating locks to TABLE level, which, combined with the row lock settings of the indexes, should theoretically minimize the deadlocks I am encountering. From what I have read in "Determining threshold for lock escalation", it seems that locking automatically escalates when a single transaction fetches 5000 rows. What does a single transaction mean in this sense? A single session/connection getting 5000 rows through individual update/select statements? Or is it a single SQL update/select statement that fetches 5000 or more rows? Any insight is appreciated. BTW, n00b DBA here. Thanks

    Read the article

  • NHibernate mapping with optimistic-lock="version" and dynamic-update="true" is generating invalid update SQL

    - by SteveBering
    I have an entity "Group" with an assigned ID which is added to an aggregate in order to persist it. This causes an issue because NHibernate can't tell if it is new or existing. To remedy this issue, I changed the mapping to make the Group entity use optimistic locking on a sql timestamp version column. This caused a new issue. Group has a bag of sub objects. So when NHibernate flushes a new group to the database, it first creates the Group record in the Groups table, then inserts each of the sub objects, then does an update of the Group records to update the timestamp value. However, the sql that is generated to complete the update is invalid when the mapping is both dynamic-update="true" and optimistic-lock="version". Here is the mapping: <class xmlns="urn:nhibernate-mapping-2.2" dynamic-update="true" mutable="true" optimistic-lock="version" name="Group" table="Groups"> <id name="GroupNumber" type="System.String, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <column name="GroupNumber" length="5" /> <generator class="assigned" /> </id> <version generated="always" name="Timestamp" type="BinaryBlob" unsaved-value="null"> <column name="TS" not-null="false" sql-type="timestamp" /> </version> <property name="UID" update="false" type="System.Guid, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <column name="GroupUID" unique="true" /> </property> <property name="Description" type="AnsiString"> <column name="GroupDescription" length="25" not-null="true" /> </property> <bag access="field.camelcase-underscore" cascade="all" inverse="true" lazy="true" name="Assignments" mutable="true" order-by="GroupAssignAssignment"> <key foreign-key="fk_Group_Assignments"> <column name="GroupNumber" /> </key> <one-to-many class="Assignment" /> </bag> <many-to-one class="Aggregate" name="Aggregate"> <column name="GroupParentID" not-null="true" /> </many-to-one> </class> </hibernate-mapping> When the mapping includes both the dynamic update and the optimistic lock, the sql generated is: UPDATE groups SET WHERE GroupNumber = 11111 AND TS=0x00000007877 This is obviously invalid as there are no SET statements. If I remove the dynamic update part, everything gets updated during this update statement instead. This makes the statement valid, but rather unnecessary. Has anyone seen this issue before? Am I missing something? Thanks, Steve

    Read the article

  • How to salvage SQL server 2008 query from KILLED/ROLLBACK state?

    - by littlegreen
    I have a stored procedure that inserts batches of millions of rows, emerging from a certain query, into an SQL database. It has one parameter selecting the batch; when this parameter is omitted, it will gather a list of batches and recursively call itself, in order to iterate over batches. In (pseudo-)code, it looks something like this: CREATE PROCEDURE spProcedure AS BEGIN IF @code = 0 BEGIN ... WHILE @@Fetch_Status=0 BEGIN EXEC spProcedure @code FETCH NEXT ... INTO @code END END ELSE BEGIN -- Disable indexes ... INSERT INTO table SELECT (...) -- Enable indexes ... Now it can happen that this procedure is slow, for whatever reason: it can't get a lock, one of the indexes it uses is misdefined or disabled. In that case, I want to be able to kill the procedure, truncate and recreate the resulting table, and try again. However, when I try to kill the procedure, the process frequently oozes into a KILLED/ROLLBACK state from which there seems to be no return. From Google I have learned to do an sp_lock, find the spid, and then kill it with KILL <spid>. But when I try to kill it, it tells me SPID 75: transaction rollback in progress. Estimated rollback completion: 0%. Estimated time remaining: 554 seconds. I did find a forum message hinting that another spid should be killed before the other one can start a rollback. But that didn't work for me either, plus I do not understand why that would be the case... could it be because I am recursively calling my own stored procedure? (But it should have the same spid, right?) In any case, my process is just sitting there, being dead, not responding to kills, and locking the table. This is very frustrating, as I want to go on developing my queries, not waiting hours on my server sitting dead while pretending to be finishing a supposed rollback. Is there some way in which I can tell the server not to store any rollback information for my query? Or not to allow any other queries to interfere with the rollback, so that it will not take so long? Or how to rewrite my query in a better way, or how to kill the process successfully without restarting the server?

    Read the article
