Search Results

Search found 1047 results on 42 pages for 'locking'.

Page 2/42 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • SQLite file locking and DropBox

    - by Alex Jenter
    I'm developing an app in Visual C++ that uses an SQLite3 DB for storing data; it sits in the tray most of the time. I would also like to be able to put my app in a DropBox folder to share it across several PCs. This worked really well until DropBox recently updated itself; now it says that it "can't sync the file in use". The SQLite file is open in my app, but the lock is shared. There are some prepared statements, but all are reset immediately after stepping. Is there any way to enable synchronizing of an open SQLite database file? Thanks! Here is the simple wrapper that I use just for testing (no error handling), in case this helps:

        class Statement
        {
        private:
            Statement(sqlite3* db, const std::wstring& sql) : db(db)
            {
                sqlite3_prepare16_v2(db, sql.c_str(), sql.length() * sizeof(wchar_t), &stmt, NULL);
            }
        public:
            ~Statement() { sqlite3_finalize(stmt); }

            void reset() { sqlite3_reset(stmt); }
            int step() { return sqlite3_step(stmt); }
            int getInt(int i) const { return sqlite3_column_int(stmt, i); }
            tstring getText(int i) const
            {
                const wchar_t* v = (const wchar_t*)sqlite3_column_text16(stmt, i);
                int sz = sqlite3_column_bytes16(stmt, i) / sizeof(wchar_t);
                return std::wstring(v, v + sz);
            }
        private:
            friend class Database;
            sqlite3* db;
            sqlite3_stmt* stmt;
        };

        class Database
        {
        public:
            Database(const std::wstring& filename = L"") : db(NULL)
            {
                sqlite3_open16(filename.c_str(), &db);
            }
            ~Database() { sqlite3_close(db); }

            void exec(const std::wstring& sql)
            {
                auto_ptr<Statement> st(prepare(sql));
                st->step();
            }
            auto_ptr<Statement> prepare(const tstring& sql) const
            {
                return auto_ptr<Statement>(new Statement(db, sql));
            }
        private:
            sqlite3* db;
        };

    Read the article

  • Column locking in innodb?

    - by ming yeow
    I know this sounds weird, but apparently one of my columns is locked.

        select * from table where type_id = 1 and updated_at < '2010-03-14' limit 1;
        select * from table where type_id = 3 and updated_at < '2010-03-14' limit 10;

    The first query would not finish even after a few hours, while the second one completes smoothly. The only difference between the two queries is the type_id. A bit of background: the first statement got stuck on an earlier run and I had to kill it manually. Thanks in advance for your help - I have an urgent data job to finish, and this problem is driving me crazy.
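
    When a query hangs like this, two standard first diagnostics (a sketch; both are stock MySQL commands) usually show whether the query is blocked on a lock rather than simply slow:

        SHOW FULL PROCESSLIST;       -- is the query waiting on a lock, or actually executing?
        SHOW ENGINE INNODB STATUS\G  -- lists current lock waits and which transaction holds them

    If the manually killed statement left its transaction open, InnoDB may still be holding that transaction's row locks until the connection is closed or rolled back.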

    Read the article

  • Locking on an object...

    - by Mystere Man
    I often see code like that shown here, i.e. where an object is allocated and then used as a "lock object". It seems to me that you could use any object for this, including the event itself as the lock object. Why allocate a new object that does nothing? My understanding is that calling lock() on an object doesn't actually alter the object itself, nor does it actually lock it from being used; it's simply used as a placeholder for multiple lock statements to anchor on. So my question is: is this really a good thing to do?
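
    For reference, a minimal C# sketch of the pattern being asked about (the class and member names are illustrative):

        public class Counter
        {
            // A dedicated, private object whose only purpose is to be the lock token.
            private readonly object _sync = new object();
            private int _count;

            public void Increment()
            {
                lock (_sync)    // every thread takes the same token, so increments serialize
                {
                    _count++;
                }
            }
        }

    The usual argument for the dedicated private object is that nothing outside the class can lock on it, whereas locking on this, a public event, or any other externally visible reference lets unrelated code take the same lock and risk deadlocks.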

    Read the article

  • Locking behaviour is different via network shares

    - by MattH
    I have been trying to lock a file so that other cloned services cannot access the file. I then read the file, and move it when finished. The move is allowed by using FileShare.Delete. However, in later testing we found that this approach does not work if we are looking at a network share. I appreciate my approach may not have been the best, but my specific question is: why does the demo below work against the local file, but not against the network file? The more specific you can be the better, as I've found very little information in my searches that indicates network shares behave differently from local disks.

        string sourceFile = @"C:\TestFile.txt";
        string localPath = @"C:\MyLocalFolder\TestFile.txt";
        string networkPath = @"\\MyMachine\MyNetworkFolder\TestFile.txt";

        File.WriteAllText(sourceFile, "Test data");
        if (!File.Exists(localPath))
            File.Copy(sourceFile, localPath);

        foreach (string path in new string[] { localPath, networkPath })
        {
            using (FileStream fsLock = File.Open(path, FileMode.Open,
                FileAccess.ReadWrite, (FileShare.Read | FileShare.Delete)))
            {
                string target = path + ".out";
                File.Move(path, target); // This is the point of failure when working with networkPath
                if (File.Exists(target))
                    File.Delete(target);
            }
            if (!File.Exists(path))
                File.Copy(sourceFile, path);
        }

    Read the article

  • SQLserver multithreaded locking with TABLOCKX

    - by WilfriedVS
    I have a table "tbluser" with 2 fields:

        userid = integer (autoincrement)
        user   = nvarchar(100)

    I have a multithreaded/multi-server application that uses this table. I want to accomplish the following: guarantee that the field user is unique in my table, and guarantee that the combination userid/user is unique in each server's memory. I have the following stored procedure:

        CREATE PROCEDURE uniqueuser @user nvarchar(100)
        AS
        BEGIN
            BEGIN TRAN
            DECLARE @userID int
            SET NOCOUNT ON
            SET @userID = (SELECT userID FROM tbluser WITH (TABLOCKX) WHERE [user] = @user)
            IF @userID IS NOT NULL
            BEGIN
                SELECT userID = @userID
            END
            ELSE
            BEGIN
                INSERT INTO tbluser ([user]) VALUES (@user)
                SELECT userID = SCOPE_IDENTITY()
            END
            COMMIT TRAN
        END

    Basically the application calls the stored procedure and provides a username as a parameter. The stored procedure either gets the userid or inserts the user if it is a new user. Am I correct to assume that the table is locked (only one server can insert/query)?

    Read the article

  • MySQL locking problem

    - by teehoo
    I have a simple setup of a set of writers and a set of readers working with a MySQL MyISAM table. The writers are only inserting rows while the readers are only checking for new rows. OK, so I know that I don't need a lock in this situation, since I'm not modifying existing rows. However, my writers are accessing one more table that does need a lock. This piece of information seems irrelevant except for the following limitation stated in the MySQL documentation:

        A session that requires locks must acquire all the locks that it needs in a
        single LOCK TABLES statement. While the locks thus obtained are held, the
        session can access only the locked tables. For example, in the following
        sequence of statements, an error occurs for the attempt to access t2 because
        it was not locked in the LOCK TABLES statement.

    So to access the table I want to insert rows into, I NEED to lock it, which is causing me performance problems. Any suggestions on how to get around this?
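
    A sketch of what the documentation requires (the table names are illustrative): every table the session touches while holding locks must appear in the single LOCK TABLES statement, including the insert-only one.

        LOCK TABLES needs_lock WRITE, insert_only WRITE;
        UPDATE needs_lock SET counter = counter + 1 WHERE id = 42;
        INSERT INTO insert_only (payload) VALUES ('new row');
        UNLOCK TABLES;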

    Read the article

  • locking database record for editing

    - by sd_dracula
    I have a SQL 2008 DB and an ASP.NET frontend. I would like to implement a lock when a user is currently editing a record, but I'm unsure which is the best approach. My idea is to have an isLocked column for the records that gets set to true when a user pulls that record, meaning all other users have read-only access until the first user finishes editing. However, what if the session times out and he/she never saves/updates the record? The record will remain with isLocked = true, meaning others cannot edit it, right? How can I implement some sort of session timeout and have isLocked be automatically set to false when the session times out (or after a predefined period)? Should this be implemented on the ASP.NET side or the SQL side?
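
    One common sketch for making such a lock self-expiring (the table and column names here are hypothetical) is to record who took the lock and when, and to treat stale locks as free:

        -- Attempt to take the lock; a lock older than 15 minutes counts as expired.
        UPDATE Records
        SET    LockedBy = @userId,
               LockedAt = GETUTCDATE()
        WHERE  Id = @recordId
          AND (LockedBy IS NULL
               OR LockedAt < DATEADD(MINUTE, -15, GETUTCDATE()));

        -- @@ROWCOUNT = 1 means the caller now owns the lock; 0 means someone else holds it.

    With this shape nothing has to "unlock on timeout" at all: expiry is evaluated whenever someone else tries to take the lock, so it works whether the check lives on the ASP.NET side or in a stored procedure.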

    Read the article

  • Explain the code: c# locking feature and threads

    - by Mendy
    I used this pattern in a few projects (this snippet of code is from CodeCampServer). I understand what it does, but I'm really interested in an explanation about this pattern. Specifically:

        Why the double check of _dependenciesRegistered?
        Why use lock (Lock) {}?

    Thanks.

        public class DependencyRegistrarModule : IHttpModule
        {
            private static bool _dependenciesRegistered;
            private static readonly object Lock = new object();

            public void Init(HttpApplication context)
            {
                context.BeginRequest += context_BeginRequest;
            }

            public void Dispose() { }

            private static void context_BeginRequest(object sender, EventArgs e)
            {
                EnsureDependenciesRegistered();
            }

            private static void EnsureDependenciesRegistered()
            {
                if (!_dependenciesRegistered)           // first check: skip taking the lock on the common path
                {
                    lock (Lock)                         // serialize the registration path
                    {
                        if (!_dependenciesRegistered)   // second check: another thread may have registered
                        {                               // while this one was waiting for the lock
                            new DependencyRegistrar().ConfigureOnStartup();
                            _dependenciesRegistered = true;
                        }
                    }
                }
            }
        }

    Read the article

  • Pessimistic locking is not working with Query API

    - by Reddy
        List esns = session.createQuery(
                "from Pool e where e.status = :status order by uuid asc")
            .setString("status", "AVAILABLE")
            .setMaxResults(n)
            .setLockMode("e", LockMode.PESSIMISTIC_WRITE)
            .list();

    I have the above query written, but it is not generating a "for update" query, and simultaneous updates are happening. I am using Hibernate version 3.5.2, which has a bug in the Criteria API; is the same bug present in the Query API as well, or am I doing something wrong?

    Read the article

  • Strange Locking Behaviour in SQL Server 2005

    - by SQL Learner
    Can anyone please tell me why the following statement inside a given stored procedure returns repeated results, even with locks on the rows used by the first SELECT statement?

        BEGIN TRANSACTION

        DECLARE @Temp TABLE ( ID INT )

        INSERT INTO @Temp
        SELECT ID FROM SomeTable WITH (ROWLOCK, UPDLOCK, READPAST)
        WHERE SomeValue <= 10

        INSERT INTO @Temp
        SELECT ID FROM SomeTable WITH (ROWLOCK, UPDLOCK, READPAST)
        WHERE SomeValue >= 5

        SELECT * FROM @Temp

        COMMIT TRANSACTION

    Any values in SomeTable for which SomeValue is between 5 and 10 will be returned twice, even though they were locked by the first SELECT. I thought that locks were in place for the whole transaction, and so I wasn't expecting the query to return repeated results. Why is this happening?

    Read the article

  • Basics of SQL Server 2008 Locking

    Relational databases are designed for multiple simultaneous users, and Microsoft SQL Server is no different. However, supporting multiple users requires some form of concurrency control, which in SQL Server's case means transaction isolation and locking. Read on to learn how SQL Server 2008 implements locking.

    Read the article

  • Alternatives to Pessimistic Locking in Cluster Applications

    - by amphibient
    I am researching alternatives to database-level pessimistic locking to achieve transaction isolation in a cluster of Java applications going against the same database. Synchronizing concurrent access in the application tier is clearly not a solution in the present configuration, because the same database transaction can be invoked from multiple JVMs concurrently. Currently, we are subject to occasional race conditions which, due to the optimistic locking we have in place via Hibernate, cause a StaleObjectStateException and data loss. I have a moderately large transaction within the scope of my refactoring project. Let's describe it as updating one top-level table row and then making various related inserts and/or updates to several of its child entities. I would like to ensure exclusive access to the top-level table row and all of the children to be affected, but I would like to stay away from pessimistic locking at the database level, mostly for performance reasons. We use Hibernate for ORM. Does it make sense to stand up a single (perhaps synchronous) message queue application into which this method could be moved to ensure synchronized access, as opposed to each cluster node using its own, which is a clear race-condition hazard? I am mentioning this approach even though I am not confident in it, because both the top-level table row and its children could also be updated from other system calls, not just the mentioned transaction. So I am seeking to design a solution where the top-level table row and its children will all somehow be pseudo-locked (exclusive transaction isolation), but at the application and not the database level. I am open to ideas and suggestions; I understand this is not a very cut-and-dried challenge.
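
    One pattern worth weighing here is to funnel every write to the object graph through the parent's optimistic-lock version, so concurrent writers conflict detectably on a single row. A sketch, assuming the parent entity carries a Hibernate @Version field and a Hibernate version new enough for LockOptions (3.6+); the entity names are illustrative:

        // (imports: org.hibernate.Session, org.hibernate.LockOptions, org.hibernate.LockMode)
        // Force a version increment on the parent even though only children change,
        // so any two transactions touching the same graph collide on the parent row.
        Session session = sessionFactory.getCurrentSession();
        Parent parent = (Parent) session.get(Parent.class, parentId,
                new LockOptions(LockMode.OPTIMISTIC_FORCE_INCREMENT));
        parent.getChildren().add(newChild);   // related inserts/updates go here
        session.saveOrUpdate(parent);
        // On commit, a concurrent writer that also bumped the version fails fast
        // with StaleObjectStateException instead of silently interleaving writes.

    This doesn't remove the exception, but it turns silent lost updates into retryable conflicts without a database-level pessimistic lock.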

    Read the article

  • Partial upgrade on 12.04, how to stop nagging after locking to a working NVIDIA & xorg

    - by alsk
    How do I stop the upgrade manager from offering updates and upgrades that would potentially break my working 2D and 3D graphics? I finally got 12.04 working as it should, with the nvidia-173 driver, by downgrading xorg and locking the versions: on my 32-bit Athlon64 system with an (Albatron) NVIDIA GeForce FX5700XT, I have locked (pinned) xorg to 1:7.6-7ubuntu7, xserver-xorg-core to 2:11.1-0obuntu10.07, and nvidia-173 to 173.14.35-0ubuntu0.2. An annoying thing left is that every time updates are checked, I get a warning about a partial upgrade, with the ambiguous options "Partial upgrade" and "Close". Ambiguous in the sense that if I click Close, I get the option to update a few packages, which has been OK, while "Partial upgrade" wants to update my kernel to 3.2, alter xorg, remove nvidia-173, update mesa, etc. This is not what I would call appropriate after locking xorg and the NVIDIA drivers to working versions. One may say it is correct according to package-management logic, but to me as a user it makes little sense. The last Ubuntu that worked without a big mess for me was 10.10, so I will not put 12.10 on my "production" system until I can be sure it will not trash the system again. P.S. Is there a recommended way to keep NVIDIA GeForce FX cards working with 3D on Ubuntu... in the future?
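
    For reference, one common way to pin packages like this from the command line (a sketch; it assumes the stock dpkg/apt tooling on 12.04) is to mark them as held:

        # Mark the graphics stack as "hold" so the package manager keeps these versions
        echo "xserver-xorg-core hold" | sudo dpkg --set-selections
        echo "nvidia-173 hold"        | sudo dpkg --set-selections

        # Verify which packages are currently held
        dpkg --get-selections | grep hold

    Held packages may still be reported as "kept back" when checking for updates, but apt-get should not replace them during an upgrade.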

    Read the article

  • Using the Coherence ConcurrentMap Interface (Locking API)

    - by jpurdy
    For many developers using Coherence, the first place they look for concurrency control is the com.tangosol.util.ConcurrentMap interface (part of the NamedCache interface). The ConcurrentMap interface includes methods for explicitly locking data. Despite the obvious appeal of a lock-based API, these methods should generally be avoided for a variety of reasons:

        - They are very "chatty", in that they can't be bundled with other operations (such as get and put), and there are no collection-based versions of them.
        - Locks do not directly impact mutating calls (including puts and entry processors), so all code must make explicit lock requests before modifying (or in some cases reading) cache entries.
        - They require coordination of all code that may mutate the objects, including the need to lock at the same level of granularity (there is no built-in lock hierarchy and thus no concept of lock escalation).
        - Even if all code is properly coordinated (or there's only one piece of code), a failure during updates may leave a collection of changes to a set of objects in a partially committed state.
        - There is no concept of a read-only lock.

    In general, use of locking is highly discouraged for most applications. Instead, the use of entry processors provides a far more efficient approach, at the cost of some additional complexity; a sketch follows below.
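
    A minimal sketch of the entry-processor alternative (the cache and class names are illustrative):

        // (imports: com.tangosol.util.InvocableMap, com.tangosol.util.processor.AbstractProcessor,
        //  com.tangosol.net.NamedCache, com.tangosol.net.CacheFactory)
        // An entry processor executes atomically against its target entry inside the
        // cluster, so no explicit lock()/unlock() calls are needed.
        public class IncrementProcessor extends AbstractProcessor {
            public Object process(InvocableMap.Entry entry) {
                Counter counter = (Counter) entry.getValue();
                counter.increment();          // mutate while the entry is implicitly held
                entry.setValue(counter);      // write back; applied atomically
                return counter.getCount();
            }
        }

        // Usage:
        // NamedCache cache = CacheFactory.getCache("counters");
        // Object newCount = cache.invoke("my-counter", new IncrementProcessor());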

    Read the article

  • How can I get SQL Server transactions to use record-level locks?

    - by Joe White
    We have an application that was originally written as a desktop app, lo these many years ago. It starts a transaction whenever you open an edit screen, and commits if you click OK, or rolls back if you click Cancel. This worked okay for a desktop app, but now we're trying to move to ADO.NET and SQL Server, and the long-running transactions are problematic. I found that we'll have a problem when multiple users are all trying to edit (different subsets of) the same table at the same time. In our old database, each user's transaction would acquire record-level locks to every record they modified during their transaction; since different users were editing different records, everyone gets their own locks and everything works. But in SQL Server, as soon as one user edits a record inside a transaction, SQL Server appears to get a lock on the entire table. When a second user tries to edit a different record in the same table, the second user's app simply locks up, because the SqlConnection blocks until the first user either commits or rolls back. I'm aware that long-running transactions are bad, and I know that the best solution would be to change these screens so that they no longer keep transactions open for a long time. But since that would mean some invasive and risky changes, I also want to research whether there's a way to get this code up and running as-is, just so I know what my options are. How can I get two different users' transactions in SQL Server to lock individual records instead of the entire table? Here's a quick-and-dirty console app that illustrates the issue. I've created a database called "test1", with one table called "Values" that just has ID (int) and Value (nvarchar) columns. If you run the app, it asks for an ID to modify, starts a transaction, modifies that record, and then leaves the transaction open until you press ENTER. I want to be able to:

        1. Start the program and tell it to update ID 1.
        2. Let it get its transaction and modify the record.
        3. Start a second copy of the program and tell it to update ID 2.
        4. Have it update (and commit) while the first app's transaction is still open.

    Currently it freezes at step 4, until I go back to the first copy of the app and close it or press ENTER so it commits. The call to command.ExecuteNonQuery blocks until the first connection is closed.

        public static void Main()
        {
            Console.Write("ID to update: ");
            var id = int.Parse(Console.ReadLine());
            Console.WriteLine("Starting transaction");
            using (var scope = new TransactionScope())
            using (var connection = new SqlConnection(
                @"Data Source=localhost\sqlexpress;Initial Catalog=test1;Integrated Security=True"))
            {
                connection.Open();
                var command = connection.CreateCommand();
                command.CommandText = "UPDATE [Values] SET Value = 'Value' WHERE ID = " + id;
                Console.WriteLine("Updating record");
                command.ExecuteNonQuery();
                Console.Write("Press ENTER to end transaction: ");
                Console.ReadLine();
                scope.Complete();
            }
        }

    Here are some things I've already tried, with no change in behavior:

        - Changing the transaction isolation level to "read uncommitted"
        - Specifying WITH (ROWLOCK) on the UPDATE statement
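
    One detail worth checking in a repro like this (an assumption on my part, since the question doesn't show the table's indexes): without an index on ID, each UPDATE has to scan every row to find its target, taking locks far beyond the one record it changes, and lock escalation can then promote those to a table lock. A sketch:

        -- With an index, 'UPDATE [Values] ... WHERE ID = @id' can seek to one row
        -- and lock only that row instead of scanning (and locking) the whole table.
        CREATE UNIQUE CLUSTERED INDEX IX_Values_ID ON [Values] (ID);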

    Read the article

  • This Wed, Reading - Service Broker, Indexing, Normalisation, Sets, RI and Locking, Surrogate Keys

    - by tonyrogerson
    Registration is a must so we know numbers and for security, register here: http://sqlserverfaq.com/events/213/Service-Broker-Intro-Guidance-Indexing-Selection-Usage-Fragmentation-etc-Normalisation-Surrogate-Keys-Locking-considerations.aspx Network, learn, ask a question, meet other folk, get fed - these are all things that happen at user group events. These events are a really great opportunity to socialise in an informal learning experience - if you want your own exposure then come and do a 1 -...(read more)

    Read the article

  • Read Committed isolation level, indexed views and locking behavior

    - by Michael Zilberstein
    From the BOL "Key-Range Locking" article: Key-range locks protect a range of rows implicitly included in a record set being read by a Transact-SQL statement while using the serializable transaction isolation level. The serializable isolation level requires that any query executed during a transaction must obtain the same set of rows every time it is executed during the transaction. A key-range lock protects this requirement by preventing other transactions from inserting new rows whose...(read more)

    Read the article

  • Do we need Record Level Locking when we already have Transaction for online ordering? (of concert ti

    - by Jian Lin
    For online ordering of concert seats or airline tickets, do we need record-level locking, or is a transaction good enough? For a concert ticket (say, seat number 20B) or an airline ticket (even with overbooking, the limit is 210, for example), I think the website cannot lock any record or begin a transaction while showing the ticket-purchase screen. But after the user clicks "Confirm Purchase", the server should begin a transaction, purchase seat number 20B, and try to commit. If another user already bought seat 20B in a previous transaction, is it the "Commit" step at which the current transaction will fail? So... we don't need record-level locking? Do transactions always run serialized (one after another), and is that why we can know for sure there is no "race condition"? In what situation is record-level locking needed, then?
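
    For illustration, one common way to express "purchase seat 20B" without holding any lock across user think-time (a sketch; the schema is hypothetical) is a conditional update whose row count tells each buyer whether they won the race:

        BEGIN TRANSACTION;

        UPDATE Seats
        SET    buyer_id = @buyerId, status = 'SOLD'
        WHERE  seat_no  = '20B'
          AND  status   = 'AVAILABLE';   -- matches for exactly one buyer

        -- @@ROWCOUNT = 1: this buyer got the seat, commit.
        -- @@ROWCOUNT = 0: someone else bought it first, roll back and tell the user.

        COMMIT TRANSACTION;

    The database still takes a short-lived row lock inside the UPDATE; the point is that no application-level "record lock" has to survive between showing the screen and confirming the purchase.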

    Read the article

  • Screen sometimes inverts after locking

    - by hackedd
    Sometimes, when I wake up my computer from a screen lock, one of my screens is inverted. It does not always happen (maybe one in ten times) and it is not always the same screen. Re-locking and unlocking the screen a couple of times fixes the colors again. I have a dual monitor setup on an ATI Radeon HD 3450, using two HP screens connected over DVI. I am using the radeon driver, and according to jockey there are no additional drivers available for my system. Any ideas as to why this happens?

    Read the article

  • Keyboard locking up in Visual Studio 2010

    - by Jim Wang
    One of the initiatives I'm involved with on the ASP.NET and Visual Studio teams is the Tactical Test Team (TTT), which is a group of testers who dedicate a portion of their time to roaming around and testing different parts of the product. What this generally translates to is a day and a bit a week helping out with areas of the product that have been flagged as risky, or tackling problems that span both ASP.NET and Visual Studio. There is also a separate component of this effort outside of TTT, which is to help with customer scenarios and design. I enjoy being on TTT because it gives me the opportunity to look at the entire product and gain expertise in a wide range of areas. This week, I'm looking at Visual Studio 2010 performance problems, and this gem with the keyboard in Visual Studio locking up ended up catching my attention.

    First of all, here's a link to one of the many Connect bugs describing the problem: Microsoft Connect

    I like this problem because it really highlights the challenges of reproducing customer bugs. There aren't any clear steps provided here, and I don't know a lot about your environment: not just the basics like your OS version, but also what third-party plug-ins or antivirus software you might be running that might contribute to the problem. In this case, my gut tells me that there is more than one bug here, just by the sheer volume of reports. Here's another thread where users talk about it: Microsoft Connect

    The volume and different configurations are staggering. From a customer perspective, this is a very clear-cut case of basic functionality not working in the product, but from our perspective, it's hard to find something reproducible: even customers don't quite agree on what causes the problem (installing ReSharper seems to cause a problem…or does it?). So this, then, is the start of a QA investigation. If anybody can provide isolated repro steps (just comment on this post), that will immensely help us nail down the issue(s), but I'll be doing a multi-part series on my progress and methodologies as I look into the problem.

    Read the article

  • Keyboard locking up in Visual Studio 2010, Part 2

    - by Jim Wang
    Last week I posted about looking into the keyboard lock-up issue in Visual Studio. So far not a lot of people have replied with concrete repro steps, which confirms my suspicion that this is somewhat of a random issue. So at this point, I have a couple of choices: I can either wait for somebody in the community to provide a repro of the problem that I can reliably run into, or I can do the work myself. I'm going to do both, so while I'm waiting for more possible bug reports, I'm going to write a tool that models the behavior of a typical Visual Studio user and use that to hopefully isolate the problem. I've chosen this path since, given the information in the bug reports, people seem to hit the issue with many different configurations in many different scenarios. This means that sitting down without any solid repro steps is likely not a good use of my time. Instead, I'm going to go with a model-based testing approach, where I will define a series of actions that a user in VS can do, and then proceed to run my model. I'll let you guys know how this works out for isolating bugs :) I'm using an internal tool for the model engine and AutoIt for the UI automation (I want something lightweight for a one-off). One of the challenges will be getting feedback: AutoIt is great at driving, but not so great at understanding what success and failure mean.

    Read the article

  • PHP file_put_contents File Locking

    - by hozza
    The Scenario: You have a file with a string (an average sentence's worth) on each line. For argument's sake, let's say this file is 1MB in size (thousands of lines). You have a script that reads the file, changes some of the strings within the document (not just appending, but also removing and modifying some lines), and then overwrites all the data with the new data.

    The Questions:

        1. Does "the server" (PHP, the OS, httpd, etc.) already have systems in place to stop issues like this (reading/writing halfway through a write)? If it does, please explain how it works and give examples or links to relevant documentation.
        2. If not, are there things I can enable or set up, such as locking the file until a write is completed and making all other reads and/or writes fail until the previous script has finished writing?

    My Assumptions and Other Information: The server in question is running PHP and Apache or Lighttpd. If the script is called by one user and is halfway through writing to the file, and another user reads the file at that exact moment, the user who reads it will not get the full document, as it hasn't been written yet. (If this assumption is wrong, please correct me.) I'm only concerned with PHP writing and reading to a text file, and in particular the functions "fopen"/"fwrite" and mainly "file_put_contents". I have looked at the "file_put_contents" documentation but have not found the level of detail or a good explanation of what the "LOCK_EX" flag is or does. The scenario is an EXAMPLE of a worst-case scenario where I would assume these issues are more likely to occur, due to the large size of the file and the way the data is edited. I want to learn more about these issues and don't want or need answers or comments such as "use mysql" or "why are you doing that", because I'm not doing that; I just want to learn about file reading/writing with PHP and don't seem to be looking in the right places/documentation. And yes, I understand PHP is not the perfect language for working with files in this way...
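
    For what it's worth, a minimal sketch of the primitives involved (LOCK_EX and flock are advisory locks, so they only protect against other code that also takes a lock):

        <?php
        // Writer: with LOCK_EX, file_put_contents takes an exclusive advisory lock
        // before writing, so concurrent LOCK_EX writers queue instead of interleaving.
        file_put_contents('data.txt', $newContents, LOCK_EX);

        // Reader: take a shared lock so a read can't begin in the middle of a write.
        $fp = fopen('data.txt', 'r');
        if (flock($fp, LOCK_SH)) {        // blocks while a writer holds LOCK_EX
            $contents = stream_get_contents($fp);
            flock($fp, LOCK_UN);
        }
        fclose($fp);

    Note that a plain file_get_contents without flock ignores the lock entirely, which is exactly the half-written-read scenario described above.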

    Read the article
