Search Results

Search found 2359 results on 95 pages for 'transaction'.

Page 5/95 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Is there a theory for "transactional" sequences of failing and no-fail actions?

    - by Ross Bencina
My question is about writing transaction-like functions that execute sequences of actions, some of which may fail. It is related to the general C++ principle "destructors can't throw," the no-fail property, and maybe also to multi-phase transactions or exception safety. However, I'm thinking about it in language-neutral terms. My concern is with correctly designing error handling in C++ functions that must be reliable. I would like to know what the concepts below are called so that I can learn more about them. I'm sorry that I can't ask the question more directly. Since I don't know this area I have provided an example to explain my question. The question is at the end. Here goes: Consider a sequence of steps or actions executed sequentially, where actions belong to one of two classes: those that always succeed, and those that may fail. In the examples below: S stands for an action that always succeeds (called "no-fail" in some settings). F stands for an action that may fail (for example, it might fail to allocate memory or do I/O that could fail). Consider a sequence of actions (executed sequentially from left to right): S->S->S->S Since each action in the sequence above succeeds, the whole sequence succeeds. On the other hand, the following sequence may fail because the last action may fail: S->S->S->F So, claim: a sequence has the no-fail (S) property if and only if all of its actions are no-fail. Now, I'm interested in action sequences that form "atomic transactions", with "failure atomicity," i.e. where either the whole sequence completes successfully, or there is no effect. I.e. if some action fails, the earlier ones must be rolled back. This requires that any successfully executed actions prior to a failing action must always be able to be rolled back. Consider the sequence: S->S->S->F S<-S<-S In the example above, the first row is the forward path of the transaction, and the second row contains the inverse actions (executed from right to left) that can be used to roll back if the final action in the top row fails. It seems to me that for a transaction to support failure atomicity, the following invariant must hold: Claim: To support failure atomicity (either completion or complete roll-back on failure), all actions preceding the latest failable (F) action on the forward path (marked * in the example below) must have no-fail (S) inverses. The following is an example of a sequence that supports failure atomicity: * S->F->F->F S<-S<-S Further, if we want the transaction to be able to attempt cancellation mid-way through, but still guarantee either full completion or full rollback, then we need the following property: Claim: To support failure atomicity and cancellation mid-way through execution, in the face of errors in the inverse (cancellation) path, all actions following the earliest failable (F) inverse on the reverse path (marked *) must be no-fail (S). F->F->F->S->S S<-S<-F<-F * I believe that these two conditions guarantee that an abortable/cancelable transaction will never get "stuck". My questions are: What is the study and theory of these properties called? Are my claims correct? And what else is there to know? UPDATE 1: Updated terminology: what I previously called "robustness" is called atomicity in the database literature. UPDATE 2: Added an explicit reference to failure atomicity, which seems to be a thing.

    Read the article

  • How to manage transactions for the database and file system in a Java EE environment?

    - by Michael Lu
    I store a file's attributes (size, update time, …) in the database, so the problem is how to manage a transaction that spans both the database and the file system. In a Java EE environment, JTA can only manage the database transaction. If updating the database succeeds but the file operation fails, should I write a file-rollback method for this? Moreover, performing file operations inside an EJB container violates the EJB spec. What's your opinion? Thanks!

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007 while I was still FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Beside this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... not very helpful still. In general, when people don't fully understand a technology or a concept, they tend to find shortcuts, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. My shortcut was "Oracle Coherence Cache helps to improve Performance". An excellent marketing slogan... but still not very meaningful. By chance I was able to get away quickly from that group in July 2007 at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career at Oracle, delivered by Brian Oliver. The biggest mistake I made was to assume that the performance improvement with Coherence was about response time. That seemed legitimate at the time, because after all, caches reduce latency on cached data access and hence reduce response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time partially rewrite your application to work with the cache. As a result, the expected benefit vanishes... so, not very useful then? The key mistake was my perception of, or obsession with, how performance improvement should be driven, and I strongly believe this is still a common problem for most developers. We all know that the performance of a system is generally expressed as Capacity (or Throughput), with two important dimensions: Speed (response time) and Volume (load): Capacity (TPS) = Volume (T) / Speed (S). To increase the Capacity, we can either reduce the Speed (in terms of response time) or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to management because it has a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume... The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever because... the Speed can be influenced by the Volume. For all systems, the performance picture looks roughly the same: in all traditional systems, an increase in Volume (transactions) will also increase the Speed (response time) at some point. The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that processing 200 entries will require double the execution time compared to 100 entries.
    If you need to speed up the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is to design applications that can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is actually not just about having an application run as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is generally combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, from the programmatic angle (fewer network hops to complete a task) and/or from a hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via its distributed cache, because it can guarantee:
    - Constant data-object access time, independent of the number of objects and the Coherence cluster size
    - Data-object distribution by affinity for in-memory data grouping
    - In-place data processing for parallel execution
    To summarize, Oracle Coherence is indeed useful for improving your application's performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under extreme load while keeping Speed consistent. In the future I will keep adding new blog entries around this topic, sharing some sample code and experiences that I have captured over the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, and then start playing with the product through our tutorial. Have fun!

    Read the article

  • SQL SERVER – Rollback TRUNCATE Command in Transaction

    - by pinaldave
    It is a very common belief that TRUNCATE cannot be rolled back. I often hear developers debating whether TRUNCATE can be rolled back or not. If you use transactions in your code, TRUNCATE can be rolled back. If no transaction is used and the TRUNCATE operation is committed, it cannot be retrieved from [...]
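
    As a quick illustration of the point above (a minimal sketch, assuming a throwaway table named dbo.TestTable that already contains rows), the TRUNCATE is undone when the surrounding transaction is rolled back:

      BEGIN TRANSACTION;
          TRUNCATE TABLE dbo.TestTable;          -- dbo.TestTable is a hypothetical scratch table
          SELECT COUNT(*) FROM dbo.TestTable;    -- returns 0 inside the transaction
      ROLLBACK TRANSACTION;
      SELECT COUNT(*) FROM dbo.TestTable;        -- the original row count is back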

    Read the article

  • Quick Tip - Speed a Slow Restore from the Transaction Log

    - by KKline
    Here's a quick tip for you: During some restore operations on Microsoft SQL Server, the transaction log redo step might be taking an unusually long time. Depending somewhat on the version and edition of SQL Server you've installed, you may be able to increase performance by tinkering with the readahead performance for the redo operations. To do this, you should use the MAXTRANSFERSIZE parameter of the RESTORE statement. For example, if you set MAXTRANSFERSIZE=1048576, it'll use 1MB buffers. If you...(read more)
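
    For illustration only, here is a sketch with hypothetical database and backup file names; the right transfer size is something to test on your own system, since the option is simply supplied on the RESTORE statement:

      RESTORE LOG AdventureWorks                          -- hypothetical database name
      FROM DISK = N'D:\Backups\AdventureWorks_log.trn'    -- hypothetical backup file
      WITH NORECOVERY, MAXTRANSFERSIZE = 1048576;         -- 1 MB buffers, as in the tip above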

    Read the article

  • Free eBook - Control Your Transaction Log so it Doesn't Control You

    Download your free copy of SQL Server Transaction Log Management and see why understanding how log files work can make all the difference in a crisis.

    Read the article

  • Transaction Replication Publisher failover to Mirror

    Transaction Replication Publisher failover/failback to mirror standby with automatic redirection of the subscriber and client application.

    Read the article

  • Optimizing Transaction Log Throughput

    As a DBA, it is vital to manage transaction log growth explicitly, rather than let SQL Server auto-growth events "manage" it for you. If you undersize the log, and then let SQL Server auto-grow it in small increments, you'll end up with a very fragmented log. This article demonstrates how this can have a significant impact on the performance of any SQL Server operations that need to read the log.
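
    As a rough sketch of the preventative step (the database name, logical log file name and sizes below are illustrative, not recommendations), pre-sizing the log and choosing a sensible fixed growth increment avoids the many tiny auto-growth events described above:

      ALTER DATABASE SalesDB                               -- hypothetical database
      MODIFY FILE (NAME = SalesDB_log, SIZE = 8GB, FILEGROWTH = 512MB);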

    Read the article

  • DB2 Transaction log is full. How to flush / clear it?

    - by Mestika
    Hi, I'm working on an experiment for a course I'm taking on tuning DB2. I'm using EC2 from Amazon (AWS) to conduct the experiment. My problem, however, is that I have to test non-compression against row compression in DB2, and to do that I've created a bash script that runs those experiments. But when I reach the compression part I get the error "Transaction log is full", and no matter how few inserts I run, it keeps complaining about my transaction log. I've scoured Google for a day now trying to find some way to flush / clear the log, or just get rid of it; I don't need it. I've tried to increase the size but nothing has helped. Please, I hope someone has an answer to solve this frustrating problem. Thanks - Mestika
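
    Not a way to "flush" the log as such, but a sketch of the two usual workarounds (MYDB and the values below are placeholders): commit in batches inside the insert script so completed work no longer pins the active log, and/or enlarge the log configuration before the run:

      -- From the DB2 command line processor; log size changes generally take
      -- effect after the database is deactivated and reactivated.
      db2 UPDATE DB CFG FOR MYDB USING LOGFILSIZ 10000
      db2 UPDATE DB CFG FOR MYDB USING LOGPRIMARY 20 LOGSECOND 100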

    Read the article

  • How to read changed values with native query during one transaction? (Spring and JPA)

    - by knarf1983
    We have container-managed transactions with Spring and JPA (Hibernate). I need to make an update on a table to "flag" some rows via native statements. Then we insert some rows via the EntityManager from JPATemplate into this table. After that, we need to calculate changes in the table via a native statement (with Oracle's union and minus, complex groups...). I see that the changes from steps 1 and 2 are not committed, and that's why the statement in step 3 fails. I already tried transaction propagation REQUIRES_NEW and EntityManager.flush... Neither worked. 1) update SOMETABLE acolumn = somevalue (native) 2) persist some values into SOMETABLE (via entity manager) 3) select values from SOMETABLE Is there a possibility to read the changes from steps 1 and 2 in step 3?

    Read the article

  • Distributed Transaction Framework across webservices

    - by John Petrak
    I am designing a new system that has one central web service and several site web services which are spread across the country and some overseas. It has some data that must be identical on all sites. So my plan is to maintain that data in the central web service and then "sync" the data to the sites. This includes inserts, edits and deletes. I see a problem when deleting: if one site has used the record, then I need to undo the delete that has happened on the other servers. This led me to the idea that I need some sort of transaction system that can work across different web servers. Before I design one from scratch, I would like to know if anyone has come across this sort of problem, and if there are any frameworks or even design patterns that might aid me?

    Read the article

  • Behind the scenes of the SAP-Sybase deal: what are the real motivations for this acquisition?

    Update of 18.05.2010 by Katleen. Behind the scenes of the SAP-Sybase deal: what are the real motivations for this acquisition? Last week's announcement of SAP's acquisition of Sybase has aroused the curiosity of financial analysts around the world. Many want to understand why 5.8 billion dollars were put on the table in this deal. Economically speaking, it is good news and a sign of recovering business for SAP, after the 8% drop in revenue the group suffered in 2009. As for Sybase, the latest quarterly figures show a 25% rise in revenue from database licenses, and 23% for...

    Read the article

  • Set secondary receiver in PayPal Chained Payment after the initial transaction

    - by CJxD
    I'm running a service whereby customers seek the services of 'freelancers' through our web platform. The customer will make a 'bid' which is immediately taken from their accounts as security. Once the job is completed, the customer marks it as accepted and the bid gets distributed to the freelancer(s) as a reward. After initially storing these rewards in the accounts of the freelancers and relying on MassPay to sort out paying them later, I realised that your business needs to be turning over at least £5000/month before MassPay is switched on. Instead, I was referred to Delayed Chained Payments in PayPal's Adaptive Payments API. This allows the customer to pay the primary receiver (my business) before the payment is later triggered to be sent to the secondary receivers (the freelancers). However, at the time that the customer initiates this transaction, you must understand that nobody yet knows who will receive the reward. So, before I program this whole Adaptive Payments system, is it even possible to change or add the secondary receivers after the customer has paid? If not, what can I do?

    Read the article

  • Transaction Log filling up on SQL database set to Simple

    - by Will
    We have a database on a SQL 2005 server that is set to the Simple recovery model. The log is set to 1 MB and is set to grow by 10% when it needs to. We keep running into an issue where the transaction log fills up and we need to shrink it. What could cause the transaction log to fill up when it's set to Simple and unrestricted growth is allowed?
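
    One way to see what is pinning the log (a sketch; the database name is a placeholder, and both statements are available on SQL Server 2005) is to check log_reuse_wait_desc and look for a long-running open transaction:

      SELECT name, log_reuse_wait_desc
      FROM sys.databases
      WHERE name = N'MyDatabase';        -- hypothetical database name

      DBCC OPENTRAN ('MyDatabase');      -- reports the oldest active transaction, if any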

    Read the article

  • Error while taking Transaction log backup

    - by Divya Kapoor
    Hello, I have scheduled a transaction log backup, but the backup is not happening. The error in the logs is this: Transaction Log Backup.Subplan_1,Error,0,ARCOTDB1\ARCOT_DB_INST1,Transaction Log Backup.Subplan_1,(Job outcome),,The job failed. Unable to determine if the owner (ARCOT-DB1\Superuser) of job Transaction Log Backup.Subplan_1 has server access (reason: Could not obtain information about Windows NT group/user 'ARCOT-DB1\Superuser', error code 0x534. [SQLSTATE 42000] (Error 15404)) Please help!
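
    Error 15404 usually means SQL Server Agent cannot resolve the Windows account that owns the job (for example, the domain controller is unreachable or the account no longer exists). One common workaround, sketched below with the job name taken from the log (using sa as the owner of maintenance jobs is an assumption, not a requirement), is to change the job owner to a SQL login:

      EXEC msdb.dbo.sp_update_job
          @job_name = N'Transaction Log Backup.Subplan_1',
          @owner_login_name = N'sa';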

    Read the article

  • Spring transaction management breaks hibernate cascade

    - by TimmyJ
    I'm having a problem where the addition of spring's transaction management to an application causes Hibernate to throw the following error: org.hibernate.HibernateException: A collection with cascade="all-delete-orphan" was no longer referenced by the owning entity instance: org.fstrf.masterpk.domain.ReportCriteriaBean.treatmentArms org.hibernate.engine.Collections.processDereferencedCollection(Collections.java:96) org.hibernate.engine.Collections.processUnreachableCollection(Collections.java:39) org.hibernate.event.def.AbstractFlushingEventListener.flushCollections(AbstractFlushingEventListener.java:218) org.hibernate.event.def.AbstractFlushingEventListener.flushEverythingToExecutions(AbstractFlushingEventListener.java:77) org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:26) org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1000) org.springframework.orm.hibernate3.SpringSessionSynchronization.beforeCommit(SpringSessionSynchronization.java:135) org.springframework.transaction.support.TransactionSynchronizationUtils.triggerBeforeCommit(TransactionSynchronizationUtils.java:72) org.springframework.transaction.support.AbstractPlatformTransactionManager.triggerBeforeCommit(AbstractPlatformTransactionManager.java:905) org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:715) org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:701) org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:321) org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:116) org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) $Proxy92.saveNewReportCriteria(Unknown Source) org.fstrf.masterpk.domain.logic.MasterPkFacade.saveNewReportCriteria(MasterPkFacade.java:134) org.fstrf.masterpk.controllers.ReportCriteriaController.setupReportType(ReportCriteriaController.java:302) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) java.lang.reflect.Method.invoke(Method.java:585) org.springframework.web.bind.annotation.support.HandlerMethodInvoker.doInvokeMethod(HandlerMethodInvoker.java:413) org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:134) org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java:310) org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java:297) org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:875) org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:809) org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:571) org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:511) javax.servlet.http.HttpServlet.service(HttpServlet.java:710) javax.servlet.http.HttpServlet.service(HttpServlet.java:803) 
org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96) I'm using Spring 2.5 and annotations to implement this management. Here is the class containing the saveNewReportCriteria method (which, as can be seen by the stack trace, is causing the error) @Transactional( propagation = Propagation.REQUIRED, isolation = Isolation.DEFAULT, readOnly = false) public class HibernateReportCriteriaDao implements ReportCriteriaDao{ private HibernateTemplate hibernateTemplate; public Integer saveNewReportCriteria(ReportCriteriaBean reportCriteria) { hibernateTemplate.save(reportCriteria); List<Integer> maxIdList = hibernateTemplate.find("SELECT max(id) from ReportCriteriaBean"); logger.info("ID of newly saved list is: " + maxIdList.get(0)); return maxIdList.get(0); } public void setHibernateTemplate(HibernateTemplate hibernateTemplate) { this.hibernateTemplate = hibernateTemplate; } } Then I added the following sections to my configuration files to tell spring that I am using annotation driven transaction management: <bean id="actgDataSource" class="org.springframework.jndi.JndiObjectFactoryBean"> <property name="jndiName" value="jdbc/actg" /> <property name="resourceRef" value="true" /> </bean> <bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager"> <property name="dataSource" ref="actgDataSource" /> </bean> <tx:annotation-driven/> I'm pretty sure that the de-referencing error is being caused due to the proxy class that Spring AOP creates and uses in order to handle transaction management, but I have no idea how I'd go about fixing it.

    Read the article

  • Log shipping and shrinking transaction logs

    - by DavidWimbush
    I just solved a problem that had me worried for a bit. I'm log shipping from three primary servers to a single secondary server, and the transaction log disk on the secondary server was getting very full. I established that several primary databases had unused space that resulted from big, one-off updates, so I could shrink their logs. But would this action be log shipped and applied to the secondary database too? I thought probably not. And, more importantly, would it break log shipping? My secondary databases are in a Standby / Read Only state so I didn't think I could shrink their logs. I RTFMd, Googled, and asked on a Q&A site (not the evil one) but was none the wiser. So I was facing a monumental round of shrink, full backup, full secondary restore and restart of log shipping (which would leave us without a disaster recovery facility for the duration). Then I thought it might be worthwhile to take a non-essential database and just find out for certain whether a log shrink on the primary would ship over and occur on the secondary as well, and whether it would break log shipping. So I did a DBCC SHRINKFILE and kept an eye on the secondary. Bingo! Log shipping didn't blink and the log on the secondary shrank too. I just love it when something turns out even better than I dared to hope. (And I guess this highlights something I need to learn about what activities are logged.)
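
    For reference, the shrink the author describes is a one-liner; this sketch assumes a hypothetical database name, logical log file name, and a 1 GB target:

      USE MyShippedDb;
      DBCC SHRINKFILE (N'MyShippedDb_log', 1024);   -- target size in MB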

    Read the article

  • Setup database for Unit tests with Spring, Hibernate and Spring Transaction Support

    - by Michael Bulla
    I want to test integration of dao-layer with my service-layer in a unit-test. So I need to setup some data in my database (hsql). For this setup I need an own transaction at the begining of my testcase to ensure that all my setup is really commited to database before starting my testcase. So here's what I want to achieve: // NotTranactional public void doTest { // transaction begins setup database // commit transaction service.doStuff() // doStuff is annotated @Transactional(propagation=Propagation.REQUIRED) } Here is my not working code: @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations={"/asynchUnit.xml"}) @DirtiesContext(classMode=ClassMode.AFTER_EACH_TEST_METHOD) public class ReceiptServiceTest implements ApplicationContextAware { @Autowired(required=true) private UserHome userHome; private ApplicationContext context; @Before @Transactional(propagation=Propagation.REQUIRED) public void init() throws Exception { User user = InitialCreator.createUser(); userHome.persist(user); } @Test public void testDoSomething() { ... } } Leading to this exception: org.hibernate.HibernateException: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here at org.springframework.orm.hibernate3.SpringSessionContext.currentSession(SpringSessionContext.java:63) at org.hibernate.impl.SessionFactoryImpl.getCurrentSession(SessionFactoryImpl.java:687) at de.diandan.asynch.modell.GenericHome.getSession(GenericHome.java:40) at de.diandan.asynch.modell.GenericHome.persist(GenericHome.java:53) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:318) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:196) at $Proxy28.persist(Unknown Source) at de.diandan.asynch.service.ReceiptServiceTest.init(ReceiptServiceTest.java:63) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27) at org.springframework.test.context.junit4.statements.RunBeforeTestMethodCallbacks.evaluate(RunBeforeTestMethodCallbacks.java:74) at org.springframework.test.context.junit4.statements.RunAfterTestMethodCallbacks.evaluate(RunAfterTestMethodCallbacks.java:83) at org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:72) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:231) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61) at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:71) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:174) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:49) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) I don't know what the right way is to get a transaction around the database setup. What I tried: @Before @Transactional(propagation=Propagation.REQUIRED) public void setup() { setup database } - Spring doesn't seem to start a transaction in @Before-annotated methods. Beyond that, that's not what I really want, because there are a lot of methods in my test class which need a slightly different setup, so I need several such init methods. @Transactional(propagation=Propagation.REQUIRED) public void setup() { setup database } public void doTest { init(); service.doStuff() // doStuff is annotated @Transactional(propagation=Propagation.REQUIRED) } -- init doesn't seem to get started in a transaction. What I don't want to do: public void doTest { // doing my own transaction-handling setup database // doing my own transaction-handling service.doStuff() // doStuff is annotated @Transactional(propagation=Propagation.REQUIRED) } -- starting to mix Spring's transaction handling and my own seems like a pain in the ass. @Transactional(propagation=Propagation.REQUIRED) public void doTest { setup database service.doStuff() } -- I want to test as realistic a situation as possible, so my service should start with a clean session and no open transaction. So what's the right way to set up the database for my test case?

    Read the article

  • Access DB Transaction on insert or updating

    - by Raju Gujarati
    I am going to implement the database access layer of the Window application using C#. The database (.accdb) is located to the project files. When it comes to two notebooks (clients) connecting to one access database through switches, it throws DBConcurrency Exception Error. My target is to check the timestamp of the sql executed first and then run the sql . Would you please provide me some guidelines to achieve this ? The below is my code protected void btnTransaction_Click(object sender, EventArgs e) { string custID = txtID.Text; string CompName = txtCompany.Text; string contact = txtContact.Text; string city = txtCity.Text; string connString = ConfigurationManager.ConnectionStrings["CustomersDatabase"].ConnectionString; OleDbConnection connection = new OleDbConnection(connString); connection.Open(); OleDbCommand command = new OleDbCommand(); command.Connection = connection; OleDbTransaction transaction = connection.BeginTransaction(); command.Transaction = transaction; try { command.CommandText = "INSERT INTO Customers(CustomerID, CompanyName, ContactName, City, Country) VALUES(@CustomerID, @CompanyName, @ContactName, @City, @Country)"; command.CommandType = CommandType.Text; command.Parameters.AddWithValue("@CustomerID", custID); command.Parameters.AddWithValue("@CompanyName", CompName); command.Parameters.AddWithValue("@ContactName", contact); command.Parameters.AddWithValue("@City", city); command.ExecuteNonQuery(); command.CommandText = "UPDATE Customers SET ContactName = @ContactName2 WHERE CustomerID = @CustomerID2"; command.CommandType = CommandType.Text; command.Parameters.AddWithValue("@CustomerID2", custIDUpdate); command.Parameters.AddWithValue("@ContactName2", contactUpdate); command.ExecuteNonQuery(); adapter.Fill(table); GridView1.DataSource = table; GridView1.DataBind(); transaction.Commit(); lblMessage.Text = "Transaction successfully completed"; } catch (Exception ex) { transaction.Rollback(); lblMessage.Text = "Transaction is not completed"; } finally { connection.Close(); } }

    Read the article

  • While in a transaction, how can reads to an affected row be prevented until the transaction is done?

    - by Mahn
    I'm fairly sure this has a simple solution, but I haven't been able to find it so far. Provided an InnoDB MySQL database with the isolation level set to SERIALIZABLE, and given the following operation: BEGIN WORK; SELECT * FROM users WHERE userID=1; UPDATE users SET credits=100 WHERE userID=1; COMMIT; I would like to make sure that as soon as the select inside the transaction is issued, the row corresponding to userID=1 is locked for reads until the transaction is done. As it stands now, UPDATEs to this row will wait for the transaction to be finished if it is in process, but SELECTs simply will read the previous value. I understand this is the expected behaviour in this case, but I wonder if there is a way to lock the row in such a way that SELECTs will also wait until the transaction is finished to return the values? The reason I'm looking for that is that at some point, and with enough concurrent users, it could happen that while the previous transaction is in process someone else reads the "credits" to calculate something else. Ideally the code run by that someone else should wait for the transaction to finish to use the new value, because otherwise it could lead to irreversible desync issues. Note that I don't want to lock the entire table for reads, just the specific row. Also, I could add a boolean "locked" field to the tables and set it to 1 every time I'm starting a transaction but I don't really feel this is the most elegant solution here, unless there is absolutely no other way to handle this through mysql directly.
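
    One common approach is sketched below (table and column names come from the question; whether it helps depends on the other sessions also using locking reads): take the row lock explicitly with SELECT ... FOR UPDATE, so any other session that reads the row with FOR UPDATE or LOCK IN SHARE MODE waits until the transaction ends:

      BEGIN;
      SELECT * FROM users WHERE userID = 1 FOR UPDATE;   -- exclusive row lock, held until COMMIT
      UPDATE users SET credits = 100 WHERE userID = 1;
      COMMIT;

      -- In another session, a locking read now waits for the lock:
      SELECT credits FROM users WHERE userID = 1 LOCK IN SHARE MODE;
      -- Plain non-locking SELECTs (consistent reads) still return the old snapshot instead of waiting.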

    Read the article

  • SQL SERVER – Concurrency Basics – Guest Post by Vinod Kumar

    - by pinaldave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Working on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies – he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod’s own voice. Learning is always fun when it comes to SQL Server and learning the basics again can be more fun. I did write about Transaction Logs and recovery over my blogs and the concept of simplifying the basics is a challenge. In the real world we always see checks and queues for a process – say railway reservation, banks, customer supports etc there is a process of line and queue to facilitate everyone. Shorter the queue higher is the efficiency of system (a.k.a higher is the concurrency). Every database does implement this using checks like locking, blocking mechanisms and they implement the standards in a way to facilitate higher concurrency. In this post, let us talk about the topic of Concurrency and what are the various aspects that one needs to know about concurrency inside SQL Server. Let us learn the concepts as one-liners: Concurrency can be defined as the ability of multiple processes to access or change shared data at the same time. The greater the number of concurrent user processes that can be active without interfering with each other, the greater the concurrency of the database system. Concurrency is reduced when a process that is changing data prevents other processes from reading that data or when a process that is reading data prevents other processes from changing that data. Concurrency is also affected when multiple processes are attempting to change the same data simultaneously. Two approaches to managing concurrent data access: Optimistic Concurrency Model Pessimistic Concurrency Model Concurrency Models Pessimistic Concurrency Default behavior: acquire locks to block access to data that another process is using. Assumes that enough data modification operations are in the system that any given read operation is likely affected by a data modification made by another user (assumes conflicts will occur). Avoids conflicts by acquiring a lock on data being read so no other processes can modify that data. Also acquires locks on data being modified so no other processes can access the data for either reading or modifying. Readers block writer, writers block readers and writers. Optimistic Concurrency Assumes that there are sufficiently few conflicting data modification operations in the system that any single transaction is unlikely to modify data that another transaction is modifying. Default behavior of optimistic concurrency is to use row versioning to allow data readers to see the state of the data before the modification occurs. Older versions of the data are saved so a process reading data can see the data as it was when the process started reading and not affected by any changes being made to that data. Processes modifying the data is unaffected by processes reading the data because the reader is accessing a saved version of the data rows. Readers do not block writers and writers do not block readers, but, writers can and will block writers. Transaction Processing A transaction is the basic unit of work in SQL Server. 
Transaction consists of SQL commands that read and update the database but the update is not considered final until a COMMIT command is issued (at least for an explicit transaction: marked with a BEGIN TRAN and the end is marked by a COMMIT TRAN or ROLLBACK TRAN). Transactions must exhibit all the ACID properties of a transaction. ACID Properties Transaction processing must guarantee the consistency and recoverability of SQL Server databases. Ensures all transactions are performed as a single unit of work regardless of hardware or system failure. A – Atomicity C – Consistency I – Isolation D- Durability Atomicity: Each transaction is treated as all or nothing – it either commits or aborts. Consistency: ensures that a transaction won’t allow the system to arrive at an incorrect logical state – the data must always be logically correct.  Consistency is honored even in the event of a system failure. Isolation: separates concurrent transactions from the updates of other incomplete transactions. SQL Server accomplishes isolation among transactions by locking data or creating row versions. Durability: After a transaction commits, the durability property ensures that the effects of the transaction persist even if a system failure occurs. If a system failure occurs while a transaction is in progress, the transaction is completely undone, leaving no partial effects on data. Transaction Dependencies In addition to supporting all four ACID properties, a transaction might exhibit few other behaviors (known as dependency problems or consistency problems). Lost Updates: Occur when two processes read the same data and both manipulate the data, changing its value and then both try to update the original data to the new value. The second process might overwrite the first update completely. Dirty Reads: Occurs when a process reads uncommitted data. If one process has changed data but not yet committed the change, another process reading the data will read it in an inconsistent state. Non-repeatable Reads: A read is non-repeatable if a process might get different values when reading the same data in two reads within the same transaction. This can happen when another process changes the data in between the reads that the first process is doing. Phantoms: Occurs when membership in a set changes. It occurs if two SELECT operations using the same predicate in the same transaction return a different number of rows. Isolation Levels SQL Server supports 5 isolation levels that control the behavior of read operations. Read Uncommitted All behaviors except for lost updates are possible. Implemented by allowing the read operations to not take any locks, and because of this, it won’t be blocked by conflicting locks acquired by other processes. The process can read data that another process has modified but not yet committed. When using the read uncommitted isolation level and scanning an entire table, SQL Server can decide to do an allocation order scan (in page-number order) instead of a logical order scan (following page pointers). If another process doing concurrent operations changes data and move rows to a new location in the table, the allocation order scan can end up reading the same row twice. Also can happen if you have read a row before it is updated and then an update moves the row to a higher page number than your scan encounters later. 
Performing an allocation order scan under Read Uncommitted can cause you to miss a row completely – can happen when a row on a high page number that hasn’t been read yet is updated and moved to a lower page number that has already been read. Read Committed Two varieties of read committed isolation: optimistic and pessimistic (default). Ensures that a read never reads data that another application hasn’t committed. If another transaction is updating data and has exclusive locks on data, your transaction will have to wait for the locks to be released. Your transaction must put share locks on data that are visited, which means that data might be unavailable for others to use. A share lock doesn’t prevent others from reading but prevents them from updating. Read committed (snapshot) ensures that an operation never reads uncommitted data, but not by forcing other processes to wait. SQL Server generates a version of the changed row with its previous committed values. Data being changed is still locked but other processes can see the previous versions of the data as it was before the update operation began. Repeatable Read This is a Pessimistic isolation level. Ensures that if a transaction revisits data or a query is reissued the data doesn’t change. That is, issuing the same query twice within a transaction cannot pickup any changes to data values made by another user’s transaction because no changes can be made by other transactions. However, this does allow phantom rows to appear. Preventing non-repeatable read is a desirable safeguard but cost is that all shared locks in a transaction must be held until the completion of the transaction. Snapshot Snapshot Isolation (SI) is an optimistic isolation level. Allows for processes to read older versions of committed data if the current version is locked. Difference between snapshot and read committed has to do with how old the older versions have to be. It’s possible to have two transactions executing simultaneously that give us a result that is not possible in any serial execution. Serializable This is the strongest of the pessimistic isolation level. Adds to repeatable read isolation level by ensuring that if a query is reissued rows were not added in the interim, i.e, phantoms do not appear. Preventing phantoms is another desirable safeguard, but cost of this extra safeguard is similar to that of repeatable read – all shared locks in a transaction must be held until the transaction completes. In addition serializable isolation level requires that you lock data that has been read but also data that doesn’t exist. Ex: if a SELECT returned no rows, you want it to return no. rows when the query is reissued. This is implemented in SQL Server by a special kind of lock called the key-range lock. Key-range locks require that there be an index on the column that defines the range of values. If there is no index on the column, serializable isolation requires a table lock. Gets its name from the fact that running multiple serializable transactions at the same time is equivalent of running them one at a time. Now that we understand the basics of what concurrency is, the subsequent blog posts will try to bring out the basics around locking, blocking, deadlocks because they are the fundamental blocks that make concurrency possible. Now if you are with me – let us continue learning for SQL Server Locking Basics. 
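
    To make the pessimistic levels above concrete, here is a minimal sketch (the dbo.Accounts table and values are hypothetical) of how REPEATABLE READ holds shared locks until the end of the transaction:

      -- Session 1
      SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
      BEGIN TRANSACTION;
      SELECT Balance FROM dbo.Accounts WHERE AccountID = 1;   -- takes and keeps a shared lock
      -- Session 2: UPDATE dbo.Accounts SET Balance = 0 WHERE AccountID = 1;  -- blocks until Session 1 ends
      COMMIT TRANSACTION;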
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Concurrency

    Read the article

  • INSERT and transaction serialization in PostgreSQL

    - by Alexander
    I have a question. The transaction isolation level is set to serializable. When one user opens a transaction and INSERTs or UPDATEs data in "table1", and then another user opens a transaction and tries to INSERT data into the same table, does the second user need to wait until the first user commits the transaction?

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >