Search Results

Search found 2359 results on 95 pages for 'transaction'.

Page 15/95

  • Why doesn't the Access database update?

    - by Ryan
    Recently I met a strange problem; see the code snippet below:

        var
          sqlCommand: string;
          connection: TADOConnection;
          qry: TADOQuery;
        begin
          connection := TADOConnection.Create(nil);
          try
            connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False';
            connection.Open();
            qry := TADOQuery.Create(nil);
            try
              qry.Connection := connection;
              qry.SQL.Text := 'Select * from aaa';
              qry.Open;
              qry.Append;
              qry.FieldByName('TestField1').AsString := 'test';
              qry.Post;
              beep;
            finally
              qry.Free;
            end;
          finally
            connection.Free;
          end;
        end;

    First, create a new Access database named test.mdb and put it in the directory of this test project; in it, create a new table named aaa with a single text-type field named TestField1. Set a breakpoint at the "beep" line, then launch the test application under the IDE in debug mode. When the IDE stops at the breakpoint (qry.Post has been executed), open test.mdb in Microsoft Access and open table aaa: you will find no changes in it. If you let the IDE continue running by pressing F9, a new record is inserted into table aaa; but if you press Ctrl+F2 to terminate the application at the breakpoint, no record will have been inserted. Under normal circumstances, a new record should be inserted into table aaa once qry.Post has executed. Can anyone explain this problem? It has troubled me for a long time. Thanks! BTW, the IDE is Delphi 2010, and the Access MDB file was created with Microsoft Access 2007 under Windows 7.

    Read the article

  • db2 jdbc driver does not release table locks

    - by as
    Situation: we have a web service running on Tomcat, accessing a DB2 database on AS400 using the JTOpen drivers over JNDI connections handled by Tomcat. For transaction handling and database access we use Spring. For each select, the system takes a JDBC connection from JNDI (i.e. from the connection pool), does the selection, and at the end closes the ResultSet and Statement and releases the Connection, in that order. That works fine: the shared lock on the table disappears. But when we do an update the same way as the select (except that there is no ResultSet object in that situation), the lock on the table stays even after the Connection is released back to JNDI. If we set maxIdle=0 for the number of connections in the JNDI configuration, the problem disappears, but this degrades performance; we have around 100 users online on that service and need a few connections to stay alive in the pool. What do you suggest?
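
    One thing worth checking (a sketch under the assumption that the update path leaves its unit of work open on the pooled connection, so DB2 keeps the lock until the physical close): commit or roll back explicitly before the connection goes back to the pool. Table and column names below are invented:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import javax.sql.DataSource;

        public class OrderDao {
            private final DataSource dataSource; // the JNDI-managed pool

            public OrderDao(DataSource dataSource) {
                this.dataSource = dataSource;
            }

            public void updateStatus(long id, String status) throws Exception {
                try (Connection con = dataSource.getConnection()) {
                    con.setAutoCommit(false);
                    try (PreparedStatement ps = con.prepareStatement(
                             "UPDATE orders SET status = ? WHERE id = ?")) {
                        ps.setString(1, status);
                        ps.setLong(2, id);
                        ps.executeUpdate();
                        con.commit(); // without an explicit end, the lock can outlive the pooled "close"
                    } catch (Exception e) {
                        con.rollback();
                        throw e;
                    }
                } // close() here only returns the connection to the pool
            }
        }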

    Read the article

  • django multiprocess problem

    - by iKiR
    I have a Django application, running under lighttpd via FastCGI. The FCGI running script looks like:

        python manage.py runfcgi socket=<path>/main.socket method=prefork \
            pidfile=<path>/server.pid \
            minspare=5 maxspare=10 maxchildren=10 maxrequests=500

    I use SQLite, so I have 10 processes which all work with the same DB. Next, I have two views:

        def view1(request):
            ...
            obj, created = MyModel.objects.get_or_create(id=1)
            obj.param1 = <some value>
            obj.save()

        def view2(request):
            ...
            obj, created = MyModel.objects.get_or_create(id=1)
            obj.param2 = <some value>
            obj.save()

    If these views are executed in two different processes, I sometimes end up with a MyModel instance in the DB with id=1 and either param1 or param2 updated, but not both; it depends on which process ran first. (Of course in real life the id changes, but sometimes two processes execute these two views with the same id.) The question is: what should I do to get an instance with both param1 and param2 updated? I need something for merging changes made in different processes. One option is to create an interprocess lock object, but then the views would execute sequentially rather than simultaneously, so I am asking for help.
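
    Framework aside, the classic cure for this lost-update pattern is to write only the column that changed, instead of a read-modify-write of the whole row. A sketch of the idea in JDBC terms (table and column names invented):

        import java.sql.Connection;
        import java.sql.PreparedStatement;

        public class ColumnUpdate {
            // The database applies this atomically, so two processes touching
            // different columns of the same row cannot clobber each other.
            public static void setParam1(Connection con, long id, String value) throws Exception {
                try (PreparedStatement ps = con.prepareStatement(
                         "UPDATE mymodel SET param1 = ? WHERE id = ?")) {
                    ps.setString(1, value);
                    ps.setLong(2, id);
                    ps.executeUpdate();
                }
            }
        }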

    Read the article

  • Spring @Transactional Annotation Best Practice

    - by Thomas Einwaller
    We are currently discussing the best practice for placing the @Transactional annotations in our code. Do you place @Transactional on the DAO classes and/or their methods, or is it better to annotate the service classes which call the DAO objects? Or does it make sense to annotate both "layers"?
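
    For what it's worth, the most common convention is to put @Transactional on the service layer, since a single business operation often spans several DAO calls that must commit or roll back as one unit, while the DAOs themselves stay transaction-agnostic. A minimal sketch of that arrangement (all class and method names invented):

        import org.springframework.stereotype.Service;
        import org.springframework.transaction.annotation.Transactional;

        interface AccountDao { void updateName(long accountId, String newName); }
        interface AuditDao { void recordChange(long accountId, String note); }

        @Service
        public class AccountService {
            private final AccountDao accountDao;
            private final AuditDao auditDao;

            public AccountService(AccountDao accountDao, AuditDao auditDao) {
                this.accountDao = accountDao;
                this.auditDao = auditDao;
            }

            // One business operation, one transaction: both DAO calls
            // commit or roll back together.
            @Transactional
            public void rename(long accountId, String newName) {
                accountDao.updateName(accountId, newName);
                auditDao.recordChange(accountId, "renamed to " + newName);
            }
        }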

    Read the article

  • How do I delete in Django? (mysql transactions)

    - by alex
    If you are familiar with Django, you know that it has an authentication system with a User model. Of course, I have many other tables that have a foreign key to this User model. If I want to delete a user, how do I architect a script (or do it through MySQL itself) to delete every row that is related to this user? My only worry is doing this manually: if I add a table but forget to add it to my DELETE operation, then I have rows that link to a deleted, non-existing User.

    Read the article

  • SQL Server "Long running transaction" performance counter: why no workee?

    - by Sleepless
    Please explain to me the following observation. I have this piece of T-SQL code that I run from SSMS:

        BEGIN TRAN
        SELECT COUNT (*) FROM m WHERE m.[x] = 123456 OR m.[y] IN (SELECT f.x FROM f)
        SELECT COUNT (*) FROM m WHERE m.[x] = 123456 OR m.[y] IN (SELECT f.x FROM f)
        COMMIT TRAN

    The query takes about twenty seconds to run, and I have no other user queries running on the server. Under these circumstances, I would expect the performance counter "MSSQL$SQLInstanceName:Transactions\Longest Transaction Running Time" to rise steadily up to a value of 20 and then drop rapidly. Instead, it rises to around 12 within two seconds, then oscillates between 12 and 14 for the duration of the query, after which it drops again. According to the MS docs, the counter measures "The length of time (in seconds) since the start of the transaction that has been active longer than any other current transaction." But apparently, it doesn't. What gives?

    Read the article

  • Saving child collections with NHibernate

    - by Ben
    Hi, I am in the process of learning NHibernate, so bear with me. I have an Order class and a Transaction class. Order has a one-to-many association with Transaction. The Transactions table in my database has a not-null constraint on the OrderId foreign key.

    Order class:

        public class Order
        {
            public virtual Guid Id { get; set; }
            public virtual DateTime CreatedOn { get; set; }
            public virtual decimal Total { get; set; }
            public virtual ICollection<Transaction> Transactions { get; set; }

            public Order()
            {
                Transactions = new HashSet<Transaction>();
            }
        }

    Order mapping:

        <class name="Order" table="Orders">
          <cache usage="read-write"/>
          <id name="Id">
            <generator class="guid"/>
          </id>
          <property name="CreatedOn" type="datetime"/>
          <property name="Total" type="decimal"/>
          <set name="Transactions" table="Transactions" lazy="false" inverse="true">
            <key column="OrderId"/>
            <one-to-many class="Transaction"/>
          </set>
        </class>

    Transaction class:

        public class Transaction
        {
            public virtual Guid Id { get; set; }
            public virtual DateTime ExecutedOn { get; set; }
            public virtual bool Success { get; set; }
            public virtual Order Order { get; set; }
        }

    Transaction mapping:

        <class name="Transaction" table="Transactions">
          <cache usage="read-write"/>
          <id name="Id" column="Id" type="Guid">
            <generator class="guid"/>
          </id>
          <property name="ExecutedOn" type="datetime"/>
          <property name="Success" type="bool"/>
          <many-to-one name="Order" class="Order" column="OrderId" not-null="true"/>
        </class>

    Really, I don't want a bidirectional association. There is no need for my Transaction objects to reference their Order object directly (I just need to access the transactions of an order). However, I had to add this so that Order.Transactions is persisted to the database.

    Repository:

        public void Update(Order entity)
        {
            using (ISession session = NHibernateHelper.OpenSession())
            {
                using (ITransaction transaction = session.BeginTransaction())
                {
                    session.Update(entity);
                    foreach (var tx in entity.Transactions)
                    {
                        tx.Order = entity;
                        session.SaveOrUpdate(tx);
                    }
                    transaction.Commit();
                }
            }
        }

    My problem is that this will issue an update for every transaction in the order's collection, regardless of whether it has changed or not. What I was trying to get around was having to explicitly save each transaction before saving the order; instead I want to just add the transactions to the order and then save the order:

        public void Can_add_transaction_to_existing_order()
        {
            var orderRepo = new OrderRepository();
            var order = orderRepo.GetById(new Guid("aa3b5d04-c5c8-4ad9-9b3e-9ce73e488a9f"));

            Transaction tx = new Transaction();
            tx.ExecutedOn = DateTime.Now;
            tx.Success = true;

            order.Transactions.Add(tx);
            orderRepo.Update(order);
        }

    Although I have found quite a few articles covering the setup of a one-to-many association, most of them discuss retrieving data rather than persisting it back. Many thanks, Ben
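
    For what it's worth, the usual way to avoid saving each child by hand is a cascade on the collection mapping (in the XML mapping, a cascade attribute such as cascade="all" on the <set>), so that saving the Order also saves newly added Transactions. A sketch of the same idea in Java/Hibernate annotation terms, which NHibernate mirrors (entity names invented to avoid clashes):

        import java.util.HashSet;
        import java.util.Set;
        import javax.persistence.CascadeType;
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.JoinColumn;
        import javax.persistence.ManyToOne;
        import javax.persistence.OneToMany;

        @Entity
        public class PurchaseOrder {
            @Id
            private String id;

            // cascade: saving/updating the order also saves newly added
            // children, so the repository no longer calls save on each one
            @OneToMany(mappedBy = "order", cascade = CascadeType.ALL)
            private Set<LedgerTransaction> transactions = new HashSet<>();
        }

        @Entity
        class LedgerTransaction {
            @Id
            private String id;

            @ManyToOne(optional = false)
            @JoinColumn(name = "OrderId")
            private PurchaseOrder order;
        }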

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx

    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well!

    In the spirit of exploration, let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without any database engine in mind. What would you do? How would you ensure that the transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:

    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Do neither of the above.

    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers, and entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry stating an addition of said amount to B's account. For the sake of space-saving, that can happen as a single entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}.

    The implication of the original question - how do you enforce the non-negative balance rule? - then boils down to:

    1. Insert an entry in the ledger.
    2. Run validation of recent entries.
    3. Insert a reverse entry to roll back the transaction if validation failed.

    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly over too many transactions: you simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.

    Back to some nagging questions though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent and the writes will be serialized; they will not scribble on the same document at the same time. In our case - in choosing a ledger approach - we're not even trying to "update" a document, we're simply adding a document to a collection. So there goes the "no transactions" issue.

    Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the ledger entry that got written. No funky states. Again, writing to the ledger *adds* a document, so there's no inconsistent document state to be had either way.

    Next, we might worry about data loss. Here, Mongo offers several write concerns. A write concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care: in that case, Mongo would just accept your write command and say "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but not actually written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes in a poll about how cute a furry animal is, but not so good for business. There are several other write concerns, varying from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of its members. The first choice is the quickest, as no network coordination is required besides the main writable instance; the others impose extra network and time cost. Depending on your tolerance for latency and read lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From then on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point.

    Where does this leave us? Oh, yes: eventual consistency. By now, we've ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are two options to deal with this.

    First, similar to write concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it burdens the one instance and doesn't make use of the other read-only servers, but it can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?", we query that same writable instance. But B and anyone else in the world can just chill and read from a read-only instance. They have no basis to expect that the ledger has just been written to; as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by designing application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand; they have been trained by many online businesses already that placing an order does not mean the product is outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet).

    The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding logic such that a query against the ledger never returns a transaction newer than, say, 15 minutes whose validation flag is not set. This buys us time in two ways: replication can catch up to all instances by then, and validation rules can run and determine whether this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time as, or 1 ms after, the original one. Effectively, once A or B visits their ledger, both transactions are visible and the overall balance "as of now" reflects no change. The two transactions (attempted/reverted) remain visible, since we do actually account for the attempt.

    Hold on a second. There's a hole in the story: what if several transfers from A are registered, and two independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient funds, even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account-number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks.

    So where are we now with the nagging questions? "No joins": huh? What are those for? "No transactions": you mean no cross-collection and no cross-document transactions? Granted - but we don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
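
    The ledger mechanics above are easy to see in miniature. Here is a self-contained sketch of the idea in plain Java (no Mongo driver, all names invented): entries are only appended, a balance is a sum over entries, and a failed validation is undone by a compensating entry rather than by changing anything already written:

        import java.time.Instant;
        import java.util.ArrayList;
        import java.util.List;

        // Append-only ledger entry, as in the post: {Timestamp, From, To, Amount}.
        record LedgerEntry(Instant at, String fromAccount, String toAccount, long amount) {}

        class Ledger {
            private final List<LedgerEntry> entries = new ArrayList<>();

            // A balance is never stored; it is the sum of an account's entries.
            long balanceOf(String account) {
                long sum = 0;
                for (LedgerEntry e : entries) {
                    if (account.equals(e.toAccount()))   sum += e.amount();
                    if (account.equals(e.fromAccount())) sum -= e.amount();
                }
                return sum;
            }

            // Insert the entry, validate, and compensate instead of updating:
            // a failed transfer leaves two visible entries that net to zero.
            boolean transfer(String from, String to, long amount) {
                entries.add(new LedgerEntry(Instant.now(), from, to, amount));
                if (balanceOf(from) < 0) {
                    entries.add(new LedgerEntry(Instant.now(), to, from, amount));
                    return false;
                }
                return true;
            }
        }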

    Read the article

  • How many inserts can you have in a SQL transaction?

    - by Mav
    I have a task that will require me to use a transaction to ensure that many inserts either all complete or the entire update is rolled back. I am concerned about the amount of data that needs to be inserted in this transaction and whether it will have a negative effect on the server. We are looking at about 10,000 records in table1 and 60,0000 records in table2. Is this safe to do in a single transaction?

    Read the article

  • Is there something like a "long running offline transaction" for NHibernate or any other ORM?

    - by Vilx-
    In essence this is a follow-up to this question. I'm beginning to feel that I should give up the whole idea, but I'll give it one more shot. What I want is pretty much like a DB transaction. It should track my changes to the DB and then in the end allow me to either commit or roll them back. If I insert an object, I should get it back in my next (appropriate) SELECT query. If I delete it, future SELECT queries should not return it. Etc. But there is one catch: this transaction would be very long-running. It would start when the user opened a form (I'm talking about Windows Forms here), and the commit/rollback would happen when the user closed it (with OK/Cancel). So it could take anywhere between seconds and days. This requirement rules out a standard DB transaction, because that would lock the tables/rows it touched, and other users wouldn't be able to use the system. Also, the transaction should not commit ANY changes to the DB until it is really committed. So if one user makes some changes, others don't see them until the OK button is hit. This prevents errors in case the computer crashes or is disconnected from the network. I'm quite OK if the solution puts constraints on my model (I'm using MSSQL 2008, btw). I can design the DB/code any way I like. I'm also fine with the idea that a commit could fail because someone already modified one of the objects my transaction touched. Is there anything like this? I looked at NHibernate.Burrow, but I'm not sure that's what I want. Added: It's the very beginning of the project, so I'm not tied to NHibernate. I started out with it, but I can still change easily.

    Read the article

  • How to make an Access database where both users with and without an ID number can make a transaction

    - by louise
    I am trying to create an Access 2007 database that allows staff who already have ID numbers to make a transaction, and also allows guest users who do not have an ID number to make a transaction. What is the best way to do this in Access? A transaction involves taking an item out of inventory; therefore, if one user (staff or external) has an item out of inventory, no other user can get hold of that item. Thanks, any ideas would be most appreciated!

    Read the article

  • What transaction manager should I use for JDBC template when using JPA?

    - by Sajid
    I am using the standard JPA transaction manager for my JPA transactions. However, now I want to add some JDBC entities which will share the same datasource. How can I make the JDBC operations transactional with Spring transactions? Do I need to switch to the JTA transaction manager? Is it possible to use both JPA and JDBC transactional services with the same datasource? Even better, is it possible to mix these two kinds of transactions?
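
    For what it's worth, Spring's JpaTransactionManager is documented to also expose each JPA transaction's JDBC connection to plain DataSource-based code, so a JdbcTemplate working against the same DataSource joins the @Transactional unit of work without JTA. A minimal configuration sketch (bean names invented):

        import javax.persistence.EntityManagerFactory;
        import javax.sql.DataSource;
        import org.springframework.context.annotation.Bean;
        import org.springframework.context.annotation.Configuration;
        import org.springframework.jdbc.core.JdbcTemplate;
        import org.springframework.orm.jpa.JpaTransactionManager;
        import org.springframework.transaction.PlatformTransactionManager;
        import org.springframework.transaction.annotation.EnableTransactionManagement;

        @Configuration
        @EnableTransactionManagement
        public class TxConfig {
            @Bean
            public PlatformTransactionManager transactionManager(EntityManagerFactory emf,
                                                                 DataSource dataSource) {
                JpaTransactionManager tm = new JpaTransactionManager(emf);
                tm.setDataSource(dataSource); // lets JdbcTemplate join the same transaction
                return tm;
            }

            @Bean
            public JdbcTemplate jdbcTemplate(DataSource dataSource) {
                return new JdbcTemplate(dataSource);
            }
        }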

    Read the article

  • Would this prevent the row from being read during the transaction?

    - by acidzombie24
    I remember an example where reading in a transaction and then writing the data back is not safe, because another transaction may read from or write to the row in between. So I would like to check the date and prevent the row from being modified or read until my transaction is finished. Would this do the trick?

        UPDATE tbl SET id = id WHERE date > expire_date AND id = @id

    And are there any SQL variants this will not work on? Note: date > expire_date happens to be my condition; it could be anything. Would this prevent other transactions from reading the row until I commit or roll back?

    Read the article

  • How to configure a default global EJB transaction attribute in JBoss Server?

    - by seven
    My project needs to migrate from OC4J to JBoss, but it seems the default EJB transaction attribute differs between them. For OC4J: if you do not specify any transaction attributes for an EJB method, OC4J uses default transaction attributes - Required for CMP 2.0, NotSupported for MDBs, and Supports for all other types of EJBs (refer to the official doc). For JBoss: the default for all EJB types is Required (maybe; refer to an unofficial site). To migrate my project with less effort, how do I configure a default global EJB transaction attribute, e.g. Supports, in JBoss Server?
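
    Not a global switch, but as a point of reference: with EJB 3 annotations the default can at least be pinned per bean at the class level, which makes a migration sweep mechanical. A sketch (bean name invented):

        import javax.ejb.Stateless;
        import javax.ejb.TransactionAttribute;
        import javax.ejb.TransactionAttributeType;

        @Stateless
        @TransactionAttribute(TransactionAttributeType.SUPPORTS) // applies to every business method of this bean
        public class ReportBean {
            public String buildReport() {
                return "...";
            }
        }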

    Read the article

  • How to avoid StaleObjectStateException when transaction updates thousands of entities?

    - by ThinkFloyd
    We are using Hibernate 3.6.0.Final with JPA 2 and Spring 3.0.5 for a large-scale enterprise application running on Tomcat 7 and MySQL 5.5. Most transactions in the application live for less than a second and update 5-10 entities, but in some use cases we need to update more than 10-20K entities in a single transaction, which takes a few minutes, and hence more than 70% of the time such a transaction fails with StaleObjectStateException because some of those entities were updated by another transaction. We generally maintain a version column in all tables, and on StaleObjectStateException we generally retry, but since these transactions are so long, I am not sure that retrying will ever let us escape the StaleObjectStateException. Also, a lot of activity keeps updating these entities during busy hours, so we cannot take a pessimistic approach, because it could potentially halt many activities in the system. Please suggest how to fix this long-transaction issue; we cannot spawn thousands of small independent transactions, because we cannot afford messed-up data in case some transactions fail and some succeed.
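
    Whatever the batching strategy ends up being, the retry scaffolding around an optimistic-lock conflict usually looks something like this sketch, assuming Spring's TransactionTemplate; all names are invented, and each attempt must re-read fresh entity state inside the transaction (retrying with stale in-memory objects would just fail again):

        import org.hibernate.StaleObjectStateException;
        import org.springframework.transaction.support.TransactionTemplate;

        public class RetryingUpdater {
            private final TransactionTemplate txTemplate;

            public RetryingUpdater(TransactionTemplate txTemplate) {
                this.txTemplate = txTemplate;
            }

            // Note: Spring may also surface the conflict wrapped as
            // ObjectOptimisticLockingFailureException.
            public void updateWithRetry(Runnable work, int maxAttempts) {
                for (int attempt = 1; ; attempt++) {
                    try {
                        txTemplate.execute(status -> {
                            work.run();
                            return null;
                        });
                        return;
                    } catch (StaleObjectStateException e) {
                        if (attempt >= maxAttempts) throw e;
                        try {
                            Thread.sleep(50L * attempt); // small, growing backoff
                        } catch (InterruptedException ie) {
                            Thread.currentThread().interrupt();
                            throw e;
                        }
                    }
                }
            }
        }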

    Read the article

  • How to close the connection after setting Transaction to Nothing or Commit/Rollback

    - by user1957271
    I am developing a DAL class for DB operations:

        Public Sub StartTransaction()
            Dim objConnection As SqlConnection = EstablishConnection()
            objConnection.Open()
            Me.Transaction = objConnection.BeginTransaction()
        End Sub

        Public Sub CommitTransaction()
            Me.Transaction.Commit()
        End Sub

        Public Sub RollBackTransaction()
            Me.Transaction.Rollback()
        End Sub

    After starting the transaction, when we commit or roll back and set the transaction object to Nothing, it doesn't close the connection attached to the transaction. How do I close the connection attached to this transaction?

    Read the article

  • How to deal with the warning: "Uncommittable transaction is detected at the end of the batch. The transaction is rolled back."

    - by VishnuTiwariBlog
    Hi, if you are integrating with SQL Server and dealing with batch messages, you may encounter this problem, and it is hard to avoid. The reason is contention for resources. If your batch contains four messages and all four have to be written to SQL Server, then four processes will contend for the SQL Server table and resources at the same time, and the likely result is that a few of your transactions will be left uncommitted. If you are not handling dehydration (i.e. not modifying the default dehydration properties), your orchestration will dehydrate and go for a retry. If retry is set to every five minutes, then after five minutes the port will send the message to the database.

    The reason for writing this post was that I did not want to see so many DEHYDRATED messages, and this was happening because host throttling was not set. As soon as the BizTalk process finds that SQL resources are unavailable, it dehydrates that process, and the process goes for a retry. Contention for resources is unavoidable, though we can fine-tune the dehydration settings: if you increase the time that an orchestration can be blocked at a subscription before being dehydrated, you possibly give the BizTalk engine more time to handle SQL resource availability. At least I solved the problem by fine-tuning the dehydration properties. Below is the section of config you need to add to BTSNTsvc.exe.config:

        <?xml version="1.0" ?>
        <configuration>
          <configSections>
            <section name="xlangs" type="Microsoft.XLANGs.BizTalk.CrossProcess.XmlSerializationConfigurationSectionHandler, Microsoft.XLANGs.BizTalk.CrossProcess" />
          </configSections>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <probing privatePath="BizTalk Assemblies;Developer Tools;Tracking" />
            </assemblyBinding>
          </runtime>
          <xlangs>
            <Configuration>
              <Dehydration MaxThreshold="1800" MinThreshold="1" ConstantThreshold="-1">
                <VirtualMemoryThrottlingCriteria OptimalUsage="900" MaximalUsage="1300" IsActive="true" />
                <PrivateMemoryThrottlingCriteria OptimalUsage="50" MaximalUsage="350" IsActive="true" />
                <PhysicalMemoryThrottlingCriteria OptimalUsage="50" MaximalUsage="350" IsActive="false" />
              </Dehydration>
            </Configuration>
          </xlangs>
        </configuration>

    Read the article

  • Database Error Handling: What if You Have to Call an Outside Service and the Transaction Fails?

    - by Ngu Soon Hui
    We all know that we can always wrap our database call in a transaction (with or without a proper ORM), in a form like this:

        $con = Propel::getConnection(EventPeer::DATABASE_NAME);
        try {
            $con->begin();
            // do your update, save, delete or whatever here.
            $con->commit();
        } catch (PropelException $e) {
            $con->rollback();
            throw $e;
        }

    This guarantees that if the transaction fails, the database is restored to the correct state. But the problem is, when I do a transaction, I may need to update another database as well (an example would be: when I update an entry in a column in databaseA, another entry in a column in databaseB must be updated). How do I handle this case? Let's say this is my code, with three databases that need to be updated (dbA, dbB, dbC):

        $con = Propel::getConnection("dbA");
        try {
            $con->begin();
            // update to dbA
            // update to dbB
            // update to dbC
            $con->commit();
        } catch (PropelException $e) {
            $con->rollback();
            throw $e;
        }

    If dbC fails, I can roll back dbA, but I can't roll back dbB. I think this problem should be database-independent, and since I am using an ORM, it should be ORM-independent as well.

    Update: some of the database transactions are wrapped in the ORM, some use naked PDO, OLE DB (or whatever bare-minimum database calls the language provides), so my solution has to take care of this too. Any idea?
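
    The honest answers are a distributed (two-phase) commit where the infrastructure supports it, or compensation logic where it doesn't. For reference, here is the "best efforts" pattern the snippet is reaching for, written out in JDBC terms (all names invented): it narrows the failure window but cannot eliminate it:

        import java.sql.Connection;
        import java.util.List;

        @FunctionalInterface
        interface DbWork {
            void run() throws Exception;
        }

        public class MultiDbUnitOfWork {
            // Open a local transaction on every database, do all the work,
            // then commit them one after another. A crash between commits can
            // still leave the databases inconsistent; only a true distributed
            // (XA/2PC) transaction closes that window.
            public void runAcross(List<Connection> connections, DbWork work) throws Exception {
                for (Connection c : connections) c.setAutoCommit(false);
                try {
                    work.run();
                    for (Connection c : connections) c.commit(); // the vulnerable window
                } catch (Exception e) {
                    for (Connection c : connections) {
                        try { c.rollback(); } catch (Exception ignored) { }
                    }
                    throw e;
                }
            }
        }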

    Read the article

  • How Can I Find What's Causing My Transaction to Get Promoted?

    - by Damian Powell
    I have a web site which serves web services (a mixture of .asmx and WCF) and mostly uses LINQ to SQL and System.Transactions. Occasionally we see the transaction get promoted to a distributed transaction, which causes problems because our web servers are isolated from our databases in such a way that it is not possible for us to use MSDTC. I have configured tracing for System.Transactions by adding the following to my web.config:

        <system.diagnostics>
          <sources>
            <source name="System.Transactions" switchValue="Information">
              <listeners>
                <add name="tx"
                     type="System.Diagnostics.XmlWriterTraceListener"
                     initializeData="tx.log" />
              </listeners>
            </source>
          </sources>
        </system.diagnostics>

    It's very interesting and shows me when the transaction is promoted, but I find that it doesn't really help me discover why. Is there an equivalent tracing mechanism for ADO.NET that will show me when connections are created, including the variables that affect pooling (user, connection string, transaction scope)?

    Read the article

  • Wireless suddenly dropping with a Ralink RT2870

    - by cwwk
    I have a Linksys WUSB600N v1 Dual-Band Wireless-N network adapter (a Ralink RT2870 USB dongle) that worked flawlessly in 11.10. Since upgrading, I can't keep a connection for more than five minutes. The wild world of Google was unable to provide a solution, and I would rather not downgrade, although that remains a possibility. Results from syslog:

        slack@slack:~$ tail /var/log/syslog
        Apr 26 20:26:10 slack AptDaemon: INFO: Initializing daemon
        Apr 26 20:26:10 slack AptDaemon.PackageKit: INFO: Initializing PackageKit compat layer
        Apr 26 20:26:10 slack dbus[972]: [system] Successfully activated service 'org.freedesktop.PackageKit'
        Apr 26 20:26:10 slack AptDaemon.PackageKit: INFO: Initializing PackageKit transaction
        Apr 26 20:26:10 slack AptDaemon.Worker: INFO: Simulating trans: /org/debian/apt/transaction/aaed4e38eb3c41ad86d2bab6ca03ee7c
        Apr 26 20:26:10 slack AptDaemon.Worker: INFO: Processing transaction /org/debian/apt/transaction/aaed4e38eb3c41ad86d2bab6ca03ee7c
        Apr 26 20:26:12 slack dbus[972]: [system] Activating service name='com.ubuntu.SystemService' (using servicehelper)
        Apr 26 20:26:12 slack dbus[972]: [system] Successfully activated service 'com.ubuntu.SystemService'
        Apr 26 20:30:26 slack AptDaemon.PackageKit: INFO: Get updates()
        Apr 26 20:30:27 slack AptDaemon.Worker: INFO: Finished transaction /org/debian/apt/transaction/aaed4e38eb3c41ad86d2bab6ca03ee7c

    Any suggestions?

    Read the article

  • Where to enlist a transaction with a parent-child delete (repository or BLL)?

    - by Caroline Showden
    My app uses a business layer which calls a repository which uses LINQ to SQL. I have an Item class that has an enum Type property and an ItemDetail property. I need to implement a delete method that: (1) always deletes the Item; and (2) if the Item.Type is XYZ and the ItemDetail is not null, deletes the ItemDetail as well. My question is where this logic should be housed. If I put it in my business logic, which I would prefer, it involves two separate repository calls, each of which uses a separate DataContext, so I would have to wrap both calls in a System.Transactions transaction, which (in SQL 2005) gets promoted to a distributed transaction, which is not ideal. I can move it all into a single repository call, where the transaction is handled implicitly by the DataContext, but I feel that this is really business logic, so it does not belong in the repository. Thoughts? Carrie

    Read the article

  • SAP Shortcut file - How to redirect to specific transaction screen in SAP?

    - by Kiru
    Problem: how do I redirect the user to a specific executed transaction screen in SAP? I generated a SAP shortcut and am able to redirect the user to a specific transaction screen; it is also possible to prefill the required input parameters. The corresponding line in the shortcut is:

        Command=AB12 RIWO00-input1=200001212;

    where AB12 is the transaction and input1 is the input parameter. This opens the SAP screen at transaction AB12, with the input parameter filled in. But it still requires the user to press Enter explicitly (or click the Execute button) after opening the screen through the shortcut file. Is it possible to include that Enter in the shortcut file as well? Thank you :)

    Read the article

  • MySQL Simple query gives "Query was empty". Transaction help needed I think.

    - by user129609
    Hi, I'm trying to do a simple transaction in MySQL:

        delimiter go

        start transaction;

        BEGIN
            DECLARE EXIT HANDLER FOR SQLEXCEPTION, SQLWARNING, NOT FOUND ROLLBACK;
            INSERT INTO jext_categories (Name) VALUES ('asdfas');
            INSERT INTO jext_categories (Name) VALUES ('asdfas2');
        END;

        commit;

        SELECT * FROM jext_categories;

        go

        delimiter ;

    but I keep getting an error saying "Query was empty". Could someone please tell me what I am doing wrong? Also, what is the proper format for doing a transaction in MySQL? Thanks!

    Read the article
