Search Results

Search found 2359 results on 95 pages for 'transaction'.


  • SQL SERVER – How to Recover SQL Database Data Deleted by Accident

    - by Pinal Dave
    In Repair a SQL Server database using a transaction log explorer, I showed how to use ApexSQL Log, a SQL Server transaction log viewer, to recover a SQL Server database after a disaster. In this blog, I’ll show you how to use another SQL Server disaster recovery tool from ApexSQL in a situation when data is accidentally deleted. You can download ApexSQL Recover here, install, and play along. With a good SQL Server disaster recovery strategy, data recovery is not a problem. You have a reliable full database backup with valid data, a full database backup and subsequent differential database backups, or a full database backup and a chain of transaction log backups. But not all situations are ideal. Here we’ll address some sub-optimal scenarios where you can still successfully recover data.

    If you have only a full database backup

    This is the least optimal SQL Server disaster recovery strategy, as it doesn’t ensure minimal data loss. For example, data was deleted on Wednesday. Your last full database backup was created on Sunday, three days before the records were deleted. By using the full database backup created on Sunday, you will be able to recover the SQL database records that existed in the table on Sunday. Any records inserted into the table on Monday or Tuesday will be lost forever. The same goes for records modified in this period: this method will not bring back modified records, only the old records that existed on Sunday. If you restore this full database backup, all your changes (intentional and accidental) will be lost and the database will be reverted to the state it had on Sunday. What you have to do is compare the records that were in the table on Sunday to the records on Wednesday, create a synchronization script, and execute it against the Wednesday database.

    If you have a full database backup followed by differential database backups

    Let’s say the situation is the same as in the example above, only you create a differential database backup every night. Use the full database backup created on Sunday and the last differential database backup (created on Tuesday). In this scenario, you will lose only the data inserted and updated after the differential backup created on Tuesday.

    If you have a full database backup and a chain of transaction log backups

    This is the SQL Server disaster recovery strategy that provides minimal data loss. With a full chain of transaction logs, you can recover the SQL database to an exact point in time. To get optimal results, you have to know exactly when the records were deleted, because restoring to a later point will not bring back the records. This method requires restoring the full database backup first. If you have any differential database backup created after the last full database backup, restore the most recent one. Then, restore the transaction log backups, one by one, in the order they were created, starting with the first one created after the restored differential database backup (a T-SQL sketch of this sequence appears at the end of this overview). Now the table will be in the state it was in before the records were deleted. You have to identify the deleted records, script them, and run the script against the original database. Although this method is reliable, it is time-consuming and requires a lot of disk space.

    How to easily recover deleted records?

    The following solution enables you to recover SQL database records even if you have no full or differential database backups and no transaction log backups.
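    As an aside, the transaction-log-chain restore described above looks roughly like this in T-SQL; the database name, backup file names, and the STOPAT timestamp below are all hypothetical:

        -- Restore the last full backup, then the most recent differential, without recovering
        RESTORE DATABASE MyDb FROM DISK = N'D:\backup\MyDb_full.bak' WITH NORECOVERY;
        RESTORE DATABASE MyDb FROM DISK = N'D:\backup\MyDb_diff.bak' WITH NORECOVERY;
        -- Apply log backups in order; stop just before the moment the records were deleted
        RESTORE LOG MyDb FROM DISK = N'D:\backup\MyDb_log_1.trn' WITH NORECOVERY;
        RESTORE LOG MyDb FROM DISK = N'D:\backup\MyDb_log_2.trn'
            WITH STOPAT = '2014-03-12 10:15:00', RECOVERY;
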
    To understand how ApexSQL Recover works, I’ll explain what happens when table data is deleted. Table data is stored in data pages. When you delete table records, they are not immediately removed from the data pages, but marked to be overwritten by new records. Such records are no longer shown as existing, but ApexSQL Recover can read them and create an undo script for them. How long will deleted records stay in the MDF file? It depends on many factors; as time passes, it becomes more likely that the records will be overwritten. The more transactions occur after the deletion, the greater the chance the records will be overwritten and permanently lost. Therefore, it’s recommended to create a copy of the database MDF and LDF files immediately (if you cannot take your database offline until the issue is solved) and run ApexSQL Recover on the copies. Note that a full database backup will not help here, as the records marked for overwriting are not included in the backup.

    First, I’ll delete some records from the Person.EmailAddress table in the AdventureWorks database. I can delete these records in SQL Server Management Studio, or execute a script such as:

        DELETE FROM Person.EmailAddress WHERE BusinessEntityID BETWEEN 70 AND 80

    Then, I’ll start ApexSQL Recover and select From DELETE operation in the Recovery tab. In the Select the database to recover step, first select the SQL Server instance. If it’s not shown in the drop-down list, click the Server icon to the right of the Server drop-down list and browse for the SQL Server instance, or type the instance name manually. Specify the authentication type and select the database in the Database drop-down list.

    In the next step, you’re prompted to add additional data sources. As this can be a tricky step, especially for new users, ApexSQL Recover offers help via the Help me decide option, which guides you through a series of questions about the database transaction log and advises what files to add. If you know that you have no transaction log backups or detached transaction logs, or that the online transaction log file has been truncated after the data was deleted, select No additional transaction logs are available. If you know that you have transaction log backups that contain the delete transactions you want to recover, click Add transaction logs. The online transaction log is listed and selected automatically; click Add to add transaction log backups. It is best if you have a full transaction log chain, as explained above. The next step for this option is to specify the time range. Selecting a narrow time range around the time of deletion will create the recovery script just for the accidentally deleted records. A wide time range might script records deleted on purpose, and you don’t want that. If needed, you can check the generated script and manually remove such records.

    After that, for all data source options, the next step is to select the tables. Be careful here: if you deleted some data from other tables on purpose and don’t want to recover it, don’t select all tables, as ApexSQL Recover will create the INSERT script for them too. The next step offers two options: create a recovery script that will insert the deleted records back into the Person.EmailAddress table, or create a new database, create the Person.EmailAddress table in it, and insert the deleted records there. I’ll select the first one. The recovery process completes, and 11 records are found and scripted, as expected.
    To see the script, click View script. ApexSQL Recover has its own script editor, where you can review, modify, and execute the recovery script. The INSERT statements look like:

        INSERT INTO Person.EmailAddress (BusinessEntityID, EmailAddressID, EmailAddress, rowguid, ModifiedDate)
        VALUES (70, 70, N'[email protected]' COLLATE SQL_Latin1_General_CP1_CI_AS,
                'd62c5b4e-c91f-403f-b630-7b7e0fda70ce', '20030109 00:00:00.000');

    To execute the script, click Execute in the menu. If you want to check whether the records are really back, execute:

        SELECT * FROM Person.EmailAddress WHERE BusinessEntityID BETWEEN 70 AND 80

    As shown, ApexSQL Recover recovers SQL database data after accidental deletes even without a database backup that contains the deleted data and without the relevant transaction log backups. ApexSQL Recover reads the deleted data from the database data file, so this method can be used even for databases in the Simple recovery model. Besides recovering SQL database records deleted by a DELETE statement, ApexSQL Recover can help when records are lost due to a DROP TABLE or TRUNCATE statement, as well as repair a corrupted MDF file that cannot be attached to a SQL Server instance. You can find more information about how to recover lost SQL database data and repair a SQL Server database in the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Refactoring ADO.NET - SqlTransaction vs. TransactionScope

    - by marc_s
    I have "inherited" a little C# method that creates an ADO.NET SqlCommand object and loops over a list of items to be saved to the database (SQL Server 2005). Right now, the traditional SqlConnection/SqlCommand approach is used, and to make sure everything works, the two steps (delete old entries, then insert new ones) are wrapped into an ADO.NET SqlTransaction. using (SqlConnection _con = new SqlConnection(_connectionString)) { using (SqlTransaction _tran = _con.BeginTransaction()) { try { SqlCommand _deleteOld = new SqlCommand(......., _con); _deleteOld.Transaction = _tran; _deleteOld.Parameters.AddWithValue("@ID", 5); _con.Open(); _deleteOld.ExecuteNonQuery(); SqlCommand _insertCmd = new SqlCommand(......, _con); _insertCmd.Transaction = _tran; // add parameters to _insertCmd foreach (Item item in listOfItem) { _insertCmd.ExecuteNonQuery(); } _tran.Commit(); _con.Close(); } catch (Exception ex) { // log exception _tran.Rollback(); throw; } } } Now, I've been reading a lot about the .NET TransactionScope class lately, and I was wondering, what's the preferred approach here? Would I gain anything (readibility, speed, reliability) by switching to using using (TransactionScope _scope = new TransactionScope()) { using (SqlConnection _con = new SqlConnection(_connectionString)) { .... } _scope.Complete(); } What you would prefer, and why? Marc

    Read the article

  • PostgreSQL insert on primary key failing with contention, even at serializable level

    - by Steven Schlansker
    I'm trying to insert or update data in a PostgreSQL DB. The simplest case is a key-value pairing (the actual data is more complicated, but this is the smallest clear example). When you set a value, I'd like it to insert if the key is not there, otherwise update. Sadly Postgres does not have an insert-or-update statement, so I have to emulate it myself. I've been working with the idea of basically SELECTing whether the key exists, and then running the appropriate INSERT or UPDATE. Now clearly this needs to be in a transaction or all manner of bad things could happen. However, this is not working exactly how I'd like it to - I understand that there are limitations to serializable transactions, but I'm not sure how to work around this one. Here's the situation:

        a,b: => set transaction isolation level serializable;
        a:   => select count(1) from table where id=1;  --> 0
        b:   => select count(1) from table where id=1;  --> 0
        a:   => insert into table values(1);            --> 1
        b:   => insert into table values(1);            --> ERROR: duplicate key value violates unique constraint "serial_test_pkey"

    Now I would expect it to throw the usual "couldn't commit due to concurrent update", but I'm guessing that since the inserts are different "rows" this does not happen. Is there an easy way to work around this?
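
    For what it's worth, newer PostgreSQL versions (9.5 and later) have a built-in atomic insert-or-update that sidesteps this race entirely - a minimal sketch, assuming a hypothetical table kv with an integer primary key:

        -- Atomic upsert; no SELECT-then-INSERT race (PostgreSQL 9.5+)
        INSERT INTO kv (id, value)
        VALUES (1, 'hello')
        ON CONFLICT (id)
        DO UPDATE SET value = EXCLUDED.value;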

    Read the article

  • SQL Server Log File Size Management

    - by Rob
    I have all my databases in full recovery and have log backups happening every 15 minutes, so my log files are usually pretty small. The question is: if there is a nightly operation that causes lots of transactions and makes my log files grow, should I shrink them back down afterward? Does having a gigantic log file negatively affect database performance? Disk space isn't an issue at this time.
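
    For reference, if the log does have to be shrunk after the nightly job, a minimal T-SQL sketch (MyDb and MyDb_log are hypothetical names - check sys.database_files for the real logical file name; the target size is in MB):

        USE MyDb;
        -- Confirm the logical name of the log file first
        SELECT name, type_desc FROM sys.database_files;
        -- Shrink the log back down to ~1 GB
        DBCC SHRINKFILE (MyDb_log, 1024);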

    Read the article

  • Advised auditing method for MS SQL to track changes made to a specific table by a specific user?

    - by scape
    What is the best method for tracking changes or logging the queries run against a table by a specific user through Management Studio? I'm using 2008 R2 Express Edition and want to track a single user who logs in through Management Studio and runs queries to make changes manually. I want to see what query was run, and thus determine what was changed and how. I am not interested in restoring the information. I considered Change Tracking, but read that it is not ideal for auditing, and I am also unsure how to read its data. I then considered the bulk-logged option on the database, but I would then have to manage the log files, which may grow huge as the database is used constantly by a web app. Is there a more concise method to do what I want?
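
    For reference, one low-tech option that works on Express Edition is a plain DML trigger writing to an audit table; it records what changed and who changed it, though not the literal query text. A minimal sketch, with a hypothetical dbo.Widgets(WidgetId, Name) table:

        -- Hypothetical audit table
        CREATE TABLE dbo.WidgetAudit (
            AuditId   int IDENTITY PRIMARY KEY,
            ChangedBy sysname  NOT NULL DEFAULT SUSER_SNAME(),
            ChangedAt datetime NOT NULL DEFAULT GETDATE(),
            Action    char(1)  NOT NULL,   -- I, U, or D
            WidgetId  int,
            OldName   nvarchar(100),
            NewName   nvarchar(100)
        );
        GO
        CREATE TRIGGER trg_Widgets_Audit ON dbo.Widgets
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- Updated rows appear in both inserted and deleted;
            -- pure inserts only in inserted, pure deletes only in deleted
            INSERT INTO dbo.WidgetAudit (Action, WidgetId, OldName, NewName)
            SELECT CASE WHEN d.WidgetId IS NULL THEN 'I'
                        WHEN i.WidgetId IS NULL THEN 'D'
                        ELSE 'U' END,
                   COALESCE(i.WidgetId, d.WidgetId),
                   d.Name, i.Name
            FROM inserted i
            FULL OUTER JOIN deleted d ON i.WidgetId = d.WidgetId;
        END;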

    Read the article

  • How do I get save (no exclamation point) semantics in an ActiveRecord transaction?

    - by James A. Rosen
    I have two models, Person and Address, which I'd like to create in a transaction. That is, I want to try to create the Person and, if that succeeds, create the related Address. I would like to use save semantics (return true or false) rather than save! semantics (raise an ActiveRecord::StatementInvalid or not). This doesn't work, because the user.save doesn't trigger a rollback on the transaction:

        class Person
          def save_with_address(address_options = {})
            transaction do
              self.save
              address = Address.build(address_options)
              address.person = self
              address.save
            end
          end
        end

    (Changing the self.save call to an if self.save block around the rest doesn't help, because the Person save still succeeds even when the Address one fails.) And this doesn't work, because it raises the ActiveRecord::StatementInvalid exception out of the transaction block without triggering an ActiveRecord::Rollback:

        class Person
          def save_with_address(address_options = {})
            transaction do
              save!
              address = Address.build(address_options)
              address.person = self
              address.save!
            end
          end
        end

    The Rails documentation specifically warns against catching ActiveRecord::StatementInvalid inside the transaction block. I guess my first question is: why isn't this transaction block... transacting on both saves?

    Read the article

  • How to find long running transactions in Websphere MQ Series?

    - by raistlin
    In a J2EE environment, the MQ server log shows the following:

        Process(954584.5) User(mqm) Program(amqzmuc0)
        AMQ7469: Transactions rolled back to release log space.
        ....

    While increasing the log file size/space might be a temporary fix, the definitive solution must be to identify the culprit process/queue that is causing this long transaction. Is there any solution/tool for this? Note: MQ is used via JMS only.

    Read the article

  • I get "2014 Cannot execute queries while other unbuffered queries are active" when doing exec with PDO

    - by Itay Moav
    I am running a PDO::exec command with multiple updates:

        $MyPdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, true);
        $MyPdo->exec("update t1 set f1=1;update t2 set f1=2");

    I am doing it inside a transaction, and I keep getting:

        SQLSTATE[HY000]: General error: 2014 Cannot execute queries while other unbuffered
        queries are active. Consider using PDOStatement::fetchAll(). Alternatively, if your
        code is only ever going to run against mysql, you may enable query buffering by
        setting the PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute.

    Those are the only queries being run.

    Read the article

  • Php + mysql transactions examples

    - by Donator
    I really haven't found a normal example of a PHP file where MySQL transactions are being used. Can you show me a simple example of that? And one more question: I've already written a lot of code without using transactions. Maybe I can put some PHP function or something in header.php so that if one mysql_query fails, the others fail too? Thank you.
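
    For reference, the SQL side of a transaction is only a few statements; the sketch below uses a hypothetical accounts table, and note that the tables must use a transactional engine such as InnoDB (MyISAM silently ignores transactions):

        START TRANSACTION;
        UPDATE accounts SET balance = balance - 100 WHERE id = 1;
        UPDATE accounts SET balance = balance + 100 WHERE id = 2;
        COMMIT;    -- or ROLLBACK; if either update failed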

    Read the article

  • MSSQL2008: DTC Transaction - Internal abort

    - by Teutales
    Hi all, I'm writing a small replication of my own - a trigger which fires a DTC INSERT to another server. (One reason for my own "replication": the trigger calculates some data while it runs; another: it works from one Express edition to another Express edition.) When I do the initial insert from the same host with Windows authentication, it works fine. But there is a webserver on another host, which uses a SQL Server login (for testing, sa). When this host does the initial insert, I get an internal abort after the enlisting and creating phases in the DTCTransaction event class (Profiler). The odd part: once I first fire it from the same host with Windows authentication, I can fire it from the webserver and it works fine - but after waiting a few minutes it stops working again. Where is my error in reasoning... Thanks! Greetz, Teutales. Here is my initial server script:

        EXEC master.dbo.sp_addlinkedserver
            @server = @Servername,
            @srvproduct = N'SQL Server'
        EXEC master.dbo.sp_addlinkedsrvlogin
            @rmtsrvname = @Servername,
            @locallogin = NULL,
            @useself = N'False',
            @rmtuser = @Serverlogin,
            @rmtpassword = @Serverpwd
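
    One thing worth ruling out (an assumption on my part, not a confirmed diagnosis): DML against a linked server that runs inside an existing transaction - and a trigger always runs inside the firing statement's transaction - requires SET XACT_ABORT ON, or the distributed transaction is aborted. A sketch with hypothetical names:

        CREATE TRIGGER trg_ReplicateInsert ON dbo.SourceTable
        AFTER INSERT
        AS
        BEGIN
            SET XACT_ABORT ON;  -- required for linked-server DML inside a transaction
            INSERT INTO [RemoteServer].[RemoteDb].dbo.TargetTable (Id, Payload)
            SELECT Id, Payload FROM inserted;
        END;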

    Read the article

  • transaction log shipping sql server 2005 to 2008

    - by Andrew Jahn
    I have a reporting setup with SSRS on our SQL Server 2005 database. Because SQL Server 2008 is not supported by the main program which populates our database, we are stuck with 2005 on our prod database. Unfortunately, when I run our weekly check reports, the web interface constantly times out because the server can't do the conversion to PDF. I've read that SQL Server 2008's SSRS is a lot better with memory management. I was wondering: can I do some kind of transaction log shipping (or publication/subscription) from 2005 to 2008? Am I chasing a dream here?
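
    For what it's worth, restoring 2005 backups on a 2008 instance does work in the upward direction - a sketch with hypothetical file names. The caveat (my understanding, worth verifying): cross-version restores cannot use WITH STANDBY, so the 2008 copy stays unreadable until it is recovered, and recovery upgrades the database and ends the 2005 log chain:

        -- On the 2008 instance
        RESTORE DATABASE ReportDb FROM DISK = N'D:\ship\ReportDb_full.bak' WITH NORECOVERY;
        RESTORE LOG ReportDb FROM DISK = N'D:\ship\ReportDb_0001.trn' WITH NORECOVERY;
        -- ...keep applying log backups; to finally read it (one-way, breaks the chain):
        RESTORE DATABASE ReportDb WITH RECOVERY;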

    Read the article

  • FMDB transaction

    - by user142764
    Hi! I use FMDB to wrap SQLite in my app. I haven't found any docs about the use of the methods begin, beginUpdates, commit, finalize, etc. I face some problems in my app which I think are caused by the way I use transactions. Here is what I tried:

        [FMDB beginUpdates]
        - my insert statement -
        [FMDB commit]
        [FMDB finalize]

    It crashes with this log:

        Terminating app due to uncaught exception 'NSInvalidArgumentException', reason:
        '*** -[FMDatabase<0xd705a0 finalize]: called when collecting not enabled'

    Could you please give me an example of how you are using transactions, or point me to a doc? Thanks in advance, Vincent.

    Read the article

  • Rails transaction: save data in multiple models.

    - by smotchkkiss
    my models

        class Auction
          belongs_to :item
          belongs_to :user, :foreign_key => :current_winner_id
          has_many :auction_bids
        end

        class User
          has_many :auction_bids
        end

        class AuctionBid
          belongs_to :user
        end

    current usage

    An item is displayed on the page, the user enters an amount and clicks bid. Controller code might look something like this:

        class MyController
          def bid
            @ab = AuctionBid.new(params[:auction_bid])
            @ab.user = current_user
            if @ab.save
              render :json => {:response => 'YAY!'}
            else
              render :json => {:response => 'FAIL!'}
            end
          end
        end

    desired functionality

    This works great so far! However, I need to ensure a couple of other things happen:

    1. @ab.auction.bid_count needs to be incremented by one.
    2. @ab.user.bid_count needs to be incremented by one.
    3. @ab.auction.current_winner_id needs to be set to @ab.user_id.

    That is, the User and the Auction associated with the AuctionBid need their values updated as well in order for AuctionBid#save to return true.

    Read the article

  • Transaction issue in java with hibernate - latest entries not pulled from database

    - by Gearóid
    Hi, I'm having what seems to be a transactional issue in my application. I'm using Java 1.6 and Hibernate 3.2.5. My application runs a monthly process where it creates billing entries for every user in the database based on their monthly activity. These billing entries are then used to create a Monthly Bill object. The process is:

    1. Get users who have activity in the past month
    2. Create the relevant billing entries for each user
    3. Get the set of billing entries that we've just created
    4. Create a Monthly Bill based on these entries

    Everything works fine until step 3 above. The billing entries are correctly created (I can see them in the database if I add a breakpoint after the billing entry creation method), but they are not pulled out of the database. As a result, an incorrect Monthly Bill is generated. If I run the code again (without clearing out the database), new billing entries are created, and step 3 pulls out the entries created in the first run (but not the second run). This, to me, is very confusing. My code looks like the following:

        for (User user : usersWithActivities) {
            createBillingEntriesForUser(user.getId());
            userBillingEntries = getLastMonthsBillingEntriesForUser(user.getId());
            createXMLBillForUser(user.getId(), userBillingEntries);
        }

    The methods called look like the following:

        @Transactional
        public void createBillingEntriesForUser(Long id) {
            UserManager userManager = ManagerFactory.getUserManager();
            User user = userManager.getUser(id);
            List<AccountEvent> events = getLastMonthsAccountEventsForUser(id);
            BillingEntry entry = new BillingEntry();
            if (null != events) {
                for (AccountEvent event : events) {
                    if (event.getEventType().equals(EventType.ENABLE)) {
                        Calendar cal = Calendar.getInstance();
                        Date eventDate = event.getTimestamp();
                        cal.setTime(eventDate);
                        double startDate = cal.get(Calendar.DATE);
                        double numOfDaysInMonth = cal.getActualMaximum(Calendar.DAY_OF_MONTH);
                        double numberOfDaysInUse = numOfDaysInMonth - startDate;
                        double fractionToCharge = numberOfDaysInUse / numOfDaysInMonth;
                        BigDecimal amount = BigDecimal.valueOf(fractionToCharge * Prices.MONTHLY_COST);
                        amount.scale();
                        entry.setAmount(amount);
                        entry.setUser(user);
                        entry.setTimestamp(eventDate);
                        userManager.saveOrUpdate(entry);
                    }
                }
            }
        }

        @Transactional
        public Collection<BillingEntry> getLastMonthsBillingEntriesForUser(Long id) {
            if (log.isDebugEnabled())
                log.debug("Getting all the billing entries for last month for user with ID " + id);
            //String queryString = "select billingEntry from BillingEntry as billingEntry where billingEntry>=:firstOfLastMonth and billingEntry.timestamp<:firstOfCurrentMonth and billingEntry.user=:user";
            String queryString = "select be from BillingEntry as be join be.user as user"
                + " where user.id=:id and be.timestamp>=:firstOfLastMonth and be.timestamp<:firstOfCurrentMonth";
            // This parameter will be the start of the last month, i.e. the start of the billing cycle
            SearchParameter firstOfLastMonth = new SearchParameter();
            firstOfLastMonth.setTemporalType(TemporalType.DATE);
            // This parameter holds the start of the CURRENT month, i.e. the end of the billing cycle
            SearchParameter firstOfCurrentMonth = new SearchParameter();
            firstOfCurrentMonth.setTemporalType(TemporalType.DATE);
            Query query = super.entityManager.createQuery(queryString);
            query.setParameter("firstOfCurrentMonth", getFirstOfCurrentMonth());
            query.setParameter("firstOfLastMonth", getFirstOfLastMonth());
            query.setParameter("id", id);
            List<BillingEntry> entries = query.getResultList();
            return entries;
        }

        public MonthlyBill createXMLBillForUser(Long id, Collection<BillingEntry> billingEntries) {
            BillingHistoryManager manager = ManagerFactory.getBillingHistoryManager();
            UserManager userManager = ManagerFactory.getUserManager();
            MonthlyBill mb = new MonthlyBill();
            User user = userManager.getUser(id);
            mb.setUser(user);
            mb.setTimestamp(new Date());
            Set<BillingEntry> entries = new HashSet<BillingEntry>();
            entries.addAll(billingEntries);
            String xml = createXmlForMonthlyBill(user, entries);
            mb.setXmlBill(xml);
            mb.setBillingEntries(entries);
            MonthlyBill bill = (MonthlyBill) manager.saveOrUpdate(mb);
            return bill;
        }

    Help with this issue would be greatly appreciated, as it's been wracking my brain for weeks now! Thanks in advance, Gearoid.

    Read the article

  • Oracle transaction read-consistency ?

    - by trojanwarrior3000
    I have a problem understanding read consistency in a database (Oracle). Suppose I am the manager of a bank. A customer has acquired a lock (which I don't know about) and is doing some updates. After he has the lock, I view the account info for the same customer and try to do something with it. But because of read consistency, I will see the data as it existed before the customer got the lock. So won't that affect the inputs I am getting and the decisions I am going to make during that period?
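
    A hedged illustration: when a decision has to be based on the current committed value rather than a consistent snapshot, the usual Oracle idiom is SELECT ... FOR UPDATE, which blocks until the other session commits or rolls back and then returns the up-to-date row - a sketch with a hypothetical accounts table:

        -- Waits for the customer's transaction to finish, then locks and
        -- returns the freshly committed row
        SELECT balance
          FROM accounts
         WHERE account_id = 42
           FOR UPDATE;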

    Read the article

  • Mixing Transaction Script pattern with DDD/CQRS

    - by Herman
    Hi all, here is the situation: in order to support our legacy system, we need to insert into a table whenever a user logs in. This is basically a CRUD operation, so it doesn't really make sense to create a repository/entity/command/event for this, since it doesn't tie to any business rules at all. The only benefit of creating a CQRS command is that this database write can happen asynchronously under that model. Which is the better route to take?

    1. Use CQRS, and then call a stored proc when handling that command?
    2. Just call the database directly in the controller (I am using ASP.NET MVC)?
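
    For reference, the stored procedure that option 1 would call is a one-liner either way - a sketch with hypothetical table and parameter names:

        -- Hypothetical login-audit procedure for the legacy table
        CREATE PROCEDURE dbo.LogUserLogin @UserId int
        AS
        INSERT INTO dbo.UserLoginLog (UserId, LoginAt)
        VALUES (@UserId, GETDATE());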

    Read the article

  • EJB and JPA and @OneToMany - Transaction too long?

    - by marioErr
    Hello. I'm using EJB and JPA, and when I try to access the PhoneNumber objects in the phoneNumbers attribute of a Contact, it sometimes takes several minutes for the data to actually be returned. It just returns no phoneNumbers, not even null, and then, after some time, when I call it again, the data magically appears. This is how I access the data:

        for (Contact c : contactFacade.findAll()) {
            System.out.print(c.getName() + " " + c.getSurname() + " : ");
            for (PhoneNumber pn : c.getPhoneNumbers()) {
                System.out.print(pn.getNumber() + " (" + pn.getDescription() + "); ");
            }
        }

    I'm using the facade session EJB generated by NetBeans (basic CRUD methods). It always prints the correct name and surname; phone numbers and descriptions are only printed some time (it varies) after creating them via the facade. I'm guessing it has something to do with transactions. How do I solve this? These are my JPA entities:

    contact

        @Entity
        public class Contact implements Serializable {
            private static final long serialVersionUID = 1L;
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;
            private String name;
            private String surname;
            @OneToMany(cascade = CascadeType.REMOVE, mappedBy = "contact")
            private Collection<PhoneNumber> phoneNumbers = new ArrayList<PhoneNumber>();
        }

    phonenumber

        @Entity
        public class PhoneNumber implements Serializable {
            private static final long serialVersionUID = 1L;
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;
            private String number;
            private String description;
            @ManyToOne()
            @JoinColumn(name="CONTACT_ID")
            private Contact contact;
        }

    Read the article

  • GAE transaction exceptions

    - by bach
    Hi, in this example, IS the exception thrown if ANY of the Table elements are being changed by another client, OR only if the element that we changed has been changed by another client? Just to verify - the exception is thrown from the commit(), isn't it?

        PersistenceManager pm = PMF.get().getPersistenceManager();
        try {
            pm.currentTransaction().begin();
            List<Row> Table = (List<Row>) pm.newQuery(query).execute();
            Table.get(0).setReserved(true); // <----- we change only this element
            pm.currentTransaction().commit();
        } catch (JDOCanRetryException ex) {
            pm.currentTransaction().rollback(); // <----- if Table.get(1) was changed by another client, do we get to this point???
        }

    Read the article

  • How to group a database write and spreadsheet write in single "transaction"

    - by WhyGeeEx
    I have a Java program that writes results to both a DB (SQL Server) and a spreadsheet (POI), and it would be best if neither is written to when there's an error with either. It would be a lot worse if the spreadsheet were produced and then an error happened while saving to the DB, so I'm doing the DB write first. Even so, I'm wondering if someone knows a way to guarantee that they both succeed or fail as a unit. Thanks!

    Read the article

  • LINQ-SQL Updating Multiple Rows in a single transaction

    - by RPM1984
    Hi guys, I need help re-factoring this legacy LINQ-to-SQL code, which is generating around 100 update statements. I'll keep playing around with the best solution, but would appreciate some ideas/past experience with this issue. Here's my code:

        List<FooBar> foos;
        int userId = 123;

        using (DataClassesDataContext db = new FooDatabase())
        {
            foos = (from f in db.FooBars
                    where f.UserId == userId
                    select f).ToList();

            foreach (FooBar fooBar in foos)
            {
                fooBar.IsFoo = false;
            }

            db.SubmitChanges();
        }

    Essentially I want to update the IsFoo field to false for all records that have a particular UserId value. What's happening is the .ToList() is firing off a query to get all the FooBars for a particular user, then for each FooBar object, it's executing an UPDATE statement updating the IsFoo property. Can the above code be re-factored into one single UPDATE statement? Ideally, the only SQL I want fired is the below:

        UPDATE FooBars SET IsFoo = FALSE WHERE UserId = 123

    EDIT: Ok, so it looks like it can't be done without using db.ExecuteCommand. Grr...! What I'll probably end up doing is creating another extension method for the DLINQ namespace. It will still require some hardcoding (i.e. writing "WHERE" and "UPDATE"), but at least it hides most of the implementation details away from the actual LINQ query syntax.

    Read the article
