Search Results

Search found 33569 results on 1343 pages for 'sql backup and restore'.


  • SQL Server Table Polling by Multiple Subscribers

    - by Daniel Hester
    Background

    Designing stored procedures that are safe for multiple subscribers to call simultaneously can be challenging. For example, let's say that you want multiple worker processes to poll a shared work queue that's encapsulated as a SQL table. This is a common scenario, and through experience you'll find that you want to use table hints to prevent unwanted locking when performing simultaneous queries on the same table.

    There are three table hints to consider: NOLOCK, READPAST and UPDLOCK. Both NOLOCK and READPAST let a SELECT proceed without blocking on locks held by other transactions. A SELECT with the READPAST hint skips any rows that are locked because they are being updated or inserted (or are otherwise "dirty"), whereas a SELECT with NOLOCK ignores locks entirely, including dirty reads.

    For the initial update of the flag (which marks the record as available for subscription) I don't use the NOLOCK hint, because I want to be sensitive to the "active" records in the table and exclude them. Instead I use an update lock (UPDLOCK) in conjunction with a WHERE clause whose sub-select carries a READPAST hint: the records I'm updating are explicitly locked (UPDLOCK), but no lock is placed on the table while selecting the records to update (READPAST).

    UPDATEs should be allowed to lock the rows affected, because we're changing a flag on each record precisely so that it is not included in a SELECT from another subscriber. On the UPDATE statement we should use UPDLOCK explicitly. A SELECT that checks for the next record(s) to process can leave a shared read lock held by more than one subscriber polling the shared work queue, and more than one worker process (or server) may try to process the same new record(s) at the same time. When each process then tries to obtain the update lock, none of them can, because another process still holds a shared read lock. Without the UPDLOCK hint the result would be a deadlock (each process holds a shared lock and blocks the others from converting to an update lock); with UPDLOCK, that condition is avoided. Note that using the READPAST table hint requires the transaction's ISOLATION LEVEL to be READ COMMITTED (READPAST is not permitted under SERIALIZABLE, the default in some distributed-transaction contexts).

    Guidance

    In the stored procedure that returns records to the multiple subscribers:

    1. Perform the UPDATE first, changing the flag that makes the record available to subscribers. Additionally, you may want to update a LastUpdated datetime field so you can detect records that "got stuck" in an intermediate state, or for other auditing purposes.
    2. In the UPDATE statement, use the (UPDLOCK) table hint to guard against the deadlock scenario described above.
    3. Also in the UPDATE statement, use a WHERE clause containing a sub-select with a (READPAST) table hint to select the records that you're going to update.
    4. In the UPDATE statement, use the OUTPUT clause in conjunction with a table variable (as in the example below) or a temporary table to isolate the record(s) you've just updated and intend to return to the subscriber. This is the fastest way to update the record(s) and obtain their identifiers within the same operation.
    5. Finally, do a set-based SELECT on the main table (using the table variable to identify the records in the set) with either a READPAST or NOLOCK table hint, as in the example below.
    Use NOLOCK if there are other processes (besides the multiple subscribers) that might be changing the data that you want to return to the subscribers; use READPAST if you're sure there are no other processes (besides the multiple subscribers) that might be updating column data in the table for other purposes (e.g. changes to a person's last name). NOLOCK is generally the better fit in this part of the scenario. See the following as an example:

        CREATE PROCEDURE [dbo].[usp_NewCustomersSelect]
        AS
        BEGIN
            -- OVERRIDE THE DEFAULT ISOLATION LEVEL
            SET TRANSACTION ISOLATION LEVEL READ COMMITTED

            -- SET NOCOUNT ON
            SET NOCOUNT ON

            -- DECLARE TEMP TABLE
            -- Note that this example uses CustomerId as an identifier;
            -- you could just use the Identity column Id if that's all you need.
            DECLARE @CustomersTempTable TABLE
            (
                CustomerId NVARCHAR(255)
            )

            -- PERFORM UPDATE FIRST
            -- [Customers] is the name of the table
            -- [Id] is the Identity column on the table
            -- [CustomerId] is the business document key used to identify the
            --   record globally, i.e. in other systems or across SQL tables
            -- [Status] is an INT or BIT field (BIT if the status is a binary state)
            -- [LastUpdated] is a datetime field used to record the time of the
            --   last update
            -- (Note: [Id] IN (...) rather than [Id] = (...), because the
            --   sub-select can return up to 100 rows.)
            UPDATE [Customers] WITH (UPDLOCK)
            SET [Status] = 1, [LastUpdated] = GETDATE()
            OUTPUT [INSERTED].[CustomerId] INTO @CustomersTempTable
            WHERE [Id] IN (SELECT TOP 100 [Id]
                           FROM [Customers] WITH (READPAST)
                           WHERE [Status] = 0
                           ORDER BY [Id] ASC)

            -- PERFORM SELECT FROM ENTITY TABLE
            SELECT [C].[CustomerId], [C].[FirstName], [C].[LastName],
                   [C].[Address1], [C].[Address2], [C].[City], [C].[State],
                   [C].[Zip], [C].[ShippingMethod], [C].[Id]
            FROM [Customers] AS [C] WITH (NOLOCK)
            INNER JOIN @CustomersTempTable AS [TEMP]
                ON [C].[CustomerId] = [TEMP].[CustomerId]
        END

    In a system that has been designed to have multiple status values for records in the work queue, it is necessary to have a "watchdog" process by which "stale" records in intermediate states (such as "In Progress") are detected; e.g. a [Status] of 0 = New or Unprocessed, 1 = In Progress, 2 = Processed, etc. Thus, if you have a business rule stating that the application should only process new records once all of the old records have been processed successfully (or marked as errors), then you will need a monitoring process to detect stalled or stale records in the work queue; hence the LastUpdated column in the example above. The Status field along with the LastUpdated field can be used as the criteria to detect stalled or stale records. It is possible to put this watchdog logic into the stored procedure above, but I would recommend making it a separate monitoring function. In writing the stored procedure that checks for stale records, I would recommend using the same kind of lock semantics as suggested above.
    The example below looks for records that have been in the "In Progress" state ([Status] = 1) for more than 60 seconds:

        CREATE PROCEDURE [dbo].[usp_NewCustomersWatchDog]
        AS
        BEGIN
            -- TO OVERRIDE THE DEFAULT ISOLATION LEVEL
            SET TRANSACTION ISOLATION LEVEL READ COMMITTED

            -- SET NOCOUNT ON
            SET NOCOUNT ON

            DECLARE @MaxWait int;
            SET @MaxWait = 60

            IF EXISTS (SELECT 1
                       FROM [dbo].[Customers] WITH (READPAST)
                       WHERE [Status] = 1
                         AND DATEDIFF(s, [LastUpdated], GETDATE()) > @MaxWait)
            BEGIN
                SELECT 1 AS [IsWatchDogError]
            END
            ELSE
            BEGIN
                SELECT 0 AS [IsWatchDogError]
            END
        END

    Downloads

    The zip file below contains two SQL scripts: one to create a sample database with the above stored procedures, and one to populate the sample database with 10,000 sample records. I am very grateful to Red Gate Software for their excellent SQL Data Generator tool, which enabled me to create these sample records in no time at all.

    References

    http://msdn.microsoft.com/en-us/library/ms187373.aspx
    http://www.techrepublic.com/article/using-nolock-and-readpast-table-hints-in-sql-server/6185492
    http://geekswithblogs.net/gwiele/archive/2004/11/25/15974.aspx
    http://grounding.co.za/blogs/romiko/archive/2009/03/09/biztalk-sql-receive-location-deadlocks-dirty-reads-and-isolation-levels.aspx
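    For context, here is a minimal sketch of how a subscriber might drive these two procedures. It assumes the [Customers] schema described above; @ProcessedCustomerId is a hypothetical placeholder for a key returned by the claim call:

        -- Claim a batch of up to 100 new records (sets [Status] = 1 and returns them).
        EXEC [dbo].[usp_NewCustomersSelect];

        -- ... process each returned row, then mark it complete ...
        DECLARE @ProcessedCustomerId NVARCHAR(255);
        SET @ProcessedCustomerId = N'CUST-0001';  -- hypothetical key from the claim call

        UPDATE [Customers]
        SET [Status] = 2, [LastUpdated] = GETDATE()
        WHERE [CustomerId] = @ProcessedCustomerId;

        -- Periodically (e.g. from a SQL Agent job), check for stalled work:
        EXEC [dbo].[usp_NewCustomersWatchDog];

    Because usp_NewCustomersSelect both flags and returns the batch in one call, a crash between the claim and the completion UPDATE leaves rows at [Status] = 1, which is exactly the condition the watchdog procedure is there to catch.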


  • 24 Hours of PASS – first reflections

    - by Rob Farley
    A few days after the end of 24HOP, I find myself reflecting on it. I'm still waiting on most of the information. I want to be able to discover things like which countries were represented in each of the sessions, and so on. So far, I have the feedback scores and the numbers of attendees. The data was provided in a PDF, so while I wait for it to appear in a more flexible format, I've pushed the 24 attendee numbers into Excel.

    This chart shows the numbers by time. Remember that we started at midnight GMT, which was 10:30am in my part of the world and 8pm in New York. It's probably no surprise that numbers drooped a bit at the start, stayed comparatively low, and then grew as the larger populations of the English-speaking world woke up. I remember last time 24HOP ran for 24 hours straight, there were quite a few sessions with fewer than 100 attendees. None this time though. We got close, but even when it was 4am in New York, 8am in London and 7pm in Sydney (which would have to be the worst slot for attracting people), we still had over 100 people tuning in. As expected, numbers grew as the UK woke up, and even more so as the US did, with numbers peaking at 755 for the "3pm in New York" session on SQL Server Data Tools.

    Kendra Little almost reached those numbers too, and certainly contributed the biggest 'spike' on the chart with her session five hours earlier. Of all the sessions, Kendra had the highest proportion of 'Excellent's for the "Overall Evaluation of the session" question, and those of you who saw her probably won't be surprised by that. Kendra had one of the best-ranked sessions from the 24HOP event this time last year (narrowly missing out on being top 3), and she has produced a lot of good video content since then.

    The reports indicate that there were nearly 8.5 thousand attendees across the 24 sessions, averaging over 350 at each one. I'm looking forward to seeing how many different people that was, although I do know that Wil Sisney managed to attend every single one (if you did too, please let me know). Wil even moderated one of the sessions, which made his feat even greater. Thanks Wil.

    I also want to send massive thanks to Dave Dustin. Dave probably would have attended all of the sessions, if it weren't for a power outage that forced him to take a break. He was also a moderator, and it was during this session that he earned special praise. Part way into the session he was moderating, the speaker lost connectivity and couldn't get back for about fifteen minutes. That's an incredibly long time when you're in a live presentation. There were over 200 people tuned in at the time, and I'm sure Dave was as stressed as I was to have a speaker disappear. I started chasing down a phone number for the speaker, while Dave spoke to the audience. And he did brilliantly. He started answering questions, and kept doing that until the speaker came back. Bear in mind that Dave hadn't expected to give a presentation on that topic (or any other), and was simply drawing on his SQL expertise to get him through. Also consider that this was between midnight and 1am in Dave's part of the world (Auckland, NZ). I would've expected him just to welcome people, monitor questions, probably read some out, and in general help make things run smoothly. He went far beyond the call of duty, and if I had a medal to give him, he'd definitely be getting one.

    On the whole, I think this 24HOP was a success. We tried a different platform, and I think for the most part it was a popular move.
    We didn't ask the question "Was this better than LiveMeeting?", but we did get a number of people telling us that they thought the platform was very good.

    Some people have told me I get a chance to put my feet up now that this is over. As I'm also co-ordinating a tour of SQLSaturday events across the Australia/New Zealand region, I don't quite get to take that much of a break (plus, there's the little thing of squeezing in seven SQL 2012 exams over the next 2.5 weeks). But I am pleased to be reflecting on this event rather than anticipating it. There were a number of factors that could have gone badly, but on the whole I'm pleased about how it went. A massive thanks to everyone involved.

    If you're reading this and thinking you wish you could've tuned in more, don't worry – they were all recorded and you'll be able to watch them on demand very soon. But as well as that, PASS has a stream of content produced by the Virtual Chapters, so you can keep learning from the comfort of your desk all year round. More info on them at sqlpass.org, of course.


  • gzip: stdout: File too large when running customized backup script

    - by Roland
    I've created a plain and simple backup script that only backs up certain files and folders:

        tar -zcf $DIRECTORY/var.www.tar.gz /var/www
        tar -zcf $DIRECTORY/development.tar.gz /development
        tar -zcf $DIRECTORY/home.tar.gz /home

    This script runs for about 30 minutes and then gives me the following error:

        gzip: stdout: File too large

    Is there another way to back up my files using shell scripting, or a way to solve this error? I'm grateful for any help.


  • TSQL syntax to restore .bak to new db

    - by justSteve
    I need to automate the creation of a duplicate db from the .bak of my production db. I've done the operation plenty of times via the GUI, but when executing from the command line I'm a little confused by the various switches, in particular the file names and being sure ownership is correctly replicated. I'm just looking for the T-SQL syntax for RESTORE that accomplishes that. Thanks.
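    As a starting point, the usual pattern looks roughly like the sketch below. The database name, backup path and logical file names here are placeholders; run RESTORE FILELISTONLY first to discover the actual logical names inside your .bak:

        -- Inspect the logical file names contained in the backup:
        RESTORE FILELISTONLY
        FROM DISK = N'D:\Backups\ProdDb.bak';

        -- Restore to a new database, moving the files to new physical paths:
        RESTORE DATABASE [ProdDbCopy]
        FROM DISK = N'D:\Backups\ProdDb.bak'
        WITH MOVE N'ProdDb_Data' TO N'D:\Data\ProdDbCopy.mdf',
             MOVE N'ProdDb_Log'  TO N'D:\Data\ProdDbCopy_log.ldf',
             RECOVERY, STATS = 10;

        -- Ownership is not carried by the backup; set it explicitly if needed:
        ALTER AUTHORIZATION ON DATABASE::[ProdDbCopy] TO [sa];

    The MOVE clauses are what let the copy coexist with the production database on the same instance, since the restored files must not collide with the originals.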


  • readObject() vs. readResolve() to restore transient fields

    - by Joonas Pulakka
    According to the Serializable javadoc, readResolve() is intended for replacing an object read from the stream. But is it OK to use it for restoring transient fields, like so:

        private Object readResolve() {
            transientField = something;
            return this;
        }

    as opposed to using readObject():

        private void readObject(ObjectInputStream s) throws IOException, ClassNotFoundException {
            s.defaultReadObject();
            transientField = something;
        }

    Is there any reason to choose one over the other, when used just to restore transient fields?


  • Restore NewForm.aspx file

    - by user117701
    How do I restore a deleted NewForm.aspx file? I don't want it back through the recycle bin, since I made a mess of it; I just want to recreate the original file. SharePoint 2003 was able to do this.


  • Copying MySQL backup to another server

    - by Yeti
    I'm new to SSH. How do I copy a .gz file from one server to another using SSH? I'm using cron to back up MySQL databases and want to also automate the process of copying the .gz files to a different web host. Any information on the limit of file size that can be copied would also be great. The backup file sizes range from 100 MB to a few GB.


  • What is the "removing backup files" installation step?

    - by mfeingold
    In many Windows installers the last step is called "removing backup files". I understand that, to provide transactional integrity of the install process, some "backup files" may have been created and have to be cleaned up. What I do not understand is why on many occasions this step takes considerably longer than the rest of the installation. Any idea why?


  • create backup file descriptor?

    - by BobTurbo
        stdinBackup = 4;
        dup2(0, stdinBackup);

    Currently I am doing the above to 'backup' stdin so that it can be restored from the backup later, after it has been redirected somewhere else. I have a feeling that I am doing a lot wrong (e.g. arbitrarily assigning 4 is surely not right). Can anyone point me in the right direction?


  • [SQLServer JDBC Driver][SQLServer]Could not find stored procedure 'master..xp_jdbc_open2'.

    - by Vijaya Moderator -Oracle
    When connecting to a MS SQL Server database via a WebLogic data source using the XA JDBC driver, the following error is thrown:

        <Jun 3, 2014 5:16:49 AM PDT> <Error> <Console> <BEA-240003> <Console encountered the following error
        java.sql.SQLException: [FMWGEN][SQLServer JDBC Driver][SQLServer]Could not find stored procedure 'master..xp_jdbc_open2'.
            at weblogic.jdbc.sqlserverbase.ddb_.b(Unknown Source)
            at weblogic.jdbc.sqlserverbase.ddb_.a(Unknown Source)
            at weblogic.jdbc.sqlserverbase.ddb9.b(Unknown Source)
            at weblogic.jdbc.sqlserverbase.ddb9.a(Unknown Source)
            at weblogic.jdbc.sqlserver.tds.ddr.v(Unknown Source)
            at weblogic.jdbc.sqlserver.tds.ddr.a(Unknown Source)
            at weblogic.jdbc.sqlserver.tds.ddq.a(Unknown Source)
            at weblogic.jdbc.sqlserver.tds.ddr.a(Unknown Source)
            at weblogic.jdbc.sqlserver.ddj.m(Unknown Source)
            at weblogic.jdbc.sqlserverbase.ddel.e(Unknown Source)
            at weblogic.jdbc.sqlserverbase.ddel.a(Unknown Source)

    The cause of the issue is that MS SQL Server was not installed with the stored procedures that enable JTA/XA.

    Solution

    To connect to SQL Server via the XA driver from a WLS data source, you need to install the stored procedures for JTA. To use JDBC distributed transactions through JTA, your system administrator should use the following procedure to install the Microsoft SQL Server JDBC XA procedures. This procedure must be repeated for each MS SQL Server installation that will be involved in a distributed transaction.

    To install the stored procedures for JTA:

    1. Copy the appropriate sqljdbc.dll and instjdbc.sql files from the WL_HOME\server\lib directory to the SQL_Server_Root/bin directory of the MS SQL Server database server, where WL_HOME is the directory in which WebLogic Server is installed, typically c:\Oracle\Middleware\wlserver_10.x.

       Note: If you are installing stored procedures on a database server with multiple Microsoft SQL Server instances, each running SQL Server instance must be able to locate the sqljdbc.dll file. Therefore the sqljdbc.dll file needs to be either on the global PATH or on the application-specific path. For the application-specific path, place the sqljdbc.dll file into the :\Program Files\Microsoft SQL Server\MSSQL$\Binn directory for each instance.

    2. From the database server, use the ISQL utility to run the instjdbc.sql script. As a precaution, have your system administrator back up the master database before running instjdbc.sql. At a command prompt, use the following syntax to run instjdbc.sql:

           ISQL -Usa -Psa_password -Sserver_name -ilocation\instjdbc.sql

       where:
       - sa_password is the password of the system administrator.
       - server_name is the name of the server on which SQL Server resides.
       - location is the full path to instjdbc.sql. (You copied this script to the SQL_Server_Root/bin directory in step 1.)

    The instjdbc.sql script generates many messages. In general, these messages can be ignored; however, the system administrator should scan the output for any messages that may indicate an execution error. The last message should indicate that instjdbc.sql ran successfully. The script fails when there is insufficient space available in the master database to store the JDBC XA procedures or to log changes to existing procedures.
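    After running instjdbc.sql, a quick way to confirm that the XA procedures landed in master is to look for them by name. This is a sketch only: the xp_jdbc% name pattern is inferred from the error message above, and extended stored procedures are registered in sys.objects with type 'X':

        USE [master];
        SELECT [name]
        FROM sys.objects
        WHERE [type] = 'X'              -- extended stored procedures
          AND [name] LIKE N'xp_jdbc%';  -- assumed naming pattern for the JTA/XA procs

    If xp_jdbc_open2 and its companions show up, the WebLogic data source should be able to enlist SQL Server in distributed transactions.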


  • Recovering/Rebuilding MySQL .FRM, .MY* files from IBDATA1

    - by Synetech
    I recently had an incident in which several MySQL files were wiped out (mostly from WordPress, but also a few of MySQL's own files). The IBDATA1 file is unaffected, but several .frm files are gone, as are a few .myi and .myd files. So now I need to find out if there is a way to rebuild the missing files from IBDATA1. I tried Googling it, assuming that such an issue has come up before, and indeed there were numerous search results (including this question), but all of the ones I looked at addressed the opposite problem (recovering from .frm and .my* files), or somehow required those files. Is there a way to rebuild these files? I know I have a relatively recent backup (a .SQL file) if there isn't, but I'm hoping that these are the kind of files that are rebuilt if missing or outdated.


  • Two questions about restoring Thunderbird from a backup

    - by Eric
    Setting up a new Windows 7 PC, I'm puzzled by two things in Thunderbird 3.1.9:

    1. I restored a profile from a three-month-old backup, no problem. I then copied more recent files into the Mail/ directory, but TBird still shows the old messages. The last message in Inbox is dated 3/16/2011 -- how do I get TBird to display all the messages in the Local Folders/Inbox view?

    2. A large number of the existing messages are now displayed in separate tabs -- I can't tell you how many, but there could be over 1000. Which file governs this? Or can I hire someone from Mechanical Turk to come over and manually close each tab?


  • How do you back up your localhost?

    - by justjoe
    I have a method to back up my work on localhost on a weekly basis: I use multiple DOS commands, such as copy and xcopy, saved in a .bat file, to copy my localhost to another place. As my server has grown larger, I think it takes too much space. Is there a way to solve this problem, maybe software that can track changes in my PHP code, or another method to preserve my code when things go bad?

    EDIT: I use Windows XP SP2, with XAMPP (Apache, PHP 5.2.1). The localhost refers to my laptop; I installed the localhost server there.


  • What cloud backup solution supports a "backup server to cloud" configuration?

    - by Gepeto
    What online backup tool allows you to:

    A) Back up Windows, Linux and optionally Mac desktops and servers to the cloud
    B) Do so by first backing up to a central server or appliance
    C) Allow restoring from that appliance when possible, and if not, go to the cloud

    For now the best option I have seen is i365 by Seagate, with an appliance between the local computers and the cloud. I know Microsoft also has an i365 plugin for DPM, as well as an Iron Mountain plugin. However, I feel that there must be a simpler way to do this. Can any of the "simpler" solutions (Jungle Disk or anything else going to S3, Mozy, Carbonite, CrashPlan, etc.) do this? Thank you.


  • What can cause SQL 2008 Transaction Log Shipping to stop functioning?

    - by Rick
    I read somewhere that taking a backup, or a Maintenance Plan running, can cause Log Shipping to stop functioning. Is this true? What should we watch out for, once our transaction log shipping is in place, that could stop it? A log shipping test we were doing between two databases on the same SQL 2008 server appeared to stop working without any error. When we checked the history of the LSRestore_* job, it was always ignoring the new *.trn files. Any suggestions? Thanks.
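    One common cause matching this description: any other job (a Maintenance Plan backup step, for example) that takes transaction log backups of the primary database breaks the log backup chain, so the shipped .trn files no longer form a restorable sequence and the restore job skips them. (Full backups do not break the chain; log backups do.) As a hedged sketch of where to look, the monitor table in msdb on the secondary records what was last copied and restored; the table and column names below are per SQL Server 2008 and worth verifying against your build:

        -- Run on the secondary server: what has log shipping last copied/restored?
        SELECT secondary_database,
               last_copied_file,
               last_restored_file,
               last_restored_date
        FROM msdb.dbo.log_shipping_monitor_secondary;

    A growing gap between last_copied_file and last_restored_file points at the restore side; no new copied files points at the copy job or a broken log chain on the primary.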


  • OSX Time Machine: deletion of backup folders

    - by jml
    I saw this question and was hoping that someone could expand upon the chosen answer (which I understood):

    1. Can you sudo mv Time Machine backup files from the trash back to their original locations? I have tried doing this as root to no avail (operation not permitted).
    2. If not, can you successfully rm them via the terminal, faster than what the endless 'preparing to empty the trash' dialog suggests?
    3. If you get the files back out of the trash, can you tell whether they are intact via Disk Utility (and how)?
    4. Can you force indexing on a Time Machine drive in the same way that you would a normal drive, to rebuild the TM index?

    I realize that a single answer could clarify all of the above, but I wanted to include details to be clear on what I am asking. Thanks for any help.


  • rsync bash script to backup specific directories nightly to remote server

    - by Janice Young
    Hello, I am looking for an rsync script that will back up specific directories from my home machine to a remote server nightly. So, say, /home/me/Pictures to ssh -p 6587 [email protected]/Pictures. It would be nice if it could look for changes, but I'm not worried so much about that aspect; what matters is having a script that runs at a certain time of night, with cron or however. I have Googled and found scripts, but those scripts were specific to the operations of their creators. Any help would be happily accepted, as the scripting part really throws me off. Thank you, Janice


  • Windows 2008 Domain Controller - Backup (BDC) to Primary (PDC)

    - by Klaptrap
    I have created a new domain controller within my single-domain forest. I have also made it DHCP- and DNS-ready; all three services have synchronised with the existing W2K8 domain controller. I even migrated the FSMO roles and thought everything was fine. Indeed, all machines on the network appear to obtain DHCP and DNS from the new server, and AD is working on the new server, as my internal website uses it for login authentication. I have just noticed, via BgInfo (Sysinternals), that the new server is showing as "backup" and the old as "primary". I thought I had already achieved this switch. Have the FSMO roles swapped back? I have yet to remove the old server from AD (dcpromo). Do I need to do anything before I run dcpromo on the old server? Any thoughts appreciated.

