Search Results

Search found 31902 results on 1277 pages for 'sql backup'.

Page 416/1277

  • ESXi 4.0u1 - backup/copy options

    - by Hanadarko
    I have a machine built using ESXi 4.0u1 and it has 3 hard drives. I have my hosts built on different hard drives but am wondering about backup options. I do not have RAID, but I have 3 drives and 1 is totally empty; I had been using it to store ISOs for loading. So what options do I have to create either a one-time copy onto the spare drive or some sort of snapshot to the spare drive? There must be some way to do this, either via the vSphere client or by SSHing into the ESXi box and going from there. Thoughts? -JD
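    A minimal sketch of one option (assuming Tech Support Mode/SSH is enabled and the spare drive is formatted as a VMFS datastore; the datastore and VM names below are hypothetical):

        # power off or snapshot the guest first, then clone its disk to the spare datastore
        mkdir /vmfs/volumes/spare/myvm-backup
        vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm.vmdk /vmfs/volumes/spare/myvm-backup/myvm.vmdk
        # copy the small config files alongside the cloned disk
        cp /vmfs/volumes/datastore1/myvm/*.vmx /vmfs/volumes/spare/myvm-backup/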

    Read the article

  • Any way to back up nginx before recompiling

    - by JM4
    I am looking to install the HttpGeoipModule for NGINX, but I have learned that I have to recompile the entire thing from source in order to do so. I have a new Media Temple DV 4.0 server that comes with nginx 1.3.0 stock. I have never had to recompile from source before and am a bit nervous about making changes without being able to revert to a previous state in the event something messes up (that, and the fact that it affects a live server, so I have no idea what the downtime would be). My plan was to copy all the existing modules used (nginx -V to list them all and copy the modules already compiled), then rebuild from source with the copied info above, including the ./configure --with-http_geoip_module reference. Is it possible to back up the existing nginx configuration in the event something goes wrong?
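    A rough sketch of preserving the current build so you can roll back (paths are assumptions for a typical install; adjust to wherever your binary and config actually live):

        # record the exact configure arguments of the running build (nginx -V writes to stderr)
        nginx -V 2> /root/nginx-configure-args.txt
        # keep copies of the binary and the whole config tree
        cp /usr/sbin/nginx /usr/sbin/nginx.bak
        cp -a /etc/nginx /etc/nginx.bak
        # rolling back is then just restoring the old binary and config and restarting
        # cp /usr/sbin/nginx.bak /usr/sbin/nginx && service nginx restart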

    Read the article

  • Two mail servers, need help with dns configuration for the backup one

    - by user92231
    I need to run a redundant backup mail server in case the main one goes down. The settings in GoDaddy look something like the following:
    A (Host) records: @ points to the IP address of mail1 (41.x.x.x); mail1 points to the IP address of mail1 (41.x.x.x); mail2 points to the IP address of mail2 (196.x.x.x).
    MX records: priority 10, host @, points to mail1.mydomain.com; priority 20, host @, points to mail2.mydomain.com.
    When mail1 goes down, mail2 is able to get emails. I can access it through the browser with no problem, but I want my users to be able to use POP3/SMTP as well without changing anything in their Outlook. I don't want any impact on the users when mail1 is down. Also, I'm using Windows Server DFS to keep both mail folders in sync. Is this the right way, or should I be using something else?
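    A quick sanity check (just a sketch) is to confirm that both MX records are actually published with the priorities you expect; sending servers try the lower preference value first:

        # list the MX records the rest of the world sees for the domain
        dig +short mydomain.com MX
        # expected output:
        # 10 mail1.mydomain.com.
        # 20 mail2.mydomain.com.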

    Read the article

  • How to prevent Windows Home server waking for backup

    - by Andronicus
    Since changing the motherboard in my Windows Home Server, it has been switching itself on every night to perform a scheduled backup. I don't want this to happen; I want to switch the server on using Wake-on-LAN and perform backups manually if/when required. I know I can disable backups for each computer so the machine won't turn on, but I want to allow manually triggered backups, or automatic backups if the home server is already on. If it's in sleep mode, I want it to stay in that state unless I wake it. How can this be achieved?
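    A small diagnostic sketch (run from an elevated command prompt on the server) to see which task has a wake timer armed and what actually woke the box last time:

        rem list tasks/devices that currently have a wake timer armed
        powercfg -waketimers
        rem show what woke the machine the last time it resumed
        powercfg -lastwake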

    Read the article

  • Windows 2008 Domain Controller - Backup (BDC) to Primary (PDC)

    - by Klaptrap
    I have created a new domain controller within my single-domain forest. I have also made it DHCP- and DNS-ready - all 3 services have synchronised with the existing W2K8 domain controller. I even migrated the FSMO roles and thought everything was fine. Indeed, all machines on the network appear to obtain DHCP and DNS from the new server, and AD is working on the new server, as my internal website uses it for login authentication. I have just noticed, via BgInfo (Sysinternals), that the new server is showing as "backup" and the old one as "primary" - I thought I had already swapped these. Have the FSMO roles swapped back? I have yet to remove the old server from AD (dcpromo). Do I need to do anything before I run dcpromo on the old server? Any thoughts appreciated.
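    A quick way to check (a sketch, run on either DC from an elevated prompt) which server currently holds the FSMO roles before demoting the old one:

        rem list the current FSMO role holders for the domain/forest
        netdom query fsmo
        rem broader health check of the DCs before running dcpromo on the old server
        dcdiag /v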

    Read the article

  • Backup tape compression

    - by pufferfish
    What things should I check to confirm that compression is actually happening on our tape backup system? Although the tapes are marked as 200G/520G (native/compressed) capacity, they seem to fill up before the 200G mark (some at less than 100G). I'm using:
    - Sony AIT-4 tape autochanger
    - Sony SDX4-200C (AIT-4) tapes
    - Ubuntu Lucid
    - Bacula
    I've tried checking hardware compression with tapeinfo -f /dev/nst0, which gives:
        Product Type: Tape Drive
        Vendor ID: 'SONY '
        Product ID: 'SDX-900V '
        Revision: '0102'
        Attached Changer API: No
        SerialNumber: '0001000036'
        MinBlock: 2
        MaxBlock: 8388608
        SCSI ID: 1
        SCSI LUN: 0
        Ready: yes
        BufferedMode: yes
        Medium Type: Not Loaded
        Density Code: 0x33
        BlockSize: 0
        DataCompEnabled: yes
        DataCompCapable: yes
        DataDeCompEnabled: yes
        CompType: 0x3
        DeCompType: 0x3
        BOP: yes
        Block Position: 0
        Partition 0 Remaining Kbytes: 201778000
        Partition 0 Size in Kbytes: 201779000
        ActivePartition: 0
        EarlyWarningSize: 0
        NumPartitions: 0
        MaxPartitions: 0
    ... so I presume it's on. Note: the Bacula documentation says hardware compression needs to be enabled with "system tools such as mt".
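    A sketch of explicitly toggling drive-side compression with mt-st and re-checking the flags (the exact option name varies between mt-st versions, so this is an assumption to verify against your man page):

        # enable hardware compression on the drive (mt-st package)
        mt -f /dev/nst0 compression 1
        # some mt-st versions use "datcompression" instead:
        # mt -f /dev/nst0 datcompression 1
        # verify the compression flags again
        tapeinfo -f /dev/nst0 | grep -i comp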

    Read the article

  • Taking an image backup of an entire server?

    - by WarDoGG
    I am currently using a dedicated server for my hosting needs. However, the costs are too high and I would like to suspend everything until I work out my business strategy again. Is there a way I can take a complete backup of the filesystem and run it in VMware? I cannot just copy the files over, because there are lots of tools installed and changes to the server configuration files that I myself don't know about (made by the developers). I need a snapshot of the entire disk image, with the installed software and everything else exactly as it is, because for development I need to work on this copy in VMware, VirtualBox, etc. Is it possible for me to take a full image copy? How do I do it?
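    One possible approach, sketched under the assumption that you can boot the box into a rescue/live environment and that the system disk is /dev/sda (both assumptions, adjust to your setup):

        # image the whole system disk to external storage
        dd if=/dev/sda of=/mnt/external/server.img bs=4M conv=noerror,sync
        # convert the raw image into a VMDK that VMware or VirtualBox can attach
        qemu-img convert -O vmdk /mnt/external/server.img /mnt/external/server.vmdk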

    Read the article

  • Postgres backup

    - by Abbass
    Hello, I have a Bacula script that does an automatic backup of a Postgres database. The script makes two backups of the database using pg_dump: the schema only and the data only.
        /usr/bin/pg_dump --format=c -s $dbname --file=$DUMPDIR/$dbname.schema.dump
        /usr/bin/pg_dump --format=c -a $dbname --file=$DUMPDIR/$dbname.data.dump
    The problem is that I can't figure out how to restore it with pg_restore. Do I need to create the database and the users first, then restore the schema, and finally the data? I did the following:
        pg_restore --format=c -s -C -d template1 xxx.schema.dump
        pg_restore --format=c -a -d xxx xxx.data.dump
    The first restore creates the database with empty tables, but the second gives many errors like this one:
        pg_restore: [archiver (db)] COPY failed: ERROR: insert or update on table "Table1" violates foreign key constraint "fkf6977a478dd41734" DETAIL: Key (contentid)=(1474566) is not present in table "Table23".
    Any ideas?
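    One sketch of a workaround: a data-only restore does not order rows by foreign-key dependency, so the FK triggers have to be disabled while the data is loaded (--disable-triggers requires a superuser connection; database name "xxx" kept from the question):

        # create the empty database, then restore the schema dump into it
        createdb xxx
        pg_restore --format=c -s -d xxx xxx.schema.dump
        # restore the data with FK triggers disabled so load order does not matter
        pg_restore --format=c -a --disable-triggers -d xxx xxx.data.dump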

    Read the article

  • Windows Task Scheduler won't let me uncheck "Wake the computer" option for a backup task

    - by KdawgUD
    I have a problem with my Windows 7 laptop automatically waking after I put it to sleep; I then find it later with the battery drained. Using the "powercfg -lastwake" command, I tracked the culprit down to a Backup task in the "Windows Server" section of Task Scheduler. I have tried unchecking the "Wake the computer to run this task" checkbox for this task, but after I do this and reboot, the box is always rechecked again. How can I make this setting persist? I have full admin rights to this laptop, but it is on a domain. Edit: I looked into the domain policy settings as suggested by Dave below and did not find any policies related to scheduled task settings. Any other ideas?

    Read the article

  • 2 NICs, same subnet - route backup traffic through one of them

    - by Matthewhall58
    I have a Windows server and a CentOS Linux server. I want the nightly backup file (tar.gz) that is copied to the Windows machine to use a different NIC, so that the main NIC is not burdened with moving the large file. Each server has 2 NICs in it. The network is 10.173.10.0, mask 255.255.255.192.
    CentOS Linux box: eth0 is configured with 10.173.10.80, mask 255.255.255.192, gw 10.173.10.65; eth1 is configured with 10.173.10.71, mask 255.255.255.192, gw 10.173.10.65.
    Windows box: Eth 0 is 10.173.10.72, mask 255.255.255.192, gw 10.173.10.65; Eth 1 is 10.173.10.70, mask 255.255.255.192, gw 10.173.10.65.
    I can ping each machine from each machine. On the Linux machine I use the command "route add -host 10.173.10.70 dev eth1", but then when I ping 10.173.10.70 it is unreachable. Why?
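    A sketch of one common fix (not a guaranteed diagnosis): with two NICs on the same subnet, Linux ARP behaviour usually gets in the way, so pin the host route with an explicit source address and tighten ARP handling:

        # force traffic for the Windows backup NIC out of eth1, sourced from eth1's own address
        ip route add 10.173.10.70/32 dev eth1 src 10.173.10.71
        # make each NIC answer ARP only for its own address (ARP flux is the usual culprit here)
        sysctl -w net.ipv4.conf.all.arp_filter=1
        sysctl -w net.ipv4.conf.eth1.arp_filter=1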

    Read the article

  • Backup Mongodb on EC2 through EBS snapshots - timing issue

    - by DmitrySemenov
    I'm following this guidance: http://docs.mongodb.org/ecosystem/tutorial/backup-and-restore-mongodb-on-amazon-ec2/ . I have 4 EBS 1000 IOPS volumes assigned to the instance; these 4 volumes are assembled into a software RAID 10 array with mdadm. I want to do backups through EBS snapshots as explained in the article above.
    Question: MongoDB says that I need to run, in the mongo shell, db.runCommand({fsync:1,lock:1}); -- this will lock the db for writing -- then run the snapshot creation, then, in the mongo shell, db.$cmd.sys.unlock.findOne(); -- this will unlock the db for writing. So do I need to unlock the DB for writing after I have issued the command ec2-create-snapshot, or only after it's finished and the actual snapshot is created? Thanks, Dmitry
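    For reference, a hedged sketch of the sequence (the volume IDs are hypothetical): the lock needs to cover the point at which every CreateSnapshot request has been accepted for all four RAID members, since that is the point-in-time the snapshots capture, not the time the upload finishes.

        # flush and lock mongod, snapshot every member of the RAID set, then unlock
        mongo admin --eval 'printjson(db.runCommand({fsync:1,lock:1}))'
        for vol in vol-aaaa1111 vol-bbbb2222 vol-cccc3333 vol-dddd4444; do   # hypothetical volume IDs
            ec2-create-snapshot "$vol" -d "mongodb raid10 member"
        done
        mongo admin --eval 'printjson(db.$cmd.sys.unlock.findOne())'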

    Read the article

  • Use Amazon EC2 as a backup server

    - by MikeMurko
    I would like to use Amazon EC2 as an emergency backup database + web server in the event our primary host becomes unavailable. I feel like I wouldn't have trouble setting up a Windows instance, installing SQL Server and getting the web server up and running (it would take a few hours, plus installing various libraries, our source code, etc.). My question relates to pricing. If I simply "stop" the instance rather than "terminate" it, does that stop counting "instance-hours"? I would prefer not to terminate the instance and lose all the work I spent setting it up. If I must "terminate" it in order to stop the billing, is it possible to make an image of the server after I have set it all up, then save that image somewhere (S3?)? Is this something that people do regularly? Ideally this instance would just be waiting in the wings for an issue with our host, costing us nothing except perhaps data storage.
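    A sketch with the classic ec2-api-tools (instance ID and names are hypothetical): a stopped EBS-backed instance stops accruing instance-hours, although the EBS volumes behind it are still billed; alternatively the configured server can be baked into an AMI for later launch.

        # stop (not terminate) the instance: instance-hour billing stops, EBS storage is still charged
        ec2-stop-instances i-12345678
        # or bake the configured server into an AMI that can be launched when needed
        ec2-create-image i-12345678 -n "standby-web-db" -d "emergency backup server image"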

    Read the article

  • Configuring a backup DNS server

    - by mattyh88
    I would like to set up a backup DNS server. I have added all my name servers in my domain name panel (ns1.domain.com, ns2.domain.com, etc.). If someone tries to go to domain.com and the first name server fails, will the resolver automatically try ns2.domain.com? All my DNS servers have the same master zones configured. Is that the way to go? Is it that easy, or am I missing something here? :)
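    For what it's worth, resolvers do retry the other listed name servers automatically. The more common setup is to make ns2 a slave of ns1 rather than maintaining identical master zones by hand; a minimal BIND sketch (the master IP is a placeholder):

        zone "domain.com" {
            type slave;
            masters { 192.0.2.10; };      // ns1's address
            file "slaves/domain.com.db";  // transferred copy, refreshed automatically
        };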

    Read the article

  • "SIOCSIFADDR: No such device" after restoring backup

    - by Paul Tomblin
    I bought some new hardware and tried to restore my backup on it. When I boot, I don't get a network connection. If I type "ifup eth0" on the command line, I see the messages:
        SIOCSIFADDR: No such device
        eth0: No such device
    lspci shows an Ethernet controller (Intel 82546GB). ifconfig does not show any interface except loopback. I tried installing bare Debian on the machine and the network worked then, but now I want to make it like my old machine was. Googling this problem only seems to find people having it in VMs. I'm not in a VM.
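    A likely culprit worth checking (a sketch, assuming a stock Debian layout): the restored udev persistent-net rules still pin eth0 to the old board's MAC address, so the new NIC never gets that name.

        # compare the MAC the old rules expect with what the new card reports
        cat /etc/udev/rules.d/70-persistent-net.rules
        ip link show
        # remove the stale rules and let udev regenerate them on the next boot
        rm /etc/udev/rules.d/70-persistent-net.rules
        reboot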

    Read the article

  • Oracle backup and recovery

    - by kupa
    During recovery Oracle writes the following error:
        RMAN-06054: media recovery requesting unknown log: thread 1 seq 9 lowscn 4034762
    In mount mode I have used these commands:
        change archivelog all crosscheck;
        delete expired archivelog all;
    Then I restored and tried to recover again, but still got the RMAN-06054 error. Then I wrote:
        run {
          SET UNTIL SEQUENCE 9 THREAD 1;
          RESTORE DATABASE;
          RECOVER DATABASE;
        }
    That helped me recover the database. But after that, when I do the backup and then recover, the same error occurs and the solution is the same. I want to solve this problem without SET UNTIL SEQUENCE 9 THREAD 1; maybe I should unregister this archive log from the control file (I am using a control file, not a catalog). Can you tell me how?
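    As a hedged suggestion (a sketch, not a diagnosis of your exact setup): RMAN-06054 typically means the backup does not contain the archived logs needed to reach a consistent point, so backing them up together with the database avoids having to hand-pick a sequence, and crosschecking keeps the control file records honest:

        BACKUP DATABASE PLUS ARCHIVELOG;
        -- if log records in the control file are genuinely stale, resynchronize them:
        CROSSCHECK ARCHIVELOG ALL;
        DELETE EXPIRED ARCHIVELOG ALL;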

    Read the article

  • Enable Automatic Code First Migrations On SQL Database in Azure Web Sites

    - by Steve Michelotti
    Now that Azure supports .NET Framework 4.5, you can use all the latest and greatest available features. A common scenario is to be able to use Entity Framework Code First Migrations with a SQL Database in Azure. Prior to Code First Migrations, Entity Framework provided database initializers. While convenient for demos and prototypes, database initializers weren’t useful for much beyond that because, if you delete and re-create your entire database when the schema changes, you lose all of your operational data. This is the void that Migrations are meant to fill. For example, if you add a column to your model, Migrations will alter the database to add the column rather than blowing away the entire database and re-creating it from scratch. Azure is becoming increasingly easier to use – especially with features like Azure Web Sites. Being able to use Entity Framework Migrations in Azure makes deployment easier than ever. In this blog post, I’ll walk through enabling Automatic Code First Migrations on Azure. I’ll use the Simple Membership provider for my example.
    First, we’ll create a new Azure Web Site called “migrationstest”, including creating a new SQL Database along with it. Next we’ll go to the web site and download the publish profile. In the meantime, we’ve created a new MVC 4 website in Visual Studio 2012 using the “Internet Application” template. This template is automatically configured to use the Simple Membership provider. We’ll do our initial Publish to Azure by right-clicking our project and selecting “Publish…”. From the “Publish Web” dialog, we’ll import the publish profile that we downloaded in the previous step.
    Once the site is published, we’ll just click the “Register” link from the default site. Since the AccountController is decorated with the [InitializeSimpleMembership] attribute, the initializer will be called and the initial database is created. We can verify this by connecting to our SQL Database on Azure with SQL Management Studio (after making sure that our local IP address is added to the list of Allowed IP Addresses in Azure). One interesting note is that these tables got created with the default Entity Framework initializer – which is to create the database if it doesn’t already exist. However, our database did already exist! This is because there is a new feature of Entity Framework 5 where Code First will add tables to an existing database as long as the target database doesn’t contain any of the tables from the model.
    At this point, it’s time to enable Migrations. We’ll open the Package Manager Console and execute the command:
        PM> Enable-Migrations -EnableAutomaticMigrations
    This will enable automatic migrations for our project. Because we used the “-EnableAutomaticMigrations” switch, it will create our Configuration class with a constructor that sets the AutomaticMigrationsEnabled property to true:
        public Configuration()
        {
            AutomaticMigrationsEnabled = true;
        }
    We’ll now add our initial migration:
        PM> Add-Migration Initial
    This will create a migration class called “Initial” that contains the entire model. But we need to remove all of this code because our database already exists, so we are just left with empty Up() and Down() methods.
        public partial class Initial : DbMigration
        {
            public override void Up()
            {
            }

            public override void Down()
            {
            }
        }
    If we don’t remove this code, we’ll get an exception the first time we attempt to run migrations that tells us: “There is already an object named 'UserProfile' in the database”. This blog post by Julie Lerman fully describes this scenario (i.e., enabling migrations on an existing database). Our next step is to add the Entity Framework initializer that will automatically use Migrations to update the database to the latest version. We will add these 2 lines of code to the Application_Start of the Global.asax:
        Database.SetInitializer(new MigrateDatabaseToLatestVersion<UsersContext, Configuration>());
        new UsersContext().Database.Initialize(false);
    Note the Initialize() call will force the initializer to run if it has not been run before. At this point, we can publish again to make sure everything is still working as we are expecting. This time we’re going to specify in our publish profile that Code First Migrations should be executed. Once we have re-published we can once again navigate to the Register page. At this point the database has not been changed, but Migrations is now enabled on our SQL Database in Azure. We can now customize our model. Let’s add 2 new properties to the UserProfile class – Email and DateOfBirth:
        [Table("UserProfile")]
        public class UserProfile
        {
            [Key]
            [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
            public int UserId { get; set; }
            public string UserName { get; set; }
            public string Email { get; set; }
            public DateTime DateOfBirth { get; set; }
        }
    At this point all we need to do is simply re-publish. We’ll once again navigate to the Registration page and, because we had Automatic Migrations enabled, the database has been altered (*not* recreated) to add our 2 new columns. We can verify this by once again looking at SQL Management Studio. Automatic Migrations provide a quick and easy way to keep your database in sync with your model without the worry of having to re-create your entire database and lose data. With Azure Web Sites you can set up automatic deployment with Git or TFS and automate the entire process to make it dead simple.

    Read the article

  • Visio 2010 forward engineer add-in for office 2010

    - by Ryan Ternier
    I have been scouring the internet for ages trying to see if there was a usable add-on for Visio 2010 that could export SQL scripts. Microsoft stopped putting that functionality in Visio after 2003 – which is a huge shame. Today I found an open-source project from Alberto Ferrari. It’s an add-in for Visio 2010 that allows you to generate SQL scripts from your DB diagram. It’s still in beta, and the source is available. Check it out here: http://sqlblog.com/blogs/alberto_ferrari/archive/2010/04/16/visio-forward-engineer-addin-for-office-2010.aspx This saves me from having to do all my diagramming in SQL Server / VS 2010, and brings back much-needed functionality that had been lost.

    Read the article

  • Deploying SSIS to Integration Services Catalog (SSISDB) via SQL Server Data Tools

    - by Kevin Shyr
    There are quite a few good articles/blogs on this. For a straightforward deployment, read this (http://www.bibits.co/post/2012/08/23/SSIS-SQL-Server-2012-Project-Deployment.aspx). For a more dynamic and comprehensive understanding of all the different settings, read part 1 (http://www.mssqltips.com/sqlservertip/2450/ssis-package-deployment-model-in-sql-server-2012-part-1-of-2/) and part 2 (http://www.mssqltips.com/sqlservertip/2451/ssis-package-deployment-model-in-sql-server-2012-part-2-of-2/). Microsoft official doc: http://technet.microsoft.com/en-us/library/hh213373 The only thing I would add is the following. After your first deployment, you'll notice that subsequent deployments skip the second step (they go directly to "Select Destination" and skip "Select Source"). That's because after your initial deployment, an .ispac file is created to track the deployment. If you decide to go back to "Select Source" and select the SSIS catalog again, the deployment process will complete, but the packages will not be deployed.

    Read the article

  • SQL Server Capacity Planner

    Apart from the capacity planner tool for System Center and SharePoint Server, I was looking for a tool which can help me estimate the capacity of SQL Server. I found an article on Microsoft.com for SQL Server 2000 sizing, but unfortunately the links (Dell PowerMatch Server Sizing Software, Compaq Active Answer Resources) are obsolete and dead. Finally I found an article that is "close" to my interest: Hardware and Software Requirements for Installing SQL Server 2008. If any of you...

    Read the article

  • Microsoft SQL Server 2008 R2 Release

    - by Leonard Mwangi
    Microsoft is planning to release the second edition of SQL Server 2008; the new edition will be named SQL Server 2008 R2 and is due to be released by May 1st, 2010. Amongst the changes in this edition is pricing, which is anticipated to go up by 25% for the Standard Edition and about 15% for the Enterprise Edition. As for features, there are some very cool additions, including PowerPivot for SharePoint, Master Data Services and Multi-Server Administration. There are also enhancements to the Database Engine, Reporting Services and the installation process. More information can be found at http://msdn.microsoft.com/en-us/library/bb500435(SQL.105).aspx. Have a happy upgrade!

    Read the article

  • SQL Server Modeling CTP (November 2009 Release 3) for Visual Studio 2010 RTM Now Available

    Here's what Kraig has to say about the SQL Server Modeling CTP update that matches the RTM of Visual Studio 2010: An update of the SQL Server Modeling CTP (November 2009) that's compatible with the official (RTM) release of Visual Studio 2010 is now available on the Microsoft Download Center. This release is strictly an updated version of the original November 2009 CTP release to support the final release of Visual Studio 2010 and .NET Framework 4. SQL Server Modeling Nov09 CTP Release...

    Read the article

  • SQLite with two python processes accessing it: one reading, one writing

    - by BBnyc
    I'm developing a small system with two components: one polls data from an internet resource and translates it into SQL data to persist it locally; the second one reads that SQL data from the local instance and serves it via JSON and a RESTful API. I was originally planning to persist the data with PostgreSQL, but because the application will have a very low volume of data to store and traffic to serve, I thought that was overkill. Is SQLite up to the job? I love the idea of the small footprint and no need to maintain yet another SQL server for this one task, but I am concerned about concurrency. It seems that with write-ahead logging enabled, concurrently reading and writing a SQLite database can happen without locking either process out of the database. Can a single SQLite instance sustain two concurrent processes accessing it, if only one reads and the other writes? I started writing the code but was wondering if this is a misapplication of SQLite.
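    For illustration, a minimal Python sketch (table and file names are made up) of the one-writer/many-readers pattern with WAL enabled and a busy timeout for the occasional lock:

        import sqlite3

        # writer process: enable WAL once (the journal mode is persistent for the database file)
        conn = sqlite3.connect("data.db", timeout=10)   # wait up to 10s if the database is briefly locked
        conn.execute("PRAGMA journal_mode=WAL")
        conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, payload TEXT)")
        with conn:
            conn.execute("INSERT INTO items (payload) VALUES (?)", ("polled data",))

        # reader process (a separate process/script): reads do not block the writer under WAL
        ro = sqlite3.connect("data.db", timeout=10)
        rows = ro.execute("SELECT id, payload FROM items").fetchall()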

    Read the article

  • mssql or mysql: learning

    - by Yehuda
    I have been using MySQL for about 9 months now for websites, and I have become quite good at getting what I want out of the database. However, I am still missing most of the complicated parts. I have an excellent tutorial, but it is on SQL Server 2008. 1) Is it worth switching over to MSSQL (I understand the SQL is different) so that I will learn all about SQL and databases in general? 2) Do most people use MySQL or MSSQL? 3) What is best practice? I am talking mainly about websites.

    Read the article

  • SQLAuthority News SQL Server Technology Evangelists and Evangelism

    This is the exact conversation that I had with three people during the recent SQL Server Public Training. Person 1: “Are you an SQL Server Evangelist?” Pinal: “No, but Vinod Kumar is.” Person 1: “Who are you?” Person 2: “He is Pinal, haha!” Person 1: “I know that, but don’t you evangelize SQL Server [...]”

    Read the article

  • Using BizTalk to bridge SQL Job and Human Intervention (Requesting Permission)

    - by Kevin Shyr
    I start off the process with either a BizTalk Scheduler (http://biztalkscheduledtask.codeplex.com/releases/view/50363) or a manual file drop of the XML message. The manual file drop is there to allow the SQL Job to call a "File Copy" SSIS step to copy the trigger file for the next process, and it allows the SQL Job to be linked back into BizTalk processing. The Process Trigger XML looks like the following; it is basically the configuration hub of the business process:
        <ns0:MsgSchedulerTriggerSQLJobReceive xmlns:ns0="urn:com:something something">
          <ns0:IsProcessAsync>YES</ns0:IsProcessAsync>
          <ns0:IsPermissionRequired>YES</ns0:IsPermissionRequired>
          <ns0:BusinessProcessName>Data Push</ns0:BusinessProcessName>
          <ns0:EmailFrom>[email protected]</ns0:EmailFrom>
          <ns0:EmailRecipientToList>[email protected]</ns0:EmailRecipientToList>
          <ns0:EmailRecipientCCList>[email protected]</ns0:EmailRecipientCCList>
          <ns0:EmailMessageBodyForPermissionRequest>This message was sent to request permission to start the Data Push process.  The SQL Job to be run is WeeklyProcessing_DataPush</ns0:EmailMessageBodyForPermissionRequest>
          <ns0:SQLJobName>WeeklyProcessing_DataPush</ns0:SQLJobName>
          <ns0:SQLJobStepName>Push_To_Production</ns0:SQLJobStepName>
          <ns0:SQLJobMinToWait>1</ns0:SQLJobMinToWait>
          <ns0:PermissionRequestTriggerPath>\\server\ETL-BizTalk\Automation\TriggerCreatedByBizTalk\</ns0:PermissionRequestTriggerPath>
          <ns0:PermissionRequestApprovedPath>\\server\ETL-BizTalk\Automation\Approved\</ns0:PermissionRequestApprovedPath>
          <ns0:PermissionRequestNotApprovedPath>\\server\ETL-BizTalk\Automation\NotApproved\</ns0:PermissionRequestNotApprovedPath>
        </ns0:MsgSchedulerTriggerSQLJobReceive>
    Every node of this schema was promoted to a distinguished field so that the values can be used for decision making in the orchestration. The first decision made is on the "IsPermissionRequired" field. If permission is required (IsPermissionRequired=="YES"), BizTalk will use the configuration info in the XML trigger to format the email message. Here is the snippet of how the email message is constructed:
        SQLJobEmailMessage.EmailBody = new Eai.OrchestrationHelpers.XlangCustomFormatters.RawString(
            MsgSchedulerTriggerSQLJobReceive.EmailMessageBodyForPermissionRequest +
            "<br><br>" +
            "By moving the file, you are either giving permission to the process, or disapprove of the process." +
            "<br>" +
            "This is the file to move: \"" + PermissionTriggerToBeGenereatedHere +
            "\"<br>" +
            "(You may find it easier to open the destination folder first, then navigate to the sibling folder to get to this file)" +
            "<br><br>" +
            "To approve, move(NOT copy) the file here: " + MsgSchedulerTriggerSQLJobReceive.PermissionRequestApprovedPath +
            "<br><br>" +
            "To disapprove, move(NOT copy) the file here: " + MsgSchedulerTriggerSQLJobReceive.PermissionRequestNotApprovedPath +
            "<br><br>" +
            "The file will be IMMEDIATELY picked up by the automated process.  This is normal.  You should receive a message soon that the file is processed." +
            "<br>" +
            "Thank you!"
        );
        SQLJobSendNotification(Microsoft.XLANGs.BaseTypes.Address) = "mailto:" + MsgSchedulerTriggerSQLJobReceive.EmailRecipientToList;
        SQLJobEmailMessage.EmailBody(Microsoft.XLANGs.BaseTypes.ContentType) = "text/html";
        SQLJobEmailMessage(SMTP.Subject) = "Requesting Permission to Start the " + MsgSchedulerTriggerSQLJobReceive.BusinessProcessName;
        SQLJobEmailMessage(SMTP.From) = MsgSchedulerTriggerSQLJobReceive.EmailFrom;
        SQLJobEmailMessage(SMTP.CC) = MsgSchedulerTriggerSQLJobReceive.EmailRecipientCCList;
        SQLJobEmailMessage(SMTP.EmailBodyFileCharset) = "UTF-8";
        SQLJobEmailMessage(SMTP.SMTPHost) = "localhost";
        SQLJobEmailMessage(SMTP.MessagePartsAttachments) = 2;
    After the permission request email is sent, the next step is to generate the actual Permission Trigger file. A correlation set is used here on SQLJobName and a newly generated GUID field.
        <?xml version="1.0" encoding="utf-8"?>
        <ns0:SQLJobAuthorizationTrigger xmlns:ns0="somethingsomething">
          <SQLJobName>Data Push</SQLJobName>
          <CorrelationGuid>9f7c6b46-0e62-46a7-b3a0-b5327ab03753</CorrelationGuid>
        </ns0:SQLJobAuthorizationTrigger>
    The end user (the human intervention piece) will either grant permission for this process, or deny it, by moving the Permission Trigger file to either the "Approved" folder or the "NotApproved" folder. A parallel Listen shape is waiting for either response. The next set of steps decides how the SQL Job is to be called, or whether it is called at all. If permission is denied, it simply sends out a notification. If permission is granted, then the flag (IsProcessAsync) in the original Process Trigger is used. The synchronous part is not really synchronous, but a loop timer that checks the status within the calling stored procedure (for more information, check out my previous post: http://geekswithblogs.net/LifeLongTechie/archive/2010/11/01/execute-sql-job-synchronously-for-biztalk-via-a-stored-procedure.aspx). If it's async, then the sp starts the job and BizTalk sends out an email. And of course, there is some error notification.
    Footnote: The next version of this orchestration will have an additional parallel line near the Listen shape, with a Delay built in and a Loop to send out a daily reminder if no response has been received from the end user. The synchronous part is used to gather results and execute a data clean-up process so that the SQL Job can be re-tried. There are many possibilities here.

    Read the article
