Search Results

Search found 22526 results on 902 pages for 'multiple databases'.

Page 330 of 902

  • LDAP RBAC model

    - by typo
    Hi, can anybody tell me about best practice for modelling RBAC in LDAP? I'm very confused; I'm not sure whether I should treat LDAP groups as roles, or just put users in some custom OU. Any real-life examples of a tasks/operations-roles-users scheme (one user, multiple roles per user, multiple operations/tasks per role)? BTW: target systems are .NET, Java and iSeries.

    Read the article

  • Modelling interiors with Google SketchUp Photo Matching

    - by rzlines
    I have come across multiple tutorials explaining how to model structures with Google SketchUp. Are there any tutorial series explaining how to do interiors using Photo Matching and multiple photos? The tutorial should ideally cover the entire process, from the right way to take the photos of the place through to modelling the furniture and the walls. Does anyone know of any video series or literature on this? Please direct me to the right place.

    Read the article

  • Sort highest match in Excel

    - by Chris
    Hello, how can I sort multiple columns of data in Excel? Column B = subscribe date, Column A = subscriber name. I have multiple columns with a lot of duplicate names (A) and different subscribe dates (B). How can these be sorted so that all names are grouped together, with the highest (most recent) subscribe date flagged as HIGHEST in column C? That way you can see directly which date is the highest.

    Read the article

  • The function of service principal names in Active Directory

    - by boxerbucks
    I am thinking about taking a service that currently runs on multiple servers in my domain as "NETWORK SERVICE" and configuring it to run as an AD domain account, for various reasons. If I have this one account running the same service on multiple servers, do I need to create SPNs for each of the machines and services it runs? Would I need to worry about creating SPNs at all? If the answer is no, then what is the proper role of an SPN?

    Read the article

  • QEMU & libvirt: Dual screen VM?

    - by thecapsaicinkid
    I'm running a Windows 7 guest virtualised under QEMU, using libvirt on a Fedora 17 host. Does anyone know if it's possible to create two video 'cards' (or a single dual-head card), with a SPICE server for each, and connect multiple SPICE clients to simulate a dual-screen VM? Looking through the VM's XML configuration file I can't see a way to associate graphics elements (e.g. SPICE servers) with video elements (in this case, a qxl card). I'm sure qxl doesn't support the dual-head option, but are multiple video elements possible?

    Read the article

  • Is there a webmail client which can handle multiple mail accounts?

    - by Tanmoy
    Hi, everyone here must be aware of mail clients which can handle multiple email accounts simultaneously (e.g. Thunderbird or Outlook). These are desktop clients which must be installed on the client's computer. Suppose I don't have a hard disk: how will I access my multiple mail accounts on the web using a single application? Thanks in advance, Tanmoy

    Read the article

  • How to extract file paths out of drag and drop event?

    - by trismarck
    I have an application that lists files in a Windows Forms ListBox (.NET Framework). The application does not support the copy operation when multiple files are selected in the ListBox, but at the same time it supports the 'drag and drop' event for multiple files (it allows dragging the files out of the application). How can I extract the paths of the files dragged out of the application? (i.e. I drop the files on some program or script that shows me the paths, or saves them to a txt file.)

    Read the article

  • Best ASP.NET Host for Developers

    - by Tyler
    I need it to allow me to host subdomains, and multiple-domain hosting is a huge plus. Required: ASP.NET 2.0, 3.0, 3.5; subdomain hosting; MS SQL and MySQL databases. Wanted: multiple domain hosting; ASP.NET 4.0; the ability to connect directly to MS SQL using SQL Server Management Studio. So what do you have for me, SF?

    Read the article

  • Idempotent PowerShell Word search/replace across documents with headers, change tracking, etc.

    - by user61633
    I've found one or two guides to doing a Word search and replace across multiple documents with PowerShell. They work well on simple documents. However, the script ignores text in headers and footers; and if "track changes" is enabled, it replaces text which has already been replaced, resulting in multiple copies of the new text if I run the script more than once on the same file. Any clues as to how I can avoid these undesirable behaviors and make this script robust?

    Read the article

  • VPN connects but no remote lan access

    - by Macros
    I have a PPTP VPN set up through Windows to a Cisco (Linksys) router, which was working fine up until yesterday. Now I can connect, and I can ping the router; however, I am unable to reach any of the machines on the remote network. I've tried this from multiple PCs on multiple networks and internet connections with the same results. Any ideas what may have caused this?

    Read the article

  • Switching efficiently between windows, not apps, in OS X

    - by Vultan
    Previous questions have asked "how can I efficiently switch between windows, not applications, in OS X?" (Switching windows on OS X, Switch between windows on Mac OS X?, and others). The most recommended suggestions seem to be: use some combo of cmd-tab and cmd-~; use Exposé, and possibly Spaces; use Witch. I spent the money on Witch and have been using it for a few weeks; it's OK, but it is sometimes slow to respond, sometimes buggy on window order, crashes my system if I disable and re-enable it too many times, and doesn't work properly with X11 apps. The built-in cmd-tab and cmd-~ are OK, but still bring an entire application to the forefront.
    A very common workflow for me is to bounce back and forth between two windows (for example, a browser window and a Thunderbird email in progress) when both apps (the browser and the email software) have multiple windows open. I can use cmd-tab to get back and forth between apps, but whenever I switch to an app, ALL windows from that app pop up. That suddenly fills my screen with irrelevant data and windows, and often drops those other windows in front of the single window from the other app that I was using and would like to keep viewing even though it isn't in focus.
    Exposé seems to be the preferred "OS X natural way", but I can't seem to get myself to use it efficiently. I hit F9 and see 10 windows; I then need to squint, try to find the window I want, then use the mouse or the cursor keys to navigate to it. Given the number of power users who say they use Exposé, I must be missing the boat here.
    My goal is not to make this a repeat of previous questions. I'm not asking "what are my alternatives?" (unless I've missed one above!). Rather, I'm asking: what are you, OS X power users, actually doing to handle the use case I described above? Another common use case for me is having multiple Excel spreadsheets and multiple browser windows open, rapidly switching back and forth between one spreadsheet in particular and one browser window. Every time I cmd-tab, all spreadsheets or all browser windows appear: I don't want to see the ones I'm not working with, and they tend to hide the windows from the other app that I don't have in focus but would like to at least eyeball. Can you describe what your workflow is like, and how you rapidly and thoughtlessly switch between windows from apps that have multiple windows open?

    Read the article

  • Sound has stopped working after upgrading to Windows 8

    - by Max
    I upgraded my (fairly new, about 2 months old) computer to Windows 8 last night, and now the sound has stopped working across the entire machine. I have checked using multiple programs (iTunes, YouTube, World of Warcraft, etc.) and multiple output devices (speakers and headset), checked the sound card drivers (Asus Xonar DG 5.1), and checked the volume mixer to ensure I wasn't just having a brainfart and had the sound muted, but nothing's working. Does anyone have any advice on what could be causing this? Thanks, Max

    Read the article

  • How to set up a home SIP server/proxy for multi-ring?

    - by zio
    I have a SIP account which only allows one device to be registered. When I'm at home I want incoming calls to ring on multiple devices, all of which are connected to the local network. I'm guessing the way to do this is to use a local server/proxy that allows multiple registrations and then forwards traffic to/from my SIP provider. What's a simple way to do this on OS X, Ubuntu, or some low-cost SIP router hardware?

    Read the article

  • Phone solution for virtual company

    - by EJB
    I am looking for recommendations/links for a service that can assign me a phone number, play a recorded message when someone calls (such as "press 1 for ..., press 2 for yyy", etc.), and then allow the caller to leave a message that is emailed to the owner of that particular voicemail box. Google Voice works for one mailbox only, but something like that with multiple mailboxes and multiple email addresses would be great.

    Read the article

  • Apache restart on every request

    - by Michael Gummelt
    In development, I'd like changes to my application to propagate immediately. "MaxRequestsPerChild 1" restarts each process after a request, but if there are multiple server processes, changes still don't propagate until each process restarts. I've tried several different directives to limit the number of server processes to 1: StartServers 1, MinSpareThreads 1, MaxSpareThreads 1, ThreadLimit 1, ThreadsPerChild 1, MaxClients 1, MaxRequestsPerChild 1. Apache still starts with multiple (3) apache2 processes. I'm using the mpm_worker module.

    Read the article

  • Monitoring Active Directory (AD) Replication in Windows Server 2008 R2

    - by Kyle Brandt
    With Active Directory, what is a good way to monitor replication? I have multiple sites and multiple locations, so ideally both replication between sites and within sites would be monitored. I'm not really sure whether each DC needs to be monitored, each NTDS connection, or each DC × each NTDS connection. For the purposes of fitting into a standard alerting methodology, perfmon counters that would allow me to alert if replication fell behind by X minutes seem like they might be ideal.

    Read the article

  • Change Active Taskbar Application

    - by MosheK
    I currently have "Buttons on other taskbars" set to "Never combine", so I can have multiple Chrome windows open and see multiple Chrome buttons on the taskbar. The problem is that sometimes I don't know which one I'm currently looking at. Is there a way to have the active application take on a different color, or give some other indication that it's the active window? To clarify, I don't need the title bar of the application to be a different color, just the "button" that shows up on the taskbar. This is for Windows 8.

    Read the article

  • Adopting DBVCS

    - by Wes McClure
    Identify early adopters
    Pick a small project with a small(ish) team. This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board!

    Research
    Research the tool(s) that you want to use. Some tools provide all of the features you would need, while some only provide a slice of the pie. DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next. Ideally a tool can track database versions and automatically apply updates. The change script generation process can be manual, but having diff tools available to automatically generate them can really reduce the overhead to adoption. Finally, an automated tool to generate a script file per database object is an added bonus, as your version control system can quickly identify what was changed in a commit (add/del/modify), just like with code changes.
    Don't settle on just one tool; identify several. Then work with the team to evaluate the tools. Have the team do some tests of the following scenarios with each tool:
    - Baseline an existing database: can the migration tool work with legacy databases? Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs.
    - Add/drop tables.
    - Add/drop procedures/functions/views.
    - Alter tables (rename columns, add columns, remove columns).
    - Massage data – migrations sometimes involve changing data types that cannot be implicitly cast and require you to decide how the data is explicitly cast to the new type. This is a requirement for a migrations platform. Think about a case where you might want to combine fields, or move a field from one table to another; you wouldn't want to lose the data.
    - Run the tool via the command line. If you cannot automate the tool in Continuous Integration, what is the point?
    - Create a copy of a database on demand.
    - Backup/restore databases locally.
    Let the team give feedback and decide together what tool they would like to try out. My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL-based migration platforms. In general I would recommend staying away from the fluent platforms, as they often lack baseline capabilities and add the overhead of learning a new API when SQL is already a very well known DSL. Code migrations often get messy with procedures/views/functions, as these have to be created with SQL and aren't cross-platform anyway. IMO, stick to SQL-based migrations.

    Reconciling Production
    If your project is a legacy application, you will need to reconcile the current state of production with your development databases. Find changes in production and bring them down to development, even if they are old and need to be removed. Once complete, produce a baseline of either dev or prod, as they are now in sync. Commit this to your VCS of choice.
    Add whatever schema-change tracking mechanism your tool requires to your development database. This often requires adding a table to track the schema version of that database. Your tool should support doing this for you; a minimal sketch of such a table is shown below. You can add this table to production when you do your next release.
    Script out any changes currently in dev. Remove production artifacts that you brought down during reconciliation. Add change scripts for any outstanding changes in dev since the last production release. Commit these to your repository.
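    For illustration, here is a minimal sketch of such a version-tracking table in T-SQL. The table and column names are hypothetical; real migration tools (TSqlMigrations, RoundHouse, etc.) each define their own equivalent schema:

    -- Hypothetical version-tracking table; actual migration tools create their own equivalent.
    CREATE TABLE dbo.SchemaVersion (
        VersionId  INT IDENTITY(1,1) PRIMARY KEY,
        ScriptName NVARCHAR(255) NOT NULL,             -- change script that was applied
        AppliedOn  DATETIME NOT NULL DEFAULT GETDATE() -- when it was applied
    );
    -- Each change script records itself after running, so scripts already applied are skipped:
    INSERT INTO dbo.SchemaVersion (ScriptName) VALUES (N'0001_add_customer_table.sql');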
    Say No to Shared Dev DBs
    Simply put, you wouldn't dream of sharing a code checkout, so why would you share a development database? If you have a shared dev database, back it up, distribute the backups and take the shared version offline (including the dev db server, once all projects are using DB VCS). Doing DB VCS with a shared database is bound to cause problems, as people won't be able to easily script out their own changes from those that others are working on.

    First prod release
    Copy prod to your beta/testing environment. Add the schema changes table (or mechanism) and do a test run of your changes. If successful, you can schedule this to be run on production.

    Evaluation
    After your first release, evaluate the pain points of the process. Try to find tools or modifications to existing tools to help fix them. Don't leave stones unturned; iteratively evolve your tools and practices to make the process as seamless as possible. This is why I suggest open source alternatives. Nothing is set in stone. A good example was adding transactional support to TSqlMigrations: we ran into situations where an update would break a database, so I added a feature to do transactional updates and roll back on errors! Another good example is generating change scripts. We had been making these manually for months; I found an open source project called Open DB Diff and integrated it with TSqlMigrations. These were things we just accepted at the time we began adopting our tool set. Once we became comfortable with the base functionality, it was time to start automating more of the process. Just like anything else in development, never be afraid to look for tools to make your job easier!
    Enjoy -Wes

    Read the article

  • SQL SERVER – SSMS: Backup and Restore Events Report

    - by Pinal Dave
    A DBA wears multiple hats and in fact does more than the eye can see. One of the core tasks of a DBA is taking backups. This looks so trivial that most developers shrug it off as the only activity a DBA might be doing. I have huge respect for DBAs all around the world: even if they seem cool with all the scripting, automation and maintenance work running round the clock to keep the business working almost 365 days, 24x7, their worth shows on the day the system or HDD crashes and you have an important delivery to make. Suddenly those backup tasks and maintenance jobs come in handy and are no longer as trivial as many consider them to be. So important questions like "When was the last backup taken?", "How much time did the last backup take?" and "What type of backup was taken last?" are tricky ones, and this report lands answers to them in a jiffy.
    The SSMS report we are discussing can be used to find backup and restore operations performed on the selected database. Whenever we perform any backup or restore operation, the information is stored in the msdb database. This report utilizes that information to report the size, time taken and file location for those operations.
    Once we launch this report, we see 4 major sections:
    - Average Time Taken For Backup Operations
    - Successful Backup Operations
    - Backup Operation Errors
    - Successful Restore Operations
    Let us look at each section in turn.

    Average Time Taken For Backup Operations
    The information shown in this section is taken from the backupset table in the msdb database. Here is the query behind it:

    USE msdb;
    SELECT (ROW_NUMBER() OVER (ORDER BY t1.TYPE)) % 2 AS l1,
           1 AS l2,
           1 AS l3,
           t1.TYPE AS [type],
           (AVG(DATEDIFF(ss, backup_start_date, backup_finish_date))) / 60.0 AS AverageBackupDuration
    FROM backupset t1
    INNER JOIN sys.databases t3 ON (t1.database_name = t3.name)
    WHERE t3.name = N'AdventureWorks2014'
    GROUP BY t1.TYPE
    ORDER BY t1.TYPE

    On my small database the time taken for a differential backup was less than a minute, hence a value of zero is displayed. This is an important piece of backup information which might help you in planning maintenance windows.

    Successful Backup Operations
    This information is derived from various backup tracking tables in the msdb database. Here is a simplified version of the query, which can be used separately as well:

    SELECT *
    FROM sys.databases t1
    INNER JOIN backupset t3 ON (t3.database_name = t1.name)
    LEFT OUTER JOIN backupmediaset t5 ON (t3.media_set_id = t5.media_set_id)
    LEFT OUTER JOIN backupmediafamily t6 ON (t6.media_set_id = t5.media_set_id)
    WHERE (t1.name = N'AdventureWorks2014')
    ORDER BY backup_start_date DESC, t3.backup_set_id, t6.physical_device_name;

    The report does some calculations to show the data in a more readable format; for example, the backup size is shown in KB, MB or GB. Expanding the first row by clicking (+) in the "Device type" column shows the path of the physical backup file. Personally, I find the Backup Size, Device Type and Backup Name in this section the most critical and worth a note. As mentioned in the previous section, this section also has the Duration embedded inside it.

    Backup Operation Errors
    This section of the report gets its data from the default trace. You might wonder how.
    One of the events tracked by the default trace is "ErrorLog", meaning that whatever message is written to the error log gets written to the default trace file as well. Interestingly, whenever there is a backup failure, an error message is written to the ERRORLOG and hence to the default trace. This section takes advantage of that to show the information. Under this section we can read the message below, which confirms the logic above:
    No backup operations errors occurred for (AdventureWorks2014) database in the recent past or default trace is not enabled.

    Successful Restore Operations
    This section may not be very useful on a production server (do you perform restores of databases there?) but can be useful in development and on log shipping secondaries, where we might be interested in the restore operations for a particular database. To fill this section of the report, I restored the same backups that were taken to populate the earlier sections. Here is a simplified version of the query used to populate this output:

    USE msdb;
    SELECT *
    FROM restorehistory t1
    LEFT OUTER JOIN restorefile t2 ON (t1.restore_history_id = t2.restore_history_id)
    LEFT OUTER JOIN backupset t3 ON (t1.backup_set_id = t3.backup_set_id)
    WHERE t1.destination_database_name = N'AdventureWorks2014'
    ORDER BY restore_date DESC, t1.restore_history_id, t2.destination_phys_name

    Have you ever looked at the backup strategy of your key databases? Is it in sync with your needs, and is there scope for improvement? Then this is the report to analyze after a week or a month of maintenance plans running on your database. Do chime in with the strategies you are using in your environments.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL. Tagged: SQL Reports
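    The same msdb tables can answer the report's opening questions directly. Here is a minimal sketch (assuming the standard msdb.dbo.backupset schema and the sample database used above) that returns the most recent backup of each type:

    -- Last backup of each type (D = full, I = differential, L = log) for one database.
    USE msdb;
    SELECT type, MAX(backup_finish_date) AS LastBackupFinished
    FROM dbo.backupset
    WHERE database_name = N'AdventureWorks2014'
    GROUP BY type
    ORDER BY type;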

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed, the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you’ll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle.
    If you’ve read my previous blog posts, you’ll be aware that I’ve been focusing on the database continuous integration theme. In my CI setup I create a “production”-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it’s not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn’t I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical.
    1. My CI environment isn’t an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley’s “Continuous Delivery” teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you’ve been allotted.
    2. It’s not just about the storage requirements; it’s also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control, I’m just not going to get the feedback quickly enough to react.
    So what’s the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I’m sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore effectively mounts the backup using its HyperBac technology. It creates a data file and a log file, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server’s point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no ‘duplicate’ storage requirements, other than the trivially small virtual log and data files. The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database.
    It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly “release test” process triggered by my CI tool:
    RESTORE DATABASE WidgetProduction_Virtual
    FROM DISK=N'D:\VirtualDatabase\WidgetProduction.bak'
    WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
    MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
    NORECOVERY, STATS=1, REPLACE
    GO
    RESTORE DATABASE WidgetProduction_Virtual WITH RECOVERY

    Note that the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the ‘virtual’ restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration: my 8 GB production database, its corresponding backup file, and the .vldf and .vmdf files, which represent the only additional storage used for the new database following the virtual restore.
    The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server
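    The benefit compounds with each extra copy mounted from the same backup. Here is a sketch, with hypothetical paths and database names, of mounting a second virtual database from the same .bak file; each additional copy costs only its own small .vmdf/.vldf delta:

    -- Hypothetical second virtual copy from the same backup; only its .vmdf/.vldf deltas consume storage.
    RESTORE DATABASE WidgetProduction_Dev2
    FROM DISK=N'D:\VirtualDatabase\WidgetProduction.bak'
    WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_Dev2.vmdf',
    MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_Dev2.vldf',
    NORECOVERY, STATS=1, REPLACE
    GO
    RESTORE DATABASE WidgetProduction_Dev2 WITH RECOVERY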

    Read the article

  • Unique identifier for an email

    - by Skywalker
    I am writing a C# application which allows users to store emails in a Microsoft SQL Server database. Many times, multiple users will be copied on an email from a customer. If they all try to add the same email to the database, I want to make sure that the email is only added once. MD5 springs to mind as a way to do this. I don't need to worry about malicious tampering, only to make sure that the same email will map to the same hash and that no two emails with different content will map to the same hash. My question really boils down to how one would combine multiple fields into one MD5 (or other) hash value. Some of these fields will have a single value per email (e.g. subject, body, sender email address) while others will have multiple values (varying numbers of attachments and recipients). I want to develop a way of uniquely identifying an email that is platform and language independent (not based on serialization). Any advice?
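    One possible approach, sketched here on the database side rather than in C#: build a canonical string by concatenating normalized single-value fields with sorted multi-value fields, then hash it. The schema and values below are hypothetical, and STRING_AGG assumes SQL Server 2017 or later; the same canonicalization works equally well in C# with MD5.ComputeHash:

    -- Minimal sketch (hypothetical data): canonicalize the fields, then hash.
    -- Multi-value fields are sorted so recipient order cannot change the fingerprint.
    DECLARE @Sender     NVARCHAR(256) = N'customer@example.com',
            @Subject    NVARCHAR(400) = N'Quote request',
            @Body       NVARCHAR(MAX) = N'Please send a quote.',
            @Recipients NVARCHAR(MAX);
    SELECT @Recipients = STRING_AGG(LOWER(Address), N';') WITHIN GROUP (ORDER BY LOWER(Address))
    FROM (VALUES (N'b@example.com'), (N'a@example.com')) AS r(Address);
    SELECT HASHBYTES('MD5',
                     LOWER(@Sender) + N'|' + @Subject + N'|' + @Recipients + N'|' + @Body)
           AS EmailFingerprint;

    A unique constraint on the stored fingerprint column then enforces the "added once" rule, whichever recipient inserts first.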

    Read the article
