Search Results

Search found 34699 results on 1388 pages for 'database backup'.

Page 91 of 1388

  • Oracle Enterprise Manager 12c Delivers Advanced Self-Service Automation for Oracle Database 12c Multitenant

    - by Javier Puerta
    Broadens Support for Managing Full Lifecycle of New Pluggable Database as a Service. Redwood Shores, Calif. – November 4, 2013.
    News Summary: Database as a Service (DBaaS) offers organizations accelerated deployment, elastic capacity, greater consolidation efficiency, higher availability and lower overall operational cost and complexity. Oracle Database 12c provides an innovative multitenant architecture featuring pluggable databases that makes it easy to offer DBaaS and consolidate databases on clouds. To support customers’ move to this model, Oracle Enterprise Manager 12c adds new automation capabilities to enable quick provisioning of database clouds through self-service, saving administrators time and effort. These new capabilities can help customers adopt Oracle Database 12c faster and pave the way to a DBaaS delivery model.
    News Facts: Oracle today announced a new release of Oracle Enterprise Manager 12c, which provides a turnkey, full lifecycle DBaaS management solution for Oracle Multitenant, an option for Oracle Database 12c Enterprise Edition. Read full press release here

    Read the article

  • SQL SERVER Fix: Error 3117: The log or differential backup cannot be restored because no files

    I received the following email from one of my readers. Dear Pinal, I am new to SQL Server and our regular DBA is on vacation. Our production database had some problem and I have just restored a full database backup to the production server. When I try to apply the log backup I am getting the following error. I am sure, [...]
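
    The usual cause of error 3117 is restoring the full backup with RECOVERY (the default), which leaves nothing for the log to roll forward onto. A minimal sketch of the correct sequence, shown here with Python and pyodbc purely for illustration (server, driver, database name and backup paths are assumptions):

```python
# Sketch only: restore the full backup WITH NORECOVERY, then apply the log backup.
# Server, driver, database name and file paths below are assumptions.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "Trusted_Connection=yes;",
    autocommit=True,  # RESTORE cannot run inside a user transaction
)
cur = conn.cursor()

# Leave the database in the RESTORING state so log backups can still be applied.
cur.execute("""
    RESTORE DATABASE MyDb
    FROM DISK = 'C:\\Backups\\MyDb_full.bak'
    WITH NORECOVERY, REPLACE
""")
while cur.nextset():
    pass  # drain any informational result sets from RESTORE

# Apply the transaction log backup and bring the database online.
cur.execute("""
    RESTORE LOG MyDb
    FROM DISK = 'C:\\Backups\\MyDb_log.trn'
    WITH RECOVERY
""")
while cur.nextset():
    pass
```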

    Read the article

  • BTrFS subvolume / snapshot question

    - by bumbling fool
    I think I'm having difficulty fully understanding subvolumes and snapshots. The /home partition is btrfs. I want to create a "backup" snapshot of /home/user (for example), but user has existed for years (the filesystem was previously ext4, converted with btrfs-convert). I believe you can only make a snapshot of a subvolume, and I checked that there are no "default" subvolumes already present. 1) Is there another way for me to back up /home/user other than creating a subvolume /home/user2 and copying everything from user to user2 in order to snapshot it?
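
    For what it's worth, the copy-into-a-subvolume-then-snapshot approach described in the question can be scripted; a rough sketch is below (run as root, paths are examples only, and --reflink keeps the copy cheap on btrfs):

```python
# Rough sketch of the "copy into a subvolume, then snapshot it" approach.
# Paths are examples; run as root. Assumes /home is a btrfs filesystem.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) Create a proper subvolume to hold the data.
run(["btrfs", "subvolume", "create", "/home/user2"])

# 2) Copy the existing home directory into it. --reflink=always shares
#    data extents on btrfs, so the copy is fast and uses little extra space.
run(["cp", "-a", "--reflink=always", "/home/user/.", "/home/user2/"])

# 3) Take a read-only snapshot of the new subvolume as the "backup".
run(["btrfs", "subvolume", "snapshot", "-r", "/home/user2", "/home/user2-backup"])
```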

    Read the article

  • How to Design a Relational Database; PASS Precon Swag and its “Symbolism”

    - by drsql
    Update! 10 more books added to the cadre from my friends at Red-Gate. With less than a week to go, I am starting to pack up for Charlotte and PASS 2013. I love that it is in Charlotte this year so I can drive and bring along some goodies to give away. Books and toys mostly, a variety of which were chosen rather specifically for some manner of symbolism with a tie-in to database design for the most part. (Okay, symbolism is perhaps a bit of a stretch, but I have tied everything, even the goofy stuff,...(read more)

    Read the article

  • Which of the following relational database management systems would a company adopt (for migration), if any: MS Access, MS SQL Server or MySQL?

    - by Hassan Hagi
    Dear programmers, as part of my final-year university project I am conducting research into relational database management systems such as Microsoft Office Access 2007, Microsoft SQL Server 2008 and MySQL 5.1. The description does not need to be detailed; however, I am trying to find empirical evidence and professional opinion/fact to determine which of the three databases is best suited for the required size of company (stated or unstated). OS: Microsoft Windows (XP or newer). Please consider the following, but full details are not necessary: memory management, migration, design constraints, integrity (data and others), triggers, user constraints, ease of use, performance, crash recovery (not the operating system), and Total Cost of Ownership (TCO). Also any info on open source (to do with the three RDBMS). Thank you for your time and help. Hassan Hagi

    Read the article

  • NEW CERTIFICATION: Oracle Certified Expert, Oracle Database 11g Release 2 SQL Tuning

    - by Brandye Barrington
    Oracle Certification announces the release of the new Oracle Certified Expert, Oracle Database 11g Release 2 SQL Tuning certification. This certification is designed for Developers, Database Administrators and SQL developers who are proficient in SQL tuning. It covers core topics such as identifying and tuning inefficient SQL statements, using automatic SQL tuning, managing optimizer statistics on database objects, implementing partitioning, and analyzing queries. Beta testing for the Oracle Database 11g Release 2: SQL Tuning exam (1Z1-117) is now underway, so the exam is available at the greatly discounted rate of $50 USD. Visit pearsonvue.com/oracle and register for exam 1Z1-117. You can get all preparation details on the Oracle Certification website, including exam objectives, number of questions, time allotments, and pricing. QUICK LINKS: Certification Track: Oracle Certified Expert, Oracle Database 11g Release 2 SQL Tuning | Certification Exam: Oracle Database 11g Release 2: SQL Tuning (1Z0-117) | Certification Website: About Beta Exams | Register Now: Pearson VUE

    Read the article

  • Exchange backup size larger than file size

    - by bladefist
    My Backup Exec is set up to integrate with Exchange, to back up the information store, versus just backing up the data file path. My Exchange mdbdata folder is 17 gigs, but Backup Exec is backing up 40 gigs worth of data. I have gone through it a million times, and it's strictly backing up the Exchange information store. I deleted all my backups and started over, to clear out the old incremental backup data. Where is all this extra data coming from?

    Read the article

  • Cloud storage that works with rsnapshot?

    - by humbledude
    I’ve started using rsnapshot as my backup system for my home PC. I really like the idea of hard links and how they are handled. But I can’t find the best workflow. Currently I keep my snapshots on the same partition and will copy the newest snapshot to a pen-drive at the end of the week. Cloud storage is what I’m looking for. Dropbox doesn’t fit my needs, because there is no way to make Dropbox respect hard links — all snapshots are treated as full snapshots. Renting a server is pretty expensive, so my question is, are there better alternatives for backup in the cloud? I would like to benefit from hard links and send only incremental backups, just like I do with my local host.
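
    Whatever remote target ends up being used, the hard-link trick rsnapshot relies on is essentially rsync's --link-dest; a minimal sketch of one incremental run is below (paths and snapshot names are assumptions):

```python
# Minimal sketch of a hard-link incremental snapshot, the same idea
# rsnapshot uses. Paths and snapshot names are examples only.
import datetime
import os
import subprocess

SOURCE = os.path.expanduser("~") + "/"          # what to back up
DEST_ROOT = "/mnt/backup/snapshots"             # where snapshots live
LATEST = os.path.join(DEST_ROOT, "latest")      # symlink to the newest snapshot

stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M%S")
new_snapshot = os.path.join(DEST_ROOT, stamp)

cmd = ["rsync", "-a", "--delete"]
if os.path.exists(LATEST):
    # Unchanged files become hard links into the previous snapshot,
    # so only new or modified files consume additional space.
    cmd.append("--link-dest=" + os.path.realpath(LATEST))
cmd += [SOURCE, new_snapshot + "/"]

subprocess.run(cmd, check=True)

# Point "latest" at the snapshot we just made.
if os.path.islink(LATEST):
    os.unlink(LATEST)
os.symlink(new_snapshot, LATEST)
```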

    Read the article

  • Server 2008 R2 - wbadmin systemstatebackup - System Writer not found in the backup

    - by TWood
    I am trying to manually run a systemstatebackup command on my Server 2008 R2 box, and I am getting error code '2155347997' when I view the backup event log details. The command line tells me that I have log files written to the c:\windows\logs\windowsserverbackup\ path, but I have no files of the .log type there. My command window tells me "System Writer is not found in the backup". However, when I run vssadmin list writers I find System Writer in the list, and it shows normal status with no last errors stored. I am running this from an elevated command prompt as well as from a logged-on administrator account. My backup target path has permission for Network Service to have full control, and it has plenty of free space. Looking in the event log I have two VSS error 8194 events that happen immediately before the Backup error 517, which has the error code 2155347997 listed. All three of these errors are a result of trying to run the command for the systemstatebackup. It's my belief that some VSS-related permission is failing and exiting the backup process before it ever gets started. Because of this, the initial code that creates the log files must not be running, and this is why I have no files. When running the systemstatebackup command from the command prompt and watching the windowsserverbackup directory, I do see that a Wbadmin.0.etl file gets created, but it is deleted when the backup errors out and stops. I have looked online and there are numerous opinions as to the cause of this error. These are the things I have corrected to try to fix this issue before posting here: The machine runs an HP 1410i smart array controller but at one time also used an LSI SCSI card. I used networkadminkb.com's kb# a467 to find one LSI_SCSI entry under HKLM\SYSTEM\CurrentControlSet\Services whose Start value was set to 0x0, and I modified it to 0x3. No changes. In HKLM\SYSTEM\CurrentControlSet\Services\VSS\Diag I gave Network Service full control where it previously only had "Special Permission". No changes. I followed KB2009272 to manually try to fix the System Writer. These are all of the things I have tried. What else should I look at to resolve this issue? It may be important to note that I run Mozy Pro on this server; that was known in the past to use VSS for copying operations and it occasionally threw an error. However, since an update last year those error event log entries have stopped.
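
    Before rerunning systemstatebackup it can help to script a quick health check of the VSS writers; a small sketch follows (run from an elevated prompt; the parsing is naive and assumes the usual English "vssadmin list writers" output layout):

```python
# Quick-and-dirty check of VSS writer health before retrying the backup.
# Run from an elevated prompt; parsing assumes the usual English
# "vssadmin list writers" output layout and may need adjusting elsewhere.
import subprocess

output = subprocess.run(
    ["vssadmin", "list", "writers"],
    capture_output=True, text=True, check=True
).stdout

writer, problems = None, []
for line in output.splitlines():
    line = line.strip()
    if line.startswith("Writer name:"):
        writer = line.split(":", 1)[1].strip().strip("'")
    elif line.startswith("State:") and "Stable" not in line:
        problems.append((writer, line))
    elif line.startswith("Last error:") and "No error" not in line:
        problems.append((writer, line))

if problems:
    for name, detail in problems:
        print(f"{name}: {detail}")
else:
    print("All VSS writers report Stable / No error.")
```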

    Read the article

  • Database Security: The First Step in Pre-Emptive Data Leak Prevention

    - by roxana.bradescu
    With WikiLeaks raising awareness around information leaks and the harm they can cause, many organizations are taking stock of their own information leak protection (ILP) strategies in 2011. A report by IDC on data leak prevention stated: "Increasing database security is one of the most efficient and cost-effective measures an organization can take to prevent data leaks. By utilizing the data protection, access control, account management, encryption, log management, and other security controls inherent in the database management system, entities can institute first-level control over the widest range of protected information. As a central repository for unstructured data, which is growing at leaps and bounds, the database should be the first layer providing information leakage protection." Unfortunately, most organizations are not taking sufficient steps to protect their databases, according to a survey of the Independent Oracle User Group. For example, in most organizations any operating system administrator or database administrator can access all the data stored in the database, without any kind of auditing or monitoring. And it's not just administrators: database users can typically access the database with ad-hoc query tools from their desktops and bypass any application-level controls. Despite numerous regulations calling for controls to limit the powers of insiders, most organizations still put too many privileges in the hands of their employees. Time and time again these excess privileges have backfired. Internal agents were implicated in almost half of data breaches according to the Verizon Data Breach Investigations Report, and the rate is rising. Hackers also took advantage of these excess privileges very successfully, using stolen credentials and SQL injection attacks. But back to the insiders. Who are these insiders and why do they do it? In 2002, U.S. Secret Service (USSS) behavioral psychologists and CERT information security experts formed the Insider Threat Study team to examine insider threat cases that occurred in US critical infrastructure sectors, from both a technical and a behavioral perspective. A series of fascinating reports has been published as a result of this work. You can learn more by watching the ISSA Insider Threat Web Conference. So as your organization starts to look at data leak prevention over the coming year, start by protecting your data at the source - your databases. IDC went on to say: "Any enterprise looking to improve its competitiveness, regulatory compliance, and overall data security should consider Oracle's offerings, not only because of their database management capabilities but also because they provide tools that are the first layer of information leak prevention." Learn more about Oracle Database Security solutions and get the whitepapers, demos, tutorials, and more that you need to protect data privacy from internal and external threats.

    Read the article

  • Document Link about Database Features on Exadata

    - by Bandari Huang
    DBFS on Exadata:
    - Exadata MAA Best Practices Series - Using DBFS on Exadata (Internal Only)
    - Oracle® Database SecureFiles and Large Objects Developer's Guide 11g Release 2 (11.2) E18294-01
    - Configuring a Database for DBFS on Oracle Database Machine [ID 1191144.1]
    - Configuring DBFS on Oracle Database Machine [ID 1054431.1]
    - Oracle Sun Database Machine Setup/Configuration Best Practices [ID 1274318.1] - Verify DBFS Instance Database Initialization Parameters
    DBRM on Exadata:
    - Exadata MAA Best Practices Series - Benefits and use cases with Resource Manager, Instance Caging, IORM (Internal Only)
    - Oracle® Database Administrator's Guide 11g Release 2 (11.2) E25494-02

    Read the article

  • Oracle Database In-Memory

    - by Mike.Hallett(at)Oracle-BI&EPM
    Larry Ellison unveiled the next major milestone in database technology, Oracle Database In-Memory, on June 10, 2014. Oracle Database In-Memory will be generally available in July 2014 and can be used with all hardware platforms on which Oracle Database 12c is supported. This option will accelerate database performance by orders of magnitude for analytics, data warehousing, and reporting while also speeding up online transaction processing (OLTP). It allows any existing Oracle Database-compatible application to automatically and transparently take advantage of columnar in-memory processing, without additional programming or application changes.
    Benefits:
    - Fast ad-hoc analytics without the need to pre-create indexes
    - Completely transparent to existing applications
    - Faster mixed workload OLTP
    - No database size limit
    - Industrial strength availability and security
    - Robustness and maturity of Oracle Database 12c
    To find out more see Oracle Database In-Memory
    Comment from Rittman Mead on Oracle In-Memory Option Launch
    ... and I will let you know how this unfolds in regards to advantages for OBI11g and Exalytics and Big Data over the coming months.

    Read the article

  • #OOW 2012 : IaaS, Private Cloud, Multitenant Database, and X3H2M2

    - by Eric Bezille
    The title of this post is a summary of the four announcements made by Larry Ellison today during the opening session of Oracle Open World 2012... To learn what's behind X3H2M2 you will have to wait a little, as I will go in order, beginning with the IaaS - Infrastructure as a Service - announcement.
    Oracle IaaS goes Public... and Private... Starting in 2004 with Fusion development, Oracle Cloud was launched last year to provide not only SaaS applications, based on standard development, but also the underlying PaaS required to build the specifics and the required interconnections between applications, in and outside of the Cloud. Still, to cover the end-to-end Cloud Services spectrum, we had to provide an Infrastructure as a Service, leveraging our servers, storage, OS, and virtualization technologies, all "Engineered Together". This Cloud Infrastructure was already available for our customers to rapidly build their own Private Cloud, either on SPARC/Solaris or x86/Linux... The second announcement made today brings that proposition a big step further: for cautious customers (like banks, or sensitive industries) who would like to benefit from the Cloud value of "as a Service" but don't want their data out in the Cloud, we propose to operate the same systems that provide our Public Cloud Infrastructure - Exadata, Exalogic and SuperCluster - behind their firewall, in a Private Cloud model.
    Oracle 12c Multitenant Database. This is also a major announcement made today on what's coming with Oracle Database 12c: the ability to consolidate multiple databases with no extra additional cost, especially in terms of memory needed on the server node, which is often THE limiting factor for consolidation. The principle can be compared to Solaris Zones: you have a Database Container, which "owns" the memory and database background processes, and "Pluggable" Databases inside this Database Container. This particular feature is a strong compelling event to evaluate Oracle Database 12c rapidly once it is available, as it is a major step forward into true database consolidation with multitenancy on a shared (optimized) infrastructure.
    X3H2M2, enabling the new Exadata X3 in-Memory Database. Here we are: X3H2M2 stands for X3 (the new version of Exadata, also announced today) Heuristic Hierarchical Mass Memory, providing the capability to keep most if not all of the data in the memory cache hierarchy. Of course, this is the major software enhancement of the new X3 Exadata machine, but as it is software, our current customers will be able to benefit from it on their existing systems by upgrading to the new release. But that's not the only thing we did with X3; at the same time we upgraded everything: the CPUs, adding more cores per server node (16 vs. 12, with the arrival of Intel E5 / Sandy Bridge), the memory, with 512GB per node, and the new Flash Fire card, bringing now up to 22 TB of Flash cache. All of this 4TB of RAM + 22TB of Flash is used cleverly, not only for reads but also for writes, by the X3H2M2 algorithm... making a very big difference compared to a traditional storage flash extension. And what do those extra performances bring you on an already very efficient system? Double the performance compared to the fastest storage array on the market today (including flash) while dividing your storage price by 10 at the same time... Something to consider closely these days...
    Especially as we also announced the availability of a new Exadata X3-2 eighth rack: a good starting point. As you have seen, a major opening for this year again, with true innovation. But that was not the only thing we saw today: before Larry's talk, Fujitsu introduced in more depth the upcoming new SPARC processor that they are co-developing with us. And as such, Andrew Mendelsohn, Senior Vice President, Database Server Technologies, came on stage to explain that the next step after I/O optimization for the database with Exadata is to accelerate the database at the execution level by bringing functions into the SPARC processor silicon. All in all, to process more and more data... the big theme of the day, and of the Oracle User Group conferences that were also happening today, where I had the opportunity to attend some interesting sessions on practical use cases of Big Data, one in finance and fraud profiling and the other on practical deployment of Oracle Exalytics for data analytics. In conclusion, one picture to try to convey the size of Oracle Open World... and you can understand why, with such rich content... and this is only the first day!

    Read the article

  • Can only connect to file server on second attempt

    - by Ross Fleming
    I have a FreeNAS file server on my local network, and I usually connect to it from Windows and Ubuntu computers. Ever since I upgraded from Ubuntu 12.04 to 12.10, Ubuntu will only connect on the second attempt. By which I mean: I browse to it via the file manager, and once I click on the link in "Bookmarks" it complains that it could not connect. If I then try again, it connects successfully and will keep its connection until the laptop is suspended or loses connection to the LAN for whatever reason. This isn't much of a problem, as I don't mind having to click twice, but my real problem is that my scheduled backup will complain that it cannot connect to the storage device if it has not already been accessed during the current session. Is there some way to either stop the issue altogether or to force the default backup tool to immediately make a second attempt at connecting?

    Read the article

  • Entity Framework: Connecting to a mdf user database file via localDB during script execution

    - by Marko Apfel
    Problem: If you run the "Generate database from model" wizard and execute the generated script, the destination database could be the wrong one (for instance, the master database of the SQL Server instance). Solution: To use your own attachable .mdf user database, some connection information must be specified during script execution. Executing your script opens the "Connect to Server" dialog. Press "Options" and go to the second tab, "Connection Properties". Select "Browse server" in the "Connect to database" dropdown box. Confirm the information dialog with "Yes". In the following dialog you can choose your user database. Now the schema is created in the user database.
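
    The same targeting can also be done outside the dialog by putting the .mdf into the connection string; a hedged sketch using Python and pyodbc, just to show the idea (the instance name, ODBC driver and file path are assumptions):

```python
# Sketch: connect to a LocalDB instance and attach a user .mdf directly,
# so any DDL you run lands in that database rather than in master.
# Instance name, ODBC driver and file path are assumptions.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    r"SERVER=(localdb)\MSSQLLocalDB;"          # older versions use (localdb)\v11.0
    r"AttachDbFilename=C:\Data\MyModel.mdf;"
    "Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

# Sanity check: confirm we are in the attached user database, not master.
print(cur.execute("SELECT DB_NAME()").fetchone()[0])

# The DDL generated by the wizard can then be executed against this
# connection, batch by batch (split the script on its GO separators).
```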

    Read the article

  • MySQL maintenance - how to clear the buffer?

    - by Dougal
    We have a server running our web app (PHP / MySQL) which is SLOW. My predecessor says: "We used to do database maintenance, which used to clear the buffer, cache and unwanted variables." I wonder what on earth he means by that statement. Does he mean a simple OPTIMIZE of the tables? Or the query cache? I understand MySQL but don't really know what he is describing. I would appreciate any pointers. Thanks.
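
    If he meant the query cache and routine table maintenance, the statements behind that kind of job look roughly like the sketch below (for MySQL 5.x, where the query cache still exists; credentials and table names are placeholders):

```python
# Sketch of "database maintenance" as commonly done on MySQL 5.x:
# defragment/clear the query cache and optimize the busiest tables.
# Credentials and table names are placeholders; the query cache was
# removed in MySQL 8.0, so the first two statements apply to 5.x only.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="maint", password="secret", database="webapp"
)
cur = conn.cursor()

cur.execute("FLUSH QUERY CACHE")   # defragment the query cache
cur.execute("RESET QUERY CACHE")   # empty it completely (needs RELOAD privilege)

for table in ("users", "orders", "sessions"):
    cur.execute(f"OPTIMIZE TABLE {table}")
    print(table, cur.fetchall())   # OPTIMIZE returns a status result set

cur.close()
conn.close()
```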

    Read the article

  • Alternative Windows Offline Files + Windows Backup + Previous Version Setup

    - by Herson
    Currently our documents are all hosted on a Windows 7 box. Users access the files using a Windows share, and the documents are available offline (a Windows 7 feature). The documents are backed up daily by the Windows 7 Backup and Restore utility, and users can access previous versions of a file (from the backups) using Windows Explorer's "Previous Versions" feature. This setup is currently working well, except for the following: We would prefer to have access to hourly versions of each file, not daily. The previous-versions mechanism is tied to the backup mechanism: Windows 7 performs a full backup every week and an incremental backup every day, and the previous versions of a file are simply what is available in the backups. If you have 20GB of documents and want to maintain at least three (3) years of history, you will use at minimum 3 years * 52 weeks * 20GB, or about 3TB, even if there are few changes in the documents. It's a pretty inefficient use of space. Looking up previous versions of a file is very slow (tens of minutes). This is probably related to the previous issue - Windows has to traverse all of its backups. I am considering using SVN with autocommit/autoupdate via TortoiseSVN. It would have the following advantages: Backups are easy and also back up the whole history of each document (just back up the repository). Previous versions can be created frequently; I think the svn commit / update cycle can be run every two minutes or so. Users can sync over the net. However, I can see the following issues: More conflicts than in the original setup, because multiple users can now edit the same file even when both are online, i.e. can connect to the SVN repo. The users can of course lock the file first before editing, but that would mean they have to adjust. Delay in propagation of file changes. With Windows 7 file sharing, changes made by one online user are instantaneously available to other online users. With the SVN setup, changes will only be propagated when the users execute the svn add/commit/update sequence; the delay will probably be a few minutes. This workflow will no longer work: "Hi, I just edited document X, can you have a quick look?" I would like to ask the opinion of the community on alternative setups, or improvements to the above setups to work out the kinks.
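
    For reference, the svn add/commit/update sequence mentioned above is easy to run on a timer; a minimal sketch (the working-copy path and interval are placeholders, and the svn command-line client is assumed to be installed):

```python
# Minimal sketch of the autocommit/autoupdate loop described above.
# Working-copy path and interval are placeholders; assumes the svn
# command-line client is installed and the working copy already exists.
import subprocess
import time

WORKING_COPY = r"D:\Documents"   # placeholder path
INTERVAL_SECONDS = 120           # "every two minutes or so"

def svn(*args):
    subprocess.run(["svn", *args], cwd=WORKING_COPY, check=True)

while True:
    svn("add", "--force", "--quiet", ".")   # pick up any new files
    svn("commit", "-m", "autocommit")       # push local edits
    svn("update")                           # pull everyone else's edits
    time.sleep(INTERVAL_SECONDS)
```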

    Read the article

  • Why are Back In Time snapshots so large?

    - by Chethan S.
    I just backed up the contents of my home partition onto my external hard drive using Back In Time. I browsed to the backed-up contents on the external drive, and under Properties it showed me the size as 9.6 GB. Since I had read that for subsequent snapshots Back In Time does not back up everything again, but instead creates hard links to older content and saves only the newer content, I wanted to test it. So I copied two small files into my home partition and ran 'Take Snapshot' again. The operation completed within a minute: first it checked the previous snapshot, assessed the changes, detected the two new files and synced them. After this, when I browsed to the backed-up contents, I was surprised to see the newer and older backups taking up 9.6 GB each. Isn't this a waste of hard drive space? Or did I misinterpret something?
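
    The file manager reports the apparent size of each snapshot; because the snapshots are hard-linked, the data is only stored once. One way to check real usage is to count each inode only once, as in this sketch (the snapshot directory path is a placeholder):

```python
# Sketch: compare apparent size vs. real disk usage across snapshots.
# Hard-linked files share an inode, so they are counted only once in
# "real" usage. The snapshot directory path is a placeholder.
import os

SNAPSHOT_ROOT = "/media/external/backintime"

apparent, real, seen = 0, 0, set()
for dirpath, dirnames, filenames in os.walk(SNAPSHOT_ROOT):
    for name in filenames:
        st = os.lstat(os.path.join(dirpath, name))
        apparent += st.st_size
        if (st.st_dev, st.st_ino) not in seen:   # count each inode once
            seen.add((st.st_dev, st.st_ino))
            real += st.st_blocks * 512            # st_blocks is in 512-byte units

print(f"apparent size: {apparent / 1e9:.1f} GB")
print(f"actual disk usage: {real / 1e9:.1f} GB")
```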

    Read the article

  • Domain controller failed to restore using windows backup tools

    - by Peilin
    Domain controller failed to restore using Windows backup tools. One issue: one Windows Server 2008 R2 domain controller with a full daily backup (the only controller in this company) is out of service due to a hardware issue. I tried the two methods below to recover to the newly purchased server, but both fail. 1) First method: boot from the Windows 2008 R2 CD and carry out a recovery from the backup. Everything is OK, but after reboot a blue screen appears and it restarts again. 2) Second method: a) install the OS on the new server; b) reboot the server into DSRM; c) use the Windows backup tools to restore the system state only. After reboot, the blue screen error appears and it restarts again. I know this may be a different-hardware issue, but how can it be resolved? Or can we restore only the AD services rather than the whole system state? Any suggestions?

    Read the article

  • How to exclude directories from Mozy custom backup sets using spotlight queries

    - by bromfiets
    I would like to create custom backup sets for Mozy which exclude certain directories. For example, I would like to back up my iTunes folder but exclude all podcasts. I have created a backup set which searches in /Users/me/Music and used the query kMDItemPath == "*Podcasts*"wc to exclude all matching files. However, nothing matches. Queries which use the kMDItemFSName Spotlight attribute work fine, but any query using kMDItemPath doesn't seem to work at all. What am I doing wrong?
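
    One way to debug the query itself is to run it through mdfind first and see what Spotlight actually returns before handing it to Mozy; a sketch follows (the folder is a placeholder, and the second query is just a known-working comparison):

```python
# Sketch: test a Spotlight query with mdfind before using it in a Mozy
# backup set. The folder is a placeholder; if mdfind returns nothing,
# the query will not exclude anything in Mozy either.
import subprocess

FOLDER = "/Users/me/Music"
QUERIES = [
    'kMDItemPath == "*Podcasts*"wc',     # the query that matches nothing
    'kMDItemFSName == "*.mp3"c',         # an attribute that is known to work
]

for query in QUERIES:
    result = subprocess.run(
        ["mdfind", "-onlyin", FOLDER, query],
        capture_output=True, text=True
    )
    matches = [line for line in result.stdout.splitlines() if line]
    print(f"{query!r}: {len(matches)} match(es)")
```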

    Read the article

  • Extracting httpdocs from Plesk Panel 9.5.4 Webserver backup file

    - by Paddington
    Good day, I am having problems manually extracting domains from a Plesk 9.5 backup that was FTPed onto my backup server. I have followed the article http://kb.parallels.com/en/1757 using method 2. The problem is at this step: zcat DUMP_FILE.gz > DUMP_FILE. My backup file CP_1204131759.tar is a tar archive, and zcat does not work with it. So I proceed to run the command cat CP_1204131759.tar > CP_1204131759. But when I try # cat CP_1204131759 | munpack I get an error that munpack did not find anything to read from standard input. I went on to extract the tar backup file using the xvf flags and got a lot of files (20) similar to these: CP_sapp-distrib.7686-0_1204131759.tgz CP_sapp-distrib.7686-35_1204131759.tgz CP_sapp-distrib.7686-6_1204131759.tgz How best can I extract the httpdocs of a domain from this server-wide Plesk 9.5.4 backup?
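
    Since the server-wide dump is a plain tar of nested .tgz archives, the two-level extraction can be scripted; a rough sketch with Python's tarfile module (output directory and domain are placeholders, and it assumes the domain's archives carry the domain name in their file names):

```python
# Rough sketch: pull the nested .tgz members out of the server-wide Plesk
# backup tar, then unpack only the httpdocs entries for the wanted domain.
# Output directory and domain are placeholders; assumes the domain's
# archives carry the domain name in their member names.
import fnmatch
import os
import tarfile

BACKUP = "CP_1204131759.tar"
DOMAIN = "example.com"          # the domain whose httpdocs you need
OUTDIR = "plesk_extract"

os.makedirs(OUTDIR, exist_ok=True)

with tarfile.open(BACKUP) as outer:
    for member in outer.getmembers():
        # Keep only the nested archives that belong to the domain.
        if member.name.endswith(".tgz") and DOMAIN in member.name:
            outer.extract(member, path=OUTDIR)
            inner_path = os.path.join(OUTDIR, member.name)
            with tarfile.open(inner_path, "r:gz") as inner:
                # httpdocs lives somewhere inside; extract just those entries.
                wanted = [m for m in inner.getmembers()
                          if fnmatch.fnmatch(m.name, "*httpdocs*")]
                inner.extractall(path=os.path.join(OUTDIR, DOMAIN), members=wanted)
```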

    Read the article

  • Why do Hyper-V and Windows Backup crash (BSOD) after a successful backup?

    - by Payson Welch
    Hello, I am running Server 2008 R2 with a handful of Hyper-V guest nodes. If Windows Backup runs without any of the Hyper-V nodes running, the server is fine. If Windows Backup runs while the Hyper-V nodes are running, it is fine until a few minutes after the backup completes, and then it BSODs. The storage location for the backup is iSCSI. I am wondering if anyone has any input on what might be causing this? I don't have the Hyper-V nodes set up on a VLAN and there is only one NIC on the server. Is it possible this is a networking / driver issue, and if so, how would I reconfigure the networking to fix this?

    Read the article

  • Issue with Windows Server backup

    - by mamu
    I have Windows Server 2008 R2 installed; the only role running on it is Hyper-V. I am trying to take a backup using the Windows Server Backup feature and it fails with the following error in the event log: The backup operation that started at '2009-08-22T18:42:14.123000000Z' has failed because the Volume Shadow Copy Service operation to create a shadow copy of the volumes being backed up failed with following error code '2155348129'. Please review the event details for a solution, and then rerun the backup operation once the issue is resolved. The error itself points to other event log entries for more detail, but I can't find anything in the event logs. I then ran the command vssadmin list writers, and it had the following out-of-the-ordinary entry in the list:
    Writer name: 'Microsoft Hyper-V VSS Writer'
    Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
    Writer Instance Id: {d15c5f78-121c-464f-b23b-f285e919b05c}
    State: [8] Failed
    Last error: Inconsistent shadow copy
    How can I resolve this?

    Read the article

  • Is there an industry standard for systems registered user permissions in terms of database model?

    - by EASI
    I have developed many applications with registered user access for my enterprise clients. Over the years I have changed my way of doing it, especially because I have used many programming languages and database types along the way. Some of those schemes were not exactly simple, such as view, create and/or edit permissions for each module in the application; others were as light as can/cannot access a certain module. But now that I am developing a very extensive application with many modules and many kinds of users accessing them, I am wondering if there is a standard model for doing it, because I can already see that the simple or light approach won't be enough.
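
    One widely used pattern for this is role-based access control (RBAC): users get roles, roles get permissions, and permissions are scoped to a module and an action. A minimal sketch of that shape (SQLite and the table/column names are used only for illustration, not as a prescribed standard):

```python
# Minimal sketch of a role-based access control (RBAC) schema:
# users get roles, roles get permissions, and permissions are scoped
# to a module and an action. SQLite is used only for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (
        user_id   INTEGER PRIMARY KEY,
        username  TEXT NOT NULL UNIQUE
    );
    CREATE TABLE roles (
        role_id   INTEGER PRIMARY KEY,
        name      TEXT NOT NULL UNIQUE          -- e.g. 'editor', 'auditor'
    );
    CREATE TABLE permissions (
        permission_id INTEGER PRIMARY KEY,
        module        TEXT NOT NULL,            -- e.g. 'invoices'
        action        TEXT NOT NULL,            -- e.g. 'view', 'create', 'edit'
        UNIQUE (module, action)
    );
    CREATE TABLE user_roles (
        user_id INTEGER REFERENCES users(user_id),
        role_id INTEGER REFERENCES roles(role_id),
        PRIMARY KEY (user_id, role_id)
    );
    CREATE TABLE role_permissions (
        role_id       INTEGER REFERENCES roles(role_id),
        permission_id INTEGER REFERENCES permissions(permission_id),
        PRIMARY KEY (role_id, permission_id)
    );
""")

# The access check then reduces to a single join:
CHECK = """
    SELECT 1 FROM users u
    JOIN user_roles ur ON ur.user_id = u.user_id
    JOIN role_permissions rp ON rp.role_id = ur.role_id
    JOIN permissions p ON p.permission_id = rp.permission_id
    WHERE u.username = ? AND p.module = ? AND p.action = ?
"""
def can(username, module, action):
    return conn.execute(CHECK, (username, module, action)).fetchone() is not None
```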

    Read the article

  • New technical whitepaper on Database-as-a-Service

    - by Javier Puerta
    High Availability Best Practices for Database Consolidation - The Foundation for Database-as-a-Service. An Oracle White Paper - April 2014. This paper provides MAA best practices for Database Consolidation using Oracle Multitenant. It describes standard HA architectures that are the foundation for DBaaS. It is most appropriate for a technical audience: Architects, Directors of IT and Database Administrators responsible for the consolidation and migration of traditional database deployments to DBaaS. Recommended best practices are equally relevant to any platform supported by Oracle Database except where explicitly noted as being an optimization or an example that applies only to Oracle Engineered systems.

    Read the article
