Search Results

Search found 31891 results on 1276 pages for 'database schema'.


  • extreme slowness with a remote database in Drupal

    - by ceejayoz
     We're attempting to scale our Drupal installations up and have decided on some dedicated MySQL boxes. Unfortunately, we're running into extreme slowness when we attempt to use the remote DB - page load times go from ~200 milliseconds to 5-10 seconds. Latency between the servers is minimal - a tenth or two of a millisecond.

        PING 10.37.66.175 (10.37.66.175) 56(84) bytes of data.
        64 bytes from 10.37.66.175: icmp_seq=1 ttl=64 time=0.145 ms
        64 bytes from 10.37.66.175: icmp_seq=2 ttl=64 time=0.157 ms
        64 bytes from 10.37.66.175: icmp_seq=3 ttl=64 time=0.157 ms
        64 bytes from 10.37.66.175: icmp_seq=4 ttl=64 time=0.144 ms
        64 bytes from 10.37.66.175: icmp_seq=5 ttl=64 time=0.121 ms
        64 bytes from 10.37.66.175: icmp_seq=6 ttl=64 time=0.122 ms
        64 bytes from 10.37.66.175: icmp_seq=7 ttl=64 time=0.163 ms
        64 bytes from 10.37.66.175: icmp_seq=8 ttl=64 time=0.115 ms
        64 bytes from 10.37.66.175: icmp_seq=9 ttl=64 time=0.484 ms
        64 bytes from 10.37.66.175: icmp_seq=10 ttl=64 time=0.156 ms
        --- 10.37.66.175 ping statistics ---
        10 packets transmitted, 10 received, 0% packet loss, time 8998ms
        rtt min/avg/max/mdev = 0.115/0.176/0.484/0.104 ms

     Drupal's devel.module timers show the database queries aren't running any slower on the remote DB - about 150 microseconds whether it's the local or the remote server. Profiling with XHProf shows PHP execution times that aren't out of whack, either. The number of queries doesn't seem to make a difference - we see the same 5-10 second delay whether a page has 12 queries or 250. Any suggestions about where I should start troubleshooting here? I'm quite confused.
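
     Note that the poster's own numbers rule out raw network latency: even 250 queries at the measured 0.176 ms average round trip add only about 44 ms per page. One hedged check worth running, as an assumption rather than anything from the original post: MySQL performs a reverse DNS lookup for each new client connection unless name resolution is disabled, and a slow or unreachable DNS server can add seconds per connection.

        -- Sketch: see whether the remote MySQL server skips reverse DNS lookups
        -- on connect. If this is OFF, enabling skip-name-resolve in my.cnf
        -- (with IP-based grants) is a common fix for multi-second connects.
        SHOW VARIABLES LIKE 'skip_name_resolve';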


  • Insert into a star-schema

    - by shaun
     I've read a lot about star schemas - about fact/dimension tables and SELECT statements to quickly report data - but the matter of data entry into a star schema seems aloof to me. How does one "theoretically" enter data into a star-schema DB while maintaining the fact table? Is a series of INSERT INTO statements within a giant stored proc with 20 params my only option (and how do I populate the fact table)? Many thanks.
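
     One common pattern, shown here as a minimal hedged sketch (table and column names hypothetical): load or upsert the dimension rows first, then insert fact rows by resolving each dimension's surrogate key from its natural key.

        -- Load new dimension members from a staging table.
        INSERT INTO dim_customer (customer_nk, customer_name)
        SELECT DISTINCT s.customer_nk, s.customer_name
        FROM staging_sales s
        WHERE NOT EXISTS (SELECT 1 FROM dim_customer d
                          WHERE d.customer_nk = s.customer_nk);

        -- Insert facts, translating natural keys into surrogate keys.
        INSERT INTO fact_sales (customer_key, date_key, amount)
        SELECT d.customer_key, dd.date_key, s.amount
        FROM staging_sales s
        JOIN dim_customer d ON d.customer_nk = s.customer_nk
        JOIN dim_date dd    ON dd.calendar_date = s.sale_date;

     A set-based load like this avoids the 20-parameter stored procedure: rows land in a staging table, and two INSERT ... SELECT statements maintain the dimensions and the fact table together.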


  • SQL SERVER – 2008 – Introduction to Snapshot Database – Restore From Snapshot

    - by pinaldave
     Snapshot database is one of the most interesting concepts that I have used in several places recently. Here is a quick definition of the subject from Books Online: A database snapshot is a read-only, static view of a database (the source database). Multiple snapshots can exist on a source database, and they always reside on the same server instance as the database. Each database snapshot is consistent, in terms of transactions, with the source database as of the moment of the snapshot's creation. A snapshot persists until it is explicitly dropped by the database owner.

     If you do not know how snapshot databases work, here is a quick note on the subject; however, please refer to the official description in Books Online for accuracy. A snapshot database is a read-only database created from an original database called the "source database". It operates at the page level. When a snapshot database is created, it is built on sparse files; in fact, it occupies no space (or very little space) in the operating system. When a data page is modified in the source database, that data page is copied to the snapshot database, making the sparse file grow. When an unmodified data page is read in the snapshot database, it actually reads the pages of the original database. In other words, the snapshot serves pre-change copies of modified pages and reads everything else from the source, so it always shows the source database as it was at the moment of its creation.

     Let us see a simple example of a snapshot. In the following exercise, we will do a few operations. Please note that this script is for demo purposes only - there are considerations of CPU, disk I/O and memory, which will be discussed in future posts.

     - Create Snapshot
     - Delete Data from Original DB
     - Restore Data from Snapshot

     First, let us create the snapshot database and observe the sparse file details.

        USE master
        GO
        -- Create Regular Database
        CREATE DATABASE RegularDB
        GO
        USE RegularDB
        GO
        -- Populate Regular Database with Sample Table
        CREATE TABLE FirstTable (ID INT, Value VARCHAR(10))
        INSERT INTO FirstTable VALUES(1, 'First');
        INSERT INTO FirstTable VALUES(2, 'Second');
        INSERT INTO FirstTable VALUES(3, 'Third');
        GO
        -- Create Snapshot Database
        CREATE DATABASE SnapshotDB ON
        (Name = 'RegularDB', FileName = 'c:\SSDB.ss1')
        AS SNAPSHOT OF RegularDB;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO

     Now let us see the resultset for the same. Next, let us delete something from the original DB and check the same details we checked before.

        -- Delete from Regular Database
        DELETE FROM RegularDB.dbo.FirstTable;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO

     When we check the details of the sparse file created by the snapshot database, we find some interesting details, while the details of the regular DB remain the same. This clearly shows that when we delete data from the regular/source DB, the affected data pages are copied to the snapshot database, which is why the size of the snapshot DB increases. Now let us take this small exercise to the next level and restore our deleted data from the snapshot DB to the original source DB.

        -- Restore Data from Snapshot Database
        USE master
        GO
        RESTORE DATABASE RegularDB
        FROM DATABASE_SNAPSHOT = 'SnapshotDB';
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Clean up
        DROP DATABASE [SnapshotDB];
        DROP DATABASE [RegularDB];
        GO

     Now let us check the results of the SELECT statements, and we can see that we were successfully able to restore the database from the snapshot database. This is clearly a very useful feature should you encounter a business need for it. I would like to request the readers to share more details if they are using this feature in their business. Also, let me know what other tasks you think it could potentially be used to achieve.

     The complete script of the aforementioned operations, for easy reference, is as follows:

        USE master
        GO
        -- Create Regular Database
        CREATE DATABASE RegularDB
        GO
        USE RegularDB
        GO
        -- Populate Regular Database with Sample Table
        CREATE TABLE FirstTable (ID INT, Value VARCHAR(10))
        INSERT INTO FirstTable VALUES(1, 'First');
        INSERT INTO FirstTable VALUES(2, 'Second');
        INSERT INTO FirstTable VALUES(3, 'Third');
        GO
        -- Create Snapshot Database
        CREATE DATABASE SnapshotDB ON
        (Name = 'RegularDB', FileName = 'c:\SSDB.ss1')
        AS SNAPSHOT OF RegularDB;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Delete from Regular Database
        DELETE FROM RegularDB.dbo.FirstTable;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Restore Data from Snapshot Database
        USE master
        GO
        RESTORE DATABASE RegularDB
        FROM DATABASE_SNAPSHOT = 'SnapshotDB';
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Clean up
        DROP DATABASE [SnapshotDB];
        DROP DATABASE [RegularDB];
        GO

     Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology


  • Create my own database system

    - by Xananax
     Ok so before I get bashed: I know it's something huge for one person; I don't care if the end product can actually be used or not. I need to learn how databases work in order to use them more efficiently, and my way of learning is by doing. So I want to create my own database system. I am not referring to creating a pseudo-database that would use queries to parse files; that would simply be a filesystem interface with a query language. I am talking about the actual structure of a database engine. And since what I have in mind is neither relational nor document-oriented (it's "node-oriented", if that even exists), I would need any resource to be as abstract and high-level as possible. So how would I go about creating that? What resources/tutorials/books can I read to understand? The language does not matter in the slightest. Ideally, the code would be pseudo-code to illustrate the concept, not tied to a particular language, but anything would do. I was not able to find anything on the matter on Google (since I am so illiterate on the subject, maybe I am just not entering the right search terms). If such resources are not available, then I guess something about how to create a client would at least be a step in the right direction.


  • Partner Webcast – More out of Database Appliance with DB Options - 13 September 2012

    - by Thanos
     The Oracle Database Appliance is a new way to take advantage of the world's most popular database - Oracle Database 11g - in a single, easy-to-deploy and manage system. It's a complete package of software, server, storage, and networking that's engineered for simplicity, saving time and money by simplifying deployment, maintenance, and support of database workloads. But that is not all: with support for all Oracle Database Options, the Oracle Database Appliance can be the ideal solution for many use cases.

     Features and benefits:

     - An engineered system of software, server, storage, and networking that simplifies deployment, maintenance, and support of high-availability database workloads, saving significant time and effort throughout the database administration lifecycle.
     - Simple one-button installation and full-stack integrated patching and diagnostics, reducing planned and unplanned downtime by automatically monitoring and logging service requests with Oracle Support.
     - Built using the world's #1 database, protecting databases from server and storage failures with Oracle Real Application Clusters and Automatic Storage Management, for high availability across a wide range of custom and packaged OLTP and data warehousing application databases.
     - Unique Pay-As-You-Grow software licensing, reducing cost with the flexibility to adjust your software spend as your business grows, without the need for any hardware upgrades.

     Discover the Oracle Database Appliance value proposition and learn how to position it and combine it with database options to capture new business and easily roll out solutions safely and with maximum cost efficiency. This webcast is repeated once again for your benefit.

     Agenda:

     - Oracle Database & Engineered Systems Innovation
     - What is the Oracle Database Appliance?
     - Oracle Database Appliance Value Proposition
     - Oracle Database Appliance with Database Options
     - Oracle Database Appliance Partners Business

     Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now!

     Oracle Database Appliance is available for purchase at the Oracle Store under Engineered Systems. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com. Visit our ISV Migration Center blog regularly, or follow us @oracleimc to learn more about Oracle Technologies as well as upcoming partner webcasts and events.


  • Document-oriented vs Column-oriented database fit

    - by user1007922
     I have a data-intensive application that desperately needs a database make-over. The general data model: There are records with RIDs, grouped together by group IDs (GID). The records have arbitrary data fields (maybe 5-15), with a few of them mandatory and the rest optional, and thus sparse. The general use model: There are LOTS and LOTS of writes. Millions to billions of records are stored. Very often they are associated with new GIDs, but sometimes they are associated with existing GIDs. There aren't as many reads, but when they happen, they need to be pretty fast, or at least constant speed regardless of the database size. And when the reads happen, they will need to retrieve all the records/RIDs with a certain GID. I don't have a need to search by the record field values; primarily, I will need to query by the GID and maybe RID. What database implementation should I use? I did some initial research into document-oriented and column-oriented databases, and it seems the document-oriented ones are a good fit, model-wise. I could store all the records together under the same document key using the GID, but I don't really have any use for their ability to search the document contents itself. I like the simplicity and scalability of column-oriented databases like Cassandra, but how should I model my data in this paradigm for optimal performance? Should my key be the GID, and should I create a column for each record/RID? (There may be thousands or hundreds of thousands of records in a group/GID.) Or should my key be the RID, and ensure each row has a column for the GID value? What results in faster writes and reads under this model?
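
     For the Cassandra layout the poster asks about, a minimal hedged sketch in CQL (names hypothetical): make the GID the partition key and the RID a clustering column, so every write is an append into one partition and "fetch all records for a GID" is a single-partition read.

        -- One partition per group; one clustered row per record.
        CREATE TABLE records (
            gid  bigint,
            rid  bigint,
            f1   text,   -- optional fields can simply be left unset (sparse)
            f2   text,
            PRIMARY KEY (gid, rid)
        );

        -- Read path: all records for a group, in one partition scan.
        SELECT * FROM records WHERE gid = ?;

     Making the RID the partition key instead would scatter each group across nodes and turn the group read into a multi-partition query, so GID-as-key favors both the write and read patterns described.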


  • Can I use Access with Visual Basic for building a database [on hold]

    - by user3413537
     I am the only programmer where I work (summer job), and I am a student with only a few years of programming experience. I was asked to build a database, and I am very excited about this project because hopefully I can learn a lot from it. Using this database, my manager is supposed to be able to assign work (dealing with businesses) to different people within the company using an interface (all workers have a shared drive). When workers are done with the paperwork related to a business, they can check off that it's done, add comments at the bottom of the interface, and then move on to the next business. The only experience I've had with databases is some querying with SQL, and I've built GUI interfaces with Java. The information on the interface will be populated from Excel so workers know which businesses they are dealing with. I've done some research, and I believe the best way to build this would be to build a GUI using Microsoft Visual Studio (Visual Basic) first, then figure out a way to populate the interface from Excel. Also, because the data is pretty straightforward and not complicated, I will be using MS Access to store and track the database. I know this won't be easy, but for all you geniuses out there: is this on the right path? Thanks.


  • Star Schema vs Snowflake Schema performance

    - by Megawolt
     Hi - I'm beginning to develop a social sharing website, so I'm curious about database schema design. In data mining, the star schema is the best one, but what about a social sharing website? By the nature of such sites there will (I hope :)) be many users at the same time. Which is better for performance under heavy use?
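
     To make the comparison concrete, a hedged sketch (hypothetical tables) of the same dimension modeled both ways: the star form keeps one wide, denormalized dimension table (fewer joins per query, some repeated values), while the snowflake form normalizes it (less repetition, one more join on every read).

        -- Star: one denormalized dimension table.
        CREATE TABLE dim_product_star (
            product_key   INT PRIMARY KEY,
            product_name  VARCHAR(100),
            category_name VARCHAR(100)  -- repeated for every product in the category
        );

        -- Snowflake: the category is split into its own table.
        CREATE TABLE dim_category (
            category_key  INT PRIMARY KEY,
            category_name VARCHAR(100)
        );
        CREATE TABLE dim_product_snow (
            product_key   INT PRIMARY KEY,
            product_name  VARCHAR(100),
            category_key  INT REFERENCES dim_category (category_key)
        );

     For a read-heavy site with many concurrent users, the extra join per query is usually the relevant cost; the repetition in the star form mostly costs storage.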


  • Oracle Database 12c: Partner Material

    - by Thanos Terentes Printzios
     Oracle Database 12c offers the latest innovation from Oracle Database Server Technologies with a new Multitenant Architecture, which can help accelerate database consolidation and Cloud projects. The primary resource for Partners on Database 12c is of course the Oracle Database 12c Knowledge Zone, where you can get up to speed on the latest Database 12c enhancements so you can sell, implement and support them. Resources and material on Oracle Database 12c can be found all around Oracle.com, and even hidden in AR posters like the one on the left. Here are some additional resources for you:

     The Oracle Database 12c: Interactive Quick Reference is a multimedia tool covering various terms and concepts used in the Oracle Database 12c release. This reference was built as a multimedia web page which provides descriptions of the database architectural components and references to relevant documentation. Overall, it is a nice little tool which may help you quickly find a view you are searching for or get more information about background processes in Oracle Database 12c. Use this tool to find valuable information for any complex concept or product in an intuitive and useful manner.

     The Oracle Database 12c Learning Library contains several technical trainings (2-day DBA, Multitenant Architecture, etc.) as well as videos/demos, learning paths by role and a lot more.

     Get ready and become an Oracle Database 12c Specialized Partner with the Oracle Database 12c Specialization for Partners. Review the specialization criteria and your company status, and apply for an Oracle Database 12c Specialization. Access our OPN training repository to get prepared for the exams.

     The "Oracle Database 12c: Plug into the Cloud!" Marketing Kit includes a great selection of assets to help Oracle partners in their marketing activities to promote solutions that leverage all the new features of Oracle Database 12c. In the package you will find assets (templates, invitation texts, presentations, telemarketing script, ...) to be used for your demand generation activities; a full set of presentations with the value propositions for customers; and sales enablement and sales support material. Review it and start planning your marketing activities around Database 12c.

     Also available: the Oracle Database 12c Quick Reference Guide (PDF) and the Oracle Database 12c - Partner FAQ (PDF).

     Partners that need further assistance with Database 12c can always contact us at partner.imc-AT-beehiveonline.oracle-DOT-com or locally address one of the Oracle ECEMEA Partner Hubs for assistance.


  • SQL SERVER – Log File Growing for Model Database – model Database Log File Grew Too Big

    - by pinaldave
     After reading my earlier article SQL SERVER - master Database Log File Grew Too Big, I received an email from another reader asking why the log file of the model database grows every day when he is not carrying out any operations in the model database. As per the email, he was absolutely sure he was doing nothing on his model database; he had even used Policy Management to catch any T-SQL operation in the model database, and there was none. This was indeed surprising to me. I sent a request for access to his server, which he happily agreed to, and within a minute we figured out the issue: he was taking a full backup of the model database every night. When I explained this to him, he did not believe it, so I quickly wrote down the following script. The results before and after running the script were very clear.

     What is the model database? The model database is used as the template for all databases created on an instance of SQL Server. Any object you create in the model database will be automatically created in subsequent user databases created on the server.

     NOTE: Do not run this in a production environment. During the demo, the model database was in full recovery mode and only full backup operations were performed (no log backups).

     Backup script in a loop:

        DECLARE @FLAG INT
        SET @FLAG = 1
        WHILE (@FLAG < 1000)
        BEGIN
            BACKUP DATABASE [model] TO DISK = N'D:\model.bak'
            SET @FLAG = @FLAG + 1
        END
        GO

     Why did this happen? The model database was in full recovery mode, and taking a full backup is a logged operation. As there were no log backups and only full backups were performed on the model database, the log was never truncated, so the log file kept growing.

     Resolution: Change the recovery model of the model database from Full to Simple, and take a full backup of the model database only when you change something in it. Let me know if you have encountered a situation like this, and if so, how you resolved it. It will be interesting to know about your experience.

     Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
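
     A minimal sketch of the resolution step in standard T-SQL: check the current recovery model, then switch the model database to simple recovery so its log no longer waits for log backups.

        -- Check the recovery model of the model database.
        SELECT name, recovery_model_desc
        FROM sys.databases
        WHERE name = 'model';

        -- Switch it to simple recovery.
        ALTER DATABASE [model] SET RECOVERY SIMPLE;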


  • Predicting advantages of database denormalization

    - by Janus Troelsen
     I was always taught to strive for the highest normal form in database normalization, and we were taught Bernstein's synthesis algorithm to achieve 3NF. This is all very well, and it feels nice to normalize your database, knowing that fields can be modified while retaining consistency. However, performance may suffer. That's why I am wondering whether there is any way to predict the speedup/slowdown from denormalizing. That way, you can build your list of FDs satisfying 3NF and then denormalize as little as possible. I imagine that denormalizing too much would waste space and time, because e.g. giant blobs are duplicated, or it becomes harder to maintain consistency because you have to update multiple fields using a transaction. Summary: Given a 3NF FD set and a set of queries, how do I predict the speedup/slowdown of denormalization? Links to papers appreciated too.
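
     The trade-off being asked about, as a minimal hedged sketch (hypothetical tables): denormalization converts a join cost paid on every read into an update cost paid on every write, which is why any prediction needs the query mix as input.

        -- Normalized: every read pays a join.
        SELECT o.order_id, c.customer_name
        FROM orders o
        JOIN customers c ON c.customer_id = o.customer_id;

        -- Denormalized: customer_name is copied into orders, so reads are
        -- single-table, but every rename must touch many rows in one transaction.
        UPDATE orders
        SET customer_name = 'New Name'
        WHERE customer_id = 42;

     A rough first-order estimate weighs (reads per second x join cost saved) against (writes per second x extra rows updated), plus the duplicated storage.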


  • Best approach for a database of long strings

    - by gsingh2011
     I need to store questions and answers in a database. The questions will be one to two sentences, but the answers will be long - at least a paragraph, likely more. The only way I know of to do this right now is an SQL database. However, I don't feel like this is a good solution, because as far as I've seen, these databases aren't used for data of this type or size. Is this the correct way to go, or is there a better way to store this data? Is there a better way than storing raw strings?
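
     For scale, a paragraph of text is small by relational standards; a hedged sketch (hypothetical names) of the conventional layout simply uses a long-text column type for the answer.

        CREATE TABLE question_answer (
            id        INT PRIMARY KEY,
            question  VARCHAR(500) NOT NULL,
            answer    TEXT NOT NULL   -- e.g. MySQL TEXT holds up to 64 KB; MEDIUMTEXT up to 16 MB
        );

     Several paragraphs fit comfortably within those limits, so raw strings in a TEXT/CLOB column are a normal design here.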


  • Is it necessary to create a database with as few tables as possible

    - by Shaheer
     Should we create a database structure with a minimum number of tables? Should it be designed in a way that everything stays in one place, or is it okay to have more tables? Will it affect anything in any way? I am asking this question because a friend of mine modified some database structure in MediaWiki; in the end, instead of 20 tables he was using only 8, and it took him 8 months to do that (it was his college assignment). EDIT: I am concluding the answer as: the number of tables does NOT matter, except in exceptional cases, in which denormalization may help. Thanks to everyone for the answers.


  • Does schema.org improve SEO?

    - by marko
     http://schema.org provides a collection of schemas, i.e., HTML tags, that webmasters can use to mark up their pages in ways recognized by major search providers. Search engines including Bing, Google, Yahoo! and Yandex rely on this markup to improve the display of search results, making it easier for people to find the right web pages. It sounds wonderful, but does the search spider ignore the extra attributes and elements? Is it just too clever and ignores them? Could such markup even lower your visibility because of the alteration?


  • Improving database I/O with SSDs!

    - by Yusuke.Yamamoto
     Posted: 2010/11/25. An overview of using SSDs to speed up database I/O. Oracle Database 11g Release 2 provides the Database Smart Flash Cache feature, which extends the database buffer cache onto flash storage such as SSDs, so that frequently read blocks can be served from flash rather than disk. The original (Japanese) article includes OLTP benchmark comparisons run with and without Database Smart Flash Cache. Details: http://oracletech.jp/products/pickup/000076.html


  • Database Design for a double entry accounting system

    - by Khou
     Should journal entries be recorded in a database design? In the real world it makes sense to keep a daily journal book and later transfer its entries into the double-entry accounts, but in the computerized version doing this produces duplicate records, which doesn't quite make sense. What I mean is: 1) the user enters the details of a transaction, and it gets recorded (this would be the journal book in real life); 2) the software then does all the double-entry accounting, referencing the journal book and splitting the transaction up into the double-entry accounting system.
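
     One way to avoid the duplication, as a minimal hedged sketch (hypothetical schema): store each transaction once, as a journal header plus balanced debit/credit lines, and derive the accounts view from those lines instead of copying them.

        CREATE TABLE journal_entry (
            entry_id    INT PRIMARY KEY,
            entry_date  DATE NOT NULL,
            memo        VARCHAR(200)
        );

        CREATE TABLE journal_line (
            line_id     INT PRIMARY KEY,
            entry_id    INT NOT NULL REFERENCES journal_entry (entry_id),
            account_id  INT NOT NULL,
            debit       DECIMAL(12,2) NOT NULL DEFAULT 0,
            credit      DECIMAL(12,2) NOT NULL DEFAULT 0
        );

        -- The "double-entry accounts" are a query, not a second copy:
        SELECT account_id, SUM(debit) - SUM(credit) AS balance
        FROM journal_line
        GROUP BY account_id;

     The journal remains the single source of truth; the split into accounts is computed rather than stored twice.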


  • Database design suggestions for a configurable product eshop

    - by solomongaby
     Hello, I am building an e-shop that will have configurable products. The configurable parts will need to have different prices and stocks from the main product. What database design would be best in this case? I started with something like this:

     - Features: id, name
     - Feature Options: id, id_feature, value
     - Products: id, name, price
     - Product Features: id, id_product, id_feature, value (the value from the feature options is saved here for ease of searching), configurable (yes/no)

     The problem is that now I am stuck on how to save the configurable product features. I was thinking of saving their values as JSON, but that would make saving a price modification for a certain option difficult. How would you go about this? Thank you.
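
     A hedged alternative sketch (hypothetical table and column names): keep each configurable option of a product as its own row, carrying its own price adjustment and stock, instead of a JSON blob, so one option's price can be changed with a single UPDATE.

        CREATE TABLE product_option (
            id                 INT PRIMARY KEY,
            id_product         INT NOT NULL,
            id_feature_option  INT NOT NULL,
            price_delta        DECIMAL(10,2) NOT NULL DEFAULT 0,  -- added to the base price
            stock              INT NOT NULL DEFAULT 0
        );

        -- Changing the surcharge for one option touches one row:
        UPDATE product_option
        SET price_delta = 15.00
        WHERE id = 42;

     The JSON approach would instead require reading, rewriting and re-saving the whole blob for the same change, and it makes searching by option value harder.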


  • should this database table be normalized?

    - by oo
     I have taken over a database that stores fitness information, and we were having a debate about a certain table: should it stay as one table or get broken up into three tables? Today there is one table called workouts with the following fields: id, exercise_id, reps, weight, date, person_id. So if I did 3 sets of 2 different exercises on one day, I would have 6 records in that table for that day. For example:

        id, exercise_id, reps, weight, date,     person_id
        1,  1,           10,   100,    1/1/2010, 10
        2,  1,           10,   100,    1/1/2010, 10
        3,  1,           10,   100,    1/1/2010, 10
        4,  2,           10,   100,    1/1/2010, 10
        5,  2,           10,   100,    1/1/2010, 10
        6,  2,           10,   100,    1/1/2010, 10

     So the question is: given that there is some redundant data (date, person_id, exercise_id) across multiple records, should this be normalized into three tables?

        WorkoutSummary:  id, date, person_id
        WorkoutExercise: id, workout_id (foreign key into WorkoutSummary), exercise_id
        WorkoutSets:     id, workout_exercise_id (foreign key into WorkoutExercise), reps, weight

     I would guess the downside is that queries would be slower after this refactoring, as we would now need to join 3 tables to do the same query that had no joins before. The benefit of the refactoring is that it allows us, in the future, to add new fields at the workout-summary level or the exercise level without adding more duplication. Any feedback on this debate?
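
     For a feel of the read-side cost, a hedged sketch (using the proposed table names) of the query that reassembles one day's workout after the split - two joins where the original design needed none:

        SELECT ws.date, we.exercise_id, s.reps, s.weight
        FROM WorkoutSummary ws
        JOIN WorkoutExercise we ON we.workout_id = ws.id
        JOIN WorkoutSets s ON s.workout_exercise_id = we.id
        WHERE ws.person_id = 10
          AND ws.date = '2010-01-01';

     With indexes on the two foreign keys this is typically cheap, so the duplication saved on every insert is often worth the extra joins.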


  • storing map template in database

    - by Timigen
     I am working on an application that displays choropleth maps. These maps are of different types: some display a state by county, a country by state/province, or the world by country. How should I handle storing the map information in the database? My thoughts: I won't need to do queries to find POIs inside a region, so I don't think there is a need to use spatial datatypes. I am considering storing each map as a geoJSON object (I am using a JS mapping library that accepts geoJSON). The only issue is: what if I want a map of the US Northeast? Then I would have geoJSON for the US and a separate one for the US Northeast, which would be redundant. Would it make sense to have a shape database where I had each state, so that when I needed a map of the US I could query for each state, and when I needed a map of the US Northeast I could again query for just what I need? Note: I am not concerned with storing the data for each region, just the region itself. I will query for the data on the fly for the specific region.
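
     The shape-table idea from the question, as a minimal hedged sketch (hypothetical schema): one geoJSON fragment per lowest-level region, tagged with its parent, so composite maps are assembled at query time and nothing is stored twice.

        CREATE TABLE region_shape (
            region_code  VARCHAR(10) PRIMARY KEY,  -- e.g. 'US-NY'
            parent_code  VARCHAR(10),              -- e.g. 'US'
            geojson      TEXT NOT NULL             -- this region's geoJSON Feature
        );

        -- Whole-US map: every state under the US parent.
        SELECT geojson FROM region_shape WHERE parent_code = 'US';

        -- US Northeast map: just the states you need.
        SELECT geojson FROM region_shape
        WHERE region_code IN ('US-NY', 'US-NJ', 'US-PA', 'US-CT', 'US-RI',
                              'US-MA', 'US-VT', 'US-NH', 'US-ME');

     The application then wraps the returned features into a single geoJSON FeatureCollection before handing it to the mapping library.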


  • Structuring database for multi-object "activity" and "following" functionalities

    - by romaninsh
     I am working on a web application which operates with different types of objects, such as users, profiles, pages etc. All objects have a unique object_id. When objects interact they may produce "activity", such as a user posting on a page or profile. Activity may be related to multiple objects through their object_ids. Users may also follow "objects", and they need to be able to see a stream of relevant activity. Could you provide me with some data-structure suggestions that would be efficient and scalable? My goal is to show activity limited to the objects which the user is following. I am not limited to relational databases.

     Update: As I'm getting advice on ORMs and how to index things, I'd like to stress my question again. According to my current design model, the database structure looks like this: As you can see, it's quite easy to implement a database like that. The Activity and Follower tables contain a much larger number of records than the upper level, but that is tolerable. Where it becomes a nightmare is when I try to create a "timeline" table: for every user I need to reference all the activities of the objects he follows, and in terms of records that easily gets out of control. Please suggest how to change this structure to avoid timeline creation while still being able to quickly retrieve activity for any given user. Thanks.
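
     The usual way to avoid a materialized timeline, as a hedged sketch (table names hypothetical): compute the stream at read time by joining the viewer's follow list to the activity log, and rely on two indexes to keep it fast.

        -- Pull model: no timeline table at all.
        SELECT a.activity_id, a.object_id, a.created_at
        FROM activity a
        JOIN follower f ON f.followed_object_id = a.object_id
        WHERE f.user_id = 42          -- the viewer
        ORDER BY a.created_at DESC
        LIMIT 50;

        -- Supporting indexes (assumed):
        --   follower (user_id, followed_object_id)
        --   activity (object_id, created_at)

     This trades write-time fan-out for read-time work; if reads ever dominate, a hybrid that materializes timelines only for the most active users is a common next step.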



  • Oracle Database 12c: Oracle Multitenant Option

    - by hamsun
     1. Why? 2. What is it? 3. How?

     1. Why? The main idea of the "grid" is to share resources, to make better use of storage, CPU and memory. If a database administrator wishes to implement this idea, he or she must consolidate many databases into one database. One of the concerns with running many applications together in one database is: what will happen if one of the applications must be restored because of a human error? Tablespace point-in-time recovery can be used for this purpose, but there are a few prerequisites; most importantly, the tablespaces must be strictly separated for each application. Another reason for creating separate databases is security: each customer has his own database. Therefore, there is often a proliferation of smaller databases. Each of them must be maintained and upgraded, and each allocates virtual memory and runs background processes, thereby wasting resources. Oracle 12c offers another possibility for virtualization, providing isolation at the database level: the multitenant container database holding pluggable databases.

     2. What? Pluggable databases are logical units inside a multitenant container database, which consists of one multitenant container database and up to 252 pluggable databases. The SGA is shared, as are the background processes. The multitenant container database holds metadata information common to the pluggable databases inside its System and Sysaux tablespaces, and there is just one Undo tablespace. The pluggable databases have smaller System and Sysaux tablespaces, containing just their 'personal' metadata. New data dictionary views make the information available either at the pdb level (DBA_* views) or the container level (CDB_* views). There are local users, which are known in specific pluggable databases, and common users, which are known in all containers. Pluggable databases can be easily plugged into another multitenant container database and converted from a non-CDB. They can undergo point-in-time recovery.

     3. How? Creating a multitenant container database can be done using the Database Configuration Assistant, where you find the new option: Create as Container Database. If you prefer 'hand-made' databases you can execute the command from an instance in nomount state:

        CREATE DATABASE cdb1 ENABLE PLUGGABLE DATABASE ...

     And of course this can also be achieved through Enterprise Manager Cloud. A freshly created multitenant container database consists of two containers: the root container as the 'rack', and a seed container, a template for future pluggable databases. There are four ways to create other pluggable databases:

     1. Create an empty pdb from seed
     2. Plug in a non-CDB
     3. Move a pdb from another CDB
     4. Copy a pdb from another CDB

     We will discuss option 2: how to plug a non-CDB into a multitenant container database. Three different methods are available:

     1. Create an empty pdb and use Data Pump in traditional export/import mode or with Transportable Tablespace or Database mode. This method is suitable for pre-12c databases.
     2. Create an empty pdb and use GoldenGate replication; when the pdb catches up with the non-CDB, you fail over to the pdb.
     3. Databases of version 12c or higher can be plugged in with the help of the new dbms_pdb package.

     This is a demonstration of method 3:

     Step 1: Connect to the non-CDB to be plugged in and create an XML file with a description of the database. The XML file is written to $ORACLE_HOME/dbs by default and contains mainly information about the datafiles.

     Step 2: Check whether the non-CDB is pluggable into the multitenant container database.

     Step 3: Create the pluggable database, connected to the multitenant container database. With the nocopy option the files are reused, but the tempfile is created anew. A service is created and registered automatically with the listener.

     Step 4: Delete unnecessary metadata from the PDB SYSTEM tablespace. To connect to the newly created pdb, edit tnsnames.ora and add an entry for the new pdb. Connect to the plugged-in non-CDB and clean up the data dictionary to remove entries now maintained in the multitenant container database. As all kept objects have to be recompiled, this will take a few minutes.

     Step 5: The plugged-in database is automatically synchronised, by creating common users and roles, when it is opened in read-write mode for the first time.

     Step 6: Verify tablespaces and users. There is only one local tablespace (users) and one local user (scott) in the plugged-in non-CDB pdb_orcl.

     This method of plugging in a non-CDB is fast and easy for 12c databases. The method for unplugging a pluggable database from a CDB is to create a new non-CDB, use the new full transportable feature of Data Pump, and drop the pluggable database.

     About the Author: Gerlinde has been working for Oracle University Germany as one of our Principal Instructors for over 14 years. She started with Oracle 7 and became an Oracle Certified Master for Oracle 10g and 11g. She is a specialist in Database Core Technologies, with profound knowledge in Backup & Recovery, Performance Tuning for DBAs and Application Developers, Data Warehouse Administration, Data Guard and Real Application Clusters.
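
     The SQL behind those steps, as a hedged sketch (the file paths and the pdb name pdb_orcl are illustrative; the package calls are the documented 12c API):

        -- Step 1: on the non-CDB (opened read-only), write the XML manifest.
        EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/u01/app/oracle/orcl.xml');

        -- Step 2: on the CDB, check plug-in compatibility.
        SET SERVEROUTPUT ON
        DECLARE
            compatible BOOLEAN;
        BEGIN
            compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                              pdb_descr_file => '/u01/app/oracle/orcl.xml');
            DBMS_OUTPUT.PUT_LINE(CASE WHEN compatible THEN 'YES' ELSE 'NO' END);
        END;
        /

        -- Step 3: plug it in, reusing the existing datafiles.
        CREATE PLUGGABLE DATABASE pdb_orcl
            USING '/u01/app/oracle/orcl.xml'
            NOCOPY TEMPFILE REUSE;

        -- Step 4: clean up the data dictionary inside the new pdb.
        ALTER SESSION SET CONTAINER = pdb_orcl;
        @?/rdbms/admin/noncdb_to_pdb.sql

        -- Step 5: open read-write to trigger the first-time synchronisation.
        ALTER PLUGGABLE DATABASE pdb_orcl OPEN;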


  • Database users in the Oracle Utilities Application Framework

    - by Anthony Shorten
     I mentioned the product database users fleetingly in the last blog post, and they deserve a better mention. This applies to all versions of the Oracle Utilities Application Framework. The Oracle Utilities Application Framework initially uses up to three users as part of the base operations of the product. The type of database (the framework supports Oracle, IBM DB2 and Microsoft SQL Server) dictates the number of users used and their permissions. For brevity I will outline what is available for the Oracle database and, in summary, mention where it differs for the other supported databases. For Oracle database customers we ship three distinct database users:

     - Administration User (SPLADM or CISADM by default): This is the database user that actually owns the schema. This user is not used by the product to do any DML (Data Manipulation Language) SQL other than what is necessary for maintenance of the database. This database user performs all the DCL (Data Control Language) and DDL (Data Definition Language) against the database. It is typically reserved for database administration use only.

     - Product Read Write User (SPLUSER or CISUSER by default): This is the database user used by the product itself to execute DML (Data Manipulation Language) statements against the schema owned by the Administration user. This user has the appropriate read and write permissions to objects within the schema owned by the Administration user. For databases such as DB2 and SQL Server we may not create this user but use other DCL (Data Control Language) statements and facilities to simulate this user.

     - Product Read User (SPLREAD or CISREAD by default): This is the database user that has read-only permission to the schema owned by the Administration user. It is used for reporting or any part of the product or interface that requires read permissions to the database (for example, products that have ConfigLab and Archiving use this user for remote access). For databases such as DB2 and SQL Server we may not create this user but use other DCL statements and facilities to simulate this user.

     You may notice the words "by default" in the list above. The values supplied with the installer are the defaults and can be changed to whatever the site standard or implementation wants to use (as long as they conform to the standards supported by the underlying database). You can even create multiples of each within the same database, pointing to the same schema. To manage the permissions for the users, a utility is provided with the installation (oragensec for Oracle, db2gensec for DB2 or msqlgensec for SQL Server) that generates the security definitions for the above users. It can be executed a number of times for each schema to give users the appropriate permissions. For example, it is possible to define more than one read/write user to access the database; this is a common technique used by implementations to have a different user per access mode (to separate online and batch). In fact you can also allocate additional security (such as resource profiles in Oracle) to limit the impact of specific users at the database. To facilitate users and permissions, in Oracle for example, we create a CISREAD role (read-only role) and a CISUSER role (read/write role) that can be allocated to the appropriate database user. When the security permissions utility, oragensec in this case, is executed, it uses the role to determine the permissions.

     To give you a case study: my underpowered laptop has multiple installations of multiple products on it, but I have one database. I create a different schema for each product and each version (with my own naming convention to help me manage the databases). I create individual users on each schema and run oragensec to maintain the permissions for each appropriately. It works fine as long as I have set up the userids appropriately. This means:

     - Creating the users with the appropriate roles. I use the common CISUSER and CISREAD roles across versions and across Oracle Utilities Application Framework products. Just remember to associate the CISUSER role with the database user you want to use for read/write operations and the CISREAD role with the user you wish to use for read-only operations. The role is treated as a tag that tells the oragensec utility which permissions to assign to the user. The utilities for the other database types essentially do the same, obviously using the technology available within those databases.

     - Running oragensec for the read/write user and the read-only user against the appropriate administration user (I will abbreviate this user to ADM user). This ensures the right permissions are allocated to the right users for the right products. To help me there, I use the same prefix on the user names for the same product. For example, my Oracle Utilities Application Framework V4 environment has the administration user set to FW4ADM and the associated FW4USER and FW4READ as the users for the product to use. For my MWM environment I used MWMADM for the administration user and MWMUSER and MWMREAD for the associated users. You get the picture. When I run oragensec (once for each ADM user), I know which other users to associate with it.

     - Rerunning oragensec against the users whenever I apply upgrades, service packs or database-based single fixes. This assures that the users stay in synchronization with the ADM user.

     As a side note, for those who do not understand the difference between DML, DCL and DDL:

     - DDL (Data Definition Language): SQL statements that define the database schema and the structures within it. SQL statements such as CREATE and DROP are examples of DDL.
     - DCL (Data Control Language): SQL statements that define the database-level permissions to DDL-maintained objects within the database. SQL statements such as GRANT and REVOKE are examples of DCL.
     - DML (Data Manipulation Language): SQL statements that alter the data within the tables. SQL statements such as SELECT, INSERT, UPDATE and DELETE are examples of DML.

     Hope this has clarified the database user support. Remember that in Oracle Utilities Application Framework V4 we enhanced this by also supporting CLIENT_IDENTIFIER, to allow the database to still use the administration user for the main processing but make the database session more traceable.
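
     For readers who want to see the shape of the grants involved, a hedged sketch in standard Oracle SQL (the roles and default user names are from the post; the table name is hypothetical, and oragensec automates this per schema object):

        -- The two roles acting as permission "tags".
        CREATE ROLE CISUSER;
        CREATE ROLE CISREAD;

        -- Per schema object: read/write grants to one role, read-only to the other.
        GRANT SELECT, INSERT, UPDATE, DELETE ON CISADM.SAMPLE_TABLE TO CISUSER;
        GRANT SELECT ON CISADM.SAMPLE_TABLE TO CISREAD;

        -- The product users then pick up their access through the roles.
        GRANT CISUSER TO SPLUSER;
        GRANT CISREAD TO SPLREAD;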


  • Open Source .Net Object Database or Document Database for use in Hosted environment

    - by runxc1 Bret Ferrier
     I am looking at creating a web site, and I want to try to learn either an object database or a document database. I am going to be using a hosting provider, so I won't be able to install any software, and I am unable to purchase any licensing, so I need to be able to use either a free or open-source object/document database. Are there any free object/document databases that don't require installation of some sort?

