Search Results

Search found 41329 results on 1654 pages for 'oracle database vault'.

Page 255 of 1654

  • Advantages of relational databases over VSAM, ISAM and hierarchical data stores

    - by llaszews
    When migrating companies from legacy environments to the cloud, you invariably run into older hierarchical, flat-file, VSAM, ISAM and other legacy data stores. There are many advantages to moving these databases into a relational database structure. The most important is that most cloud providers run on relational database models; AWS, for example, supports Oracle, SQL Server, and MySQL. The top three 'other reasons' for moving to a relational database are: 1. Data access – thousands of database access tools, from query creation to business intelligence. 2. Management and monitoring – hundreds of tools for managing and monitoring the database. 3. Leverage all the free tools from relational database vendors. Free Oracle database tools include: Application Express – WYSIWYG browser-based application development and deployment; SQL Developer – SQL and PL/SQL development and database object maintenance. What is interesting is that Big Data is taking us back to the old days: NoSQL key-value databases echo VSAM, and XML databases echo the hierarchical IMS.

    Read the article

  • From Oracle PL/SQL Developer to Java programmer - Is it a good decision? [on hold]

    - by user3554231
    I will explain my question in simple words. I have a little over 1 year of experience in Oracle. My dream is to be "called" a 'developer' – a database developer if not a software developer. But right now I don't develop anything, nor am I in good touch with PL/SQL and other Oracle utilities like SQL*Loader, shell scripting and the like, as I am only a system analyst who analyzes and configures the database using SQL queries. To be honest, I know very basic PL/SQL and have good knowledge of SQL, but that won't ever give me a chance to be a developer, as I am lagging way behind the knowledge of "real" developers. Now I feel I should learn Java as well so that I can keep up with the competition. But I am too scared to learn new things, as it will take much more time, which will indirectly increase my useless work experience (just analyzing), which is worth nothing in today's market. Moreover, I am too lazy to work hard, i.e. to study and not just work during office hours. To sum it up, I am lazy, confused and scared, but I want to learn things as well, and I don't know if I am intelligent enough to learn the whole of PL/SQL or to master any other language. Is there any other way I can feel confident? Actually, I sometimes even feel that if I still haven't achieved my goal after 2-3 years, I won't ever be able to reach my destination. I just want to live my dream of being a developer. Give me some tips and hope, but not false hope.

    Read the article

  • Database Migration Scripts: Getting from place A to place B

    - by Phil Factor
    We’ll be looking at a typical database ‘migration’ script which uses an unusual technique to migrate existing ‘de-normalised’ data into a more correct form. So, the book-distribution business that uses the PUBS database has gradually grown organically, and has slipped into ‘de-normalisation’ habits. What’s this? A new column with a list of tags or ‘types’ assigned to books. Because books aren’t really in just one category, someone has ‘cured’ the mismatch between the database and the business requirements. This is fine, but it is now proving difficult for their new website that allows searches by tags. Any request for a history book really has to look in the entire list of associated tags rather than the ‘Type’ field that only keeps the primary tag. We have other problems. The TypeList column has duplicates in there which will be affecting the reporting, and there is the danger of mis-spellings getting in there. The reporting system can’t be persuaded to do reports based on the tags, and the database developers are complaining about the unCoddly things going on in their database. In your version of PUBS, this extra column doesn’t exist, so we’ve added it and put in 10,000 titles using SQL Data Generator.

    /* So how do we refactor this database? Firstly, we create a table of all the tags. */
    IF OBJECT_ID('TagName') IS NULL OR OBJECT_ID('TagTitle') IS NULL
    BEGIN
      CREATE TABLE TagName
        (TagName_ID INT IDENTITY(1,1) PRIMARY KEY,
         Tag VARCHAR(20) NOT NULL UNIQUE)
      /* ...and we insert into it all the tags from the list (remembering to take out any leading spaces) */
      INSERT INTO TagName (Tag)
      SELECT DISTINCT LTRIM(x.y.value('.', 'Varchar(80)')) AS [Tag]
      FROM (SELECT Title_ID,
                   CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>') AS XMLkeywords
            FROM dbo.titles) g
      CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )
      /* we can then use this table to provide a table that relates tags to articles */
      CREATE TABLE TagTitle
        (TagTitle_ID INT IDENTITY(1, 1),
         [title_id] [dbo].[tid] NOT NULL REFERENCES titles (Title_ID),
         TagName_ID INT NOT NULL REFERENCES TagName (Tagname_ID)
         CONSTRAINT [PK_TagTitle]
           PRIMARY KEY CLUSTERED ([title_id] ASC, TagName_ID)
           ON [PRIMARY])
      CREATE NONCLUSTERED INDEX idxTagName_ID ON TagTitle (TagName_ID) INCLUDE (TagTitle_ID, title_id)
      /* ...and it is easy to fill this with the tags for each title ... */
      INSERT INTO TagTitle (Title_ID, TagName_ID)
      SELECT DISTINCT Title_ID, TagName_ID
      FROM (SELECT Title_ID,
                   CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>') AS XMLkeywords
            FROM dbo.titles) g
      CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )
      INNER JOIN TagName ON TagName.Tag = LTRIM(x.y.value('.', 'Varchar(80)'))
    END

    /* That's all there was to it. Now we can select all titles that have the military tag, just to try things out */
    SELECT Title FROM titles
      INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
      INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
    WHERE tagname.tag = 'Military'

    /* and see the top ten most popular tags for titles */
    SELECT Tag, COUNT(*) FROM titles
      INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
      INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
    GROUP BY Tag ORDER BY COUNT(*) DESC

    /* and if you still want your list of tags for each title, then here they are */
    SELECT title_ID, title, STUFF(
      (SELECT ',' + tagname.tag FROM titles thisTitle
         INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
         INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
       WHERE ThisTitle.title_id = titles.title_ID
       FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '')
    FROM titles
    ORDER BY title_ID

    So we’ve refactored our PUBS database without pain. We’ve even put in a check to prevent it being re-run once the new tables are created. Here is the diagram of the new tag relationship. We’ve done both the DDL to create the tables and their associated components, and the DML to put the data in them. I could have also included the script to remove the de-normalised TypeList column, but I’d do a whole lot of tests first before doing that. Yes, I’ve left out the assertion tests too, which should check the edge cases and make sure the result is what you’d expect. One thing I can’t quite figure out is how to deal with an ordered list using this simple XML-based technique. We can ensure that, if we have to produce a list of tags, we can get the primary ‘type’ to be first in the list, but what if the entire order is significant? Thank goodness it isn’t in this case. If it were, we might have to revisit a string-splitter function that returns the ordinal position of each component in the sequence (a sketch of one follows below). You’ll see immediately that we can create a synchronisation script for deployment from a comparison tool such as SQL Compare, to change the schema (DDL). On the other hand, no tool could do the DML to stuff the data into the new table, since there is no way that any tool will be able to work out where the data should go. We used some pretty hairy code to deal with a slightly untypical problem. We would have to do this migration by hand, and it has to go into source control as a batch. If most of your database changes are to be deployed by an automated process, then there must be a way of over-riding this part of the data synchronisation process: taking the part of the script that fills the tables, checking that the tables have not already been filled, and executing it as part of the transaction. Of course, you might prefer the approach I’ve taken with the script of creating the tables in the same batch as the data conversion process, and then using the presence of the tables to prevent the script from being re-run. The problem with scripting a refactoring change to a database is that it has to work both ways. If we install the new system and then have to roll back the changes, several books may have been added, or had their tags changed, in the meantime. Yes, you have to script any rollback! These have to be mercilessly tested, and put in source control just in case of the rollback of a deployment after it has been in place for any length of time. I’ve shown you how to do this with the part of the script…

    /* and if you still want your list of tags for each title, then here they are */
    SELECT title_ID, title, STUFF(
      (SELECT ',' + tagname.tag FROM titles thisTitle
         INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
         INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
       WHERE ThisTitle.title_id = titles.title_ID
       FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '')
    FROM titles
    ORDER BY title_ID

    …which would be turned into an UPDATE … FROM script.

    UPDATE titles SET typelist = ThisTaglist
    FROM (SELECT title_ID, title, STUFF(
            (SELECT ',' + tagname.tag FROM titles thisTitle
               INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
               INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
             WHERE ThisTitle.title_id = titles.title_ID
             ORDER BY CASE WHEN tagname.tag = titles.[type] THEN 1 ELSE 0 END DESC
             FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '') AS ThisTagList
          FROM titles) f
    INNER JOIN Titles ON f.title_ID = Titles.title_ID

    You’ll notice that it isn’t quite a round trip, because the tags are in a different order, though we’ve managed to make sure that the primary tag is the first one, as it was originally. So, we’ve improved the database for the poor book distributors using PUBS. It is not a major deal, but you’ve got to be prepared to provide a migration script that will go both forwards and backwards. Ideally, database refactoring scripts should be able to go from any version to any other. Schema synchronisation scripts can do this pretty easily, but no data synchronisation script can deal with serious refactoring jobs without the developers being able to specify how to deal with cases like this.
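    A minimal sketch of such an ordinal-preserving splitter, by the way, could be built on a tally of sequential numbers. The function name, the 4000-character cap and the use of sys.all_objects as a cheap number source below are purely illustrative choices, not part of the PUBS migration script itself.

    CREATE FUNCTION dbo.SplitOrdered (@list VARCHAR(4000), @delim CHAR(1))
    RETURNS TABLE
    AS
    RETURN
      SELECT ROW_NUMBER() OVER (ORDER BY n.n) AS Ordinal,   -- position of the item in the original list
             LTRIM(SUBSTRING(@list, n.n,
                   CHARINDEX(@delim, @list + @delim, n.n) - n.n)) AS Item
      FROM (SELECT TOP (ISNULL(LEN(@list), 0) + 1)
                   ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
            FROM sys.all_objects a CROSS JOIN sys.all_objects b) AS n   -- cheap source of sequential numbers
      WHERE n.n <= LEN(@list) + 1
        AND SUBSTRING(@delim + @list, n.n, 1) = @delim      -- each match marks the start of an element
    GO

    /* e.g. SELECT Ordinal, Item FROM dbo.SplitOrdered('history,military,naval', ',') */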

    Read the article

  • Using DEBUG Mode in Oracle SQL Developer to Log SQL

    - by thatjeffsmith
    Curious how we’re getting the data you see in SQL Developer when you click on something? While many of the dialogs provide a ‘SQL’ panel that shows you the SQL ABOUT to be generated, I’d rather see the SQL AS it’s executed. True, you could set a TRACE or fire up a Monitor Sessions report, but both of those solutions leave me hungry for more. Did you know that SQL Developer has a ‘debug’ mode? It slows the tool down a bit and spits out a lot of information you don’t care about, but it ALSO shows you ALL the SQL that is sent to the database, as you click around the tool! See ALL the SQL that SQL Developer sends to the database on your behalf.

    Enable DEBUG Mode
    When you see the splash screen as SQL Developer fires up, frantically hit Up, Up, Down, Down, Left, Right, Left, Right, B, A, SELECT, Start. Wait, wrong game. No, all you need to do is go to your SQL Developer directory and navigate down to the ‘bin’ directory. In that directory, find the ‘sqldeveloper.conf’ file (Install Directory > sqldeveloper > bin > sqldeveloper.conf) and open it with a text editor. Find this line:

    IncludeConfFile sqldeveloper-nondebug.conf

    and replace it with this line:

    IncludeConfFile sqldeveloper-debug.conf

    Save the file. Start up SQL Developer.

    Observe the Logging Page – Log Panel for the SQL
    There’s going to be more than just SQL here. You’ll actually see a LOT of other information. If you’re having general problems with the tool and you want to see the nitty-gritty of what’s going on, then this is a good place to satisfy your curiosity, and it might help us diagnose your issue if you post to the forums or open a ticket with My Oracle Support. You’ll find ‘INFO’ entries that look a little something like this – the query used to populate your Tables list in the connection tree. You can double-click on the SQL text and get a pop-up window that’s much easier to read. See all that typing we’re saving you? I don’t recommend running in DEBUG mode all the time. Capturing this information and displaying it is more expensive than not doing so. And it provides a lot of information you don’t normally need to see. But when you DO want to know what’s going on and why, this is an excellent way of getting that information. When you’re ready to go back to ‘normal’ mode, just close SQL Developer, go back to your .conf file, and add the ‘nondebug’ bit back.

    Read the article

  • Basics of Join Predicate Pushdown in Oracle

    - by Maria Colgan
    Happy New Year to all of our readers! We hope you all had a great holiday season. We start the new year by continuing our series on Optimizer transformations. This time it is the turn of Join Predicate Pushdown. I would like to thank Rafi Ahmed for the content of this blog. Normally, a view cannot be joined with an index-based nested loop (i.e., index access) join, since a view, in contrast with a base table, does not have an index defined on it. A view can only be joined with other tables using three methods: hash, nested loop, and sort-merge joins.

    Introduction
    The join predicate pushdown (JPPD) transformation allows a view to be joined with an index-based nested-loop join method, which may provide a more optimal alternative. In the join predicate pushdown transformation, the view remains a separate query block, but it contains the join predicate, which is pushed down from its containing query block into the view. The view thus becomes correlated and must be evaluated for each row of the outer query block. These pushed-down join predicates, once inside the view, open up new index access paths on the base tables inside the view; this allows the view to be joined with an index-based nested-loop join method, thereby enabling the optimizer to select an efficient execution plan. The join predicate pushdown transformation is not always optimal. The join-predicate-pushed-down view becomes correlated and must be evaluated for each outer row; if there is a large number of outer rows, the cost of evaluating the view multiple times may make the nested-loop join suboptimal, and therefore joining the view with a hash or sort-merge join method may be more efficient. The decision whether to push down join predicates into a view is determined by evaluating the costs of the outer query with and without the join predicate pushdown transformation under Oracle's cost-based query transformation framework. The join predicate pushdown transformation applies to both non-mergeable and mergeable views, to pre-defined and inline views, as well as to views generated internally by the optimizer during various transformations. The types of views on which join predicate pushdown is currently supported are: UNION ALL/UNION views, outer-joined views, anti-joined views, semi-joined views, DISTINCT views and GROUP-BY views.

    Examples
    Consider query A, which has an outer-joined view V. The view cannot be merged, as it contains two tables, and the join between these two tables must be performed before the join between the view and the outer table T4.

    A: SELECT T4.unique1, V.unique3
       FROM T_4K T4,
            (SELECT T10.unique3, T10.hundred, T10.ten
             FROM T_5K T5, T_10K T10
             WHERE T5.unique3 = T10.unique3) V
       WHERE T4.unique3 = V.hundred(+) AND
             T4.ten = V.ten(+) AND
             T4.thousand = 5;

    The following shows the non-default plan for query A generated by disabling join predicate pushdown. When query A undergoes join predicate pushdown, it yields query B. Note that query B is expressed in non-standard SQL and shows an internal representation of the query.

    B: SELECT T4.unique1, V.unique3
       FROM T_4K T4,
            (SELECT T10.unique3, T10.hundred, T10.ten
             FROM T_5K T5, T_10K T10
             WHERE T5.unique3 = T10.unique3
             AND T4.unique3 = V.hundred(+)
             AND T4.ten = V.ten(+)) V
       WHERE T4.thousand = 5;

    The execution plan for query B is shown below.

    In the execution plan BX, the keyword 'VIEW PUSHED PREDICATE' indicates that the view has undergone the join predicate pushdown transformation. The join predicates (shown here in red) have been moved into the view V; these join predicates open up index access paths, thereby enabling an index-based nested-loop join of the view. With join predicate pushdown, the cost of query A has come down from 62 to 32. As mentioned earlier, the join predicate pushdown transformation is cost-based, and a join-predicate-pushed-down plan is selected only when it reduces the overall cost.

    Consider another example, query C, which contains a view with the UNION ALL set operator.

    C: SELECT R.unique1, V.unique3
       FROM T_5K R,
            (SELECT T1.unique3, T2.unique1+T1.unique1
             FROM T_5K T1, T_10K T2
             WHERE T1.unique1 = T2.unique1
             UNION ALL
             SELECT T1.unique3, T2.unique2
             FROM G_4K T1, T_10K T2
             WHERE T1.unique1 = T2.unique1) V
       WHERE R.unique3 = V.unique3 and R.thousand < 1;

    The execution plan of query C is shown below. In it, 'VIEW UNION ALL PUSHED PREDICATE' indicates that the UNION ALL view has undergone the join predicate pushdown transformation. As can be seen, here the join predicate has been replicated and pushed inside every branch of the UNION ALL view. The join predicates (shown here in red) open up index access paths, thereby enabling an index-based nested-loop join of the view.

    Consider query D as an example of join predicate pushdown into a DISTINCT view. We have the following cardinalities of the tables involved in query D: Sales (1,016,271), Customers (50,000), and Costs (787,766).

    D: SELECT C.cust_last_name, C.cust_city
       FROM customers C,
            (SELECT DISTINCT S.cust_id
             FROM sales S, costs CT
             WHERE S.prod_id = CT.prod_id and CT.unit_price > 70) V
       WHERE C.cust_state_province = 'CA' and C.cust_id = V.cust_id;

    The execution plan of query D is shown below. As shown in plan XD, when query D undergoes the join predicate pushdown transformation, the expensive DISTINCT operator is removed and the join is converted into a semi-join; this is possible, since all the SELECT list items of the view participate in an equi-join with the outer tables. Under similar conditions, when a GROUP-BY view undergoes the join predicate pushdown transformation, the expensive group-by operator can also be removed. With the join predicate pushdown transformation, the elapsed time of query D came down from 63 seconds to 5 seconds. Since DISTINCT and GROUP-BY views are mergeable views, the cost-based transformation framework also compares the cost of merging the view with that of join predicate pushdown in selecting the most optimal execution plan.

    Summary
    We have tried to illustrate the basic ideas behind join predicate pushdown on different types of views by showing example queries that are quite simple. Oracle can handle far more complex queries and other types of views not shown here in the examples. Again, many thanks to Rafi Ahmed for the content of this blog post.
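    A practical way to see the transformation for yourself, not covered in the post above but using the same tables as query A, is to compare the plans produced with and without the NO_PUSH_PRED hint:

    EXPLAIN PLAN FOR
    SELECT /*+ NO_PUSH_PRED(V) */ T4.unique1, V.unique3
    FROM   T_4K T4,
           (SELECT T10.unique3, T10.hundred, T10.ten
            FROM   T_5K T5, T_10K T10
            WHERE  T5.unique3 = T10.unique3) V
    WHERE  T4.unique3 = V.hundred(+)
    AND    T4.ten = V.ten(+)
    AND    T4.thousand = 5;

    -- this plan should not contain a VIEW PUSHED PREDICATE step
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    -- repeat with /*+ PUSH_PRED(V) */ (or with no hint at all, leaving the decision
    -- to the cost-based framework) and compare the two plans and their costs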

    Read the article

  • SPARC T4-4 Beats 8-CPU IBM POWER7 on TPC-H @3000GB Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered a world-record TPC-H @3000GB benchmark result for systems with four processors. This result beats eight-processor results from IBM (POWER7) and HP (x86). The SPARC T4-4 server also delivered better performance per core than these eight-processor systems from IBM and HP. Comparisons below are made system to system, highlighting Oracle's complete software and hardware solution. This database world-record result used Oracle's Sun Storage 2540-M2 arrays (rotating disk) connected to a SPARC T4-4 server running Oracle Solaris 11 and Oracle Database 11g Release 2, demonstrating the power of Oracle's integrated hardware and software solution.

    The SPARC T4-4 server based configuration achieved a TPC-H scale factor 3000 world record for four-processor systems of 205,792 QphH@3000GB with price/performance of $4.10/QphH@3000GB.
    The SPARC T4-4 server with four SPARC T4 processors (total of 32 cores) is 7% faster than the IBM Power 780 server with eight POWER7 processors (total of 32 cores) on the TPC-H @3000GB benchmark.
    The SPARC T4-4 server is 36% better in price/performance compared to the IBM Power 780 server on the TPC-H @3000GB benchmark.
    The SPARC T4-4 server is 29% faster than the IBM Power 780 for data loading.
    The SPARC T4-4 server is up to 3.4 times faster than the IBM Power 780 server for the Refresh Function.
    The SPARC T4-4 server with four SPARC T4 processors is 27% faster than the HP ProLiant DL980 G7 server with eight x86 processors on the TPC-H @3000GB benchmark.
    The SPARC T4-4 server is 52% faster than the HP ProLiant DL980 G7 server for data loading.
    The SPARC T4-4 server is up to 3.2 times faster than the HP ProLiant DL980 G7 for the Refresh Function.
    The SPARC T4-4 server achieved a peak IO rate from the Oracle database of 17 GB/sec. This rate was independent of the storage used, as demonstrated by the TPC-H @3000GB benchmark, which used twelve Sun Storage 2540-M2 arrays (rotating disk), and the TPC-H @1000GB benchmark, which used four Sun Storage F5100 Flash Array devices (flash storage). [*]
    The SPARC T4-4 server showed linear scaling from TPC-H @1000GB to TPC-H @3000GB. This demonstrates that the SPARC T4-4 server can handle the increasingly larger databases required of DSS systems. [*]
    The SPARC T4-4 server benchmark results demonstrate a complete solution for building Decision Support Systems, including data loading, business questions and refreshing data. Each phase usually has a time constraint, and the SPARC T4-4 server shows superior performance during each phase.
    [*] The TPC believes that comparisons of results published with different scale factors are misleading and discourages such comparisons.

    Performance Landscape
    The table lists the leading TPC-H @3000GB results for non-clustered systems.

    TPC-H @3000GB, Non-Clustered Systems
    System | Processor | P/C/T – Memory | Composite (QphH) | $/perf ($/QphH) | Power (QppH) | Throughput (QthH) | Database | Available
    SPARC Enterprise M9000 | 3.0 GHz SPARC64 VII+ | 64/256/256 – 1024 GB | 386,478.3 | $18.19 | 316,835.8 | 471,428.6 | Oracle 11g R2 | 09/22/11
    SPARC T4-4 | 3.0 GHz SPARC T4 | 4/32/256 – 1024 GB | 205,792.0 | $4.10 | 190,325.1 | 222,515.9 | Oracle 11g R2 | 05/31/12
    SPARC Enterprise M9000 | 2.88 GHz SPARC64 VII | 32/128/256 – 512 GB | 198,907.5 | $15.27 | 182,350.7 | 216,967.7 | Oracle 11g R2 | 12/09/10
    IBM Power 780 | 4.1 GHz POWER7 | 8/32/128 – 1024 GB | 192,001.1 | $6.37 | 210,368.4 | 175,237.4 | Sybase 15.4 | 11/30/11
    HP ProLiant DL980 G7 | 2.27 GHz Intel Xeon X7560 | 8/64/128 – 512 GB | 162,601.7 | $2.68 | 185,297.7 | 142,685.6 | SQL Server 2008 | 10/13/10
    P/C/T = Processors, Cores, Threads; QphH = the Composite Metric (bigger is better); $/QphH = the Price/Performance metric in USD (smaller is better); QppH = the Power Numerical Quantity; QthH = the Throughput Numerical Quantity

    The following table lists data load times and refresh function times during the power run.

    TPC-H @3000GB, Non-Clustered Systems – Database Load & Database Refresh
    System | Processor | Data Loading (h:m:s) | T4 Advan | RF1 (sec) | T4 Advan | RF2 (sec) | T4 Advan
    SPARC T4-4 | 3.0 GHz SPARC T4 | 04:08:29 | 1.0x | 67.1 | 1.0x | 39.5 | 1.0x
    IBM Power 780 | 4.1 GHz POWER7 | 05:51:50 | 1.5x | 147.3 | 2.2x | 133.2 | 3.4x
    HP ProLiant DL980 G7 | 2.27 GHz Intel Xeon X7560 | 08:35:17 | 2.1x | 173.0 | 2.6x | 126.3 | 3.2x
    Data Loading = database load time; RF1 = power test first refresh transaction; RF2 = power test second refresh transaction; T4 Advan = the ratio of the time to the SPARC T4-4 time

    Complete benchmark results are found at the TPC benchmark website http://www.tpc.org.

    Configuration Summary and Results
    Hardware Configuration: SPARC T4-4 server, 4 x SPARC T4 3.0 GHz processors (total of 32 cores, 256 threads), 1024 GB memory, 8 x internal SAS (8 x 300 GB) disk drives
    External Storage: 12 x Sun Storage 2540-M2 array storage, each with 12 x 15K RPM 300 GB drives, 2 controllers, 2 GB cache
    Software Configuration: Oracle Solaris 11 11/11, Oracle Database 11g Release 2 Enterprise Edition
    Audited Results: Database Size: 3000 GB (Scale Factor 3000); TPC-H Composite: 205,792.0 QphH@3000GB; Price/performance: $4.10/QphH@3000GB; Available: 05/31/2012; Total 3-year Cost: $843,656; TPC-H Power: 190,325.1; TPC-H Throughput: 222,515.9; Database Load Time: 4:08:29

    Benchmark Description
    The TPC-H benchmark is a performance benchmark established by the Transaction Processing Performance Council (TPC) to demonstrate Data Warehousing/Decision Support Systems (DSS). TPC-H measurements are produced for customers to evaluate the performance of various DSS systems. These queries and updates are executed against a standard database under controlled conditions. Performance projections and comparisons between different TPC-H database sizes (100GB, 300GB, 1000GB, 3000GB, 10000GB, 30000GB and 100000GB) are not allowed by the TPC. TPC-H is a data warehousing-oriented, non-industry-specific benchmark that consists of a large number of complex queries typical of decision support applications. It also includes some insert and delete activity that is intended to simulate loading and purging data from a warehouse. TPC-H measures the combined performance of a particular database manager on a specific computer system. The main performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@SF, where SF is the number of GB of raw data, referred to as the scale factor).
    QphH@SF is intended to summarize the ability of the system to process queries in both single and multiple user modes. The benchmark requires reporting of price/performance, which is the ratio of the total HW/SW cost plus 3 years of maintenance to the QphH. A secondary metric is storage efficiency, which is the ratio of total configured disk space in GB to the scale factor.

    Key Points and Best Practices
    Twelve Sun Storage 2540-M2 arrays were used for the benchmark. Each Sun Storage 2540-M2 array contains 12 15K RPM drives and is connected to a single dual-port 8Gb FC HBA using 2 ports. Each Sun Storage 2540-M2 array showed 1.5 GB/sec for sequential read operations and showed linear scaling, achieving 18 GB/sec with twelve Sun Storage 2540-M2 arrays. These were stand-alone IO tests. The peak IO rate measured from the Oracle database was 17 GB/sec. Oracle Solaris 11 11/11 required very little system tuning. Some vendors try to make the point that storage ratios are of customer concern. However, storage ratio size has more to do with disk layout and the increasing capacities of disks, so this is not an important metric by which to compare systems. The SPARC T4-4 server and Oracle Solaris efficiently managed the system load of over one thousand Oracle Database parallel processes. Six Sun Storage 2540-M2 arrays were mirrored to another six Sun Storage 2540-M2 arrays, on which all of the Oracle database files were placed. IO performance was high and balanced across all the arrays. The TPC-H Refresh Function (RF) simulates the periodic refresh portion of a data warehouse by adding new sales data and deleting old sales data. Parallel DML (parallel insert and delete in this case) and database log performance are key for this function, and the SPARC T4-4 server outperformed both the IBM POWER7 server and the HP ProLiant DL980 G7 server. (See the RF columns above.)

    See Also
    Transaction Processing Performance Council (TPC) Home Page
    Ideas International Benchmark Page
    SPARC T4-4 Server – oracle.com, OTN
    Oracle Solaris – oracle.com, OTN
    Oracle Database 11g Release 2 Enterprise Edition – oracle.com, OTN
    Sun Storage 2540-M2 Array – oracle.com, OTN

    Disclosure Statement
    TPC-H, QphH, $/QphH are trademarks of the Transaction Processing Performance Council (TPC). For more information, see www.tpc.org. SPARC T4-4 205,792.0 QphH@3000GB, $4.10/QphH@3000GB, available 5/31/12, 4 processors, 32 cores, 256 threads; IBM Power 780 192,001.1 QphH@3000GB, $6.37/QphH@3000GB, available 11/30/11, 8 processors, 32 cores, 128 threads; HP ProLiant DL980 G7 162,601.7 QphH@3000GB, $2.68/QphH@3000GB, available 10/13/10, 8 processors, 64 cores, 128 threads.

    Read the article

  • Running Multiple Queries in Oracle SQL Developer

    - by thatjeffsmith
    There are two methods for running queries in SQL Developer:
    Run Statement – Shift+Enter, F9, or this button
    Run Script – no grids, just script (SQL*Plus-like) output is fine, thank you very much!

    What’s the Difference? There are some obvious differences between the two features, the most obvious being the format of the output delivered. But there are some other, more subtle differences here, primarily around fetching.

    What is Fetch? After you send your query to Oracle, it has to do 3 things: parse, execute, fetch. Technically it has to do at least 2 things, and sometimes only 1. But, to get the data back to the user, the fetch must occur. If you have a 10-row query or a 1,000,000-row query, this can mean 1 or many fetches in groups of records. Ok, before I went off on the fetch tangent, I said there were two ways to run statements in SQL Developer:

    Run Statement – brings your query results to a grid with a single fetch. The user sees 50, 100, 500, etc. rows come back, but SQL Developer and the database know that there are more rows waiting to be retrieved. The process on the server that was used to execute the query is still hanging around too. To alleviate this, increase your fetch size to 500. Every query run will come back with the first 500 rows, and rows will continue to be fetched in 500-row increments. You’ll then see most of your ad hoc queries complete with a single fetch. Scroll down, or hit Ctrl+End to force a full fetch and get all your rows back.

    Run Script – runs the contents of the worksheet (or what’s highlighted) as a ‘script.’ What does that mean exactly? Think of this as being equivalent to running this in SQL*Plus: @my_script.sql; Each statement is executed. Also, ALL rows are fetched. So once it’s finished executing, there are no open cursors left around. The more obvious difference here is that the output comes back formatted as plain old text. Run one or more commands, plus SQL*Plus commands like SET and SPOOL.

    The Trick: Run Statement Works With Multiple Statements! It says ‘run statement,’ but if you select more than one with your mouse and hit the button – it will run each and throw the results to one grid for each statement. If you hover your mouse over the Query Result panel tab, SQL Developer will tell you the query used to populate that grid. This will work regardless of what you have this preference set to: DATABASE – WORKSHEET – SHOW QUERY RESULTS IN NEW TABS. Mind the fetch though! Close those cursors by bringing back all the records or closing the grids when you’re done with them.
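    To make the distinction concrete, here’s a small throwaway worksheet script of my own (the HR sample schema table names and the spool path are just placeholders). Run the whole thing with Run Script and every statement executes, fully fetched, with plain-text output honouring the SET and SPOOL commands; highlight just one of the SELECTs and use Run Statement and you get a grid instead:

    SET FEEDBACK ON
    SPOOL c:\temp\emp_report.txt

    SELECT COUNT(*) FROM employees;

    SELECT department_id, COUNT(*) AS headcount
    FROM employees
    GROUP BY department_id;

    SPOOL OFF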

    Read the article

  • 5 Best Practices - Laying the Foundation for WebCenter Projects

    - by Kellsey Ruppel
    Today’s guest post comes from Oracle WebCenter expert John Brunswick. John specializes in enterprise portal and content management solutions and actively contributes to the enterprise software business community and has authored a series of articles about optimal business involvement in portal, business process management and SOA development, examining ways of helping organizations move away from monolithic application development. We’re happy to have John join us today! Maximizing success with Oracle WebCenter portal requires a strategic understanding of Oracle WebCenter capabilities.  The following best practices enable the creation of portal solutions with minimal resource overhead, while offering the greatest flexibility for progressive elaboration. They are inherently project agnostic, enabling a strong foundation for future growth and an expedient return on your investment in the platform.  If you are able to embrace even only a few of these practices, you will materially improve your deployment capability with WebCenter. 1. Segment Duties Around 3Cs - Content, Collaboration and Contextual Data "Agility" is one of the most common business benefits touted by modern web platforms.  It sounds good - who doesn't want to be Agile, right?  How exactly IT organizations go about supplying agility to their business counterparts often lacks definition - hamstrung by ambiguity. Ultimately, businesses want to benefit from reduced development time to deliver a solution to a particular constituent, which is augmented by as much self-service as possible to develop and manage the solution directly. All done in the absence of direct IT involvement. With Oracle WebCenter's depth in the areas of content management, pallet of native collaborative services, enterprise mashup capability and delegated administration, it is very possible to execute on this business vision at a technical level. To realize the benefits of the platform depth we can think of Oracle WebCenter's segmentation of duties along the lines of the 3 Cs - Content, Collaboration and Contextual Data.  All three of which can have their foundations developed by IT, then provisioned to the business on a per role basis. Content – Oracle WebCenter benefits from an extremely mature content repository.  Work flow, audit, notification, office integration and conversion capabilities for documents (HTML & PDF) make this a haven for business users to take control of content within external and internal portals, custom applications and web sites.  When deploying WebCenter portal take time to think of areas in which IT can provide the "harness" for content to reside, then allow the business to manage any content items within the site, using the content foundation to ensure compliance with business rules and process.  This frees IT to work on more mission critical challenges and allows the business to respond in short order to emerging market needs. Collaboration – Native collaborative services and WebCenter spaces are a perfect match for business users who are looking to enable document sharing, discussions and social networking.  The ability to deploy the services is granular and on the basis of roles scoped to given areas of the system - much like the first C “content”.  This enables business analysts to design the roles required and IT to provision with peace of mind that users leveraging the collaborative services are only able to do so in explicitly designated areas of a site. 
    Bottom line – business will not need to wait for IT, but cannot go outside of the scope that has been defined based on their roles. Contextual Data – Collaborative capabilities are most powerful when included within the context of business data. The ability to supply business users with decision-shaping data that they can include in various parts of a portal or portals, just as they would with content items, is one of the most powerful aspects of Oracle WebCenter. Imagine a discussion about new store selection for a retail chain that re-purposes existing information from business intelligence services and/or custom backend systems about various potential locations – presenting it directly in the context of the discussion. If there are data sources that already exist in your enterprise, take a look at how they can be made into discrete offerings within the portal, then scoped to given business user roles for inclusion within collaborative activities.

    2. Think Generically, Execute Specifically
    Constructs. Anyone who has spent much time around me knows that I am obsessed with this word. Why? Because constructs offer immense power – more than APIs, web services or other technical capabilities. Constructs offer organizations the ability to leverage a platform's native characteristics to offer substantial business functionality – without writing code. This concept becomes more powerful with the additional understanding of the concepts from the platform that an organization learns over time. Let's take a look at an example of where an Oracle WebCenter construct can substantially reduce the time to get a subscription-based site out the door and into the hands of the end consumer. Imagine a site that allows members to subscribe to specific disciplines to access information and application data around each discipline. A space is a collection of secured pages within Oracle WebCenter. Spaces are not only secured; content stored within a space is also scoped automatically to that space. Taking this a step further, Oracle WebCenter’s Activity Stream surfaces events, discussions and other activities that are scoped to the given user on the basis of their space affiliations. In order to have a portal that would allow users to "subscribe" to information around various disciplines, spaces could be used out of the box to achieve this capability, without using any APIs or low-level technical work.

    3. Make Governance Work for You
    Imagine driving down the street without the painted lines on the road. The rules of the road are so ingrained in our minds, we often do not think about the process, but seemingly mundane lane markers are critical enablers. Lane markers allow us to travel at speeds that would be impossible if not for the agreed-upon direction of flow. Additionally and more importantly, they allow people to act autonomously – going where they please at any given time. The return on the investment for mobility is high enough for people to buy into globally agreed-upon governance processes. In Oracle WebCenter we can use enablers similar to lane markers. Our goal should be to enable the flow of information and provide end users with the ability to arrive at business solutions as needed, not on the basis of cumbersome processes that cannot meet the business needs in a timely fashion. How do we do this?
    Just as with "Segmentation of Duties", Oracle WebCenter technologies offer the opportunity to compartmentalize various business initiatives from each other within the system, thanks to the constructs and security that are available within the platform. For instance, when a WebCenter space is created, any content added within that space will by default be secured to that particular space and inherits metadata associated with a folder created for the space. Oracle WebCenter content uses metadata to support a broad range of rich ECM functionality and can automatically impart retention, workflow and other policies on the basis of what has been defaulted for that space. Depending on your business needs, this paradigm will also extend to subsections of a space, offering some interesting possibilities for automated management of content. An example may be press releases within a particular area of an extranet that require a five-year retention period and need to be reviewed by marketing and legal before release. The underlying content system will transparently take care of this process on the basis of the above rules, enabling peace of mind over unstructured data – which could otherwise become overwhelming.

    4. Make Your First Project Your Second
    Imagine if Michael Phelps were competing in a swimming championship, but was told right before his race that he had to use a brand new stroke. There is no doubt that Michael is an outstanding swimmer, but chances are that he would like to have some time to get acquainted with the new stroke. New technologies should not be treated any differently. Before jumping into the deep end it helps to take time to get to know the new approach – even though you may have been swimming thousands of times before. To quickly get a handle on Oracle WebCenter capabilities it can be helpful to deploy a sandbox for the team to use to share project documents, discussions and announcements in an effort to help the actual deployment get under way, while increasing everyone’s knowledge of the platform and the functionality that may be helpful down the road. Oracle Technology Network has made a pre-configured virtual machine available for download that can be a great starting point for this exercise.

    5. Get to Know the Community
    If you are reading this blog post you have most certainly faced a software decision or challenge that was solved on the basis of a small piece of missing critical information – which took substantial research to discover. Chances are also good that somewhere, someone had already come across this information and would have been excited to share it. There is no denying the power of passionate, connected users sharing key tips around technology. The Oracle WebCenter brand has a rich heritage that includes industry-leading technology and practitioners. With the new Oracle WebCenter brand, opportunities to connect with these experts have become easier:
    Oracle WebCenter Blog
    Oracle Social Enterprise LinkedIn WebCenter Group
    Oracle WebCenter Twitter
    Oracle WebCenter Facebook
    Oracle User Groups
    Additionally, there are various Oracle WebCenter related blogs by an excellent grouping of services partners.

    Read the article

  • New sales kit for partners: Oracle Enterprise Manager 12c

    - by Javier Puerta
    Check out the latest Quick Reference Guides for Enterprise Manager 12c in the Knowledge Zone. The two-page Quick Reference Guide is designed to help partners uncover additional revenue opportunities by positioning Enterprise Manager in their sales engagements. Content includes an elevator pitch for Enterprise Manager, tips on identifying target customers, qualifying questions to initiate customer discussions, and supporting videos, references, and whitepapers for each customer scenario:
    Enterprise Manager 12c for Application Partners
    Enterprise Manager 12c for Hardware Partners
    Enterprise Manager 12c for Database Partners

    Read the article

  • Coherence Special Interest Group: First Meeting in Toronto and Upcoming Events in New York and Calif

    - by [email protected]
    The first meeting of the Toronto Coherence Special Interest Group (TOCSIG) is coming up.
    Date: Friday, April 23, 2010
    Time: 8:30am-12:00pm
    Where: Oracle Mississauga Office, Customer Visitation Center, 110 Matheson Blvd. West, Suite 100, Mississauga, ON L5R3P4
    Cameron Purdy, Vice President of Development (Oracle), Patrick Peralta, Senior Software Engineer (Oracle), and Noah Arliss, Software Development Manager (Oracle) will be presenting. Further information about this event can be seen here.

    The New York Coherence SIG is hosting its seventh meeting.
    Date: Thursday, Apr 15, 2010
    Time: 5:30pm-5:45pm ET social and 5:45pm-8:00pm ET presentations
    Where: Oracle Office, Room 30076, 520 Madison Avenue, 30th Floor
    Patrick Peralta, Dr. Gene Gleyzer, and Craig Blitz from Oracle will be presenting. Further information about this event can be seen here.

    The Bay Area Coherence SIG is hosting its fifth meeting.
    Date: Thursday, Apr 29, 2010
    Time: 5:30pm-5:45pm PT social and 5:45pm-8:00pm PT presentations
    Where: Oracle Conference Center, 350 Oracle Parkway, Room 203, Redwood Shores, CA
    Tom Lubinski from SL Corp., Randy Stafford from the Oracle A-team, and Taylor Gautier from Grid Dynamics will be presenting. Further information about this event can be seen here.

    Great news, isn't it?

    Read the article

  • SPARC Solaris Momentum

    - by Mike Mulkey-Oracle
    Following up on the Oracle Solaris 11.2 launch on April 29th: if you were able to watch the launch event, you saw Mark Hurd state that Oracle will be No. 1 in high-end computing systems "in a reasonable time frame". "This is not a 3-year vision," he continued. Well, according to IDC's latest 1QCY14 Tracker, Oracle has regained the #1 UNIX shipments market share! You can see the report and read about it here: Oracle regains the #1 UNIX Shipments Marketshare. Suffice it to say that SPARC Solaris is making strong gains on the competition. If you have seen the public roadmap through 2019 of Oracle's commitment to continue to deliver on this technology, you can see that Mark Hurd’s comment was not to be taken lightly. We feel the systems tide turning in Oracle's direction and are working hard to show our partner community the value of being a part of the SPARC Solaris momentum. We are now planning for the Solaris 11.2 GA in late summer (the 11.2 beta is available now), as well as doing early preparations for Oracle OpenWorld 2014 on September 28th. Stay tuned there!

    Here is a sampling of the coverage highlights around the Oracle Solaris 11.2 launch:
    “Solaris is still one of the most advanced platforms in the enterprise.” – ITBusinessEdge
    “Oracle is serious about clouds now, just as its customers are, whether they are building them in their own datacenters or planning to use public clouds.” – EnterpriseTech
    “Solaris is more about a layer of an integrated system than an operating system.” – ZDNet

    Read the article

  • Dissertation about website and database security - in need of some pointers

    - by ClarkeyBoy
    Hi, I am working on my dissertation in my final year at university at the moment. One of the areas I need to research is security – for both websites and databases. I currently have sections on the following:

    Website
    Form security – such as data validation. This section is more about preventing errors made by legitimate users as much as possible, rather than stopping hackers; for example, comparing a field to a regular expression and giving users meaningful feedback on any errors which did occur, so as to stop them happening again.
    Constraints. For example, if a value must be true or false then use a checkbox. If it is likely to be one of several values then use a dropdown or a set of radio buttons, and so on. If the value is unpredictable then use regular expressions to limit what characters users are allowed to enter, to restrict the length of the string, and sometimes to limit the format (such as for dates/times, post codes and so on).
    Sometimes you can limit permissions to the form. This is on the occasion that you know exactly who (whether it be people's names or a group of people – such as administrators or employees) is going to need access to the form. Restricting permissions will stop members of the public from being able to access the form.
    Symbols or strings which could be used maliciously or cause the website to act incorrectly (such as the script tag) should be filtered out or HTML-encoded.
    Captcha images can be used to prevent automated systems from filling in and submitting the form.
    There are some hacks for file uploads – such as using double extensions – which can allow hackers to upload malicious files.

    Databases (this is nowhere near done yet, but the sections I have planned are listed below)
    SQL statements vs stored procedures
    Throwing an error when one of the variables contains particular characters or groups of characters (I can't remember what characters they are, but I have seen a message thrown back at me before where I have tried to enter html or something into a text area).
    SQL injection – and ways around it, with some examples (see the sketch below).

    Does anyone have any hints and tips on where I could go for some decent, reliable information, either about these areas or about other areas of security that I could cover? Thanks in advance. Regards, Richard. PS I am a complete newbie when it comes to security, so please be patient with me. If any of the information I have put down is wrong or could be sub-sectioned then please feel free to say so.
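    For the SQL injection section, the sort of contrast I have in mind is something like the following T-SQL (the table and column names are just made up for the example): the first pattern splices user input straight into the statement text, while the second passes it as a typed parameter via sp_executesql, so the input can never change the shape of the query.

    DECLARE @UserInput NVARCHAR(50) = N''' OR ''1''=''1';

    -- Vulnerable: the input becomes part of the SQL text itself
    DECLARE @UnsafeSql NVARCHAR(400) =
        N'SELECT * FROM Users WHERE UserName = ''' + @UserInput + N'''';
    -- EXEC (@UnsafeSql);  -- would return every row in Users

    -- Safer: the input is only ever treated as a value
    EXEC sp_executesql
         N'SELECT * FROM Users WHERE UserName = @name',
         N'@name NVARCHAR(50)',
         @name = @UserInput;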

    Read the article

  • SSRS Report from Oracle DB - Use stored procedure

    - by Emtucifor
    I am developing a report in SQL Server Reporting Services 2005, connecting to an Oracle 11g database. As you post replies, perhaps it will help to know that I'm skilled in MSSQL Server and inexperienced in Oracle. I have multiple nested subreports and need to use summary data in the outer reports and the same data, but in detail, in the inner reports. In order to spare the DB server multiple executions, I thought I would populate some temp tables at the beginning and then query just them the multiple times in the report and the subreports. In SSRS, Datasets are evidently executed in the order they appear in the RDL file, and you can have a dataset that doesn't return a rowset. So I created a stored procedure to populate my four temp tables and made this the first Dataset in my report. This SP works when I run it from SQL Developer, and I can query the data from the temp tables. However, this didn't appear to work out, because SSRS was apparently not reusing the same session, so even though the global temporary tables were created with ON COMMIT PRESERVE ROWS, my Datasets were empty. I switched to using "real" tables and am now passing in an additional parameter, a GUID in string form, uniquely generated on each new execution, that is part of the primary key of each table, so I can get back just the rows for this execution. Running this from SQL Developer works fine; for example:

    DECLARE
      ActivityCode varchar2(15) := '1208-0916 ';
      ExecutionID varchar2(32) := SYS_GUID();
    BEGIN
      CIPProjectBudget (ActivityCode, ExecutionID);
    END;

    Never mind that in this example I don't know the GUID; this simply proves it works, because rows are inserted into my four tables. But in the SSRS report, I'm still getting no rows in my Datasets, and SQL Developer confirms no rows are being inserted. So I'm thinking along the lines of: Oracle uses implicit transactions and my changes aren't getting committed? Or, even though I can prove that the non-rowset-returning SP is executing (because if I leave out the parameter mapping it complains at report rendering time about not having enough parameters), perhaps it's not really executing. Somehow. Wrong execution order isn't the problem, or rows would appear in the tables, and they aren't. I'm interested in any ideas about how to accomplish this (especially the part about not running the main queries multiple times). I'll redesign my whole report. I'll stop using a stored procedure. Suggest anything you like! I just need help getting this working and I am stuck. If you want more details: in my SSRS report I have a List object (it's a container that repeats once for each row in a Dataset) that has some header values and then contains a subreport. Eventually, there will be four total reports: one main report, with three nested subreports. Each subreport will be in a List on the parent report.
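    For reference, the shape of what I'm describing is roughly this (all names invented here, and with an explicit COMMIT added, since one of my suspicions is that my changes aren't getting committed):

    CREATE TABLE cip_budget_stage (
      execution_id  VARCHAR2(32) NOT NULL,
      activity_code VARCHAR2(15) NOT NULL,
      budget_amount NUMBER,
      CONSTRAINT pk_cip_budget_stage PRIMARY KEY (execution_id, activity_code)
    );

    CREATE OR REPLACE PROCEDURE CIPProjectBudget (
      p_activity_code IN VARCHAR2,
      p_execution_id  IN VARCHAR2
    ) AS
    BEGIN
      -- populate the "real" staging table, keyed by this execution's GUID
      INSERT INTO cip_budget_stage (execution_id, activity_code, budget_amount)
      SELECT p_execution_id, p_activity_code, SUM(amount)
      FROM   project_costs
      WHERE  activity_code = p_activity_code;

      COMMIT;  -- make the rows visible to the report's other datasets/sessions
    END CIPProjectBudget;
    /

    -- the report's later datasets then filter on the GUID parameter, e.g.
    -- SELECT ... FROM cip_budget_stage WHERE execution_id = :ExecutionID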

    Read the article

  • Multiple database with Spring+Hibernate+JPA

    - by ziftech
    Hi everybody! I'm trying to configure Spring+Hibernate+JPA to work with two databases (MySQL and MSSQL). My datasource-context.xml:

      <?xml version="1.0" encoding="UTF-8"?>
      <beans xmlns="http://www.springframework.org/schema/beans"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xmlns:aop="http://www.springframework.org/schema/aop"
             xmlns:p="http://www.springframework.org/schema/p"
             xmlns:tx="http://www.springframework.org/schema/tx"
             xmlns:util="http://www.springframework.org/schema/util"
             xsi:schemaLocation="http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.5.xsd
                                 http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                                 http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd
                                 http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.5.xsd">

        <!-- Data source config -->
        <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"
              p:driverClassName="${local.jdbc.driver}" p:url="${local.jdbc.url}"
              p:username="${local.jdbc.username}" p:password="${local.jdbc.password}" />

        <bean id="dataSourceRemote" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"
              p:driverClassName="${remote.jdbc.driver}" p:url="${remote.jdbc.url}"
              p:username="${remote.jdbc.username}" p:password="${remote.jdbc.password}" />

        <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"
              p:entity-manager-factory-ref="entityManagerFactory" />

        <!-- JPA config -->
        <tx:annotation-driven transaction-manager="transactionManager" />

        <bean id="persistenceUnitManager"
              class="org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager">
          <property name="persistenceXmlLocations">
            <list value-type="java.lang.String">
              <value>classpath*:config/persistence.local.xml</value>
              <value>classpath*:config/persistence.remote.xml</value>
            </list>
          </property>
          <property name="dataSources">
            <map>
              <entry key="localDataSource" value-ref="dataSource" />
              <entry key="remoteDataSource" value-ref="dataSourceRemote" />
            </map>
          </property>
          <property name="defaultDataSource" ref="dataSource" />
        </bean>

        <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
          <property name="jpaVendorAdapter">
            <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"
                  p:showSql="true" p:generateDdl="true" />
          </property>
          <property name="persistenceUnitManager" ref="persistenceUnitManager" />
          <property name="persistenceUnitName" value="localjpa" />
        </bean>

        <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" />

      </beans>

    Each persistence.xml contains one unit, like this:

      <persistence-unit name="remote" transaction-type="RESOURCE_LOCAL">
        <properties>
          <property name="hibernate.ejb.naming_strategy" value="org.hibernate.cfg.DefaultNamingStrategy" />
          <property name="hibernate.dialect" value="${remote.hibernate.dialect}" />
          <property name="hibernate.hbm2ddl.auto" value="${remote.hibernate.hbm2ddl.auto}" />
        </properties>
      </persistence-unit>

    The persistenceUnitManager bean causes the following exception:

      Cannot resolve reference to bean 'persistenceUnitManager' while setting bean property 'persistenceUnitManager';
      nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name
      'persistenceUnitManager' defined in class path resource [config/datasource-context.xml]: Initialization of bean failed;
      nested exception is org.springframework.beans.TypeMismatchException: Failed to convert property value of type
      [java.util.ArrayList] to required type [java.lang.String] for property 'persistenceXmlLocation'; nested exception is
      java.lang.IllegalArgumentException: Cannot convert value of type [java.util.ArrayList] to required type
      [java.lang.String] for property 'persistenceXmlLocation': no matching editors or conversion strategy found

    If I leave only one persistence.xml and drop the list, everything works fine, but I need two units. I would also appreciate any alternative approach for working with two databases in a Spring+Hibernate context. A new error appears after changing the property to persistenceXmlLocations:

      No single default persistence unit defined in {classpath:config/persistence.local.xml, classpath:config/persistence.remote.xml}

    UPDATE: I added persistenceUnitName; it works, but only with one unit. Still need help.

    UPDATE: Thanks, ChssPly76. I changed the config files. datasource-context.xml:

      <?xml version="1.0" encoding="UTF-8"?>
      <beans xmlns="http://www.springframework.org/schema/beans"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xmlns:aop="http://www.springframework.org/schema/aop"
             xmlns:p="http://www.springframework.org/schema/p"
             xmlns:tx="http://www.springframework.org/schema/tx"
             xmlns:util="http://www.springframework.org/schema/util"
             xsi:schemaLocation="http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.5.xsd
                                 http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                                 http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd
                                 http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.5.xsd">

        <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"
              p:driverClassName="${local.jdbc.driver}" p:url="${local.jdbc.url}"
              p:username="${local.jdbc.username}" p:password="${local.jdbc.password}" />

        <bean id="dataSourceRemote" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"
              p:driverClassName="${remote.jdbc.driver}" p:url="${remote.jdbc.url}"
              p:username="${remote.jdbc.username}" p:password="${remote.jdbc.password}" />

        <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor">
          <property name="defaultPersistenceUnitName" value="pu1" />
        </bean>

        <bean id="persistenceUnitManager"
              class="org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager">
          <property name="persistenceXmlLocation" value="${persistence.xml.location}" />
          <property name="defaultDataSource" ref="dataSource" />
          <!-- problem -->
          <property name="dataSources">
            <map>
              <entry key="local" value-ref="dataSource" />
              <entry key="remote" value-ref="dataSourceRemote" />
            </map>
          </property>
        </bean>

        <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
          <property name="jpaVendorAdapter">
            <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"
                  p:showSql="true" p:generateDdl="true" />
          </property>
          <property name="persistenceUnitManager" ref="persistenceUnitManager" />
          <property name="persistenceUnitName" value="pu1" />
          <property name="dataSource" ref="dataSource" />
        </bean>

        <bean id="entityManagerFactoryRemote" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
          <property name="jpaVendorAdapter">
            <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"
                  p:showSql="true" p:generateDdl="true" />
          </property>
          <property name="persistenceUnitManager" ref="persistenceUnitManager" />
          <property name="persistenceUnitName" value="pu2" />
          <property name="dataSource" ref="dataSourceRemote" />
        </bean>

        <tx:annotation-driven />

        <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"
              p:entity-manager-factory-ref="entityManagerFactory" />

        <bean id="transactionManagerRemote" class="org.springframework.orm.jpa.JpaTransactionManager"
              p:entity-manager-factory-ref="entityManagerFactoryRemote" />

      </beans>

    persistence.xml:

      <?xml version="1.0" encoding="UTF-8"?>
      <persistence xmlns="http://java.sun.com/xml/ns/persistence"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
                   version="1.0">

        <persistence-unit name="pu1" transaction-type="RESOURCE_LOCAL">
          <properties>
            <property name="hibernate.ejb.naming_strategy" value="org.hibernate.cfg.DefaultNamingStrategy" />
            <property name="hibernate.dialect" value="${local.hibernate.dialect}" />
            <property name="hibernate.hbm2ddl.auto" value="${local.hibernate.hbm2ddl.auto}" />
          </properties>
        </persistence-unit>

        <persistence-unit name="pu2" transaction-type="RESOURCE_LOCAL">
          <properties>
            <property name="hibernate.ejb.naming_strategy" value="org.hibernate.cfg.DefaultNamingStrategy" />
            <property name="hibernate.dialect" value="${remote.hibernate.dialect}" />
            <property name="hibernate.hbm2ddl.auto" value="${remote.hibernate.hbm2ddl.auto}" />
          </properties>
        </persistence-unit>

      </persistence>

    Now it builds two entityManagerFactory beans, but both are configured for Microsoft SQL Server:

      [main] INFO org.hibernate.ejb.Ejb3Configuration - Processing PersistenceUnitInfo [ name: pu1 ...]
      [main] INFO org.hibernate.cfg.SettingsFactory - RDBMS: Microsoft SQL Server
      [main] INFO org.hibernate.ejb.Ejb3Configuration - Processing PersistenceUnitInfo [ name: pu2 ...]
      [main] INFO org.hibernate.cfg.SettingsFactory - RDBMS: Microsoft SQL Server   (but it must be MySQL)

    I suspect that only dataSource is actually being used and that dataSourceRemote is never picked up - the ${...} placeholders do not seem to be substituted. That's my last problem.

    Read the article

  • Database design for summarized data

    - by holden
    I have a new table I'm going to add to a bunch of other summarized data, basically to take some of the load off by calculating weekly averages. My question is whether I would be better off with one model over the other: one model with a day-of-week column and an additional column for the price, or another model with a series of day-of-week fields, each holding a price. I'd like to know which would save me in speed and/or headaches, or at least what the trade-off is. I.e.:

      ID, OBJECT_ID, MON, TUE, WED, THU, FRI, SAT, SUN, SOURCE

    or

      ID, OBJECT_ID, DAYOFWEEK, PRICE, SOURCE
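
    For comparison, here is an editor's sketch of the two layouts as DDL issued over JDBC; it is not from the original question, and the MySQL-flavoured column types, table names and connection details are assumptions.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.Statement;

      public class WeeklyPriceSchemas {
          public static void main(String[] args) throws Exception {
              // Connection details are placeholders.
              try (Connection con = DriverManager.getConnection("jdbc:mysql://localhost/test", "user", "pass");
                   Statement st = con.createStatement()) {

                  // Option 1: "wide" layout - one row per object per week, one column per day.
                  // No joins or pivots when the consumer always wants the whole week.
                  st.execute("CREATE TABLE weekly_price_wide ("
                           + "  id BIGINT PRIMARY KEY AUTO_INCREMENT,"
                           + "  object_id BIGINT NOT NULL,"
                           + "  mon DECIMAL(10,2), tue DECIMAL(10,2), wed DECIMAL(10,2),"
                           + "  thu DECIMAL(10,2), fri DECIMAL(10,2), sat DECIMAL(10,2), sun DECIMAL(10,2),"
                           + "  source VARCHAR(50))");

                  // Option 2: "narrow" layout - one row per object per day.
                  // Easier to index, filter and aggregate (e.g. AVG(price) GROUP BY object_id).
                  st.execute("CREATE TABLE weekly_price_narrow ("
                           + "  id BIGINT PRIMARY KEY AUTO_INCREMENT,"
                           + "  object_id BIGINT NOT NULL,"
                           + "  day_of_week TINYINT NOT NULL,"     // 1 = Monday ... 7 = Sunday
                           + "  price DECIMAL(10,2),"
                           + "  source VARCHAR(50),"
                           + "  UNIQUE (object_id, day_of_week, source))");
              }
          }
      }

    The narrow layout makes day-level queries and aggregation straightforward and lets a unique key on (object_id, day_of_week, source) block duplicates, while the wide layout avoids a pivot when the whole week is always read back together.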

    Read the article

  • Cross-platform embedded database/key-value store for C#

    - by Arne Claassen
    I'm looking for a fast, embeddable key/value store with cursor semantics over key collections (or a simple embeddable DB) that I can use in .NET and Mono. It needs to be open source, and I would prefer an MIT- or Apache-style license over a GPL license. I'm not opposed to a library that needs bindings to be written, as long as binaries are available for both Windows and Linux. Options considered:

    - SQLite - has bindings and a native implementation, but is single-threaded and not all that fast
    - Embedded InnoDB - no .NET bindings I can find, and it's GPLv2
    - Berkeley DB - no .NET bindings I can find
    - Tokyo Cabinet - no .NET bindings I can find, and it is problematic to build on Windows
    - MadCow memory-mapped data structures - GPLv2

    Is there an option better than the above that I'm missing, or bindings for the above that I don't know about?

    Read the article

  • Calling Oracle stored procedure from Excel - VBA

    - by Ram
    I have to execute an Oracle stored procedure from VBA (Excel) with around 38 input parameters. The stored procedure inserts some values into a destination table when it is executed. When it is executed through VBA, fewer fields are populated than when it is executed directly from the back end (Oracle). For example, executing directly from the back end creates records with around 17 fields populated (I have created a wrapper class in the back end and pass the same parameter values there), while executing from the Excel VBA creates records with only around 15 fields populated in the destination table. Kindly let me know what the possible reasons for this could be.

    Read the article

  • Finding first alphabetic character in a DB2 database field

    - by Paul Alan Taylor
    I'm doing a bit of work which requires me to truncate DB2 character-based fields. Essentially, I need to discard all text found at or after the first alphabetic character; e.g. 102048994BLAHBLAHBLAH becomes 102048994. In SQL Server, this would be a doddle - PATINDEX would swoop in and save the day. Much celebration would ensue. My problem is that I need to do this in DB2. Worse, the result needs to be used in a join query, also in DB2. I can't find an easy way to do this. Is there a PATINDEX equivalent in DB2? Is there another way to solve this problem? If need be, I'll hardcode 26 chained LOCATE functions to get my result, but if there is a better way, I am all ears.
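
    One possible approach, sketched by the editor rather than taken from the post: translate every letter to a sentinel character with DB2's TRANSLATE, then LOCATE the sentinel to get the position of the first alphabetic character and feed that to SUBSTR. The snippet below wraps the expression in a small JDBC query; the accounts/ref_code names and connection string are invented, and the '@' sentinel assumes that character never occurs in the data.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class Db2TruncateAtFirstLetter {

          // DB2 expression: replace every letter with '@'; LOCATE then returns the position
          // of the first alphabetic character, or 0 if there is none.
          private static final String FIRST_LETTER_POS =
                  "LOCATE('@', TRANSLATE(UPPER(ref_code), "
                + "'@@@@@@@@@@@@@@@@@@@@@@@@@@', 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'))";

          public static void main(String[] args) throws Exception {
              // Connection details and the accounts/ref_code names are placeholders.
              try (Connection con = DriverManager.getConnection("jdbc:db2://host:50000/SAMPLE", "user", "pass");
                   Statement st = con.createStatement();
                   ResultSet rs = st.executeQuery(
                       "SELECT CASE WHEN " + FIRST_LETTER_POS + " = 0 THEN ref_code"
                     + "            ELSE SUBSTR(ref_code, 1, " + FIRST_LETTER_POS + " - 1) END"
                     + " FROM accounts")) {
                  while (rs.next()) {
                      System.out.println(rs.getString(1));
                  }
              }
          }
      }

    Because the whole thing is a plain expression, the same CASE can sit directly in a join condition or a view, which avoids the 26 chained LOCATEs.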

    Read the article

  • Labeling connections using Oracle Universal Connection Pool (UCP) doesn't seem to work

    - by sapporo
    I'm evaluating the Oracle Universal Connection Pool (UCP) included with the 11.2 JDBC driver, but cannot get the labeling functionality to work properly (I'd like to use it to minimize the number of ALTER SESSION statements). I'm basically following the sample code to implement the ConnectionLabelingCallback interface. The callback method cost(Properties reqLabels, Properties currentLabels) is called as expected, so the callback seems to be configured correctly. However, the other callback method, configure(Properties reqLabels, Object conn), is passed arguments that make it impossible to provide a meaningful implementation: reqLabels is always null, so the callback has no way of knowing which labels need to be set, and conn is of type oracle.jdbc.driver.T4CConnection, which cannot be cast to LabelableConnection, so the callback has no way of inspecting or manipulating the labels attached to the connection. My code is running on Java 6 and using ojdbc6.jar and ucp.jar version 11.2.0.1.0. The database is actually 10.2, but that should not be a problem according to the docs, since the UCP functionality is implemented entirely in the JDBC driver. Thanks in advance!
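
    For reference, here is a bare-bones sketch of how the labeling pieces are meant to fit together, assembled by the editor from the UCP documentation rather than from the poster's code; the NLS_SORT label, class names and connection details are illustrative. It does not explain the null reqLabels or the failed cast being reported, but it shows the shape of a setup to compare against.

      import java.sql.Connection;
      import java.sql.Statement;
      import java.util.Properties;

      import oracle.ucp.ConnectionLabelingCallback;
      import oracle.ucp.jdbc.LabelableConnection;
      import oracle.ucp.jdbc.PoolDataSource;
      import oracle.ucp.jdbc.PoolDataSourceFactory;

      public class LabelingSketch {

          // Prefers connections whose current labels already match the requested ones.
          static class NlsLabelingCallback implements ConnectionLabelingCallback {

              public int cost(Properties reqLabels, Properties currentLabels) {
                  // 0 means "reuse as-is"; anything higher means configure() will be called.
                  return reqLabels.equals(currentLabels) ? 0 : Integer.MAX_VALUE;
              }

              public boolean configure(Properties reqLabels, Object conn) {
                  try {
                      String sort = reqLabels.getProperty("NLS_SORT");
                      Statement st = ((Connection) conn).createStatement();
                      try {
                          st.execute("ALTER SESSION SET NLS_SORT = " + sort);
                      } finally {
                          st.close();
                      }
                      // Record the label so a later cost() call can see it.
                      ((LabelableConnection) conn).applyConnectionLabel("NLS_SORT", sort);
                      return true;
                  } catch (Exception e) {
                      return false;
                  }
              }
          }

          public static void main(String[] args) throws Exception {
              PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
              pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
              pds.setURL("jdbc:oracle:thin:@//host:1521/service");   // placeholder
              pds.setUser("user");
              pds.setPassword("pass");
              pds.registerConnectionLabelingCallback(new NlsLabelingCallback());

              Properties wanted = new Properties();
              wanted.setProperty("NLS_SORT", "BINARY_CI");
              Connection con = pds.getConnection(wanted);   // labels passed at borrow time
              // ... use the connection; a later borrow with the same labels should skip configure() ...
              con.close();
          }
      }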

    Read the article

  • General database modeling and Django-specific modeling

    - by Shreko
    I'm wondering what the best way is to model something like the following. Let's say my company sells metal bars (parameters/fields are: length, profile_type, quantity, etc.) of different profiles, where a profile may be a pipe (pipe_diameter, wall_thickness) or a hollow rectangle (base, height, wall_thickness), or maybe some other profile with different parameters. Let's say the maximum number of profiles would be 12, each profile having between 2 and 5 parameters. Should everything be in a single table like table_bars (id, length, quantity, profile_type, pipe_diameter, wall_thickness, base, height, etc.), where profile_type would be pipe, rectangle, etc.? Or should every shape have its own table with its own parameters, with table_bars keeping only id, length, quantity, profile_type and profile_id? And are there any Django-specific issues if multiple tables are the best answer? Thanks

    Read the article

  • Tutorials for .NET database app using SQLite

    - by ChrisC
    I have some MS Access experience and had a class on console C++ apps; now I am trying to develop my first program. It's a little C# database app. I have the database tables and columns planned and keyed into VS, but that's where I'm stuck. I need C#/VS tutorials that will guide me through configuring relationships, data typing, etc., on the database so I can get it ready for testing of the schema. The only tutorials I've been able to find either talk about general database basics (i.e., not helping me with VS/C#) or about C# communication with an existing SQL database. Thank you. (In case it matters, I'm using the open-source System.Data.SQLite (sqlite.phxsoftware.com) for the database. I chose it over SQL Server CE after seeing a comparison between the two. Also, I wanted a serverless version of SQL because this little app will be on other people's computers and I want to do as little support as possible.)

    Read the article

< Previous Page | 251 252 253 254 255 256 257 258 259 260 261 262  | Next Page >