Search Results

Search found 69128 results on 2766 pages for 'oracle data integrator'.


  • Working with the ADF DVT Map Component

    - by Shay Shmeltzer
    The map component provided by the ADF Faces DVT set of components is one we always end up using in key demos - simply because it is so nice looking, but also because it is quite simple to use. So in case you need to show some geographical data, or if you just want to impress your manager, here is a little video that shows you how to create two types of maps. The first one is a color themed map - where you show different states with different colors based on the value of some data point there. The other is a point theme - basically showing specific locations on the map. For both cases I'm using the Oracle provided mapviewer instance at http://elocation.oracle.com/mapviewer. You can find more information about using the map component in the Web User Interface Developer's Guide here and in the tag doc and components demo. For the first map the query I'm using (on the HR demo schema in the Oracle DB) is: SELECT COUNT(EMPLOYEES.EMPLOYEE_ID), Department_name, STATE_PROVINCE FROM EMPLOYEES, DEPARTMENTS, LOCATIONS WHERE employees.department_id = departments.department_id AND departments.location_id = locations.location_id GROUP BY Department_name, LOCATIONS.STATE_PROVINCE
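
    For readability, the same query can also be written with ANSI join syntax (equivalent logic, shown here only as an alternative):

    SELECT COUNT(employees.employee_id), department_name, state_province
    FROM   employees
           JOIN departments ON employees.department_id = departments.department_id
           JOIN locations   ON departments.location_id = locations.location_id
    GROUP BY department_name, locations.state_province;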

    Read the article

  • Should I encrypt data in database?

    - by Tio
    I have a client for whom I'm going to build a web application for patient care: managing patients, consults, history, calendars, basically everything around that. The problem is that this is sensitive data: patient history and such. The client insists on encrypting the data at the database level, but I think this is going to degrade the performance of the web app (though maybe I shouldn't be worried about this). I've read the laws about data protection on health issues (Portugal), but they aren't very specific about this (I've just questioned the authorities about it and am waiting for their response). I've read the following link, but my question is different: should I encrypt the data in the database or not? One problem I foresee in encrypting data is that I'm going to need a key. This could be the user password, but we all know what user passwords are like ("12345", etc.), and if I generate a key I have to store it somewhere, which means that the programmer, DBA, or whoever could have access to it. Any thoughts on this? Even adding a random salt to the user password isn't going to solve the problem, since I can always access it and therefore decrypt the data.
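
    If the client does insist on database-level encryption and the database happens to be Oracle (an assumption here), a minimal sketch of column-level encryption with the DBMS_CRYPTO package looks like the following; the key value is a placeholder, and the hard problem the question raises, where the key lives, is deliberately left open:

    -- Hypothetical sketch: encrypt one value with AES-256 before storing it.
    -- DBMS_CRYPTO requires an EXECUTE grant from SYS.
    DECLARE
      l_key RAW(32) := UTL_RAW.CAST_TO_RAW('32-byte-key-from-somewhere-safe!'); -- placeholder key
      l_enc RAW(2000);
    BEGIN
      l_enc := DBMS_CRYPTO.ENCRYPT(
                 src => UTL_I18N.STRING_TO_RAW('patient history text', 'AL32UTF8'),
                 typ => DBMS_CRYPTO.ENCRYPT_AES256 + DBMS_CRYPTO.CHAIN_CBC + DBMS_CRYPTO.PAD_PKCS5,
                 key => l_key);
      -- store l_enc in a RAW/BLOB column instead of the plaintext
    END;
    /

    Transparent Data Encryption (TDE) is the alternative that keeps key handling out of application code entirely, at the cost of an extra license option.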

    Read the article

  • Announcing Hackathon for Social Developers

    - by Mike Stiles
    Continuing our Social Developer theme, we're excited to announce a week-long hackathon put on by the Oracle Social Developer Lab (OSDL). The event starts at JavaOne Oct 2nd and runs through Oct 9th. A winner will be announced and profiled in the following issue of Java Magazine. What's it about? The OSDL is on a mission to make social development easier for the Java community. You may have noticed the biggest social networks have created tools for Ruby, PHP, and other languages, but not as much for Java. We've decided to help fill the gap with a SocialLink social publishing library. You can learn more about it on Java.net. We're also interested in promoting other tools that facilitate social development, such as the DaliCore Framework. For our hack, you've got one week to leverage our library and/or DaliCore to create a social app. The only rules are that it must be a new application, and it must leverage one or both of these tools. How to submit: Create a project that uses either the SocialLink library or the DaliCore Framework to read or publish social data. 1. Upload your hack to a new project on java.net 2. Submit the URL to your java.net project through the project submission form on the Oracle Social Developer Community Facebook page. Only projects that have been submitted to the Oracle Social Developer Community will be reviewed. In addition to the review process, we'll be adding some projects to the SocialLink project as a "sibling" project. Should you participate? If you're a developer who aspires to integrate some social functionality into your Java application, then yes! How else can I participate with OSDL? If you're not ready to participate in the hackathon but have ideas for how we can make social development easier for the Java community, come join our social developer community on Facebook.

    Read the article

  • JavaOne Content Available for Free

    - by Tori Wieldt
    JavaOne content is available on video in three sizes, depending on whether you want to have a sip, have a drink, or go to the proverbial firehose. Tall (Keynote Highlights): Go to the JavaOne playlist on the YouTube Java channel for highlights of the JavaOne Keynotes. Grande (Keynotes in Full): Go to the Oracle Media Network JavaOne 2012 channel to view the keynotes in full (Community Keynote coming soon). Venti (All Sessions, BOFs and Tutorials): To view slides paired with audio of each session, go to the JavaOne content catalog (JavaOne homepage, click on JavaOne Technical Sessions) and select a session. If a video is available, you'll see "Media" in the right column. Look under "Presentation Download" to get the slides. Sessions are being made available as quickly as possible. "It's exciting to see Oracle take community stewardship so seriously," said Sharat Chander, Group Director for Java Technology Outreach. "Making all JavaOne sessions available online on video for free will help make the future of Java for everyone." Thanks to Oracle for funding this and providing it to the Java Community for free.

    Read the article

  • Creating a DiskGroup on Exadata

    - by Liu Maclean
    Creating a new ASM disk group on Exadata involves the following steps: 1. Run dcli -g /home/oracle/cell_group -l root cellcli -e list griddisk to list the active griddisks: [root@dm01db01 ~]# dcli -g /home/oracle/cell_group -l root cellcli -e list griddisk dm01cel01: DATA_DM01_CD_00_dm01cel01 active dm01cel01: DATA_DM01_CD_01_dm01cel01 active dm01cel01: DATA_DM01_CD_02_dm01cel01 active dm01cel01: DATA_DM01_CD_03_dm01cel01 active dm01cel01: DATA_DM01_CD_04_dm01cel01 active dm01cel01: DATA_DM01_CD_05_dm01cel01 active dm01cel01: DATA_DM01_CD_06_dm01cel01 active dm01cel01: DATA_DM01_CD_07_dm01cel01 active dm01cel01: DATA_DM01_CD_08_dm01cel01 active dm01cel01: DATA_DM01_CD_09_dm01cel01 active dm01cel01: DATA_DM01_CD_10_dm01cel01 active dm01cel01: DATA_DM01_CD_11_dm01cel01 active dm01cel01: DBFS_DG_CD_02_dm01cel01 active dm01cel01: DBFS_DG_CD_03_dm01cel01 active dm01cel01: DBFS_DG_CD_04_dm01cel01 active dm01cel01: DBFS_DG_CD_05_dm01cel01 active dm01cel01: DBFS_DG_CD_06_dm01cel01 active dm01cel01: DBFS_DG_CD_07_dm01cel01 active dm01cel01: DBFS_DG_CD_08_dm01cel01 active dm01cel01: DBFS_DG_CD_09_dm01cel01 active dm01cel01: DBFS_DG_CD_10_dm01cel01 active dm01cel01: DBFS_DG_CD_11_dm01cel01 active dm01cel01: RECO_DM01_CD_00_dm01cel01 active dm01cel01: RECO_DM01_CD_01_dm01cel01 active dm01cel01: RECO_DM01_CD_02_dm01cel01 active dm01cel01: RECO_DM01_CD_03_dm01cel01 active dm01cel01: RECO_DM01_CD_04_dm01cel01 active dm01cel01: RECO_DM01_CD_05_dm01cel01 active dm01cel01: RECO_DM01_CD_06_dm01cel01 active dm01cel01: RECO_DM01_CD_07_dm01cel01 active dm01cel01: RECO_DM01_CD_08_dm01cel01 active dm01cel01: RECO_DM01_CD_09_dm01cel01 active dm01cel01: RECO_DM01_CD_10_dm01cel01 active dm01cel01: RECO_DM01_CD_11_dm01cel01 active dm01cel02: DATA_DM01_CD_00_dm01cel02 active dm01cel02: DATA_DM01_CD_01_dm01cel02 active dm01cel02: DATA_DM01_CD_02_dm01cel02 active dm01cel02: DATA_DM01_CD_03_dm01cel02 active dm01cel02: DATA_DM01_CD_04_dm01cel02 active dm01cel02: DATA_DM01_CD_05_dm01cel02 active dm01cel02: DATA_DM01_CD_06_dm01cel02 active dm01cel02: DATA_DM01_CD_07_dm01cel02 active dm01cel02: DATA_DM01_CD_08_dm01cel02 active dm01cel02: DATA_DM01_CD_09_dm01cel02 active dm01cel02: DATA_DM01_CD_10_dm01cel02 active dm01cel02: DATA_DM01_CD_11_dm01cel02 active dm01cel02: DBFS_DG_CD_02_dm01cel02 active dm01cel02: DBFS_DG_CD_03_dm01cel02 active dm01cel02: DBFS_DG_CD_04_dm01cel02 active dm01cel02: DBFS_DG_CD_05_dm01cel02 active dm01cel02: DBFS_DG_CD_06_dm01cel02 active dm01cel02: DBFS_DG_CD_07_dm01cel02 active dm01cel02: DBFS_DG_CD_08_dm01cel02 active dm01cel02: DBFS_DG_CD_09_dm01cel02 active dm01cel02: DBFS_DG_CD_10_dm01cel02 active dm01cel02: DBFS_DG_CD_11_dm01cel02 active dm01cel02: RECO_DM01_CD_00_dm01cel02 active dm01cel02: RECO_DM01_CD_01_dm01cel02 active dm01cel02: RECO_DM01_CD_02_dm01cel02 active dm01cel02: RECO_DM01_CD_03_dm01cel02 active dm01cel02: RECO_DM01_CD_04_dm01cel02 active dm01cel02: RECO_DM01_CD_05_dm01cel02 active dm01cel02: RECO_DM01_CD_06_dm01cel02 active dm01cel02: RECO_DM01_CD_07_dm01cel02 active dm01cel02: RECO_DM01_CD_08_dm01cel02 active dm01cel02: RECO_DM01_CD_09_dm01cel02 active dm01cel02: RECO_DM01_CD_10_dm01cel02 active dm01cel02: RECO_DM01_CD_11_dm01cel02 active dm01cel03: DATA_DM01_CD_00_dm01cel03 active dm01cel03: DATA_DM01_CD_01_dm01cel03 active dm01cel03: DATA_DM01_CD_02_dm01cel03 active dm01cel03: DATA_DM01_CD_03_dm01cel03 active dm01cel03: DATA_DM01_CD_04_dm01cel03 active dm01cel03: DATA_DM01_CD_05_dm01cel03 active dm01cel03: DATA_DM01_CD_06_dm01cel03 active dm01cel03: DATA_DM01_CD_07_dm01cel03 active dm01cel03: DATA_DM01_CD_08_dm01cel03 active dm01cel03: DATA_DM01_CD_09_dm01cel03 active dm01cel03: DATA_DM01_CD_10_dm01cel03 active dm01cel03: DATA_DM01_CD_11_dm01cel03 active dm01cel03: DBFS_DG_CD_02_dm01cel03 active dm01cel03: DBFS_DG_CD_03_dm01cel03 active dm01cel03: DBFS_DG_CD_04_dm01cel03 active dm01cel03: DBFS_DG_CD_05_dm01cel03 active dm01cel03: DBFS_DG_CD_06_dm01cel03 active dm01cel03: DBFS_DG_CD_07_dm01cel03 active dm01cel03: DBFS_DG_CD_08_dm01cel03 active dm01cel03: DBFS_DG_CD_09_dm01cel03 active dm01cel03: DBFS_DG_CD_10_dm01cel03 active dm01cel03: DBFS_DG_CD_11_dm01cel03 active dm01cel03: RECO_DM01_CD_00_dm01cel03 active dm01cel03: RECO_DM01_CD_01_dm01cel03 active dm01cel03: RECO_DM01_CD_02_dm01cel03 active dm01cel03: RECO_DM01_CD_03_dm01cel03 active dm01cel03: RECO_DM01_CD_04_dm01cel03 active dm01cel03: RECO_DM01_CD_05_dm01cel03 active dm01cel03: RECO_DM01_CD_06_dm01cel03 active dm01cel03: RECO_DM01_CD_07_dm01cel03 active dm01cel03: RECO_DM01_CD_08_dm01cel03 active dm01cel03: RECO_DM01_CD_09_dm01cel03 active dm01cel03: RECO_DM01_CD_10_dm01cel03 active dm01cel03: RECO_DM01_CD_11_dm01cel03 active If the griddisks need to be rebuilt, 'cellcli -e drop griddisk' and 'cellcli -e create griddisk' can be used to recreate them; be careful not to drop the DBFS_DG griddisks. 2. From the ASM instance, create the disk group. The cell IPs to reference can be found in cellip.ora: [root@dm01db02 ~]# cat /etc/oracle/cell/network-config/cellip.ora cell="192.168.64.131" cell="192.168.64.132" cell="192.168.64.133" SQL> create diskgroup DATA_MAC normal redundancy 2 DISK 3 'o/192.168.64.131/RECO_DM01_CD_*_dm01cel01' 4 ,'o/192.168.64.132/RECO_DM01_CD_*_dm01cel02' 5 ,'o/192.168.64.133/RECO_DM01_CD_*_dm01cel03' 6 attribute 7 'AU_SIZE'='4M', 8 'CELL.SMART_SCAN_CAPABLE'='TRUE', 9 'compatible.rdbms'='11.2.0.2', 10 'compatible.asm'='11.2.0.2' 11 / 3. Mount the new disk group: ALTER DISKGROUP DATA_MAC mount; 4. The disk group resource can then be managed with crsctl start/stop resource ora.DATA_MAC.dg.
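
    Once mounted, a quick sanity check of the new disk group can be run from the ASM instance (a standard v$asm_diskgroup query, shown here for convenience):

    -- Confirm the new diskgroup is mounted and check its capacity
    SELECT name, state, type, total_mb, free_mb
    FROM   v$asm_diskgroup
    WHERE  name = 'DATA_MAC';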

    Read the article

  • Fusion Applications: Get Trained to Implement

    - by mseika
    Becoming an Oracle Fusion Applications Implementation Expert gives you the opportunity to gain differentiation and a competitive advantage. Oracle University can help you to get trained and then specialized. View the current course schedule for Romania and Turkey. You can choose a delivery format to suit your learning style and situation: Classroom Training in Bucharest, Romania or Istanbul, Turkey, or the travel-free options: Live Virtual Class or Training On Demand. SPECIAL OFFER: All Oracle Fusion Applications Training On Demand courses are currently available at a reduced price. This offer is valid until 31 August 2013. Your OPN discount will then be applied to this reduced price, making this learning format even more attractive.

    Read the article

  • Enablement 2.0 Get Specialized!

    - by mseika
    Enablement 2.0 Get Specialized! The Oracle PartnerNetwork Specialized program is releasing new certifications on our latest products, and partners are invited to be the first candidates. Primavera Portfolio Management 9 Specialization. New Specialist Guided Learning Paths available: · Primavera Portfolio Management 9 Sales Specialist · Primavera Portfolio Management 9 PreSales Specialist · Primavera Portfolio Management 9 Implementation Specialist · Primavera Portfolio Management Support Specialist. New Specialist Assessments available: · Primavera Portfolio Management 9 Sales Specialist Assessment · Primavera Portfolio Management 9 PreSales Specialist Assessment · Primavera Portfolio Management Support Specialist Assessment. New OPN Boot Camp available: · Primavera Portfolio Management Implementation Boot Camp. This boot camp is designed to introduce users to the powerful features of Primavera Portfolio Management in tandem with building and configuring solutions appropriate to client needs, adding value and solving the business pains associated with portfolio management: system set-up and configuration; administration processes; and creating and managing categories, value lists, functions, scorecards, portfolios, investor maps, tables, forms, graphs, dashboards and workflows. The principal objective is ensuring attendees are able to design and configure enterprise Portfolio Management solutions. Contact us: Please direct any inquiries you may have to the Oracle Partner Enablement team at [email protected].

    Read the article

  • What is the best way to store a table in C++

    - by Topo
    I'm programming a decision tree in C++ using a slightly modified version of the C4.5 algorithm. Each node represents an attribute (a column of your data set) and has one child per possible value of the attribute. My problem is how to store the training data set, keeping in mind that I have to use a subset for each node, so I need a quick way to select only a subset of rows and columns. The main goal is to do it in the most memory- and time-efficient way possible (in that order of priority). The best way I have thought of is to have an array of arrays (or std::vector), or something like that, and for each node have a list (array, vector, etc.) of the (column, row) pairs (probably tuples) that are valid for that node. I know there should be a better way to do this; any suggestions? UPDATE: What I need is something like this: In the beginning I have this data: Paris 4 5.0 True New York 7 1.3 True Tokio 2 9.1 False Paris 9 6.8 True Tokio 0 8.4 False But for the second node I just need this data: Paris 4 5.0 New York 7 1.3 Paris 9 6.8 And for the third node: Tokio 2 9.1 Tokio 0 8.4 But with a table of millions of records with up to hundreds of columns. What I have in mind is to keep all the data in a matrix, and then for each node keep the info of the current columns and rows. Something like this: Paris 4 5.0 True New York 7 1.3 True Tokio 2 9.1 False Paris 9 6.8 True Tokio 0 8.4 False Node 2: columns = [0,1,2] rows = [0,1,3] Node 3: columns = [0,1,2] rows = [2,4] This way, in the worst-case scenario, I only waste size_of(int) * (number_of_columns + number_of_rows) per node, which is a lot less than having an independent data matrix for each node.

    Read the article

  • eSTEP TechCast - November 2012

    - by uwes
    Dear partner, we are pleased to announce our next eSTEP TechCast on Thursday, 1st of November, and would be happy if you could join. Please see below the details for the next TechCast. Date and time: Thursday, 01. November 2012, 11:00 - 12:00 GMT (12:00 - 13:00 CET; 15:00 - 16:00 GST) Title: Update on Solaris 11 Abstract: Oracle announced an update to Solaris 11 at Oracle OpenWorld. Join us in this call to get a brief summary of the enhancements and how they help to build better solutions for your customers. Target audience: Tech Presales Speaker: Jörg Möllenkamp Call Info: Call-in-toll-free number: 08006948154 (United Kingdom) Call-in-toll-free number: +44-2081181001 (United Kingdom) Show global numbers Conference Code: 803 594 3 Security Passcode: 9876 Webex Info (Oracle Web Conference) Meeting Number: 599 775 167 Meeting Password: tech2011 Playback / Recording / Archive: The webcasts will be recorded and will be available shortly after the event in the eSTEP portal under the Events tab, where you can also find material from previously delivered eSTEP TechCasts. Use your email address and PIN: eSTEP_2011 to get access. Feel free to have a look. We are happy to get your comments and feedback. Thanks and best regards, Partner HW Enablement EMEA

    Read the article

  • Fusion Applications: Get Trained to Implement

    - by mseika
    Becoming an Oracle Fusion Applications Implementation Expert gives you the opportunity to gain differentiation and a competitive advantage. Oracle University can help you to get trained and then specialized. View the current course schedule for France. You can choose a delivery format to suit your learning style and situation: Classroom Training in Paris Colombes, or the travel-free options: Live Virtual Class or Training On Demand. SPECIAL OFFER: All Oracle Fusion Applications Training On Demand courses are currently available at a reduced price. This offer is valid until 31 August 2013. Your OPN discount will then be applied to this reduced price, making this learning format even more attractive.

    Read the article

  • On OPEN CURSOR and BULK COLLECT

    - by Liu Maclean
    A question about bulk collect and open cursor came up on T.askmaclean.com: when a cursor is opened, does Oracle execute the SQL and read all the data at that point, or is the data read only later, as it is fetched? The following test helps answer this. Test source:

    create table zengfankun_temp01 as select * from dba_objects;
    select count(*) from zengfankun_temp01;  -- 126,826 rows
    analyze table zengfankun_temp01 compute statistics;

    create or replace procedure test_open_cursor is
      type type_owner is table of zengfankun_temp01.owner%type index by binary_integer;
      type type_object_name is table of zengfankun_temp01.object_name%type index by binary_integer;
      type type_object_id is table of zengfankun_temp01.object_id%type index by binary_integer;
      type type_object_type is table of zengfankun_temp01.object_type%type index by binary_integer;
      type type_last_ddl_time is table of zengfankun_temp01.last_ddl_time%type index by binary_integer;
      l_ary_owner type_owner;
      l_ary_object_name type_object_name;
      l_ary_object_id type_object_id;
      l_ary_object_type type_object_type;
      l_ary_last_ddl_time type_last_ddl_time;
      cursor cur_object is
        select owner, object_name, object_id, object_type, last_ddl_time
        from zengfankun_temp01
        order by owner, object_name, object_type, last_ddl_time;
      OPEN_START number;
      OPEN_END number;
      FETCH_START number;
      FETCH_END number;
    begin
      DBMS_OUTPUT.ENABLE(buffer_size => null);
      OPEN_START := dbms_utility.get_time();
      open cur_object;
      OPEN_END := dbms_utility.get_time();
      dbms_output.put_line('OPEN_TIME:' || TO_CHAR(OPEN_END - OPEN_START));
      loop
        FETCH_START := dbms_utility.get_time();
        fetch cur_object bulk collect into
          l_ary_owner, l_ary_object_name, l_ary_object_id, l_ary_object_type, l_ary_last_ddl_time
          limit 10000;
        FETCH_END := dbms_utility.get_time();
        dbms_output.put_line('FETCH_TIME:' || TO_CHAR(FETCH_END - FETCH_START) || ' ROWCOUNT:' || cur_object%rowCount);
        exit when cur_object%notfound or cur_object%notfound is null;
      end loop;
    end test_open_cursor;

    First run:
    OPEN_TIME:12
    FETCH_TIME:21 ROWCOUNT:10000
    FETCH_TIME:3 ROWCOUNT:20000
    FETCH_TIME:3 ROWCOUNT:30000
    FETCH_TIME:3 ROWCOUNT:40000
    FETCH_TIME:3 ROWCOUNT:50000
    FETCH_TIME:3 ROWCOUNT:60000
    FETCH_TIME:3 ROWCOUNT:70000
    FETCH_TIME:3 ROWCOUNT:80000
    FETCH_TIME:3 ROWCOUNT:90000
    FETCH_TIME:3 ROWCOUNT:100000
    FETCH_TIME:3 ROWCOUNT:110000
    FETCH_TIME:3 ROWCOUNT:120000
    FETCH_TIME:1 ROWCOUNT:126826

    Second run:
    OPEN_TIME:0
    FETCH_TIME:18 ROWCOUNT:10000
    FETCH_TIME:3 ROWCOUNT:20000
    FETCH_TIME:3 ROWCOUNT:30000
    FETCH_TIME:3 ROWCOUNT:40000
    FETCH_TIME:3 ROWCOUNT:50000
    FETCH_TIME:3 ROWCOUNT:60000
    FETCH_TIME:3 ROWCOUNT:70000
    FETCH_TIME:3 ROWCOUNT:80000
    FETCH_TIME:3 ROWCOUNT:90000
    FETCH_TIME:3 ROWCOUNT:100000
    FETCH_TIME:3 ROWCOUNT:110000
    FETCH_TIME:3 ROWCOUNT:120000
    FETCH_TIME:2 ROWCOUNT:126826

    The timings make the behavior clear (dbms_utility.get_time counts in hundredths of a second): the OPEN itself costs almost nothing (12 on the first run, 0 on the second), while each FETCH of 10,000 rows costs a steady 3, with only the very first fetch being more expensive. If OPEN CURSOR actually executed the query and read all the data, the cost would show up at OPEN time instead of being spread evenly across the fetches. The explanation: when OPEN CURSOR is executed, PL/SQL parses the SQL but does not run it or read any data; OPEN CURSOR merely records the snapshot SCN at that moment. The real work happens at FETCH time: the FETCH reads the data blocks, and if a current block (the most recent version of a block) carries an SCN newer than the snapshot SCN, Oracle applies UNDO to roll it back to the best block for read consistency; if the required UNDO snapshot is no longer available, the well-known ORA-1555 error is raised.

    To observe this directly, the following test uses a table with a char(2000) column and pctfree 99, so that each block holds a single row, and fetches with bulk collect limit 10. The hidden parameter "_trace_pin_time" makes the server process write a trace line each time it pins a buffer, so we can verify that OPEN CURSOR only PARSEs the SQL, while each fetch bulk collect ... limit 10 pins about 10 buffers.

    [oracle@nas ~]$ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.3.0 Production on Wed Aug 1 11:36:52 2012
    Copyright (c) 1982, 2011, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options

    SQL> select * from global_name;
    GLOBAL_NAME
    --------------------------------------------------------------------------------
    http://www.askmaclean.com

    SQL> create table maclean (t1 char(2000)) tablespace users pctfree 99;
    Table created.

    SQL> begin
      2 for i in 1..200 loop
      3 insert into maclean values('MACLEAN');
      4 commit ;
      5 end loop;
      6 end;
      7 /
    PL/SQL procedure successfully completed.

    SQL> exec dbms_stats.gather_table_stats('','MACLEAN');
    PL/SQL procedure successfully completed.

    SQL> select count(*) from maclean;
    COUNT(*)
    ----------
    200

    SQL> select blocks,num_rows from dba_tables where table_name='MACLEAN';
    BLOCKS NUM_ROWS
    ---------- ----------
    244 200

    SQL> alter system set "_trace_pin_time"=1 scope=spfile;
    System altered.

    SQL> startup force;
    ORACLE instance started.
    Total System Global Area 3140026368 bytes
    Fixed Size 2232472 bytes
    Variable Size 1795166056 bytes
    Database Buffers 1325400064 bytes
    Redo Buffers 17227776 bytes
    Database mounted.
    Database opened.

    SQL> alter session set events '10046 trace name context forever,level 12';
    Session altered.

    SQL> declare
      2 cursor v_cursor is
      3 select * from sys.maclean;
      4 type v_type is table of sys.maclean%rowtype index by binary_integer;
      5 rec_tab v_type;
      6 begin
      7 open v_cursor;
      8 dbms_lock.sleep(30);
      9 loop
     10 fetch v_cursor bulk collect
     11 into rec_tab limit 10;
     12 dbms_lock.sleep(10);
     13 exit when v_cursor%notfound;
     14 end loop;
     15 end;
     16 /

    Excerpts from the resulting 10046 + pin trace:

    PARSING IN CURSOR #47499559136872 len=337 dep=0 uid=0 oct=47 lid=0 tim=1343836146412056 hv=496860239 ad='11a11dbb0' sqlid='4zh7954ftuz2g'
    declare cursor v_cursor is select * from sys.maclean; type v_type is table of sys.maclean%rowtype index by binary_integer; rec_tab v_type; begin open v_cursor; dbms_lock.sleep(30); loop fetch v_cursor bulk collect into rec_tab limit 10; dbms_lock.sleep(10); exit when v_cursor%notfound; end loop; end;
    END OF STMT
    PARSE #47499559136872:c=0,e=346,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=0,tim=1343836146412051
    =====================
    PARSING IN CURSOR #47499559126280 len=25 dep=1 uid=0 oct=3 lid=0 tim=1343836146414939 hv=3296884535 ad='11a11d250' sqlid='2mb1493284xtr'
    SELECT * FROM SYS.MACLEAN
    END OF STMT
    PARSE #47499559126280:c=1999,e=2427,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,plh=2568761675,tim=1343836146414937
    EXEC #47499559126280:c=0,e=55,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=2568761675,tim=1343836146415104

    Note that at this point only a PARSE and EXEC are recorded for SELECT * FROM SYS.MACLEAN; no block has been pinned by a FETCH yet, which shows that OPEN CURSOR performs no data access.

    *** 2012-08-01 11:49:36.424
    WAIT #47499559136872: nam='PL/SQL lock timer' ela= 30009361 duration=0 p2=0 p3=0 obj#=-1 tim=1343836176424782

    The 30-second sleep ends, and the first fetch starts pinning blocks:

    pin ktewh26: kteinpscan dba 0x10a6202:4 time 1039048805
    pin ktewh27: kteinmap dba 0x10a6202:4 time 1039048847
    pin kdswh11: kdst_fetch dba 0x10a6203:1 time 1039048898
    pin kdswh11: kdst_fetch dba 0x10a6204:1 time 1039048961
    pin kdswh11: kdst_fetch dba 0x10a6205:1 time 1039049004
    pin kdswh11: kdst_fetch dba 0x10a6206:1 time 1039049042
    pin kdswh11: kdst_fetch dba 0x10a6207:1 time 1039049089
    pin kdswh11: kdst_fetch dba 0x10a6208:1 time 1039049123
    pin kdswh11: kdst_fetch dba 0x10a6209:1 time 1039049159
    pin kdswh11: kdst_fetch dba 0x10a620a:1 time 1039049191
    pin kdswh11: kdst_fetch dba 0x10a620b:1 time 1039049225
    pin kdswh11: kdst_fetch dba 0x10a620c:1 time 1039049260

    Each kdst_fetch line is a block pin performed by the fetch itself; this fetch pinned 10 table blocks (one row per block here, because of pctfree 99). The matching FETCH record follows:

    FETCH #47499559126280:c=0,e=536,p=0,cr=12,cu=0,mis=0,r=10,dep=1,og=1,plh=2568761675,tim=1343836176425542

    *** 2012-08-01 11:49:46.428
    WAIT #47499559136872: nam='PL/SQL lock timer' ela= 10002694 duration=0 p2=0 p3=0 obj#=-1 tim=134383618642829

    After the 10-second sleep, the next batch of 10 blocks is pinned:

    pin kdswh11: kdst_fetch dba 0x10a620d:1 time 1049052211
    pin kdswh11: kdst_fetch dba 0x10a620e:1 time 1049052264
    pin kdswh11: kdst_fetch dba 0x10a620f:1 time 1049052299
    pin kdswh11: kdst_fetch dba 0x10a6211:1 time 1049052332
    pin kdswh11: kdst_fetch dba 0x10a6212:1 time 1049052364
    pin kdswh11: kdst_fetch dba 0x10a6213:1 time 1049052398
    pin kdswh11: kdst_fetch dba 0x10a6214:1 time 1049052430
    pin kdswh11: kdst_fetch dba 0x10a6215:1 time 1049052462
    pin kdswh11: kdst_fetch dba 0x10a6216:1 time 1049052494
    pin kdswh11: kdst_fetch dba 0x10a6217:1 time 1049052525
    FETCH #47499559126280:c=0,e=371,p=0,cr=10,cu=0,mis=0,r=10,dep=1,og=1,plh=2568761675,tim=1343836186428807

    Again 10 blocks are pinned, then the fetch completes; the next batch shows the same pattern:

    WAIT #47499559136872: nam='PL/SQL lock timer' ela= 10002864 duration=0 p2=0 p3=0 obj#=-1 tim=1343836196431754
    pin kdswh11: kdst_fetch dba 0x10a6218:1 time 1059055662
    pin kdswh11: kdst_fetch dba 0x10a6219:1 time 1059055714
    pin kdswh11: kdst_fetch dba 0x10a621a:1 time 1059055748
    pin kdswh11: kdst_fetch dba 0x10a621b:1 time 1059055781
    pin kdswh11: kdst_fetch dba 0x10a621c:1 time 1059055815
    pin kdswh11: kdst_fetch dba 0x10a621d:1 time 1059055848
    pin kdswh11: kdst_fetch dba 0x10a621e:1 time 1059055883
    pin kdswh11: kdst_fetch dba 0x10a621f:1 time 1059055915
    pin kdswh11: kdst_fetch dba 0x10a6221:1 time 1059055953
    pin kdswh11: kdst_fetch dba 0x10a6222:1 time 1059055992
    FETCH #47499559126280:c=0,e=385,p=0,cr=10,cu=0,mis=0,r=10,dep=1,og=1,plh=2568761675,tim=1343836196432274

    The pattern repeats with steadily increasing DBAs until the final batch:

    WAIT #47499559136872: nam='PL/SQL lock timer' ela= 10002933 duration=0 p2=0 p3=0 obj#=-1 tim=1343836366495589
    pin kdswh11: kdst_fetch dba 0x10a62f6:1 time 1229119497
    pin kdswh11: kdst_fetch dba 0x10a62f7:1 time 1229119545
    pin kdswh11: kdst_fetch dba 0x10a62f8:1 time 1229119576
    pin kdswh11: kdst_fetch dba 0x10a62f9:1 time 1229119610
    pin kdswh11: kdst_fetch dba 0x10a62fa:1 time 1229119644
    pin kdswh11: kdst_fetch dba 0x10a62fb:1 time 1229119671
    pin kdswh11: kdst_fetch dba 0x10a62fc:1 time 1229119703
    pin kdswh11: kdst_fetch dba 0x10a62fd:1 time 1229119730
    pin kdswh11: kdst_fetch dba 0x10a62fe:1 time 1229119760
    pin kdswh11: kdst_fetch dba 0x10a62ff:1 time 1229119787
    FETCH #47499559126280:c=0,e=340,p=0,cr=10,cu=0,mis=0,r=10,dep=1,og=1,plh=2568761675,tim=1343836366496067

    The first DBA fetched was 0x10a6203 and the last was 0x10a62ff. The getbfno function below converts a data block address into its datafile and block numbers, so we can verify that these are exactly the first and last blocks of the MACLEAN table:

    CREATE OR REPLACE FUNCTION getbfno (p_dba IN VARCHAR2)
      RETURN VARCHAR2
    IS
      l_str VARCHAR2 (255) DEFAULT NULL;
      l_fno VARCHAR2 (15);
      l_bno VARCHAR2 (15);
    BEGIN
      l_fno := DBMS_UTILITY.data_block_address_file (TO_NUMBER (LTRIM (p_dba, '0x'), 'xxxxxxxx'));
      l_bno := DBMS_UTILITY.data_block_address_block (TO_NUMBER (LTRIM (p_dba, '0x'), 'xxxxxxxx'));
      l_str := 'datafile# is:' || l_fno || CHR (10) || 'datablock is:' || l_bno || CHR (10) || 'dump command:alter system dump datafile ' || l_fno || ' block ' || l_bno || ';';
      RETURN l_str;
    END;
    /
    Function created.

    SQL> select getbfno('0x10a6203') from dual;
    GETBFNO('0X10A6203')
    --------------------------------------------------------------------------------
    datafile# is:4
    datablock is:680451
    dump command:alter system dump datafile 4 block 680451;

    SQL> select getbfno('0x10a62ff') from dual;
    GETBFNO('0X10A62FF')
    --------------------------------------------------------------------------------
    datafile# is:4
    datablock is:680703
    dump command:alter system dump datafile 4 block 680703;

    SQL> select dbms_rowid.rowid_block_number(min(rowid)),dbms_rowid.rowid_relative_fno(min(rowid)) from maclean;
    DBMS_ROWID.ROWID_BLOCK_NUMBER(MIN(ROWID))
    -----------------------------------------
    DBMS_ROWID.ROWID_RELATIVE_FNO(MIN(ROWID))
    -----------------------------------------
    680451
    4

    SQL> select dbms_rowid.rowid_block_number(max(rowid)),dbms_rowid.rowid_relative_fno(max(rowid)) from maclean;
    DBMS_ROWID.ROWID_BLOCK_NUMBER(MAX(ROWID))
    -----------------------------------------
    DBMS_ROWID.ROWID_RELATIVE_FNO(MAX(ROWID))
    -----------------------------------------
    680703
    4

    This confirms three points: 1. When OPEN CURSOR is executed, PL/SQL parses the SQL but reads no data; OPEN CURSOR records the snapshot SCN that Oracle will use to keep the subsequent FETCHes read-consistent. 2. The actual FETCH is what reads the data blocks, batch by batch. 3. The open cursor + fetch bulk collect combination therefore retrieves data incrementally rather than materializing the entire result set when the cursor is opened.

    Read the article

  • MS Access 2003: Can data disappear from records and how do I test for this and prevent it?

    - by user328960
    Problem and about the database: Data from a record in an Access 2003 database has disappeared. This database has 1 backend and 3 frontends, multiple users, and is hosted on Citrix. Within this database, we have records of all clients served, ranging in the 1000s. Background info: The form for client data entry is set up with various subforms, including both a "programs enrolled" subform and a "services" subform. A client can be enrolled in multiple programs. Once enrolled in a program, services can be entered for that program area using the services subform. There are multiple fields in the services subform, one of which is a drop-down field allowing you to choose from the programs a client has been enrolled in (the list is updated for that client whenever he is enrolled in a new program). The problem details: For one specific record and one specific program area, the program has disappeared from the "programs enrolled" subform and all of the related services have disappeared from the "services" subform for a period of 3 months of data entry. However, other programs and services for this record did not disappear. Questions: Is the disappearance of data a common Access 2003 problem? Are there tests that can be run to detect disappearing data and capture it? If so, what are they? If there is specific code involved, what is it? What can be done to prevent the disappearance of data (other than using a different database)?

    Read the article

  • Integration test failing through NUnit Gui/Console, but passes through TestDriven in IDE

    - by Cliff
    I am using NHibernate against an Oracle database with the NHibernate.Driver.OracleDataClientDriver driver class. I have an integration test that pulls back expected data properly when executed through the IDE using TestDriven.net. However, when I run the unit test through the NUnit GUI or console, NHibernate throws an exception saying it cannot find the Oracle.DataAccess assembly. Obviously, this prevents me from running my integration tests as part of my CI process. NHibernate.HibernateException : The IDbCommand and IDbConnection implementation in the assembly Oracle.DataAccess could not be found. Ensure that the assembly Oracle.DataAccess is located in the application directory or in the Global Assembly Cache. If the assembly is in the GAC, use the <qualifyAssembly> element in the application configuration file to specify the full name of the assembly. I have tried making the assembly available in two ways: by copying it into the bin\debug folder, and by adding the <qualifyAssembly> element in the config file. Again, both methods work when executing through TestDriven in the IDE. Neither works when executing through the NUnit GUI/console. The NUnit GUI log displays the following message. 21:42:26,377 ERROR [TestRunnerThread] ReflectHelper [(null)]- Could not load type Oracle.DataAccess.Client.OracleConnection, Oracle.DataAccess. System.BadImageFormatException: Could not load file or assembly 'Oracle.DataAccess, Version=2.111.7.20, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format. File name: 'Oracle.DataAccess, Version=2.111.7.20, Culture=neutral, PublicKeyToken=89b483f429c47342' --- System.BadImageFormatException: Could not load file or assembly 'Oracle.DataAccess' or one of its dependencies. An attempt was made to load a program with an incorrect format. File name: 'Oracle.DataAccess' I am running NUnit 2.4.8, TestDriven.net 2.24 and VS2008sp1 on Windows 7 64-bit, with Oracle Data Provider v2.111.7.20 and NHibernate v2.1.0.4. Has anyone run into this issue or, better yet, fixed it?

    Read the article

  • How to bulk insert data from ref cursor to a temporary table in PL/SQL

    - by Sambath
    Could anyone tell me how to bulk insert data from a ref cursor into a temporary table in PL/SQL? I have a procedure in which one of the parameters stores a result set; this result set is then to be inserted into a temporary table in another stored procedure. This is my sample code. CREATE OR REPLACE PROCEDURE get_account_list ( type_id in account_type.account_type_id%type, acc_list out sys_refcursor ) is begin open acc_list for select account_id, account_name, balance from account where account_type_id = type_id; end get_account_list; CREATE OR REPLACE PROCEDURE proc1 ( ... ) is accounts sys_refcursor; begin get_account_list(1, accounts); --How to bulk insert the data in accounts into a temporary table? end proc1; In SQL Server, I can write the code below: CREATE PROCEDURE get_account_list type_id int as select account_id, account_name, balance from account where account_type_id = type_id; CREATE PROCEDURE proc1 ( ... ) as ... insert into #tmp_data(account_id, account_name, balance) exec get_account_list 1 How can I write something similar in PL/SQL? Thanks.
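
    For reference, one common approach is to FETCH ... BULK COLLECT from the ref cursor into a collection and then FORALL-insert the batch. A minimal sketch, assuming a hypothetical global temporary table tmp_account whose columns match the cursor's select list:

    CREATE OR REPLACE PROCEDURE proc1
    is
      accounts sys_refcursor;
      TYPE t_account_tab IS TABLE OF tmp_account%ROWTYPE INDEX BY BINARY_INTEGER;
      l_rows t_account_tab;
    begin
      get_account_list(1, accounts);
      LOOP
        -- fetch the ref cursor contents in batches of 1000 rows
        FETCH accounts BULK COLLECT INTO l_rows LIMIT 1000;
        -- bulk-bind the current batch into the temporary table
        FORALL i IN 1 .. l_rows.COUNT
          INSERT INTO tmp_account VALUES l_rows(i);
        EXIT WHEN accounts%NOTFOUND;
      END LOOP;
      CLOSE accounts;
    end proc1;
    /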

    Read the article

  • Change Data Capture or Change Tracking - Same as Traditional Audit Trail Table?

    - by HardCode
    Before I delve any deeper into the abyss of Microsoft documentation, I'd like to know if someone experienced with Change Data Capture and Change Tracking knows whether one or both of these can be used to replace the traditional audit trail setup: a copy of the 'real table' (all of the fields of the original table, plus date/time, user ID, and DML action fields) inserted into by triggers, where the trigger populates the audit trail table (which is all manual work). The MSDN overview documentation explains at a high level what Change Data Capture and Change Tracking are, but it isn't clear enough to me, and doesn't state outright, that these tools can be used to replace the traditional audit trail tables we've made so often. Can someone with any experience using Change Data Capture and Change Tracking save me a lot of time, or confirm that I am spending time looking at the right tool? The critical part of our audit trail is capturing all changes to a table's fields (on INSERT, UPDATE, DELETE), when the change happened, and who made it. These changes are commonly provided to an end user chronologically via an audit trail report. Which raises another question: if Change Data Capture or Change Tracking is the solution, can this data be queried just like data from a normal table? EDIT: I need a permanent audit trail, regardless of time. I see that Change Data Capture has to do with the transaction logs, so this sounds finite to me.

    Read the article

  • asp code for upload data

    - by vicky
    Hello everyone, I have this code for uploading an Excel file and saving the data into a database, but I'm not able to write the code for the database entry. Can someone please help? <% if (Request("FileName") <> "") Then Dim objUpload, lngLoop Response.Write(server.MapPath(".")) If Request.TotalBytes > 0 Then Set objUpload = New vbsUpload For lngLoop = 0 to objUpload.Files.Count - 1 'If accessing this page annonymously, 'the internet guest account must have 'write permission to the path below. objUpload.Files.Item(lngLoop).Save "D:\PrismUpdated\prism_latest\Prism\uploadxl\" Response.Write "File Uploaded" Next Dim FSYSObj, folderObj, process_folder process_folder = server.MapPath(".") & "\uploadxl" set FSYSObj = server.CreateObject("Scripting.FileSystemObject") set folderObj = FSYSObj.GetFolder(process_folder) set filCollection = folderObj.Files Dim SQLStr SQLStr = "INSERT ALL INTO TABLENAME " for each file in filCollection file_name = file.name path = folderObj & "\" & file_name Set objExcel_chk = CreateObject("Excel.Application") Set ws1 = objExcel_chk.Workbooks.Open(path).Sheets(1) row_cnt = 1 'for row_cnt = 6 to 7 ' if ws1.Cells(row_cnt,col_cnt).Value <> "" then ' col = col_cnt ' end if 'next While (ws1.Cells(row_cnt, 1).Value <> "") for col_cnt = 1 to 10 SQLStr = SQLStr & "VALUES('" & ws1.Cells(row_cnt, 1).Value & "')" next row_cnt = row_cnt + 1 WEnd 'objExcel_chk.Quit objExcel_chk.Workbooks.Close() set ws1 = nothing objExcel_chk.Quit Response.Write(SQLStr) 'set filobj = FSYSObj.GetFile (sub_fol_path & "\" & file_name) 'filobj.Delete next End if End If Please tell me how to save the Excel data to the Oracle database. Any help would be appreciated.

    Read the article

  • Return REF CURSOR to procedure generated data

    - by ThaDon
    I need to write a sproc which performs some INSERTs on a table and compiles a list of "statuses" for each row based on how well the INSERT went. Each row will be inserted within a loop; the loop iterates over a cursor that supplies some values for the INSERT statement. What I need to return is a resultset which looks like this: FIELDS_FROM_ROW_BEING_INSERTED.., STATUS VARCHAR2 The STATUS is determined by how the INSERT went. For instance, if the INSERT caused a DUP_VAL_ON_INDEX exception indicating there was a duplicate row, I'd set the STATUS to "Dupe". If all went well, I'd set it to "SUCCESS" and proceed to the next row. By the end of it all, I'd have a resultset of N rows, where N is the number of insert statements performed, and each row contains some identifying info for the row being inserted, along with the "STATUS" of the insertion. Since there is no table in my DB to store the values I'd like to pass back to the user, I'm wondering how I can return the info. A temporary table? It seems that in Oracle temporary tables are "global", and I'm not sure I would want a global table. Are there any temporary tables that get dropped after a session is done?
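
    For what it's worth, Oracle's global temporary tables are global only in their definition: the rows are private to each session (or transaction) and are cleaned up automatically. A minimal sketch, using a hypothetical status table:

    -- The DDL is shared, but each session sees only its own rows.
    -- ON COMMIT PRESERVE ROWS keeps rows until the session ends;
    -- ON COMMIT DELETE ROWS clears them at each COMMIT.
    CREATE GLOBAL TEMPORARY TABLE insert_status (
      row_key NUMBER,
      status  VARCHAR2(20)
    ) ON COMMIT PRESERVE ROWS;

    Another option is a pipelined table function, which streams the per-row statuses back to the caller without any table at all.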

    Read the article

  • Interesting Scala typing solution, doesn't work in 2.7.7?

    - by djc
    I'm trying to build some image algebra code that can work with images (basically a linear pixel buffer + dimensions) that have different types for the pixel. To get this to work, I've defined a parametrized Pixel trait with a few methods that should be able to get used with any Pixel subclass. (For now, I'm only interested in operations that work on the same Pixel type.) Here it is: trait Pixel[T <: Pixel[T]] { def mul(v: Double): T def max(v: T): T def div(v: Double): T def div(v: T): T } Now I define a single Pixel type that has storage based on three doubles (basically RGB 0.0-1.0); I've called it TripleDoublePixel: class TripleDoublePixel(v: Array[Double]) extends Pixel[TripleDoublePixel] { var data: Array[Double] = v def this() = this(Array(0.0, 0.0, 0.0)) def toString(): String = { "(" + data(0) + ", " + data(1) + ", " + data(2) + ")" } def increment(v: TripleDoublePixel) { data(0) += v.data(0) data(1) += v.data(1) data(2) += v.data(2) } def mul(v: Double): TripleDoublePixel = { new TripleDoublePixel(data.map(x => x * v)) } def div(v: Double): TripleDoublePixel = { new TripleDoublePixel(data.map(x => x / v)) } def div(v: TripleDoublePixel): TripleDoublePixel = { var tmp = new Array[Double](3) tmp(0) = data(0) / v.data(0) tmp(1) = data(1) / v.data(1) tmp(2) = data(2) / v.data(2) new TripleDoublePixel(tmp) } def max(v: TripleDoublePixel): TripleDoublePixel = { val lv = data(0) * data(0) + data(1) * data(1) + data(2) * data(2) val vv = v.data(0) * v.data(0) + v.data(1) * v.data(1) + v.data(2) * v.data(2) if (lv > vv) (this) else v } } Now I want to write code to use this, that doesn't have to know what type the pixels are. For example: def idiv[T](a: Image[T], b: Image[T]) { for (i <- 0 until a.data.size) { a.data(i) = a.data(i).div(b.data(i)) } } Unfortunately, this doesn't compile: (fragment of lindet-gen.scala):145: error: value div is not a member of T a.data(i) = a.data(i).div(b.data(i)) I was told in #scala that this worked for someone else, but that was on 2.8. I've tried to get 2.8-rc1 going, but it doesn't compile for me. Is there any way to get this to work in 2.7.7?

    Read the article

  • PHP : If...Else...Query

    - by Rachel
    I am executing this statement inside while (($data=fgetcsv($this->fin,5000,";"))!==FALSE). What I want in the else branch is to throw an exception only for the data value that did not satisfy the if condition. Right now I am displaying the complete row, as I am not sure how to throw an exception only for the value that failed. Code: if ((strtotime($data[11]) &&strtotime($data[12])&&strtotime($data[16]))!==FALSE && ctype_digit($data[0]) && ctype_alnum($data[1]) && ctype_digit($data[2]) && ctype_alnum($data[3]) && ctype_alnum($data[4]) && ctype_alnum($data[5]) && ctype_alnum($data[6]) && ctype_alnum($data[7]) && ctype_alnum($data[8]) && $this->_is_valid($data[9]) && ctype_digit($data[10]) && ctype_digit($data[13]) && $this->_is_valid($data[14])) { //Some Logic } else { throw new Exception ("Data {$data[0], $data[1], $data[2], $data[3], $data[4], $data[5], $data[6], $data[7], $data[8], $data[9], $data[10], $data[11], $data[12], $data[13], $data[14], $data[16]} is not in valid format"); } Guidance would be highly appreciated on how I can throw an exception only for the data that did not satisfy the if condition.

    Read the article

  • SQL SERVER – Copy Data from One Table to Another Table – SQL in Sixty Seconds #031 – Video

    - by pinaldave
    How to copy data from one table to another is one of the most frequently asked questions on forums, Facebook and Twitter. The question comes in many formats, and I have seen places where developers use a cursor instead of this direct method. I wrote a similar article a few years ago - SQL SERVER – Insert Data From One Table to Another Table – INSERT INTO SELECT – SELECT INTO TABLE. The article has been very popular and I have received many interesting and constructive comments. However, two specific comments kept ending up in my mailbox: 1) the SQL Server AdventureWorks sample database does not have the table I used in the example, and 2) is there a video tutorial of the same example? After careful thought I decided to build a new set of scripts for the example, very similar to the old ones, along with a video tutorial. There was no better place than our SQL in Sixty Seconds series to cover this interesting small concept. Let me know what you think of this video. Here is the updated script. -- Method 1 : INSERT INTO SELECT USE AdventureWorks2012 GO ----Create TestTable CREATE TABLE TestTable (FirstName VARCHAR(100), LastName VARCHAR(100)) ----INSERT INTO TestTable using SELECT INSERT INTO TestTable (FirstName, LastName) SELECT FirstName, LastName FROM Person.Person WHERE EmailPromotion = 2 ----Verify that Data in TestTable SELECT FirstName, LastName FROM TestTable ----Clean Up Database DROP TABLE TestTable GO --------------------------------------------------------- --------------------------------------------------------- -- Method 2 : SELECT INTO USE AdventureWorks2012 GO ----Create new table and insert into table using SELECT INTO SELECT FirstName, LastName INTO TestTable FROM Person.Person WHERE EmailPromotion = 2 ----Verify that Data in TestTable SELECT FirstName, LastName FROM TestTable ----Clean Up Database DROP TABLE TestTable GO Related Tips in SQL in Sixty Seconds: SQL SERVER – Insert Data From One Table to Another Table – INSERT INTO SELECT – SELECT INTO TABLE Powershell – Importing CSV File Into Database – Video SQL SERVER – 2005 – Export Data From SQL Server 2005 to Microsoft Excel Datasheet SQL SERVER – Import CSV File into Database Table Using SSIS SQL SERVER – Import CSV File Into SQL Server Using Bulk Insert – Load Comma Delimited File Into SQL Server SQL SERVER – 2005 – Generate Script with Data from Database – Database Publishing Wizard What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
    This year I embarked on a journey to migrate a group of ASP.NET web applications, developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0, to instead use the Oracle WebCenter WSRP Producer for .NET and integrate with WebLogic Portal 10.3.4. It has been a very winding path, and this blog entry is intended to share both the lessons learned and the approaches that led to those learnings. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there. For the Curious From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past and be none the worse for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy. A Brief (and edited) History of the WSRP for .NET Technologies (as Relevant to this Post) Note: This section is for those who are curious about why the migration path is not as simple as many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it. The currently deployed architecture that was to be migrated and upgraded achieved initial integration between .NET and J2EE over the WSRP protocol through the use of the AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed the applications that were written in ASP.NET and deployed on a Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) Server (both version 9.2, the state of the art at the time of its creation). At the time this architectural decision for the application was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support, an adapter was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities. Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction to migrate users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle's strategic technology roadmap, older portal products are being scheduled for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time, with features driven by users of that product and developed under three different vendors (each a direct competitor in the same solution space prior to merger).
    The Oracle WebCenter WSRP Producer for .NET was introduced much more recently, with the key objective of specifically addressing the needs of WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer: ***************************************** Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET. Support 2-way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET. Allow developers to code portlets (Web Parts) using the .NET framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers. The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET. ***************************************** The advantage to creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support, which in turn improves quality, reliability and maintainability for their customers. No changes to J2EE applications consuming the WSRP portlets previously rendered by the .NET Accelerator are required to migrate from the AquaLogic WSRP solution. For some customers using the .NET Accelerator, the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter, as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as children of the WSRP Producer web application as root. Differences between .NET Accelerator and WSRP Producer Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it. The basic terminology used to describe the participating applications in a WSRP environment is the same when applied to either the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves as what is referred to in a WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application. The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer. A WSRP request is sent from the Consumer to the .NET Accelerator, the .NET Accelerator transforms this request into an ASP.NET request, receives the response, then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS, and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application.
    In this manner, the WSRP Producer acts more as a request filter than a proxy in the WSRP transactions between Producer and Consumer. Highly Recommended Enabling Logging Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now? Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable WSRP Producer logging, update the Application_Start method in the Global.asax.cs for your .NET application by adding log4net.Config.XmlConfigurator.Configure(); IIS logs will usually (in a standard configuration) be in a sub folder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations. Things You Must Do Merge Web.Config Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention. Because the existing .NET application will become a sub-application to the WSRP Producer, you will want to merge required settings from the existing Web.Config into the one in the WSRP Producer. Use the WSRP Producer Master Page The Master Page installed for the WSRP Producer provides common, hidden form fields and JavaScripts to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the <%@ Page declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master" . You then replace: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" > <HTML> <HEAD> With <asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server"> And </HEAD> <body> <form id="theForm" method="post" runat="server"> With </asp:Content> <asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server"> And finally </form> </body> </HTML> With </asp:Content> In the event you already use Master Pages, adapt your existing Master Pages to be sub-masters. See Nested ASP.NET Master Pages for a detailed reference on how to do this. It Happened to Me, It Might Happen to You…Or Not Watch for Use of Session or Request in OnInit In the event the .NET application being modified has pages developed to assume the user has been authenticated in an earlier page request, there may be direct or indirect references in the OnInit method to request or session objects that may not have been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet, then check for potential references in the OnInit method (including references by methods called within OnInit) to session or request objects. If there are, the simplest solution is to create a new method and then call that method once the necessary object(s) is fully available. I find doing this at the start of the Page_Load method to be the simplest solution. Case Sensitivity .NET languages are not case sensitive, but Java is. This means it is possible to have many variations of SRC= and src= or .JPG and .jpg. The preferred solution is to make these markup instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is.
    If this is not practical, then make duplicates of any rules where an issue is occurring due to upper or mixed case usage in the .NET application markup, and match the case in use with the duplicate rule. For example: <RewriterRule> <LookFor>(href=\"([^\"]+)</LookFor> <ChangeToAbsolute>true</ChangeToAbsolute> <ApplyTo>.axd,.css</ApplyTo> <MakeResource>true</MakeResource> </RewriterRule> May need to be duplicated as: <RewriterRule> <LookFor>(HREF=\"([^\"]+)</LookFor> <ChangeToAbsolute>true</ChangeToAbsolute> <ApplyTo>.axd,.css</ApplyTo> <MakeResource>true</MakeResource> </RewriterRule> While it is possible to write a regular expression that will handle mixed case usage, it would be long and strenuous to test and maintain, so the recommendation is to use duplicate rules. Is it Still Relative? Some .NET applications base relative paths on a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets, and many ../ paths will need another ../ in front. I Can See You But I Can't Find You This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as a run-time error that was difficult to interpret over WSRP but seemed clear when run as straight ASP.NET, as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to be simply updating the references in the code. However, as the WSRP Master Page is external to the code, a compile-time error resulted: Error      155         The name 'form1' does not exist in the current context                C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens \Reporting.aspx.cs                51           52           legacywebsite.module Much hair-pulling research later, it was discovered that the use of the FindControl method was causing the issue. FindControl doesn't work quite as expected once a Master Page has been introduced, as the controls become nested within other controls and require a recursive search, which the FindControl method does not perform. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced in code generically with Page.Form. For example, this: ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles)); Becomes this: ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles)); Generally the form id is referenced in most ASP.NET applications as a path to a control on the form. To reach the control once a Master Page has been added requires an additional method to recurse through the controls collections within the form and find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely: public static Control FindControlRecursive(Control Root, string Id) { if (Root.ID == Id) return Root; foreach (Control Ctl in Root.Controls) { Control FoundCtl = FindControlRecursive(Ctl, Id); if (FoundCtl != null) return FoundCtl; } return null; } Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary.
Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this:

Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg");

to this:

Label lblErrMsg = (Label)FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg");

The Master That Won't Step Aside

In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient route I took was to update it to take the place of the WSRP Master Page. The changes are highlighted below:

…
<asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder>
</head>
<body leftMargin="0" topMargin="0">
<form id="TheForm" runat="server">
<input type="hidden" name="key" id="key" value="" />
<input type="hidden" name="formactionurl" id="formactionurl" value="" />
<input type="hidden" name="handle" id="handle" value="" />
<asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" >
</asp:ScriptManager>

This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as sub-Masters to the WSRP Master Page.

Moving On

In enterprise portals, even after you get everything working, the work is not finished. Next you need to get it where everyone will work with it.

Migration Planning

Provided that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined whether this cutover should include a new producer, updating all of the portlets in the consumer, or whether the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator and simply change the WSRP endpoint configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL endpoint, in which case the IIS configuration is where the updates occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise.

Location, Location, Location

Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted toward making a second site, will work for creating a site with a different name and then removing the old site. You can also change just the name in IIS.

Manually Creating a WSRP Producer Site

Instructions (note: all executables used are the same ones used by the installer, and "wsrpdev" will be the name of the new instance):

1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev.
2. Bring up a command window as an administrator.
3. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727
4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1
5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1
6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev.
7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev.

Tests:

1. Bring up a browser on the host itself, go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl, and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6.
2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx
3. Register the producer in WLP and test the portlet.

Changing the Port Used by the WSRP Producer

The preconfigured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl.

Do You Need to Migrate?

Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x, and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies, then migration is certainly one activity you should be planning for.


  • Innovation, Adaptability and Agility Emerge As Common Themes at ACORD LOMA Insurance Forum

    - by [email protected]
Helen Pitts, senior product marketing manager for Oracle Insurance, is blogging from the show floor of the ACORD LOMA Insurance Forum this week.

Sessions at the ACORD LOMA Insurance Forum this week highlighted the need for insurance companies to think creatively and be innovative with their technology in order to adapt to continuously shifting market dynamics and drive business efficiency and agility. LOMA President & CEO Robert Kerzner kicked off the day on Tuesday, citing how the recent downturn and recovery have impacted the insurance industry and the ways that companies are doing business. He encouraged carriers to look for new ways to deliver solutions and offer a better service experience for consumers. ACORD President & CEO Gregory Maciag reinforced Kerzner's remarks, noting how the industry's approach to technology and development of industry standards has evolved over the association's 40-year history, and cited how the continued rise of mobile computing will change the way many carriers do business today and in the future.

Drawing from his own experiences, popular keynote speaker and Apple co-founder Steve Wozniak continued this theme, delving into ways that insurers can unite business with technology. "iWoz" encouraged insurers to foster an entrepreneurial mindset in a corporate environment to create a culture of creativity and innovation. He noted that true innovation in business comes from those who have a passion for what they do.

Innovation was also a common theme in several sessions throughout the day, with topics ranging from modernization of core systems, automated underwriting, distribution management, CRM and customer communications management. It was evident that insurers have begun to move past the "old school" processes and systems that constrain agility, implementing new process models and modern technology to become nimble and more adaptive to the market.

Oracle Insurance executives shared a few examples of how insurers are achieving innovation during our Platinum Sponsor session, "Adaptive System Transformation: Making Agility More Than a Buzzword." Oracle Insurance Senior Vice President and General Manager Don Russo was joined by Chuck Johnston, vice president, global strategy and alliances, and Srini Venkatasantham, vice president of product strategy. The three shared how the key pillars of Oracle's adaptive solutions for insurance - configurable applications, accessible information, extensible content and flexible process - have helped insurers respond rapidly, perform effectively and win more business.

Insurers looking to innovate their business with adaptive insurance solutions, including policy administration, business intelligence, enterprise document automation, rating and underwriting, claims, CRM and more, stopped by the Oracle Insurance booth on the exhibit floor. It was a premier destination for many participating in the exhibit hall tours conducted throughout the day.

Finally, red was definitely the color of the evening at the Oracle Insurance "Red Hot" customer celebration at the House of Blues. The event provided a great opportunity for our customers to come together and network with the Oracle Insurance team and their peers in the industry. We look forward to visiting more with our customers and making new connections today.

Helen Pitts is senior product marketing manager for Oracle Insurance.


  • Understanding G1 GC Logs

    - by poonam
The purpose of this post is to explain the meaning of GC logs generated with some tracing and diagnostic options for G1 GC. We will take a look at the output generated with PrintGCDetails, which is a product flag and provides the most detailed level of information. Along with that, we will also look at the output of two diagnostic flags that get enabled with the -XX:+UnlockDiagnosticVMOptions option - G1PrintRegionLivenessInfo, which prints the occupancy and the amount of space used by live objects in each region at the end of the marking cycle, and G1PrintHeapRegions, which provides detailed information on the heap regions being allocated and reclaimed. We will be looking at the logs generated with JDK 1.7.0_04 using these options.

Option -XX:+PrintGCDetails

Here's a sample log of a G1 collection generated with PrintGCDetails:

0.522: [GC pause (young), 0.15877971 secs]
   [Parallel Time: 157.1 ms]
      [GC Worker Start (ms): 522.1 522.2 522.2 522.2
       Avg: 522.2, Min: 522.1, Max: 522.2, Diff: 0.1]
      [Ext Root Scanning (ms): 1.6 1.5 1.6 1.9
       Avg: 1.7, Min: 1.5, Max: 1.9, Diff: 0.4]
      [Update RS (ms): 38.7 38.8 50.6 37.3
       Avg: 41.3, Min: 37.3, Max: 50.6, Diff: 13.3]
         [Processed Buffers: 2 2 3 2
          Sum: 9, Avg: 2, Min: 2, Max: 3, Diff: 1]
      [Scan RS (ms): 9.9 9.7 0.0 9.7
       Avg: 7.3, Min: 0.0, Max: 9.9, Diff: 9.9]
      [Object Copy (ms): 106.7 106.8 104.6 107.9
       Avg: 106.5, Min: 104.6, Max: 107.9, Diff: 3.3]
      [Termination (ms): 0.0 0.0 0.0 0.0
       Avg: 0.0, Min: 0.0, Max: 0.0, Diff: 0.0]
         [Termination Attempts: 1 4 4 6
          Sum: 15, Avg: 3, Min: 1, Max: 6, Diff: 5]
      [GC Worker End (ms): 679.1 679.1 679.1 679.1
       Avg: 679.1, Min: 679.1, Max: 679.1, Diff: 0.1]
      [GC Worker (ms): 156.9 157.0 156.9 156.9
       Avg: 156.9, Min: 156.9, Max: 157.0, Diff: 0.1]
      [GC Worker Other (ms): 0.3 0.3 0.3 0.3
       Avg: 0.3, Min: 0.3, Max: 0.3, Diff: 0.0]
   [Clear CT: 0.1 ms]
   [Other: 1.5 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 0.3 ms]
      [Ref Enq: 0.0 ms]
      [Free CSet: 0.3 ms]
   [Eden: 12M(12M)->0B(13M) Survivors: 0B->2048K Heap: 14M(64M)->9739K(64M)]
   [Times: user=0.59 sys=0.02, real=0.16 secs]

This is the typical log of an Evacuation Pause (G1 collection), in which live objects are copied from one set of regions (young OR young+old) to another set. It is a stop-the-world activity; all the application threads are stopped at a safepoint during this time. This pause is made up of several sub-tasks, indicated by the indentation of the log entries. Here is the topmost line that gets printed for the Evacuation Pause:

0.522: [GC pause (young), 0.15877971 secs]

This is the highest-level information, telling us that this is an Evacuation Pause that started 0.522 secs from the start of the process, in which all the regions being evacuated are young, i.e. Eden and Survivor regions. This collection took 0.15877971 secs to finish. Evacuation Pauses can be mixed as well, in which case the set of regions selected includes all of the young regions as well as some old regions:

1.730: [GC pause (mixed), 0.32714353 secs]

Let's take a look at all the sub-tasks performed in this Evacuation Pause.

[Parallel Time: 157.1 ms]

Parallel Time is the total elapsed time spent by all the parallel GC worker threads. The following lines correspond to the parallel tasks performed by these worker threads in this total parallel time, which in this case is 157.1 ms.

[GC Worker Start (ms): 522.1 522.2 522.2 522.2
 Avg: 522.2, Min: 522.1, Max: 522.2, Diff: 0.1]

The first line tells us the start time of each of the worker threads in milliseconds.
The start times are ordered with respect to the worker thread ids – thread 0 started at 522.1 ms and thread 1 started at 522.2 ms from the start of the process. The second line gives the Avg, Min, Max and Diff of the start times of all of the worker threads.

[Ext Root Scanning (ms): 1.6 1.5 1.6 1.9
 Avg: 1.7, Min: 1.5, Max: 1.9, Diff: 0.4]

This gives us the time spent by each worker thread scanning the roots (globals, registers, thread stacks and VM data structures). Here, thread 0 took 1.6 ms to perform the root scanning task and thread 1 took 1.5 ms. The second line again shows the Avg, Min, Max and Diff of the times spent by all the worker threads.

[Update RS (ms): 38.7 38.8 50.6 37.3
 Avg: 41.3, Min: 37.3, Max: 50.6, Diff: 13.3]

Update RS gives us the time each thread spent in updating the Remembered Sets. Remembered Sets are the data structures that keep track of the references that point into a heap region. Mutator threads keep changing the object graph and thus the references that point into a particular region. We keep track of these changes in buffers called Update Buffers. The Update RS sub-task processes the update buffers that could not be processed concurrently, and updates the corresponding remembered sets of all regions.

[Processed Buffers: 2 2 3 2
 Sum: 9, Avg: 2, Min: 2, Max: 3, Diff: 1]

This tells us the number of Update Buffers (mentioned above) processed by each worker thread.

[Scan RS (ms): 9.9 9.7 0.0 9.7
 Avg: 7.3, Min: 0.0, Max: 9.9, Diff: 9.9]

These are the times each worker thread spent in scanning the Remembered Sets. The Remembered Set of a region contains cards that correspond to the references pointing into that region. This phase scans those cards looking for the references pointing into all the regions of the collection set.

[Object Copy (ms): 106.7 106.8 104.6 107.9
 Avg: 106.5, Min: 104.6, Max: 107.9, Diff: 3.3]

These are the times spent by each worker thread copying live objects from the regions in the Collection Set to the other regions.

[Termination (ms): 0.0 0.0 0.0 0.0
 Avg: 0.0, Min: 0.0, Max: 0.0, Diff: 0.0]

Termination time is the time spent by a worker thread offering to terminate. Before terminating, it checks the work queues of the other threads; if there are still object references in other work queues, it tries to steal some, and if it succeeds in stealing a reference, it processes that and offers to terminate again.

[Termination Attempts: 1 4 4 6
 Sum: 15, Avg: 3, Min: 1, Max: 6, Diff: 5]

This gives the number of times each thread has offered to terminate.

[GC Worker End (ms): 679.1 679.1 679.1 679.1
 Avg: 679.1, Min: 679.1, Max: 679.1, Diff: 0.1]

These are the times in milliseconds at which each worker thread stopped.

[GC Worker (ms): 156.9 157.0 156.9 156.9
 Avg: 156.9, Min: 156.9, Max: 157.0, Diff: 0.1]

These are the total lifetimes of each worker thread.

[GC Worker Other (ms): 0.3 0.3 0.3 0.3
 Avg: 0.3, Min: 0.3, Max: 0.3, Diff: 0.0]

These are the times each worker thread spent performing other tasks not accounted for above in the total Parallel Time.

[Clear CT: 0.1 ms]

This is the time spent in clearing the Card Table. This task is performed in serial mode.

[Other: 1.5 ms]

Time spent in some other tasks, listed below. The following sub-tasks (which individually may be parallelized) are performed serially:

[Choose CSet: 0.0 ms]

Time spent in selecting the regions for the Collection Set.

[Ref Proc: 0.3 ms]

Total time spent in processing Reference objects.
[Ref Enq: 0.0 ms]

Time spent in enqueuing references to the ReferenceQueues.

[Free CSet: 0.3 ms]

Time spent in freeing the collection set data structure.

[Eden: 12M(12M)->0B(13M) Survivors: 0B->2048K Heap: 14M(64M)->9739K(64M)]

This line gives the details on the heap size changes with the Evacuation Pause. It shows that Eden had an occupancy of 12M and its capacity was also 12M before the collection. After the collection, its occupancy dropped to 0, since everything is evacuated/promoted from Eden during a collection, and its target size grew to 13M. The new Eden capacity of 13M is not reserved at this point; this value is the target size of Eden. Regions are added to Eden as demand is made, and when the added regions reach the target size, the next collection starts. Similarly, Survivors had an occupancy of 0 bytes, which grew to 2048K after the collection. The total heap occupancy and capacity were 14M and 64M respectively before the collection, and they became 9739K and 64M after the collection.

Apart from the evacuation pauses, G1 also performs concurrent marking to build the live data information of regions.

1.416: [GC pause (young) (initial-mark), 0.62417980 secs]
…
2.042: [GC concurrent-root-region-scan-start]
2.067: [GC concurrent-root-region-scan-end, 0.0251507]
2.068: [GC concurrent-mark-start]
3.198: [GC concurrent-mark-reset-for-overflow]
4.053: [GC concurrent-mark-end, 1.9849672 sec]
4.055: [GC remark 4.055: [GC ref-proc, 0.0000254 secs], 0.0030184 secs]
 [Times: user=0.00 sys=0.00, real=0.00 secs]
4.088: [GC cleanup 117M->106M(138M), 0.0015198 secs]
 [Times: user=0.00 sys=0.00, real=0.00 secs]
4.090: [GC concurrent-cleanup-start]
4.091: [GC concurrent-cleanup-end, 0.0002721]

The first phase of a marking cycle is Initial Marking, where all the objects directly reachable from the roots are marked; this phase is piggy-backed on a fully-young Evacuation Pause.

2.042: [GC concurrent-root-region-scan-start]

This marks the start of a concurrent phase that scans the set of root regions which are directly reachable from the survivors of the initial marking phase.

2.067: [GC concurrent-root-region-scan-end, 0.0251507]

End of the concurrent root region scan phase; it lasted 0.0251507 seconds.

2.068: [GC concurrent-mark-start]

Start of the concurrent marking, 2.068 secs from the start of the process.

3.198: [GC concurrent-mark-reset-for-overflow]

This indicates that the global marking stack had become full and there was an overflow of the stack. Concurrent marking detected the overflow and had to reset the data structures to start the marking again.

4.053: [GC concurrent-mark-end, 1.9849672 sec]

End of the concurrent marking phase; it lasted 1.9849672 seconds.

4.055: [GC remark 4.055: [GC ref-proc, 0.0000254 secs], 0.0030184 secs]

This corresponds to the remark phase, which is a stop-the-world phase. It completes the leftover marking work (SATB buffer processing) from the previous phase. In this case, the phase took 0.0030184 secs, of which 0.0000254 secs were spent on Reference processing.

4.088: [GC cleanup 117M->106M(138M), 0.0015198 secs]

The cleanup phase, which is again a stop-the-world phase. It goes through the marking information of all the regions, computes the live data information of each region, resets the marking data structures and sorts the regions according to their gc-efficiency. In this example, the total heap size is 138M, and after the live data counting it was found that the total live data size dropped from 117M to 106M.
4.090: [GC concurrent-cleanup-start]

This concurrent cleanup phase frees up the regions that were found to be empty (i.e., containing no live data) during the previous stop-the-world phase.

4.091: [GC concurrent-cleanup-end, 0.0002721]

The concurrent cleanup phase took 0.0002721 secs to free up the empty regions.

Option -XX:+G1PrintRegionLivenessInfo

Now, let's look at the output generated with the flag G1PrintRegionLivenessInfo. This is a diagnostic option and gets enabled with -XX:+UnlockDiagnosticVMOptions. G1PrintRegionLivenessInfo prints the live data information of each region during the Cleanup phase of the concurrent-marking cycle.

26.896: [GC cleanup
### PHASE Post-Marking @ 26.896
### HEAP committed: 0x02e00000-0x0fe00000 reserved: 0x02e00000-0x12e00000 region-size: 1048576

The Cleanup phase of the concurrent-marking cycle started 26.896 secs from the start of the process, and this live data information is being printed after the marking phase. The committed G1 heap ranges from 0x02e00000 to 0x0fe00000, and the total G1 heap reserved by the JVM runs from 0x02e00000 to 0x12e00000. Each region in the G1 heap is 1048576 bytes in size.

### type    address-range          used  prev-live  next-live      gc-eff
###                             (bytes)    (bytes)    (bytes)  (bytes/ms)

This is the header of the output, describing the type of each region, the address range of the region, the used space in the region, the live bytes in the region with respect to the previous marking cycle, the live bytes with respect to the current marking cycle, and the GC efficiency of the region.

### FREE 0x02e00000-0x02f00000 0 0 0 0.0

This is a Free region.

### OLD 0x02f00000-0x03000000 1048576 1038592 1038592 0.0

An Old region with the address range 0x02f00000 to 0x03000000. The total used space in the region is 1048576 bytes, the live bytes as per the previous marking cycle are 1038592, and the live bytes with respect to the current marking cycle are also 1038592. Its GC efficiency has been computed as 0.

### EDEN 0x03400000-0x03500000 20992 20992 20992 0.0

This is an Eden region.

### HUMS 0x0ae00000-0x0af00000 1048576 1048576 1048576 0.0
### HUMC 0x0af00000-0x0b000000 1048576 1048576 1048576 0.0
### HUMC 0x0b000000-0x0b100000 1048576 1048576 1048576 0.0
### HUMC 0x0b100000-0x0b200000 1048576 1048576 1048576 0.0
### HUMC 0x0b200000-0x0b300000 1048576 1048576 1048576 0.0
### HUMC 0x0b300000-0x0b400000 1048576 1048576 1048576 0.0
### HUMC 0x0b400000-0x0b500000 1001480 1001480 1001480 0.0

This is a contiguous set of regions, called Humongous regions, used for storing a large object. HUMS (Humongous starts) marks the start of the set of humongous regions and HUMC (Humongous continues) tags the subsequent regions of the set.

### SURV 0x09300000-0x09400000 16384 16384 16384 0.0

This is a Survivor region.

### SUMMARY capacity: 208.00 MB used: 150.16 MB / 72.19 % prev-live: 149.78 MB / 72.01 % next-live: 142.82 MB / 68.66 %

At the end, a summary is printed listing the capacity, the used space and the change in liveness after the completion of concurrent marking. In this case, the G1 heap capacity is 208MB, the total used space is 150.16MB (72.19% of the total heap size), the live data in the previous marking was 149.78MB (72.01% of the total heap size), and the live data as per the current marking is 142.82MB (68.66% of the total heap size).

Option -XX:+G1PrintHeapRegions

The G1PrintHeapRegions option logs region-related events: when regions are committed, allocated into, or reclaimed.
COMMIT/UNCOMMIT events

G1HR COMMIT [0x6e900000,0x6ea00000]
G1HR COMMIT [0x6ea00000,0x6eb00000]

Here, the heap is being initialized or expanded, and regions (such as the one with bottom 0x6e900000 and end 0x6ea00000) are being freshly committed. COMMIT events are always generated in order, i.e. the next COMMIT event will always be for the uncommitted region with the lowest address.

G1HR UNCOMMIT [0x72700000,0x72800000]
G1HR UNCOMMIT [0x72600000,0x72700000]

The opposite of COMMIT: the heap got shrunk at the end of a Full GC and the regions are being uncommitted. Like COMMIT, UNCOMMIT events are also generated in order, i.e. the next UNCOMMIT event will always be for the committed region with the highest address.

GC Cycle events

G1HR #StartGC 7
G1HR CSET 0x6e900000
G1HR REUSE 0x70500000
G1HR ALLOC(Old) 0x6f800000
G1HR RETIRE 0x6f800000 0x6f821b20
G1HR #EndGC 7

This shows the start and end of an Evacuation Pause. These events are followed by a GC counter tracking both evacuation pauses and Full GCs. Here, this is the 7th GC since the start of the process.

G1HR #StartFullGC 17
G1HR UNCOMMIT [0x6ed00000,0x6ee00000]
G1HR POST-COMPACTION(Old) 0x6e800000 0x6e854f58
G1HR #EndFullGC 17

This shows the start and end of a Full GC. These events are also followed by the same GC counter as above. This is the 17th GC since the start of the process.

ALLOC events

G1HR ALLOC(Eden) 0x6e800000

The region with bottom 0x6e800000 has just started being used for allocation. In this case it is an Eden region allocated into by a mutator thread.

G1HR ALLOC(StartsH) 0x6ec00000 0x6ed00000
G1HR ALLOC(ContinuesH) 0x6ed00000 0x6e000000

Regions being used for the allocation of a Humongous object. The object spans two regions.

G1HR ALLOC(SingleH) 0x6f900000 0x6f9eb010

A single region being used for the allocation of a Humongous object.

G1HR COMMIT [0x6ee00000,0x6ef00000]
G1HR COMMIT [0x6ef00000,0x6f000000]
G1HR COMMIT [0x6f000000,0x6f100000]
G1HR COMMIT [0x6f100000,0x6f200000]
G1HR ALLOC(StartsH) 0x6ee00000 0x6ef00000
G1HR ALLOC(ContinuesH) 0x6ef00000 0x6f000000
G1HR ALLOC(ContinuesH) 0x6f000000 0x6f100000
G1HR ALLOC(ContinuesH) 0x6f100000 0x6f102010

Here, a Humongous object allocation request could not be satisfied by the free committed regions that existed in the heap, so the heap needed to be expanded. New regions are committed and then allocated into for the Humongous object.

G1HR ALLOC(Old) 0x6f800000

An Old region started being used for allocation during GC.

G1HR ALLOC(Survivor) 0x6fa00000

A region being used for copying surviving objects into during a GC. Note that Eden and Humongous ALLOC events are generated outside the GC boundaries, while Old and Survivor ALLOC events are generated inside the GC boundaries.

Other Events

G1HR RETIRE 0x6e800000 0x6e87bd98

Retire and stop using the region with bottom 0x6e800000 and top 0x6e87bd98 for allocation. Note that most regions are full when they are retired, and we omit those events to reduce the output volume. A region is retired when another region of the same type is allocated, or when we reach the start or end of a GC (depending on the region type). So for Eden regions, for example:

1. ALLOC(Eden) Foo
2. ALLOC(Eden) Bar
3. StartGC

At point 2, Foo has just been retired and it was full. At point 3, Bar was retired and it was full. If they were not full when they were retired, we would have a RETIRE event:

1. ALLOC(Eden) Foo
2. RETIRE Foo top
3. ALLOC(Eden) Bar
4. StartGC

G1HR CSET 0x6e900000

The region (bottom: 0x6e900000) is selected for the Collection Set. The region might have been selected for the collection set earlier (i.e. when it was allocated).
However, we generate the CSET events for all regions in the CSet at the start of a GC to make sure there is no confusion about which regions are part of the CSet.

G1HR POST-COMPACTION(Old) 0x6e800000 0x6e839858

A POST-COMPACTION event is generated for each non-empty region in the heap after a full compaction. A full compaction moves objects around, so we don't know what the resulting shape of the heap is (which regions were written to, which were emptied, and so on). To deal with this, we generate a POST-COMPACTION event for each non-empty region with its type (old/humongous) and the heap boundaries. At this point we should only have Old and Humongous regions: the young generation has been collapsed, so there should be no Eden or Survivor regions. POST-COMPACTION events are generated within the Full GC boundary.

G1HR CLEANUP 0x6f400000
G1HR CLEANUP 0x6f300000
G1HR CLEANUP 0x6f200000

These regions were found empty after the remark phase of Concurrent Marking and are reclaimed shortly afterwards.

G1HR #StartGC 5
G1HR CSET 0x6f400000
G1HR CSET 0x6e900000
G1HR REUSE 0x6f800000

At the end of a GC we retire the old region we are allocating into. Given that it's not full, we carry on allocating into it during the next GC. This is what REUSE means. In the case above, 0x6f800000 should have been the last region with an ALLOC(Old) event during the previous GC and should have been retired before the end of that GC.

G1HR ALLOC-FORCE(Eden) 0x6f800000

A specialization of ALLOC which indicates that we have reached the maximum desired number of regions of the particular type (in this case: Eden), but we decided to allocate one more. Currently it is only used for Eden regions, when we extend the young generation because we cannot do a GC while the GC locker is active.

G1HR EVAC-FAILURE 0x6f800000

During a GC, we have failed to evacuate an object from the given region because the heap is full and there is no space left to copy the object into. This event is generated within GC boundaries, exactly once for each region from which we failed to evacuate objects.

When Are Heap Regions Reclaimed?

It is also worth mentioning when the heap regions in the G1 heap are reclaimed. All regions that are in the CSet (the ones that appear in CSET events) are reclaimed at the end of a GC; the exception is regions with EVAC-FAILURE events. All regions with CLEANUP events are reclaimed. After a Full GC, some regions get reclaimed (the ones from which the objects were moved out), but that is not shown explicitly; instead, the non-empty regions left in the heap are printed out with POST-COMPACTION events.


  • Partition Wise Joins II

    - by jean-pierre.dijcks
One of the things that I did not talk about in the initial partition-wise join post was the effect it has on resource allocation on the database server.

When Oracle applies a different join method - e.g. not a PWJ - what you will see in SQL Monitor (in Enterprise Manager) or in an Explain Plan is a set of producers and a set of consumers. The producers scan the tables in the join. If there are two tables, the producers first scan one table, then the other. The producers thus provide data to the consumers, and once the consumers have the data from both scans they do the join and give the data to the query coordinator.

That behavior means that if you choose a degree of parallelism of 4 to run such a query, Oracle will allocate 8 parallel processes. Of these 8 processes, 4 are producers and 4 are consumers. The consumers only actually do work once the producers are fully done scanning both sides of the join. In the plan above you can see that the producers access table SALES [line 11] and then do a PX SEND [line 9]. That is the producer set of processes working. The consumers receive that data [line 8] and twiddle their thumbs while the producers go on and scan CUSTOMERS. The producers send that data to the consumers, indicated by PX SEND [line 5]. After receiving that data [line 4], the consumers do the actual join [line 3] and give the data to the QC [line 2]. BTW, the myth that you see twice the number of processes due to the setting PARALLEL_THREADS_PER_CPU=2 is obviously not true. The above is why you will see 2 times the processes of the DOP.

In a PWJ plan the consumers are not present. Instead of producing rows and handing them to different processes, a PWJ uses only a single set of processes. Each process reads its piece of the join across the two tables and performs the join. The plan here is notably different from the initial plan. First of all, the hash join is done right on top of both table scans [line 8]. This query is a little more complex than the previous one, so there is a bit of noise above that bit of info, but for this post let's ignore that (the sort steps). The important point is that the PWJ plan will typically be faster and, in terms of PX processes and resources, typically cheaper. You may want to look out for those plans and try to get them to appear a lot... (a minimal schema sketch that qualifies for a full PWJ is shown at the end of this post).

CREDITS: Credits for the plans and some of the info on the plans go to Maria, as she actually produced these plans and is the expert on plans in general... You can see her talk about explaining the explain plan and other optimizer stuff over here:

- ODTUG in Washington DC, June 27 - July 1
- On the Optimizer blog
- At OpenWorld in San Francisco, September 19 - 23

Happy joining and hope to see you all at ODTUG and OOW...
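To make the setup concrete, here is a minimal sketch of a schema that qualifies for a full partition-wise join. The table and column names (SALES_PWJ, CUSTOMERS_PWJ, CUST_ID) are hypothetical and not from the post; the key requirement is simply that both tables are equipartitioned on the join key:

-- Both tables hash partitioned identically on the join key, so each
-- PX server can join one matching pair of partitions on its own.
CREATE TABLE customers_pwj (
  cust_id    NUMBER PRIMARY KEY,
  cust_name  VARCHAR2(100)
)
PARTITION BY HASH (cust_id) PARTITIONS 16;

CREATE TABLE sales_pwj (
  sale_id  NUMBER,
  cust_id  NUMBER NOT NULL,
  amount   NUMBER
)
PARTITION BY HASH (cust_id) PARTITIONS 16;

-- Because both tables are equipartitioned on cust_id, the optimizer can
-- run this parallel join as a full PWJ: a single set of PX servers and
-- no producer/consumer redistribution step.
SELECT /*+ PARALLEL(4) */
       c.cust_name, SUM(s.amount)
FROM   sales_pwj s
JOIN   customers_pwj c ON s.cust_id = c.cust_id
GROUP BY c.cust_name;

With a DOP of 4, a plan of this shape should allocate only 4 PX servers rather than the 8 seen in the producer/consumer case described above.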


  • Using CTAS & Exchange Partition Replace IAS for Copying Partition on Exadata

    - by Bandari Huang
Usage Scenario: Copy data and indexes from one partition to another partition in a partitioned table.

Solution:
- Create a partition definition.
- Copy data from one partition to another partition by 'Insert as select (IAS)'.
- Create a nonpartitioned table by 'Create table as select (CTAS)'.
- Convert the nonpartitioned table into a partition of the partitioned table by exchanging their data segments.
- Rebuild unusable indexes.

Exchange Partition Conversions
- Mutual conversion between a partition (or subpartition) and a nonpartitioned table
- Mutual conversion between a hash-partitioned table and a partition of a composite *-hash partitioned table
- Mutual conversion between a [range | list]-partitioned table and a partition of a composite *-[range | list] partitioned table

Exchange Partition Usage Scenarios
- High-speed loading of new, incremental data into an existing partitioned table in a DW environment
- Exchanging old data partitions out of a partitioned table; the data is purged from the partitioned table without actually being deleted and can be archived separately

Exchange Partition Syntax

ALTER TABLE schema.table
  EXCHANGE [PARTITION|SUBPARTITION] [partition|subpartition]
  WITH TABLE schema.table
  [INCLUDING|EXCLUDING] INDEXES
  [WITH|WITHOUT] VALIDATION
  UPDATE [INDEXES|GLOBAL INDEXES]

INCLUDING | EXCLUDING INDEXES
Specify INCLUDING INDEXES if you want local index partitions or subpartitions to be exchanged with the corresponding table index (for a nonpartitioned table) or local indexes (for a hash-partitioned table). Specify EXCLUDING INDEXES if you want all index partitions or subpartitions corresponding to the partition, and all the regular indexes and index partitions on the exchanged table, to be marked UNUSABLE. If you omit this clause, the default is EXCLUDING INDEXES.

WITH | WITHOUT VALIDATION
Specify WITH VALIDATION if you want Oracle Database to return an error if any rows in the exchanged table do not map into the partitions or subpartitions being exchanged. Specify WITHOUT VALIDATION if you do not want Oracle Database to check the proper mapping of rows in the exchanged table. If you omit this clause, the default is WITH VALIDATION.

UPDATE INDEXES | UPDATE GLOBAL INDEXES
Unless you specify UPDATE INDEXES, the database marks UNUSABLE the global indexes, or all global index partitions, on the table whose partition is being exchanged. Global indexes or global index partitions on the table being exchanged remain invalidated. (You cannot use UPDATE INDEXES for index-organized tables; use UPDATE GLOBAL INDEXES instead.)

Notes on Exchanging Partitions & Subpartitions
- Both tables involved in the exchange must have the same primary key, and no validated foreign keys can be referencing either of the tables unless the referenced table is empty.
- When exchanging partitioned index-organized tables:
  – The source and target table or partition must have their primary key set on the same columns, in the same order.
  – If key compression is enabled, then it must be enabled for both the source and the target, and with the same prefix length.
  – Both the source and target must be index organized.
  – Both the source and target must have overflow segments, or neither can have overflow segments. Also, both the source and target must have mapping tables, or neither can have a mapping table.
  – Both the source and target must have identical storage attributes for any LOB columns.
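Putting the steps above together, here is a minimal sketch of the CTAS-plus-exchange flow. The SALES table, partition names and index are hypothetical, shown only to illustrate the sequence:

-- 1. Stage the data to copy with CTAS, using partition-extended naming.
CREATE TABLE sales_stage NOLOGGING PARALLEL AS
  SELECT * FROM sales PARTITION (sales_2010_q1);

-- 2. Build matching indexes on the staging table so INCLUDING INDEXES
--    can swap them in as the target partition's local index segments.
CREATE INDEX sales_stage_ix ON sales_stage (sale_id) NOLOGGING PARALLEL;

-- 3. Swap the staging table's segment into the target partition. The
--    exchange is a segment swap, so the target partition's previous
--    contents end up in SALES_STAGE afterwards. WITHOUT VALIDATION is
--    needed here because rows copied from another partition would not
--    pass the target partition's row-mapping check.
ALTER TABLE sales
  EXCHANGE PARTITION sales_2010_q2
  WITH TABLE sales_stage
  INCLUDING INDEXES
  WITHOUT VALIDATION
  UPDATE GLOBAL INDEXES;

-- 4. Rebuild any local index partitions left UNUSABLE by the exchange.
ALTER TABLE sales
  MODIFY PARTITION sales_2010_q2
  REBUILD UNUSABLE LOCAL INDEXES;

Because the exchange only updates the data dictionary rather than moving rows, it is near-instant regardless of partition size, which is why this pattern can replace a long-running IAS on large Exadata tables.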

