Search Results

Search found 36419 results on 1457 pages for 'sql extended events'.

  • String manipulation functions in SQL Server 2000 / 2005

    - by Vipin
    SQL Server provides a range of string manipulation functions. I was aware of most of them in the back of my mind, but whenever I needed one I had to dig it out either from the SQL Server help file or from Google. So I thought I would list the functions that perform the most common operations in SQL Server. Hope it will be helpful to you all.
    Len('String_Expression') - returns the length of the input String_Expression. Example: Select Len('Vipin') Output: 5
    Left('String_Expression', int_characters) - returns int_characters characters from the left of the String_Expression.
    Right('String_Expression', int_characters) - returns int_characters characters from the right of the String_Expression. Example: Select Left('Vipin',3), Right('Vipin',3) Output: Vip, pin
    LTrim('String_Expression') - removes spaces from the left of the input 'String_Expression'.
    RTrim('String_Expression') - removes spaces from the right of the input 'String_Expression'. Note: to remove spaces from both ends of the String_Expression, use LTrim and RTrim in conjunction. Example: Select LTrim(' Vipin '), RTrim(' Vipin '), LTrim(RTrim(' Vipin ')) Output: 'Vipin ', ' Vipin', 'Vipin' (the single quote marks are not part of the SQL output; they are included only to show the spaces at the ends of the strings.)
    Substring('String_Expression', int_start, int_length) - returns the part of String_Expression that starts at position int_start and is int_length characters long.
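    A small worked example of mine combining the functions above (CHARINDEX and the sample value are my additions, not from the original list, though everything here is standard T-SQL that runs on SQL Server 2000/2005):
        DECLARE @s VARCHAR(50)
        SET @s = '  Vipin Kumar  '
        -- Trim both ends, then slice the result.
        SET @s = LTRIM(RTRIM(@s))                -- 'Vipin Kumar'
        SELECT Len(@s)                                  AS TotalLength,    -- 11
               Left(@s, 5)                              AS FirstWord,      -- 'Vipin'
               Right(@s, 5)                             AS LastWord,       -- 'Kumar'
               Substring(@s, 7, 5)                      AS AlsoLastWord,   -- 'Kumar'
               Substring(@s, 1, CHARINDEX(' ', @s) - 1) AS UpToFirstSpace  -- 'Vipin'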

    Read the article

  • ODBC in SSIS 2012

    - by jamiet
    In August 2011 the SQL Server client team published a blog post entitled Microsoft is Aligning with ODBC for Native Relational Data Access in which they basically said "OLE DB is the past, ODBC is the future. Deal with it.". From that blog post:
    "We encourage you to adopt ODBC in the development of your new and future versions of your application. You don't need to change your existing applications using OLE DB, as they will continue to be supported on Denali throughout its lifecycle. While this gives you a large window of opportunity for changing your applications before the deprecation goes into effect, you may want to consider migrating those applications to ODBC as a part of your future roadmap."
    I recently undertook a project using SSIS 2012 and heeded that advice by opting to use ODBC Connection Managers rather than OLE DB Connection Managers. Unfortunately my finding was that the ODBC Connection Manager is not yet ready for primetime use in SSIS 2012. The main issue I found was that you can't populate an Object variable with a recordset when using an Execute SQL Task connecting to an ODBC data source; any attempt to do so will result in the error: "Disconnected recordsets are not available from ODBC connections." I have filed a bug on Connect at ODBC Connection Manager does not have same functionality as OLE DB. For this reason I strongly recommend that you don't make the move to ODBC Connection Managers in SSIS just yet - best to wait for the next version of SSIS before doing that.
    I found another couple of issues with the ODBC Connection Manager that are worth keeping in mind:
    - It doesn't recognise System Data Source Names (DSNs), only User DSNs (bug filed at ODBC System DSNs are not available in the ODBC Connection Manager). UPDATE: According to a comment on that Connect item this may only be a problem on 64bit.
    - In the OLE DB Connection Manager parameter ordinals are 0-based; in the ODBC Connection Manager they are 1-based (oh, I just can't wait for the upgrade mess that ensues from this one!).
    You have been warned!
    @jamiet

    Read the article

  • TechEd 2012: Fast SQL Server

    - by Tim Murphy
    While I spend a certain amount of my time creating databases (coding around SQL Server and setting up a server when I have to), it isn't my bread and butter. Since I have run into a number of times that SQL Server needed to be tuned, I figured I would step out of my comfort zone and see what I can learn.
    Brent Ozar packed a mountain of information into his session on making SQL Server faster. I'm not sure how he found time to hit all of his points, since he was allowing the audience to abuse him on Twitter instead of asking questions, but he managed it. I also questioned his sanity since he appeared to be using a fruit laptop.
    He had my attention though when he stated that he had given up on telling people not to use "select *". He posited that it could be fixed with hardware by caching the data in memory. He continued by cautioning that having too many indexes could defeat this approach. His logic was sound if not always practical, but it is a good place to start when determining the trade-offs you need to balance. He was moving pretty fast, but I believe he was prescribing this solution predominantly for OLTP databases before moving on to data warehouse solutions.
    Much of the advice he gave for data warehouses is contained in the Microsoft Fast Track guidance, so I won't rehash it here. To summarize, the solution seems to be the proper balance of memory, disk access speed and the speed of the pipes that get the data from storage to the CPU. It appears to be sound guidance, and the session gave enough information that going forward we should be able to find the details we need easily. Just what the doctor ordered.
    del.icio.us Tags: SQL Server,TechEd,TechEd 2012,Database,Performance Tuning
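    On the "too many indexes" caution: a minimal sketch (my addition, not from Brent's session) that uses the standard sys.dm_db_index_usage_stats DMV to spot indexes in the current database that are written to but never read - candidates for the kind of trade-off analysis described above:
        -- Indexes with writes but no seeks/scans/lookups since the last restart.
        SELECT OBJECT_NAME(s.object_id) AS table_name,
               i.name                   AS index_name,
               s.user_updates           AS writes
        FROM   sys.dm_db_index_usage_stats AS s
               JOIN sys.indexes AS i
                 ON i.object_id = s.object_id AND i.index_id = s.index_id
        WHERE  s.database_id = DB_ID()
               AND s.user_seeks + s.user_scans + s.user_lookups = 0
               AND s.user_updates > 0
        ORDER  BY s.user_updates DESC;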

    Read the article

  • SQL queries break our game! (Back-end server is at capacity)

    - by TimH
    We have a Facebook game that stores all persistent data in a MySQL database running on a large Amazon RDS instance. One of our tables is 2GB in size. If I run any query on that table that takes more than a couple of seconds, any SQL actions performed by our game fail with the error:
    HTTP/1.1 503 Service Unavailable: Back-end server is at capacity
    This obviously brings down our game! I've monitored CPU usage on the RDS instance during these periods, and though it does spike, it doesn't go much over 50%. Previously we were on a smaller instance size and it did hit 100%, so I'd hoped just throwing more CPU capacity at the problem would solve it. I now think it's an issue with the number of open connections. However, I've only been working with SQL for 8 months or so, so I'm no expert on MySQL configuration. Is there perhaps some configuration setting I can change to prevent these queries from overloading the server, or should I just not be running them whilst our game is up? I'm using MySQL Workbench to run the queries. Here's an example:
    SELECT * FROM BlueBoxEngineDB.Transfer
    WHERE Amount = 1000 AND FromUserId = 4 AND Status = 'Complete';
    As you can see, it's not overly complex. There are only 5 columns in the table. Any help would be very much appreciated - Thanks!
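    One first step worth trying (my suggestion - the post doesn't say whether Transfer is indexed, so this is an assumption) is a composite index covering the WHERE clause, so the query stops scanning the whole 2GB table and tying up connections for seconds at a time:
        -- Index matching the filter columns from the example query.
        CREATE INDEX idx_transfer_fromuser_status_amount
            ON BlueBoxEngineDB.Transfer (FromUserId, Status, Amount);
        -- Confirm the index is actually used:
        EXPLAIN SELECT * FROM BlueBoxEngineDB.Transfer
        WHERE Amount = 1000 AND FromUserId = 4 AND Status = 'Complete';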

    Read the article

  • SQL Server Configuration Scripting Utility Release 9

    - by Bill Graziano
    There's another update to my little utility to script a SQL Server's configuration. I use this for two purposes. First, I use it to keep my database mirroring servers up to date. Second, I capture the output in a version control system and keep that for historical reference.
    In release 3.0.9 I made the following changes:
    - Rewrote the encrypted trigger scripting. It will now list the encrypted triggers in a comment in the table script but can't actually script them.
    - It now scripts any server event notifications.
    - You can script a single database using the /scriptdb flag. Please note that it will also script the instance and system databases when it does this.
    - It will script any user-defined endpoints. This will capture your mirroring endpoints and, more importantly, any service broker endpoints.
    - It will gracefully skip database mail on the Express Edition.
    It still doesn't support SQL Server 2012. I think that's the next feature to add though.
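    For a rough idea of the kind of instance-level information a utility like this captures (a sketch of mine, not the tool's actual code), most of what sp_configure exposes can be read straight from sys.configurations and dropped into version control:
        -- Snapshot the instance configuration for the version control system.
        SELECT name, value, value_in_use, is_dynamic, is_advanced
        FROM   sys.configurations
        ORDER  BY name;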

    Read the article

  • List of events to track for Desired Configuration Management (DCM) in SCCM

    - by user69952
    Hi everyone, I can't seem to find a list of all event IDs related to Desired Configuration Management (DCM) in SCCM. I managed to find the common ones by testing, but I'm sure there are tons of scenarios I haven't seen yet. I'm interested in particular in events related to compliance status, errors, etc., such as these:
    11854 - Policy is now compliant
    11856 - Previously compliant, is now non-compliant
    Please post any other event IDs you know of related to this. Thank you!

    Read the article

  • SQL Server 2005 Disk Configuration: Single RAID 1+0 or multiple RAID 1+0s?

    - by mfredrickson
    Assuming that the workload for the SQL Server is just a normal OLTP database, and that there are a total of 20 disks available, which configuration would make more sense?
    1. A single RAID 1+0 containing all 20 disks. This physical volume would contain both the data files and the transaction log files, but two logical drives would be created from this RAID: one for the data files and one for the log files. Or...
    2. Two RAID 1+0s, each containing 10 disks. One physical volume would contain the data files, and the other would contain the log files.
    The reason for this question is a disagreement between me (SQL Developer) and a co-worker (DBA). In every configuration that I've done, or seen others do, the data files and transaction log files were separated at the physical level and placed on separate RAIDs. However, my co-worker's argument is that by placing all the disks into a single RAID 1+0, any IO done by the server is potentially shared between all 20 disks, instead of just 10 disks in my suggested configuration. Conceptually, his argument makes sense to me. Also, I've found some information from Microsoft that seems to support his position: http://technet.microsoft.com/en-us/library/cc966414.aspx
    In the section titled "3. RAID10 Configuration", showing a configuration in which all 20 disks are allocated to a single RAID 1+0, it states: "In this scenario, the I/O parallelism can be used to its fullest by all partitions. Therefore, distribution of I/O workload is among 20 physical spindles instead of four at the partition level."
    But... every other configuration I've seen suggests physically separating the data and log files onto separate RAIDs. Everything I've found here on Server Fault suggests the same. I understand that log files will be write-heavy, and that data files will be a combination of reads and writes, but does this require that the files be placed onto separate RAIDs instead of a single RAID?
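    If you want measurements rather than conjecture to settle the disagreement, here is a sketch (my addition, not part of the original question) using sys.dm_io_virtual_file_stats - available in SQL Server 2005 - to compare read/write pressure and IO stalls on data files versus log files before committing to either layout:
        -- Cumulative IO and stall time per database file since startup.
        SELECT DB_NAME(vfs.database_id) AS database_name,
               mf.type_desc             AS file_type,  -- ROWS vs LOG
               vfs.num_of_reads, vfs.num_of_writes,
               vfs.io_stall_read_ms, vfs.io_stall_write_ms
        FROM   sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
               JOIN sys.master_files AS mf
                 ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
        ORDER  BY vfs.io_stall_write_ms DESC;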

    Read the article

  • Do you have a data roadmap?

    - by BuckWoody
    I often visit companies where they ask me "What is SQL Server's roadmap?" What they mean is that they want to know where Microsoft is going with our database products. I explain that we're expanding not only the capacities in SQL Server but the capabilities - we're trying to make an "information platform", rather than just a data store.
    But it's interesting when I ask the same question back: "What is your data roadmap?" Most folks are surprised by the question, thinking only about storage and archival. To them, data is data. Ah, not so. Your data is one of the most valuable assets, if not the most valuable asset, in your organization. And you should be thinking about how you'll acquire it, how it will be distributed, how you'll archive it (which includes more than just backing it up) and, most importantly, how you'll leverage it. Because it's only when data becomes information that it is truly useful. To be sure, the folks on the web that collect lots of data have a strategy for it - do you?

    Read the article

  • SQL Health Check (SQLHC)

    - by Steve He
    Customers often ask how to check whether a given SQL statement is healthy - whether its execution plan, optimizer environment and statistics are reasonable. For this, Oracle provides SQLHC, the SQL Tuning Health-Check script, which examines a single SQL statement and its cost-based optimizer (CBO) inputs - statistics, parameters and related settings - and reports anything that looks wrong. Document 1455583.1 introduces the SQL Tuning Health-Check Script (SQLHC) and includes a recording that demonstrates how to use it. For more information, see: SQL Tuning Health-Check Script (SQLHC) (Doc ID 1366133.1) and Document 1417774.1 FAQ: SQLHC HealthCheck.
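    For reference, SQLHC is run from SQL*Plus against a single SQL_ID; the license flag and the SQL_ID below are illustrative values of mine, not from the original post:
        -- Connect as a DBA user; pass the license level (T = Tuning pack,
        -- D = Diagnostics pack, N = none) and the SQL_ID to check.
        SQL> START sqlhc.sql T 99swcu876ncmv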

    Read the article

  • OPEN CURSOR and BULK COLLECT

    - by Liu Maclean
    A question came up on t.askmaclean.com about BULK COLLECT and OPEN CURSOR: when a cursor is opened, does Oracle execute the SQL and materialize the whole result set, or is the work deferred until FETCH? To find out, time the OPEN separately from each FETCH ... BULK COLLECT. Test setup:
    create table zengfankun_temp01 as select * from dba_objects;
    select count(*) from zengfankun_temp01;  -- 126,826 rows
    analyze table zengfankun_temp01 compute statistics;

    create or replace procedure test_open_cursor is
      type type_owner is table of zengfankun_temp01.owner%type index by binary_integer;
      type type_object_name is table of zengfankun_temp01.object_name%type index by binary_integer;
      type type_object_id is table of zengfankun_temp01.object_id%type index by binary_integer;
      type type_object_type is table of zengfankun_temp01.object_type%type index by binary_integer;
      type type_last_ddl_time is table of zengfankun_temp01.last_ddl_time%type index by binary_integer;
      l_ary_owner type_owner;
      l_ary_object_name type_object_name;
      l_ary_object_id type_object_id;
      l_ary_object_type type_object_type;
      l_ary_last_ddl_time type_last_ddl_time;
      cursor cur_object is
        select owner, object_name, object_id, object_type, last_ddl_time
          from zengfankun_temp01
         order by owner, object_name, object_type, last_ddl_time;
      OPEN_START number;
      OPEN_END number;
      FETCH_START number;
      FETCH_END number;
    begin
      DBMS_OUTPUT.ENABLE(buffer_size => null);
      OPEN_START := dbms_utility.get_time();
      open cur_object;
      OPEN_END := dbms_utility.get_time();
      dbms_output.put_line('OPEN_TIME:' || TO_CHAR(OPEN_END - OPEN_START));
      loop
        FETCH_START := dbms_utility.get_time();
        fetch cur_object bulk collect
          into l_ary_owner, l_ary_object_name, l_ary_object_id,
               l_ary_object_type, l_ary_last_ddl_time
          limit 10000;
        FETCH_END := dbms_utility.get_time();
        dbms_output.put_line('FETCH_TIME:' || TO_CHAR(FETCH_END - FETCH_START) ||
                             ' ROWCOUNT:' || cur_object%rowcount);
        exit when cur_object%notfound or cur_object%notfound is null;
      end loop;
    end test_open_cursor;
    First run:
    OPEN_TIME:12
    FETCH_TIME:21 ROWCOUNT:10000
    FETCH_TIME:3 ROWCOUNT:20000
    FETCH_TIME:3 ROWCOUNT:30000
    FETCH_TIME:3 ROWCOUNT:40000
    FETCH_TIME:3 ROWCOUNT:50000
    FETCH_TIME:3 ROWCOUNT:60000
    FETCH_TIME:3 ROWCOUNT:70000
    FETCH_TIME:3 ROWCOUNT:80000
    FETCH_TIME:3 ROWCOUNT:90000
    FETCH_TIME:3 ROWCOUNT:100000
    FETCH_TIME:3 ROWCOUNT:110000
    FETCH_TIME:3 ROWCOUNT:120000
    FETCH_TIME:1 ROWCOUNT:126826
    Second run:
    OPEN_TIME:0
    FETCH_TIME:18 ROWCOUNT:10000
    FETCH_TIME:3 ROWCOUNT:20000
    FETCH_TIME:3 ROWCOUNT:30000
    FETCH_TIME:3 ROWCOUNT:40000
    FETCH_TIME:3 ROWCOUNT:50000
    FETCH_TIME:3 ROWCOUNT:60000
    FETCH_TIME:3 ROWCOUNT:70000
    FETCH_TIME:3 ROWCOUNT:80000
    FETCH_TIME:3 ROWCOUNT:90000
    FETCH_TIME:3 ROWCOUNT:100000
    FETCH_TIME:3 ROWCOUNT:110000
    FETCH_TIME:3 ROWCOUNT:120000
    FETCH_TIME:2 ROWCOUNT:126826
    The query returns 126,826 ordered rows, yet OPEN completes in 0-12 centiseconds while each FETCH of 10,000 rows takes about 3. If OPEN actually ran the query and cached the result set, the cost would show up at OPEN instead of being spread evenly across the N fetches. The explanation:
    When a cursor is OPENed, PL/SQL parses the SQL statement (soft or hard parse as needed) and records the snapshot SCN - the SCN as of the OPEN. Oracle defers all data access until FETCH. When a FETCH touches a block, it reads the current version of the block (the most recent version); if that version's SCN is newer than the snapshot SCN, Oracle applies UNDO to roll the block back to a read-consistent image as of the snapshot SCN. If the UNDO needed to rebuild that consistent image is no longer available, the familiar ORA-1555 is raised.
    To watch the deferred reads directly, build a table in which each row occupies its own block (a char(2000) column with pctfree 99), fetch with BULK COLLECT ... LIMIT 10, and set the hidden parameter "_trace_pin_time" so the server process logs every CR block it pins. If OPEN CURSOR only parses and records the snapshot SCN, each FETCH should pin exactly 10 buffers:
    [oracle@nas ~]$ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.3.0 Production on Wed Aug 1 11:36:52 2012
    Copyright (c) 1982, 2011, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options

    SQL> select * from global_name;
    GLOBAL_NAME
    --------------------------------------------------------------------------------
    http://www.askmaclean.com

    SQL> create table maclean (t1 char(2000)) tablespace users pctfree 99;
    Table created.

    SQL> begin
      2    for i in 1..200 loop
      3      insert into maclean values('MACLEAN');
      4      commit;
      5    end loop;
      6  end;
      7  /
    PL/SQL procedure successfully completed.

    SQL> exec dbms_stats.gather_table_stats('','MACLEAN');
    PL/SQL procedure successfully completed.

    SQL> select count(*) from maclean;
      COUNT(*)
    ----------
           200

    SQL> select blocks, num_rows from dba_tables where table_name='MACLEAN';
        BLOCKS   NUM_ROWS
    ---------- ----------
           244        200

    SQL> alter system set "_trace_pin_time"=1 scope=spfile;
    System altered.

    SQL> startup force;
    ORACLE instance started.
    Total System Global Area 3140026368 bytes
    Fixed Size                  2232472 bytes
    Variable Size            1795166056 bytes
    Database Buffers         1325400064 bytes
    Redo Buffers               17227776 bytes
    Database mounted.
    Database opened.

    SQL> alter session set events '10046 trace name context forever,level 12';
    Session altered.

    SQL> declare
      2    cursor v_cursor is
      3      select * from sys.maclean;
      4    type v_type is table of sys.maclean%rowtype index by binary_integer;
      5    rec_tab v_type;
      6  begin
      7    open v_cursor;
      8    dbms_lock.sleep(30);
      9    loop
     10      fetch v_cursor bulk collect
     11        into rec_tab limit 10;
     12      dbms_lock.sleep(10);
     13      exit when v_cursor%notfound;
     14    end loop;
     15  end;
     16  /
    From the 10046 trace plus the pin trace:
    PARSING IN CURSOR #47499559136872 len=337 dep=0 uid=0 oct=47 lid=0 tim=1343836146412056 hv=496860239 ad='11a11dbb0' sqlid='4zh7954ftuz2g'
    declare cursor v_cursor is select * from sys.maclean; type v_type is table of sys.maclean%rowtype index by binary_integer; rec_tab v_type; begin open v_cursor; dbms_lock.sleep(30); loop fetch v_cursor bulk collect into rec_tab limit 10; dbms_lock.sleep(10); exit when v_cursor%notfound; end loop; end;
    END OF STMT
    PARSE #47499559136872:c=0,e=346,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=0,tim=1343836146412051
    =====================
    PARSING IN CURSOR #47499559126280 len=25 dep=1 uid=0 oct=3 lid=0 tim=1343836146414939 hv=3296884535 ad='11a11d250' sqlid='2mb1493284xtr'
    SELECT * FROM SYS.MACLEAN
    END OF STMT
    PARSE #47499559126280:c=1999,e=2427,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,plh=2568761675,tim=1343836146414937
    EXEC #47499559126280:c=0,e=55,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=2568761675,tim=1343836146415104
    Note that SELECT * FROM SYS.MACLEAN is PARSEd here, at OPEN CURSOR time, but no buffer pins appear yet - the OPEN reads nothing.
    *** 2012-08-01 11:49:36.424
    WAIT #47499559136872: nam='PL/SQL lock timer' ela= 30009361 duration=0 p2=0 p3=0 obj#=-1 tim=1343836176424782
    That is the 30-second sleep after the OPEN. Then the first fetch:
    pin ktewh26: kteinpscan dba 0x10a6202:4 time 1039048805
    pin ktewh27: kteinmap dba 0x10a6202:4 time 1039048847
    pin kdswh11: kdst_fetch dba 0x10a6203:1 time 1039048898
    pin kdswh11: kdst_fetch dba 0x10a6204:1 time 1039048961
    pin kdswh11: kdst_fetch dba 0x10a6205:1 time 1039049004
    pin kdswh11: kdst_fetch dba 0x10a6206:1 time 1039049042
    pin kdswh11: kdst_fetch dba 0x10a6207:1 time 1039049089
    pin kdswh11: kdst_fetch dba 0x10a6208:1 time 1039049123
    pin kdswh11: kdst_fetch dba 0x10a6209:1 time 1039049159
    pin kdswh11: kdst_fetch dba 0x10a620a:1 time 1039049191
    pin kdswh11: kdst_fetch dba 0x10a620b:1 time 1039049225
    pin kdswh11: kdst_fetch dba 0x10a620c:1 time 1039049260
    FETCH #47499559126280:c=0,e=536,p=0,cr=12,cu=0,mis=0,r=10,dep=1,og=1,plh=2568761675,tim=1343836176425542
    kdst_fetch is the row-fetch routine: the first FETCH pins the segment header blocks (kteinpscan/kteinmap) plus exactly 10 data blocks and returns r=10 rows. After each 10-second PL/SQL lock timer wait, the next fetch pins the next 10 blocks:
    *** 2012-08-01 11:49:46.428
    WAIT #47499559136872: nam='PL/SQL lock timer' ela= 10002694 duration=0 p2=0 p3=0 obj#=-1 tim=134383618642829
    pin kdswh11: kdst_fetch dba 0x10a620d:1 time 1049052211
    pin kdswh11: kdst_fetch dba 0x10a620e:1 time 1049052264
    pin kdswh11: kdst_fetch dba 0x10a620f:1 time 1049052299
    pin kdswh11: kdst_fetch dba 0x10a6211:1 time 1049052332
    pin kdswh11: kdst_fetch dba 0x10a6212:1 time 1049052364
    pin kdswh11: kdst_fetch dba 0x10a6213:1 time 1049052398
    pin kdswh11: kdst_fetch dba 0x10a6214:1 time 1049052430
    pin kdswh11: kdst_fetch dba 0x10a6215:1 time 1049052462
    pin kdswh11: kdst_fetch dba 0x10a6216:1 time 1049052494
    pin kdswh11: kdst_fetch dba 0x10a6217:1 time 1049052525
    FETCH #47499559126280:c=0,e=371,p=0,cr=10,cu=0,mis=0,r=10,dep=1,og=1,plh=2568761675,tim=1343836186428807
    Again 10 pins, then the fetch:
    WAIT #47499559136872: nam='PL/SQL lock timer' ela= 10002864 duration=0 p2=0 p3=0 obj#=-1 tim=1343836196431754
    pin kdswh11: kdst_fetch dba 0x10a6218:1 time 1059055662
    pin kdswh11: kdst_fetch dba 0x10a6219:1 time 1059055714
    pin kdswh11: kdst_fetch dba 0x10a621a:1 time 1059055748
    pin kdswh11: kdst_fetch dba 0x10a621b:1 time 1059055781
    pin kdswh11: kdst_fetch dba 0x10a621c:1 time 1059055815
    pin kdswh11: kdst_fetch dba 0x10a621d:1 time 1059055848
    pin kdswh11: kdst_fetch dba 0x10a621e:1 time 1059055883
    pin kdswh11: kdst_fetch dba 0x10a621f:1 time 1059055915
    pin kdswh11: kdst_fetch dba 0x10a6221:1 time 1059055953
    pin kdswh11: kdst_fetch dba 0x10a6222:1 time 1059055992
    FETCH #47499559126280:c=0,e=385,p=0,cr=10,cu=0,mis=0,r=10,dep=1,og=1,plh=2568761675,tim=1343836196432274
    And so on, with the data block addresses climbing, until the last batch:
    ............................
    WAIT #47499559136872: nam='PL/SQL lock timer' ela= 10002933 duration=0 p2=0 p3=0 obj#=-1 tim=1343836366495589
    pin kdswh11: kdst_fetch dba 0x10a62f6:1 time 1229119497
    pin kdswh11: kdst_fetch dba 0x10a62f7:1 time 1229119545
    pin kdswh11: kdst_fetch dba 0x10a62f8:1 time 1229119576
    pin kdswh11: kdst_fetch dba 0x10a62f9:1 time 1229119610
    pin kdswh11: kdst_fetch dba 0x10a62fa:1 time 1229119644
    pin kdswh11: kdst_fetch dba 0x10a62fb:1 time 1229119671
    pin kdswh11: kdst_fetch dba 0x10a62fc:1 time 1229119703
    pin kdswh11: kdst_fetch dba 0x10a62fd:1 time 1229119730
    pin kdswh11: kdst_fetch dba 0x10a62fe:1 time 1229119760
    pin kdswh11: kdst_fetch dba 0x10a62ff:1 time 1229119787
    FETCH #47499559126280:c=0,e=340,p=0,cr=10,cu=0,mis=0,r=10,dep=1,og=1,plh=2568761675,tim=1343836366496067
    The first data block pinned is dba 0x10a6203 and the last is dba 0x10a62ff. These are exactly the first and last blocks of MACLEAN, which we can verify with getbfno, a helper function that decodes a data block address into its file# and block#:
    CREATE OR REPLACE FUNCTION getbfno (p_dba IN VARCHAR2)
       RETURN VARCHAR2
    IS
       l_str VARCHAR2 (255) DEFAULT NULL;
       l_fno VARCHAR2 (15);
       l_bno VARCHAR2 (15);
    BEGIN
       l_fno := DBMS_UTILITY.data_block_address_file (TO_NUMBER (LTRIM (p_dba, '0x'), 'xxxxxxxx'));
       l_bno := DBMS_UTILITY.data_block_address_block (TO_NUMBER (LTRIM (p_dba, '0x'), 'xxxxxxxx'));
       l_str := 'datafile# is:' || l_fno || CHR (10)
             || 'datablock is:' || l_bno || CHR (10)
             || 'dump command:alter system dump datafile ' || l_fno || ' block ' || l_bno || ';';
       RETURN l_str;
    END;
    /
    Function created.

    SQL> select getbfno('0x10a6203') from dual;
    GETBFNO('0X10A6203')
    --------------------------------------------------------------------------------
    datafile# is:4
    datablock is:680451
    dump command:alter system dump datafile 4 block 680451;

    SQL> select getbfno('0x10a62ff') from dual;
    GETBFNO('0X10A62FF')
    --------------------------------------------------------------------------------
    datafile# is:4
    datablock is:680703
    dump command:alter system dump datafile 4 block 680703;

    SQL> select dbms_rowid.rowid_block_number(min(rowid)), dbms_rowid.rowid_relative_fno(min(rowid)) from maclean;
    680451    4
    SQL> select dbms_rowid.rowid_block_number(max(rowid)), dbms_rowid.rowid_relative_fno(max(rowid)) from maclean;
    680703    4
    To sum up, three conclusions:
    1. When a cursor is OPENed, PL/SQL parses the SQL statement and records the snapshot SCN as of the OPEN; Oracle defers all data access until FETCH, using that snapshot SCN for read consistency.
    2. Each FETCH reads (pins) only the blocks needed to satisfy the rows it returns.
    3. OPEN CURSOR followed by FETCH BULK COLLECT therefore does not pre-execute the query and cache the entire result set.

    Read the article

  • java.sql.SQLException: Parameter number X is not an OUT parameter

    - by Frederik
    Hi guys, I'm struggling with getting the result OUT variable from a MySQL stored procedure. I get the following error:
    java.sql.SQLException: Parameter number 3 is not an OUT parameter
    The stored procedure looks like this:
    CREATE DEFINER=`cv_admin`@`localhost` PROCEDURE `CheckGameEligibility`(
        IN gID INT(10),
        IN uID INT(10),
        OUT result TINYINT(1)
    )
    BEGIN
        # Do lots of stuff, then eventually:
        SET result = 1;
    END
    My Java function takes an array of strings* and creates the CallableStatement object dynamically:
    public static int callAndReturnResult( String sql, String[] values ) {
        int out = 0;
        try {
            // construct the SQL. Creates: CheckGameEligibility(?, ?, ?)
            sql += "(";
            for( int i = 0; i < values.length; i++ ) {
                sql += "?, ";
            }
            sql += "?)";
            System.out.println( "callAndReturnResult(" + sql + "): constructed SQL: " + sql );
            // Then the statement
            CallableStatement cstmt = DB.prepareCall( sql );
            for( int i = 0; i < values.length; i++ ) {
                System.out.println( "  " + (i+1) + ": " + values[ i ] );
                cstmt.setString( i+1, values[ i ] );
            }
            System.out.println( "  " + (values.length+1) + ": ? (OUT)" );
            cstmt.registerOutParameter( values.length + 1, Types.TINYINT );
            cstmt.execute();
            out = cstmt.getInt( values.length );
            cstmt.close();
        } catch( Exception e ) {
            System.out.println( "*** db trouble: callAndReturnResult(" + sql + " failed: " + e );
            e.printStackTrace();
        }
        return out;
    }
    *) I suppose I should be using an int array instead of a string array, but it doesn't seem to be what the error message was about. Anyway, here's the output it generates:
    callAndReturnResult(CheckGameEligibility(?, ?, ?)): constructed SQL: CheckGameEligibility(?, ?, ?)
      1: 57
      2: 29
      3: ? (OUT)
    *** db trouble: callAndReturnResult(CheckGameEligibility(?, ?, ?) failed: java.sql.SQLException: Parameter number 3 is not an OUT parameter
      at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1075)
      at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:989)
      at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:984)
      at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:929)
      at com.mysql.jdbc.CallableStatement.checkIsOutputParam(CallableStatement.java:692)
      at com.mysql.jdbc.CallableStatement.registerOutParameter(CallableStatement.java:1847)
      at org.apache.commons.dbcp.DelegatingCallableStatement.registerOutParameter(DelegatingCallableStatement.java:92)
      at Tools.callAndReturnResult(Tools.java:156)
    Any ideas what might be the problem? :) Thanks in advance!
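    One way to take JDBC out of the equation (my suggestion, not part of the original question) is to exercise the OUT parameter directly from the mysql client; if this works, the problem lies in how the CallableStatement is built rather than in the procedure itself:
        -- Call the procedure with a user variable bound to the OUT parameter.
        CALL CheckGameEligibility(57, 29, @result);
        SELECT @result;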

    Read the article

  • Use Google Calendar UI but showing only filtered events

    - by Edwin
    I have just started using the Google Calendar API (using the Python client). I'm basically developing a web app for a school with Django. What I'd like to achieve is something like this:
    To keep things simple for now, I have one Google account, and all events will be created in the calendar under that account (this is the school calendar). The calendar will be made public. When a class is created by a teacher, the class schedule is automatically added as an event in the Google Calendar. When a student logs in, he can see the school calendar, showing only the schedules of the classes he is registered in.
    I think I can filter the calendar feeds to show only class schedules that a student is registered in using the Google Data API. The problem is: how can I display Google Calendar on my web app, using the Google Calendar UI, showing only those filtered events? I can embed Google Calendar with the provided HTML snippet, but I can't control/filter events that way (i.e. all events in the school calendar will be displayed). Or perhaps I'm missing something? I read the Data API guide and the Publishing tool doc but I can't seem to find this information. Thanks in advance!

    Read the article

  • How to create a semi transparent window in WPF that allows mouse events to pass through

    - by RMK
    I am trying to create an effect similar to the Lights Out / Lights Dim feature in Adobe Lightroom (http://www.youtube.com/watch?v=87hNd3vaENE), except in WPF. What I tried was to create another window on top of my existing window, make it transparent and put a semi-transparent Path geometry on it. But I want mouse events to be able to pass through this semi-transparent window (on to windows below). This is a simplified version of what I have:
    <Window x:Class="LightsOut.MaskWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            AllowsTransparency="True" WindowStyle="None" ShowInTaskbar="False"
            Topmost="True" Background="Transparent">
        <Grid>
            <Button HorizontalAlignment="Left" Height="20" Width="60">click</Button>
            <Path IsHitTestVisible="False" Stroke="Black" Fill="Black" Opacity="0.3">
                <Path.Data>
                    <RectangleGeometry Rect="0,0,1000,1000" />
                </Path.Data>
            </Path>
        </Grid>
    </Window>
    The window is fully transparent, so in places where the Path doesn't cover, mouse events pass right through. So far so good. IsHitTestVisible is set to false on the Path object, so mouse events will pass through it to other controls on the same form (i.e. you can click on the Button, because it is on the same form). But mouse events won't pass through the Path object onto windows that are below it. Any ideas? Or better ways to solve this problem? Thanks.

    Read the article

  • Getting the eventargs of registered events

    - by Bjorn Vdkerckhove
    I'm new to maps and OpenLayers, but I'm playing around with OpenLayers because I'll need map functionality in my next project. The map is an image (because it's a drawn map of a medieval town). I found how I can register events, and it's working. But the problem is that the "eventargs" do not work as in the examples I found. In one of the examples they get the x and y values after the user pans, like this:
    map.events.register('moveend', map, function (e) { alert(e.xy); });
    If I try this in Visual Studio, e doesn't have an 'xy' property. What am I missing? This is the code I have right now:
    <script type="text/javascript">
        var map, layer;
        function init() {
            var windowHeight = $(window).height();
            var windowWidth = $(window).width();
            var mapdiv = $('#map');
            mapdiv.css({ width: windowWidth + 'px', height: windowHeight + 'px' });
            map = new OpenLayers.Map('map', { maxResolution: 1000 });
            layer = new OpenLayers.Layer.Image(
                'Globe ESA',
                '[url]',
                new OpenLayers.Bounds(-180.0, -12333.5, 21755.5, 90.0),
                new OpenLayers.Size(windowWidth, windowHeight),
                { numZoomLevels: 100 });
            map.addLayer(layer);
            nav = new OpenLayers.Control.Navigation();
            map.addControl(nav);
            // events test
            map.events.register('moveend', map, function (e) { alert(e.xy); });
            map.zoomToMaxExtent();
        }
    </script>
    In the OpenLayers examples they don't use the eventargs, but there must be a way to get the zoom level, or the x and y after panning, right? Thank you!

    Read the article

  • Management and Monitoring Tools for Windows Azure

    - by BuckWoody
    With such a large platform, Windows Azure has a lot of moving parts. We've done our best to keep the interface as simple as possible, while giving you the most control and visibility we can. However, as with most Microsoft products, there are multiple ways to do something - and I've always found that to be a good strength. Depending on the situation, I might want a graphical interface, a command-line interface, or just an API so I can incorporate the management into my own tools, or have third-party companies write other tools. While by no means exhaustive, I thought I might put together a quick list of a few tools you can use to manage and monitor Windows Azure components, from our IaaS, SaaS and PaaS offerings. Some of the products focus on one area more than another, but all are available today. I'll try to maintain this list to keep it current, but make sure you check the date of this post's update - if it's more than six months old, it's most likely out of date. Things move fast in the cloud.
    The Windows Azure Management Portal
    The primary tool for managing Windows Azure is our portal - most everything you need is there, from creating new services to querying a database. There are two versions as of this writing - a Silverlight client version, and a newer HTML5 version. The latter is being updated constantly to reach parity with the Silverlight client. There's a balance in this portal between simplicity and power - we're following the "less is more" approach, with increasing levels of detail as you work through the portal, rather than overwhelming you with a single, long "more is more" page. You can find the Portal here: http://windowsazure.com (then click "Log In" and then "Portal")
    Windows Azure Management API
    You can also use programming tools to either write your own interface, or simply provide management functions directly within your solution. You have two options - you can use the more universal REST APIs, which are a bit more complex but work with any system that can write to them, or the more approachable .NET API calls in code.
    - Reference for the APIs: http://msdn.microsoft.com/en-us/library/windowsazure/ee460799.aspx
    - All class libraries, for each part of Windows Azure: http://msdn.microsoft.com/en-us/library/ee393295.aspx
    PowerShell Command-lets
    PowerShell is one of the most powerful scripting languages I've used with Windows - and it's baked into all of our products. When you need to work with multiple servers, scripting is really the only way to go, and the Windows Azure PowerShell Command-Lets allow you to work across most any part of the platform - and can even be used within the services themselves. You can do everything with them, from creating a new IaaS, PaaS or SaaS service, to controlling them and even working with security and more.
    - More about the Command-Lets: http://wappowershell.codeplex.com/documentation (older link, still works, will point you to the new ones as well)
    - Command-line utilities for other operating systems: https://www.windowsazure.com/en-us/manage/downloads/
    - Video walkthrough of using the Command-Lets: http://channel9.msdn.com/Events/BUILD/BUILD2011/SAC-859T
    System Center
    System Center is actually a suite of graphical tools you can use to manage, deploy, control, monitor and tune software from Microsoft and even other platforms. This will be the primary tool we'll recommend for managing a hybrid or contiguous management process - and as time goes on you'll see more and more features put into System Center for the entire Windows Azure suite of products. You can find the Management Pack and README for it here: http://www.microsoft.com/en-us/download/details.aspx?id=11324
    SQL Server Management Studio / Data Tools / Visual Studio
    SQL Server has two built-in management and development tools, and since version 2008 R2 you can use them to manage Windows Azure Databases. Visual Studio also lets you connect to and manage portions of Windows Azure as well as Windows Azure Databases.
    - More about Visual Studio: http://msdn.microsoft.com/en-us/library/windowsazure/ee405484
    - More about the SQL tools: http://msdn.microsoft.com/en-us/library/windowsazure/ee621784.aspx
    Vendor-Provided Tools
    Microsoft does not suggest or endorse a specific third-party product. We do, however, use them, and see lots of other customers use them. You can browse to these sites to learn more, and chat with their folks directly on how they support Windows Azure.
    - Cerebrata: Tools for managing from the command line, graphical diagnostics, graphical storage management - http://www.cerebrata.com/
    - Quest Cloud Tools: Monitoring, storage management, and costing tools - http://communities.quest.com/community/cloud-tools
    - Paraleap: Monitoring tool - http://www.paraleap.com/AzureWatch
    - Cloudgraphs: Monitoring tool - http://www.cloudgraphs.com/
    - Opstera: Monitoring for Windows Azure and a scale-out pattern manager - http://www.opstera.com/products/Azureops/
    - Compuware: SaaS performance monitoring, load testing - http://www.compuware.com/application-performance-management/gomez-apm-products.html
    - SOASTA: Penetration and security testing - http://www.soasta.com/cloudtest/enterprise/
    - LoadStorm: Load-testing tool - http://loadstorm.com/windows-azure
    Open-Source Tools
    This is probably the most specific set of tools, and the list I'll have to maintain most often. Smaller projects have a way of coming and going, so I'll try to make sure this list is current.
    - Windows Azure MMC (I actually use this one a lot): http://wapmmc.codeplex.com/
    - Windows Azure Diagnostics Monitor: http://archive.msdn.microsoft.com/wazdmon
    - Azure Application Monitor: http://azuremonitor.codeplex.com/
    - Azure Web Log: http://www.xentrik.net/software/azure_web_log.html
    - Cloud Ninja: Multi-tenant billing and performance monitor - http://cnmb.codeplex.com/
    - Cloud Samurai: Multi-tenant management - http://cloudsamurai.codeplex.com/
    If you have additions to this list, please post them as a comment and I'll research and then add them. Thanks!

    Read the article

  • SQL Server Table Polling by Multiple Subscribers

    - by Daniel Hester
    Background
    Designing stored procedures that are safe for multiple subscribers (to call simultaneously) can be challenging. For example, let's say that you want multiple worker processes to poll a shared work queue that's encapsulated as a SQL table. This is a common scenario, and through experience you'll find that you want to use table hints to prevent unwanted locking when performing simultaneous queries on the same table.
    There are three table hints to consider: NOLOCK, READPAST and UPDLOCK. Both the NOLOCK and READPAST table hints allow you to SELECT from a table without placing a LOCK on that table. However, SELECTs with the READPAST hint will ignore any records that are locked due to being updated/inserted (or otherwise "dirty"), whereas a SELECT with NOLOCK ignores all locks, including dirty reads.
    For the initial update of the flag (that marks the record as available for subscription) I don't use the NOLOCK table hint, because I want to be sensitive to the "active" records in the table and I want to exclude them. I use an update lock (UPDLOCK) in conjunction with a WHERE clause that uses a sub-select with a READPAST table hint, in order to explicitly lock the records I'm updating (UPDLOCK) but not place a lock on the table when selecting the records that I'm going to update (READPAST).
    UPDATEs should be allowed to lock the rows affected, because we're probably changing a flag on a record so that it is not included in a SELECT from another subscriber. On the UPDATE statement we should explicitly use UPDLOCK to guard against lock escalation. A SELECT to check for the next record(s) to process can result in a shared read lock being held by more than one subscriber polling the shared work queue (SQL table). It is expected that more than one worker process (or server) might try to process the same new record(s) at the same time. When each process then tries to obtain the update lock, none of them can, because another process has a shared read lock in place. Thus, without the UPDLOCK hint the result would be a lock escalation deadlock; with the UPDLOCK hint this condition is mitigated. Note that using the READPAST table hint requires that you also set the ISOLATION LEVEL of the transaction to READ COMMITTED (rather than the default of SERIALIZABLE).
    Guidance
    In the stored procedure that returns records to the multiple subscribers:
    - Perform the UPDATE first. Change the flag that makes the record available to subscribers. Additionally, you may want to update a LastUpdated datetime field in order to be able to check for records that "got stuck" in an intermediate state, or for other auditing purposes.
    - In the UPDATE statement, use the (UPDLOCK) table hint to prevent lock escalation.
    - In the UPDATE statement, also use a WHERE clause that uses a sub-select with a (READPAST) table hint to select the records that you're going to update.
    - In the UPDATE statement, use the OUTPUT clause in conjunction with a temporary table to isolate the record(s) that you've just updated and intend to return to the subscriber. This is the fastest way to update the record(s) and to get the records' identifiers within the same operation.
    - Finally, do a set-based SELECT on the main table (using the temporary table to identify the records in the set) with either a READPAST or NOLOCK table hint. Use NOLOCK if there are other processes (besides the multiple subscribers) that might be changing the data that you want to return to the multiple subscribers; or use READPAST if you're sure there are no other processes (besides the multiple subscribers) that might be updating column data in the table for other purposes (e.g. changes to a person's last name). NOLOCK is generally the better fit in this part of the scenario.
    See the following as an example:
    CREATE PROCEDURE [dbo].[usp_NewCustomersSelect]
    AS
    BEGIN
        -- OVERRIDE THE DEFAULT ISOLATION LEVEL
        SET TRANSACTION ISOLATION LEVEL READ COMMITTED
        -- SET NOCOUNT ON
        SET NOCOUNT ON
        -- DECLARE TEMP TABLE
        -- Note that this example uses CustomerId as an identifier;
        -- you could just use the Identity column Id if that's all you need.
        DECLARE @CustomersTempTable TABLE ( CustomerId NVARCHAR(255) )
        -- PERFORM UPDATE FIRST
        -- [Customers] is the name of the table
        -- [Id] is the Identity column on the table
        -- [CustomerId] is the business document key used to identify the
        -- record globally, i.e. in other systems or across SQL tables
        -- [Status] is an INT or BIT field (if the status is a binary state)
        -- [LastUpdated] is a datetime field used to record the time of the
        -- last update
        UPDATE [Customers] WITH (UPDLOCK)
        SET [Status] = 1, [LastUpdated] = GETDATE()
        OUTPUT [INSERTED].[CustomerId] INTO @CustomersTempTable
        WHERE ([Id] IN (SELECT TOP 100 [Id]
                        FROM [Customers] WITH (READPAST)
                        WHERE ([Status] = 0)
                        ORDER BY [Id] ASC))
        -- PERFORM SELECT FROM ENTITY TABLE
        SELECT [C].[CustomerId], [C].[FirstName], [C].[LastName],
               [C].[Address1], [C].[Address2], [C].[City], [C].[State],
               [C].[Zip], [C].[ShippingMethod], [C].[Id]
        FROM [Customers] AS [C] WITH (NOLOCK), @CustomersTempTable AS [TEMP]
        WHERE ([C].[CustomerId] = [TEMP].[CustomerId])
    END
    In a system that has been designed to have multiple status values for records that need to be processed in the work queue, it is necessary to have a "watchdog" process by which "stale" records in intermediate states (such as "In Progress") are detected, i.e. a [Status] of 0 = New or Unprocessed; a [Status] of 1 = In Progress; a [Status] of 2 = Processed; etc. Thus, if you have a business rule that states that the application should only process new records if all of the old records have been processed successfully (or marked as an error), then it will be necessary to build a monitoring process to detect stalled or stale records in the work queue - hence the use of the LastUpdated column in the example above. The Status field along with the LastUpdated field can be used as the criteria to detect stalled/stale records. It is possible to put this watchdog logic into the stored procedure above, but I would recommend making it a separate monitoring function. In writing the stored procedure that checks for stale records, I would recommend using the same kind of lock semantics as suggested above. The example below looks for records that have been in the "In Progress" state ([Status] = 1) for more than 60 seconds:
    CREATE PROCEDURE [dbo].[usp_NewCustomersWatchDog]
    AS
    BEGIN
        -- TO OVERRIDE THE DEFAULT ISOLATION LEVEL
        SET TRANSACTION ISOLATION LEVEL READ COMMITTED
        -- SET NOCOUNT ON
        SET NOCOUNT ON
        DECLARE @MaxWait int;
        SET @MaxWait = 60
        IF EXISTS (SELECT 1 FROM [dbo].[Customers] WITH (READPAST)
                   WHERE ([Status] = 1)
                   AND (DATEDIFF(s, [LastUpdated], GETDATE()) > @MaxWait))
        BEGIN
            SELECT 1 AS [IsWatchDogError]
        END
        ELSE
        BEGIN
            SELECT 0 AS [IsWatchDogError]
        END
    END
    Downloads
    The zip file below contains two SQL scripts: one to create a sample database with the above stored procedures, and one to populate the sample database with 10,000 sample records. I am very grateful to Red Gate Software for their excellent SQL Data Generator tool, which enabled me to create these sample records in no time at all.
    References
    http://msdn.microsoft.com/en-us/library/ms187373.aspx
    http://www.techrepublic.com/article/using-nolock-and-readpast-table-hints-in-sql-server/6185492
    http://geekswithblogs.net/gwiele/archive/2004/11/25/15974.aspx
    http://grounding.co.za/blogs/romiko/archive/2009/03/09/biztalk-sql-receive-location-deadlocks-dirty-reads-and-isolation-levels.aspx
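    A quick usage sketch of mine, using the object names from the scripts above: each worker simply calls the subscriber procedure, and a monitoring job polls the watchdog:
        -- Safe to run from many workers at once; the UPDLOCK/READPAST
        -- hints inside prevent two workers claiming the same rows.
        EXEC [dbo].[usp_NewCustomersSelect];
        -- Returns IsWatchDogError = 1 if anything has sat "In Progress"
        -- for more than 60 seconds.
        EXEC [dbo].[usp_NewCustomersWatchDog];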

    Read the article

  • 24 Hours of PASS – first reflections

    - by Rob Farley
    A few days after the end of 24HOP, I find myself reflecting on it. I'm still waiting on most of the information. I want to be able to discover things like which countries were represented in each of the sessions, and things like that. So far, I have the feedback scores and the numbers of attendees. The data was provided in a PDF, so while I wait for it to appear in a more flexible format, I've pushed the attendee numbers for the 24 sessions into Excel.
    This chart shows the numbers by time. Remember that we started at midnight GMT, which was 10:30am in my part of the world and 8pm in New York. It's probably no surprise that numbers drooped a bit at the start, stayed comparatively low, and then grew as the larger populations of the English-speaking world woke up. I remember last time 24HOP ran for 24 hours straight, there were quite a few sessions with fewer than 100 attendees. None this time though. We got close, but even when it was 4am in New York, 8am in London and 7pm in Sydney (which would have to be the worst slot for attracting people), we still had over 100 people tuning in. As expected, numbers grew as the UK woke up, and even more so as the US did, with numbers peaking at 755 for the "3pm in New York" session on SQL Server Data Tools. Kendra Little almost reached those numbers too, and certainly contributed the biggest 'spike' on the chart with her session five hours earlier.
    Of all the sessions, Kendra had the highest proportion of 'Excellent's for the "Overall Evaluation of the session" question, and those of you who saw her probably won't be surprised by that. Kendra had one of the best-ranked sessions from the 24HOP event this time last year (narrowly missing out on being top 3), and she has produced a lot of good video content since then.
    The reports indicate that there were nearly 8.5 thousand attendees across the 24 sessions, averaging over 350 at each one. I'm looking forward to seeing how many different people that was, although I do know that Wil Sisney managed to attend every single one (if you did too, please let me know). Wil even moderated one of the sessions, which made his feat even greater. Thanks Wil.
    I also want to send massive thanks to Dave Dustin. Dave probably would have attended all of the sessions, if it weren't for a power outage that forced him to take a break. He was also a moderator, and it was during this session that he earned special praise. Part way into the session he was moderating, the speaker lost connectivity and couldn't get back for about fifteen minutes. That's an incredibly long time when you're in a live presentation. There were over 200 people tuned in at the time, and I'm sure Dave was as stressed as I was to have a speaker disappear. I started chasing down a phone number for the speaker, while Dave spoke to the audience. And he did brilliantly. He started answering questions, and kept doing that until the speaker came back. Bear in mind that Dave hadn't expected to give a presentation on that topic (or any other), and was simply drawing on his SQL expertise to get him through. Also consider that this was between midnight and 1am in Dave's part of the world (Auckland, NZ). I would've been expecting just to welcome people, monitor questions, probably read some out, and in general help make things run smoothly. He went far beyond the call of duty, and if I had a medal to give him, he'd definitely be getting one.
    On the whole, I think this 24HOP was a success. We tried a different platform, and I think for the most part it was a popular move. We didn't ask the question "Was this better than LiveMeeting?", but we did get a number of people telling us that they thought the platform was very good. Some people have told me I get a chance to put my feet up now that this is over. As I'm also co-ordinating a tour of SQLSaturday events across the Australia/New Zealand region, I don't quite get to take that much of a break (plus, there's the little thing of squeezing in seven SQL 2012 exams over the next 2.5 weeks). But I am pleased to be reflecting on this event rather than anticipating it. There were a number of factors that could have gone badly, but on the whole I'm pleased about how it went. A massive thanks to everyone involved.
    If you're reading this and thinking you wish you could've tuned in more, don't worry - they were all recorded and you'll be able to watch them on demand very soon. But as well as that, PASS has a stream of content produced by the Virtual Chapters, so you can keep learning from the comfort of your desk all year round. More info on them at sqlpass.org, of course.

    Read the article

  • [SQLServer JDBC Driver][SQLServer]Could not find stored procedure 'master..xp_jdbc_open2'.

    - by Vijaya Moderator -Oracle
    When connecting to a MS SQL Server database via a WebLogic datasource using the XA JDBC driver, the following error is thrown:
    <Jun 3, 2014 5:16:49 AM PDT> <Error> <Console> <BEA-240003> <Console encountered the following error
    java.sql.SQLException: [FMWGEN][SQLServer JDBC Driver][SQLServer]Could not find stored procedure 'master..xp_jdbc_open2'.
        at weblogic.jdbc.sqlserverbase.ddb_.b(Unknown Source)
        at weblogic.jdbc.sqlserverbase.ddb_.a(Unknown Source)
        at weblogic.jdbc.sqlserverbase.ddb9.b(Unknown Source)
        at weblogic.jdbc.sqlserverbase.ddb9.a(Unknown Source)
        at weblogic.jdbc.sqlserver.tds.ddr.v(Unknown Source)
        at weblogic.jdbc.sqlserver.tds.ddr.a(Unknown Source)
        at weblogic.jdbc.sqlserver.tds.ddq.a(Unknown Source)
        at weblogic.jdbc.sqlserver.tds.ddr.a(Unknown Source)
        at weblogic.jdbc.sqlserver.ddj.m(Unknown Source)
        at weblogic.jdbc.sqlserverbase.ddel.e(Unknown Source)
        at weblogic.jdbc.sqlserverbase.ddel.a(Unknown Source)
    The cause of the issue is that MS SQL Server was not installed with the stored procedures that enable JTA/XA.
    Solution
    To connect to SQL Server via the XA driver from a WLS datasource, you need to install the stored procedures for JTA. To use JDBC distributed transactions through JTA, your system administrator should use the following procedure to install the Microsoft SQL Server JDBC XA procedures. This procedure must be repeated for each MS SQL Server installation that will be involved in a distributed transaction.
    To install stored procedures for JTA:
    1. Copy the appropriate sqljdbc.dll and instjdbc.sql files from the WL_HOME\server\lib directory to the SQL_Server_Root/bin directory of the MS SQL Server database server, where WL_HOME is the directory in which WebLogic Server is installed, typically c:\Oracle\Middleware\wlserver_10.x.
    Note: If you are installing stored procedures on a database server with multiple Microsoft SQL Server instances, each running SQL Server instance must be able to locate the sqljdbc.dll file. Therefore the sqljdbc.dll file needs to be anywhere on the global PATH or on the application-specific path. For the application-specific path, place the sqljdbc.dll file into the :\Program Files\Microsoft SQL Server\MSSQL$\Binn directory for each instance.
    2. From the database server, use the ISQL utility to run the instjdbc.sql script. As a precaution, have your system administrator back up the master database before running instjdbc.sql. At a command prompt, use the following syntax to run instjdbc.sql:
    ISQL -Usa -Psa_password -Sserver_name -ilocation\instjdbc.sql
    where:
    - sa_password is the password of the system administrator.
    - server_name is the name of the server on which SQL Server resides.
    - location is the full path to instjdbc.sql. (You copied this script to the SQL_Server_Root/bin directory in step 1.)
    The instjdbc.sql script generates many messages. In general, these messages can be ignored; however, the system administrator should scan the output for any messages that may indicate an execution error. The last message should indicate that instjdbc.sql ran successfully. The script fails when there is insufficient space available in the master database to store the JDBC XA procedures or to log changes to existing procedures.
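    To confirm the install worked, a quick check of my own (the exact set of procedures created by instjdbc.sql can vary by driver version, so treat the LIKE pattern as an assumption) that the JDBC XA extended procedures now exist in master:
        -- Before instjdbc.sql is run, this returns no rows - which is
        -- exactly the condition behind the error above.
        USE master;
        SELECT name FROM sys.objects WHERE name LIKE 'xp_jdbc%' ORDER BY name;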

    Read the article

  • Partner Webcast: Innovation in Products - October 1st, 2012 at 04:00 PM CET (03:00 PM GMT) Program

    - by Richard Lefebvre
    I am pleased to invite you to join the Innovations in Products webcast. Innovations in Products presents new functions and features of Oracle products, including sales positioning. The key objective of these webcasts is to inspire System Integrators' implementation personnel to conduct successful after-sales work in their customer projects. Innovations in Products is presented on the first Monday of each quarter after the billable day (4:00 to 5:00 PM CET). The webcast is intended for System Integrators' Implementation Certified Specialists, but it is open to other System Integrator personnel as well.

    First, two Oracle representatives will discuss Oracle's contribution to partners. Then you will see a product breakout session, followed by Q&A with Oracle experts. Each session lasts a maximum of 1 hour, and a Q&A document covering all questions and answers will be made available after the webcast.

    What are the benefits for partners?
    - Find out how Innovations in Products helps you improve your after-sales work
    - Discover new functions and features so you can enrich your customers' solutions
    - Learn more about Oracle products, especially their sales positioning
    - Hear the crucial questions raised by colleagues, and learn from their interests
    - Engage and present your questions to subject experts
    - Be inspired by the richness of Oracle's product portfolio – for your benefit and your customers'

    Note: Should you already be familiar with a specific product, choose another one; that way you expand your knowledge of the overall product portfolio. Some presentations contain product demonstrations, although they are not intended to be extremely detailed technical presentations.

    Product breakout sessions available on October 1st:
    - Fusion HCM Social Capabilities – enterprise social capabilities embedded in how you run your business. Anca Dumitru, HCM Presales Consultant, EMEA Presales Center. CLICK HERE to register.
    - Oracle Fusion Applications Security Concepts – overview. Alexandra Dan, Applications Technology Presales Consultant, EMEA Presales Centre. CLICK HERE to register.
    - Fusion Financials Overview, Focus on Fusion Payables – meeting the Payables challenges. Elena Nita, Senior ERP Sales Consultant, EMEA Presales Center. CLICK HERE to register.
    - Introduction to Oracle RightNow CX – empowering companies to engage directly with their customers through great Social, Web, Chat and Contact Center experiences. Cais Champsi, Presales Consultant, EMEA Presales Centre. CLICK HERE to register.
    - Oracle Endeca Information Discovery – product overview. Emma Palii, BI Sales Consultant, EMEA Presales Center. CLICK HERE to register.

    To access the 23 previously presented Applications Products presentations and 6 Public Sector Value Proposition presentations, please click here. You might want to bookmark the overall registration page, Innovations in Products October 1st, and the global event calendar page, events.oracle.com.

    Delivery format: Innovations in Products is a series of FREE prerecorded Oracle product presentations followed by Q&A, delivered over the web. Participants have the opportunity to submit questions during the webcast via chat, and subject matter experts provide verbal answers live. The program consists of several parallel prerecorded product breakout sessions, each lasting a maximum of 1 hour. You can also watch Innovations in Products afterwards, as its content will remain available online for the next 6–12 months.

    The next Innovations in Products webcasts will be presented on October 1st, 2012; January 14th, 2013; and April 8th, 2013.

    Note: Depending on local network bandwidth, please allow a few seconds for the presentations to download. You may want to refresh your screen by pressing F5. Duration: maximum 1 hour.

    For further information, please contact me, Markku Rouhiainen.

    Best regards
    Markku Rouhiainen
    Director, Applications Partner Enablement EMEA

    Read the article

  • A Case for Oracle Fusion Middleware by Lucas Jellema

    - by JuergenKress
    An in-depth look at the interaction of people, processes, and technologies in the transition to a service-oriented architecture.

    Author's Note: This article presents a profile of a fictitious organization, NOPERU. The story of NOPERU as told in this article is actually a collage of the events at some dozen organizations that I have been involved with over the past few years. None of these organizations sport all the characteristics of NOPERU – but all of them have gone through or are going through a similar transition as described here, and all aspects of this article were taken from real life at one or usually many of these organizations.

    Background: NOPERU (National Organization for Permits for Emissions and Resource Usage) is a public organization that continues to transform in terms of its business, organization and technology. Changing business requirements, new interaction channels, and increasing demands for more flexibility, faster throughput and lower costs drive these transformations, while technological evolution and new architecture patterns enable the change. NOPERU chose Oracle Fusion Middleware as the technology platform to implement the new architecture and required applications. This article takes a close look at NOPERU's journey from its origins in the early 1990s as a largely paper-based entity with regional databases and client-server Oracle Forms applications. Its upcoming business objectives are introduced: what is required of the organization and what the higher goals behind these requirements are. The architecture roadmap is described at a high level as well as drilled down to a service-oriented design. Based on the architecture roadmap and the business requirements, NOPERU went through a technology selection to determine the technology stack with which the future would be realized in terms of IT. The article discusses that selection and details the projects subsequently planned (and executed to date). The new architecture and technology, as well as the introduction of an Agile development method, have had substantial consequences for the IT organization, the processes and individual staff members. The approach NOPERU has adopted with regard to the people and the organization is portrayed. Finally, the article discusses many conclusions that NOPERU has drawn that may benefit itself and other organizations.

    Introducing NOPERU: NOPERU is a national organization charged with issuing permits for excessive emissions (i.e., carbon dioxide) and disproportionate usage of such resources as energy or water. Anyone – whether a commercial enterprise, government agency or private person – who emits or consumes more than what is considered "fair usage" requires such a permit. When someone builds an outdoor heated swimming pool, for example, or open-air terrace heating, such a permit needs to be obtained. When a company installs new, energy-intensive equipment, such as water boilers or deep freezers, it too needs to get a NOPERU permit. Government-sponsored projects at every level that involve consumption of large quantities of fresh water or production of high volumes of emissions must turn to NOPERU for a permit. Without the required license, any interested party can get a court to immediately put a stop to the disputed activity. Read the full article here.
SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: Lucas Jellema,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Enable Automatic Code First Migrations On SQL Database in Azure Web Sites

    - by Steve Michelotti
    Now that Azure supports .NET Framework 4.5, you can use all the latest and greatest available features. A common scenario is to be able to use Entity Framework Code First Migrations with a SQL Database in Azure. Prior to Code First Migrations, Entity Framework provided database initializers. While convenient for demos and prototypes, database initializers weren't useful for much beyond that because, if you delete and re-create your entire database when the schema changes, you lose all of your operational data. This is the void that Migrations are meant to fill. For example, if you add a column to your model, Migrations will alter the database to add the column rather than blowing away the entire database and re-creating it from scratch. Azure is becoming increasingly easier to use – especially with features like Azure Web Sites. Being able to use Entity Framework Migrations in Azure makes deployment easier than ever. In this blog post, I'll walk through enabling Automatic Code First Migrations on Azure. I'll use the Simple Membership provider for my example.

    First, we'll create a new Azure Web Site called "migrationstest", including creating a new SQL Database along with it. Next we'll go to the web site and download the publish profile. In the meantime, we've created a new MVC 4 website in Visual Studio 2012 using the "Internet Application" template. This template is automatically configured to use the Simple Membership provider. We'll do our initial publish to Azure by right-clicking our project and selecting "Publish…". From the "Publish Web" dialog, we'll import the publish profile that we downloaded in the previous step.

    Once the site is published, we'll just click the "Register" link from the default site. Since the AccountController is decorated with the [InitializeSimpleMembership] attribute, the initializer will be called and the initial database is created. We can verify this by connecting to our SQL Database on Azure with SQL Management Studio (after making sure that our local IP address is added to the list of Allowed IP Addresses in Azure). One interesting note is that these tables got created with the default Entity Framework initializer – which is to create the database if it doesn't already exist. However, our database did already exist! This is because there is a new feature of Entity Framework 5 where Code First will add tables to an existing database as long as the target database doesn't contain any of the tables from the model.

    At this point, it's time to enable Migrations. We'll open the Package Manager Console and execute the command:

    PM> Enable-Migrations -EnableAutomaticMigrations

    This will enable automatic migrations for our project. Because we used the "-EnableAutomaticMigrations" switch, it will create our Configuration class with a constructor that sets the AutomaticMigrationsEnabled property to true:

    public Configuration()
    {
        AutomaticMigrationsEnabled = true;
    }

    We'll now add our initial migration:

    PM> Add-Migration Initial

    This will create a migration class called "Initial" that contains the entire model. But we need to remove all of this code because our database already exists, so we are just left with empty Up() and Down() methods:

    public partial class Initial : DbMigration
    {
        public override void Up()
        {
        }

        public override void Down()
        {
        }
    }

    If we don't remove this code, we'll get an exception the first time we attempt to run migrations that tells us: "There is already an object named 'UserProfile' in the database". This blog post by Julie Lerman fully describes this scenario (i.e., enabling migrations on an existing database). Our next step is to add the Entity Framework initializer that will automatically use Migrations to update the database to the latest version. We will add these 2 lines of code to the Application_Start of the Global.asax:

    Database.SetInitializer(new MigrateDatabaseToLatestVersion<UsersContext, Configuration>());
    new UsersContext().Database.Initialize(false);

    Note the Initialize() call will force the initializer to run if it has not been run before. At this point, we can publish again to make sure everything is still working as we are expecting. This time we're going to specify in our publish profile that Code First Migrations should be executed. Once we have re-published, we can once again navigate to the Register page. At this point the database has not been changed, but Migrations is now enabled on our SQL Database in Azure. We can now customize our model. Let's add 2 new properties to the UserProfile class – Email and DateOfBirth:

    [Table("UserProfile")]
    public class UserProfile
    {
        [Key]
        [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
        public int UserId { get; set; }
        public string UserName { get; set; }
        public string Email { get; set; }
        public DateTime DateOfBirth { get; set; }
    }

    At this point, all we need to do is simply re-publish. We'll once again navigate to the Registration page and, because we had Automatic Migrations enabled, the database has been altered (*not* re-created) to add our 2 new columns. We can verify this by once again looking at SQL Management Studio. Automatic Migrations provide a quick and easy way to keep your database in sync with your model without the worry of having to re-create your entire database and lose data. With Azure Web Sites you can set up automatic deployment with Git or TFS and automate the entire process to make it dead simple.
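    To make the "altered, not re-created" point concrete: the automatic migration for the two new properties boils down to a couple of ALTER TABLE statements rather than a drop and re-create of the database. A rough sketch of the T-SQL the migration corresponds to in this case (the exact types, nullability and default values Entity Framework generates may differ from what your model produces):

    -- Approximate DDL issued by the automatic migration for the two new columns.
    -- Strings map to nvarchar(max), and a non-nullable DateTime column gets a
    -- default so that existing rows remain valid.
    ALTER TABLE [dbo].[UserProfile] ADD [Email] nvarchar(max) NULL;
    ALTER TABLE [dbo].[UserProfile] ADD [DateOfBirth] datetime NOT NULL
        DEFAULT '1900-01-01T00:00:00.000';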

    Read the article

  • Visio 2010 forward-engineer add-in for Office 2010

    - by Ryan Ternier
    I have been scouring the internet for ages trying to see if there was a usable add-on for Visio 2010 that could export SQL scripts. Microsoft stopped shipping that functionality with Visio after 2003 – which is a huge shame. Today I found an open source project from Alberto Ferrari. It's an add-in for Visio 2010 that allows you to generate SQL scripts from your DB diagram. It's still in beta, and the source is available. Check it out here: http://sqlblog.com/blogs/alberto_ferrari/archive/2010/04/16/visio-forward-engineer-addin-for-office-2010.aspx This saves me from having to do all my diagramming in SQL Server / VS 2010, and brings back much-needed functionality that had been lost.
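    For context, "forward engineering" here just means turning the diagram into DDL. A hypothetical sketch of the kind of script such an add-in emits for a simple parent/child diagram (the table and column names are illustrative only; the generated script depends entirely on your model):

    -- Illustrative forward-engineered DDL for a two-table diagram
    CREATE TABLE dbo.Customer (
        CustomerId int NOT NULL PRIMARY KEY,
        Name       nvarchar(100) NOT NULL
    );

    CREATE TABLE dbo.CustomerOrder (
        OrderId    int NOT NULL PRIMARY KEY,
        CustomerId int NOT NULL REFERENCES dbo.Customer (CustomerId),
        OrderDate  datetime NOT NULL
    );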

    Read the article

  • Passing touch events on to subviews

    - by Egil Jansson
    I have a view within a UIScrollView that loads an additional subview when the user presses a certain area. When this additional subview is visible, I want all touch events to be handled by it – and not by the scrollview. It seems like the first couple of events are handled by the subview, but then touchesCancelled is called and the scrollview takes over the touch detection. How can I make sure that the subview gets all the events as long as the movement activity is being performed on this view? This is my implementation of touchesMoved, which I thought would do the job:

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
    {
        // Take the first touch and find out which view it currently hits
        UITouch *touch = [[touches allObjects] objectAtIndex:0];
        CGPoint touchPt = [touch locationInView:self];
        UIView *hitView = [self hitTest:touchPt withEvent:event];
        UIView *mySubView = subviewCtrl.view;

        if (hitView == mySubView) {
            // Forward the move event to the subview
            [subviewCtrl.view touchesMoved:touches withEvent:event];
        } else {
            NSLog(@"Outside of view...");
        }
    }

    Read the article

  • Top Tweets SOA Partner Community – November 2011

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

    soacommunity SOA Community Dutch ACEs SOA Partner Community award celebration wp.me/p10C8u-i9
    OracleBPM Gauging Maturity of your BPM Strategy – part 1/2 bit.ly/vJE9UZ
    MagicChatzi Dutch ACE's and ACE Directors had a small party: achatzia.blogspot.com/2011/11/celebr…
    leonsmiers #Capgemini #Oracle #BPM Blog index bit.ly/tUYtvD #yam
    lucasjellema Blog post by my colleague Emiel on the AMIS blog: Timeouts in Oracle SOA Suite 11g – tinyurl.com/73amo3r
    biemond Solving __OAUX_GENXSD_.TOP.XSD with BPEL: When you use an external web service in combination with a BPEL servic… t.co/Gzzatzrr
    OracleBlogs Jumpstart Fusion Middleware projects with Oracle User Productivity Kit ow.ly/1fJMev
    cpurdy on Oracle Coherence data grid, its new RESTful APIs, and Oracle Service Bus (OSB): blogs.oracle.com/slc/entry/orac…
    Accenture Learn how Service-Oriented Architecture can help public service agencies solve legacy system issues. bit.ly/sTteM4 #SOA
    eelzinga Thanks for organising it Andreas! #soacommunity
    eelzinga Had a nice drink with the fellow Dutch Oracle ACE members for a little celebration of the SOA Community Partner Award. #soacommunity
    EmielP Wrote a blogpost about timeouts in the #Oracle #SOA Suite: bit.ly/uhUcrX
    OracleBlogs Processing Binary Data in SOA Suite 11g t.co/Tzd1xBsY
    OracleBlogs Finding the Value in SOA by Stephen Bennett t.co/9MMLJoLz
    OTNArchBeat SOA All the Time; Architects in AZ; Clearing Info Integration hurdles t.co/5viNj8ib
    OracleBlogs Demo: Business Transaction Management with SOA Management Pack ow.ly/1fFBv3
    OTNArchBeat SOA All the Time; Architects in AZ; Clearing Info Integration hurdles t.co/Dnfzo0PN
    oracletechnet Wikis.oracle.com lives
    leonsmiers A new #capgemini #oracle #blog, Measuring the Human Task activity in Oracle BPM bit.ly/uPan08 #yam @CapgeminiOracle
    OTNArchBeat 3 SOA business cases, explained in a 2-minute elevator speech | @JoeMcKendrick t.co/aYGNkZup
    OTNArchBeat Gartner, Inc. places Oracle SOA Governance in Magic Quadrant for SOA Governance Technologies t.co/bSG5cuTr
    Jphjulstad Red carpet to Oracle BPM – evita.no evita.no/ikbViewer/soa-…
    Oracle #Oracle Named a Leader in #SOA Governance Magic Quadrant by Leading Analyst Firm t.co/prnyGu2U
    soacommunity What presentations & topics do you like to see at the next SOA & BPM & Webcenter Community Forum early 2012? #soacommunity
    soacommunity Oracle BPM Suite 11g Handbook Released wp.me/p10C8u-hU
    OTNArchBeat SOA Development Virtual Developer Day (On Demand) | @soacommunity bit.ly/sqhQmX
    OracleBlogs SOA Development Virtual Developer Day (On Demand) t.co/MDrdnx0h
    OracleBlogs Specialized Partners Only! New Service to Promote Your Events t.co/qTgyEpY4
    biemond @stevendavelaar this is for you t.co/hInKCcfY it explains your sso problem
    soacommunity SOA Development Virtual Developer Day (on demand) t.co/flXPWk4R
    soacommunity IPT Swiss SOA Experts – thanks for the nice ink wp.me/p10C8u-i3
    soacommunity Enjoy #wjax specially the presentations from our #ACE @t_winterberg @myfear @AdamBien pic.twitter.com/m8VcBSG3
    OTNArchBeat Discounts on books, more, for Oracle Technology Network members bit.ly/vRxMfB
    OracleSOA Justify the ROI of SOA in 10 seconds…a pic is worth 1000 words bit.ly/roi_of_soa_img #oraclesoa #soa #oow11
    orclateamsoa A-Team SOA Blog: Case Management in BPM 11g – Mark Foster Oracle BPM 11g & Case Management I've seen… t.co/l5zb6pFr
    t_winterberg The next SIG #SOA is coming up: Dec 7 in Hamburg. New tooling and experiences around Oracle FMW, SOA, BPM… (cont) deck.ly/~YC57v
    OracleBlogs Continuous Integration for SOA/BPM ow.ly/1fsekI
    OracleBlogs BPM Suite 11g Handbook Released ow.ly/1frlzv
    lucasjellema Iterating over collection (array) in BPM (and dispatching jobs for entries in array): t.co/1SEhSvWv – subprocesses are the key.
    lucasjellema Useful tip from Mark Nelson: BPM API documentation (as well as Human Workflow Service) available: redstack.wordpress.com/2011/09/28/api…
    OTNArchBeat SOA, cloud: it's the architecture that matters | Joe McKendrick zd.net/tNCiTF
    orclateamsoa Building a job dispatcher in BPM -or- Iterating over collections in BPM ow.ly/1frbrz
    orclateamsoa Using the Database as a Policy Store for SOA 11g ow.ly/1frbrA
    OracleBPM Oracle launches Process Accelerators for BPM: t.co/XPEE61QL
    Jphjulstad Human-Centric BPM Selection Checklist t.co/3TZXZHLH
    OracleBlogs Fusion Middleware General Session at OOW 2011: Missed It? Read On… t.co/aU5JvM6K
    gschmutz Great! The product page of the OSB 11g Development Cookbook is now online: t.co/5Jfbe6Ng Looking forward to get it, u too?
    brhubart Oracle IT Architecture Essentials; Lightweight Composite Service Development with SCA and Spring; Cloud Migration ow.ly/7esNg
    eelzinga New blogpost: Oracle Service Bus, Generic fault handling, bit.ly/sGr4UL #osb #oracleservicebus

    For regular information on Oracle SOA Suite become a member in the SOA Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Technorati Tags: soacommunity,twitter,Oracle,SOA Community,Jürgen Kress,OPN

    Read the article
