Search Results

Search found 16971 results on 679 pages for 'blogs'.

Page 401 of 679

  • Oracle Open World - 30. September - 4. October 2012, San Francisco, USA

    - by Richard Lefebvre
    Organization for Oracle OpenWorld (30. September - 4. October 2012, San Francisco, USA) has started. Watch for further information in the coming weeks.

    Exhibition and Sponsorship Opportunities
    Exhibiting, sponsoring and advertising at Oracle OpenWorld 2012 is the best opportunity for Oracle partners to achieve critical marketing and sales objectives. As the world's preeminent Oracle conference, Oracle OpenWorld attracts influential users and decision-makers from customer organizations globally. Exhibition and sponsorship opportunities are filling up quickly, so partners should explore exhibition, branding and sponsorship opportunities now.

    Register NOW and Save - Super Saver Period Ends 30. March
    Register today to save $800 on your Oracle OpenWorld full conference pass. There will not be any cheaper prices available afterwards!

    Partners and Customers - Call for Papers Opens 14. March
    The Oracle OpenWorld Call for Papers will open Wednesday, 14. March. Speak your mind to the world's largest gathering of the most knowledgeable IT decision-makers, leading-edge developers, and advanced technologists. Don't delay - the Call for Papers closes at 11:59 PM PST on 9. April 2012.

    Read the article

  • Oracle GoldenGate: Knowledge Document Series Post #2

    - by Doug Reid
    For our second post in this series, the team would like to highlight the knowledge document "How-To: Oracle GoldenGate – Heartbeat Process to Monitor Lag and Performance". This knowledge document outlines a procedure to reliably measure lag between source and target systems through the use of 'heartbeat' tables. The basic idea is to have a table on the source system that gets updated at a predetermined interval. In your capture processes, you would capture the update from the heartbeat table. Using tokens, you would add some additional information to the heartbeat record so you can tell which extract process captured the update. This additional information is used downstream to calculate the real lag time between the source and target systems for a given extract, and by checking the last update time on the heartbeat table at the target you can also determine whether data has stopped flowing between the source and target. Click here to view the document.
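    To make the idea concrete, here is a minimal sketch of what the source-side setup might look like. The table name, columns, update interval, and scheduler job below are illustrative assumptions, not the exact objects prescribed in the knowledge document:

        -- Illustrative heartbeat table on the source system
        CREATE TABLE gg_heartbeat (
            src_db    VARCHAR2(30) NOT NULL,
            update_ts TIMESTAMP    NOT NULL,
            CONSTRAINT gg_heartbeat_pk PRIMARY KEY (src_db)
        );

        INSERT INTO gg_heartbeat VALUES ('SRCDB1', SYSTIMESTAMP);
        COMMIT;

        -- Touch the row on a fixed interval so Extract always captures a fresh record
        BEGIN
            DBMS_SCHEDULER.CREATE_JOB(
                job_name        => 'GG_HEARTBEAT_JOB',
                job_type        => 'PLSQL_BLOCK',
                job_action      => 'BEGIN UPDATE gg_heartbeat SET update_ts = SYSTIMESTAMP WHERE src_db = ''SRCDB1''; COMMIT; END;',
                repeat_interval => 'FREQ=SECONDLY;INTERVAL=60',
                enabled         => TRUE);
        END;
        /

    On the target side, the difference between update_ts (carried through the trail along with the extract-identifying tokens) and the time the row is applied gives the end-to-end lag for that path.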

    Read the article

  • JavaOne 2012 Conference Preview

    - by Janice J. Heiss
    A new article, by noted freelancer Steve Meloan, now up on otn/java, titled "JavaOne 2012 Conference Preview," looks ahead to the fast approaching JavaOne 2012 Conference, scheduled for September 30-October 4 in San Francisco. The Conference will celebrate and highlight one of the world's leading technologies. As Meloan states, "With 9 million Java developers worldwide, 5 billion Java cards in use, 3 billion mobile phones running Java, 1 billion Java downloads each year, and 100 percent of Blu-ray disk players and 97 percent of enterprise desktops running Java, Java is a technology that literally permeates our world."

    The 2012 JavaOne is organized under seven technical tracks:
        * Core Java Platform
        * Development Tools and Techniques
        * Emerging Languages on the JVM
        * Enterprise Service Architectures and the Cloud
        * Java EE Web Profile and Platform Technologies
        * Java ME, Java Card, Embedded, and Devices
        * JavaFX and Rich User Experiences

    Conference keynotes will lay out the Java roadmap. For the Sunday keynote, such Oracle luminaries as Cameron Purdy, Vice President of Development; Nandini Ramani, Vice President of Engineering, Java Client and Mobile Platforms; Richard Bair, Chief Architect, Client Java Platform; and Mark Reinhold, Chief Architect, Java Platform, will be presenting. For the Thursday IBM keynote, Jason McGee, Distinguished Engineer and Chief Architect for IBM PureApplication System, and John Duimovich, Java CTO and IBM Distinguished Engineer, will explore Java and IBM's cloud-based initiatives.

    All in all, the JavaOne 2012 Conference should be as exciting as ever. Link to the article here.

    Read the article

  • C#/.NET Little Wonders: Getting Caller Information

    - by James Michael Hare
    Originally posted on: http://geekswithblogs.net/BlackRabbitCoder/archive/2013/07/25/c.net-little-wonders-getting-caller-information.aspx

    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    There are times when it is desirable to know who called the method or property you are currently executing. Some applications of this could include logging libraries, or possibly even something more advanced that may serve up different objects depending on who called the method. In the past, we mostly relied on the System.Diagnostics namespace and its classes such as StackTrace and StackFrame to see who our caller was, but now in C# 5, we can also get much of this data at compile-time.

    Determining the caller using the stack

    One of the ways of doing this is to examine the call stack. The classes that allow you to examine the call stack have been around for a long time and can give you a very deep view of the calling chain all the way back to the beginning for the thread that has called you.

    You can get caller information by either instantiating the StackTrace class (which will give you the complete stack trace, much like you see when an exception is generated), or by using StackFrame which gets a single frame of the stack trace. Both involve examining the call stack, which is a non-trivial task, so care should be taken not to do this in a performance-intensive situation.

    For our simple example, let's say we are going to recreate the wheel and construct our own logging framework. Perhaps we wish to create a simple method Log which will log the string-ified form of an object and some information about the caller. We could easily do this as follows:

        static void Log(object message)
        {
            // frame 1 (frame 0 is this method), true for source info
            StackFrame frame = new StackFrame(1, true);
            var method = frame.GetMethod();
            var fileName = frame.GetFileName();
            var lineNumber = frame.GetFileLineNumber();

            // we'll just use a simple Console write for now
            Console.WriteLine("{0}({1}):{2} - {3}",
                fileName, lineNumber, method.Name, message);
        }

    So, what we are doing here is grabbing the 2nd stack frame (the 1st is our current method) using a 2nd argument of true to specify we want source information (if available), and then taking the information from the frame. This works fine, and if we tested it out by calling from a file such as this:

        1: // File c:\projects\test\CallerInfo\CallerInfo.cs
        2: public class CallerInfo
        3: {
        4:     static void Main() {
        5:         Log("Hello Logger!");
        6:     }
        7: }

    We'd see this:

        c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger!

    This works well, and in fact StackTrace and StackFrame are still the best ways to examine deeper into the call stack. But if you only want to get information on the caller of your method, there is another option…

    Determining the caller at compile-time

    In C# 5 (.NET 4.5), they added some attributes that can be supplied to optional parameters on a method to receive caller information. These attributes can only be applied to optional parameters with explicit defaults. Then, as the compiler determines who is calling your method with these attributes, it will fill in the values at compile-time.

    These are the currently supported attributes, available in the System.Runtime.CompilerServices namespace:

        CallerFilePathAttribute – The path and name of the file that is calling your method.
        CallerLineNumberAttribute – The line number in the file where your method is being called.
        CallerMemberNameAttribute – The member that is calling your method.

    So let's take a look at how our Log method would look using these attributes instead:

        static void Log(object message,
            [CallerMemberName] string memberName = "",
            [CallerFilePath] string fileName = "",
            [CallerLineNumber] int lineNumber = 0)
        {
            // we'll just use a simple Console write for now
            Console.WriteLine("{0}({1}):{2} - {3}",
                fileName, lineNumber, memberName, message);
        }

    Again, calling this from our sample Main would give us the same result:

        c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger!

    However, though this seems the same, there are a few key differences.

    First of all, there are only 3 supported attributes (at this time) that give you the file path, line number, and calling member. Thus, it does not give you as rich a detail as a StackFrame (which can give you the calling type as well, and deeper frames, for example). Also, these are supported through optional parameters, which means we could call our new Log method like this:

        // They're defaults, why not fill 'em in
        Log("My message.", "Some member", "Some file", -13);

    In addition, since these attributes require optional parameters, they cannot be used in properties, only in methods.

    These caveats aside, they do let you get similar information inside of methods at a much greater speed! How much greater? Well, let's crank through 1,000,000 iterations of each. Instead of logging to console, I'll return the formatted string length of each. Doing this, we get:

        Time for 1,000,000 iterations with StackTrace: 5096 ms
        Time for 1,000,000 iterations with Attributes: 196 ms

    So you see, using the attributes is much, much faster! Roughly 26x faster, in fact.

    Summary

    There are a few ways to get caller information for a method. The StackFrame allows you to get a comprehensive set of information spanning the whole call stack, but at a heavier cost. On the other hand, the attributes allow you to quickly get at caller information baked in at compile-time, but to do so you need to create optional parameters in your methods to support it.

    Technorati Tags: Little Wonders,CSharp,C#,.NET,StackFrame,CallStack,CallerFilePathAttribute,CallerLineNumberAttribute,CallerMemberName

    Read the article

  • ArchBeat Link-o-Rama for July 2, 2013

    - by Bob Rhubart
    One Week To Go: OTN Architect Day: Cloud Computing - July 9, 2013, Redwood Shores, CA
    The first OTN Architect Day event of 2013 happens in just one week, on Tuesday July 9 at the Oracle Conference Center in Redwood Shores, CA. Registration is free and you get three sessions by three experts on cloud computing in the real world — plus a panel Q&A for answers to all of your questions. Register now!

    Oracle Database 12c: Flashback Moving Forward | Lucas Jellema
    Oracle ACE Director Lucas Jellema's latest of several recent blog posts dealing with various aspects of the recently released Oracle Database 12c.

    Detroit, Embracing New Auto Technologies, Seeks App Builders
    This story from the New York Times paints a rosy picture indeed for app developers as the internet of things continues to evolve.

    Advanced View Criteria Implementation in ADF BC | Andrejus Baranovskis
    Oracle ACE Director Andrejus Baranovskis' post focuses on advanced declarative View Criteria features.

    JDeveloper: Showing a Popup when Selecting an af:selectOneRadio | Timo Hahn
    Oracle ACE Timo Hahn illustrates a use case in which a popup is displayed each time the user clicks on one of the radio buttons of a button group.

    Can Technology Innovation Save The New York Times?
    One of the standout keynotes from the recent QCon New York event, this presentation by New York Times Sr. VP/CIO Marc Frons and CTO/VP Rajiv Pant paints a detailed portrait of the complete transformation of an organization -- not just the IT. Enterprise architects will find this particularly interesting.

    Video: Meet Growing IT Demand for Databases with Private DBaaS
    Do you understand the difference between traditional database deployment and database as a service? If not, you'll want to check out this video, which includes an overview of Oracle Enterprise Manager's capabilities for rapid deployment of DBaaS.

    Webcast: Zero-Downtime Migration to Oracle Exadata Using Oracle GoldenGate: A Customer Case Study
    Presenters Alok Pareek (VP, Product Management/Development, Oracle Data Integration) and John F. Martin (CEO of Emerging Markets and CTO, IQNavigator) discuss how IQNavigator is using Oracle GoldenGate with Oracle Exadata.

    Free eBook: Building a Database Cloud for Dummies
    This free quick-reference guide, organized into six short chapters and supplemented with helpful illustrations, provides a clear overview of the cloud and step-by-step instructions on deploying database as a service. (Registration required.)

    Thought for the Day
    "My motto is: Live every day to the fullest – in moderation." — Lindsay Lohan (Born July 2, 1986)
    Source: brainyquote.com

    Read the article

  • October 2012 Chicago IT Architects Group Meeting Recap

    - by Tim Murphy
    It seemed very ironic that on the day we had a presentation on the architecture of building applications for Windows 8, the Surface tablet opened for pre-order. Tom Benton started the evening by enlightening the attendees on the user experience, for those who had not seen it yet. He even passed around his tablet from last year's Build conference for everyone to play with. This was followed with a tour of the capabilities and structures that make up a Windows Store app on Windows 8. Taking it to its conclusion, he rounded out the discussion by covering the certification and deployment process.

    As usual, it was great to see a lot of familiar faces last night. We are always looking for more people to join in our discussions. Stay tuned here for announcements of upcoming meetings and topics. Also, if you have a topic you would like to present or see presented, feel free to contact me through this blog.

    del.icio.us Tags: Chicago Information Technology Architects Group,CITAG,Windows 8,Windows Store,Tom Benton

    Read the article

  • Outside Operations in JD Edwards EnterpriseOne Manufacturing

    - by Amit Katariya
    Upcoming E1 Manufacturing webcast

    Date: March 30, 2010
    Time: 10:00 am MDT
    Product Family: JD Edwards EnterpriseOne Manufacturing

    Summary
    This one-hour session is recommended for functional users who would like to understand the Outside Operations process overview, including setup, execution and troubleshooting.

    Topics will include:
        Concept
        Setup in the context of PDM, SFC, Product Costing, and Manufacturing Accounting
        Processing
        Troubleshooting

    A short live demonstration (only if applicable) and a question-and-answer period will be included. Register for this session.

    Oracle Advisor is dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services.

    Important links related to webcasts:
        Advisor Webcast Current Schedule
        Advisor Webcast Archived Recordings
    The above links require valid access to My Oracle Support.

    Read the article

  • Planning Bulletin for JRE 7: What EBS Customers Can Do Today

    - by Steven Chan (Oracle Development)
    An initiative to certify Oracle E-Business Suite with JRE 7 desktop clients is underway. We have tested EBS 11.5.10.2, 12.0, and 12.1 with JRE 7. We have fixes for nearly all of the compatibility issues now, and are working hard to produce the remaining fixes quickly.

    A. When will JRE 7 be certified with Oracle E-Business Suite?
    We will announce the certification of the E-Business Suite with JRE 7 once all fixes are available, verified, and documented. Certification announcements will be distributed through My Oracle Support, My Oracle Support Community Forums, Oracle Technology Network Forums, newsletters, and user group news outlets. Oracle's Revenue Recognition rules prohibit us from discussing certification and release dates. In addition to the standard communication channels listed above, customers are encouraged to monitor or subscribe to this blog.

    B. What can customers do to prepare for the JRE 7 certification?
    Customers should ensure that they are on the latest available JRE 6 update. Of the compatibility issues identified so far with JRE 7, the most critical is an issue that prevents E-Business Suite Forms-based products from launching on Windows desktops that are running JRE 7. Customers can prevent this issue today by ensuring that they have applied the latest certified patches documented for JRE 6 configurations to their EBS application tier servers:
        Apply Forms patch 14615390 to EBS 11i environments (Note 125767.1)
        Apply Forms patch 14614795 to EBS 12.0 and 12.1 environments (Note 437878.1)
    These patches are compatible with JRE 6 and 7, production ready, and fully tested with the E-Business Suite. They may be applied immediately to all E-Business Suite environments. All other Forms prerequisites documented in the Notes above should also be applied now.

    C. What else will be required by the final certified configuration?
    Oracle expects that all other compatibility issues will be resolved by installing a specific JRE 7 release, at minimum. That specific release has not been finalized yet, and this is subject to change.

    D. Where will the official patch requirements be documented?
    All patches required for ensuring full compatibility of the E-Business Suite with JRE 7 will be documented in updates to these published Notes:

    For EBS 11i:
        Deploying Sun JRE (Native Plug-in) for Windows Clients in Oracle E-Business Suite Release 11i (Note 290807.1)
        Upgrading Developer 6i with Oracle E-Business Suite 11i (Note 125767.1)

    For EBS 12:
        Deploying Sun JRE (Native Plug-in) for Windows Clients in Oracle E-Business Suite Release 12 (Note 393931.1)
        Upgrading OracleAS 10g Forms and Reports in Oracle E-Business Suite Release 12 (Note 437878.1)

    Disclaimer
    The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

    Read the article

  • Database Rebuild

    - by Robert May
    I promised I’d have a simpler mechanism for rebuilding the database. Below is a complete MSBuild targets file for rebuilding the database from scratch. I don’t know if I’ve explained the rationale for this. The reason why you’d WANT to do this is so that each developer has a clean version of the database on their local machine. This also includes the continuous integration environment. Basically, you can do whatever you want to the database without fear, and in a minute or two, have a completely rebuilt database structure.

    DBDeploy (including the KTSC build task for dbdeploy) is used in this script to do change tracking on the database itself. The MSBuild ExtensionPack is used in this target file. You can get an MSBuild DBDeploy task here.

    There are two database scripts that you’ll see below. First is the task for creating an admin (dbo) user in the system. This script looks like the following:

        USE [master]
        GO
        IF NOT EXISTS (SELECT Name FROM sys.sql_logins WHERE name = '$(User)')
        BEGIN
            CREATE LOGIN [$(User)] WITH PASSWORD=N'$(Password)', DEFAULT_DATABASE=[$(DatabaseName)], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
        END
        GO
        EXEC master..sp_addsrvrolemember @loginame = N'$(User)', @rolename = N'sysadmin'
        GO
        USE [$(DatabaseName)]
        GO
        CREATE USER [$(User)] FOR LOGIN [$(User)]
        GO
        ALTER USER [$(User)] WITH DEFAULT_SCHEMA=[dbo]
        GO
        EXEC sp_addrolemember N'db_owner', N'$(User)'
        GO

    The second creates the changelog table. This script can also be found in the dbdeploy.net install\scripts directory.

        CREATE TABLE changelog (
            change_number INTEGER NOT NULL,
            delta_set VARCHAR(10) NOT NULL,
            start_dt DATETIME NOT NULL,
            complete_dt DATETIME NULL,
            applied_by VARCHAR(100) NOT NULL,
            description VARCHAR(500) NOT NULL
        )
        GO
        ALTER TABLE changelog ADD CONSTRAINT Pkchangelog PRIMARY KEY (change_number, delta_set)
        GO

    Finally, here’s the targets file.

        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0" DefaultTargets="Update">
          <PropertyGroup>
            <DatabaseName>TestDatabase</DatabaseName>
            <Server>localhost</Server>
            <ScriptDirectory>.\Scripts</ScriptDirectory>
            <RebuildDirectory>.\Rebuild</RebuildDirectory>
            <TestDataDirectory>.\TestData</TestDataDirectory>
            <DbDeploy>.\DBDeploy</DbDeploy>
            <User>TestUser</User>
            <Password>TestPassword</Password>
            <BCP>bcp</BCP>
            <BCPOptions>-S$(Server) -U$(User) -P$(Password) -N -E -k</BCPOptions>
            <OutputFileName>dbDeploy-output.sql</OutputFileName>
            <UndoFileName>dbDeploy-output-undo.sql</UndoFileName>
            <LastChangeToApply>99999</LastChangeToApply>
          </PropertyGroup>

          <Import Project="$(MSBuildExtensionsPath)\ExtensionPack\4.0\MSBuild.ExtensionPack.tasks"/>
          <UsingTask TaskName="Ktsc.Build.DBDeploy" AssemblyFile="$(DbDeploy)\Ktsc.Build.dll"/>

          <ItemGroup>
            <Variable Include="DatabaseName">
              <Value>$(DatabaseName)</Value>
            </Variable>
            <Variable Include="Server">
              <Value>$(Server)</Value>
            </Variable>
            <Variable Include="User">
              <Value>$(User)</Value>
            </Variable>
            <Variable Include="Password">
              <Value>$(Password)</Value>
            </Variable>
          </ItemGroup>

          <Target Name="Rebuild">
            <!-- Take the database offline to disconnect any users. Requires that the current user is an admin of the SQL Server machine. -->
            <MSBuild.ExtensionPack.SqlServer.SqlCmd Variables="@(Variable)" Database="$(DatabaseName)" TaskAction="Execute" CommandLineQuery="ALTER DATABASE $(DatabaseName) SET OFFLINE WITH ROLLBACK IMMEDIATE"/>

            <!-- Bring it back online. If you don't, the database files won't be deleted. -->
            <MSBuild.ExtensionPack.Sql2008.Database TaskAction="SetOnline" DatabaseItem="$(DatabaseName)"/>

            <!-- Delete the database, removing the existing files. -->
            <MSBuild.ExtensionPack.Sql2008.Database TaskAction="Delete" DatabaseItem="$(DatabaseName)"/>

            <!-- Create the new database in the default database path location. -->
            <MSBuild.ExtensionPack.Sql2008.Database TaskAction="Create" DatabaseItem="$(DatabaseName)" Force="True"/>

            <!-- Create admin user -->
            <MSBuild.ExtensionPack.SqlServer.SqlCmd TaskAction="Execute" Server="(local)" Database="$(DatabaseName)" InputFiles="$(RebuildDirectory)\0002 Create Admin User.sql" Variables="@(Variable)"/>

            <!-- Create the dbdeploy changelog. -->
            <MSBuild.ExtensionPack.SqlServer.SqlCmd TaskAction="Execute" Server="(local)" Database="$(DatabaseName)" LogOn="$(User)" Password="$(Password)" InputFiles="$(RebuildDirectory)\0003 Create Changelog.sql" Variables="@(Variable)"/>

            <CallTarget Targets="Update;ImportData"/>
          </Target>

          <Target Name="Update" DependsOnTargets="CreateUpdateScript">
            <MSBuild.ExtensionPack.SqlServer.SqlCmd TaskAction="Execute" Server="(local)" Database="$(DatabaseName)" LogOn="$(User)" Password="$(Password)" InputFiles="$(OutputFileName)" Variables="@(Variable)"/>
          </Target>

          <Target Name="CreateUpdateScript">
            <Ktsc.Build.DBDeploy DbType="mssql"
                                 DbConnection="User=$(User);Password=$(Password);Data Source=$(Server);Initial Catalog=$(DatabaseName);"
                                 Dir="$(ScriptDirectory)"
                                 OutputFile="..\$(OutputFileName)"
                                 UndoOutputFile="..\$(UndoFileName)"
                                 LastChangeToApply="$(LastChangeToApply)"/>
          </Target>

          <Target Name="ImportData">
            <ItemGroup>
              <TestData Include="$(TestDataDirectory)\*.dat"/>
            </ItemGroup>
            <Exec Command="$(BCP) $(DatabaseName).dbo.%(TestData.Filename) in &quot;%(TestData.Identity)&quot; $(BCPOptions)"/>
          </Target>
        </Project>

    Technorati Tags: MSBuild

    Read the article

  • Configuration Data in a Custom Timer job in Sharepoint 2010 : The Hierarchical Object Store

    - by Gino Abraham
    I was planning a custom timer job for which I wanted to store some configuration data. While looking for best practices, I found a useful link on the hierarchical object store: http://www.chaholl.com/archive/2011/01/30/the-skinny-on-sppersistedobject-and-the-hierarchical-object-store-in.aspx

    Initially I was planning to use a custom list, but this would make us run a cross-site query, and the list name and URL would again have to be kept in some configuration, which is a headache to maintain. The hierarchical object store was zeroed in on, and thanks to Google for the same :)
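    For anyone heading the same way, here is a minimal sketch of storing timer-job settings with SPPersistedObject, the class behind the hierarchical object store. The class, property, and instance names are my own illustrative choices:

        using Microsoft.SharePoint.Administration;

        // Hypothetical settings class; [Persisted] fields are serialized
        // into the farm configuration database automatically
        public class TimerJobSettings : SPPersistedObject
        {
            [Persisted]
            private string sourceListUrl;

            public TimerJobSettings() { }

            public TimerJobSettings(string name, SPPersistedObject parent)
                : base(name, parent) { }

            public string SourceListUrl
            {
                get { return sourceListUrl; }
                set { sourceListUrl = value; }
            }
        }

        // Storing: parent the object to the farm and persist it
        var settings = new TimerJobSettings("MyTimerJobSettings", SPFarm.Local);
        settings.SourceListUrl = "http://server/sites/site/lists/mylist";
        settings.Update();

        // Retrieving from inside the timer job, no list URL configuration needed
        var stored = SPFarm.Local.GetChild<TimerJobSettings>("MyTimerJobSettings");

    Because the object is keyed by name under the farm, the timer job can fetch its configuration without knowing any site or list URLs, which was exactly the maintenance headache with the custom-list approach.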

    Read the article

  • Oracle OpenWorld Call for MDM Papers

    - by david.butler(at)oracle.com
    As the MDM Track owner, I would like to invite everyone to respond to the Oracle OpenWorld (October 2-6, Moscone Center, San Francisco) Call for Papers (https://oracleus.wingateweb.com/portal/cfp/ ). The Call for Papers is open now through Sunday, March 27. This is an outstanding opportunity for organizations familiar with MDM to tell their story to a very large, knowledgeable and intensely interested community. Opportunities for feedback and networking abound.  I would love to see MDM papers on: business drivers; business benefits; quantified ROI stories; business process optimization; implementation styles; implementation lessons learned; using master data as a service; data governance best practices; end-to-end data quality experiences; support for SOA; Chart of Accounts issues fixed; how to leverage reference data; improving EPM and/or BI across the board; operationalizing a data warehouse; support for cloud computing; compliance success stories; architecture, scalability, and mixed workload RAC platform performance examples; industry specific value propositions (Financial Services; Retail, Telecom; Manufacturing, High Tech Manufacturing, Public Sector, Health Care, …); and line of business specific value propositions (CRM, ERP, PLM, SCM, …); etc. In fact, given that MDM positively impacts all areas of operations and analytics, there are no limits to the ideas you may have for an OpenWorld presentation. When you follow the submission process, be sure to use “Master Data Management” for either the Primary or Optional track. Add “Master Data Management” as an Optional track if you are adding MDM content to a presentation on one of the following tracks: Agile; Customer Relationship Management, Oracle E-Business Suite, Product Lifecycle Management, Siebel, Sourcing and Procurement, Supply Chain Management, or one of the 18 available industry tracks. If Cloud Computing is included, please add “Cloud Computing” as a Cross-Stream Track. And don’t forget to make “MDM” a Tag, along with Business Intelligence, Cloud, CRM, Data Integration, Data Migration, Data Warehousing, EPM, or Service-Oriented Architecture whenever your content includes these items. I will personally review each submission. I hope you all keep me very busy over the next few weeks.

    Read the article

  • A case for not installing your own software

    - by James Gentsch
    This week I watched some of the Oracle Open World presentations (from the comfort of my Oracle office) and happened on some of Larry Ellison’s comments about cloud computing and engineered systems. Larry said he sees the move to these as analogous to the moves made by the original adopters of electricity. The argument goes that the first consumers of electricity had to set up their own power plant. Then, as the market and infrastructure for electricity matured, power consumers moved from using their own personal power plant to purchasing power from another entity that was focused on power production as their primary product. In the end this was a cheaper and more reliable solution.

    Now, there are lots of compelling reasons to be looking very seriously at cloud computing and engineered systems for enterprise application deployment. However, speaking as a software developer of enterprise applications, the part of this that I really love (besides Larry’s early electricity adopter analogy) is that as a mode of application deployment it provides me and my customers a consistent environment in which the applications I am providing will be run. This cuts way down on the environmental surprises that consistently lead to the hated “well, it works here” situation with the support desk.

    And just to be clear, I think I hate this situation more than my clients, who I think are happy that at least it is working somewhere. I hate this because when a problem happens, and let’s face it, customers are not wasting their time calling in easy problems, we are seriously disabled when we cannot reproduce an issue that is triggered by something unforeseen in the environment where the application is running. This situation is incredibly frustrating and an all-too-frequent occurrence.

    I look selfishly forward to cloud computing and engineered systems dramatically reducing the occurrence of problems triggered by unforeseen environmental situations in the software I am responsible for. I think this is an evolutionary game changer that will be a huge benefit to the reliability and consistent performance of the software for my customers, and may make “well, it works here” a well-forgotten phrase for future software developers.

    It may even impact the stress squeeze toy industry. Well, maybe at least for my group.

    Read the article

  • ArchBeat Link-o-Rama for 2012-10-10

    - by Bob Rhubart
    Oracle's Analytics, Engineered Systems, and Big Data Strategy | Mark Rittman
    Part 1 of 3 in Oracle ACE Director Mark Rittman's series on Oracle Exalytics, Oracle R Enterprise and Endeca.

    Series: How to Kill the Architecture Department? Part 1 | Xebia Blog
    Don't let the title fool you. This is not an anti-architecture post. Rather, this post, part 1 of a now four-part series, offers suggestions for preserving architecture in a form that better supports agile organizations.

    BPM Suite configure BAM Adapter | Peter Paul van der Beek
    "To have the BPM server push events to BAM – Business Activity Monitoring – we have to configure the BPM suite to use the BAM Adapter," says Peter Paul van de Beek. "The BAM Adapter is configured (like other SOA Suite and BPM Adapters) in the WebLogic Server Console." Peter Paul shows you how in this brief post.

    A case for not installing your own software | James Gentsch
    "I look selfishly forward to cloud computing and engineered systems dramatically reducing the occurrence of problems triggered by unforeseen environmental situations in the software I am responsible for," says James Gentsch. "I think this is an evolutionary game changer that will be a huge benefit to the reliability and consistent performance of the software for my customers, and may make 'well, it works here' a well-forgotten phrase for future software developers."

    Thought for the Day
    "I'm a strong believer in being minimalistic. Unless you actually are going to solve the general problem, don't try and put in place a framework for solving a specific one, because you don't know what that framework should look like." — Anders Hejlsberg
    Source: SoftwareQuotes.com

    Read the article

  • OBIEE 11.1.1.7.131017 is Available for Oracle Business Intelligence Enterprise Edition and Exalytics

    - by Saresh
    OBIEE 11.1.1.7.131017 is available for Oracle Business Intelligence Enterprise Edition and Exalytics. You may refer to Note 1595219.1 - OBIEE 11g 11.1.1.7.131017 is Available for Oracle Business Intelligence Enterprise Edition and Exalytics for more details.

    Additional information:
        You can apply OBIEE 11.1.1.7.131017 directly on top of OBIEE 11.1.1.7.0, without applying the OBIEE 11.1.1.7.0 patch first.
        Oracle BI EE Suite Bundle Patch 11.1.1.7.131017 introduces support for Microsoft Internet Explorer 10.
        Oracle BI EE Suite Bundle Patch 11.1.1.7.131017 integrates support for Oracle BI Mobile App Designer into Oracle BI Presentation Services. The Home page displays a new Create Mobile App option and a link to the Mobile Apps Library; the New menu now also includes a Mobile App option. Note that to use these new options, you must also install the Oracle BI Mobile App Designer, which is available in a separate patch (17220944) from My Oracle Support. Apply the Mobile App Designer patch after installing Oracle BI EE Suite Bundle Patch 11.1.1.7.131017. If you have previously installed Oracle BI Mobile App Designer on the BI system, you do not have to re-apply patch 17220944; however, you will have to make a configuration change to re-enable the new Mobile App options.
        Oracle BI EE Suite Bundle Patch 11.1.1.7.131017 improves functionality for exporting data from analyses, dashboards, and other Oracle BI Presentation Catalog objects into Microsoft Excel.

    Read the article

  • Siebel Webinar Series for customers and partners

    - by Richard Lefebvre
    Have you got questions about the Siebel product roadmap? Or about what we are delivering in the areas of Social/Mobile/Big Data/Cloud in a Siebel project context? If yes, then you are welcome to attend the Siebel Webinar Series. These are monthly webcasts on a variety of topics related to Siebel that are geared towards business users. The next webinar is November 21st at 8:30 AM PST, entitled “Get Social with Siebel”. You can register here.

    Once registered, you can also view replays of previous webinars:
        · Siebel: Solving the Next Generation of Business Challenges
        · Expand User Experiences with Siebel Open UI
        · Delight Customers with Siebel Service Applications
        · Get Mobile with Siebel

    Read the article

  • Demantra Partitioning and the First PK Column

    - by user702295
    We have found that it is necessary in Demantra to have an index that matches the partition key, although it does not have to be the PK. It is OK to create a new index instead of changing the PK.

    For example, if my PK on SALES_DATA is (ITEM_ID, LOCATION_ID, SALES_DATE) and I decide to partition by SALES_DATE, then I should add an index starting with the partition key, like this: (SALES_DATE, ITEM_ID, LOCATION_ID). Note that the first column of the new index matches the partition key.

    It might also be helpful to create a 2nd index with the other PK columns reversed: (SALES_DATE, LOCATION_ID, ITEM_ID). Again, the first column matches the partition key.
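    As a sketch, the corresponding DDL might look like the following; the index names are illustrative, and LOCAL assumes SALES_DATA has been range-partitioned on SALES_DATE:

        -- Index leading with the partition key, covering the PK columns
        CREATE INDEX sales_data_part_idx
            ON sales_data (sales_date, item_id, location_id) LOCAL;

        -- Optional 2nd index with the other PK columns reversed
        CREATE INDEX sales_data_part_idx2
            ON sales_data (sales_date, location_id, item_id) LOCAL;

    Making the indexes LOCAL keeps each index segment aligned with its table partition, so partition maintenance on SALES_DATE does not invalidate the whole index.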

    Read the article

  • Oracle Financial Management Analytics 11.1.2.2.300 is available

    - by THE
    (guest post by Greg)

    Oracle Financial Management Analytics 11.1.2.2.300 is now available for download from My Oracle Support as Patch 15921734.

    New features in this release:
        Support for the new Oracle BI mobile HD iPad client.
        New Account Reconciliation Management and Financial Data Quality Management analytics.
        Improved Hyperion Financial Management analytics and usability enhancements.
        Enhanced Configuration Utility to support multiple products. For HFM, FCM or ARM, and FDM, we support both Oracle and Microsoft SQL Server databases.
        Simplified test-to-production migration of OFMA.

    Web browser support for Oracle Financial Management Analytics:
        Internet Explorer Version 9 - Oracle Financial Management Analytics supports the Internet Explorer 9 Web browser (for both 32 and 64 bit).
        Firefox Version 6.x - Oracle Financial Management Analytics supports the Firefox 6.x Web browser.
        Chrome Version 12.x - Oracle Financial Management Analytics supports the Chrome 12.x Web browser.
    See the OBIEE 11.1.1.6 certification matrix: http://www.oracle.com/technetwork/middleware/ias/downloads/fusion-certification-100350.html

    Oracle Financial Management Analytics compatibility:
    Oracle Financial Management Analytics supports the following product versions:
        Oracle Hyperion Financial Data Quality Management Release 11.1.2.2.300
        Oracle Financial Close Manager Release 11.1.2.2.300
        Oracle Hyperion Financial Management Release 11.1.2.2.300

    Read the article

  • The Ideal Platform for Oracle Database 12c In-Memory and in-memory Applications

    - by Michael Palmeter (Engineered Systems Product Management)
    Oracle SuperCluster, Oracle's SPARC M6 and T5 servers, Oracle Solaris, Oracle VM Server for SPARC, and Oracle Enterprise Manager have been co-engineered with Oracle Database and Oracle applications to provide maximum In-Memory performance, scalability, efficiency and reliability for the most critical and demanding enterprise deployments. The In-Memory option for Oracle Database 12c, which has just been released, has been specifically optimized for SPARC servers running Oracle Solaris. The unique combination of Oracle's 32-terabyte M6 Big Memory Machine and Oracle Database 12c In-Memory demonstrates a 2X increase in OLTP performance and a 100X improvement in analytics response times, allowing complex analysis of incredibly large data sets at the speed of thought. Numerous unique enhancements, including the large cache on the SPARC M6 processor, a massive 32 TB of memory, uniform memory access architecture, the Oracle Solaris high-performance kernel, and Oracle Database SGA optimization, result in orders-of-magnitude better transaction processing speeds across a range of in-memory workloads.

    Oracle Database 12c In-Memory
    The Power of Oracle SuperCluster and In-Memory Applications (Video, 3:13)

    Oracle's In-Memory applications:
        Oracle E-Business Suite In-Memory Cost Management on the Oracle SuperCluster M6-32 (PDF)
        Oracle JD Edwards Enterprise One In-Memory Applications on Oracle SuperCluster M6-32 (PDF)
        Oracle JD Edwards Enterprise One In-Memory Sales Advisor on the SuperCluster M6-32 (PDF)
        Oracle JD Edwards Enterprise One Project Portfolio Management on the SuperCluster M6-32 (PDF)

    Read the article

  • Enable/Disable SSL JSSE in Weblogic Server 11g/12c

    - by Vijaya Moderator -Oracle
    Here is the flag to enable or disable JSSE:

        -Dweblogic.ssl.JSSEEnabled=true|false

    Oracle recommends that you keep this value set to true. Starting with WLS 12c, even if the above option is set to false, it is ignored: the MBean value is neither changed in memory nor persisted to config.xml.

    Please also be aware of the following changes in the SSL implementation in the 12c version:

    1. Certicom has been removed from WebLogic Server 12.1.1 and is no longer supported.
    2. JSSE is the only SSL implementation that is supported in WebLogic Server 12.1.1. The following configuration changes have been made to be consistent with this support: The default for JSSEEnabled has been changed to true. Oracle recommends that you keep this value set to true. If JSSEEnabled is set to false, it will be ignored. That is, the MBean value will not be changed either in memory or in the persisted config.xml file. WebLogic Server will continue to use JSSE, but will issue a warning.

    Please refer to the doc below on OTN for more info: Oracle Fusion Middleware What's New in Oracle WebLogic Server - 12c Release 1 (12.1.1)
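    On 11g, where the flag is still honored, a typical place to set it is the domain's bin/setDomainEnv.sh (or setDomainEnv.cmd on Windows). The snippet below is a sketch assuming the standard domain script conventions:

        # Append the JSSE flag to the server's JVM options (11g domains)
        JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.ssl.JSSEEnabled=true"
        export JAVA_OPTIONS

    Restart the affected servers for the change to take effect.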

    Read the article

  • The Top 5 Business Challenges in Financial Services. Oracle Process Accelerators as a Solution By Lance Shaw

    - by JuergenKress
    Here at Oracle, we continue to release Process Accelerators for additional solutions. These Accelerators help achieve process excellence faster with end-to-end implementations of common business processes. They are ready to use and extensible, and include industry-specific best practices. One common industry where Process Accelerators are used to speed the delivery of business process management solutions is financial services. We've recently produced a whitepaper that identifies the top five business challenges in the financial services industry and outlines how adopting Oracle Process Accelerators can give a competitive edge. To get the whitepaper, please visit our website.

    SOA & BPM Partner Community
    For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki

    Technorati Tags: financial services,process accelerators,Lance Shaw,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput, with low latencies and high availability.

    Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination and associated SLAs. For each packet, the router has to determine the address of the next “hop” to the destination; it has to determine how to prioritize this packet. If it’s a high priority packet, then it has to be sent on its way before lower priority packets. As a consequence of prioritizing high priority packets, lower priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone’s sure to ask). You have to do all this (and more) while preserving high availability, i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won’t be happy with a “choppy” VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely to be in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen.

    How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, the kind of packet (e.g. voice vs. data), the SLAs associated with the “owner” of the packet, etc. It looks up the internal database of “rules” for how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it’s a low priority packet.

    Ah – this sounds very much like a database problem. For each packet, you have to minimally:
        · Look up the most efficient next “hop” towards the destination. The “most efficient” next hop can change, depending on latency, availability etc.
        · Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp).
        · Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet, since a network packet is a small “slice” of a session. The context for the “header” packet needs to be stored in the router in order to make this work.
        · If the priority of the packet is low, then “store” the packet temporarily in the router until it is time to forward the packet to the next hop.
        · Update various statistics about the packet.

    In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the “send” in a single transaction, so that the forwarding address doesn’t change while you’re sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.

    Berkeley DB is a robust, reliable, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very, very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board – no manual intervention required. BDB is self-administering – there’s no need for manual intervention in order to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or you could simply embed BDB in your application and off you go! I know what I’d do – choose BDB, so I can focus on my business problem. What will you do?
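    For a flavor of what embedding BDB looks like, here is a minimal sketch of the look-up-and-forward transaction using the Berkeley DB Java Edition API. The database name, key format, and the forward() helper are illustrative assumptions, not a real router implementation:

        import com.sleepycat.je.*;
        import java.io.File;

        public class RouteStore {
            public static void main(String[] args) throws Exception {
                // Open a transactional environment and a "routes" database
                EnvironmentConfig envConfig = new EnvironmentConfig();
                envConfig.setAllowCreate(true);
                envConfig.setTransactional(true);
                Environment env = new Environment(new File("/var/router/bdb"), envConfig);

                DatabaseConfig dbConfig = new DatabaseConfig();
                dbConfig.setAllowCreate(true);
                dbConfig.setTransactional(true);
                Database routes = env.openDatabase(null, "routes", dbConfig);

                // Look up the next hop and "send" in one transaction, so the
                // route cannot change out from under us mid-send
                Transaction txn = env.beginTransaction(null, null);
                try {
                    DatabaseEntry key = new DatabaseEntry("10.1.2.0/24".getBytes("UTF-8"));
                    DatabaseEntry nextHop = new DatabaseEntry();
                    if (routes.get(txn, key, nextHop, LockMode.RMW) == OperationStatus.SUCCESS) {
                        forward(nextHop.getData()); // hypothetical send to the next hop
                    }
                    txn.commit();
                } catch (Exception e) {
                    txn.abort();
                    throw e;
                } finally {
                    routes.close();
                    env.close();
                }
            }

            // Placeholder for the actual packet send
            private static void forward(byte[] hop) { }
        }

    The LockMode.RMW read locks the route record for the duration of the transaction, which is exactly the "look up and send atomically" behavior described above.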

    Read the article

  • Why We Do What We Do. (Part 3 of 5 Part Series on JDE 5G Postponed)

    - by Kem Butller-Oracle
    By Lyle Ekdahl - Oracle JD Edwards Sr. VP, General Manager

    In the closing of part two of this 5-part blog series, I stated that in the next installment I would explore the expected results of the digital overdrive era and the impact it will have on our economy. While I have full intentions of writing on that topic, I am inspired today to write about something that is top of mind. It's top of mind because it has come up several times recently in conversations with my Oracle JD Edwards team members, with customers and with our partners; plus, I feel passionately about why I do what I do….

    It is not what we do but why we do that thing that we do

    Do you know what you do? For the most part, I bet you could tell me what you do even if your work has changed over the years. My real question is, "Do you get excited about what you do, and are you fulfilled? Does your work deliver a sense of purpose, a cause to work for, and something to believe in?" Alright, I guess that was not a single question. So let me just ask, "Why?" Why are you here, right now? Why do you get up in the morning? Why do you go to work? Of course, I can't answer those questions for you, but I can share with you my POV.

    For starters, there are several things that drive me. As many of you know by now, I have a somewhat competitive nature, but it is not solely the thrill of winning that actually fuels me. Now don't get me wrong, I do like winning occasionally. However, winning is only a potential result of competing, and is clearly not guaranteed. So why compete? Why compete in business, and particularly why in this enterprise software business?

    Here's why! I am fascinated by creative and building processes. It is about making or producing things, causing something to come into existence. With the right skill, imagination and determination, whether it's art or invention, the result can deliver value and inspire. In both avocation and vocation I always gravitate towards the create/build processes.

    I believe one of the skills necessary for the create/build process is not just the aptitude but also, and especially, the desire and attitude that drives one to gain a deeper understanding. The more I learn about our customers, the more I seek to understand what makes them successful and what difficult issues cause them to struggle. I like to look for the complex, non-commodity process problems where streamlined design and modern technology can provide an easy and simple solution. It is especially gratifying to see our customers use our software to increase their own ability to deliver value to the market. What an incredible network effect!

    I know many of you share this customer obsession as well as the create/build addiction focused on simple and elegant design. This is what I believe is at the root of our common culture.

    Are JD Edwards customers on the whole different than other ERP solutions' customers? I would argue that for the most part, yes, they are. They selected our software, and our software is different. Why? Because I believe that the create/build process will generally result in solutions that reflect who built it and their culture. And a culture of people focused on why they create/build will attract different customers than one that is based on what is built or how the solution is delivered. In the past I have referred to this idea as the character of the customer, and it transcends industry, size and run rate. Now some would argue that JD Edwards has some customers who are characters. But that is for a different post.
As I have told you before, the JD Edwards culture is unique, and its resulting economy is valuable and deserving of our best efforts. 

    Read the article

  • Creating SparseImages for Pivot

    - by John Conwell
    Learning how to programmatically make collections for Microsoft Live Labs Pivot has been a pretty interesting ride. There are very few examples out there, and the folks at MS Live Labs are often slow on any feedback. But that is what Reflector is for, right?

    Well, I was creating these InfoCard images (similar to the Car images in the "New Cars" sample collection that MS created for Pivot), and wanted to put a Tag Cloud into the info card. The problem was that the size of the tag cloud might vary in order for all the tags to fit into the tag cloud (often times being bigger than the info card itself). This was because of the varying word lengths and calculated font sizes.

    So, to fix this, I made the tag cloud its own separate image from the info card. Then, I would create a sparse image out of the two images, where the tag cloud fits into a small section of the info card. This allows the user to see the info card, but then zoom into the tag cloud and see all the tags at a normal resolution. Kind'a cool.

    But...I couldn't find one code example (not one!) of how to create a sparse image. There is one page on the SeaDragon site (http://www.seadragon.com/developer/creating-content/deep-zoom-tools/) that goes over the API for creating images and collections, and it sparsely goes over how to create a sparse image, but unless you are familiar with the API already, the documentation doesn't help very much.

    The key is the Image.ViewportWidth and Image.ViewportOrigin properties of the image that is getting superimposed on the main image. I'll walk through the code below. I've set up a couple of Point structs to represent the parent and sub image sizes, as well as where on the parent I want to position the sub image. Next, create the parent image. This is pretty straightforward. Then I create the sub image. Then I calculate several ratios: the width-to-height ratio of the sub image, the width ratio of the sub image to the parent image, the height ratio of the sub image to the parent image, then the X and Y coordinates on the parent image where I want the sub image to be placed, represented as a ratio of the position to the parent image size. After all these ratios have been calculated, I use them to calculate the Image.ViewportWidth and Image.ViewportOrigin values, then pass the image objects into the SparseImageCreator and call Create.

    The key thing that was really missing from the API documentation page is that when setting up your sub images, everything is expressed as a ratio in relation to the main parent image. If I had known this, it would have saved me a lot of trial and error time. And how did I figure this out? Reflector, of course! There is a tool called Deep Zoom Composer that came from MS Live Labs which can create a sparse image. I just dug around the tool's code until I found the method that creates sparse images. But seriously...look at the API documentation from the SeaDragon site and look at the code below, and tell me if the documentation would have helped you at all. I don't think so!
    public static void WriteDeepZoomSparseImage(string mainImagePath, string subImagePath, string destination)
    {
        Point parentImageSize = new Point(720, 420);
        Point subImageSize = new Point(490, 310);
        Point subImageLocation = new Point(196, 17);
        List<Image> images = new List<Image>();

        // create main image
        Image mainImage = new Image(mainImagePath);
        mainImage.Size = parentImageSize;
        images.Add(mainImage);

        // create sub image
        Image subImage = new Image(subImagePath);
        double hwRatio = subImageSize.X / subImageSize.Y;       // width-to-height ratio of the tag cloud
        double nodeWidth = subImageSize.X / parentImageSize.X;  // sub image width to parent image width ratio
        double nodeHeight = subImageSize.Y / parentImageSize.Y; // sub image height to parent image height ratio
        double nodeX = subImageLocation.X / parentImageSize.X;  // x coordinate position on parent / width of parent
        double nodeY = subImageLocation.Y / parentImageSize.Y;  // y coordinate position on parent / height of parent

        subImage.ViewportWidth = (nodeWidth < double.Epsilon) ? 1.0 : (1.0 / nodeWidth);
        subImage.ViewportOrigin = new Point(
            (nodeWidth < double.Epsilon) ? -1.0 : (-nodeX / nodeWidth),
            (nodeHeight < double.Epsilon) ? -1.0 : ((-nodeY / nodeHeight) / hwRatio));
        images.Add(subImage);

        // create sparse image
        SparseImageCreator creator = new SparseImageCreator();
        creator.Create(images, destination);
    }

    Read the article

  • Senior Developers vs. Junior

    - by huwyss
    I like the following quote which I found on codinghorror:[As Steve points out this is one key difference between junior and senior developers:] In the old days, seeing too much code at once quite frankly exceeded my complexity threshold, and when I had to work with it I'd typically try to rewrite it or at least comment it heavily. Today, however, I just slog through it without complaining (much). When I have a specific goal in mind and a complicated piece of code to write, I spend my time making it happen rather than telling myself stories about it [in comments].

    Read the article

  • OIM 11g : Multi-thread approach for writing custom scheduled job

    - by Saravanan V S
    In this post I have shared my experience of designing and developing an OIM schedule job that uses a multi-threaded approach for updating data in OIM using APIs. I have used the thread pool pattern (in particular, a fixed thread pool) in developing the OIM schedule job. The thread pooling pattern has noted advantages compared to a thread-per-task approach. I have listed a few of the advantages here:
        · Threads are reused
        · Creation and tear-down cost of threads is reduced
        · Task execution latency is reduced
        · Improved performance
        · Controlled and efficient management of memory and resources used by threads

    More about Java thread pools: http://docs.oracle.com/javase/tutorial/essential/concurrency/pools.html

    The following diagram depicts the high-level architecture of the schedule job, which processes input from a flat file to update OIM process form data using the fixed thread pool approach.

    The custom scheduled job shared in this post was developed to meet the following requirements:

    1) Need to process a CSV extract that contains identity, account identifying key and a list of data to be updated on an existing OIM resource account.
    2) The CSV file can contain data for multiple resources configured in OIM.
    3) The list of attributes to update and the mapping between CSV columns and OIM fields may vary between resources.

    The following are the three Java classes developed for this requirement (I have given only a prototype of the code that explains how to use thread pools in a schedule task):

    CustomScheduler.java - Implementation of the TaskSupport class that reads and passes the parameters configured on the schedule job to the thread executor class.

        package com.oracle.oim.scheduler;

        import java.util.HashMap;
        import com.oracle.oim.bo.MultiThreadDataRecon;
        import oracle.iam.scheduler.vo.TaskSupport;

        public class CustomScheduler extends TaskSupport {

            public void execute(HashMap options) throws Exception {
                /* Read schedule job parameters */
                String param1 = (String) options.get("Parameter1");
                // ...
                int noOfThreads = (Integer) options.get("No of Threads");
                // ...
                String paramN = (String) options.get("ParameterN");

                /* Provide all the required input configured on the schedule job
                   to the thread pool executor implementation class, e.g.:
                   1) Name of the file
                   2) Delimiter
                   3) Header row number
                   4) Line escape character
                   5) Config and resource map lookups
                   6) Number of threads to create */
                new MultiThreadDataRecon(all_required_parameters, noOfThreads).reconcile();
            }

            public HashMap getAttributes() { return null; }

            public void setAttributes() { }
        }

    MultiThreadDataRecon.java - Helper class that reads data from the input file, initializes the thread executor and builds the task queue.

        package com.oracle.oim.bo;

        import <required file IO classes>;
        import <required java.util classes>;
        import <required OIM API classes>;
        import <csv reader api>;

        public class MultiThreadDataRecon {

            private int noOfThreads;
            private ExecutorService threadExecutor = null;

            public MultiThreadDataRecon(<required params>, int noOfThreads) {
                // Store parameters locally
                this.noOfThreads = noOfThreads;
            }

            /**
             * Initialize
             */
            private void init() throws Exception {
                try {
                    // Initialize CSV file reader API objects
                    // Initialize OIM API objects

                    /* Initialize the fixed thread pool executor if the number
                       of threads configured is more than 1 */
                    if (noOfThreads > 1) {
                        threadExecutor = Executors.newFixedThreadPool(noOfThreads);
                    } else {
                        threadExecutor = Executors.newSingleThreadExecutor();
                    }

                    /* Initialize the TaskProcessor class, which will be
                       executing tasks from the queue */
                    TaskProcessor.initializeConfig(params);
                } catch (Exception e) {
                    // TO DO
                }
            }

            /**
             * Method to reconcile data from CSV to OIM
             */
            public void reconcile() throws Exception {
                try {
                    init();
                    while (<csv file has line>) {
                        processRow(line);
                    }
                    /* Initiate thread shutdown */
                    threadExecutor.shutdown();
                    while (!threadExecutor.isTerminated()) {
                        // Wait for all tasks to complete.
                    }
                } catch (Exception e) {
                    // TO DO
                } finally {
                    try {
                        // Close all the file handles
                    } catch (IOException e) {
                        // TO DO
                    }
                }
            }

            /**
             * Method to process a row
             */
            private void processRow(String row) {
                // Create a task processor instance with the row data.
                // The following call pushes the task onto the work queue, where it
                // waits for the next available thread to execute it.
                threadExecutor.execute(new TaskProcessor(row));
            }
        }

    TaskProcessor.java - Implementation of the "Runnable" interface that executes the required business logic to update data in OIM.

        package com.oracle.oim.bo;

        import <required APIs>;

        class TaskProcessor implements Runnable {

            // Required member variables

            /**
             * Constructor
             */
            public TaskProcessor(<row data>) {
                // Initialize and parse the CSV row
            }

            /**
             * Method to initialize required objects for task execution
             */
            public static void initializeConfig(<params>) {
                // Process params and initialize the required configs and objects
            }

            /*
             * (non-Javadoc)
             *
             * @see java.lang.Runnable#run()
             */
            public void run() {
                if (<is csv data valid>) {
                    processData();
                }
            }

            /**
             * Process the received CSV input
             */
            private void processData() {
                try {
                    // Find the user in OIM using the identity matching key value from the CSV
                    // Find the account to update from the user's accounts, based on the account identifying key in the CSV
                    // Update the account with data from the CSV
                } catch (Exception e) {
                    // TO DO
                }
            }
        }

    Read the article
