Search Results

Search found 2826 results on 114 pages for 'dirty flow'.


  • Start a Mapping or Process Flow from OWB Browser

    - by Dong Ruirong
    Basically, we start a Mapping or Process Flow from the Oracle Warehouse Builder (OWB) Design Client, but we can also start a Mapping or Process Flow from the OWB Browser. This paper introduces the Start Report first and then explains how to start or rerun a Mapping or Process Flow from the OWB Browser.

    Start Report
    The Start Report is used to start an execution of a Mapping or Process Flow, so there are two kinds of Start Report: the Mapping Start Report (see Figure 1) and the Process Flow Start Report (see Figure 2). The Start Report shows the Mapping or Process Flow identification properties, including the latest deployment and latest execution, lists all execution parameters for the Mapping or Process Flow as specified by the latest deployment, and assigns parameter default values from the latest deployment specification. You can do several things from the Start Report:
    - Sort execution parameters on name or category. Table 1 lists all parameters of a Mapping; Table 2 lists all parameters of a Process Flow.
    - Change the values of any input parameter where permitted. For some parameters, selection lists are provided; for example, the Mapping parameter Audit Level has a selection list.
    - Reset all parameter settings to their default values.
    - Apply basic validation to parameter values before starting an execution.
    - Start the Mapping or Process Flow, which means it is executed immediately.
    - Navigate to the Deployment Report for the latest deployment details of the Mapping or Process Flow.
    - Navigate to the Execution Job Report for the latest execution of the current Mapping or Process Flow.
    - Link to the online help and to the Warehouse Report Page, Deployment Report, Execution Report, Execution Schedule Report and Execution Summary Report.

    Figure 1 Mapping Start Report

    Table 1 Execution parameters and default values for a Mapping
    Category | Name                     | Mode | Input Value
    System   | Audit Level              | In   | Error Details
    System   | Bulk Size                | In   | 1000
    System   | Commit Frequency         | In   | 1000
    System   | EXECUTE_RESUME_TASK      | In   | FALSE
    System   | FORCE_RESUME_OPTION      | In   | FALSE
    System   | Max No of Errors         | In   | 50
    System   | NUMBER_OF_TIMES_TO_RETRY | In   | 2
    System   | Operating Mode           | In   | Set Based Fail Over to Row Based
    System   | PARALLEL_LEVEL           | In   | 0
    System   | Procedure Name           | In   | main
    System   | Purge Group              | In   | WB

    Figure 2 Process Flow Start Report

    Table 2 Execution parameters and default values for a Process Flow
    Category | Name          | Mode   | Input Value
    System   | EVAL_LOCATION | In     |
    System   | Item Key      | In-Out |
    System   | Item Type     | In     | PFPKG_1

    Start a Mapping or Process Flow
    To navigate to the Start Report, it is best to log in to the OWB Browser with the Control Center option; if you did not, go to the Control Center first after logging in. Then you can follow the ways introduced in this section to navigate to the Start Report. One more thing to pay attention to: deploying Mappings and Process Flows from the OWB Browser is not supported, so deploy them first before starting them from the OWB Browser. If you have deployed a Mapping or Process Flow but have not started it, navigate from the Object Summary Report or the Deployment Schedule Report to the Start Report.

    1. Navigating from the Object Summary Report to the Start Report: Open the Object Summary Report to see all deployed Mappings and Process Flows. Click the Mapping Name or Process Flow Name link to see its Deployment Report. Select the Start link in the Available Reports tab for the given Mapping or Process Flow to display its Start Report. The execution parameters have the default deployment-time settings. Change any of the input parameter values as required, then click the Start Execution button to execute the Mapping or Process Flow.

    2. Navigating from the Deployment Schedule Report to the Start Report: Open the Deployment Schedule Report to see the deployment details of Mappings and Process Flows. Expand the project trees to find the deployed Mappings and Process Flows. Click the Mapping Name or Process Flow Name link to see its Deployment Report. Select the Start link in the Available Reports tab to display the Start Report. The execution parameters have the default deployment-time settings. Change any of the input parameter values as required, then click the Start Execution button.

    Re-run a Mapping or Process Flow
    If you have already executed a Mapping or Process Flow, you can navigate to the Start Report from the Object Summary Report, the Deployment Schedule Report, the Execution Summary Report or the Execution Schedule Report.

    1. Navigating from the Execution Summary Report to the Start Report: Open the Execution Summary Report to see all execution jobs, including Mapping jobs and Process Flow jobs. Click the Mapping Name or Process Flow Name to see its Execution Report. Select the Start link in the Available Reports tab to display the Start Report. The execution parameters have the default deployment-time settings. Change any of the input parameter values as required, then click the Start Execution button.

    2. Navigating from the Execution Schedule Report to the Start Report: Open the Execution Schedule Report to see the list of all executions of Mappings and Process Flows. Click the Mapping Name or Process Flow Name to see its Execution Report. Select the Start link in the Available Reports tab to display the Start Report. Change any of the input parameter values as required, then click the Start Execution button.

    If the execution of a Mapping or Process Flow is successful, you will see this message in the Start Report: "Start Execution request successful." (See Figure 3.)

    Figure 3 Execution Result

    You can also confirm the execution of the Mapping or Process Flow by referring to its Execution Report, reached by clicking the link in the Available Reports tab for the given Mapping or Process Flow. One new record of execution job details is added to the Execution Report of the Mapping or Process Flow, showing the details of the execution such as Start Time, Elapsed Time, Status, and the number of records selected, inserted, updated, deleted and so on.

    Read the article

  • Drools flow architecture, Drools flow events with AND join nodes

    - by Shoukry K
    I have been evaluating a number of frameworks, including jBPM and Drools Flow, for my application requirements. Many of the opinions seem to be inclined towards Drools Flow as it's more flexible, knowledge oriented, easier to integrate with business rules, etc. The application is a sort of Email Campaign manager, where different customers can sign in, prepare (design) and launch email campaigns. The application should be able to do the following:
    1. Receive a list of email addresses, send emails to each of these addresses starting from a certain date and during a certain time interval of the day, do some custom actions, and then wait for reply emails.
    2. If a reply email is received, then depending on the response text of the email and the time the email was received, certain actions need to happen, web service calls need to take place, plus error handling for these calls.
    3. The application will manage and run many different campaigns (different customers and different flows for each customer) at any point in time.
    The first question is: is Drools Flow the way to go about this? My main concerns are scalability, suspending and resuming flows, long waits, and flow management. As you can see from the requirements:
    There is a scheduling part: certain flows need to be run at a certain point in time, and they need to get suspended and then resumed. For example, start sending emails on Dec 1st 2010 and send emails only in the time interval between 08:00 and 17:00 GMT. By then it might be that all subscribers have been sent emails, but it might not be the case; the process needs to resume on Dec 2nd and send a second batch, however certain users already received emails and they should be able to continue at different stages of the flow.
    There are long wait states: days or even weeks. I need to persist, suspend/resume and terminate flows (manage flows).
    External events: this is where I got stuck first. I tried to put together a simple flow (see the attached screenshot: http://img46.imageshack.us/img46/9620/workflowwithevents.png): there is a start node connected to an action node, connected to a join. An event node is connected to a second action node, which is connected to the join. The join is an AND join; after the join there is an action and the end node. Here is the sample code I am using to launch the flow:

    KnowledgeBuilder builder = KnowledgeBuilderFactory.newKnowledgeBuilder();
    builder.add(ResourceFactory.newClassPathResource("campaign.rf",
            CampaignsDroolsPoc.class), ResourceType.DRF);
    if (builder.hasErrors()) {
        KnowledgeBuilderErrors errors = builder.getErrors();
        Iterator<KnowledgeBuilderError> iterator = errors.iterator();
        while (iterator.hasNext()) {
            System.out.println(iterator.next().toString());
        }
    }
    KnowledgeBase base = KnowledgeBaseFactory.newKnowledgeBase();
    base.addKnowledgePackages(builder.getKnowledgePackages());
    final StatefulKnowledgeSession ksession = base.newStatefulKnowledgeSession();
    // KnowledgeRuntimeLoggerFactory.newConsoleLogger(ksession);
    ksession.getWorkItemManager().registerWorkItemHandler("Log",
            new SendSMSWorkItemHandler());
    ProcessInstance startProcess = ksession.startProcess("flow");
    System.out.println("Signaling event");
    startProcess.signalEvent("ev1", "ev1");
    System.out.println("Signaled");
    ksession.fireUntilHalt();

    I am noticing that the event gets triggered and the action node connected to the event gets triggered, however things seem to get stuck at the join. The flow does not continue past the AND join and seems to get stuck; the action following the join does not get triggered. I also went through the Drools Flow documentation and all the example code, but I didn't find anything there. In addition, any hints about how to go about architecting the solution and implementing it would be great.

    Read the article

  • Data Flow Diagrams - Difference between Lines and Arrows

    - by Howdy_McGee
    I'm currently working with Visio to create Data Flow Diagrams for a System Analysis and Design class, but I'm unsure what the difference between ------ and ------> is. I can connect two shapes together with a line (process, entity, data store), but does the single line connecting the two mean data flow? Do I need to explicitly use the data flow arrow to show which way data is flowing? (There don't seem to be tags for this topic; maybe I'm in the wrong place?)

    Read the article

  • Extending SSIS with custom Data Flow components (Presentation)

    Download the slides and sample code from my Extending SSIS with custom Data Flow components presentation, first presented at the SQLBits II (The SQL) Community Conference. Abstract: Get some real-world insights into developing data flow components for SSIS. This starts with an introduction to the data flow pipeline engine, and explains the real differences between adapters and the three sub-types of transformation. Understanding how the different types of component behave and manage data is key to writing components of your own, and probably should be required knowledge for anyone building packages at all. Using sample code throughout, I will show you how to write components, as well as highlighting best practice and lessons learned. The sample code includes fully working example projects for source, destination and transformation components. Presentation & Samples (358KB): Extending SSIS with custom Data Flow components.zip

    Read the article

  • Flow-Design Cheat Sheet &ndash; Part I, Notation

    - by Ralf Westphal
    You want to avoid the pitfalls of object oriented design? Then this is the right place to start. Use Flow-Oriented Analysis (FOA) and -Design (FOD, or just FD for Flow-Design) to understand a problem domain and design a software solution. Flow-Orientation as described here is related to Flow-Based Programming, Event-Based Programming, Business Process Modelling, and even Event-Driven Architectures. But even though “thinking in flows” is not new, I found it helpful to deviate from those precursors for several reasons. Some aim at systems too big for the average programmer, some are concerned only with asynchronous processing, some are not very much concerned with programming at all. What I was looking for was a design method to help in software projects of any size, be they large or tiny, involving synchronous or asynchronous processing, local or distributed, running on the web, on the desktop or on a smartphone. That's why I took ideas from all of the above sources and some additional ones and came up with Event-Based Components, which later got repositioned and renamed to Flow-Design. In the meantime this has generated some discussion (in the German developer community) and several teams have started to work with Flow-Design. Also I've conducted quite a few trainings using Flow-Orientation for design. The results are very promising: developers find it much easier to design software using Flow-Orientation than OOAD-based object orientation. Since Flow-Orientation is moving fast and is not covered completely by a single source like a book, demand has increased for at least an overview of the current state of its notation. This page tries to answer that demand by briefly introducing/describing every notational element as well as its translation into C# source code. Take this as a cheat sheet to put next to your whiteboard when designing software. However, please do not expect any explanation as to the reasons behind Flow-Design elements. Details on why Flow-Design at all, and why in this specific way, you'll find in the literature covering the topic. Here's a resource page on Flow-Design/Event-Based Components, if you're able to read German.

    Notation

    Connected Functional Units
    The basic element of any FOD is the functional unit (FU). Think of FUs as some kind of software code block processing data. For the moment forget about classes, methods, “components”, assemblies or whatever. See a FU as an abstract piece of code. Software then consists of just collaborating FUs. I'm using circles/ellipses to draw FUs, but if you like, use rectangles; whatever suits your whiteboard needs best. The purpose of FUs is to process input and produce output; FUs are transformational. However, FUs are not called and do not call other FUs. There is no dependency between FUs. Data just flows into a FU (input) and out of it (output). From where and where to is of no concern to a FU. This way FUs can be concatenated in arbitrary ways. Each FU can accept input from many sources and produce output for many sinks.

    Flows
    Connected FUs form a flow with a start and an end. Data enters a flow at a source, and it leaves through a sink. Think of sources and sinks as special FUs which connect wires to the environment of a network of FUs.

    Wiring Details
    Data flows into/out of FUs through wires. This is to allude to electrical engineering, which has long been working with composable parts. Wires are attached to FUs using pins. They are the entry/exit points for the data flowing along the wires. Input/output pins currently need not be drawn explicitly; this is to keep designing on a whiteboard simple and quick. Data flowing is of some type, so wires have a type attached to them. And pins have names. If there is only one input pin and one output pin on a FU, though, you don't need to mention them. The default is Process for a single input pin, and Result for a single output pin. But you're free to give even single pins different names. There is a shortcut in use to address a certain pin on a destination FU. The type of the wire is put in parentheses for two reasons: 1. this way a “no-type” wire can easily be denoted, 2. this is a natural way to describe tuples of data. To describe how much data is flowing, a star can be put next to the wire type.

    Nesting – Boards and Parts
    If more than 5 to 10 FUs need to be put in a flow, a FD starts to become hard to understand. To keep diagrams clutter free they can be nested. You can turn any FU into a flow. This leads to Flow-Designs with different levels of abstraction. A in the above illustration is a high level functional unit; A.1 and A.2 are lower level functional units. One of the purposes of Flow-Design is to be able to describe systems on different levels of abstraction and thus make it easier to understand them. Humans use abstraction/decomposition to get a grip on complexity. Flow-Design strives to support this and make levels of abstraction first class citizens for programming. You can read the above illustration like this: functional units A.1 and A.2 detail what A is supposed to do. The whole of A's responsibility is decomposed into the smaller responsibilities A.1 and A.2. FU A thus does not do anything itself anymore! All A is responsible for is actually accomplished by the collaboration between A.1 and A.2. Since A now is not doing anything anymore except containing A.1 and A.2, functional units are divided into two categories: boards and parts. Boards just contain other functional units; their sole responsibility is to wire them up. A is a board. Boards thus depend on the functional units nested within them. This dependency is not of a functional nature, though. Boards are not dependent on services provided by nested functional units; they are just concerned with their interfaces, to be able to plug them together. Parts are the workhorses of flows. They contain the real domain logic; they actually transform input into output. However, they do not depend on other functional units. Please note the usage of source and sink in boards: they correspond to the input-pins and output-pins of the board.

    Implicit Dependencies
    Nesting functional units leads to a dependency tree. Boards depend on nested functional units; they are the inner nodes of the tree. Parts are independent; they are the leaves. Even though dependencies are the bane of software development, Flow-Design does not usually draw these dependencies. They are implicitly created by visually nesting functional units. And they are harmless: boards are so simple in their functionality that they are little affected by changes in the functional units they depend on. But functional units are implicitly dependent on more than nested functional units. They are also dependent on the data types of the wires attached to them. This is also natural and thus does not need to be made explicit. And it pertains mainly to parts being dependent. Since boards don't do anything with regard to a problem domain, they don't care much about data types. Their infrastructural purpose just needs the types of input/output-pins to match.

    Explicit Dependencies
    You could say Flow-Orientation is about tackling complexity at its root cause: dependencies. “Natural” dependencies are depicted naturally, i.e. implicitly. And wherever possible, dependencies are not even created. Functional units don't know their collaborators within a flow. This is core to Flow-Orientation. That makes for high composability of functional units. A part is as independent of other functional units as a motor is from the rest of the car. And a board is as dependent on nested functional units as a motor is on a spark plug or a crank shaft. With Flow-Design, software development moves closer to how hardware is constructed. Implicit dependencies are not enough, though. Sometimes explicit dependencies make designs easier – as counterintuitive as this might sound. So FD notation needs a way to denote explicit dependencies. Data flows along wires, but data does not flow along dependency relations. Instead, dependency relations represent service calls. Functional unit C is depending on/calling services on functional unit S. If you want to be more specific, name the services next to the dependency relation. Although you should try to stay clear of explicit dependencies, they are fundamentally ok. See them as a way to add another dimension to a flow. Usually the functionality of the independent FU (“Customer repository” above) is orthogonal to the domain of the flow it is referenced by. If you like, emphasize this by using different shapes for dependent and independent FUs, like above. Such dependencies can be used to link in resources like databases or shared in-memory state. FUs can not only produce output but can also have side effects. A common pattern for using such explicit dependencies is to hook a GUI into a flow as the source and/or the sink of data, which can be shortened. Treat FUs that others depend on as boards (with a special non-FD API the dependent part is connected to), but do not embed them in a flow in the diagram they are depended upon.

    Attributes of Functional Units
    Creation and usage of functional units can be modified with attributes. So far the following have shown themselves to be helpful:
    - Singleton: FUs are by default multitons. FUs in the same or different flows with the same name refer to the same functionality, but to different instances. Think of functional units as objects that get instantiated anew wherever they appear in a design. Sometimes, though, it's helpful to reuse the same instance of a functional unit; this is always due to valuable state it holds. Signify this by annotating the FU with a “(S)”.
    - Multiton: FUs on which others depend are singletons by default. This is because they usually are introduced where shared state comes into play. If you want to change them to be multitons, mark them with a “(M)”.
    - Configurable: Some parts need to be configured before they can do their work in a flow. Annotate them with a “(C)” to have them initialized before any data items to be processed by them arrive. Do not assume any order in which FUs are configured. How such configuration happens is an implementation detail.
    - Entry point: In each design there needs to be a single part where “it all starts”. That's the entry point for all processing. It's like Program.Main() in C# programs. Mark the entry point part with an “(E)”. Quite often this will be the GUI part. How the entry point is started is an implementation detail. Just consider it the first FU to start doing its job.

    Patterns / Standard Parts
    If more than a single wire is attached to an output-pin, that's called a split (or fork). The same data flows on all of the wires. Remember: Flow-Designs are synchronous by default, so a split does not mean data is processed in parallel afterwards. Processing still happens synchronously and thus one branch after another. Do not assume any specific order of the processing on the different branches after the split. It is common to do a split and let only parts of the original data flow on through the branches. This effectively means a map is needed after a split; this map can be implicit or explicit. Although FUs can have multiple input-pins, it is preferable in most cases to combine input data from different branches using an explicit join. The default output of a join is a tuple of its input values. The default behavior of a join is to output a value whenever a new input is received. However, to produce its first output a join needs an input on all its input-pins. Other join behaviors can be: reset all inputs after an output; only produce output if data arrives on certain input-pins.
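    The cheat sheet refers to translating every notational element into C# source code, but that code is not part of this excerpt. A minimal sketch only, assuming the common Event-Based Components mapping of input pins to methods and output pins to .NET events; the class names ToUpper, Printer and UpperCasePrinter and the wiring are invented for this example:

    using System;

    // A part: contains real domain logic. Its single input pin is the Process
    // method, its single output pin is the Result event.
    class ToUpper
    {
        public void Process(string input)
        {
            if (Result != null) Result(input.ToUpper());
        }
        public event Action<string> Result;
    }

    // Another part, used here as a sink.
    class Printer
    {
        public void Process(string text) { Console.WriteLine(text); }
    }

    // A board: holds no domain logic, its only responsibility is wiring parts up.
    class UpperCasePrinter
    {
        private readonly ToUpper toUpper = new ToUpper();
        private readonly Printer printer = new Printer();

        public UpperCasePrinter()
        {
            toUpper.Result += printer.Process;   // wire output pin to input pin
        }

        public void Process(string input) { toUpper.Process(input); }
    }

    Note how the board's Process pin just forwards to the first nested FU, matching the idea that a board does nothing itself beyond plugging its parts together.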

    Read the article

  • WebCenter Spaces 11g PS2 Task Flow Customization

    - by Javier Ductor
    Previously, I wrote about Spaces Template Customization. In order to adapt Spaces to the customer's prototype, it was necessary to change the template and skin, as well as the Members task flow. In this entry, I describe how to customize this task flow. (Default Members portlet; prototype Members portlet.) First thing to do, I downloaded SpacesTaskflowCustomizationApplication with its guide. This application allows developers to modify task flows in Spaces, such as Announcements, Discussions, Events, Members, etc. Before starting, some configuration is needed in JDeveloper, like changing the role to 'Customization Developer' mode, although this is explained in the application guide. It is important to know that task flows are modified through libraries; they cannot be updated directly in the source code like templates, so you must use the Structure panel for this. Steps to customize the Members portlet:
    1. There are two Members views: showIconicView and showListView. By default it is set to the iconic view, but in my case I preferred the list view, so I updated this default value in table-of-members-taskflow.xml.
    2. Change the TableOfMembers-ListView.jspx file. By editing this file, you can control the way this task flow is displayed. So I customized this list view using the Structure panel to get the desired look & feel.
    3. After the changes are made, click Save All, because every time a library changes, an XML file is generated with all modifications listed, and they must be saved.
    4. Rebuild the project and deploy the application.
    5. Open a WLST command window and import this customization into the MDS repository with the import command (see the sketch below).
    Eventually, this was the result. Other task flows can be customized in a similar way.
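    Step 5 only names the import. A minimal sketch of what it might look like using the standard WLST MDS command importMetadata; the connection details, application name, server name and export location below are placeholders, not values from the entry:

    # connect to the Admin Server first (URL and credentials are placeholders)
    connect('weblogic', 'welcome1', 't3://adminhost:7001')

    # import the exported customization documents into the MDS repository
    importMetadata(application='webcenter',        # Spaces application name (assumed)
                   server='WC_Spaces',             # managed server name (assumed)
                   fromLocation='/tmp/mdsexport',  # directory holding the customization
                   docs='/**')                     # import everything under that location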

    Read the article

  • Remote Task Flow vs. WSRP Portlets

    - by Frank Nimphius
    A remote task flow is a bounded task flow that is deployed as a stand-alone Java EE application on a remote server with its URL Invoke property set to url-invoke-allowed. The remote task flow is accessed either from a direct browser GET request or, when called from another ADF application, through the task flow call activity. For more information about how to invoke remote task flows from a task flow call activity see chapter 15.6.4 How to Call a Bounded Task Flow Using a URL of the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework at http://docs.oracle.com/cd/E23943_01/web.1111/b31974/taskflows_activities.htm#CHDJDJEF Compared to WSRP portlets, remote task flows in Oracle JDeveloper 11g R1 and R2 have a functional limitation in that they cannot be embedded as a region on a page but require the calling ADF application to navigate off to another application and page. The difference between a remote task flow call using the task flow call activity and a simple redirect to a remote Java EE application is that the remote task flow has a state token attached that allows the state of the calling application to be restored upon task flow return. A use case for the remote task flow call activity is a "yellow page lookup" scenario in which different ADF applications use a remote task flow to look up people, products or the like and return a selected value to the calling application. Note that remote task flow calls need to be performed from a bounded or unbounded top level task flow of the calling application. If called from a region (using the parent call activity) in a page, the region state is not recovered upon task flow return. ADF developers recently have identified remote task flows as an architecture pattern for partitioning their ADF applications into independently deployed Java EE applications. While this sounds like a desirable use of the remote task flow feature, it is not possible to achieve for as long as remote task flows don't render as an ADF region.

    Read the article

  • Using Exception Handler in an ADF Task Flow

    - by anmprs
    Problem Statement: An exception thrown in a task flow gets wrapped in an exception that gives an unintelligible error message to the user (Figure 1).
    Solution 1: Overwriting the error message with a user-friendly error message (Figure 2). Steps to code:
    1. Generating an exception: Write a method that throws an exception and drop it in the task flow.
    2. Adding an Exception Handler: Write a method (see the sketch after this entry) to overwrite the error in the bean or data control, and drop the method in the task flow (Figure 3). This method is marked as the Exception Handler by Right-Click on method > Mark Activity > Exception Handler, or by the button displayed in the screenshot (Figure 4). The final task flow should look like this; it will overwrite the exception with the error message in Figure 2. Note: there is no need for a control flow between the two method calls (Figure 5).
    Solution 2: Re-routing the task flow to display an error page (Figure 6). Steps to code:
    1. This is the same as step 1 of Solution 1.
    2. Adding an Exception Handler: The exception handler is not always a method; in this case it is implemented on a task flow return. The task flow looks like this (Figure 7). In the figure below you will notice that the task flow return points to a control flow 'error' in the calling task flow (Figure 8). This control flow in turn goes to a view 'error.jsff' which contains the error message that one wishes to display, as seen in Figure 9. ('withErrorHandling' is a call to the task flow in Figure 7.)
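    The method referred to in step 2 of Solution 1 is shown in a figure that is not reproduced here. A minimal sketch of such a handler, assuming a plain JSF FacesMessage is enough to replace the unintelligible text; the bean name and message wording are invented:

    import javax.faces.application.FacesMessage;
    import javax.faces.context.FacesContext;

    public class ErrorHandlerBean {

        // Drop this method in the task flow and mark it as the Exception Handler.
        public void handleError() {
            FacesContext ctx = FacesContext.getCurrentInstance();
            // Replace the wrapped exception with a user-friendly message.
            FacesMessage msg = new FacesMessage(FacesMessage.SEVERITY_ERROR,
                    "An error occurred while processing your request.",
                    "Please contact the help desk if the problem persists.");
            ctx.addMessage(null, msg);
        }
    }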

    Read the article

  • Task Flow Design Paper Revised

    - by Duncan Mills
    Thanks to some discussion over at the ADF Methodology Group and contributions from Simon Lessard and Jan Vervecken I have been able to make some refinements to the Task Flow Design Fundamentals paper on OTN.As a bonus, whilst I was making some edits anyway I've included some of Frank Nimphius's memory scope diagrams which are a really useful tool for understanding how request, view, backingBean and pageFlow scopes all fit together.

    Read the article

  • How to Use USER_DEFINED Activity in OWB Process Flow

    - by Jinggen He
    Process Flow is a very important component of Oracle Warehouse Builder. With Process Flow, we can create and control the ETL process by setting all kinds of activities in a well-constructed flow. In Oracle Warehouse Builder 11gR2, there are 28 kinds of activities, which fall into three categories: Control activities, OWB specific activities and Utility activities. For more information about Process Flow activities, please refer to OWB online doc. Most of those activities are pre-defined for some specific use. For example, the Mapping activity allows execution an OWB mapping in Process Flow and the FTP activity allows an interaction between the local host and a remote FTP server. Besides those activities for specific purposes, the User Defined activity enables you to incorporate into a Process Flow an activity that is not defined within Warehouse Builder. So the User Defined activity brings flexibility and extensibility to Process Flow. In this article, we will take an amazing tour of using the User Defined activity. Let's start. Enable execution of User Defined activity Let's start this section from creating a very simple Process Flow, which contains a Start activity, a User Defined activity and an End Success activity. Leave all parameters of activity USER_DEFINED unchanged except that we enter /tmp/test.sh into the Value column of the COMMAND parameter. Then let's create the shell script test.sh in /tmp directory. Here is the content of /tmp/test.sh (this article is demonstrating a scenario in Linux system, and /tmp/test.sh is a Bash shell script): echo Hello World! > /tmp/test.txt Note: don't forget to grant the execution privilege on /tmp/test.sh to OS Oracle user. For simplicity, we just use the following command. chmod +x /tmp/test.sh OK, it's so simple that we’ve almost done it. Now deploy the Process Flow and run it. For a newly installed OWB, we will come across an error saying "RPE-02248: For security reasons, activity operator Shell has been disabled by the DBA". See below. That's because, by default, the User Defined activity is DISABLED. Configuration about this can be found in <ORACLE_HOME>/owb/bin/admin/Runtime.properties: property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=DISABLED The property can be set to three different values: NATIVE_JAVA, SCHEDULER and DISBALED. Where NATIVE_JAVA uses the Java 'Runtime.exec' interface, SCHEDULER uses a DBMS Scheduler external job submitted by the Control Center repository owner which is executed by the default operating system user configured by the DBA. DISABLED prevents execution via these operators. We enable the execution of User Defined activity by setting: property.RuntimePlatform.0.NativeExecution.Shell.security_constraint= NATIVE_JAVA Restart the Control Center service for the change of setting to take effect. cd <ORACLE_HOME>/owb/rtp/sql sqlplus OWBSYS/<password of OWBSYS> @stop_service.sql sqlplus OWBSYS/<password of OWBSYS> @start_service.sql And then run the Process Flow again. We will see that the Process Flow completes successfully. The execution of /tmp/test.sh successfully generated a file /tmp/test.txt, containing the line Hello World!. Pass parameters to User Defined Activity The Process Flow created in the above section has a drawback: the User Defined activity doesn't accept any information from OWB nor does it give any meaningful results back to OWB. That's to say, it lacks interaction. Maybe, sometimes such a Process Flow can fulfill the business requirement. 
But for most of the time, we need to get the User Defined activity executed according to some information prior to that step. In this section, we will see how to pass parameters to the User Defined activity and pass them into the to-be-executed shell script. First, let's see how to pass parameters to the script. The User Defined activity has an input parameter named PARAMETER_LIST. This is a list of parameters that will be passed to the command. Parameters are separated from one another by a token. The token is taken as the first character on the PARAMETER_LIST string, and the string must also end in that token. Warehouse Builder recommends the '?' character, but any character can be used. For example, to pass 'abc,' 'def,' and 'ghi' you can use the following equivalent: ?abc?def?ghi? or !abc!def!ghi! or |abc|def|ghi| If the token character or '\' needs to be included as part of the parameter, then it must be preceded with '\'. For example '\\'. If '\' is the token character, then '/' becomes the escape character. Let's configure the PARAMETER_LIST parameter as below: And modify the shell script /tmp/test.sh as below: echo $1 is saying hello to $2! > /tmp/test.txt Re-deploy the Process Flow and run it. We will see that the generated /tmp/test.txt contains the following line: Bob is saying hello to Alice! In the example above, the parameters passed into the shell script are static. This case is not so useful because: instead of passing parameters, we can directly write the value of the parameters in the shell script. To make the case more meaningful, we can pass two dynamic parameters, that are obtained from the previous activity, to the shell script. Prepare the Process Flow as below: The Mapping activity MAPPING_1 has two output parameters: FROM_USER, TO_USER. The User Defined activity has two input parameters: FROM_USER, TO_USER. All the four parameters are of String type. Additionally, the Process Flow has two string variables: VARIABLE_FOR_FROM_USER, VARIABLE_FOR_TO_USER. Through VARIABLE_FOR_FROM_USER, the input parameter FROM_USER of USER_DEFINED gets value from output parameter FROM_USER of MAPPING_1. We achieve this by binding both parameters to VARIABLE_FOR_FROM_USER. See the two figures below. In the same way, through VARIABLE_FOR_TO_USER, the input parameter TO_USER of USER_DEFINED gets value from output parameter TO_USER of MAPPING_1. Also, we need to change the PARAMETER_LIST of the User Defined activity like below: Now, the shell script is getting input from the Mapping activity dynamically. Deploy the Process Flow and all of its necessary dependees then run the Process Flow. We see that the generated /tmp/test.txt contains the following line: USER B is saying hello to USER A! 'USER B' and 'USER A' are two outputs of the Mapping execution. Write the shell script within Oracle Warehouse Builder In the previous section, the shell script is located in the /tmp directory. But sometimes, when the shell script is small, or for the sake of maintaining consistency, you may want to keep the shell script inside Oracle Warehouse Builder. We can achieve this by configuring these three parameters of a User Defined activity properly: COMMAND: Set the path of interpreter, by which the shell script will be interpreted. PARAMETER_LIST: Set it blank. SCRIPT: Enter the shell script content. Note that in Linux the shell script content is passed into the interpreter as standard input at runtime. About how to actually pass parameters to the shell script, we can utilize variable substitutions. 
    As in the following figure, ${FROM_USER} will be replaced by the value of the FROM_USER input parameter of the User Defined activity, and so will the ${TO_USER} symbol. Besides the custom substitution variables, OWB also provides some system pre-defined substitution variables; you can refer to the online documentation for those. Deploy the Process Flow and run it. We see that the generated /tmp/test.txt contains the following line: USER B is saying hello to USER A!

    Leverage the return value of the User Defined activity
    All of the previous sections connect the User Defined activity to END_SUCCESS with an unconditional transition. But what should we do if we want different subsequent activities for different shell script execution results?
    1. The simplest way is to add three simple-conditioned outgoing transitions for the User Defined activity, just like the figure below. In the figure, to simplify the scenario, we connect the User Defined activity to three End activities. Basically, if the shell script ends successfully, the whole Process Flow will end at END_SUCCESS; otherwise, the whole Process Flow will end at END_ERROR (in our case, ending at END_WARNING seldom happens). In the real world, we can add more complex and meaningful subsequent business logic.
    2. Or we can utilize complex conditions to work with different results of the User Defined activity. Previously, our script only had this line:
    echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt
    We can add more logic to it and return different values accordingly:
    echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt
    if CONDITION_1 ; then
        ......
        exit 0
    fi
    if CONDITION_2 ; then
        ......
        exit 2
    fi
    if CONDITION_3 ; then
        ......
        exit 3
    fi
    After that we can leverage the result by checking RESULT_CODE in the condition expression of those outgoing transitions. Let's suppose that we have the Process Flow as the following graph (SUB_PROCESS_n stands for further different processes). We can set a complex condition for the transition from USER_DEFINED to SUB_PROCESS_1 like this; other transitions can be set in the same way. Note that, in our shell script, we return 0, 2 and 3, but not 1: in a Linux system, if the shell script comes across a system error like an IO error, the return value will be 1. We can explicitly handle such a return value.

    Summary
    Let's summarize what has been discussed in this article:
    - How to create a Process Flow with a User Defined activity in it
    - How to pass parameters from the prior activity to the User Defined activity and finally into the shell script
    - How to write the shell script within Oracle Warehouse Builder
    - How to do variable substitutions
    - How to let the User Defined activity return different values, and how we can leverage them

    Read the article

  • No flow definition found. Spring web flow

    - by user184794
    Hi, I am new to Spring webflow and now I am trying the example in Spring recipes book and I know this is a basic question. I am getting the error as follows, org.springframework.webflow.definition.registry.NoSuchFlowDefinitionException: No flow definition '${flowExecutionUrl}&_eventId=next' found at org.springframework.webflow.definition.registry.FlowDefinitionRegistryImpl.getFlowDefinitionHolder(FlowDefinitionRegistryImpl.java:126) at org.springframework.webflow.definition.registry.FlowDefinitionRegistryImpl.getFlowDefinition(FlowDefinitionRegistryImpl.java:61) at org.springframework.webflow.executor.FlowExecutorImpl.launchExecution(FlowExecutorImpl.java:138) at org.springframework.webflow.mvc.servlet.FlowHandlerAdapter.handle(FlowHandlerAdapter.java:193).... Shown below is my configurations, <bean name="flowController" class="org.springframework.webflow.mvc.servlet.FlowController"> <property name="flowExecutor" ref="flowExecutor"></property> </bean> <webflow:flow-executor id="flowExecutor" /> <webflow:flow-registry id="flowRegistry" > <webflow:flow-location path="/WEB-INF/flows/welcome/welcome.xml"></webflow:flow-location> </webflow:flow-registry> /WEB-INF/flows/welcome/welcome.xml, <view-state id="welcome"> <transition on="next" to="introduction" /> <transition on="skip" to="menu" /> </view-state> <view-state id="introduction"> <on-render> <evaluate expression="libraryService.getHolidays()" result="requestScope.holidays" /> </on-render> <transition on="next" to="menu" /> </view-state> <view-state id="menu"></view-state> In welcome.jsp, <a href="${flowExecutionUrl}&_eventId=next">Next</a> <a href="${flowExecutionUrl}&_eventId=skip">Skip</a> Please let me know what is going wrong. I am using 2.0.9 Release. Thanks in advance, SD

    Read the article

  • ADF EMG Task Flow Tester Now Available!

    - by Steven Davelaar
    Testing ADF applications has become much easier as of today. At the ADF EMG day at Oracle Open World a new tool was announced, the ADF EMG Task Flow Tester.  The ADF EMG Task Flow Tester is a web-based testing tool for ADF bounded task flows. It supports testing of task flows that use pages as well as task flows using page fragments. A sophisticated mechanism to specify task flow input parameters is provided. A set of task flow input parameters and run options can be saved as a task flow testcase. Task flows and their testcases can be exported to XML and imported from XML.      This ADF EMG task Flow Tester can help you in a number of ways: It allows you to unit test your task flows in complete isolation, ruling out dependencies with other task flows when finding and investigating issues. It allows you to quickly test various combinations of task flow input parameter without redeploying the application It keeps your application cleaner (and saves time) as you no longer need to create separate test pages for each and every bounded task flow with page fragments that you used to create before. You can use the tester to simulate a call to your task flow so you can easily test task flow return values and the return navigation outcome. The tool is easy to install as a JDeveloper extension, and easy to use. Check out the Getting Started section in the User Guide and you will be up and running in 5 minutes! Your feedback is most welcome, if you run into issues or have enhancement requests, then check out this page.

    Read the article

  • How to obtain flow while pair programming in agile development?

    - by bizso09
    Flow is a concept introduced by Mihaly Csikszentmihalyi. In short, it means getting into the "zone". You feel immersed in the task you are doing, you are in deep focus and concentration, and the task difficulty is just right for you, but challenging at the same time. When people acquire flow their productivity shoots up. Programming requires a great deal of mental focus and programmers need to juggle several things in their mind at once. Many like to work in a quiet environment where they can direct their full attention to the task. If they are interrupted, it may take several minutes, sometimes hours, to get back into flow. I understand that an agile way of doing software development is called pair programming. This is promoted in Extreme Programming too. It means you put the whole software development team in one room so that communication is seamless. You do programming with your pair because this way you get instant code reviews and fix bugs sooner. However, I have always had problems obtaining flow while doing pair programming because of the constant stream of interruptions. I'm thinking deeply about an issue, then all of a sudden someone asks me a question from another pair. My train of thought is all lost. How can you obtain and keep flow while doing agile pair programming?

    Read the article

  • How can you achieve and maintain flow while pair programming?

    - by bizso09
    Flow is a concept introduced by Mihaly Csikszentmihalyi; in short, it means to get into the "zone". You feel immersed in your task, focused; the task can be difficult but challenging at the same time. When people achieve flow their productivity shoots up. Programming requires a great deal of mental focus because we often need to juggle several things in our minds at once. Many like to work in a quiet environment where they can direct their full attention to the task. If they are interrupted, it may take several minutes or even hours to get back into flow. I understand there's a practice in agile development and extreme programming called pair programming. It means you put the whole software development team in one room so that communication is seamless. You do write code with your pair because this way you get instant code reviews and fewer bugs get through. I've always had problems achieving flow while doing pair programming because of constant interruptions. I'm thinking deep about an issue then all of sudden someone asks me a question from another pair. My train of thought is lost. How can you achieve and maintain flow while pair programming?

    Read the article

  • Should I enable 802.3x hardware flow control?

    - by Stu Thompson
    What is the conventional wisdom regarding 802.3x flow control? I'm setting up a network at a new colo and am wondering whether I should be enabling it or not. My oh-cool-a-bright-and-shiny-new-toy self wants to enable it, but this seems like one of those decisions that could blow up in my face later on. My network:
    - An HP ProCurve 2510G-24 switch
    - A pair of Debian 5 HP DL380 G5's with built-in NC373i 2-port NICs LACP'd as one link, 9000-byte jumbo frames enabled. (Application)
    - A pair of hand-built Ubuntu servers with 4-port Intel Pro/1000 NICs LACP'd as one link, 9000-byte jumbo frames enabled. (NAS)
    - A few other servers with single 1Gbps ports, but one with 100Mbps.
    Most of this kit supports 802.3x. I've been enabling it as I go along, and am about to test the network. But as my 'go live' day nears, I am worried about the 802.3x decision as I've never explicitly used it before. Also, I've read some 10-year-old articles out there on the Intertubes that warn against using flow control. Should I be enabling 802.3x hardware flow control?
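    Not part of the question, but on the Debian and Ubuntu hosts mentioned, ethtool can show and change the 802.3x pause settings per NIC; a small sketch, assuming an interface named eth0:

    # Show the current 802.3x pause (flow control) settings for eth0
    ethtool -a eth0

    # Enable RX/TX pause frames on eth0 (autonegotiation permitting)
    ethtool -A eth0 rx on tx on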

    Read the article

  • Tab control in Silverlight 3.0 and Dirty data

    - by Vinayak Bhosale
    We are using the tab control in our project. While using this control I came across a few issues: When the tab control loads, it invokes the constructors of all the XAML pages that form the individual tabs. Can this be avoided? Is there any event on the tab control that we can use to identify dirty data on the previous tab that I may have visited? In other words, can I prevent the user from navigating to some other tab before saving the changes on the current tab?
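    A possible angle on the second question, sketched under the assumption that the Silverlight SDK TabControl's SelectionChanged event is available; the control name Tabs, the IsDirty check and the message text are placeholders:

    using System.Windows;
    using System.Windows.Controls;

    public partial class EditPage : UserControl
    {
        private TabControl Tabs;   // stand-in for the x:Name'd TabControl from XAML
        private bool reverting;

        private void Tabs_SelectionChanged(object sender, SelectionChangedEventArgs e)
        {
            if (reverting) return;                      // ignore the change we trigger ourselves
            if (e.RemovedItems.Count == 0) return;      // no previous tab to protect

            if (IsDirty())
            {
                reverting = true;
                Tabs.SelectedItem = e.RemovedItems[0];  // jump back to the previous tab
                reverting = false;
                MessageBox.Show("Please save your changes before leaving this tab.");
            }
        }

        private bool IsDirty()
        {
            // placeholder: compare the current field values against the loaded ones
            return false;
        }
    }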

    Read the article

  • Five hours of Task Flow Overview Recordings Available

    - by Frank Nimphius
    In addition to the ADF Controller task flow documentation in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework 11g Release 1 http://download.oracle.com/docs/cd/E21764_01/web.1111/b31974/partpage3.htm#BABHIIAI The ADF Insider website … http://www.oracle.com/technetwork/developer-tools/adf/learnmore/adfinsider-093342.html … hosts five online videos that explain how to build and work with ADF Controller task flows in Oracle ADF. ADF Task Flow - Overview (Part 1) This 90 minute recording introduces the concept of ADF unbounded and bounded task flows, as well as other ADF Controller features. The session starts with an overview of unbounded task flows, bounded task flows and the different activities that exist for developers to build complex application flows. Exception handling and the Train navigation model is also covered in this first part of a two part series. By example of developing a sample application, the recording guides viewers through building unbounded and bounded task flows. This session is continued in a second part. http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/taskflow-overview-p1/taskflow-overview-p1.html ADF Task Flow - Overview (Part 2) This 75 minute session continues where part 1 ended and completes the sample application that guides viewers through different aspects of unbounded and bounded task flow development. In this recording, memory scopes, save for later, task flow opening in dialogs and remote task flow calls are explained and demonstrated. If you are new to ADF Task Flow, then it is recommended to first watch part 1 of this series to be able to follow the explanation guided by the sample application. http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/taskflow-overview-p2/taskflow-overview-p2.html ADF Region Interaction - An Overview This session covers most of the options that exist for communicating between regions. It briefly discusses what it takes to build regions from bounded task flows before going into details using slides and samples. The following interaction is explained: contextual events, queue action in region, input parameters and PPR, drag and drop, shared Data Controls, parent action and region navigation listener. http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/adf-region-interaction/adf-region-interaction.html ADF Region Interaction - Contextual Events Contextual event is used as a communication channel between a parent view and its contained regions, as well as between regions. By example, this session explains how to set up contextual events, how to define producers and event listeners and how to define the payload message. http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/AdfInsiderContextualEvents/AdfInsiderContextualEvents.html

    Read the article

  • Complex type support in process flow &ndash; XMLTYPE

    - by shawn
    Before the OWB 11.2 release, only 5 simple data types were supported in process flow: DATE, BOOLEAN, INTEGER, FLOAT and STRING. A new complex data type, XMLTYPE, was added in 11.2 in order to support complex data being passed between process flow activities. In this article we will give a simple example to illustrate the usage of the new type and some related editors.

    Suppose there is a bookstore that uses XML-format orders as shown below (we use the simplest form for illustration purposes); then we can create a process flow to handle the order: take the order as the input, extract the necessary information, and generate a confirmation email to the customer automatically.

    <order id='0001'>
        <customer>
            <name>Tom</name>
            <email>[email protected]</email>
        </customer>
        <book id='Java_001'>
            <quantity>3</quantity>
        </book>
    </order>

    Consider a simple use case here: we use an input parameter/variable with XMLTYPE to hold the XML content of the order; then we can use an Assign activity to retrieve the email info from the order; after that, we can create an Email activity to send the email (other activities might be added in a practical case, but will not be described here).

    1) Set the XML content value
    For testing purposes, we will create a variable to hold the sample order, and this will then be used among the process flow activities. When the variable is of XMLTYPE and the "Literal" value is set to true, the advanced editor is enabled. Click the "Advance Editor" button shown above and a simple XML editor will pop up. The editor has basic features like syntax highlighting and checking, as shown below. We can also do basic validation, or validation against a schema, with the editor by selecting the normalized schema. With this, it becomes easier to provide the value for XMLTYPE variables.

    2) Extract information from the XML content
    After setting the value, we need to extract the email information with the Assign activity. In process flow, an enhanced expression builder is used to help users construct the XPath for extracting values from XML content. When the variable's literal value is set to false, the advanced editor is enabled. Click the button and the advanced editor will pop up, as shown below. The editor is based on the expression builder (which is often used in mappings etc.); an XPath lib panel is appended which provides some help on how to write the XPath. The expression used here is "XMLTYPE.EXTRACT(XML_ORDER,'/order/customer/email/text()').getStringVal()", which uses '/order/customer/email/text()' as the XPath to extract the email info from the XML document. A variable called "EMAIL_ADDR" is created with the String data type to hold the extracted value. Then we bind the "VARIABLE" parameter of the Assign activity to the "EMAIL_ADDR" variable, which means the value of the "EMAIL_ADDR" variable will be set to the result of the "VALUE" parameter of the Assign activity.

    3) Use the extracted information in the Email activity
    We bind the "TO_ADDRESS" parameter of the Email activity to the "EMAIL_ADDR" variable created in the above step.
We can also extract other information from the xml order directly through the expression, for example, we can set the “MESSAGE_BODY” with value “'Dear '||XMLTYPE.EXTRACT(XML_ORDER,'/order/customer/name/text()').getStringVal()||chr(13)||chr(10)||'   You have ordered '||XMLTYPE.EXTRACT(XML_ORDER,'/order/book/quantity/text()').getStringVal()||' '||XMLTYPE.EXTRACT(XML_ORDER,'/order/book/@id').getStringVal()”. This expression will extract the customer name, the quantity and the book id from the order to compose the message body.     To make the email activity work, we need provide some other necessary information, Such as “SMTP_SERVER” (which is the SMTP server used to send the emails, like “mail.bookstore.com”. The default PORT number is set to 25. You need to change the value accordingly), “FROM_ADDRESS” and “SUBJECT”. Then the process flow is ready to go.     After deploying the process flow package, we can simply run the process flow to check if the result is as expected (An email will be sent to the specified email address with proper subject and message body).     Note: In oracle 11g, there is an enhanced security feature - ACL (Access Control List), which restrict the network access within db, so we need to edit the list to allow UTL_SMTP work if you are using oracle 11g. Refer to chapter “Access Control Lists for UTL_TCP/HTTP/SMTP” and “Managing Fine-Grained Access to External Network Services” for more details.       In previous releases, XMLTYPE already exists in other OWB objects, like mapping/transformation etc. When the mapping/transformation is dragged into a process flow, the parameters with XMLTYPE are mapped to STRING. Now with the XMLTYPE support in process flow, the XMLTYPE will map to XMLTYPE in a more natural way, and we can leverage the new data type for the design.
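    Not part of the article, but the XPath used in the Assign activity can be sanity-checked directly in SQL before wiring it into the process flow; a sketch against DUAL (the email address is a placeholder, since it is redacted in the sample order):

    -- Try the same EXTRACT call the Assign activity uses, in SQL*Plus or SQL Developer.
    SELECT XMLTYPE('<order id="0001">
                      <customer><name>Tom</name><email>tom@example.com</email></customer>
                      <book id="Java_001"><quantity>3</quantity></book>
                    </order>'
           ).EXTRACT('/order/customer/email/text()').getStringVal() AS email_addr
    FROM dual;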

    Read the article

  • Having problems with sqlDataReader

    - by Anthony
    I am using a sqlDataReader to get data and set it to session variables. The problem is it doesn't want to work with expressions. I can reference any other column in the table, but not the expressions. The SQL does work. The code is below. Thanks in advance, Anthony Using myConnectionCheck As New SqlConnection(myConnectionString) Dim myCommandCheck As New SqlCommand() myCommandCheck.Connection = myConnectionCheck myCommandCheck.CommandText = "SELECT Projects.Pro_Ver, Projects.Pro_Name, Projects.TL_Num, Projects.LP_Num, Projects.Dev_Num, Projects.Val_Num, Projects.Completed, Flow.Initiate_Date, Flow.Requirements, Flow.Req_Date, Flow.Dev_Review, Flow.Dev_Review_Date, Flow.Interface, Flow.Interface_Date, Flow.Approval, Flow.Approval_Date, Flow.Test_Plan, Flow.Test_Plan_Date, Flow.Dev_Start, Flow.Dev_Start_Date, Flow.Val_Start, Flow.Val_Start_Date, Flow.Val_Complete, Flow.Val_Complete_Date, Flow.Stage_Production, Flow.Stage_Production_Date, Flow.MKS, Flow.MKS_Date, Flow.DIET, Flow.DIET_Date, Flow.Closed, Flow.Closed_Date, Flow.Dev_End, Flow.Dev_End_Date, Users_1.Email AS Expr1, Users_2.Email AS Expr2, Users_3.Email AS Expr3, Users_4.Email AS Expr4, Users_4.FNAME, Users_3.FNAME AS Expr5, Users_2.FNAME AS Expr6, Users_1.FNAME AS Expr7 FROM Projects INNER JOIN Users AS Users_1 ON Projects.TL_Num = Users_1.PIN INNER JOIN Users AS Users_2 ON Projects.LP_Num = Users_2.PIN INNER JOIN Users AS Users_3 ON Projects.Dev_Num = Users_3.PIN INNER JOIN Users AS Users_4 ON Projects.Val_Num = Users_4.PIN INNER JOIN Flow ON Projects.id = Flow.Flow_Pro_Num WHERE id = " myCommandCheck.CommandText += QSid myConnectionCheck.Open() myCommandCheck.ExecuteNonQuery() Dim count As Int16 = myCommandCheck.ExecuteScalar If count = 1 Then Dim myDataReader As SqlDataReader myDataReader = myCommandCheck.ExecuteReader() While myDataReader.Read() Session("TL_email") = myDataReader("Expr1").ToString() Session("PE_email") = myDataReader("Expr2").ToString() Session("DEV_email") = myDataReader("Expr3").ToString() Session("VAL_email") = myDataReader("Expr4").ToString() Session("Project_Name") = myDataReader("Pro_Name").ToString() End While myDataReader.Close() End If End Using
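    Not part of the original post, but when aliased columns such as Expr1 come back empty it can help to dump the column names the reader actually exposes; a hypothetical snippet that could sit right after ExecuteReader(), reusing the post's myDataReader variable:

    ' List every column the SqlDataReader exposes, so the alias names
    ' (Expr1..Expr7) can be checked against what the query really returns.
    For i As Integer = 0 To myDataReader.FieldCount - 1
        System.Diagnostics.Debug.WriteLine( _
            String.Format("{0}: {1}", i, myDataReader.GetName(i)))
    Next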

    Read the article

  • How to build a data flow?

    - by salvationishere
    I am running Visual Studio 2008, the SSIS Tutorial described on: http://msdn.microsoft.com/en-us/library/ms167106.aspx I finished all of the tasks but am getting following errors: Error 1 Validation error. Extract Sample Currency Data: Extract Sample Currency Data: input column "CurrencyAlternateKey" (123) has lineage ID 55 that was not previously used in the Data Flow task. Lesson 1.dtsx 0 0 Error 2 Validation error. Extract Sample Currency Data SSIS.Pipeline: input column "CurrencyAlternateKey" (123) has lineage ID 55 that was not previously used in the Data Flow task. Lesson 1.dtsx 0 0 Can you tell what I need to do to make this build without errors?

    Read the article

  • SSIS Snack: Data Flow Source Adapters

    - by andyleonard
    Introduction Configuring a Source Adapter in a Data Flow Task couples (binds) the Data Flow to an external schema. This has implications for dynamic data loads. "Why Can't I...?" I'm often asked a question similar to the following: "I have 17 flat files with different schemas that I want to load to the same destination database - how many Data Flow Tasks do I need?" I reply "17 different schemas? That's easy, you need 17 Data Flow Tasks." In his book Microsoft SQL Server 2005 Integration Services...(read more)

    Read the article

  • Building a control-flow graph from an AST with a visitor pattern using Java

    - by omegatai
    Hi guys, I'm trying to figure out how to implement my LEParserCfgVisitor class as to build a control-flow graph from an Abstract-Syntax-Tree already generated with JavaCC. I know there are tools that already exist, but I'm trying to do it in preparation for my Compilers final. I know I need to have a data structure that keeps the graph in memory, and I want to be able to keep attributes like IN, OUT, GEN, KILL in each node as to be able to do a control-flow analysis later on. My main problem is that I haven't figured out how to connect the different blocks together, as to have the right edge between each blocks depending on their nature: branch, loops, etc. In other words, I haven't found an explicit algorithm that could help me build my visitor. Here is my empty Visitor. You can see it works on basic langage expressions, like if, while and basic operations (+,-,x,^,...) public class LEParserCfgVisitor implements LEParserVisitor { public Object visit(SimpleNode node, Object data) { return data; } public Object visit(ASTProgram node, Object data) { data = node.childrenAccept(this, data); return data; } public Object visit(ASTBlock node, Object data) { } public Object visit(ASTStmt node, Object data) { } public Object visit(ASTAssignStmt node, Object data) { } public Object visit(ASTIOStmt node, Object data) { } public Object visit(ASTIfStmt node, Object data) { } public Object visit(ASTWhileStmt node, Object data) { } public Object visit(ASTExpr node, Object data) { } public Object visit(ASTAddExpr node, Object data) { } public Object visit(ASTFactExpr node, Object data) { } public Object visit(ASTMultExpr node, Object data) { } public Object visit(ASTPowerExpr node, Object data) { } public Object visit(ASTUnaryExpr node, Object data) { } public Object visit(ASTBasicExpr node, Object data) { } public Object visit(ASTFctExpr node, Object data) { } public Object visit(ASTRealValue node, Object data) { } public Object visit(ASTIntValue node, Object data) { } public Object visit(ASTIdentifier node, Object data) { } } Can anyone give me a hand? Thanks!
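    Not from the original question, but one common way to wire the blocks is to track the "current" CFG node while visiting and, for a branch, connect the condition node to both branches and both branch exits to a join node; a rough, compilable sketch with invented class names (CfgNode, CfgBuilderSketch), not tied to the JavaCC-generated AST classes:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical CFG node: successors plus room for IN/OUT/GEN/KILL sets later.
    class CfgNode {
        final String label;
        final List<CfgNode> successors = new ArrayList<>();
        CfgNode(String label) { this.label = label; }
        void addEdge(CfgNode target) { successors.add(target); }
    }

    class CfgBuilderSketch {
        private CfgNode current;   // last node of the block built so far

        // Wire an if-statement: pred -> cond -> {then, else} -> join, return the join.
        // A while-statement is similar: cond -> body -> back to cond, plus cond -> exit.
        CfgNode visitIf(CfgNode pred, CfgNode conditionNode,
                        Runnable buildThenBranch, Runnable buildElseBranch) {
            pred.addEdge(conditionNode);
            CfgNode join = new CfgNode("if-join");

            current = conditionNode;   // the "true" edge starts at the condition
            buildThenBranch.run();     // visitor recurses into the then-part, advancing 'current'
            current.addEdge(join);

            current = conditionNode;   // the "false" edge also starts at the condition
            buildElseBranch.run();     // visitor recurses into the else-part
            current.addEdge(join);

            current = join;            // statements after the if continue from the join
            return join;
        }
    }

    In the actual visitor, the two Runnables would simply be the childrenAccept calls on the then- and else-subtrees, with the visitor updating the current node as it goes.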

    Read the article

  • gwt panel flow panel

    - by msaif
    I would like to design the entire HTML page with GWT, but when I press Ctrl and + the entire page must zoom from the center, not from the upper left corner. What type of panel should I use? FlowPanel, StackPanel? I don't know.

    Read the article
