Search Results

Search found 19802 results on 793 pages for 'linq entity framework'.

  • ATG Live Webcast November 2nd: Using Oracle ADF with Oracle E-Business Suite

    - by Bill Sawyer
    After a break for Oracle Open World 2012, the ATG Live Webcast series is restarting this Friday, November 2nd, 2012 with the webcast: Using Oracle ADF with Oracle E-Business Suite: The Full Integration View. Oracle E-Business Suite delivers functionality for handling the core business of your organization. This webcast provides an overview of how to use Oracle Application Development Framework (Oracle ADF) to deliver alternative user interfaces to existing Oracle E-Business Suite processes. The webcast also explores integration between the two worlds using the Oracle E-Business Suite SDK for Java.

    Date: Friday, November 2, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenters: Sara Woodhull, Principal Product Manager, E-Business Suite ATG
                Juan Camilo Ruiz, Principal Product Manager, ADF

    Webcast Registration Link (Preregistration is optional but encouraged)

    To hear the audio feed:
        Domestic Participant Dial-In Number: 877-697-8128
        International Participant Dial-In Number: 706-634-9568
        Additional International Dial-In Numbers Link
        Dial-In Passcode: 103190

    To see the presentation:
        The Direct Access Web Conference details are:
        Website URL: https://ouweb.webex.com
        Meeting Number: 590254265

    If you miss this webcast, or have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay, and you can find our archive of past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

  • Oracle OpenWorld Series: All Things Mobile

    - by Michelle Kimihira
    I caught up with Joe Huang, Senior Principal Product Manager for the Mobile Application Development Framework, to hear his recommendations for Oracle OpenWorld. Use this Focus On document, which provides a roadmap to must-attend sessions and demos.

    By Joe Huang

    This year's OpenWorld promises to be "THE" event for anyone interested in mobile enterprise applications. Although Oracle has had a rich portfolio of mobile products for many years now, there is a much stronger focus on mobile this year. Every single one of our customers is looking to develop a mobile strategy and bring key business processes to mobile users, and as you will see in the various keynotes, sessions, and demos during OpenWorld, Oracle is clearly the leader in mobile technologies and applications. Look for mobile development technologies being demonstrated in the Oracle Red Lounge, located in the Moscone North Upper Lobby, where innovative technologies from Oracle are being showcased. A few select sessions where mobile development technologies will be highlighted:

    Monday, 10/1, 10:45 AM - 11:45 AM - GEN9398: The Future Development for Oracle Fusion - From Desktop to Mobile to Cloud. See the latest and greatest in Oracle development technologies. A key customer will be demonstrating an application they built using the beta version of ADF Mobile. Marriott Marquis, Salon 8

    Monday, 10/1, 1:45 PM - 2:45 PM - GEN11554: Extend Oracle Applications to Mobile Devices with Oracle's Mobile Technologies. See how to leverage Oracle development technology like ADF Mobile to mobilize Oracle applications. Moscone West, 3002/3004

    Monday, 10/1, 4:45 PM - 5:45 PM - GEN11451: Building Mobile Applications with Oracle Cloud. See how Oracle offers a simpler way of developing and deploying cross-device mobile applications, enabling you to access applications, data, and services from mobile channels more easily. Moscone West, 2002/2004

    Tuesday, 10/2, 11:45 AM - 12:45 PM - CON3824: Mobile-Enabled Oracle Fusion Middleware and Enterprise Applications with Oracle ADF. See how Oracle Fusion Middleware and ADF Mobile together deliver a complete and powerful platform for enterprise mobile applications. A key customer will also be demonstrating an application built with the ADF Mobile beta that extends an Oracle application to mobile devices. Moscone South, 306

    Additional Information
    - Relevant Blogs: Oracle OpenWorld Countdown Begins, Best of Oracle Fusion Middleware, Fusion Middleware for Enterprise Applications, Amit Zavery's General Session, Hassan Rizvi's General Session, Oracle OpenWorld Blog
    - Focus On Docs: Best of Oracle Fusion Middleware, Fusion Middleware for Enterprise Applications, Mobile
    - Product Information on Oracle.com: Oracle Fusion Middleware
    - Subscribe to our regular Fusion Middleware Newsletter
    - Follow us on Twitter and Facebook

  • What to do if I am working on a language that I don't like

    - by Sayem Ahmed
    Hi there, I really don't know if this is the right place to ask this question, but if it isn't, then I guess someone will notify me. Anyway, I am working in a software development firm which is currently using PowerBuilder to develop a mid-size ERP solution. The work environment and company management are so great that they may be the best in the whole of Bangladesh. The only problem is the technology currently being used, which is this PowerBuilder. Now I am a guy who tends to prefer modern development technologies, like DI containers, ORMs, TDD, jQuery, etc. PowerBuilder is a great tool too, but I never liked the application techniques used to build PB applications. These techniques are so inheritance-dependent that many a time they create a great deal of suffering. I remember two days ago I had to change some processing logic in a core user object, and as a result I had to test and re-test all the forms that the application has (apparently, there are almost 20 forms, each of them with 3-4 kinds of functionality). Also, learning PB is tough, because online material on it is very, very scarce. I can't afford to read all the documentation that PB provides because I have hard deadlines on the work that I have to do. Another thing with PB is that applications tend to rely on business logic that is implemented in the database, which makes debugging a nightmare. As a result, I don't feel motivated to work in this IDE/system/framework (or whatever) anymore. My productivity has greatly decreased, and I am not delivering quality code. I think I have the following options available to me: 1) remain in the current job, keep delivering worse code, and let my productivity decrease day by day, taking salaries and bonuses but not delivering quality code or doing my job the way I should; 2) search for a new job. At this point number 2 seems a good option, but there are also some issues. As I mentioned before, our management may be the best in the country. Our company owner is himself a software developer with 24 years of experience in software development. He is currently our Team Leader and System Analyst. He is by far the greatest manager and boss I have ever seen. He understands a developer's mentality very well (as he IS himself a developer). He is also a great, kind and generous guy. Our company is only a start-up with 10 developers. Among them, only 3-4 people know the business logic behind the ERP, and I am one of them. If I switch jobs now, it may hamper the development of this product, which I really don't want. I couldn't decide what to do in this situation, so I turned to the community for advice.

  • Go for the Deep Dive on Oracle Products and Technology

    - by Oracle OpenWorld Blog Team
    by Karen Shamban

    Oracle University gives you more learning for your conference investment. It's easier than ever before to get in-depth Oracle product and technology training if you're attending any of the Oracle conferences this fall, including Oracle OpenWorld, the Oracle Customer Experience Summit @ OpenWorld, the Oracle PartnerNetwork Exchange @ OpenWorld, and MySQL Connect. Why is it easier? Because Oracle University preconference training takes place on Sunday, September 30 from 8:00 a.m. to 3:30 p.m. And you're going to be in town for the conference anyway, right? The training ends early enough in the afternoon that you'll still be able to get good seats for conference opening keynotes and get psyched for the welcome reception that follows. Each session will be taught by an expert Oracle University instructor and will be fact-packed with demos and tips to help you do more than ever before with your Oracle product and technology investment.

    The training sessions being offered include:

    Applications:
    - PeopleSoft Test Framework Script Creation and Optimization
    - New Integration Technologies for PeopleTools 8.52
    - Oracle Fusion Applications: Security Fundamentals

    Database and Systems:
    - Certification Exam Cram: Oracle Database 11g: New Features for Administrators
    - Exadata Database Machine Administration Workshop
    - Introduction to Big Data
    - Using Oracle Enterprise Manager Cloud Control 12c
    - Using Java - for PL/SQL and Database Developers

    Fusion Middleware:
    - Developing Portable Java EE Applications with the Enterprise JavaBeans 3.1 API and Java Persistence API 2.0
    - Developing Secure Java Web Services
    - How the Latest Java EE and SOA Help in Architecting and Designing Robust Enterprise Applications
    - Oracle Business Intelligence 11g: Overview to Analyses and Dashboards
    - Oracle Fusion Middleware 11g: Build Applications with ADF I
    - Oracle Fusion Middleware 11g: Administer Forms Services
    - Oracle SOA Suite 11g Administration
    - WebLogic Server Administration Essentials

    Don't miss this great opportunity to maximize your Oracle OpenWorld experience and investment. Learn more about Oracle University training sessions.

  • Where to start with game development?

    - by steven_desu
    I asked this earlier in this thread at stackoverflow.com. One of the early comments redirected me here to gamedev.stackexchange.com, so I'm reposting here. Searching for related questions I found a number of very specific questions, but I'm afraid the specifics have proved fruitless for me, and after 4 hours on Google I'm no closer than when I started, so I felt reaching out to a community might be in order.

    First, my goal: I've never made a game before, although I've mulled over the possibility several times. I decided to finally sit down and start learning how to code games, use game engines, etc., all so that one day (hopefully soon) I'll be able to make functional (albeit simple) games. I can start adding complexity later; for now I'd be glad to have a keyboard-controlled camera moving in a 3D world with no interaction beyond that.

    My background: I've worked in SEVERAL programming languages, ranging from PHP to C++ to Java to ASM. I'm not afraid of any challenges that come with learning new syntax or the limitations inherent in a new language. All of my past programming experience, however, has been strictly non-graphical and usually with little or only extremely simple interaction during execution. I've created extensive and brilliant algorithms for solving logical and mathematical problems as well as graphing problems. However, in every case input was either defined in a file, passed from an HTML form, or typed into the console. Real-time interaction with the user is something with which I have no experience.

    My question: Where should I start in trying to make games? Better yet - where should I start in trying to create a keyboard-navigable 3D environment? In searching online I've found several resources linking to game engines, graphics engines, and physics engines. Here's a brief summary of my experiences with a few engines I tried:

    Unreal SDK: The tutorial videos assume that you already have in-depth knowledge of 3D modeling, graphics engines, animations, etc. The "Getting Started" page offers no formal explanation of game development but jumps into how Unreal can streamline processes it assumes you're already familiar with. After downloading the SDK and launching it to see if the tools were as intuitive as they claimed, I was greeted with about 60 buttons and a blank void for my 3D modeling. Clicking on "add volume" (to attempt to add a basic cube) I was met with a menu of 30 options. Panicking, I closed the editor.

    Crystal Space: The website seemed rather informative, explaining that Crystal Space was just for graphics and that the companion software, CEL, provided entity logic for making games. A demo game was provided, which was built using "CELStart", their simple tool for people with no knowledge of game programming. I launched the game to see what I might look forward to creating. It froze several times, the menus were buggy, there were thousands of graphical glitches, enemies didn't respond to damage, and when I closed the game it locked up. Gave up on that engine.

    IrrLicht: The tutorial assumes I have Visual Studio 6.0 (I have Visual Studio 2010). Following their instructions, I was unable to properly import the library into Visual Studio and unable to call any of the functions they kept using. After manually copying header files, class files, and DLLs into my project's folder, the project still failed to compile.

    Clearly I'm not off to a good start and I'm going in circles. Can someone point me in the right direction? Should I start by downloading a program like Blender and learning 3D modeling, or should I be learning how to use a graphics engine? Should I look for an all-inclusive game engine, or is it better to try and code my own game logic? If anyone has actually made their own games, I would prefer to hear how they got their start. Also - taking classes at my school is not an option. Nothing is offered.

  • ACORD LOMA 2010: Building Insurance Companies in the Clouds

    - by [email protected]
    Chuck Johnston, vice president of global strategy and alliances for Oracle Insurance, participated in a featured speaking session at ACORD LOMA 2010. He provides an update on his discussions with insurers at the show and after his presentation.

    Every year I make a point of walking the show floor at the ACORD LOMA technology conference to visit with colleagues and competitors, and to try to get a feel for which way the industry will move over the next 12 months. Insurers are looking for substance in cloud (computing), trying to mix business with pleasure (monetizing social networks), and expecting differentiation through commodity (Software as a Service). The disconnect at this show is that most vendors are still struggling to create a clear path from Facebook to customer intimacy, from SaaS to core cost savings, and from clouds to ubiquitous presence. Vendors need to find new ways to help insurers find the real value in these potentially disruptive technologies by understanding the changes coming to the insurance business and how these new technologies impact the new insurance business.

    Oracle's approach to understanding the evolving insurance industry comes from a discussion with our customers in our Insurance CIO Council, where one of our customers suggested we buy an insurance company to really understand our customers. We have decided to do the next best thing and build our own model of an insurance company, Alamere Insurance, that uses the latest technologies to transform its own business. Alamere will never issue an actual policy, but it does give us a framework to consider the impacts of changes in the insurance landscape and how Oracle technology meets the challenge - or needs to evolve to help our customers be successful.

    In preparing for my talk at the conference using Alamere as my organizing theme, I found myself reading actuarial memoranda on CSO table changes and articles on underwriting theory that really made me think about my customers' problems first and foremost, and then about how Oracle technology can provide answers. As much as I prefer techno-thrillers and sci-fi novels to actuarial papers for plane reading, I got very excited about the idea of putting myself back in the customer shoes I haven't worn in a decade, and really looking at how Oracle can power the Adaptive Insurance Enterprise.

    Talking to customers and industry people after the session, the idea of Alamere seemed to excite people, and I got a lot of suggestions as to which lines of business we should model and where we should focus first on technology uptake. One customer said to a colleague that Oracle's attempt to "share their pain" was unique among vendors. More about Alamere and the Adaptive Insurance Enterprise next time.

    Chuck Johnston is vice president of global strategy and alliances for Oracle Insurance.

  • Silverlight Cream for November 22, 2011 -- #1172

    - by Dave Campbell
    In this Issue: XAMLGeek, WindowsPhoneGeek, Nigel Sampson, Jesse Liberty, Sumit Dutta (2), Dave Bost, Jared Bienz, Joost van Schaik, and Michael Crump.

    Above the Fold:
    Silverlight: "10 Laps around Silverlight 5 (Part 7 of 10)" - Michael Crump
    WP7: "Using MVVMLight, ItemsControl, Blend and behaviors to make a 'heads up compass'" - Joost van Schaik
    Metro/WinRT/W8: "'Badevand' for Windows 8" - XAMLGeek

    Shoutouts:
    Michael Palermo's latest Desert Mountain Developers is up
    Michael Washington's latest Visual Studio #LightSwitch Daily is up

    From SilverlightCream.com:

    "Badevand" for Windows 8 - XAMLGeek posted a Metro app that shows water and air temperature and rain level for 5 beaches in Copenhagen, Denmark... no source, but good to see people posting apps.

    Getting Started with Windows Phone Reminders - WindowsPhoneGeek digs into Reminders in this WP7.1 post... the code you need, a description, and a project to download.

    Help my app has been revoked! - Nigel Sampson had a surprise when his latest app was revoked on his device, and then another... read what the solution was.

    A Dozen Windows Phone Videos… And Counting - Jesse Liberty posted his 12th WP7.1 video on Channel 9 - all about Reminders in Mango.

    Part 23 - Windows Phone 7 - Detect Operator and Network Information - Sumit Dutta has 2 more parts to his WP7 quest up... part 23 is about getting mobile operator information and how to get network capabilities using Microsoft.Phone.Net.DeviceNetworkInformation.

    Part 24 - Windows Phone 7 - Microphone Repeater - In part 24, Sumit Dutta uses the Microsoft.Xna.Framework Microphone class to record and play back voice.

    31 Days of Mango | Day #13: Marketplace Test Kit - Dave Bost is at the helm of Jeff Blankenburg's Day 13 in his 31-day quest, discussing the Marketplace Test Kit and showing how to use it to determine if your app is ready for certification.

    31 Days of Mango | Day #12: Beta Distribution - Jeff Blankenburg's Day 12 is written by guest author Jared Bienz, and shows how to submit an application for beta testing.

    Using MVVMLight, ItemsControl, Blend and behaviors to make a 'heads up compass' - Joost van Schaik has a tutorial up showing how to make a WP7 compass app using MVVMLight and Expression Blend, and then shares his thoughts on using the ItemsControl and Behaviors... code, descriptions and a project to download... and I think I got your name right for the first time, Joost :)

    10 Laps around Silverlight 5 (Part 7 of 10) - Michael Crump put out part 7 of his Silverlight 5 series at SilverlightShow... this is actually part 2 of OS Integration with Silverlight, covering, among other things, 64-bit browser support and Power Awareness.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

  • ODI 11g - Dynamic and Flexible Code Generation

    - by David Allan
    ODI supports conditional branching at execution time in its code generation framework. This is a little-used, little-known, but very powerful capability - it lets one piece of template code behave dynamically based on a runtime variable's value, for example. Generally, knowledge modules are free of any variable dependency. Using variables within a knowledge module for this kind of dynamic capability is a valid use case - definitely in the highly specialized area.

    The example I will illustrate is much simpler - how to define a filter (based on a mapping here) that may or may not be included depending on whether a certain value is defined for a variable at runtime. I define a variable V_COND; if I set this variable's value to 1, then I will include the filter condition 'EMP.SAL > 1', otherwise I will just use '1=1' as the filter condition. I use ODI's substitution tags, specifically the special tag '<$', which is processed just prior to execution in the runtime code - this code is included in the ODI scenario code and is processed after variables are substituted (unlike the '<?' tag). So the two lines below are not equal...

        <$ if ( "#V_COND".equals("1") ) { $> EMP.SAL > 1 <$ } else { $> 1 = 1 <$ } $>

        <? if ( "#V_COND".equals("1") ) { ?> EMP.SAL > 1 <? } else { ?> 1 = 1 <? } ?>

    When the '<?' code is evaluated, the code is executed without variable substitution - so we do not get the desired semantics; we must use the '<$' tag. The Jython (Java) if statement is the conditional that drives whether 'EMP.SAL > 1' or '1=1' is included in the generated code. For this illustration you need at least the ODI 11.1.1.6 release - with the vanilla 11.1.1.5 release it didn't work for me (maybe patches?). As I mentioned, normally KMs don't have dependencies on variables - since any users must then have these variables defined, etc. - but it does afford a lot of runtime flexibility if such capabilities are required - something to keep in mind, definitely.

  • Webcast Q&A: Cisco's Platform Approach to Identity Management

    - by Tanu Sood
    Thanks to all who attended the live webcast we hosted on Cisco: Best Practices for a Platform Approach on Wednesday, March 14th. For those of you who couldn't join us, the webcast replay is now available. Many thanks to our guest speaker, Ranjan Jain, Security Architect at Cisco, for walking us through Cisco's drivers and rationale for the platform approach, the implementation strategy, results, roadmap and recommendations. We greatly appreciate the insight he shared on the deployment synergies of a platform approach to Identity Management. A forward-looking organization, Cisco also has plans for secure cloud and mobile access enablement, so it was interesting to learn how the platform approach to Identity Management today is laying the foundation for those future initiatives.

    While we tackled a good few questions during the webcast, we have captured the responses to those that we weren't able to get to:

    Q. Can you provide insight into how you approached developing profiles for each user group?
    A. At Cisco, the user profile was already available to IT before the platform consolidation started. There is a dedicated business team that manages the user profiles.

    Q. What is the current version of Oracle Identity Manager in the market?
    A. Oracle Identity Manager 11gR1 is the latest version of our industry-leading user provisioning/identity administration solution.

    Q. Is data resource segmentation part of the overall strategy at Cisco?
    A. It is, but it is managed by the business teams and not at the IT level.

    Q. Does Cisco also have an Active Directory LDAP? Do they sync AD from OID or do they provision to AD as another resource?
    A. Yes, we do. AD is provisioned using in-house tools and not via Oracle Identity Manager (OIM).

    Q. If we already have a point IDM solution in place (SSO), can the platform approach still work?
    A. Yes, the platform approach calls for a seamless, standardized framework for identity management to support the enterprise's entire infrastructure, both on-premise and in the cloud. Oracle Identity Management solutions are standards-based, so they can easily integrate and interoperate with existing Oracle or non-Oracle solutions.

    Hope you enjoyed the webcast, and we look forward to having you join us for the next webcast in our Customers Talk: Identity as a Platform webcast series:
    ING: Scaling Role Management and Access Certification to Thousands of Applications
    Wednesday, April 11th at 10 am PST / 1 pm EST
    Register Today

    We are also hosting a live event series in collaboration with the Aberdeen Group. To hear first-hand the insights from the recently released Aberdeen Report, and to discuss the merits of the platform approach, do join us at this event. You can also connect with Oracle Identity Management SMEs and get your questions answered live.
    Aberdeen Group Live Event Series: IAM Integrated - Analyzing the "Platform" vs. "Point Solution" Approach
    North America, April 10 - May 22
    Register for an event near you

    And here's the slide deck from our Cisco webcast:
    Oracle_Cisco identity platform approach_webcast
    View more presentations from OracleIDM

  • Elastic PaaS with WebLogic and OpenStack, part I

    - by Jernej Kaše
    In my previous blog I described the steps to get OpenStack on Solaris up and running. Now we'll explore how WebLogic and OpenStack can work together to deliver a truly elastic Middleware Platform as a Service.

    Middleware / Platform as a Service goals

    First, let's define what PaaS should be: PaaS offerings facilitate the deployment of applications without the complexity of managing the underlying hardware and software and provisioning hosting capabilities. To break it down:
    - PaaS provides a complete platform for hosting solutions (Java EE, SOA, BPM, ...)
    - Infrastructure provisioning (virtual machine, OS, platform) and management is hidden from the PaaS user [administrator or developer]
    - Additionally, PaaS could/should define target SLAs, and the platform should ensure the SLAs are met automatically.

    PaaS use case

    To make it more tangible, we have an IT administrator who has the requirement to deploy a Java EE enterprise application. The application is used by external users who need to submit reports by the end of each month. As a result, the number of concurrent users will fluctuate, with huge spikes expected around the end of each month. The SLA agreed with management is that no more than 100 requests should be waiting to be processed at any given time. In addition, the IT admin has no more than 3 days to have the platform and the application operational.

    The challenges

    Some of the challenges the IT administrator is facing are:
    - how are we going to ensure the processing power?
    - how are we going to provision the (virtual) machines and the Java EE platform, and deploy the application?
    - how are we going to monitor the SLA?
    - how are we going to react to the SLA and increase capacity?

    The ideal solution

    Ideally, the whole process should be automated, "set it and forget it", and require no human interaction:
    - The vendor packages the solution as deployable image(s)
    - The images are deployed to the IaaS
    - From there, automated processes take care of the SLA

    Solution architecture with WebLogic 12c, Dynamic Clusters, OpenStack & Solaris

    - Oracle Solaris provides the OS and virtualisation through Solaris Zones
    - OpenStack is a part of Solaris 11.2 and provides cloud management (console and API)
    - WebLogic 12c with Dynamic Clusters provides the platform
    - Traffic Manager provides load balancing

    On top of that, we are going to implement a small control script - Cloud Manager - which is going to monitor the SLA through the WebLogic Diagnostic Framework. In case there are more than 100 pending requests, the script will:
    - provision a new virtual machine based on an image which is configured for the WebLogic domain
    - add the machine to the WebLogic domain
    - increase the number of servers in the dynamic cluster
    - start the newly provisioned server

    A sketch of this control loop appears below.

    Stay tuned for part II

    The whole solution with a working demo will be presented in one of our Partner WebCasts in June, exact date TBA.

    Jernej Kaše is a Fusion Middleware Specialist working closely with Oracle Partners in the ECEMEA region to grow their business by leveraging Oracle technology.
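    Ahead of the part II demo, here is a rough sketch (ours, not the author's) of the Cloud Manager control loop described above. The three helpers are hypothetical placeholders for the real calls - an OpenStack image boot and WebLogic domain/cluster operations - so only the control flow is meaningful here.

        using System;
        using System.Threading;

        class CloudManager
        {
            const int MaxPendingRequests = 100; // the SLA agreed with management

            static void Main()
            {
                while (true)
                {
                    // Pending-request count as exposed through the WebLogic
                    // Diagnostic Framework (hypothetical helper).
                    int pending = ReadPendingRequestsFromWldf();

                    if (pending > MaxPendingRequests)
                    {
                        // SLA breached: grow capacity as the post describes.
                        string vm = ProvisionVmFromPreparedImage(); // OpenStack boot
                        AddMachineToDomain(vm);                     // join WebLogic domain
                        GrowDynamicClusterAndStartServer();         // add + start server
                    }

                    Thread.Sleep(TimeSpan.FromSeconds(30)); // poll interval (assumed)
                }
            }

            // Hypothetical placeholders -- the real demo would call the
            // OpenStack API and WebLogic (WLST/REST) here.
            static int ReadPendingRequestsFromWldf() { return 0; }
            static string ProvisionVmFromPreparedImage() { return "vm-1"; }
            static void AddMachineToDomain(string vm) { }
            static void GrowDynamicClusterAndStartServer() { }
        }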

  • Impulsioned jumping

    - by Mutoh
    There's one thing that has been puzzling me, and that is how to implement a 'faux-impulsed' jump in a platformer. If you don't know what I'm talking about, then think of the jumps of Mario, Kirby, and Quote from Cave Story. What do they have in common? Well, the height of your jump is determined by how long you keep the jump button pressed. These characters' 'impulses' are built not before their jump, as in actual physics, but rather while in mid-air - that is, you can very well lift your finger midway to the max height and the rise will stop, even if with deceleration between that and the full stop; which is why you can simply tap for a hop and hold the button for a long jump. Knowing this, I am mesmerized by how they keep their trajectories as arcs.

    My current implementation works as follows: While the jump button is pressed, gravity is turned off and the avatar's Y coordinate is decremented by the constant value of the gravity. For example, if things fall at Z units per tick, it will rise Z units per tick. Once the button is released or the limit is reached, the avatar decelerates by an amount that makes it cover X units until its speed reaches 0; once it does, it accelerates until its speed matches gravity - sticking to the example, I could say it accelerates from 0 to Z units/tick while again covering X units.

    This implementation, however, makes jumps too diagonal, and unless the avatar's speed is faster than the gravity - which would make it way too fast in my current project (it moves at about 4 pixels per tick and gravity is 10 pixels per tick, at a framerate of 40 FPS) - it also makes jumps more vertical than horizontal. Those familiar with platformers will notice that a character's arced jump almost always allows them to jump farther even if they aren't as fast as the game's gravity, and when it doesn't, if not played right, it proves very counter-intuitive. I know this because I can attest that my implementation is very annoying.

    Has anyone ever attempted similar mechanics, and maybe even succeeded? I'd like to know what's behind this kind of platformer jumping. If you haven't had any experience with this before and want to give it a go, then please don't try to correct or enhance my implementation unless I was on the right track - try to make up your solution from scratch. I don't care if you use gravity, physics or whatnot; as long as it shows how these pseudo-impulses work, it does the job. Also, I'd like the presentation to avoid language-specific code, like sharing a C++ or Delphi example... As much as I'm using the XNA framework for my project and wouldn't mind C# stuff, I don't have much patience for reading others' code, and I'm certain game developers of other languages would be interested in what we achieve here, so don't mind sticking to pseudo-code. Thank you beforehand.
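    For reference, the scheme described in the question can be written down as a compact per-tick update. The sketch below is one reading of it in C# (the question mentions XNA); the rise limit and deceleration rate are assumed, illustrative constants - it is meant to make the described behavior concrete, not to be the answer.

        using System;

        class Avatar
        {
            const float Z = 10f;           // gravity, pixels per tick (from the question)
            const int MaxJumpTicks = 20;   // rise limit (assumed value)
            const float DecelPerTick = 1f; // eases -Z -> 0 -> +Z after release (assumed value)

            float positionY;
            float velocityY;               // positive = falling
            int jumpTicks;
            bool jumping;

            public void StartJump() { jumping = true; jumpTicks = 0; }

            // Called once per fixed tick with the current state of the jump button.
            public void Update(bool jumpHeld)
            {
                if (jumping && jumpHeld && jumpTicks < MaxJumpTicks)
                {
                    velocityY = -Z;        // gravity "off": rise at the gravity constant
                    jumpTicks++;
                }
                else
                {
                    jumping = false;
                    // Decelerate to zero, then accelerate until speed matches gravity.
                    velocityY = Math.Min(velocityY + DecelPerTick, Z);
                }
                positionY += velocityY;
            }
        }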

  • Should I be an algorithm developer, or java web frameworks type developer?

    - by Derek
    So - as I see it, there are really two kinds of developers. There are those that do frameworks, web services, pretty-making front ends, etc., etc. Then there are developers that write the algorithms that solve the problem. That is, unless the problem is "display this raw data in some meaningful way", in which case the framework/web developer guy might be doing both jobs.

    So my basic problem is this. I have been an algorithms kind of software developer for a few years now. I double majored in Math and Computer Science, and I have a master's in Systems Engineering. I have never done any web-dev work, with the exception of a couple of minor jobs and some hobby-level stuff. I have been interviewing for jobs lately, and this is what happens:

    - A job is listed as "programmer - 5 years of experience with the following: C/C++, Java, Perl, Ruby, ant, blah blah blah"
    - A recruiter calls me and says they want me to come in for an interview
    - In the interview, I find out they have some web services development, blah blah blah
    - When asked in the interview, I talk about my experience doing algorithms, optimization, blah blah... but say I'm very willing to learn new languages, frameworks, etc.
    - I get a call back saying "we didn't think you were a fit for the job you interviewed with, but our algorithm team got wind of you and wants to bring you on"

    This has happened to me a couple of times now - I see a vague-ish job description looking for a "programmer", go in, and find out they are doing some sort of web-based tool, maybe with some hardcore algorithms running in the background. I interview with people for the web-based tool, but get an offer from the algorithms people.

    So the question is - which job is the better job? I basically just want to get a wide breadth of experience at this level of my career, but are algorithm developers really so much in demand? Even more so than all these supposedly hot, in-demand web developer guys? Will I be OK in the long run if I go into the niche of math-based algorithm development with little to no, or only hobby-level, web-dev experience? I basically just don't want to pigeonhole myself this early. My salary is already starting to get pretty high - and I can see a company later on saying "we really need a web developer, but we'll hire this 50k/year college guy instead of this 100k/year experienced algorithm guy".

    Cliffs notes: I have been doing algorithm development. I consider myself to be a "good programmer." I would have no problem picking up web technologies and those sorts of frameworks. During job interviews, I keep getting "we think you've got a good skillset - talk to our algorithm team" instead of them wanting me to learn new skills on the job to do their web services or whatever other new technology they are doing.

    Edit: Whenever I talk about algorithm development here, I am talking about the code that produces the answer. Typically I think of math-based algorithms: solving a financial problem, solving a finite element method, image processing, etc.

  • #altnetseattle – REST Services

    - by GeekAgilistMercenary
    Below are the notes I made in the REST Architecture session I helped kick off with Andrew.

    - RSS, ATOM, and such are needed for better discovery; i.e. there is still a need for some type of discovery.
    - What is difficult is modeling behaviors in a RESTful way - invoking some type of state change against an object. For instance, in the case of a POST vs. a GET: the GET is easy, it comes back as is, but what about a POST, which often changes some state?
    - A challenge is doing multiple workflows with stateful steps. How does batch work? Maybe model the batch as a resource.
    - Frameworks aren't particularly part of REST; REST is REST. But the point was argued that REST is modeled, or part of modeling, a state machine of some sort...?
    - Nothing is 100% reliable with REST - comparisons drawn with TCP/IP. Sufficient probability is achieved for the communications, but the idea of a possible failure has to be built into the usage model of REST.
    - Ruby on Rails (used RESTfully) and others were discussed: what were their issues, what do they do? ATOM feeds, objects serialized, using LINQ to XML with this. No state machine libraries.
    - Idempotency around REST and single-change POSTs is inherent in the architecture.
    - In REST, one of the constraints is the limited language for interaction with the system - limiting what can be done on the resources. (Disagreement: there are no agreed-upon REST verbs.)
    - Sam Ruby, on RESTful services: expanding the verbs beyond REST/HTTP pushes you off the web. Of the existing verbs, POST leaves the most up for debate.
    - Robert Reem used a Factory to deal with the POST to handle the new state, the POST identifying what it just did by its return value.
    - Different states are put into POST, so that new prospective behaviors can be used to advantage, without creating new verbs for REST/HTTP and without breaking universal clients.
    - The biggest issue with REST services is their lack of state, yet it is also one of their biggest strengths. What happens is that the client takes up the often onerous task of handling all state, state machines, and other extraneous resource management. All the GETs, POSTs, DELETEs, INSERTs get pushed into abstraction. My 2 cents is that this in a way ends up pushing a huge proprietary burden onto the REST services' clients, often defeating the point of REST: to be simple and to the point.
    - WADL does provide discovery and some state control (sort of?). Statement made: "WADL" isn't needed - the JSON, XML, or other client-side returned data handles this.

    I then applied the law-of-2-feet rule for myself and headed off to finish up these notes, post to the wiki, and figure out what I was going to do next. For the original wiki entry, check it out here. I will be adding more with a subsequent post. Please do feel free to post your thoughts and ideas about this, as I am sure everyone in the session will have more to elaborate on.

  • Copying A Slide From One Presentation To Another

    - by Tim Murphy
    There are many ways to generate a PowerPoint presentation using Open XML. The first is to build it by hand strictly using the SDK. Alternately, you can modify a copy of a base presentation in place. The third approach is to build a new presentation from the parts of an existing presentation by copying slides as needed. This post will focus on the third option.

    In order to make this solution a little more elegant, I am going to create a VSTO add-in as I did in my previous post. This one is going to insert Tags to identify slides, instead of the NonVisualDrawingProperties I used to identify charts, tables and images. The code itself is fairly short:

        SlideNameForm dialog = new SlideNameForm();
        Selection selection = Globals.ThisAddIn.Application.ActiveWindow.Selection;

        if (dialog.ShowDialog() == DialogResult.OK)
        {
            selection.SlideRange.Tags.Add(dialog.slideName, dialog.slideName);
        }

    Zeyad Rajabi has a good post here on combining slides from two presentations. The example he gives is great if you are doing a straight merge. But what if you want to use your source file almost as a supermarket, where you pick and choose slides and may even insert them repeatedly? The following code uses the tags we created in the previous step to pick a particular slide and copy it to a destination file:

        using (PresentationDocument newDocument = PresentationDocument.Open(OutputFileText.Text, true))
        {
            PresentationDocument templateDocument = PresentationDocument.Open(FileNameText.Text, false);

            uniqueId = GetMaxIdFromChild(newDocument.PresentationPart.Presentation.SlideMasterIdList);
            uint maxId = GetMaxIdFromChild(newDocument.PresentationPart.Presentation.SlideIdList);

            SlidePart oldPart = GetSlidePartByTagName(templateDocument, SlideToCopyText.Text);

            SlidePart newPart = newDocument.PresentationPart.AddPart<SlidePart>(oldPart, "sourceId1");

            SlideMasterPart newMasterPart = newDocument.PresentationPart.AddPart(newPart.SlideLayoutPart.SlideMasterPart);

            SlideIdList idList = newDocument.PresentationPart.Presentation.SlideIdList;

            // create new slide ID
            maxId++;
            SlideId newId = new SlideId();
            newId.Id = maxId;
            newId.RelationshipId = "sourceId1";
            idList.Append(newId);

            // Create new master slide ID
            uniqueId++;
            SlideMasterId newMasterId = new SlideMasterId();
            newMasterId.Id = uniqueId;
            newMasterId.RelationshipId = newDocument.PresentationPart.GetIdOfPart(newMasterPart);
            newDocument.PresentationPart.Presentation.SlideMasterIdList.Append(newMasterId);

            // change slide layout ID
            FixSlideLayoutIds(newDocument.PresentationPart);

            //newPart.Slide.Save();
            newDocument.PresentationPart.Presentation.Save();
        }

    The GetMaxIdFromChild and FixSlideLayoutIds methods are borrowed from Zeyad's article (a stand-in for the former is sketched at the end of this post). The GetSlidePartByTagName method is listed below. It is really one LINQ query that finds SlideParts with child Tags that have the requested Name:

        private SlidePart GetSlidePartByTagName(PresentationDocument templateDocument, string tagName)
        {
            return (from p in templateDocument.PresentationPart.SlideParts
                    where p.UserDefinedTagsParts.First().TagList
                           .Descendants<DocumentFormat.OpenXml.Presentation.Tag>()
                           .First().Name == tagName.ToUpper()
                    select p).First();
        }

    This is what really makes the difference from what Zeyad posted. The most powerful thing you can have when generating documents from templates is a consistent way of naming the items to be manipulated. I will show more approaches like this in upcoming posts.

    del.icio.us Tags: Office Open XML, Presentation, PowerPoint, VSTO, TagList
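    For completeness, here is a stand-in for the GetMaxIdFromChild helper referenced above. This is a sketch written for this excerpt, not Zeyad's original implementation; it assumes only that the method scans a SlideIdList or SlideMasterIdList for the highest id attribute so new ids can be minted above it.

        using DocumentFormat.OpenXml;

        // Sketch: walk the children of a SlideIdList or SlideMasterIdList and
        // return the largest id found. Both <p:sldId> and <p:sldMasterId>
        // carry an unqualified "id" attribute.
        private static uint GetMaxIdFromChild(OpenXmlElement idList)
        {
            uint maxId = 1;
            foreach (OpenXmlElement child in idList.ChildElements)
            {
                uint id = uint.Parse(child.GetAttribute("id", "").Value);
                if (id > maxId) maxId = id;
            }
            return maxId;
        }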

  • Configurable Objects - Introduction

    - by Anthony Shorten
    One of the interesting facilities in the framework is the Configurable Object functionality (it is also known as Task Optimization and as Cool Tools). The idea is that any implementation can create its own views of the base product objects and services and implement functionality against those new views.

    For example, in Oracle Utilities Customer Care and Billing there is a Person object. That object is used to store and manage information about individuals as well as companies. In the base product you would use the Person Maintenance screen, filling in some parts of the screen when you wanted to register or maintain an individual and other parts when you wanted to register or maintain a company. This can be somewhat confusing to some customers. Using Configurable Objects, this can be simplified. A business object can be created that is a view of any object. For example, you could create a Human business object to cover the aspects of the Person object pertaining to an individual, and a Company business object to cover the aspects unique to a company. Even the tag names (i.e. field names) in the object can be changed to be more like what the implementation is familiar with. The object can also be restructured. For example, a common identifier for an individual in the USA is the Social Security number; this value is a Person Identifier (as this varies in each country). In the new Human object you can remap the Person Identifier as a Social Security number.

    To define a business object you use a schema editor built into the browser user interface, together with a mapping language to set up the business object. An example of the language appears in the original post as an extract of the schema for the Human business object; it includes mapping tags as well as formatting and other tags. This information can be built manually or using a wizard which generates the base structure for you to alter. It is all stored as metadata when saved.

    Once a business object is built, it can be used as the basis for code or for other business objects (we support inheritance), called by a screen (a UI Map) or even exposed as a Web Service. And this is just a start with Configurable Objects, as you can also create views of base services called Business Services, Service Scripts used for non-object or complex object processing (as well as other things), UI Maps used for screens, and Data Areas to reuse definitions across multiple objects. Configurable Objects are powerful, and I have only really touched on them here. Over the next few months I hope to add lots more entries about them.

  • Master Data Management for Location Data - Oracle Site Hub

    - by david.butler(at)oracle.com
    Most MDM discussions cover key domains such as customer, supplier, product, service, and reference data. It is usually understood that these domains have complex structures and hundreds if not thousands of attributes that need governing. Location, on the other hand, strikes most people as address data. How hard can that be? But for many industries, locations are complex, and site information is critical to efficient operations and relevant analytics. Retail stores and malls, bank branches, construction sites come to mind. But one of the best industries for illustrating the power of a site mastering application is Oil & Gas.

    Oracle's Master Data Management solution for location data is the Oracle Site Hub. It is a location mastering solution that enables organizations to centralize site and location specific information from heterogeneous systems, creating a single view of site information that can be leveraged across all functional departments and analytical systems.

    Let's take a look at the location entities the Oracle Site Hub can manage for the Oil & Gas industry: organizations, property, land, buildings, roads, oilfield, service center, inventory site, real estate, facilities, refineries, storage tanks, vendor locations, businesses, assets, project site, area, well, basin, pipelines, critical infrastructure, offshore platform, compressor station, gas station, etc.

    Any site can be classified into multiple hierarchies, like organizational hierarchy, operational hierarchy, geographic hierarchy, divisional hierarchies and so on. Any site can also be associated to multiple clusters, i.e. collections of sites, and these can be used as a foundation for driving reporting, analysis, organizing daily work, etc. Hierarchies can also be used to model entities which are structured or non-structured collections of nodes, like for example routes, pipelines and more.

    The User Defined Attribute Framework provides the needed infrastructure to add single-row attribute groups, like well base attributes (well IDs, well type, well structure and key characterizing measures, and more) and well geometry, and multi-row attribute groups, like well applications, permits, production data, activities, operations, logs, treatments, tests, drills, and KPIs. Site Hub can also model areas, lands, fields, basins, pools, platforms, eco-zones, and stratigraphic layers as specific sites, tracking their base attributes, aliases, descriptions, subcomponents and more. Midstream entities (pipelines, logistic sites, pump stations) and downstream entities (cylinders, tanks, inventories, meters, partner's sites, routes, facilities, gas stations, and competitor sites) can also be easily modeled, together with their specific attributes and relationships.

    Site Hub can store any type of unstructured data associated to a site. This could be stored directly or on an external content management solution, like Oracle Universal Content Management. Considering a well, for example, Site Hub can store any relevant associated multimedia file such as: CAD drawings of the well profile, structure and/or parts, engineering documents, contracts, applications, permits, logs, pictures, photos, videos and more. For any site entity, Site Hub can associate all the related assets and equipment at the site, as well as all relationships between sites, between a site and multiple parties, and between a site and any purchasable or sellable item, over time.

    Items can be equipment, instruments, facilities, services, products, production entities, production facilities (pipelines, batteries, compressor stations, gas plants, meters, separators, etc.), support facilities (rigs, roads, transmission or radio towers, airstrips, etc.), supplier products and services, catalogs, and more. Items can just be associated to sites using standard Site Hub features, or they can be fully mastered by implementing Oracle Product Hub.

    Site locations (addresses or geographical coordinates) are also managed with out-of-the-box address geo-coding capabilities, coupled with Google Maps integration to deliver powerful mapping capabilities and spatial data analysis. Locations can be shared between different sites. Centered on the site location, any site can also have associated areas. Site Hub can master any site-location-specific information, like for example cadastral, ownership, jurisdictional, geological, seismic and more, and any site-centric area-specific information, like for example economic, political, risk, weather, logistic, traffic information and more.

    Now if anyone ever asks you why locations need MDM, think about how all these Oil & Gas entities and attributes would translate into your business locations. To learn more about Oracle's full MDM solution for the digital oil field, here is a link to Roberto Negro's outstanding whitepaper: Oracle Site Master Data Management for mastering wells and other PPDM entities in a digital oilfield context.

  • ADF Code Guidelines

    - by Chris Muir
    During Oracle Open World 2012 the ADF Product Management team announced a new OTN website, the ADF Architecture Square. While OOW represents a great opportunity to let customers know about new and exciting developments, the problem with making announcements during OOW, however, is that customers are bombarded with so many messages that it's easy to miss something important. So in this blog post I'd like to highlight one of the initial core offerings of the ADF Architecture Square website: a new document entitled ADF Code Guidelines.

    Now the title of this document should hopefully make it obvious what the document contains, but what's the purpose of the document? Why did Oracle create it? Having worked as an ADF consultant before joining Oracle, one thing I noted amongst ADF customers who had successfully deployed production systems was that they all approached software development in a professional and engineered way, and all of these customers had their own guideline documents on ADF best practices, conventions and recommendations. These documents, designed to be consumed by their own staff to ensure ADF applications were "built right", typically sourced their guidelines from the team's own expert learnings and from the huge amount of ADF technical collateral that is publicly available - from manuals and whitepapers to presentations and blog posts, some written by Oracle and some by independent sources.

    Now this is all well and good for the teams that have gone through the effort of gathering all the information and putting it into structured documents - kudos to them. But for new customers who want to break into the ADF space, who have project pressures to deliver ADF solutions without necessarily working on assembling best practices, creating such a document is understandably (regrettably?) a low priority. So in recognition of this hurdle, at Oracle we've devised the ADF Code Guidelines. This document sets out ADF code guidelines, practices and conventions for applications built using ADF Business Components and ADF Faces Rich Client (release 11g and greater). The guidelines are summarized from a number of Oracle documents and other 3rd-party collateral, with the goal of giving developers and development teams a short circuit to producing their own best-practices collateral.

    The document is not a finished product, but a living document that will be extended to cover new information as it is discovered or as the ADF framework changes. Readers are encouraged to discuss the guidelines on the ADF EMG and provide constructive feedback to me (Chris Muir) via the ADF EMG Issue Tracker. We hope you'll find the ADF Code Guidelines useful and look forward to providing updates in the near future.

  • 25 Secrets for Faster ASP.NET: the Eagle has landed!

    - by Michaela Murray
    On Friday we launched our new free eBook, 25 Secrets for Faster ASP.NET Applications! Heading for 1,000 of you have picked it up already, but if you haven't got your copy yet, you can grab it from http://www.red-gate.com/25secrets. It's the follow-up to the wildly successful 50 Ways to Avoid, Find and Fix ASP.NET Performance Issues, which we released back in January this year (you can download it from www.red-gate.com/50ways). Once again, we collected tips from some of the smartest brains in the ASP.NET community, but this time around we've covered the latest stuff in the .NET framework - async/await, Web API, and more.

    Houston, we have a winner... In my original blogpost, I offered a Microsoft Surface as a prize for the best tip. Now, after some serious deliberation, our judges have settled on a winner. By a unanimous verdict, the prize goes to... (wait for it!)... Jeffrey Richter, for this cheeky number, Tip #1 in the new book:

    Want to build scalable websites and services? Work asynchronously. One of the secrets to producing scalable websites and services is to perform all your I/O operations asynchronously to avoid blocking threads. When your thread issues a synchronous I/O request, the Windows kernel blocks the thread. This causes the thread pool to create a new thread, which allocates a lot of memory and wastes precious CPU time. Calling an XxxAsync method and using C#'s async/await keywords allows your thread to return to the thread pool so it can be used for other things. This reduces the resource consumption of your app, allowing it more memory and improving response time to your clients.

    Congratulations Jeffrey! Of course, I also owe a massive thank you to everyone who's been involved in the book, especially all the authors. It's a real treat to work with a developer community that's so keen to collaborate and to share their hard-won nuggets of performance know-how. If you haven't read it yet, I can't recommend it highly enough. You can get it for free at www.red-gate.com/25secrets

    The full backstory for both eBooks:
    https://www.simple-talk.com/blogs/2012/11/15/application-performance-the-best-of-the-web/
    https://www.simple-talk.com/blogs/2012/11/27/application-performance-episode-2-announcing-the-judges/
    https://www.simple-talk.com/blogs/2013/01/25/free-ebook-50-ways-to-avoid-find-and-fix-asp-net-performance-issues/
    https://www.simple-talk.com/blogs/2013/03/22/50-ways-to-avoid-find-and-fix-asp-net-performance-issues-the-next-generation/
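    To make Jeffrey's winning tip concrete, here is a small sketch (ours, not from the book) of what it looks like in an ASP.NET Web API controller. The controller name and URL are illustrative; the point is that the thread returns to the pool while the awaited I/O is in flight.

        using System.Net.Http;
        using System.Threading.Tasks;
        using System.Web.Http;

        public class ReportsController : ApiController
        {
            private static readonly HttpClient Client = new HttpClient();

            // async/await version: while GetStringAsync's I/O is pending, this
            // thread goes back to the thread pool to serve other requests
            // instead of sitting blocked on a synchronous call.
            public async Task<string> GetAsync()
            {
                string payload = await Client.GetStringAsync("http://example.com/data");
                return payload.ToUpperInvariant();
            }
        }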

    Read the article

  • HTTP Module in detail

    - by Jalpesh P. Vadgama
    I know this post may sound very beginner-level, but I have already posted two topics regarding HTTP handlers and HTTP modules, and this one explains how an HTTP module works in the system. I have already posted What is the difference between HttpModule and HTTPHandler here, and an HTTP Handler example here, as people are still confused by it. In this post I am going to explain the HTTP module in detail.

    What is an HTTP Module? As we all know, when the ASP.NET runtime receives any request it executes a series of extensible objects in the HTTP pipeline. HTTP modules and HTTP handlers play an important role in extending this pipeline. HTTP modules are classes that pre- and post-process requests as they pass through the pipeline, so we can think of them as a kind of filter that does some processing on begin request and end request.

    To create an HTTP module, we implement the System.Web.IHttpModule interface in a custom class. IHttpModule defines two methods: Dispose, where you write any cleanup code, and Init, where you register the event handlers that execute at the time of begin request and end request. Let's create an HTTP module which simply prints text to the browser on every request. Here is the code for that:

    using System;
    using System.Web;

    namespace Experiment
    {
        public class MyHttpModule : IHttpModule
        {
            public void Dispose()
            {
                // add clean-up code here if required
            }

            public void Init(HttpApplication context)
            {
                // register handlers for the start and end of each request
                context.BeginRequest += new EventHandler(context_BeginRequest);
                context.EndRequest += new EventHandler(context_EndRequest);
            }

            public void context_BeginRequest(object o, EventArgs args)
            {
                HttpApplication app = (HttpApplication)o;
                if (app != null)
                {
                    app.Response.Write("<h1>Begin Request Executed</h1>");
                }
            }

            public void context_EndRequest(object o, EventArgs args)
            {
                HttpApplication app = (HttpApplication)o;
                if (app != null)
                {
                    app.Response.Write("<h1>End Request Executed</h1>");
                }
            }
        }
    }

    Here in the above code you can see that I have created two event handlers, context_BeginRequest and context_EndRequest, which execute at begin request and end request as requests are processed. In these event handlers I have just written code to print text in the browser.

    Now, in order to enable this HTTP module in the HTTP pipeline, we have to add a setting to the httpModules section of web.config to tell ASP.NET which module is enabled:

    <configuration>
      <system.web>
        <compilation debug="true" targetFramework="4.0" />
        <httpModules>
          <add name="MyHttpModule" type="Experiment.MyHttpModule,Experiment"/>
        </httpModules>
      </system.web>
    </configuration>

    Next, I created a sample web form with the following markup:

    <form id="form1" runat="server"> <B>test of HTTP Module</B> </form>

    Now let's run this web form in the browser, and you can see the output as expected.
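    One caveat worth adding (not in the original post): the httpModules section shown above applies to the classic pipeline (IIS 6, or classic mode on later versions). If you deploy under IIS 7 or later in integrated pipeline mode, the module must instead be registered in the system.webServer section:

    <configuration>
      <system.webServer>
        <!-- integrated pipeline reads module registrations from here -->
        <modules>
          <add name="MyHttpModule" type="Experiment.MyHttpModule,Experiment"/>
        </modules>
      </system.webServer>
    </configuration>

    Forgetting this is a common reason a module that works on the Visual Studio development server appears to do nothing once deployed to IIS.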

    Read the article

  • Sorting and Filtering By Model-Based LOV Display Value

    - by Steven Davelaar
    If you use a model-based LOV with display type "choice", ADF nicely displays the display value, even if the table is read-only. In the screen shot below, you see the RegionName attribute displayed instead of the RegionId. This is accomplished by the model-based LOV; I did not modify the Countries view object to include a join with Regions. Also note the sort icon: the table is sorted by RegionId. This sorting typically results in a bug reported by your test team. Europe really shouldn't come before America when sorting ascending, right?

    To fix this, we could of course change the Countries view object query and add a join with the Regions table to include the RegionName attribute. If the table is updateable, we still need the choice list, so we would need to move the model-based LOV from the RegionId attribute to the RegionName attribute and hide the RegionId attribute in the table. But that is a lot of work for such a simple requirement, in particular if we have lots of model-based choice lists in our view objects. Fortunately, there is an easier way to do this, with some generic code in your view object base class that fixes this at once for all model-based choice lists defined in your application.

    The trick is to override the method getSortCriteria() in the base view object class. By default, this method returns null because the sorting is done in the database through a SQL ORDER BY clause. However, if getSortCriteria does return sort criteria, the framework will perform in-memory sorting, which is what we need to achieve sorting by region name. So, inside this method we need to evaluate the ORDER BY clause, and if the order-by column matches an attribute that has a model-based LOV choice list defined with a display attribute that is different from the value attribute, we need to return sort criteria.
    Here is the complete code of this method:

    public SortCriteria[] getSortCriteria()
    {
      String orderBy = getOrderByClause();
      if (orderBy != null)
      {
        boolean descending = false;
        if (orderBy.endsWith(" DESC"))
        {
          descending = true;
          orderBy = orderBy.substring(0, orderBy.length() - 5);
        }
        // extract the column name: the part after the dot
        int dotpos = orderBy.lastIndexOf(".");
        String columnName = orderBy.substring(dotpos + 1);
        // loop over the attributes and find the matching attribute
        AttributeDef orderByAttrDef = null;
        for (AttributeDef attrDef : getAttributeDefs())
        {
          if (columnName.equals(attrDef.getColumnName()))
          {
            orderByAttrDef = attrDef;
            break;
          }
        }
        if (orderByAttrDef != null && "choice".equals(orderByAttrDef.getProperty("CONTROLTYPE"))
            && orderByAttrDef.getListBindingDef() != null)
        {
          String orderbyAttr = orderByAttrDef.getName();
          String[] displayAttrs = orderByAttrDef.getListBindingDef().getListDisplayAttrNames();
          String[] listAttrs = orderByAttrDef.getListBindingDef().getListAttrNames();
          // if the first list display attribute is not the same as the first list attribute, then the value
          // displayed is different from the value copied back to the order-by attribute, in which case we need
          // to use our custom comparator
          if (displayAttrs != null && listAttrs != null && displayAttrs.length > 0
              && !displayAttrs[0].equals(listAttrs[0]))
          {
            SortCriteriaImpl sc1 = new SortCriteriaImpl(orderbyAttr, descending);
            SortCriteria[] sc = new SortCriteriaImpl[]{sc1};
            return sc;
          }
        }
      }
      return super.getSortCriteria();
    }

    If this method returns sort criteria, the framework will call the sort method on the view object. The sort method uses a Comparator object to determine the sequence in which the rows should be returned. This comparator is retrieved by calling the getRowComparator method on the view object. So, to ensure sorting by our display value, we need to override this method to return our custom comparator:

    public Comparator getRowComparator()
    {
      return new LovDisplayAttributeRowComparator(getSortCriteria());
    }

    The custom comparator class extends the default RowComparator class and overrides the compareRows method to look up the choice display value when comparing two rows. The complete code of this class is included in the sample application. With this code in place, clicking on the Region sort icon nicely sorts the countries by RegionName, as you can see below.

    When using the Query-By-Example table filter at the top of the table, you typically want to use the same choice list to filter the rows. One way to do that is documented in ADF Code Corner sample 16 - How To Customize the ADF Faces Table Filter. The solution in this sample is perfectly fine to use. It requires you to define a separate iterator binding and associated tree binding to populate the choice list in the table filter area using the af:iterator tag. You might be able to reuse the same LOV view object instance in this iterator binding that is used as the view accessor for the model-based LOV. However, I have seen quite a few customers who have a generic LOV view object (mapped to one "refcodes" table) with the bind variable values set in the LOV view accessor. In such a scenario, some duplicate work is needed to get a dedicated view object instance with the correct bind variables that can be used in the iterator binding.
    Looking for ways to maximize reuse, wouldn't it be nice if we could just reuse our model-based LOV to populate this filter choice list? Well, we can. Here are the basic steps:

    1. Create an attribute list binding in the page definition that we can use to retrieve the list of SelectItems needed to populate the choice list:

    <list StaticList="false" Uses="LOV_RegionId"
          IterBinding="CountriesView1Iterator" id="RegionId"/>

    We need this "current row" list binding because the implicit list binding used by the item in the table is not accessible outside a table row; we cannot use the expression #{row.bindings.RegionId} in the table filter facet.

    2. Create a Map-style managed bean whose get method takes the list binding as key and returns the list of SelectItems. To return this list, we take the list of SelectItems contained by the list binding and replace the index number that is normally used as the key value with the actual attribute value that is set by the choice list. Here is the code of the get method:

    public Object get(Object key)
    {
      if (key instanceof FacesCtrlListBinding)
      {
        // we need to cast to the internal class FacesCtrlListBinding rather than JUCtrlListBinding to
        // be able to call the getItems method. To prevent this import, we could evaluate an EL
        // expression to get the list of items
        FacesCtrlListBinding lb = (FacesCtrlListBinding) key;
        if (cachedFilterLists.containsKey(lb.getName()))
        {
          return cachedFilterLists.get(lb.getName());
        }
        List<SelectItem> items = (List<SelectItem>) lb.getItems();
        if (items == null || items.size() == 0)
        {
          return items;
        }
        List<SelectItem> newItems = new ArrayList<SelectItem>();
        JUCtrlValueDef def = (JUCtrlValueDef) lb.getDef();
        String valueAttr = def.getFirstAttrName();
        // the items list has an index number as value; we need to replace this with the actual
        // value of the attribute that is copied back by the choice list
        for (int i = 0; i < items.size(); i++)
        {
          SelectItem si = (SelectItem) items.get(i);
          Object value = lb.getValueFromList(i);
          if (value instanceof Row)
          {
            Row row = (Row) value;
            si.setValue(row.getAttribute(valueAttr));
          }
          else
          {
            // this is the "empty" row; set the value to an empty string so all rows will be
            // returned, as the user no longer wants to filter on this attribute
            si.setValue("");
          }
          newItems.add(si);
        }
        cachedFilterLists.put(lb.getName(), newItems);
        return newItems;
      }
      return null;
    }

    Note that we added caching to speed up performance, and to handle the situation where table filters or search criteria are set such that no rows are retrieved in the table. When there are no rows, there is no current row, and the getItems method on the list binding will return no items. An alternative approach to creating the list of SelectItems would be to retrieve the iterator binding from the list binding and loop over the rows in the iterator binding's row set. Then we wouldn't need the import of the internal oracle.adfinternal.view.faces.model.binding.FacesCtrlListBinding class, but we would need to figure out the display attributes from the list binding definition, and possibly separate them with a dash if multiple display attributes are defined in the LOV. Doable, but less reuse and more work.
    3. Inside the filter facet for the column, create an af:selectOneChoice with the value property of the f:selectItems tag referencing the get method of the managed bean:

    <f:facet name="filter">
      <af:selectOneChoice id="soc0" autoSubmit="true"
                          value="#{vs.filterCriteria.RegionId}">
        <!-- attention: the RegionId list binding must be created manually in the page definition! -->
        <f:selectItems id="si0"
                       value="#{viewScope.TableFilterChoiceList[bindings.RegionId]}"/>
      </af:selectOneChoice>
    </f:facet>

    Note that the managed bean is defined in viewScope for the caching to take effect. Here is a screen shot of the table filter in action. You can download the sample application here.

    Read the article

  • WinMo’s Demise: Notifying Next of “Kin”

    - by andrewbrust
    This past Monday, April 12th, Visual Studio 2010 was launched. And on that same day, Microsoft also launched a new line of mobile phone handsets, called Kin. The two product launches are actually connected, but only by what they do not have in common, and what they commonly lack.

    On the former point: VS 2010 had been released to manufacturing a couple weeks prior to its launch. The Kin phones, meanwhile, are not yet available. We don't even know what they will cost. (And I think cost will be a major factor in Kin's success… I told ChannelWeb's Yara Souza so in this article.)

    What do the two products both lack? Simple: Windows Mobile 6.x. For example, Kin seems to be based on the same platform as Windows Phone 7 (albeit a subset). And VS 2010 does not support .NET Compact Framework development, which means no .NET development support for WinMo 6.x and earlier. So I guess April 12th marks Windows Phone "clean slate day." If you want to develop for the old phone platform, you will need to use the old version of Visual Studio (i.e. 2008). Luckily VS 2010 and 2008 can be installed side-by-side. But I doubt that's much consolation to developers who still target WinMo 6.5 and earlier.

    Remember, WinMo isn't just about the phone. There are all sorts of non-telephony mobile devices, including ruggedized Pocket PC-style instruments, bar code readers and shop-floor-deployed units that don't run Windows Phone 7 and couldn't, even if they wanted to.

    Where will developers in these markets go? I would guess some will stick with WinMo 6.x and earlier until Windows Phone 7 can handle their workloads, assuming that does indeed happen. Others will likely go to Google's Android platform. For OEMs and developers who need a customizable mobile software stack, Android is turning out to be out-WinMo-ing WinMo. As I wrote in this post, Google took Microsoft's model (minus the licensing fees) and combined it with a modern smartphone feature set (rather than a late-90s/early-oughts PDA paradigm), to great success. You might say Google embraced and extended. You might also say Microsoft shunned and withdrew.

    Read the article

  • Version control and data provenance in charts, slides, and marketing materials that derive from code output

    - by EMS
    I develop as part of a small team that mostly does research and statistics work. But from the output of our code, other teams often create promotional materials, slides, presentations, etc. We run into a big problem because the marketing team (non-programmers) tend to use Excel, Adobe products, or other tools to carry out their work, and just want easy-to-use data formats from us. This leads to data provenance problems: we see email chains with attachments from 6 months ago and someone saying "Hey, who generated this data? Can you generate more of it with the recent 6 months of results added in?"

    I want to help the other teams use version control effectively (my team uses it reasonably well for the code, but every other team classically comes up with many excuses to avoid it). For version-controlling a software project where the participants are coders, I have a reasonable understanding of best practices and what to do. But for getting a team of marketing professionals to version-control marketing materials and associate metadata about the software used to generate the data for the charts, I'm a bit at a loss. Some of the goals I'd like to achieve:

    - Data that supported a material should never be associated with a person. That is, it should never be the case that someone says "Hey Person XYZ, I see you sent me this data as an attachment 6 months ago, can you update it for me?" Rather, data should be associated with the code and code version of any code that was used to get it, and perhaps a team of many people who may maintain that code. Then references for data updates are about executing a specific piece of code, with a known version number.
    - I'd like this to be a process that works easily with the tech the marketing team already uses (e.g. Excel files, Adobe files, whatever). I don't want to burden them with needing to learn a bunch of new stuff just to use version control. They are capable folks, so learning something is fine.
    - Ideally they could use our existing version control framework, but there are some issues around that. I think knowing some general best practices will be enough, though, and I can handle patching that into the way our stuff works now.

    Are there any goals I am failing to think about? What are the time-tested ways to do something like this?
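    One concrete pattern that addresses the "who generated this data?" problem (a sketch of my own, not from the original question): have the export code stamp every file it produces with the version of the code that produced it. Assuming the generating code lives in a git repository and git is on the PATH, a small C# helper along these lines writes a provenance sidecar next to each exported file; the names and file layout are illustrative:

    using System;
    using System.Diagnostics;
    using System.IO;

    public static class Provenance
    {
        // Returns the current commit hash of the repository we are running in.
        private static string CurrentCommit()
        {
            var psi = new ProcessStartInfo("git", "rev-parse HEAD")
            {
                RedirectStandardOutput = true,
                UseShellExecute = false
            };
            using (var git = Process.Start(psi))
            {
                string hash = git.StandardOutput.ReadToEnd().Trim();
                git.WaitForExit();
                return hash;
            }
        }

        // Writes e.g. report.csv.provenance.txt alongside the exported file.
        public static void Stamp(string exportedFile)
        {
            string note = string.Format(
                "generated: {0:u}{2}commit: {1}{2}to refresh: rerun the export job at this commit{2}",
                DateTime.UtcNow, CurrentCommit(), Environment.NewLine);
            File.WriteAllText(exportedFile + ".provenance.txt", note);
        }
    }

    Marketing can keep working in Excel or Adobe tools as before; the sidecar travels with the attachment, so a refresh request becomes "rerun the export at this commit" rather than "ask whoever emailed it".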

    Read the article

  • Using Oracle BPM to Extend Oracle Applications

    - by Michelle Kimihira
    Author: Srikant Subramaniam, Senior Principal Product Manager, Oracle Fusion Middleware

    Customers often modify applications to meet their specific business needs: varying regulatory requirements, unique business processes, product mix transition, etc. Traditional implementation practices for such modifications are typically invasive in nature and introduce risk into projects, affect time-to-market and ease of use, and ultimately increase the costs of running and maintaining the applications. Another downside of these traditional implementation practices is that they literally cast the application in stone, making it difficult for end users to tailor their individual work environments to meet specific needs without getting IT involved. For many businesses, however, IT lacks the capacity to support such rapid business changes. As a result, adopting innovative solutions to change the economics of customization becomes an imperative rather than a choice.

    Let's look at a banking process in Siebel Financial Services and Oracle Policy Automation (OPA) using Oracle Business Process Management. This approach makes modifications simple, quick to implement and easy to maintain and upgrade. The process model is based on the Loan Origination Process Accelerator, i.e., a set of ready-to-deploy business solutions developed by Oracle using Business Process Management (BPM) 11g, containing customizable and extensible pre-built processes to fit specific customer requirements. This use case is a branch-based loan origination process. Origination includes a number of steps ranging from accepting a loan application, applicant identity and background verification (Know Your Customer), credit assessment, and risk evaluation, to the eventual disbursal of funds (or rejection of the application). We use BPM to model all of these individual tasks and integrate (via web services) with Siebel Financial Services and (simulated) backend applications: FLEXCUBE for loan management, Background Verification, and Credit Rating.

    The process flow starts in Siebel when a customer applies for a loan, switches to OPA for eligibility verification and product recommendations, and then hands off to BPM for approvals. The OPA Connector for Siebel simplifies integration with Siebel's web services framework by saving the results from the self-service interview directly into Siebel. This combination of user input and product recommendation invokes the BPM process for loan origination. At the end of the approval process, we update Siebel and the financial application to complete the loop. We use BPM Process Spaces to display role-specific data via dashboards, including the ability to track the status of a given process (flow trace). Loan underwriters have visibility into the product mix (loan categories), status of loan applications (count of approved/rejected/pending), volume and values of loans approved per processing center, processing times, requested vs. approved amounts, and other relevant business metrics.

    Summary: Oracle recommends the use of Fusion Middleware as an extensions platform for applications. This approach makes modifications simple and quick to implement, and makes applications easier to maintain and upgrade (by moving customizations away from the applications to the process layer). It is also easier to manage processes that span multiple applications by using Oracle BPM.

    Additional Information
    Product Information on Oracle.com: Oracle Fusion Middleware
    Follow us on Twitter and Facebook
    Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Networking Guidelines

    - by ACShorten
    One of the things I have noticed in my years in IT is the change in networking. In the past networking was pretty simple, with the host name and name resolution (via DNS) being straightforward; some sites still use this simple networking setup. These days, more complex name resolution, proxies, firewalls, demarcation and virtualization can make networking more complex. This can cause issues when installing products with in-built networking that can frustrate even seasoned veterans. I have put together a few basic guidelines to hopefully help with product installation and with getting a product to operate in a somewhat complex network setup.

    - All the components of the product (including the infrastructure) need to communicate via a network (even if it is within a local machine/host). Ensure any host names referred to within configuration files are accessible via your networking setup. This may mean defining the hosts to the machines, to the DNS for name resolution, and even to your firewall to allow machines to communicate within your network.
    - Make sure the ports used for any of the infrastructure are accessible (even through your firewall) and are unique within the host. Duplicate ports can cause the product to fail on startup as the port is already in use (a quick way to verify a port is free is sketched below).
    - If there are still issues, consider using localhost as your host name. I have used this in so many situations that I now tend to use it as a default any time I install anything myself. Most Oracle products suggest using localhost when using dynamic hosts or dynamic IP addresses, and this is no different for the Oracle Utilities Application Framework. If you do use localhost, then installing a loopback adapter for the operating system is recommended to force networking to a minimum. Usually localhost resolves to 127.0.0.1.
    - When using multiple network connections, especially in a virtualized environment, ensure the host and ports used are relevant for the network cards you have set up. One of the common issues is finding that the product is using a virtualized network card, only to discover that it is not set up for correct networking.
    - If you are using the batch component, do not forget to ensure that the multicast protocol is enabled on your host and that the multicast address and port number specified are valid and accessible from all machines in the batch cluster (if clustering is used). The same advice applies if you are using unicast, where each host/port combination should be accessible.

    Hopefully these basic networking recommendations will help minimize any networking issues you might encounter.
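    As a footnote to the port guideline above (my own illustration, not part of the original post): a quick way to verify a configured port is actually free is to try to bind it. Any language with a sockets API will do; here is a minimal C# sketch:

    using System;
    using System.Net;
    using System.Net.Sockets;

    class PortCheck
    {
        static void Main(string[] args)
        {
            // Port to test, e.g. the one named in your configuration files.
            int port = int.Parse(args[0]);
            try
            {
                // Binding succeeds only if nothing else owns the port.
                var listener = new TcpListener(IPAddress.Loopback, port);
                listener.Start();
                listener.Stop();
                Console.WriteLine("Port {0} is free.", port);
            }
            catch (SocketException)
            {
                Console.WriteLine("Port {0} is already in use.", port);
            }
        }
    }

    Running this before installation against each port in your configuration can save a frustrating startup failure later.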

    Read the article
