Search Results

Search found 3526 results on 142 pages for 'technical bard'.


  • CLR via C# 3rd Edition is out

    - by Abhijeet Patel
    Time for a book news update. CLR via C#, 3rd Edition has been out for a little while now. The book was released in early February this year, and needless to say my copy is on its way. I can barely wait to dig in and chew on the goodies that one of the technical authors and software professionals I respect most has in store. The 2nd edition of the book was an absolute treat, and this edition promises to be no less. Here is a brief description of what’s new and updated from the 2nd edition.
    Part I – CLR Basics
    Chapter 1 – The CLR’s Execution Model: Added discussion about C#’s /optimize and /debug switches and how they relate to each other.
    Chapter 2 – Building, Packaging, Deploying, and Administering Applications and Types: Improved discussion about Win32 manifest information and version resource information.
    Chapter 3 – Shared Assemblies and Strongly Named Assemblies: Added discussion of TypeForwardedToAttribute and TypeForwardedFromAttribute.
    Part II – Designing Types
    Chapter 4 – Type Fundamentals: No new topics.
    Chapter 5 – Primitive, Reference, and Value Types: Enhanced discussion of checked and unchecked code and added discussion of the new BigInteger type. Also added discussion of C# 4.0’s dynamic primitive type.
    Chapter 6 – Type and Member Basics: No new topics.
    Chapter 7 – Constants and Fields: No new topics.
    Chapter 8 – Methods: Added discussion of extension methods and partial methods.
    Chapter 9 – Parameters: Added discussion of optional/named parameters and implicitly-typed local variables.
    Chapter 10 – Properties: Added discussion of automatically-implemented properties, properties and the Visual Studio debugger, object and collection initializers, anonymous types, the System.Tuple type and the ExpandoObject type.
    Chapter 11 – Events: Added discussion of events and thread-safety, as well as a cool extension method to simplify the raising of an event.
    Chapter 12 – Generics: Added discussion of delegate and interface generic type argument variance.
    Chapter 13 – Interfaces: No new topics.
    Part III – Essential Types
    Chapter 14 – Chars, Strings, and Working with Text: No new topics.
    Chapter 15 – Enums: Added coverage of new Enum and Type methods to access enumerated type instances.
    Chapter 16 – Arrays: Added new section on initializing array elements.
    Chapter 17 – Delegates: Added discussion of using generic delegates to avoid defining new delegate types. Also added discussion of lambda expressions.
    Chapter 18 – Attributes: No new topics.
    Chapter 19 – Nullable Value Types: Added discussion on performance.
    Part IV – CLR Facilities
    Chapter 20 – Exception Handling and State Management: This chapter has been completely rewritten; it is now about exception handling and state management. It includes discussions of code contracts and constrained execution regions (CERs), and a new section on trade-offs between writing productive code and reliable code.
    Chapter 21 – Automatic Memory Management: Added discussion of C#’s fixed statement and how it works to pin objects in the heap. Rewrote the code for weak delegates so you can use them with any class that exposes an event (the class doesn’t have to support weak delegates itself). Added discussion of the new ConditionalWeakTable class, GC collection modes, full GC notifications, and latency modes. Also includes a new sample showing how your application can receive notifications whenever Generation 0 or 2 collections occur.
    Chapter 22 – CLR Hosting and AppDomains: Added discussion of side-by-side support allowing multiple CLRs to be loaded in a single process. Added a section on the performance of using MarshalByRefObject-derived types. Substantially rewrote the section on cross-AppDomain communication. Added sections on AppDomain monitoring and first-chance exception notifications. Updated the section on the AppDomainManager class.
    Chapter 23 – Assembly Loading and Reflection: Added a section on how to deploy a single file with dependent assemblies embedded inside it. Added a section comparing reflection invoke vs. bind/invoke vs. bind/create delegate/invoke vs. C#’s dynamic type.
    Chapter 24 – Runtime Serialization: A whole new chapter that was not in the 2nd Edition.
    Part V – Threading
    Chapter 25 – Threading Basics: Whole new chapter motivating why Windows supports threads, and covering thread overhead, CPU trends, NUMA architectures, the relationship between CLR threads and Windows threads, the Thread class, reasons to use threads, thread scheduling and priorities, and foreground vs. background threads.
    Chapter 26 – Performing Compute-Bound Asynchronous Operations: Whole new chapter explaining the CLR’s thread pool. Covers all the new .NET 4.0 constructs including cooperative cancellation, Tasks, the Parallel class, Parallel Language Integrated Query (PLINQ), timers, how the thread pool manages its threads, and cache lines and false sharing.
    Chapter 27 – Performing I/O-Bound Asynchronous Operations: Whole new chapter explaining how Windows performs synchronous and asynchronous I/O operations. Then covers the CLR’s Asynchronous Programming Model (APM), my AsyncEnumerator class, the APM and exceptions, applications and their threading models, implementing a service asynchronously, the APM and compute-bound operations, APM considerations, I/O request priorities, converting the APM to a Task, the event-based Asynchronous Pattern, and programming model soup.
    Chapter 28 – Primitive Thread Synchronization Constructs: Whole new chapter discussing class libraries and thread safety, primitive user-mode and kernel-mode constructs, and data alignment.
    Chapter 29 – Hybrid Thread Synchronization Constructs: Whole new chapter discussing various hybrid constructs such as ManualResetEventSlim, SemaphoreSlim, CountdownEvent, Barrier, ReaderWriterLock(Slim), OneManyResourceLock, Monitor, three ways to solve the double-check locking technique, .NET 4.0’s Lazy and LazyInitializer classes, the condition variable pattern, .NET 4.0’s concurrent collection classes, and the ReaderWriterGate and SyncGate classes.

    Read the article

  • Exitus Acta Probat: The Post-Processing Module

    - by Phil Factor
    Sometimes, one has to make certain ethical compromises to ensure the success of a corporate IT project. Exitus Acta Probat (literally 'the result validates the deeds', meaning that the ends justify the means).

    It was a while back, whilst working as a Technical Architect for a well-known international company, that I was given the task of designing the architecture of a rather specialized accounting system. We'd tried an off-the-shelf (OTS) Windows-based solution, which crashed with dispiriting regularity, and didn't quite do what the business required. After a great deal of research and planning, we commissioned a Unix-based system that used X-terminals for the desktops of the participating staff. X-terminals are now obsolete, but were then hot stuff; stripped-down Unix workstations that provided client GUIs for networked applications long before the days of AJAX, Flash, Air and DHTML.

    I've never known a project go so smoothly. I'd been initially rather nervous about going the Unix route, believing then that Unix programmers were excitable creatures who were prone to indulge in role-play enactments of elves and wizards at the weekend, but the programmers I met from the company that did the work seemed to be rather donnish, earnest people who quickly grasped our requirements and were faultlessly professional in their work. After thinking lofty thoughts for a while, there was considerable pummeling of keyboards by our suppliers, and a beautiful, robust application was delivered to us ahead of schedule.

    Soon, the department who had commissioned the work received shiny new X-terminals to replace their rather depressing lavatory-beige PCs. I modestly hung around as the application was commissioned and deployed to the department in order to receive the plaudits. They didn't come. Something was very wrong with the project. I couldn't put my finger on the problem, and the users weren't doing any more than desperately and futilely searching the application to find a fault with it.

    Many times in my life, I've come up against a predicament like this: the roll-out of an application goes wrong and you are hearing nothing that helps you to discern the cause but nit-*** noise. There is a limit to the emotional heat you can pack into a complaint about text being in the wrong font, or an input form being slightly cramped, but they tried their best. The answer is, of course, one that every IT executive should have tattooed prominently where they can read it in emergencies: In Vino Veritas (literally, 'in wine, the truth'; alcohol loosens the tongue. A Roman proverb).

    It was time to slap the wallet and get the department down the pub with the tab in my name. It was an eye-watering investment, but hedged with an over-confident IT director who relished my discomfort. To cut a long story short, the real reason gushed out with the third round. We had deprived them of their PCs, which had been good for very little from the pure business perspective, but had provided them with many hours of happiness playing computer-based minesweeper and solitaire. There is no more agreeable way of passing away the interminable hours of wage-slavery than minesweeper or solitaire, and the employees had applauded the munificence of their employer who had provided them with the means to play. I had, unthinkingly, deprived them of it.

    I held an emergency meeting with our suppliers the following day. I came over big with the notion that it was in their interests to provide a solution. They played it cool, probably knowing that it was my head on the block, not theirs. In the end, they came up with a compromise: they would temporarily descend from their lofty, cerebral stamping grounds in order to write a server-based Minesweeper and Solitaire game for X-terminals, and install it in a concealed place within the system. We'd have to pay for it, though. I groaned. How could we do that? "Could we call it a 'post-processing module'?" suggested their account executive.

    And so it came to pass. The application was a resounding success. Every now and then, the staff were able to indulge in some 'post-processing', with what turned out to be a very fine implementation of both minesweeper and solitaire. There were several refinements: a single click on a 'boss' button turned the games into what looked just like a financial spreadsheet. They even threw in a multi-user version of Battleships. The extra payment for the post-processing module went through the change-control process without anyone noticing anything untoward, and peace once more descended.

    Only one thing niggles. Those games were good. Do they still survive, somewhere in a Linux library? If so, I'd like to claim a small part in their production.

    Read the article

  • Oracle’s Web Experience Management

    - by Christie Flanagan
    Today’s guest post on Oracle’s Web Experience Management comes from a member of our WebCenter Evangelist team, Noël Jaffré, a Principal Technologist based in France.

    Oracle’s Web Experience Management (WEM) solution enables organizations to optimize the online channel for driving marketing and customer experience management success. It empowers business users to manage the web presence and create rich and engaging online experiences for customers and prospects. Oracle's WEM platform provides a framework to simplify the integration of Oracle, third-party and custom-built applications. This framework essentially allows the creation and integration of applications using one single business interface called the WEM interface. It includes the following:
    Single sign-on access control for all integrated applications using the Central Authentication Service (CAS) component.
    A single centralized administration window for user, role, and native applications management, including site management, community server management, and gadget server management, as well as management for partner integrated technologies.
    A Representational State Transfer (REST) API for accessing WebCenter Sites data. REST services are supported on both Oracle WebCenter Sites and Oracle WebCenter Sites Satellite Server to leverage the satellite server cache. All REST requests are cached for web consuming applications, as well as for the high-performance delivery of native applications on the mobile channel.

    Oracle WebCenter Sites’ Web Experience Management environment enables organizations to deliver a compelling online experience to customers by simplifying the deployment and management of sophisticated and engaging websites. The WebCenter Sites platform automates the entire process of managing web content, including:
    Authoring: Business users can easily contribute and manage web content in real-time, with intuitive interfaces and drag-and-drop content authoring and layout capabilities designed for the non-technical user.
    Contextual Content Targeting: Marketers are empowered to create and manage targeted campaigns with relevant recommendations and promotions based on the context of the visitor's session, such as his or her navigation history, user profile, language, location or other information shared during the visitor session.
    Content Publishing and Deployment: Offers advanced multi-site management capabilities for departmental or regional sites, as well as strong multi-lingual and multi-locale content management. The remote satellite server caching infrastructure provides high-performance, distributed caching, tuned to deliver high-volume, targeted and multi-lingual sites.
    Analytics and Optimization: Business users and marketers have the ability to measure the effectiveness of their online content and campaigns at a granular level. Editors and marketers can immediately determine whether a given article or promotion is relevant to a particular customer segment.
    User-generated Content: Marketers can enable blogs, comments, rating and reviews on the website. All comments and reviews posted to the website can be moderated from the administrator interface, either manually or automatically, using filters, whitelists, blacklists or community-based moderation.
    Personalized Gadget Dashboards: Site managers can deploy gadgets, small applications using web content, individually or as part of dashboards containing multiple gadgets. These gadget dashboards enable site visitors to create their own “MyPage” on a given site, where they can select and customize the gadgets that the site administrator has made available. Any gadget that conforms to the iGoogle/OpenSocial standard can be made available to site visitors, or gadgets can be created within the WEM interface.

    Oracle's WEM platform also provides a unique environment for the delivery of a rich, multichannel online experience for site visitors through its advanced management modules for mobile. With Oracle’s WEM solution, it’s easy to control branding and deliver a consistent message while repurposing web content for publication to mobile devices, kiosks and much more. This distinctive approach provides:
    HTML5 Delivery: HTML5 delivery includes native support for adaptive design that responds to the user’s screen resolution and orientation. The approach is less driven by the particular hardware and more driven by the user’s interactions with the device. In other words, this approach takes both the screen interactions (either cursor or touch) and screen sizes and orientation into consideration.
    A Unique Native Mobile Extension Environment for Contributors: From the WEM interface, a contributor can directly manage their mobile channel, using the tooling already in place for driving the traditional web presence. This includes the mobile presentation, as well as mobile insite editing, drag-and-drop page layout, and in-context recommendations and personalization.
    Optimized REST APIs for High-Performance Content Delivery on Native Mobile Device Applications: WebCenter Sites’ REST API uses the underlying HTTP methods (GET, POST, PUT, DELETE) to interact with resources. Resources support two input and output formats, XML and JSON. REST calls are customizable to optimize the interactions between the content repositories and the client applications. Caching is essential to decrease network loads and improve the overall reliability and usability of the applications and user interactions. REST results are cached through the highly efficient Oracle WebCenter Sites caching architecture.

    Read the article

  • 2012 Oracle Fusion Innovation Awards - Part 2

    - by Michelle Kimihira
    Author: Moazzam Chaudry

    Continuing from Friday's blog on the 2012 Oracle Fusion Innovation Awards, this blog (Part 2) will provide more details around the customers. It was a tremendous honor to be in a single room of winners. We only wish we could have had more time to share stories from all the winners. We received great insight from all the innovative solutions that our customers deploy and would like to share them broadly, so that others can benefit from best practices.

    There was a customer panel session joined by Ingersoll Rand, Nike and Motability, and here is what was discussed:
    Barry Bonar, Enterprise Architect from Ingersoll Rand, shared details around their solution, comprised of Oracle Exalogic, Oracle WebLogic Server and Oracle SOA Suite. This combined solution enabled their business transformation to increase decision-making, speed and efficiency, resulting in 40% reduced IT spend, 41X faster response time and huge cost savings.
    Ashok Balakrishnan, Architect from Nike, shared how they leveraged Oracle Coherence to analyze their digital "footprint" of activities. This helps them compete, collaborate and compare athletic data over time.
    Lastly, Ashley Doodly, Head of IT from Motability, shared details around their solution comprised of Oracle SOA Suite, Service Bus, ADF, Coherence, BO and E-Business Suite. This solution helped Motability achieve 100% ROI within the first few months, performance in seconds vs. tens of minutes, and throughput improvements of up to 50%.

    This year's winners by category are:

    Oracle Exalogic Customer Results using Fusion Middleware:
    Netshoes – ATG on Exalogic: 6X reduced H/W footprint, 6.2X increased throughput and 3 weeks time to market.
    Claro – Part of America Movil, running mission-critical Java applications on Exalogic with 35X faster Java response time, 5X throughput.
    Underwriters Laboratories – Exalogic as an apps consolidation platform to power tremendous growth.
    Ingersoll Rand – EBS on Exalogic: up to 40% reduction in overall IT budget, 3X reduced footprint.

    Oracle Cloud Application Foundation Customer Results using Fusion Middleware:
    Mazda Motor Corporation – Tuxedo ART batch runtime environment to migrate their batch apps to a new open environment and reduce mainframe cost.
    HOTELBEDS Technology – Open Source to WebLogic transformation.
    Globalia Corporation – Introduced Oracle Coherence to fully reengineer the DTH system and provide multiple business and technical benefits.
    Nike – Nike+, the digital sports platform, has 8M users and is expecting a 5X increase in users, many of whom will carry multiple devices that frequently sync data with the Digital Sport platform.
    Comcast Corporation – The solution is expected to increase availability, continuity and performance, and to simplify and make the code at the application layer more flexible.

    Oracle SOA and Oracle BPM Customer Results using Fusion Middleware:
    NTT Docomo – Network traffic solution based on Oracle event processing and Coherence; massive in scale: 12M users (50M in future), 800,000 events/sec.
    Schneider National, Inc. – SOA/B2B/ADF/Data Integration to orchestrate key order processes across Siebel, OTM & EBS. Platform runs 60M transactions/day and 50 million composite SOA instances per day across 10g and 11g.
    Amadeus – Oracle BPM solution: business rules and processes vary across local (80), regional (~10) and corporate approval processes, with up to 10 levels of approval. Plans to deploy across 20+ markets.
    Navitar – SOA solution integrates a fully non-Oracle legacy application/ERP environment using Oracle’s SOA Suite and Oracle AIA Foundation Pack.
    Motability – Uses SOA Suite to synchronize data across the systems and to manage the vehicle remarketing process.

    Oracle WebCenter Customer Results using Fusion Middleware:
    News Limited – Single platform running websites for 50% of Australia's newspapers.
    University of Louisville – “Facebook for Medicine”: Oracle WebCenter platform and Oracle BIEE to analyze patient test data and uncover potential health issues. Expecting annualized ROI of 277%.
    China Mobile Jiangsu – Company portal (25k users) to drive collaboration & productivity.
    Life Technologies – Portal for remotely monitoring & repairing biotech instruments.
    LA Dept. of Water & Power – Oracle WebCenter Portal to power ladwp.com on desktop and mobile for 1.6 million users.

    Oracle Identity Management Customer Results using Fusion Middleware:
    Education Testing Service – Identity Management platform for provisioning & SSO of 6 million GRE, GMAT and TOEFL customers.
    Avea – Oracle Identity Manager allowing call center personnel to quickly change identity profiles to handle varying call loads based on a user self-service interface. Decreased admin cost by 30%.

    Oracle Data Integration Customer Results using Fusion Middleware:
    Raymond James – Near real-time integration for improved systems (throughput & performance) and enhanced operational flexibility in a 24x7 environment.
    Wm Morrison Supermarkets – Electronic point-of-sale integration handling over 80 million transactions a day in near real time (15-minute intervals).

    Oracle Application Development Framework and Oracle Fusion Development Customer Results using Fusion Middleware:
    Qualcomm Incorporated – Solution providing immediate business value, enabling a self-service model necessary for growing the new customer base, an increase in customer satisfaction, and reduced time-to-deliver.
    Micros Systems, Inc. – ADF, SOA Suite and WebCenter enable services that include managing distribution of hotel room availability and rates to channels such as the hotel website, Expedia, etc.
    Marfin Egnatia Bank – A new Web 2.0 UI provides a much richer experience through the ADF solution, with the end result being one of boosting end-user productivity.

    Business Analytics (Oracle BI, Oracle EPM, Oracle Exalytics) Customer Results using Fusion Middleware:
    INC Research – Self-service customer portal delivering 5-10% of the overall revenue; expected to grow fast with the BI solution.
    Experian – Reduction in time to complete the financial close process.
    Hologic Inc – Solution saving months of decision-making uncertainty!

    We look forward to seeing many more innovative nominations. The nomination process for 2013 begins in April 2013.

    Additional Information:
    Blog: Oracle WebCenter Award Winners
    Blog: Oracle Identity Management Winners
    Blog: Oracle Exalogic Winners
    Blog: SOA, BPM and Data Integration will feature award winners in its respective areas this week
    Subscribe to our regular Fusion Middleware Newsletter
    Follow us on Twitter and Facebook

    Read the article

  • Windows Azure End to End Examples

    - by BuckWoody
    I’m fascinated by the way people learn. I’m told there are several methods people use to understand new information, from reading to watching, from experiencing to exploring. Personally, I use multiple methods of learning when I encounter a new topic, usually starting with reading a bit about the concepts. I quickly want to put those into practice, however, especially in the technical realm. I immediately look for examples where I can start trying out the concepts. But I often want a “real” example – not just something that represents the concept, but something that is real-world, showing some feature I could actually use. And it’s no different with the Windows Azure platform – I like finding things I can do now, and actually use. So when I started learning Windows Azure, I of course began with the Windows Azure Training Kit – which has lots of examples and labs, presentations and so on. But from there, I wanted more examples I could learn from, and eventually teach others with. I was asked if I would write a few of those up, so here are the ones I use.

    CodePlex
    CodePlex is Microsoft’s version of an “Open Source” repository. Anyone can start a project, add code, documentation and more to it and make it available to the world, free of charge, using various licenses as they wish. Microsoft also uses this location for most of the examples we publish, and sample databases for SQL Server. If you search in CodePlex for “Azure”, you’ll come back with a list of projects that folks have posted, including those of us at Microsoft. The source code and documentation are there, so you can learn using actual examples of code that will do what you need. There’s everything from a simple table query to a full project that is sort of a “Corporate Dropbox” that uses Windows Azure Storage. The advantage is that this code is immediately usable. It’s searchable, and you can often find a complete solution to meet your needs. The disadvantage is that the code is pretty specific – it may not cover a huge project like you’re looking for. Also, depending on the author(s), you might not find the documentation level you want.
    Link: http://azureexamples.codeplex.com/site/search?query=Azure&ac=8

    Tailspin
    Microsoft Patterns and Practices is a group here that does an amazing job at sharing standard ways of doing IT – from operations to coding. If you’re not familiar with this resource, make sure you read up on it. Long before I joined Microsoft I used their work in my daily job – saved a ton of time. It has resources not only for Windows Azure but other Microsoft software as well. The Patterns and Practices group also publishes full books – you can buy these, but many are also online for free. There’s an end-to-end example for Windows Azure using a company called “Tailspin”, and the work covers not only the code but the design of the full solution. If you really want to understand the thought that goes into a Platform-as-a-Service solution, this is an excellent resource. The advantages are that this is a book, it’s complete, and it includes a discussion of design decisions. The disadvantage is that it’s a little over a year old – and in “Cloud” years that’s a lot. So many things have changed, improved, and have been added that you need to treat this as a resource, but not the only one. Still, highly recommended.
    Link: http://msdn.microsoft.com/en-us/library/ff728592.aspx

    Azure Stock Trader
    Sometimes you need a mix of a CodePlex-style application, and a little more detail on how it was put together. And it would be great if you could actually play with the completed application, to see how it really functions on the actual platform. That’s the Azure Stock Trader application. There’s a place where you can read about the application, and then it’s been published to Windows Azure – the production platform – and you can use it, explore, and see how it performs. I use this application all the time to demonstrate Windows Azure, or a particular part of Windows Azure. The advantage is that this is an end-to-end application, and online as well. The disadvantage is that it takes a bit of self-learning to work through.
    Links:
    Learn it: http://msdn.microsoft.com/en-us/netframework/bb499684
    Use it: https://azurestocktrader.cloudapp.net/

    Read the article

  • Oracle ATG Web Commerce 10 Implementation Developer Boot Camp - Reading (UK) - October 1-12, 2012

    - by Richard Lefebvre
    REGISTER NOW: Oracle ATG Web Commerce 10 Implementation Developer Boot Camp, Reading, UK, October 1-12, 2012!

    OPN invites you to join us for a 10-day implementation boot camp on Oracle ATG Web Commerce in Reading, UK from October 1-12, 2012. This 10-day boot camp is designed to provide partners with hands-on experience and technical training to successfully build and deploy Oracle ATG Web Commerce 10 applications. This particular boot camp is focused on helping partners develop the essential skills needed to implement every aspect of an ATG Commerce application from scratch (not CRS-based), with the specific goal of providing experienced Java/J2EE developers with a path towards becoming functional, effective, and contributing members of an ATG implementation team. Built for both new and experienced ATG developers alike, the collaborative nature of this program and its exercises has proven to be highly effective and extremely valuable in learning the best practices for implementing ATG solutions. Though not required, this boot camp provides a structured path to earning a Certified Oracle ATG Web Commerce 10 Specialization!

    What Is Covered:
    This boot camp is for Application Developers and Software Architects wanting to gain valuable insight into ATG application development best practices, as well as relevant and applicable implementation experience on projects modeled after four of the most common types of applications built on the ATG platform. The following learning objectives are all critical, and are of equal priority in enabling this role to succeed. This learning boot camp will help with:
    - Building a basic functional transaction-ready ATG Web Commerce 10 application.
    - Utilizing ATG’s platform features such as scenarios, slots, targeters, user profiles and segments, to create a personalized user experience.
    - Building Nucleus components to support and/or extend application functionality.
    - Understanding the intricacies of ATG order checkout and fulfillment.
    - Specifying, designing and implementing new commerce features in ATG 10.
    - Building a functional commerce application modeled after four of the most common types of applications built on the ATG platform, within an agile-based project team environment and under simulated real-world project conditions.

    Duration: The Oracle ATG Web Commerce 10 Implementation Developer Boot Camp is an instructor-led workshop spanning 10 days.

    Audience:
    - Application Developers
    - Software Architects

    Prerequisite Training and Environment Requirements:
    - Programming and markup experience with Java J2EE, JavaScript, XML, HTML and CSS
    - Completion of the Oracle ATG Web Commerce 10 Implementation Specialist Development Guided Learning Path modules
    Participants will be required to bring their own laptop that meets the minimum specifications:
    - 64-bit PC and OS (e.g. Windows 7 64-bit)
    - 4GB RAM or more
    - 40GB hard disk space
    Laptops will require access to the Internet through Remote Desktop via Windows.

    Agenda Topics:

    Week 1 – Days 1 through 5: Build a Basic Commerce Application
    In week one of the boot camp training, we will apply knowledge learned from the ATG Web Commerce 10 Implementation Developer Guided Learning Path modules towards building a basic transaction-ready commerce application. There will be little to no lectures delivered in this boot camp, as developers will be fully engaged in ATG application development activities and best practices. Developers will work independently on the following lab assignments from days 1 through 5:
    Lab Assignments:
    1. Environment Setup
    2. Build a dynamic Home Page
    3. Site Authentication
    4. Build Customer Registration
    5. Display Top Level Categories
    6. Display Product Sub-Categories
    7. Display Product List Page
    8. Display Product Detail Page
    9. ATG Inventory
    10. Build “Add to Cart” Functionality
    11. Build Shopping Cart
    12. Build Checkout Page
    13. Build Checkout Review Page
    14. Create an Order and Build Order Confirmation Page
    15. Implement Slots and Targeters for Personalization
    16. Implement Pricing and Promotions
    17. Order Fulfillment

    Week 2 – Days 6 through 10: Team-based Case Project
    In the second week of the boot camp training, participants will be asked to join a project team that will select a case project for the team to implement. Teams will be able to choose from four of the most common application types developed and deployed on the ATG platform. They are as follows: hard goods with physical fulfillment, soft goods with electronic fulfillment, a service or subscription case example, and a course/event registration case example. Team projects will have approximately 160 hours of use cases/stories for each team to build (40 hours per developer). Each day's use cases/stories will build upon the prior day's work, and therefore must be fully completed at the end of each day. Please note that this boot camp intends to simulate real-world project conditions, and as such will likely require project teams to work beyond normal business hours. To promote further collaboration and group learning, each team will be asked to present their work and share the methodologies and solutions that they've applied to their cases at the end of each day.

    Location: Oracle Reading CVC, TPC510 Room: Wraysbury, Reading, UK
    9:00 AM – 5:00 PM
    Registration Fee (10 Days): US $3,375

    Please click on the following link to REGISTER, or visit the Oracle ATG Web Commerce 10 Implementation Developer Boot Camp page for more information.

    Questions:
    Patrick Ty, Partner Enablement, Oracle Commerce
    Phone: 310.343.7687 Mobile: 310.633.1013
    Email: [email protected]

    Read the article

  • What should you bring to the table as a Software Architect?

    - by Ahmad Mageed
    There have been many questions with good answers about the role of a Software Architect (SA) on StackOverflow and Programmers SE. I am trying to ask a slightly more focused question than those. The very definition of an SA is broad, so for the sake of this question let's define an SA as follows: A Software Architect guides the overall design of a project, gets involved with coding efforts, conducts code reviews, and selects the technologies to be used. In other words, I am not talking about managerial rest and vest at the crest (further rhyming words elided) types of SAs. If I were to pursue any type of SA position I don't want to be away from coding. I might sacrifice some time to interface with clients and Business Analysts etc., but I am still technically involved and I'm not just aware of what's going on through meetings.

    With these points in mind, what should an SA bring to the table? Should they come in with a mentality of "laying down the law" (so to speak) and enforcing the usage of certain tools to fit "their way," i.e., coding guidelines, source control, patterns, UML documentation, etc.? Or should they specify initial direction and strategy, then be laid back and jump in as needed to correct the ship's direction? Depending on the organization this might not work. An SA who relies on TFS to enforce everything may struggle to implement their plan at an employer that only uses StarTeam. Similarly, an SA needs to be flexible depending on the stage of the project. If it's a fresh project they have more choices, whereas they might have less for existing projects.

    Here are some SA stories I have experienced, as a way of sharing some background in hopes that answers to my questions might also shed some light on these issues:

    I've worked with an SA who code reviewed literally every single line of code of the team. The SA would do this for not just our project but other projects in the organization (imagine the time spent on this). At first it was useful to enforce certain standards, but later it became crippling. FxCop was how the SA would find issues. Don't get me wrong, it was a good way to teach junior developers and force them to think of the consequences of their chosen approach, but for senior developers it was seen as somewhat draconian.

    One particular SA was against the use of a certain library, claiming it was slow. This forced us to write tons of code to achieve things differently while the other library would've saved us a lot of time. Fast forward to the last month of the project and the clients were complaining about performance. The only solution was to change certain functionality to use the originally ignored approach, despite early warnings from the devs. By that point a lot of code was thrown out and not reusable, leading to overtime and stress. Sadly the estimates used for the project were based on the old approach which my project was forbidden from using, so it wasn't an appropriate indicator for estimation. I would hear the PM say "we've done this before," when in reality they had not, since we were using a new library and the devs working on it were not the same devs used on the old project.

    The SA who would enforce the usage of DTOs, DOs, BOs, service layers and so on for all projects. New devs had to learn this architecture and the SA adamantly enforced usage guidelines. Exceptions to usage guidelines were made when it was absolutely difficult to follow the guidelines. The SA was grounded in their approach. Classes for DTOs and all CRUD operations were generated via CodeSmith, and database schemas were another similar ball of wax. However, having used this setup everywhere, the SA was not open to new technologies such as LINQ to SQL or Entity Framework.

    I am not using this post as a platform for venting. There were positive and negative aspects to my experiences with the SA stories mentioned above. My questions boil down to:
    - What should an SA bring to the table?
    - How can they strike a balance in their decision making?
    - Should one approach an SA job (as defined earlier) with the mentality that they must enforce certain ground rules?
    - Anything else to consider?

    Thanks! I'm sure these job tasks are easily extended to people who are senior devs or technical leads, so feel free to answer at that capacity as well.

    Read the article

  • How to Answer a Stupid Interview Question the Right Way

    - by AjarnMark
    Have you ever been asked a stupid question during an interview; one that seemed to have no relation to the job responsibilities at all?  Tech people are often caught off-guard by these apparently irrelevant questions, but there is a way you can turn these to your favor.  Here is one idea.

    While chatting with a couple of folks between sessions at SQLSaturday 43 last weekend, one of them expressed frustration over a seemingly ridiculous and trivial question that she was asked during an interview, and she believes it cost her the job opportunity.  The question, as I remember it being described, was, “What is the largest byte measurement?”.  The candidate made up a guess (“zetabyte”) during the interview, which is actually closer than she may have realized.  According to Wikipedia, there is a measurement known as zettabyte which is 10^21, and the largest one listed there is yottabyte at 10^24.

    My first reaction to this question was, “That’s just a hiring manager that doesn’t really know what they’re looking for in a candidate.  Furthermore, this tells me that this manager really does not understand how to build a team.”  In most companies, team interaction is more important than uber-knowledge.  I didn’t ask, but this could also be another geek on the team trying to establish their Alpha-Geek stature.  I suppose that there are a few, very few, companies that can build their businesses on hiring only the extreme alpha-geeks, but that certainly does not represent the majority of businesses in America.

    My friend who was there suggested that the appropriate response to this silly question would be, “And how does this apply to the work I will be doing?”  Of course this is an understandable response when you’re frustrated because you know you can handle the technical aspects of the job, and it seems like the interviewer is just being silly.  But it is also a direct challenge, which may not be the best approach in interviewing.  I do have to admit, though, that there are those folks who just won’t respect you until you do challenge them, but again, I don’t think that is the majority.

    So after some thought, here is my suggestion: “Well, I know that there are petabytes and exabytes and things even larger than that, but I haven’t been keeping up on my list of Greek prefixes that have not yet been used, so I would have to look up the exact answer if you need it.  However, I have worked with databases as large as 30 Terabytes.  How big are the largest databases here at X Corporation?”  Perhaps with a follow-up of, “Typically, what I have seen in companies that have databases of your size, is that the three biggest challenges they face are: A, B, and C.  What would you say are the top 3 concerns that you would like the person you hire to be able to address?…Here is how I have dealt with those concerns in the past (or ‘Here is how I would tackle those issues for you…’).”

    Wait! What just happened?!  We took a seemingly irrelevant and frustrating question and turned it around into an opportunity to highlight our relevant skills and guide the conversation back in a direction more to our liking and benefit.  In more generic terms, here is what we did:
    - Admit that you don’t know the specific answer off the top of your head, but can get it if it’s truly important to the company. Maybe for some reason it really is important to them.
    - Mention something similar or related that you do know, reassuring them that you do have some knowledge in that subject area.
    - Draw a parallel to your past work experience.
    - Ask follow-up questions about the company’s specific needs and discuss how you can fulfill those.

    This type of thing requires practice and some forethought.  I didn’t come up with this answer until a day later, which is too late when you’re interviewing.  I still think it is silly for an interviewer to ask something like that, but at least this is one way to spin it to your advantage while you consider whether you really want to work for someone who would ask a thing like that.  Remember, interviewing is a two-way process.  You’re deciding whether you want to work there just as much as they are deciding whether they want you.

    There is always the possibility that this was a calculated maneuver on the part of the hiring manager just to see how quickly you think on your feet and how you handle stupid questions.  Maybe he knows something about the work environment and he’s trying to gauge whether you’ll actually fit in okay.  And if that’s the case, then the above response still works quite well.

    Read the article

  • Demystified - BI in SharePoint 2010

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information).

    Frequently, my clients ask me if there is a good guide on deciphering the seemingly daunting choice of products from Microsoft when it comes to business intelligence offerings in a SharePoint 2010 world. These are all described in detail in my book, but here is a one (well, maybe two) page executive overview.

    Microsoft Excel: Yes, Microsoft Excel! Your favorite, and the most commonly used 'database' in the world. No, it isn't a database by strict technical definitions, but it is the most commonly used 'database' in the world, and you will find many business users craft up very compelling Excel sheets with tonnes of logic inside them.
    Good for: Quick ad-hoc reports. Excel 64-bit allows the possibility of very large datasheets (also see 32 bit vs 64 bit Office, and the PowerPivot Add-In below).
    Audience: End business users can build such solutions.
    Related technologies: PowerPivot, Excel Services

    Microsoft Excel with PowerPivot Add-In: The PowerPivot add-in is an extension to Excel that adds support for large-scale data. Think of this as Excel with the ability to deal with very large amounts of data. It has an in-memory data store as an option for Analysis Services.
    Good for: Ad-hoc reporting and logic with very large amounts of data.
    Audience: End business users can build such solutions.
    Related technologies: Excel, and Excel Services

    Excel Services: Excel Services is a Microsoft SharePoint Server 2010 shared service that brings the power of Excel to SharePoint Server by providing server-side calculation and browser-based rendering of Excel workbooks. Thus, Excel sheets can be created by end users and published to SharePoint Server, where they are then rendered right through the browser in read-only or parameterized read-only modes. They can also be accessed by other software via SOAP or REST based APIs.
    Good for: Sharing Excel sheets with a larger number of people while maintaining control/version control etc.; sharing logic embedded in Excel sheets with other software across the organization via REST/SOAP interfaces.
    Audience: End business users can build such solutions once your tech staff has set up Excel Services on a SharePoint Server instance. Programmers can write software consuming functionality/complex formulae contained in your sheets.
    Related technologies: PerformancePoint Services, Excel, and PowerPivot

    Visio Services: Visio Services is a shared service on the Microsoft SharePoint Server 2010 platform that allows users to share and view Visio diagrams that may or may not have data connected to them. Connected data can update these diagrams, allowing a visual/graphical view into the data. The diagrams are viewable through the browser. They are rendered in Silverlight, but will automatically down-convert to .png formats.
    Good for: Showing data as diagrams, live updating. Comes with a developer story.
    Audience: End business users can build such solutions once your tech staff has set up Visio Services on a SharePoint Server instance. Developers can enhance the visualizations.
    Related technologies: Visio Services can be used to render workflow visualizations in SP2010.

    Reporting Services: SQL Server Reporting Services can integrate with SharePoint, allowing you to store reports and data sources in SharePoint document libraries, and render these reports and associated functionality such as subscriptions through a SharePoint site. In SharePoint 2010, you can also write reports against SharePoint lists (Access Services uses this technique).
    Good for: Showing complex reports running in an industry-standard data store, such as SQL Server.
    Audience: This is definitely developer land. Don't expect end users to craft up reports, unless a report model has previously been published.
    Related technologies: PerformancePoint Services

    PerformancePoint Services: PerformancePoint Services in SharePoint 2010 is now fully integrated with SharePoint, and comes with features that can either be used in the BI center site definition, or on their own as activated features in existing site collections. PerformancePoint Services allows you to build reports and dashboards that target a variety of back-end data sources including: SQL Server Reporting Services, SQL Server Analysis Services, SharePoint lists, Excel Services, simple tables, etc. Using these you have the ability to create dashboards, scorecards/KPIs, and simple reports. You can also create reports targeting hierarchical multidimensional data sources. The visual decomposition tree is a new report type that lets you quickly break down multi-dimensional data.
    Good for: Mostly everything :), except your wallet – it's not free! But this is the most comprehensive offering. If you have SharePoint Server, forget everything and go with PerformancePoint.
    Audience: Developers need to set up the back-end sources and the manageability story. DBAs need to set up data warehouses with cubes. Moderately sophisticated business users, or developers, can craft up reports using Dashboard Designer, which is a click-once app that deploys with PerformancePoint.
    Related technologies: Excel Services, Reporting Services, etc.

    Other relevant technologies to know about:

    Business Connectivity Services: Allows for consumption of external data in SharePoint as columns or external lists. This can be paired with one or more of the above BI offerings, allowing insight into such data.

    Access Services: Allows the representation/publishing of an Access database as a SharePoint 2010 site, leveraging many SharePoint features. Reporting Services is used by Access Services.

    Secure Store Service: The SP2010 Secure Store Service is a replacement for the SP2007 single sign-on feature. This acts as a credential policeman, providing credentials to various applications running within SharePoint. BCS, PerformancePoint Services, Excel Services, and many other apps use the SSS (Secure Store Service) for credential control.

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage.  For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle.

    If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent to running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical.

    1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted.

    2. It's not just about the storage requirements, it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control, I'm just not going to get the feedback quickly enough to react.

    So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a virtual data file and log file (.vmdf and .vldf) that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no 'duplicate' storage requirements, other than the trivially small virtual log and data files. The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database.

    It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool:

    RESTORE DATABASE WidgetProduction_virtual
    FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
    WITH
        MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
        MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
        NORECOVERY, STATS=1, REPLACE
    GO
    RESTORE DATABASE WidgetProduction_virtual WITH RECOVERY
    GO

    Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly.

    For illustration: my production database is 8GB, with a backup file of comparable size, while the .vldf and .vmdf files represent the only additional storage used for the new database following the virtual restore.

    The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server
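    If you want to confirm for yourself just how little extra storage the virtual files consume, a simple check along these lines works (this is my own sanity-check sketch, not part of the product; the database name assumes the RESTORE example above, and sys.master_files reports size in 8KB pages):

    -- List the files behind the virtual database and their sizes in MB.
    -- The .vmdf/.vldf files should be tiny compared to the 8GB source database.
    SELECT  d.name AS database_name,
            f.name AS logical_name,
            f.physical_name,
            f.size * 8 / 1024 AS size_mb
    FROM    sys.master_files AS f
    JOIN    sys.databases AS d ON d.database_id = f.database_id
    WHERE   d.name = 'WidgetProduction_virtual';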

    Read the article

  • Oracle Business Intelligence Advanced - Hands-on Workshop for Partners - January 18-21

    - by Claudia Costa
    Workshop Description This FREE hands-on workshop highlights strengths of OBIEE 11g by providing attendees a hands-on experience with BI 11g product. OBIEE 11g has adopted the standardized infrastructure of Fusion Middleware to provide robust server capability along with highly anticipated advanced visualization components like Maps, Flash based charts, Scorecards and KPIs. This workshop focuses on new features and infrastructure components for the BI practitioners who are familiar with either OBIEE 10g or previous BI releases. After taking this course, Oracle Business Intelligence 11g Advanced, you will gain insight into OBIEE11g technology, reporting solutions and new features. Workshop provides opportunities to practice with OBIEE11g environment as hands on activities. Participant will gain in-depth understanding of new architecture of OBIEE 11g, security mode, installation/configuration as well as reporting aspects like, new ROLAP/MOLAP style hierarchical browsing, new chart types, Action Framework and Advanced Visualization. If you are a Business Intelligence practitioners and familiar with BI10g - you cannot afford to miss this 3-day workshop. Register Now! PresentationsBusiness Intelligence EE (OBIEE) 11g: Advanced Workshop ·         OBIEE 11g Overview ·         OBIEE 11g Architecture and Infrastructure ·         OBIEE 11g Installation, Configuration and Monitoring ·         OBIEE11g Security Model and BI Components ·         OBIEE 11g Homepage Overview ·         New Visualizations: Master-Detail Events, Charts, Hierarchies ·         Reports Building with OBIEE 11g and Catalog Management ·         Spatial Integration, Action Framework, Scorecards ·         OBIEE 11g Dashboards ·         OBIEE Integration Options  Lab OutlineOracle Business Intelligence (OBIEE) 11g: Advanced Workshop The labs enable OBIEE Core functionality through hands-on activities are based on a Oracle VirtualBox image with software and training samples pre-installed. This Advanced course has few labs optional during the workshop to allow for students to practice them on their own. The primary purpose of the workshop is to provide expertise of 11g features and infrastructure changes from 10g. Labs will allow you to explore concepts to: ·         Have a clear understanding of the OBIEE 11g architecture ·         Have a clear understanding of the OBIEE differentiators ·         OBIEE11g Security Model ·         OBIEE11g Environment Management ·         Report Building with OBIEE11g ·         OBIEE11g Dashboard and Homepage Environment ·         New Visualization features ·         Management of Reports, Dashboards and BI Catalog Objects Audience ·         Business Intelligence Evangelist ·         Business Intelligence Application Developer or Consultant ·         Data Warehouse Developer ·         Enterprise Architects ·         Industry Solutions Architects Prerequisites ·         Experience and Understanding of OBIEE 10g is required. ·         Good understanding of data modeling for reporting purpose ·         Strong experience with database technologies preferred Equipment RequirementsThis workshop requires attendees to provide their own laptops. Attendee laptops must meet the following minimum hardware/software requirements: OBIEE 11g environments requires at least 3 GB of RAM (4GB Preferred), without which student will not be able to complete labs. This workshop has environment that includes VM Image and also a software components that students will install on their laptop for the labs. 
    ·         Minimum 3 GB RAM, 25 GB free disk space
    ·         Internet Explorer 7
    ·         VirtualBox (the latest version), downloadable from http://www.virtualbox.org
    ·         WinRAR or 7-Zip, downloadable from http://www.win-rar.com/download.html or http://www.7zip.com/
    Attendees will be given a VirtualBox image for the Oracle BI 11g Workshop containing the software along with the required toolset, database and data sets for the labs.
    Agenda
    This class runs 3 days.
    9:00am: Sign-in and technical setup
    9:30am: Workshop starts
    5:00pm: Workshop ends
    Location
    Hotel Holiday Inn Express - Porto Salvo - Lisboa
    This class is free. Register early to confirm a seat!
    Oracle BI Advanced 11g Hands-on Workshop - Schedule. Register Now!
    January 11-13, 2011: Kista, Sweden
    January 18-20, 2011: Lisbon, Portugal
    March 1-3, 2011: Reading, Berkshire, UK
    March 15-17, 2011: Colombes, Paris, France
    March 29-31, 2011: Amsterdam, Netherlands
    Questions? For registration questions please send an email to [email protected]. For other information, please contact Claudia Costa, tel: 214235027, or by email.

    Read the article

  • How to nest transactions nicely - "begin transaction" vs "save transaction" and SQL Server

    - by Brian Biales
    Do you write stored procedures that might be used by others? And those others may or may not have already started a transaction? And your SP does several things, but if any of them fail, you have to undo them all and return with a code indicating it failed? Well, I have written such code, and it wasn't working right until I finally figured out how to handle the case when we are already in a transaction, as well as the case where the caller did not start a transaction. When a problem occurred, my "ROLLBACK TRANSACTION" would roll back not just my nested transaction, but the caller's transaction as well. So when I tested the procedure stand-alone, it seemed to work fine, but when others used it, it would cause a problem if it had to roll back. When something went wrong in my procedure, their entire transaction was rolled back. This was not appreciated.
    Now, I knew one could "nest" transactions, but the technical documentation was very confusing. And I still have not found the approach below documented anywhere. So here is a very brief description of how I got it to work; I hope you find it helpful.
    My example is a stored procedure that must figure out on its own whether the caller has started a transaction. This can be done in SQL Server by checking the @@TRANCOUNT value. If no BEGIN TRANSACTION has occurred yet, this will have a value of 0. Any number greater than zero means that a transaction is in progress. If there is no current transaction, my SP begins a transaction. But if a transaction is already in progress, my SP uses SAVE TRANSACTION and gives it a name. SAVE TRANSACTION creates a "save point". Note that creating a save point has no effect on @@TRANCOUNT. So my SP starts with something like this:
    DECLARE @startingTranCount int
    SET @startingTranCount = @@TRANCOUNT
    IF @startingTranCount > 0
        SAVE TRANSACTION mySavePointName
    ELSE
        BEGIN TRANSACTION
    -- ...
    Then, when ready to commit the changes, you only need to commit if we started the transaction ourselves:
    IF @startingTranCount = 0
        COMMIT TRANSACTION
    And finally, to roll back just your changes so far (note that save point names are case-sensitive, so the name must match exactly):
    -- Roll back changes...
    IF @startingTranCount > 0
        ROLLBACK TRANSACTION mySavePointName
    ELSE
        ROLLBACK TRANSACTION
    Here is some code that you can try that will demonstrate how the save points work inside a transaction. This sample code creates a temporary table, then executes selects and updates, documenting what is going on, then deletes the temporary table. If running in SQL Server Management Studio, set Query Results to Text for best readability of the results.
    -- Create a temporary table to test with, we'll drop it at the end.
    CREATE TABLE #ATable( [Column_A] [varchar](5) NULL ) ON [PRIMARY]
    GO
    SET NOCOUNT ON
    -- Ensure just one row - delete all rows, add one
    DELETE #ATable
    -- Insert just one row
    INSERT INTO #ATable VALUES('000')
    SELECT 'Before TRANSACTION starts, value in table is: ' AS Note, * FROM #ATable
    SELECT @@TRANCOUNT AS CurrentTrancount
    --insert into a values ('abc')
    UPDATE #ATable SET Column_A = 'abc'
    SELECT 'UPDATED without a TRANSACTION, value in table is: ' AS Note, * FROM #ATable
    BEGIN TRANSACTION
    SELECT 'BEGIN TRANSACTION, trancount is now ' AS Note, @@TRANCOUNT AS TranCount
    UPDATE #ATable SET Column_A = '123'
    SELECT 'Row updated inside TRANSACTION, value in table is: ' AS Note, * FROM #ATable
    SAVE TRANSACTION MySavepoint
    SELECT 'Save point MySavepoint created, transaction count now:' AS Note, @@TRANCOUNT AS TranCount
    UPDATE #ATable SET Column_A = '456'
    SELECT 'Updated after MySavepoint created, value in table is: ' AS Note, * FROM #ATable
    SAVE TRANSACTION point2
    SELECT 'Save point point2 created, transaction count now:' AS Note, @@TRANCOUNT AS TranCount
    UPDATE #ATable SET Column_A = '789'
    SELECT 'Updated after point2 savepoint created, value in table is: ' AS Note, * FROM #ATable
    ROLLBACK TRANSACTION point2
    SELECT 'Just rolled back savepoint "point2", value in table is: ' AS Note, * FROM #ATable
    ROLLBACK TRANSACTION MySavepoint
    SELECT 'Just rolled back savepoint "MySavepoint", value in table is: ' AS Note, * FROM #ATable
    SELECT 'Both save points were rolled back, transaction count still:' AS Note, @@TRANCOUNT AS TranCount
    ROLLBACK TRANSACTION
    SELECT 'Just rolled back the entire transaction..., value in table is: ' AS Note, * FROM #ATable
    DROP TABLE #ATable

    The output should look like this:

    Note                                            Column_A
    ----------------------------------------------  --------
    Before TRANSACTION starts, value in table is:   000

    CurrentTrancount
    ----------------
    0

    Note                                                Column_A
    --------------------------------------------------  --------
    UPDATED without a TRANSACTION, value in table is:   abc

    Note                                  TranCount
    ------------------------------------  -----------
    BEGIN TRANSACTION, trancount is now   1

    Note                                                 Column_A
    ---------------------------------------------------  --------
    Row updated inside TRANSACTION, value in table is:   123

    Note                                                    TranCount
    ------------------------------------------------------  -----------
    Save point MySavepoint created, transaction count now:  1

    Note                                                    Column_A
    ------------------------------------------------------  --------
    Updated after MySavepoint created, value in table is:   456

    Note                                               TranCount
    -------------------------------------------------  -----------
    Save point point2 created, transaction count now:  1

    Note                                                         Column_A
    -----------------------------------------------------------  --------
    Updated after point2 savepoint created, value in table is:   789

    Note                                                      Column_A
    --------------------------------------------------------  --------
    Just rolled back savepoint "point2", value in table is:   456

    Note                                                           Column_A
    -------------------------------------------------------------  --------
    Just rolled back savepoint "MySavepoint", value in table is:   123

    Note                                                          TranCount
    ------------------------------------------------------------  -----------
    Both save points were rolled back, transaction count still:  1

    Note                                                             Column_A
    ---------------------------------------------------------------  --------
    Just rolled back the entire transaction..., value in table is:  abc
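    To pull the pieces together, here is a minimal sketch of a complete stored procedure skeleton built on this pattern, wrapped in TRY/CATCH (available in SQL Server 2005 and later). The procedure name, save point name and return codes are hypothetical, for illustration only:

    CREATE PROCEDURE usp_DoSeveralThings  -- hypothetical name
    AS
    BEGIN
        DECLARE @startingTranCount int
        SET @startingTranCount = @@TRANCOUNT

        -- Start our own transaction only if the caller has not started one
        IF @startingTranCount > 0
            SAVE TRANSACTION usp_DoSeveralThings  -- save point names are case-sensitive
        ELSE
            BEGIN TRANSACTION

        BEGIN TRY
            -- ... do the procedure's several pieces of work here ...

            -- Commit only if we started the transaction ourselves
            IF @startingTranCount = 0
                COMMIT TRANSACTION
        END TRY
        BEGIN CATCH
            -- Undo only our own changes: roll back to the save point when the
            -- caller owns the transaction, otherwise roll back our transaction
            IF @startingTranCount > 0
                ROLLBACK TRANSACTION usp_DoSeveralThings
            ELSE
                ROLLBACK TRANSACTION
            RETURN 1  -- code indicating failure
        END CATCH

        RETURN 0  -- code indicating success
    END

    One caveat with this sketch: if an error has doomed the surrounding transaction (XACT_STATE() returns -1), rolling back to a save point is not allowed and the whole transaction must be rolled back, so production code should check XACT_STATE() in the CATCH block before choosing which rollback to issue.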

    Read the article

  • Oracle Social Network Developer Challenge Winners

    - by kellsey.ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. Now that OpenWorld 2012 has wrapped, I have time to tell you all about what happened. Maybe you recall that Noel (@noelportugal) and I were running a modified hackathon during the show, the Oracle Social Network Developer Challenge. Without further ado, congratulations to Dimitri Gielis (@dgielis) and Martin Giffy D'Souza (@martindsouza) on their winning entry, an integration between Oracle APEX and Oracle Social Network that integrates feedback and bug submission with Oracle Social Network Conversations, allowing developers, end-users and project leaders to view and discuss the feedback on their APEX applications from within Oracle Social Network. Update: Bob Rhubart of OTN (@brhubart) interviewed Dimitri and Martin right after their big win. Money quote from Dimitri when asked what he'd buy with the $500 in Amazon gift cards, "Oracle Social Network." Nice one. In their own words: In the developers perspective it's important to get feedback soon, so after a first iteration and end-users start to test, they can give feedback of the application. Previously it stopped there, and it was up to the developer to communicate further with email, phone etc. With OSN every feedback and communication gets logged and other people can see the discussion immediately as well. For the end users perspective he can now communicate in a more efficient way to not only the developers, but also between themselves. Maybe many end-users (in different locations) would like to change some behaviour, by using OSN they can see the entry somebody put in with a screenshot and they can just start to chat about it. Some key technical end users can have lighten the tasks of the development team by looking at the feedback first and start to communicate with their peers. For the project manager he has now the ability to really see what communication has taken place in certain areas and can make decisions on that. Later, if things come up again, he can always go back in OSN and see what was said at that moment in time. Integrating OSN in the APEX applications enhances the user experience, makes the lives of the developers easier and gives a better overview to project managers. Incidentally, you may already know Dimitri and Martin, since both are Oracle ACE Directors. I ran into Martin at the ACE Director briefings Friday before the conference started, and at that point, he wasn't sure he'd have time to enter the Challenge. After some coaxing, he and Dimitri agreed to give it a go and banged out their entry on Tuesday night, or more accurately, very early Wednesday morning, the day of the Challenge judging. I think they said it took them about four hours of hardcore coding to get it done, very much like a traditional hackathon, which is essentially a code sprint from idea to finished product. Here are some screenshots of the workflow they built. I love this idea, i.e. closing the loop between web developers and users, a very common pain point, and so did our judges.
    Speaking of, special thanks to our panel of three judges:
    ·         Reggie Bradford (@reggiebradford), serial entrepreneur, founder of Vitrue and SVP of Cloud Product Development at Oracle
    ·         Robert Hipps (@roberthipps), VP of Development for Oracle Social Network and my former boss
    ·         Roland Smart (@rsmartx), VP of Social Marketing and the brains behind the Oracle Social Developer Community
    Finally, thanks to everyone who made this possible, including:
    ·         The three other teams, from HarQen (@harqen), TEAM Informatics (@teaminformatics) and Fishbowl Solutions (@fishbowle20) featuring Friend of the 'Lab John Sim (@jrsim_uix), who finished and presented entries. I'll be posting the details of their work this week.
    ·         The one guy who finished an entry but couldn't make the judging, Bex Huff (@bex). Bex rallied from a hospitalization due to an allergic reaction during the show; he's fine, don't worry. I'll post details of his work next week, too.
    ·         The 40-plus people who registered to compete in the Challenge.
    ·         Noel, for all his hard work, sample code, and flying monkey target, more on that to come.
    ·         The Oracle Social Network development team for supporting this event.
    ·         Everyone in legal and the beta program office for their help.
    ·         And finally, the Oracle Technology Network (@oracletechnet) for hosting the event and providing countless hours of operational and moral support.
    Sorry if I've missed some people, since this was a huge team effort. This event was a big success, and we plan to do similar events in the future. Stay tuned to this channel for more.

    Read the article

  • Mixed Emotions: Humans React to Natural Language Computer

    - by Applications User Experience
    There was a big event in Silicon Valley on Tuesday, November 15. Watson, the natural language computer developed at the IBM Watson Research Center in Yorktown Heights, New York, and its inventor and principal research investigator, David Ferrucci, were guests at the Computer History Museum in Mountain View, California for another round of the television game Jeopardy. You may have read about or watched on YouTube how Watson beat Ken Jennings and Brad Rutter, two top Jeopardy competitors, last February. This time, Watson swept the floor with two Silicon Valley high-achievers, one a venture capitalist with a background in math, computer engineering, and physics, and the other a technology and finance writer well-versed in all aspects of culture and humanities. Watson is the product of the DeepQA research project, which attempts to create an artificially intelligent computing system through advances in natural language processing (NLP), among other technologies. NLP is a computing strategy that seeks to provide answers by processing large amounts of unstructured data contained in multiple large domains of human knowledge. There are several ways to perform NLP, but one way to start is by recognizing key words, then processing contextual cues associated with the keyword concepts so that you get many more "smart" (that is, human-like) deductions, rather than a series of "dumb" matches. Jeopardy questions often require more than key word matching to get the correct answer; typically several pieces of information must be put together, often from vastly different categories, to come up with a satisfactory word string solution that can be rephrased as a question. Smarter than your average search engine, but is it as smart as a human? Watson was especially fast at descrambling mixed-up state capital names, and at recalling and pairing movie titles where one started and the other ended in the same word (e.g., Billion Dollar Baby Boom, where both titles used the word Baby). David said they had basically removed the variable of how fast Watson hit the buzzer compared to human contestants, but frustration frequently appeared on the faces of the contestants beaten to the punch by Watson. David explained that top Jeopardy winners like Jennings achieved their success with a similar strategy, timing their buzz to the end of the reading of the clue, and "running the board", being first to respond on about 60% of the clues. Watson produced similar results. It made sense that Watson would be good at the technical and scientific stuff, so I figured the venture capitalist was toast. But I thought for sure Watson would lose to the writer in categories such as pop culture, wines and foods, and other humanities. Surprisingly, it held its own. I was amazed it could recognize a word definition of a syllogism in the category of philosophy. So what was the audience reaction to all of this? We started out expecting our formidable human contestants to easily run some of their categories; however, they started off on the wrong foot with the state capitals, which Watson could unscramble so efficiently. By the end of the first round, contestants and the audience were feeling a little bit, well, ... deflated. Watson was winning by about $13,000, and the humans had gone into negative dollars. The IBM host said he was going to "slow Watson down a bit," and the humans came back with respectable scores in Double Jeopardy.
    This was partially thanks to a very sympathetic audience (and host, also a human) providing "group-think" on many questions, especially baseball's most valuable players, which, by the way, couldn't have been hard because even I knew them. Yes, that's right, the humans cheated. Since Watson could speak but not hear us (it didn't have speech recognition capability), it was probably unaware of this. In Final Jeopardy, the single question had to do with law. I was sure Watson would blow this one, but all contestants were able to answer correctly about a copyright law. In a career devoted to making computers more helpful to people, I think I may have seen how a computer can do too much. I'm not sure I'd want to work side by side with a Watson doing my job. Certainly listening and empathy are important traits we humans still have over Watson. While there was great enthusiasm in the packed room of computer scientists and their friends for this standing-room-only show, I think it made several of us uneasy (especially the poor human contestants whose egos were soundly bashed in the first round). This computer system, by the way, took only 4 years to program. David Ferrucci mentioned several practical uses for Watson, including medical diagnoses and legal strategies. Are you "the expert" in your job? Imagine NLP computing on an Oracle database. This may be the user interface of the future, enabling users to better process big data. How do you think you'd like it? Postscript: There were three little boys sitting in front of me in the very first row. They looked, how shall I say it, ... unimpressed!

    Read the article

  • Online Password Security Tactics

    - by BuckWoody
    Recently two more large databases were attacked and compromised, one at the popular Gawker Media sites and the other at McDonald's. Every time this kind of thing happens (which is FAR too often) it should remind the technical professional to ensure that they secure their systems correctly. If you write software that stores passwords, the passwords should be heavily encrypted, and not human-readable in any storage (a sketch of one way to do this appears at the end of this post). I advocate a different store for the login and password, so that if one is compromised, the other is not. I also advocate that you set a bit flag when a user changes their password, and send out a reminder to change passwords if that bit isn't changed every three or six months.
    But this post is about the *other* side: what to do to secure your own passwords, especially those you use online, either in a cloud service or at a provider. While you're not in control of these breaches, there are some things you can do to help protect yourself. Most of these are obvious, but they contain a few little twists that make the process easier.
    Use Complex Passwords
    This is easily stated, and probably one of the most unheeded pieces of advice. There are three main concepts here:
    ·         Don't use a dictionary-based word
    ·         Use mixed case
    ·         Use punctuation, special characters and so on
    So this:
    password
    isn't nearly as safe as this:
    P@ssw03d
    Of course, this only helps if the site that stores your password encrypts it. Gawker does, so theoretically if you had the second password you're in better shape, at least, than with the first. Dictionary words are quickly broken, regardless of the encryption, so the more unusual characters you use, and the farther away from dictionary words you get, the better.
    Of course, this doesn't help, not even a little, if the site stores the passwords in clear text, or the key to their encryption is broken. In that case...
    Use a Different Password at Every Site
    What? I have hundreds of sites! Are you kidding me? Nope, I'm not. If you use the same password at every site, when a site gets attacked, the attacker will store your name and password value for attacks at other sites. So the only safe thing to do is to use different names or passwords (or both) at each site. Of course, most sites use your e-mail as a username, so you're kind of hosed there. So even though you have hundreds of sites you visit, you need at least a different password at each site.
    But it's easier than you think, if you use an algorithm. What I'm describing is to pick a "root" password, and then modify that based on the site or purpose. That way, if a site is compromised, you can still use that root password for the other sites. Let's take that second password:
    P@ssw03d
    Now you can append, prepend or intersperse that password with other characters to make it unique to the site. That way you can easily remember the root password, but make it unique to the site. For instance, perhaps you read a lot of information on Gawker; how about these:
    P@ssw03dRead
    ReadP@ssw03d
    PR@esasdw03d
    If you have lots of sites, tracking even this can be difficult, so I recommend you use password software such as Password Safe or some other tool to keep a secure database of your passwords for each site. DO NOT store this on the web. DO NOT use an Office document (Microsoft or otherwise) that is "encrypted" - the encryption office automation packages use is very trivial, and easily broken. A quick web search for tools to do that should show you how bad a choice this is.
    Change Your Password on a Schedule
    I know. It's a real pain. And it doesn't seem worth it... until your account gets hacked. A quick note here: whenever a site gets hacked (and I find out about it) I change the password at that site immediately (or quit doing business with them), and then change the root password on every site, as quickly as I can.
    If you follow the tip above, it's not as hard. Just add another number, year, month, day, something like that into the mix. It's not unlike making a Primary Key in an RDBMS.
    P@ssw03dRead10242010
    Change the site, and then update your password database. I do this about once a month, on the first or last day, during staff meetings. :)
    If you have other tips, post them here. We can all learn from each other on this.
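    Circling back to the server-side advice at the top of this post, here is a minimal sketch of storing only a salted hash of a password, never the password itself. It assumes SQL Server 2012 or later (HASHBYTES gained SHA2 support in 2012; CRYPT_GEN_RANDOM appeared in 2008), and the table, column and user names are hypothetical:

    -- Hypothetical login table: a per-user random salt plus a salted hash
    CREATE TABLE dbo.UserLogin (
        LoginName    nvarchar(128) NOT NULL PRIMARY KEY,
        PasswordSalt varbinary(32) NOT NULL,
        PasswordHash varbinary(32) NOT NULL  -- SHA2_256 output is 32 bytes
    )
    GO
    DECLARE @password nvarchar(128) = N'P@ssw03d'        -- supplied by the user
    DECLARE @salt varbinary(32) = CRYPT_GEN_RANDOM(32)   -- random per-user salt
    INSERT INTO dbo.UserLogin (LoginName, PasswordSalt, PasswordHash)
    VALUES (N'someuser',
            @salt,
            HASHBYTES('SHA2_256', @salt + CAST(@password AS varbinary(256))))

    To verify a login, recompute HASHBYTES over the stored salt plus the supplied password and compare it to the stored hash; the clear-text password never needs to be written anywhere.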

    Read the article

  • Vacations on Rodrigues 2014

    And now for something completely different compared to the usual technical or community related articles here on this blog. Yes, this time I'm writing some lines on my (and my family's) activities during our long weekend stay on Rodrigues. So, please bear with me, it's perhaps a bit more personal... Grab a soda, some popcorn and a cosy place to continue to read.
    Special promotions during school holidays
    Originally, our children started to ask more frequently about going on the plane again. Obviously, after their aunty from Germany was around during May, they were really eager to travel again. So, we decided that it might be a great opportunity to book some vacations during their school holidays. And just in time the local hotels and hotel groups started to advertise their special promotions for citizens and residents. After collecting multiple brochures over several days, we were attracted by various hotel packages on Rodrigues; most interestingly, the expenses for the stay and flight ticket were less compared to other resorts here on the main island. As we had been to Rodrigues already back in 2008, we followed up on this idea and got in touch with a couple of travel agencies. Well, I have to report that you should be really careful about the promotions from some of them. We had a very negative experience with Shamal Travel Agency in Quatre Bornes regarding their adverts and the actual price levels and age definition for children. Please, stay away from them if you are interested in transparent costs and services. Anyway, after some arrangements with two other close families we managed to confirm our stay at the Cotton Bay Hotel in Rodrigues. Given that we had already stayed there, that the hotel has been renovated recently, and that it is under new management, everything looked very promising and relaxed for our vacation.
    Counting the days...
    As we had already booked in July, our children were counting down the days. And it got more interesting as soon as they were finally on school holidays. Well, the day arrived and waking them up at 2:30 hrs wasn't a problem after all. Quite the opposite; it was fascinating for us parents to watch them waiting for the transport and later on during the airport transfer. Despite the early hours neither of them fell asleep, and it was all so exciting. We are taking the plane!
    Well organised by the Cotton Bay Hotel
    Honestly, it was a breeze and a smooth ride during our stay at the hotel. From the airport transfer, the cleanliness of our bungalow and the organisation of our day trips to the SPA: all very well done and enjoyable. The children had great fun, and although it was a bit too windy to plunge into the pool they had a lot of fun with other activities on the beach and at the Kid's Club. Oh, and we had our private petting zoo with cows, sheep and goats just close to the terrace. Some of us went to check out the SPA facilities, and I have to admit that the services regarding Hammam and Sauna are better than at some other hotels in Mauritius. I don't know after how many months or years I was once again enjoying a very hot sauna. A little drawback, but nothing to worry about: there is no cold water or at least ice cubes to cool down the body, but hey, there was a nice breeze coming over the hills.
    Some day trips to mention
    Based on a friend's recommendation we walked to a "restaurant" called Chez Solange & Robert. Hahaha, "restaurant" is stretching it in this case, as we enjoyed a great BBQ with fresh lobster, whole fish, and pieces of chicken breast in an open cottage; just a wooden structure covered with dried palm leaves on the roof. Island feeling pure! The other day we went to the Giant Tortoise & Cave Reserve Francois Leguat to observe the giant Aldabra tortoises and to visit the Grande Caverne, the biggest limestone cave on the island. Compared to our last visit this was a novelty, after checking out the Caverne Patate back then. The formations of stalactites and stalagmites are very impressive and imaginative. Our guide had lots of funny terms, and despite the low light conditions the kids had a great time wandering around on the narrow wooden paths and stairs. And last but not least, we decided to check out the Tyrodrig zip lines... Everyone was allowed to join the trip through the air, and our little ones stayed close to our field guides, but finally went on their own on the very last traversal. Puuuh, it was astonishing to glide over the valley, and for sure something to repeat next time.
    Impressions of our vacation on Rodrigues 2014
    Next stay has been discussed already
    Oh yes, Rodrigues baby! We are going to come again! Tentative dates have been discussed already, and now it's up to us to earn enough for our next holiday on that wonderful remote piece of paradise. Perhaps a little bit longer than this time. We'll see...

    Read the article

  • 7 Reasons for Abandonment in eCommerce and the need for Contextual Support by JP Saunders

    - by Tuula Fai
    Shopper confidence, or more accurately the lack thereof, is the bane of the online retailer. There are a number of questions that influence whether a shopper completes a transaction, and all of those attributes revolve around knowledge. What products are available? What products are on offer? What would be the cost of the transaction? What are my options for delivery? In general, most online businesses do a good job of answering basic questions around the products as the shopper engages in the online journey, navigating the product catalog and working through the checkout process. The needs that are harder to address for the shopper are those that are less concerned with product specifics and more concerned with deciding whether the transaction meets their needs and delivers value. A recent study by the Baymard Institute [1] finds that more than 60% of ecommerce site visitors will abandon their shopping cart. The study also identifies seven reasons for abandonment of the commerce process [2]. Most of those reasons come down to poor usability within the commerce experience.
    1. Distractions. External distractions within the shopper's environment (TV, children, pets, etc.) or distractions on the eCommerce page can drive shopper abandonment. Ideally, the selection and check-out process should be straightforward. One common distraction is to drive the shopper away from the task at hand through pop-ups or re-directs. The shopper engaging with support information in the checkout process should not be directed away from the page to consume support. Though confidence may improve, the distraction also means abandonment may increase.
    2. Poor Usability. When the experience gets more complicated, buyer's remorse can set in. While knowledge drives confidence, a lack of understanding erodes it. Therefore it is important that the commerce process is streamlined. In some cases, the number of clicks to complete a purchase is lengthy and unavoidable. In these situations, it is vital to ensure that the complexity of your experience can be explained with contextual support to avoid abandonment. If you can illustrate the solution to a complex action while the user is engaged in that action, and address customer frustrations with your checkout process before they arise, you can decrease abandonment.
    3. Fraud. The perception of potential fraud can be enough to deter a buyer. Does your site look credible? Can shoppers trust your brand? Providing answers on the security of your experience and the levels of protection applied to profile information may play as big a role in ensuring the sale as does the support you provide on the product offerings and purchasing process.
    4. Does it fit? If it is a clothing item or an oversized furniture item, another common form of abandonment is for the shopper to question whether the item can be worn by the intended user. Providing information on the sizing applied to clothing, physical dimensions, and limitations on delivery/returns of oversized items will also assist the sale. A photo alone of the item will help, as it answers some of those questions, but won't assuage all customer concerns about sizing and fit.
    5. Sometimes the customer doesn't want to buy. Prospective buyers might be browsing through your catalog to kill time, or just might not have the money to purchase the item! You are unlikely to provide any information in contextual support that increases the likelihood to buy if the shopper already has no intention of doing so. The customer will still likely abandon. Ensuring that any questions are proactively answered as they browse through your site can only increase their likelihood to return and buy at a future date.
    6. Can't Buy. Errors or complexity at checkout can be another major cause of abandonment. Good contextual support is unlikely to help with severe errors caused by technical issues on your site, but it will have a big impact on customers struggling with complexity in the checkout process and needing a question answered prior to completing the sale. Embedded support within the checkout process to patiently explain how to complete a task will help increase conversion rates.
    7. Additional Costs. Tax, shipping and other costs or duties can dramatically increase the cost of the purchase and, when unexpected, can increase abandonment, particularly if they can't be adequately explained. Again, a lack of knowledge erodes confidence in the purchase, and cost concerns in particular erode the perception of your brand's trustworthiness. Providing information on what costs are additive and why they are being levied can decrease the likelihood that the customer will abandon the experience.
    Knowledge drives confidence and confidence drives conversion. If you'd like to understand best practices in providing contextual customer support in eCommerce to provide your shoppers with confidence, download the Oracle Cloud Service and Oracle Commerce - Contextual Support in Commerce White Paper. This white paper discusses the process of adding customer support, including a suggested process for finding where knowledge has the most influence on your shoppers and practical step-by-step illustrations on how contextual self-service can be added to your online commerce experience.
    Resources:
    [1] http://baymard.com/checkout-usability
    [2] http://baymard.com/blog/cart-abandonment

    Read the article

  • Partner Webcast - Innovations in Products Program

    - by Richard Lefebvre
    We are pleased to invite you to join the Innovations in Products webcast. Innovations in Products presents Oracle Applications products' new functions and features, including sales positioning. The key objective of these webcasts is to inspire system integrators' implementation personnel to conduct successful after-sales work in their customer projects. Innovations in Products is presented on the 1st Monday of each quarter after the billable day (4:00 to 5:00 PM CET). The webcast is intended for system integrators' Implementation Certified Specialists, but Innovations in Products is open to other interested Oracle Applications system integrator personnel as well. First, two Oracle representatives will discuss Oracle's contribution to partners. Then you will see a product breakout session, followed by Q&A with Oracle experts. Each session will last a maximum of 1 hour. A Q&A document covering all questions and answers will be made available after the webcast.
    What are the benefits for partners?
    ·         Find out how Innovations in Products helps you to improve your after-sales
    ·         Discover new functions and features so you can enrich your customers' solutions
    ·         Learn more about Oracle Applications products, especially sales positioning
    ·         Hear crucial questions raised by colleagues, and learn from their interest
    ·         Engage and present your questions to subject experts
    ·         Be inspired by the richness of the Oracle Applications portfolio, for your and your customer's benefit
    Note: should you already be familiar with a specific product, choose another one; doing so will expand your knowledge of the overall Applications portfolio. Some presentations contain product demonstrations, although these presentations are not intended to be extremely detailed technical presentations. Note: in the latter part of this email you also have 17 links to the recent Applications product presentations and 6 links to the Public Sector Value Proposition presentations that were presented in the Innovations in Industries program.
    Product breakout sessions (topic, speaker, registration):
    ·         Fusion Applications Technology and Extensibility: A next-generation platform that adapts to client needs. Matthew Johnson, Sr. Director, SCM Product Development, EMEA. CLICK HERE
    ·         Fusion Applications - Transforming your Back-Office Accounting Function: Changing how people work in back office functions to drive value add. Liam Nolan, Director, ERP Product Development, EMEA. CLICK HERE
    ·         Fusion HCM & Talent Overview & Extensibility: A more in-depth look into a personalized HCM solution. Synco Jonkeren, Vice-President HCM Product Development & Management, EMEA. CLICK HERE
    ·         Fusion HCM Compensation Planning: Compensate To Compete. Rosie Warner, Director, HCM Sales Development. CLICK HERE
    ·         Enterprise PLM for the Product Value Chain: Oracle Enterprise PLM offers industry-specific solutions that cover the Product Value Chain. Ulf Köster, Sales Development Leader Enterprise PLM, Oracle Western Europe. CLICK HERE
    ·         Oracle's Asset Management and Maintenance Solution: What you need to know to successfully implement Oracle Asset Management solutions within Oracle Installed Base. Philip Carey, Asset Management and Maintenance Solution Specialist. CLICK HERE
    For more details please visit Innovations in Products and other breakout sessions on the OPN page.
    Delivery Format
    Innovations in Products is a series of FREE prerecorded Applications product presentations followed by Q&A. It will be delivered over the Web.
    Participants have the opportunity to submit questions during the webcast via chat, and subject matter experts will provide verbal answers live. Innovations in Products consists of several parallel prerecorded product breakout sessions, each lasting a maximum of 1 hour. First, two Oracle representatives will discuss Oracle's contribution to partners. Then you'll see the product breakout sessions, followed by Q&A with Oracle experts. A Q&A document covering all questions and answers will be made available after the webcast. You can also watch Innovations in Products afterwards, as its content will be available online for the next 6-12 months.
    The next Innovations in Products webcasts will be presented as follows:
    ·         July 2nd 2012
    ·         October 1st 2012
    ·         January 14th 2013
    ·         April 8th 2013
    Note: depending on local network bandwidth, please allow a few seconds for the presentations to download. You might want to refresh your screen by pressing F5.
    Duration: maximum 1 hour
    For further information please contact me, Markku Rouhiainen.
    Recent Innovations in Products presentations
    Applications products presented on April the 2nd, 2012 (topic, speaker, registration):
    ·         Fusion CRM: Effective, Efficient and Easy. James Penfold, Senior Director, Applications Product Development and Product Management. CLICK HERE
    ·         Fusion HCM: Talent management overview: performance, goals, talent review. Jaime Losantos Viñolas, Director, HCM Sales Development. CLICK HERE
    ·         Distributed Order Management - Fusion SCM Solution. Vikram K Singla, Business Development Director, Supply Chain Management Applications, UK. CLICK HERE
    ·         Oracle Transportation Management. Dominic Regan, Senior Director Oracle Transportation Management EMEA. CLICK HERE
    ·         Oracle Value Chain Planning: Demantra Sales & Operation Planning and Demantra Demand Management. Lionel Albert, Senior Director Value Chain Planning, EMEA. CLICK HERE
    ·         Oracle CX (Customer Experience) - formerly CEM: Powering Great Customer Experiences. Maria Ramirez, CRM Presales Consultant, EPC. CLICK HERE
    ·         EPM 11.1.2.2 Overview. Nicholas Cox, EMEA Sales Development Director - Enterprise Performance Management. CLICK HERE
    ·         Oracle Hyperion Profitability and Cost Management, 11.1.2.1. Daniela Lazar, Senior EPM Sales Consultant, EPC. CLICK HERE
    Presented on January the 16th, 2012 (topic, speaker, registration):
    ·         CRM / ATG: Best-in-Class CRM & Commerce. Maria Ramirez, Associate CRM Presales Consultant, EPC. CLICK HERE
    ·         CRM / Automate Business Rules for Maximum Efficiency with OPA (Oracle Policy Automation). Marco Nilo, Associate CRM Presales Consultant, EPC. CLICK HERE
    ·         CRM / InQuira. Toby Baker, Principal Sales Consultant, CRM Product Specialist Team. CLICK HERE
    ·         EPM / Business Intelligence Foundation Suite – Sales and Product Updates. Liviu Nitescu, Senior BI Sales Consultant, EPC. CLICK HERE
    ·         EPM / Hyperion Planning 11.1.2.1 - Sales & Product Updates. Andreea Voinea, EPM Sales Consultant, EPC. CLICK HERE
    ·         ERP / JDE EnterpriseOne Fulfillment Management Overview. Mirela Andreea Nasta, ERP Presales Consultant, EPC. CLICK HERE
    ·         ERP / Spotlights on iExpenses. Elena Nita, ERP Presales Consultant, EPC. CLICK HERE
    ·         MDM / Master Data Management. Martin Boyd, Senior Director Product Strategy. CLICK HERE
    ·         Product breakout session: Fusion Applications Human Capital Management. Rosie Warner, Director, HCM Sales Development. CLICK HERE
    Recent Innovations in Industries Value Proposition presentations, January the 16th, 2012 (topic, speaker, registration):
    ·         Process Modernisation. Iemke Idsingh, Public Sector Solutions Director. CLICK HERE
    ·         Shared Services. Ann Smith, Business Development Director, Shared Services. CLICK HERE
    ·         Strengthening Financial Discipline Whilst Delivering Cashable Savings. Philippa Headley, UK Sales Development Director, Public Sector - EPM Solutions. CLICK HERE
    ·         Social Welfare Industry Solutions. Christian Wernberg-Tougaard, Industry Director - Social Welfare. CLICK HERE
    ·         Police Industry Solutions. Jeff Penrose, Solution Sales Director. CLICK HERE
    ·         Tax and Revenue Management Industry Solutions. Andre van der Post, Global Director - Tax Solutions and Strategy. CLICK HERE

    Read the article

  • IBM "per core" comparisons for SPECjEnterprise2010

    - by jhenning
    I recently stumbled upon a blog entry from Roman Kharkovski (an IBM employee) comparing some SPECjEnterprise2010 results for IBM vs. Oracle. Mr. Kharkovski's blog claims that SPARC delivers half the transactions per core vs. POWER7. Prior to any argument, I should say that my predisposition is to like Mr. Kharkovski, because he says that his blog is intended to be factual; that the intent is to try to avoid marketing hype and FUD tactics; and mostly because he features a picture of himself wearing a bike helmet (me too). Therefore, in a spirit of technical argument, rather than FUD fight, there are a few areas in his comparison that should be discussed.
    Scaling is not free
    For any benchmark, if a small system scores 13k using quantity R1 of some resource, and a big system scores 57k using quantity R2 of that resource, then, sure, it's tempting to divide: is 13k/R1 > 57k/R2? It is tempting, but not necessarily educational. The problem is that scaling is not free. Building big systems is harder than building small systems. Scoring 13k/R1 on a little system provides no guarantee whatsoever that one can sustain that ratio when attempting to handle more than 4 times as many users.
    Choosing the denominator radically changes the picture
    When ratios are used, one can vastly manipulate appearances by the choice of denominator. In this case, lots of choices are available for the resource to be compared (R1 and R2 above). IBM chooses to put cores in the denominator. Mr. Kharkovski provides some reasons for that choice in his blog entry. And yet, it should be noted that the very concept of a core is:
    ·         arbitrary: not necessarily comparable across vendors;
    ·         fluid: modern chips shift chip resources in response to load; and
    ·         invisible: unless you have a microscope, you can't see it.
    By contrast, one can actually see processor chips with the naked eye, and they are a bit easier to count. If we put chips in the denominator instead of cores, we get 13161.07 EjOPS / 4 chips = 3290 EjOPS per chip for IBM vs 57422.17 EjOPS / 16 chips = 3588 EjOPS per chip for Oracle. The choice of denominator makes all the difference in the appearance. Speaking for myself, dividing by chips just seems to make more sense, because:
    ·         I can see chips and count them;
    ·         I can accurately compare the number of chips in my system to the count in some other vendor's system; and
    ·         the probability of being able to continue to accurately count them over the next 10 years of microprocessor development seems higher than the probability of being able to accurately and comparably count "cores".
    SPEC Fair Use requirements
    Speaking as an individual, not speaking for SPEC and not speaking for my employer, I wonder whether Mr. Kharkovski's blog article, taken as a whole, meets the requirements of the SPEC Fair Use rule, www.spec.org/fairuse.html section I.D.2. For example, Mr. Kharkovski's footnote (1) begins: Results from http://www.spec.org as of 04/04/2013 Oracle SUN SPARC T5-8 449 EjOPS/core SPECjEnterprise2010 (Oracle's WLS best SPECjEnterprise2010 EjOPS/core result on SPARC). IBM Power730 823 EjOPS/core (World Record SPECjEnterprise2010 EJOPS/core result). The questionable tactic, from a Fair Use point of view, is that there is no such metric at the designated location. At www.spec.org, you can find the SPEC metric 57422.17 SPECjEnterprise2010 EjOPS for Oracle, and you can also find the SPEC metric 13161.07 SPECjEnterprise2010 EjOPS for IBM.
    Despite the implication of the footnote, you will not find any mention of 449 nor anything that says 823. SPEC says that you can, under its Fair Use rule, derive your own values; but it emphasizes: "The context must not give the appearance that SPEC has created or endorsed the derived value."
    Substantiation and transparency
    Although SPEC disclaims responsibility for non-SPEC information (section I.E), it says that non-SPEC data and methods should be accurate, should be explained, and should be substantiated. Unfortunately, it is difficult or impossible for the reader to independently verify the pricing: Were like units compared to like (e.g. list price to list price)? Were all components (hw, sw, support) included? Were all fees included? Note that when tpc.org shows IBM pricing, there are often items such as "PROCESSOR ACTIVATION" and "MEMORY ACTIVATION". Without the transparency of a detailed breakdown, the pricing claims are questionable.
    T5 claim for "Fastest Processor"
    Mr. Kharkovski several times questions Oracle's claim for fastest processor, writing: "You see, when you publish industry benchmarks, people may actually compare your results to other vendor's results." Well, as we performance people always say, "it depends". If you believe in performance-per-core as the primary way of looking at the world, then yes, the POWER7+ is impressive, spending its chip resources to support up to 32 threads (8 cores x 4 threads). Or, it just might be useful to consider performance-per-chip. Each SPARC T5 chip allows 128 hardware threads to be simultaneously executing (16 cores x 8 threads). The industry standard benchmark that focuses specifically on processor chip performance is SPEC CPU2006. For this very well known and popular benchmark, SPARC T5 provides better performance than both POWER7 and POWER7+: for 1 chip vs. 1 chip, for 8 chips vs. 8 chips, for integer (SPECint_rate2006) and floating point (SPECfp_rate2006), for Peak tuning and for Base tuning. For example, at the 8-chip level, integer throughput (SPECint_rate2006) is 3750 for SPARC vs. 2170 for POWER7+. You can find the details at the March 2013 BestPerf CPU2006 page. SPEC is a trademark of the Standard Performance Evaluation Corporation, www.spec.org. The two specific results quoted for SPECjEnterprise2010 are posted at the URLs linked from the discussion. Results for SPEC CPU2006 were verified at spec.org 1 July 2013, and can be rechecked here.

    Read the article

  • Microsoft TechEd 2010 - Day 2 @ Bangalore

    - by sathya
    Microsoft TechEd 2010 - Day 2 @ Bangalore
    Today was day 2 @ Microsoft TechEd 2010. We had a lot of technical sessions; as usual there were many tracks going on side by side, and I was attending the Web Simplified track, which comprised the following sessions:
    ·         Developing a Scalable Media Application using ASP.NET MVC - This was somewhat advanced stuff. I couldn't understand much because this is not my cup of tea and I haven't worked on it before.
    ·         ASP.NET MVC Unplugged - This was really great, because the session covered MVC from the basics, showing what Model, View and Controller are and how they work, and the speaker went into the details of the same.
    ·         Building RESTful Applications with the Open Data Protocol - Some concepts were explained from the basics of how to build RESTful services, going on to some advanced configurations of the same.
    ·         Developing Scalable Web Applications with AppFabric Caching - This session showed the integration of AppFabric with .NET web applications. Instead of using in-proc sessions, we can use AppFabric as a substitute for caching and out-of-proc session storage without writing code, with just a little configuration, which brings high scalability and performance to our applications. (But unfortunately there were no demos for this session.)
    ·         Deep Dive: WCF RIA Services - This session was also an interactive one; the speaker presented from the basics of WCF, took a Book Store application as a sample and explained all the detailed concepts of linking with RIA Services.
    Apart from these sessions, in the breaks there were some small events like discussions about technology and innovations, music, jokes, mimicry, etc. And on doing all these things, the developers were given some cool gifts/goodies like USB drives, T-shirts, etc.
    Today I also got a chance to do the following certification: (70-562) Microsoft Certified Technology Specialist in .NET 3.5 Web Applications. Since I already have an MCTS in .NET 2.0, I wanted to do an MCPD, and for doing the same I was required to update my MCTS to the .NET 3.5 framework. I did so, cleared it, and am now an MCTS in .NET 3.5 Web Apps. On doing this I got a T-shirt, and they gave something called Learning $, worth 30$. In various stalls, for attending each quiz or game or for referrals, we got some more Learning $ which we could redeem later based on our total Learning $. I got 105$, which I was able to redeem for a Microsoft Learning backpack, 1 free Microsoft certification offer, a laptop light and activated e-learning content.
    And after all these sessions and small events, we had something called Demo Extravaganza, like I mentioned yesterday. This was a great fun-filled event with lots of goodies for the attendees. There were some lucky draws which enabled 2 attendees to get netbooks (sponsored by Intel) and 1 attendee to get an Xbox (sponsored by Citrix). After choosing the raffle in the lucky draw, they kept it on a device called Microsoft Surface, which is a kind of big touch-screen device; on putting the raffle on it, it detected the code of the attendee and said, intelligently, how many sessions that person had attended, and if he had attended more than 5 he got a netbook. This was coded by a guy called Imran.
    Apart from that, they showed demos on:
    ·         Research by 2 Tamil Nadu students from Krishna Arts and Science College, who took 1200 photographs of their college from different angles and put them up in Bing Maps using Silverlight, linked with Photosynth, showing a 3D view of their college based on the photos they uploaded.
    ·         Research by Microsoft on panoramic HD views of images. A young guy from Microsoft Research showed a demo of this on Srivilliputhur Andal Temple in Tamil Nadu and its history, with a panoramic view of the temple and the nearby places, narration of the historical information on the same, embedded videos, and high-definition images which we can zoom to a very detailed level.
    ·         A demo of a business app with Silverlight, Business Intelligence (BI) and maps integrated. It showed the sales of a particular product across locations.
    ·         Some cool demos by 2 geeks who used robots to show their development talents: 2 robots fought with each other, and 2 robots danced in sync to the A.R. Rehman song Humma Humma...
    ·         A dream home project by Raman, who is currently using the same in his home too; robots are controlling his home. They showed a video on this. Here is the list of activities that the robot does for him:
              - When he reads a book, the robot automatically scans it and shows the image of that person on the screen (TV or computer) in front of him. It shows a Wikipedia page about that person, says that the person is not on LinkedIn and asks, do you want to add him?
              - If he sees IPL match news in the book and smiles, it understands he is interested in that, opens a related website and shows the current game and the scorecard.
              - It cooks for him.
              - It cleans the room for him whenever he leaves the house.
              - When he is doing something and an intruder comes inside his house, his computer automatically switches his screen to show the video of the person coming inside.
              - When he wakes up, it automatically opens up the system, loads his mails with the news by the side, etc.
    ·         Some demos on Microsoft Pivot. This was there in Live Labs but is now available at getpivot.com; it is a pivoting of pictorial data based on some categories and filters on the searches that we do.
    And finally, on filling up some feedback forms, we got T-shirts and Microsoft Visual Studio 2010 Training Kit CDs. What's more at TechEd??? Stay tuned!!! Will update you soon on the other happenings!!
    PS: I typed a lot of content for more than an hour, but I pressed backspace, it went to the previous page, and all my content was lost; I was not able to retrieve it and typed everything again.

    Read the article

  • Open World Day 3

    - by Antony Reynolds
    A Day in the Life of an Oracle OpenWorld Attendee, Part IV
    My third day was exhibition day for me! I took the opportunity to wander around the JavaOne and OpenWorld exhibitions to see what might be useful for me when selling WebLogic, Coherence & SOA Suite. I found a number of interesting vendors and thought I would share what I found here. These are not necessarily endorsements, but observations on companies that I thought had interesting looking products that fill a need I have seen at customers.
    Highly Available EBS Upgrades
    A few years ago I worked with a customer that was a port authority. They wanted to tie E-Business Suite into their operations to provide faster processing of cargo and passengers. However they only had a 2 hour downtime window to perform upgrades. This was not a problem for core database and middleware technology; these could accommodate those upgrade timescales easily. It was a problem for EBS, however, so I was intrigued to find Rapid E-Suite Inc offering an 11i to 12i upgrade service that claims to require no outage. This could be a real boon to EBS customers like my port friends that need to upgrade without disruption to their business.
    Mobile on WebLogic
    I have come across a number of customers who want a comprehensive mobile solution, connected and disconnected operation and so forth. ADF only addresses part of these requirements currently, so I was excited to discover mFrontiers Inc offering an apparently comprehensive solution that should integrate easily with Oracle SOA Suite to mobile-enable a SOA infrastructure. The ability to operate without a network is important for many applications, particularly in industries that require their engineers to enter buildings to perform maintenance or repairs, because network access is not always available; many of my colleagues don't have mobile access from their homes because they live in the middle of nowhere, and disconnected support is crucial in these situations.
    Sharepoint Connector for WebCenter Content
    Obviously Sharepoint is an evil pernicious intrusion into a company's IT estate, but it is widely deployed, many people like it, and some would also like to take advantage of Oracle products such as WebCenter Content. So I was encouraged to see that Fishbowl Solutions have created a connector for Sharepoint that allows it to bring in content from WebCenter. It looks like a valuable way to maintain the Sharepoint interface end users are used to but extend the range of content by pulling stuff (technical term for content) from WebCenter.
    Load Balancing
    The Enterprise Deployment Guides are Oracle's bible on building highly available FMW environments, and each of them requires a front-end load balancer. I have been asked to help configure F5 load balancers on a number of occasions over my time at Oracle, and each time I come back to it I find more useful features have been added to the BigIP line of load balancers that F5 sell; many of their documents are tailored to FMW. I like F5: they provide (relatively) easy to use products that do what they say on the side of the box. They may not have all the bells and whistles of some of their more expensive competitors, but they do the job and do it well! Besides which, I like their logo!
    Other Stuff
    I saw lots of other interesting products and services, such as a lightweight monitoring tool for Coherence, Forms migration services, JCAPS migration services and lots of cool freebies to take home to the children!
    A Quiet Night
    Wednesday night was the partner appreciation event, and I had decided to go back to the hotel and have an early night after attending the last session of the day: a Maven/Hudson/WebLogic tutorial. I got the wrong hotel for the session, snuck in 20 minutes late at the back and started working on the hands-on workshop. One of my co-attendees raised his hand for help, and as the presenter came over to help he suddenly stopped and yelled: "Is that Antony?!" It was my old friend Steve Button, who used to be based in Redwood Shores but is now a WebLogic guru PM in Australia. It was good to catch up with him. As he yelled out, a guy with really bad posture turned around to see who he was talking to; this turned out to be my friend Simon Haslam, Oracle ACE from the UK. After the tutorial Simon and I retired to the coffee shop to catch up and share stories. Two and a half hours later we decided it was time to retire; so much for an early night, but it was great to renew old friendships and find out what real customers are worrying about.

    Read the article

  • IndyTechFest Recap

    - by Johnm
    The sun had yet to rise above the horizon on Saturday, May 22nd, and I was traveling toward the location of the 2010 IndyTechFest. In my freshly awakened, pre-coffee state, I reflected on the months that preceded this day and how quickly they slipped away. The big day had finally come, and the morning dew glistened with a unique brightness.
What is this all about?
For those who are unfamiliar with IndyTechFest, it is a regional conference held in Indianapolis and hosted by the Indianapolis .NET Developers Association (IndyNDA) and the Indianapolis Professional Association for SQL Server (IndyPASS). The event presents multiple tracks and sessions covering subjects such as Business Intelligence, Database Administration, .NET Development, SharePoint Development and Windows Mobile Development, as well as non-Microsoft topics such as Lean and MongoDB. This year's event was the third hosting of IndyTechFest.
No man is an island
No event such as IndyTechFest is executed by a single person. My fellow co-founders, with their highly complementary skill sets and philanthropy, made the process very enjoyable. Our amazing volunteers and their aid were indispensable. The generous financial support of our sponsors made the event and its fabulous prizes possible. The spectacular lineup of speakers came from near and far to donate their time and knowledge. Our beloved attendees sacrificed the first sunny Saturday in weeks to expand their skill sets and network with their peers. We are deeply appreciative.
Challenges in preparation
With the preparation of any event come challenges. It is these challenges that make the process of planning an event so interesting. This year's largest challenge was the location of the event. In the past two years IndyTechFest was held at the Gene B. Glick Junior Achievement Center in Indianapolis. This facility has been the hub of the Indy technical community for many years. As the big day drew near, the facility's availability came into question due to recent changes among those who operated the facility. We began our search for an alternative option. Thankfully, the Marriott Indianapolis East was available, very spacious, and willing to work within the range of our budget. Within days of our event, the decision to move proved to be wise, since the prior location had begun renovations to the interior. Whew! Always trust your gut.
Every day it's getting better
At the end of each year, we huddle together, review the evaluations, and identify an area in which the event could improve. This year's big opportunity for improvement resided in the prize give-away portion at the end of the day. In the 2008 event, admittedly, this portion was rather chaotic, rushed and disorganized. This year, we broke the drawing into two sections, and each attendee received two tickets. The first ticket was a drawing for the mountain of books that were given away. The second ticket was a drawing for the big prizes: the 2 Xboxes, 3 laptops and iPad. We peppered the ticket drawings with gift card raffles and t-shirt tosses into the audience.
If at first you don't succeed, try and try again
Each year of IndyTechFest, we have offered a means for ad-hoc sessions or discussion groups to pop up. To our disappointment, it was something that never quite took off. We have always believed that this unique type of session is valuable and wanted to figure out a way to make it work this year.
A special thanks to Alan Stevens, who took on and facilitated the "open space" track and made it an official success.
Share with your tweety
When the attendee badges were designed, we decided to place an emphasis on the attendee's Twitter account as well as the event's hash-tag (#IndyTechFest) to encourage some real-time buzz during the day. At the host table we displayed a Twitter feed for all to enjoy. It was quite successful and encouraged the use of social media. My badge was missing my Twitter account since it had recently changed. For those who care to follow my rather sparse tweets, my address is @johnnydata.
Man, this is one long blog post!
All in all, it was a very successful event. It is always great to see new faces and meet old friends. The planning for the 2011 IndyTechFest will kick off very soon. We have more capacity for future growth and a truck full of great ideas. Stay tuned!

    Read the article

  • The first day of JavaOne is already over!

    - by delabassee
    In the past, Sunday used to be a more relaxing day, with 'just' some JavaOne activities going on; it was a soft day to prepare yourself for an exhausting week. This is now over: as JavaOne expands, Sunday is an integral part of the conference. One side effect of this extra day is that some activities related to JavaOne and OpenWorld, such as MySQL Connect, are being pushed to start a day earlier, on Saturday (can you spot the pattern here?).
On the GlassFish front, Sunday was a very busy day! It started at the Moscone Center with the annual GlassFish Community Event, where the Java EE 7 and GlassFish 4 roadmaps were presented and discussed. During the event, different GlassFish users such as ZeroTurnaround (the JRebel guys), Grupo RBS and IDR Solutions shared their views on GlassFish: why they like it, but also what could be improved. The event was also a forum for the GlassFish community to exchange ideas with some of the key Java EE / GlassFish Oracle executives and the different GlassFish team members.
The Strategy keynote and the Technical keynote were held in the Masonic Auditorium later in the afternoon. Oracle executives presented the plans for Java SE, JavaFX and Java EE. As on-demand replays will be available soon, I will not summarize several hours of content, but here are some personal takeaways from those keynotes.
Modularity
Modularity is a big deal. We know by now that Project Jigsaw will not be ready for Java SE 8, but in any case it is already possible (and encouraged) to test Jigsaw today. In the future, Java EE plans to rely on the modularity features provided by Java SE, so Project Jigsaw is also relevant for Java EE developers. Shorter term, to cover some of the modular requirements, Java SE will adopt the approach that was used for Java EE 6: the notion of Profiles. This approach does not define a module system per se; Profiles are a way to clearly define different subsets of Java SE to fulfill different needs (e.g. the full JRE is not required for a headless application). The introduction of different Profiles, from the Base Profile (10 MB) to the Full Profile (50+ MB), has been proposed for Java SE 8.
Embedded
Embedded is a strong theme going forward for the Java platform. There is now a dedicated program: Java Embedded @ JavaOne. Java by nature (platform independence, built-in security, the ability to easily talk to any back-end system, the large set of skills available on the market, etc.) is probably the platform most suited to the Internet of Things. You can quickly be up to speed and develop services and applications for that space just by using your current Java skills. All you need to start developing on ARM is a $35 Raspberry Pi board ($25 if you are cheap and can live without an Ethernet connection) and the recently released JDK for Linux/ARM. Obviously, GlassFish runs on the Raspberry Pi. If you want to go further in the embedded space, you should take a look at Java SE Embedded, an optimized, low-footprint Java environment that supports the major embedded architectures (ARM, PPC and x86). Finally, Oracle has recently introduced Java Embedded Suite, a new solution that brings modern middleware capabilities to the embedded space. Java Embedded Suite is an optimized solution that leverages Java SE Embedded along with GlassFish, Jersey and JavaDB to deploy advanced value-added capabilities (e.g. sensor data filtering) deeper in the network, closer to the devices.
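As a small illustration of that last point, here is a sketch of the kind of headless service such a board could run. To be clear, this is not from any Oracle sample: the port number and the canned sensor reading are invented purely for illustration. It uses only standard Java SE APIs, so the same class runs on a desktop JDK or on the JDK for Linux/ARM.

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    // A tiny headless "sensor" service: plain Java SE, no embedded-specific APIs,
    // so it runs unchanged on a laptop or on a Raspberry Pi with the Linux/ARM JDK.
    public class SensorService {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(9000)) { // port chosen arbitrarily
                while (true) {
                    try (Socket client = server.accept();
                         OutputStream out = client.getOutputStream()) {
                        // A real device would read actual hardware here;
                        // we return a canned value purely for illustration.
                        out.write("temperature=21.5\n".getBytes("UTF-8"));
                    }
                }
            }
        }
    }

Connect with telnet or nc on port 9000 and you get the reading back; everything here is JDK 7 syntax, which matches the Linux/ARM JDK mentioned above.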
JavaFX
JavaFX is going strong! Starting from Java SE 7u6, JavaFX is bundled with the JDK. JavaFX is now available for all the major desktop platforms (Windows, Linux and Mac OS X). JavaFX is now also available, in developer preview, for low-end devices running Linux/ARM; during the keynote, JavaFX was shown running on a Raspberry Pi! And as announced during the keynote, JavaFX should be fully open-sourced by the end of the year; contributions are welcome! There is strong momentum around JavaFX: it's the ideal client solution for the Java platform, a client layer that works perfectly with GlassFish on the back-end. If you were not convinced by JavaFX, it's time to reconsider it (a minimal example appears at the end of this post)! As an old Chinese proverb says, "One tweet is worth a thousand words!"
HTML5, Project Avatar and Java EE 7
HTML5 got a lot of airtime too; it was covered during the Java EE 7 section of the keynote. Some details about Project Avatar, Oracle's incubator project for a TSA (Thin Server Architecture) solution, were shown during the keynote. On the tooling side, Project Easel running on NetBeans 7.3 beta was demo'ed, including a cool NetBeans debugging session running in Chrome! HTML5, Project Avatar and Java EE 7 deserve separate posts...
Feedback
We need your feedback! There are many projects, JSRs and products cooking: GlassFish 4, Project Jigsaw, Concurrency Utilities for Java EE (JSR 236), OpenJFX and OpenJDK, to name just a few. Those projects and specifications will have a profound impact on the Java platform for years to come! So if you have the opportunity, download them, install them, learn them, test them and give feedback! Remember, you can "Make the Future Java!"
Finally, the traditional GlassFish Party at the Thirsty Bear concluded the first JavaOne day. This party is another place where the community can freely exchange with the GlassFish team in a more relaxed, more friendly (but sometimes noisier) atmosphere. Arun has posted a set of pictures to reflect the atmosphere of the keynotes and the GlassFish party. You can find more details on the other Java EE and GlassFish activities here.
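As promised, here is about the smallest complete JavaFX program there is, assuming a JDK with JavaFX bundled (7u6 or later, as mentioned above). The class name and window contents are made up for the example.

    import javafx.application.Application;
    import javafx.scene.Scene;
    import javafx.scene.control.Label;
    import javafx.scene.layout.StackPane;
    import javafx.stage.Stage;

    // The smallest useful JavaFX application: one window containing one label.
    public class HelloFX extends Application {
        @Override
        public void start(Stage stage) {
            StackPane root = new StackPane();
            root.getChildren().add(new Label("Hello, JavaFX!"));
            stage.setScene(new Scene(root, 320, 240));
            stage.setTitle("HelloFX");
            stage.show(); // runs on the JavaFX application thread
        }

        public static void main(String[] args) {
            launch(args); // boots the JavaFX runtime, then calls start()
        }
    }

The same source compiles and runs on any of the desktop platforms listed above, which is exactly the cross-platform promise being made for the client side.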

    Read the article

  • 5 Things I Learned About the IT Labor Shortage

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein | Sr. Principal Product Marketing Director | Oracle Midsize Programs | @JimLein
A gentle autumn breeze is nudging the last golden leaves off the aspen trees. It's time to wrap up the series that I started back in April, "The Growing IT Labor Shortage: Are You Feeling It?" Even in a time of relatively high unemployment, labor shortages exist depending on many factors, including location, industry, IT requirements, and company size. According to ManpowerGroup's 2013 Talent Shortage Survey, 35% of hiring managers globally are having difficulty filling jobs. Their top three challenges in filling jobs are: 1. lack of technical competencies (hard skills), 2. lack of available applicants, and 3. lack of experience. The same report listed technicians as the most difficult position to fill in the United States.
For most companies, human capital and talent management have never been more strategic, and they are striving for ways to streamline processes, reduce turnover, and lower costs (see this Oracle whitepaper, "Simplify Workforce Management and Increase Global Agility"). Everyone I spoke to—partner, customer, and Oracle experts—agreed that it can be extremely challenging to hire and retain IT talent in today's labor market. And they generally agreed on the causes: (a) IT is so pervasive that there are myriad moving parts requiring support and expertise; (b) thus, it's hard for university graduates to step in and contribute immediately without experience and specialization; (c) big IT companies generally aren't the talent incubators that they were in the freewheeling '90s, due to bottom-line pressures that require hiring talent that can hit the ground running; and (d) it's often too expensive for resource-strapped midsize companies to invest the time and money required to get graduates up to speed.
Here are my top lessons learned from my conversations with the experts.
1. A Better Title Would Have Been, "The Challenges of Finding and Retaining IT Talent That Matches Your Requirements"
There are more applicants than jobs, but it's getting tougher and tougher to find individuals that perfectly fit each and every role. Top-performing companies are increasingly looking to hire the "almost ready", striving to keep their existing talent more engaged, and leveraging their employees' social and professional networks to quickly narrow down candidate searches (here's another whitepaper, "A Strategic Approach to Talent Management").
2. Size Matters—But So Does Location
Midsize companies must strive to build cultures that compete favorably with what large enterprises can offer, especially when they aren't within commuting distance of IT talent strongholds. They can't always match the compensation and benefits offered by large enterprises, so it's paramount to offer candidates a high quality of life and opportunities to build their resumes in alignment with their long-term career aspirations.
3. Get By With a Little Help From Your Friends
It doesn't always make sense to invest time and money in training an employee on a task they will not perform frequently, or to get into a bidding war for talent with skills that are rare and in high demand. Many midsize companies are finding that it makes good economic sense to contract with partners for remote support rather than trying to divvy up each and every role amongst their lean staff. Internal staff can then be assigned to the roles that will have the highest positive impact on achieving organizational goals.
4. It's Actually Both "What You Know" AND "Who You Know"
If I were hiring someone today, I would absolutely leverage the social and professional networks of my co-workers. Period. Most research shows that hiring in this manner is less expensive and time-consuming AND produces better results. There is also some evidence that suggests new hires from employees' networks have higher job performance and retention rates.
5. I Have New Respect for Recruiters and Hiring Managers
My hat's off to them—it's not easy hiring and retaining top talent with today's challenges. Check out the infographic, "A New Day: Taking HR from Chaos to Control", on Oracle's Human Capital Management solutions home page. You can also explore all of Oracle's HCM solutions from that page based on your role. You can read all the posts in this series by clicking on the links in the right sidebar. Stay tuned…we'll continue to post thought leadership on HCM and Talent Management topics.

    Read the article

  • Stumbling Through: Making a case for the K2 Case Management Framework

    I have recently attended a three-day training session on K2's Case Management Framework (CMF), a free framework built on top of K2's blackpearl workflow product, and I have come away with several different impressions of the different aspects of the framework. Before we get into the details, what is the Case Management Framework? It is essentially a suite of tools that, when used together, solve many common workflow scenarios. The tool has been developed over time by K2 consultants who realized they tend to solve the same problems over and over for various clients, so they attempted to package all of those common solutions into one framework. Most of these common problems involve workflow processes that aren't necessarily direct and would tend to be difficult to model. Such solutions could be achieved in blackpearl alone, but the workflows would be complex and difficult to follow and maintain over time. CMF attempts to simplify such scenarios not so much by black-boxing the workflow processes, but by providing different points of entry to the processes, allowing them to be simpler and moving the complexity to a middle layer. It is not a solution in and of itself; development is still required to tie the pieces together.
CMF is under continuous development, both a plus and a minus in that bugs are fixed quickly and features added regularly, but it may be difficult to know which versions are the most stable. CMF is not an officially supported K2 product, which means you will not get technical support, but you will get access to the source code.
The example given of a business process that would fit well into CMF is that of a file cabinet, where each folder in said file cabinet is a case that contains all of the data associated with one complaint/customer/incident/etc., and various users can access that case at any time and take one of a set of pre-determined actions on it. When I was given that example, my first thought was that any workflow I have ever developed in the past could be made to fit this model, so there must be more than just this model to help decide if CMF is the right solution. As the training went on, we learned that one of the key features of CMF is SharePoint integration: each case gets a SharePoint site created for it, and there are a number of excellent web parts that can be used to design a portal for users to get at all the information on their cases. While CMF does not require SharePoint, without it you will be missing out on a huge portion of the functionality that CMF offers. My opinion is that without SharePoint integration, you may as well write your workflows and other components the old-fashioned way.
When I heard that each case gets its own SharePoint site created for it, warning bells immediately went off in my head, as I felt that, depending on the data load, a CMF-enabled solution could quickly overwhelm SharePoint with thousands of sites. So we have yet another deciding factor for CMF: just how many cases will your solution be creating? While it is not necessary to use the site-per-case model, it is one of the more useful parts of the framework. Without it, you are losing a big chunk of what CMF has to offer.
When it comes to developing on top of the Case Management Framework, it becomes a matter of configuring what makes up a case, what can be done to a case, where each action on a case should take the user, and then tying actions to case statuses.
This last step is one that I immediately warmed up to, as just about every workflow I've designed in the past needed some sort of mapping table to set the status of a work item based on the action being taken; this is definitely one of those common solutions that it is good to see rolled up into a re-usable entity (and it gets a nice configuration UI to boot!). A minimal sketch of the idea appears at the end of this post. This concept is a little different from traditional workflow design, in that you don't have to think of an end-to-end process around passing a case along a path; rather, you must envision the case as a central object with workflow threads branching off of it and doing their own thing with the case data. Certainly some workflow threads can get rather complex, but the idea is that they RELATE to the case, they don't BECOME the case (though it is still possible with action->status mappings to prevent certain actions in certain cases, so it isn't always a wide-open free-for-all of actions on a case).
I realize that this description of the Case Management Framework merely scratches the surface of what the product can actually do, and I don't think I've conclusively defined the sort of business scenario for which you can make a case for the Case Management Framework. What I do hope to have accomplished with this post is to raise awareness of CMF: there is a (free!) product out there that could potentially simplify a tangled workflow process and give you (for free!) a very useful set of SharePoint web parts and a nice set of (free!) reports. The best way to see if it will truly fit your needs is to give it a try. Did I mention it is FREE? Er, OK, so it is free, but at this time only obtainable by K2 partners.
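To make that action-to-status mapping idea concrete, here is a minimal sketch of the concept. To be clear, this is not K2's API (CMF ships this mapping as configuration with its own UI, and blackpearl is a .NET product); the action names, status names and transition table below are invented purely to illustrate the pattern, written here in Java for brevity.

    import java.util.EnumMap;
    import java.util.Map;

    // Illustrative only: CMF expresses this mapping as configuration, not code.
    public class CaseStatusMapper {
        enum Action { SUBMIT, APPROVE, REJECT, ESCALATE }   // hypothetical actions
        enum Status { OPEN, APPROVED, REJECTED, ESCALATED } // hypothetical statuses

        // The mapping table: which status each action moves a case to.
        private static final Map<Action, Status> TRANSITIONS =
                new EnumMap<Action, Status>(Action.class);
        static {
            TRANSITIONS.put(Action.SUBMIT,   Status.OPEN);
            TRANSITIONS.put(Action.APPROVE,  Status.APPROVED);
            TRANSITIONS.put(Action.REJECT,   Status.REJECTED);
            TRANSITIONS.put(Action.ESCALATE, Status.ESCALATED);
        }

        public static Status apply(Action action) {
            Status next = TRANSITIONS.get(action);
            if (next == null) {
                // A mapping can also refuse actions that are not valid,
                // rather than silently ignoring them.
                throw new IllegalArgumentException("Unmapped action: " + action);
            }
            return next;
        }
    }

The appeal is that the workflow threads stay simple (they just raise actions) while a single table owns the status logic, which is exactly the piece CMF rolls up into a configurable, re-usable entity.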

    Read the article

< Previous Page | 124 125 126 127 128 129 130 131 132 133 134 135  | Next Page >