Search Results

Search found 16971 results on 679 pages for 'blogs'.


  • Java Spotlight Episode 108: Patrick Curran and Heather VanCura on JCP.Next @jcp_org

    - by Roger Brinkley
    Interview with Patrick Curran and Heather VanCura on JCP.Next. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link:  Java Spotlight Podcast in iTunes. Show Notes News Welcome to the newly merged JCP EC! The November/December issue of Java Magazine is now out Red Hat announces intent to contribute to OpenJFX New OpenJDK JEPs: JEP 168: Network Discovery of Manageable Java Processes JEP 169: Value Objects Java EE 7 Survey Latest Java EE 7 Status GlassFish 4.0 Embedded (via @agoncal) Events Nov 13-17, Devoxx, Antwerp, Belgium Nov 20, JCP Public Meeting (see details below) Nov 20-22, DOAG 2012, Nuremberg, Germany Dec 3-5, jDays, Göteborg, Sweden Dec 4-6, JavaOne Latin America, Sao Paolo, Brazil Dec 14-15, IndicThreads, Pune, India Feature InterviewPatrick Curran is Chair of the Java Community Process organization. In this role he oversees the activities of the JCP's Program Management Office including evolving the process and the organization, managing its membership, guiding specification leads and experts through the process, chairing Executive Committee meetings, and managing the JCP.org web site.Patrick has worked in the software industry for more than 25 years, and at Sun and then Oracle for 20 years. He has a long-standing record in conformance testing, and before joining the JCP he led the Java Conformance Engineering team in Sun's Client Software Group. He was also chair of Sun's Conformance Council, which was responsible for defining Sun's policies and strategies around Java conformance and compatibility.Patrick has participated actively in several consortia and communities including the W3C (as a member of the Quality Assurance Working Group and co-chair of the Quality Assurance Interest Group), and OASIS (as co-chair of the Test Assertions Guidelines Technical Committee). Patrick's blog is here.Heather VanCura manages the JCP Program Office and is responsible for the day-to-day nurturing, support, and leadership of the community. She oversees the JCP.org web site, JSR management and posting, community building, events, marketing, communications, and growth of the membership through new members and renewals.  Heather has a front row seat for studying trends within the community and recommending changes. Several changes to the program in recent years have included enabling broader participation, increased transparency and agility in JSR development.  When Heather joined the PMO staff in a community building marketing manager role for the JCP program, she was responsible for establishing the JCP brand logo programs, the JCP.org site, and engaging the community in online surveys and usability studies. She also developed marketing reward programs,  campaigns, sponsorships, and events for the JCP program, including the community gathering at the annual JavaOne Conference.   Before arriving at the JCP community in 2000, Heather worked with various technology companies.  Heather enjoys speaking at conferences, such as Devoxx, Java Zone, and the JavaOne Conferences. She maintains the JCP Blog, Twitter feed (@jcp_org) and Facebook page.  Heather resides in the San Francisco Bay Area, California USA. 
JCP Executive Committee Public Meeting Details Date & Time Tuesday November 20, 2012, 3:00 - 4:00 pm PST Location Teleconference Dial-in +1 (866) 682-4770 Conference code: 627-9803 Security code: 52732 ("JCPEC" on your phone handset) For global access numbers see http://www.intercall.com/oracle/access_numbers.htm Or +1 (408) 774-4073 WebEx Browse for the meeting from https://jcp.webex.com No registration required (enter your name and email address) Password: JCPEC Agenda JSR 355 (the EC merge) implementation report JSR 358 (JCP.next.3) status report 2.8 status update and community audit program Discussion/Q&A Note The call will be recorded and the recording published on jcp.org, so those who are unable to join in real-time will still be able to participate. September 2012 EC meeting PMO report with JCP 2.8 statistics. JSR 358 Project page What’s Cool Sweden: Hot Java in the Winter GE Energy using Invoke Dynamic for embedded development

    Read the article

  • Multiple clients on the same Oracle UCM

    - by [email protected]
    We have been very busy implementing ECM platforms that serve multiple clients, achieving two very important goals: the end customer can pay the provider for the ECM platform as a service (SaaS), and, naturally, saves on complexity and on infrastructure, administration, training and storage costs. Over the last few days we have been explaining the Oracle UCM Master-Proxy model, with which we can build this kind of platform. It will not always be the most suitable solution, because sometimes we will want shared platforms with completely isolated clients. In any case, the migration console lets us export and import components, metadata, content, workflows, etc., so we can choose the most appropriate model for each case. But how does it work? As you can see in the image, it is based on installing several UCM instances and configuring them so that some behave as "Master" (several, to achieve high availability) and the rest behave as "Proxy" (several UCM instances can also act as a single "Proxy", allowing the load to be balanced according to whether each client needs more or less performance). This configuration (shown in the attached image) allows us to: Delegate user management for each client - users of the Master can access all the Proxies, but the users of each proxy only access their own repository. Delegate functionality and components - different functionality can be configured on each proxy, so that some servers specialize in Web Content Management and others in Document Management (for example). Use different metadata models - we can model general document types for the whole platform and different, specific ones on each UCM "Proxy". Centralize searches and access to document repositories with different character sets - a Master UCM can centralize searches across proxy UCMs that hold documents in different character sets (for example, one UCM for documents in "Western European" languages (English, Spanish, French, German, ...) and another proxy UCM with an "Asian" character set (Japanese, Korean, Chinese, ...)). Sources: All the detailed information can be found in the Oracle UCM documentation, here: http://download.oracle.com/docs/cd/E10316_01/ouc.htm And specifically, everything related to platforms is in the "Planning and Implementation Guide", here: plan_implement_guide_10en.pdf

    Read the article

  • MySQL Connect: What to Expect From the Wondrous Land of MySQL Cluster

    - by Mat Keep
    The MySQL Connect conference is only a couple of weeks away, with MySQL engineers, support teams, consultants and community aces busy putting the final touches to their talks. There will be many exciting new announcements and sharing of best practices at the conference, covering the range of MySQL technologies. MySQL Cluster will be a big part of this, so I wanted to share some key sessions for those of you who plan on attending, as well as some resources for those who are not lucky enough to be able to make the trip, but who can't afford to miss the key news. Of course, this is no substitute for actually being there… and the good news is that registration is still open ;-) Roadmap: What's New in MySQL Cluster - Saturday 29th, 1300-1400, in Golden Gate room 5. Bernd Ocklin, director of MySQL Cluster development, and myself will be taking a look at what follows the latest MySQL Cluster 7.2 release. I don't want to give too much away - let's just say it's not often you can add powerful new functionality to a product while at the same time making life radically simpler for its users. For those not making it to the conference, a live webinar repeating the talk is scheduled for Thursday 25th October at 09.00 Pacific time. Hold the date; registration will be open for that soon and published to our MySQL Webinars page. Best Practices Getting Started with MySQL Cluster, Hands-On Lab - Saturday 29th, 1600-1700, in Plaza Room A. Santo Leto, one of our lead MySQL Cluster support engineers, regularly works with users new to MySQL Cluster, assisting them in installation, configuration, scaling, etc. In this lab, Santo will share best practices in getting started. Delivering Breakthrough Performance with MySQL Cluster - Saturday 29th, 1730-1830, in Golden Gate room 5. Frazer Clement, lead MySQL Cluster software engineer, will demonstrate how to translate the awesome Cluster benchmarks (remember 1 BILLION UPDATEs per minute?!) into real-world performance. You can also get some best practices from our new MySQL Cluster performance guide. MySQL Cluster BoF - Saturday 29th, 1900-2000, room Golden Gate 5. Come and get a demonstration of new tools for the installation and configuration of MySQL Cluster, and spend time with the engineering team discussing any questions or issues you may have. Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster - Sunday 30th, 1145-1245, in Golden Gate room 7. In this session, JD Duncan and Andrew Morgan will present how to get started with both Memcached and the new NoSQL APIs. JD and I recently ran a webinar demonstrating how to build simple Twitter-like services with Memcached and MySQL Cluster. The replay is available for download. Case Studies: MySQL Cluster @ El Chavo, Latin America's #1 Facebook Game - Sunday 30th, 1745-1845, in Golden Gate room 4. Playful Play deployed MySQL Cluster CGE to power their market-leading social game. This session will discuss the challenges they faced, why they selected MySQL Cluster and their experiences to date. You can read more about Playful Play and MySQL Cluster here. A Journey into NoSQLand: MySQL's NoSQL Implementation - Sunday 30th, 1345-1445, in Golden Gate room 4.
Lig Turmelle, web DBA at Kaplan Professional and esteemed Oracle ACE, will discuss her experiences working with the NoSQL interfaces for both MySQL Cluster and InnoDB. Evaluating MySQL HA Alternatives - Saturday 29th, 1430-1530, room Golden Gate 5. Henrik Ingo, former member of the MySQL sales engineering team, will provide an overview of various HA technologies for MySQL, starting with replication and progressing to InnoDB, Galera and MySQL Cluster. What about the other stuff? Of course MySQL Connect has much, much more than MySQL Cluster. There will be lots on replication (which I'll blog about soon), MySQL 5.6, InnoDB, cloud, etc. Take a look at the full Content Catalog to see more. If you are attending, I hope to see you at one of the Cluster sessions... and remember, registration is still open.

    Read the article

  • Consolidate Data in Private Clouds, But Consider Security and Regulatory Issues

    - by Troy Kitch
    The January 13 webcast, Security and Compliance for Private Cloud Consolidation, will provide attendees with an overview of private cloud computing based on Oracle's Maximum Availability Architecture and of how security and regulatory compliance affect implementations. Many organizations are taking advantage of Oracle's Maximum Availability Architecture to drive down the cost of IT by deploying private cloud computing environments that can absorb downtime and utilization spikes without idle redundancy. With two-thirds of sensitive and regulated data residing in organizations' databases, private cloud database consolidation means organizations must be more concerned than ever about protecting their information and addressing new regulatory challenges. Join us for this webcast to learn about the greater risks and increased threats to private cloud data, and how Oracle Database Security Solutions can assist in securely consolidating data and meeting compliance requirements. Register Now.

    Read the article

  • Oracle HRMS API – Create or Update Employee Salary

    - by PRajkumar
    API -- hr_maintain_proposal_api.cre_or_upd_salary_proposal

Note - A salary basis must be assigned to the employee assignment before running the Salary Proposal API.

Example --

DECLARE
    lb_inv_next_sal_date_warning   BOOLEAN;
    lb_proposed_salary_warning     BOOLEAN;
    lb_approved_warning            BOOLEAN;
    lb_payroll_warning             BOOLEAN;
    ln_pay_proposal_id             NUMBER;
    ln_object_version_number       NUMBER;
BEGIN
    -- Create or Update Employee Salary Proposal
    -- ------------------------------------------
    hr_maintain_proposal_api.cre_or_upd_salary_proposal
    (    -- Input data elements
         -- --------------------
         p_business_group_id          => fnd_profile.value('PER_BUSINESS_GROUP_ID'),
         p_assignment_id              => 33561,
         p_change_date                => TO_DATE('13-JUN-2011'),
         p_proposed_salary_n          => 1000,
         p_approved                   => 'Y',
         -- Output data elements
         -- ---------------------
         p_pay_proposal_id            => ln_pay_proposal_id,
         p_object_version_number      => ln_object_version_number,
         p_inv_next_sal_date_warning  => lb_inv_next_sal_date_warning,
         p_proposed_salary_warning    => lb_proposed_salary_warning,
         p_approved_warning           => lb_approved_warning,
         p_payroll_warning            => lb_payroll_warning
    );
    COMMIT;
EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;
        dbms_output.put_line(SQLERRM);
END;
/
SHOW ERR;

    Read the article

  • Windows Azure Roles stuck in ‘initializing’, ‘busy’

    - by kaleidoscope
    Technorati Tags: windows azure,roles,stuck,initializing,busy,stopping,tinu If you have worked on Windows Azure you are bound to have faced this gnawing and dreaded issue: your Web/Worker role goes from ‘initializing’ to ‘busy’ to ‘stopping’ but never reaches ‘ready’. For those of us who have resorted to merciless desktop vandalism over this, there is still hope. In his post, MSFT’s Toddy Mladenov summarizes a few plausible reasons for this: 1. Missing runtime dependencies (DLLs). 2. Incorrect platform version of a DLL. 3. Incorrect DiagnosticsConnectionString/DataConnectionString. 4. Queues/tables read during initialization do not exist. 5. Certificate without an exportable private key. 6. Returning from the Run method in a worker role. For a more detailed and precise account, visit his post: http://blog.toddysm.com/2010/01/windows-azure-deployment-stuck-in-initializing-busy-stopping-why.html - Tinu, O
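A startup check along the following lines can surface reasons 3, 4 and 6 before the role starts cycling. This is a minimal sketch, assuming the Azure SDK 1.x ServiceRuntime and StorageClient assemblies of that era; the setting name "DataConnectionString" and the queue name "orders" are hypothetical placeholders, not taken from the post.

using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        try
        {
            // Reason 3: a malformed connection string throws here with a readable message,
            // instead of leaving the role flapping between 'initializing' and 'busy'.
            var account = CloudStorageAccount.Parse(
                RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));

            // Reason 4: create the queue read during initialization rather than assuming it exists.
            var queue = account.CreateCloudQueueClient().GetQueueReference("orders");
            queue.CreateIfNotExist();
        }
        catch (Exception ex)
        {
            // Surface the root cause in diagnostics instead of a silent restart loop.
            Trace.TraceError("Role startup check failed: {0}", ex);
            throw;
        }
        return base.OnStart();
    }

    public override void Run()
    {
        // Reason 6: never return from Run(); returning makes the fabric recycle the role.
        while (true)
        {
            Thread.Sleep(TimeSpan.FromSeconds(10));
        }
    }
}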

    Read the article

  • Enterprise Library Logging / Exception handling and Postsharp

    - by subodhnpushpak
    One of my colleagues came-up with a unique situation where it was required to create log files based on the input file which is uploaded. For example if A.xml is uploaded, the corresponding log file should be A_log.txt. I am a strong believer that Logging / EH / caching are cross-cutting architecture aspects and should be least invasive to the business-logic written in enterprise application. I have been using Enterprise Library for logging / EH (i use to work with Avanade, so i have affection towards the library!! :D ). I have been also using excellent library called PostSharp for cross cutting aspect. Here i present a solution with and without PostSharp all in a unit test. Please see full source code at end of the this blog post. But first, we need to tweak the enterprise library so that the log files are created at runtime based on input given. Below is Custom trace listner which writes log into a given file extracted out of Logentry extendedProperties property. using Microsoft.Practices.EnterpriseLibrary.Common.Configuration; using Microsoft.Practices.EnterpriseLibrary.Logging.Configuration; using Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners; using Microsoft.Practices.EnterpriseLibrary.Logging; using System.IO; using System.Text; using System; using System.Diagnostics;   namespace Subodh.Framework.Logging { [ConfigurationElementType(typeof(CustomTraceListenerData))] public class LogToFileTraceListener : CustomTraceListener {   private static object syncRoot = new object();   public override void TraceData(TraceEventCache eventCache, string source, TraceEventType eventType, int id, object data) {   if ((data is LogEntry) & this.Formatter != null) { WriteOutToLog(this.Formatter.Format((LogEntry)data), (LogEntry)data); } else { WriteOutToLog(data.ToString(), (LogEntry)data); } }   public override void Write(string message) { Debug.Print(message.ToString()); }   public override void WriteLine(string message) { Debug.Print(message.ToString()); }   private void WriteOutToLog(string BodyText, LogEntry logentry) { try { //Get the filelocation from the extended properties if (logentry.ExtendedProperties.ContainsKey("filelocation")) { string fullPath = Path.GetFullPath(logentry.ExtendedProperties["filelocation"].ToString());   //Create the directory where the log file is written to if it does not exist. DirectoryInfo directoryInfo = new DirectoryInfo(Path.GetDirectoryName(fullPath));   if (directoryInfo.Exists == false) { directoryInfo.Create(); }   //Lock the file to prevent another process from using this file //as data is being written to it.   
lock (syncRoot) { using (FileStream fs = new FileStream(fullPath, FileMode.Append, FileAccess.Write, FileShare.Write, 4096, true)) { using (StreamWriter sw = new StreamWriter(fs, Encoding.UTF8)) { Log(BodyText, sw); sw.Close(); } fs.Close(); } } } } catch (Exception ex) { throw new LoggingException(ex.Message, ex); } }   /// <summary> /// Write message to named file /// </summary> public static void Log(string logMessage, TextWriter w) { w.WriteLine("{0}", logMessage); } } }   The above can be “plugged into” the code using below configuration <loggingConfiguration name="Logging Application Block" tracingEnabled="true" defaultCategory="Trace" logWarningsWhenNoCategoriesMatch="true"> <listeners> <add listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.CustomTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" traceOutputOptions="None" filter="All" type="Subodh.Framework.Logging.LogToFileTraceListener, Subodh.Framework.Logging, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="Subodh Custom Trace Listener" initializeData="" formatter="Text Formatter" /> </listeners> Similarly we can use PostSharp to expose the above as cross cutting aspects as below using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Reflection; using PostSharp.Laos; using System.Diagnostics; using GC.FrameworkServices.ExceptionHandler; using Subodh.Framework.Logging;   namespace Subodh.Framework.ExceptionHandling { [Serializable] public sealed class LogExceptionAttribute : OnExceptionAspect { private string prefix; private MethodFormatStrings formatStrings;   // This field is not serialized. It is used only at compile time. [NonSerialized] private readonly Type exceptionType; private string fileName;   /// <summary> /// Declares a <see cref="XTraceExceptionAttribute"/> custom attribute /// that logs every exception flowing out of the methods to which /// the custom attribute is applied. /// </summary> public LogExceptionAttribute() { }   /// <summary> /// Declares a <see cref="XTraceExceptionAttribute"/> custom attribute /// that logs every exception derived from a given <see cref="Type"/> /// flowing out of the methods to which /// the custom attribute is applied. /// </summary> /// <param name="exceptionType"></param> public LogExceptionAttribute( Type exceptionType ) { this.exceptionType = exceptionType; }   public LogExceptionAttribute(Type exceptionType, string fileName) { this.exceptionType = exceptionType; this.fileName = fileName; }   /// <summary> /// Gets or sets the prefix string, printed before every trace message. /// </summary> /// <value> /// For instance <c>[Exception]</c>. /// </value> public string Prefix { get { return this.prefix; } set { this.prefix = value; } }   /// <summary> /// Initializes the current object. Called at compile time by PostSharp. /// </summary> /// <param name="method">Method to which the current instance is /// associated.</param> public override void CompileTimeInitialize( MethodBase method ) { // We just initialize our fields. They will be serialized at compile-time // and deserialized at runtime. 
this.formatStrings = Formatter.GetMethodFormatStrings( method ); this.prefix = Formatter.NormalizePrefix( this.prefix ); }   public override Type GetExceptionType( MethodBase method ) { return this.exceptionType; }   /// <summary> /// Method executed when an exception occurs in the methods to which the current /// custom attribute has been applied. We just write a record to the tracing /// subsystem. /// </summary> /// <param name="context">Event arguments specifying which method /// is being called and with which parameters.</param> public override void OnException( MethodExecutionEventArgs context ) { string message = String.Format("{0}Exception {1} {{{2}}} in {{{3}}}. \r\n\r\nStack Trace {4}", this.prefix, context.Exception.GetType().Name, context.Exception.Message, this.formatStrings.Format(context.Instance, context.Method, context.GetReadOnlyArgumentArray()), context.Exception.StackTrace); if(!string.IsNullOrEmpty(fileName)) { ApplicationLogger.LogException(message, fileName); } else { ApplicationLogger.LogException(message, Source.UtilityService); } } } } To use the above below is the unit test [TestMethod] [ExpectedException(typeof(NotImplementedException))] public void TestMethod1() { MethodThrowingExceptionForLog(); try { MethodThrowingExceptionForLogWithPostSharp(); } catch (NotImplementedException ex) { throw ex; } }   private void MethodThrowingExceptionForLog() { try { throw new NotImplementedException(); } catch (NotImplementedException ex) { // create file and then write log ApplicationLogger.TraceMessage("this is a trace message which will be logged in Test1MyFile", @"D:\EL\Test1Myfile.txt"); ApplicationLogger.TraceMessage("this is a trace message which will be logged in YetAnotherTest1Myfile", @"D:\EL\YetAnotherTest1Myfile.txt"); } }   // Automatically log details using attributes // Log exception using attributes .... A La WCF [FaultContract(typeof(FaultMessage))] style] [Log(@"D:\EL\Test1MyfileLogPostsharp.txt")] [LogException(typeof(NotImplementedException), @"D:\EL\Test1MyfileExceptionPostsharp.txt")] private void MethodThrowingExceptionForLogWithPostSharp() { throw new NotImplementedException(); } The good thing about the approach is that all the logging and EH is done at centralized location controlled by PostSharp. Of Course, if some other library has to be used instead of EL, it can easily be plugged in. Also, the coder ARE ONLY involved in writing business code in methods, which makes code cleaner. Here is the full source code. The third party assemblies provided are from EL and PostSharp and i presume you will find these useful. Do let me know your thoughts / ideas on the same. Technorati Tags: PostSharp,Enterprize library,C#,Logging,Exception handling

    Read the article

  • Oracle Functional Testing Suite Advanced Pack for Oracle EBS Now Available

    - by Anne Carlson (Oracle Development)
    There’s new news about automated testing of E-Business Suite using the Oracle Application Testing Suite, a.k.a, “OATS”. E-Business Suite Development is pleased to announce the availability of the new Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite. The new pack, available with the latest release of Oracle Application Testing Suite (12.4.0.2), provides pre-built test components and flows to automate the in-depth testing of Oracle E-Business Suite applications. Designed for use with the Oracle Application Testing Suite and its Oracle Flow Builder capability, these pre-built components and flows can help Oracle E-Business Suite customers to significantly reduce the time and effort needed to create and maintain automated test scripts. The Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite is available now for EBS 12.1.3, and availability for EBS 12.2 is planned. Some Background on Automating Testing with Oracle Application Testing Suite and Oracle Flow Builder      Testing complex packaged applications like Oracle E-Business Suite can be time-consuming and challenging for organizations, hampering their ability to upgrade to latest releases or apply latest patches. Oracle Application Testing Suite offers organizations a unique and powerful testing platform for Oracle E-Business Suite and other Oracle applications. With the 12.3.0.1 release of Oracle Application Testing Suite, we introduced the Oracle Flow Builder testing framework and accompanying starter pack of pre-built test components and flows. The starter pack, which contains over 2000 components and 200 flows, provides broad coverage of commonly-used base functionality and is designed to jump-start the test automation effort. Using Oracle Flow Builder, even non-technical testers can create working test scripts using the pre-built components that Oracle provides. Each component represents an atomic test operation such as “create an invoice batch” or “apply an invoice hold.” Testers can assemble the pre-built components into test flows, and combine test flows with spreadsheet data to drive the testing of multiple data conditions. The Oracle Flow Builder framework allows customers to add, modify and extend the pre-built components to address new functionality and customizations of the Oracle E-Business Suite. Using Oracle Flow Builder’s component-based test generation framework instead of a traditional record/playback approach has allowed the EBS Quality Assurance team to reduce their test automation effort by 60%. E-Business Suite customers can significantly reduce their test automation effort using Oracle Application Testing Suite with Oracle Flow Builder and the pre-built test components and flows that Oracle provides. Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite Improves Test Coverage With the Oracle Application Testing Suite 12.4.0.2 and the new Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite, we are now delivering a significant number of additional test components and flows beyond those contained in the Oracle Flow Builder starter pack. 
These additional test components and flows provide 70-80% test coverage and enable the automation of detailed and complex test flows across the following Oracle E-Business Suite products: Oracle Asset Lifecycle Management, Oracle Channel Revenue Management, Oracle Discrete Manufacturing, Oracle Incentive Compensation, Oracle Lease and Finance Management, Oracle Process Manufacturing, Oracle Procurement, Oracle Project Management, Oracle Property Manager, and Oracle Service. Downloads: You can download the Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite from the Oracle Technology Network. References: Oracle Applications Testing Suite; YouTube: Oracle Flow Builder Training; YouTube: Oracle Applications Testing Suite and Flow Builder Demonstration; Oracle Functional Testing Suite Advanced Pack Readme for E-Business Suite (Note 1905989.1). Related Articles: Automate Testing Using Oracle Application Testing Suite with Flow Builder for E-Business Suite; EBS 12.1.1 Test Starter Kit Now Available for Oracle Applications Testing Suite; Oracle Application Testing Suite 9.0 Supported with Oracle E-Business Suite; Using the Oracle Application Testing Suite with EBS: Interim Update #1

    Read the article

  • Upgrading Windows 8 boot to VHD to Windows 8.1 – Step by step guide

    - by Liam Westley
    Originally posted on: http://geekswithblogs.net/twickers/archive/2013/10/19/upgrading-windows-8-boot-to-vhd-to-windows-8.1ndashstep-by.aspxBoot to VHD – dual booting Windows 7 and Windows 8 became easy When Windows 8 arrived, quite a few people decided that they would still dual boot their machines, and instead of mucking about with resizing disk partitions to free up space for Windows 8 they decided to use the boot from VHD feature to create a huge hard disc image into which Windows 8 could be installed.  Scott Hanselman wrote this installation guide, while I myself used the installation guide from Ed Bott of ZD net fame. Boot to VHD is a great solution, it achieves a dual boot, can be backed up easily and had virtually no effect on the original Windows 7 partition. As a developer who has dual booted Windows operating systems for years, hacking boot.ini files, the boot to VHD was a much easier solution. Upgrade to Windows 8.1 – ah, you can’t do that on a virtual disk installation (boot to VHD) Last week the final version of Windows 8.1 arrived, and I went into the Windows Store to upgrade.  Luckily I’m on a fast download service, and use an SSD, because once the upgrade was downloaded and prepared Windows informed that This PC can’t run Windows 8.1, and provided the reason, You can’t install Windows on a virtual drive.  You can see an image of the message and discussion that sparked my search for a solution in this Microsoft Technet forum post. I was determined not to have to resize partitions yet again and fiddle with VHD to disk utilities and back again, and in the end I did succeed in upgrading to a Windows 8.1 boot to VHD partition.  It takes quite a bit of effort though … tldr; Simple steps of how you upgrade Boot into Windows 7 – make a copy of your Windows 8 VHD, to become Windows 8.1 Enable Hyper-V in your Windows 8 (the original boot to VHD partition) Create a new virtual machine, attaching the copy of your Windows 8 VHD Start the virtual machine, upgrade it via the Windows Store to Windows 8.1 Shutdown the virtual machine Boot into Windows 7 – use the bcedit tool to create a new Windows 8.1 boot to VHD option (pointing at the copy) Boot into the new Windows 8.1 option Reactivate Windows 8.1 (it will have become deactivated by running under Hyper-V) Remove the original Windows 8 VHD, and in Windows 7 use bcedit to remove it from the boot menu Things you’ll need A system that can run Hyper-V under Windows 8 (Intel i5, i7 class CPU) Enough space to have your original Windows 8 boot to VHD and a copy at the same time An ISO or DVD for Windows 8 to create a bootable Windows 8 partition Step by step guide Boot to your base o/s, the real one, Windows 7. Make a copy of the Windows 8 VHD file that you use to boot Windows 8 (via boot from VHD) – I copied it from a folder on C: called VHD-Win8 to VHD-Win8.1 on my N: drive. Reboot your system into Windows 8, and enable Hyper-V if not already present (this may require reboot) Use the Hyper-V manager , create a new Hyper-V machine, using half your system memory, and use the option to attach an existing VHD on the main IDE controller – this will be the new copy you made in Step 2. Start the virtual machine, use Connect to view it, and you’ll probably discover it cannot boot as there is no boot record If this is the case, go to Hyper-V manager, edit the Settings for the virtual machine to attach an ISO of a Windows 8 DVD to the second IDE controller. 
Start the virtual machine, use Connect to view it, and it should now attempt a fresh installation of Windows 8.  You should select Advanced Options and choose Repair - this will make VHD bootable When the setup reboots your virtual machine, turn off the virtual machine, and remove the ISO of the Windows 8 DVD from the virtual machine settings. Start virtual machine, use Connect to view it.  You will see the devices to be re-discovered (including your quad CPU becoming single CPU).  Eventually you should see the Windows Login screen. You may notice that your desktop background (Win+D) will have turned black as your Windows installation has become deactivate due to the hardware changes between your real PC and Hyper-V. Fortunately becoming deactivated, does not stop you using the Windows Store, where you can select the update to Windows 8.1. You can now watch the progress joy of the Windows 8 update; downloading, preparing to update, checking compatibility, gathering info, preparing to restart, and finally, confirm restart - remember that you are restarting your virtual machine sitting on the copy of the VHD, not the Windows 8 boot to VHD you are currently using to run Hyper-V (confused yet?) After the reboot you get the real upgrade messages; setting up x%, xx%, (quite slow) After a while, Getting ready Applying PC Settings x%, xx% (really slow) Updating your system (fast) Setting up a few more things x%, (quite slow) Getting ready, again Accept license terms Express settings Confirmed previous password Next, I had to set up a Microsoft account – which is possibly now required, and not optional Using the Microsoft account required a 2 factor authorization, via text message, a 7 digit code for me Finalising settings Blank screen, HI .. We're setting up things for you (similar to original Windows 8 install) 'You can get new apps from the Store', below which is ’Installing your apps’ - I had Windows Media Center which is counts as an app from the Store ‘Taking care of a few things’, below which is ‘Installing your apps’ ‘Taking care of a few things’, below ‘Don't turn off your PC’ ‘Getting your apps ready’, below ‘Don't turn off your PC’ ‘Almost ready’, below ‘Don't turn off your PC’ … finally, we get the Windows 8.1 start menu, and a quick Win+D to check the desktop confirmed all the application icons I expected, pinned items on the taskbar, and one app moaning about a missing drive At this point the upgrade is complete – you can shutdown the virtual machine Reboot from the original Windows 8 and return to Windows 7 to configure booting to the Windows 8.1 copy of the VHD In an administrator command prompt do following use the bcdedit tool (from an MSDN blog about configuring VHD to boot in Windows 7) Type bcedit to list the current boot options, so you can copy the GUID (complete with brackets/braces) for the original Windows 8 boot to VHD Create a new menu option, copy of the Windows 8 option; bcdedit /copy {originalguid} /d "Windows 8.1" Point the new Windows 8.1 option to the copy of the VHD; bcdedit /set {newguid} device vhd=[D:]\Image.vhd Point the new Windows 8.1 option to the copy of the VHD; bcdedit /set {newguid} osdevice vhd=[D:]\Image.vhd Set autodetection of the HAL (may already be set); bcdedit /set {newguid} detecthal on Reboot from Windows 7 and select the new option 'Windows 8.1' on the boot menu, and you’ll have some messages to look at, as your hardware is redetected (as you are back from 1 CPU to 4 CPUs) ‘Getting devices ready, blank then %xx, with occasional blank screen, 
for the graphics driver, (fast-ish) Getting Ready message (fast) You will have to suffer one final reboots, choose 'Windows 8.1' and you can now login to a lovely Windows 8.1 start screen running on non virtualized hardware via boot to VHD After checking everything is running fine, you can now choose to Activate Windows, which for me was a toll free phone call to the automated system where you type in lots of numbers to be given a whole bunch of new activation codes. Once you’re happy with your new Windows 8.1 boot to VHD, and no longer need the Windows 8 boot to VHD, feel free to delete the old one.  I do believe once you upgrade, you are no longer licensed to use it anyway. There, that was simple wasn’t it? Looking at the huge list of steps it took to perform this upgrade, you may wonder whether I think this is worth it.  Well, I think it is worth booting to VHD.  It makes backups a snap (go to Windows 7, copy the VHD, you backed up the o/s) and helps with disk management – want to move the o/s, you can move the VHD and repoint the boot menu to the new location. The downside is that Microsoft has complete neglected to support boot to VHD as an upgradable option.  Quite a poor decision in my opinion, and if you read twitter and the forums quite a few people agree with that view.  It’s a shame this got missed in the work on creating the upgrade packages for Windows 8.1.

    Read the article

  • Oracle Linux Delivers Top CPU Benchmark Results on Sun Blades

    - by sergio.leunissen
    From the Performance and Best Practices blog: Fresh SPEC CPU2006 results for Sun Blade X6275 M2 Server Modules running Oracle Linux 5.5. The highlights: The dual-node Sun Blade X6275 M2 server module, equipped with two Intel Xeon X5670 2.93 GHz processors per node and running the Oracle Enterprise Linux 5.5 operating system delivered the best SPECint_rate2006 and SPECfp_rate2006 benchmark results for all systems with Intel Xeon processor 5000 sequence. With a SPECint_rate2006 benchmark result of 679, the Sun Blade X6275 M2 server module, with two compute nodes per blade, delivers maximum performance for space constrained environments. Comparing Oracle's dual-node blade to HP's dual-node blade server, based on their single node performance, the Sun Blade X6275 M2 server module SPECfp_rate2006 score of 241 outperforms the best published HP ProLiant BL2X220c G5 server score by 3.2x. A single node of a Sun Blade X6275 M2 server module using 2.93 GHz Intel Xeon X5670 processors delivered 37% improvement in SPECint_rate2006 benchmark results and 22% improvement in SPECfp_rate2006 benchmark results compared to the previous generation Sun Blade X6275 server module. Both nodes of a Sun Blade X6275 M2 server module using 2.93 GHz Intel Xeon X5670 processors delivered 59% improvement on the SPECint_rate2006 benchmark and 40% improvement on the SPECfp_rate2006 benchmark compared to the previous generation Sun Blade X6275 server module.

    Read the article

  • Catch Oracle Today and Tomorrow at Forrester’s Customer Experience Forum 2012 East

    - by Christie Flanagan
    Continuing our coverage of the customer experience revolution this week, don’t miss a chance to catch up with Oracle at Forrester’s Customer Experience Forum 2012 East today and tomorrow in New York City. The theme for this year’s Forum is “Outside In: The Power Of Putting Customers At The Center Of Your Business” and will take a look at important questions surrounding how to transform your company in order to take best advantage of the customer experience revolution: Why is customer experience the greatest untapped source of cost savings and increased revenue today? What is the key to understanding and taking control of your customer experience ecosystem? What are the six essential customer experience disciplines? Which companies have adopted best-in-class customer experience practices? How do customer experience strategies drive differentiating activities and processes at top companies? Which organizations appoint a chief customer officer to lead their customer experience efforts? What is the future of customer experience? How can you design an enterprise wide customer experience? How can you measure the results of your customer experience efforts? As a gold sponsor of the event, there will be a numbers of ways to interact with Oracle while you’re attending the Forum.  Here are some of the highlights:Oracle Speaking SessionTuesday, June 26, 2:10pm – 2:40pmThe Customer And YOU — Today’s Winners Are Defined By Customer ExperienceAnthony Lye, Senior Vice President of Customer Relationship Management, OracleCome hear Anthony Lye, Senior Vice President of Customer Relationship Management at Oracle, explain how leading companies are investing in customer experience solutions to enrich all interactions between a customer and their company. He will discuss Oracle's vision for transforming your customer engagement, insight, and execution into a connected, personalized, and rewarding experience across all touchpoints and interactions. He will demonstrate how great customer experiences generate real business results by attracting more customers, retaining more customers, and generating more sales while improving operational efficiency.Solution ShowcaseTuesday, June 26th9:45am - 10:30am - Morning Networking Break in the Solutions Showcase11:45am – 1:15pm - Networking Lunch an Dessert in the Solutions Showcase2:40pm – 3:25pm - Afternoon Break in the Solutions Showcase5:30pm – 7:00pm - Networking Reception in the Solutions ShowcaseWednesday, June 27th9:45am - 10:30am - Morning Networking Break in the Solutions Showcase12:20pm -1:20pm - Networking Lunch and Dessert in the Solutions ShowcaseWe hope to see you there! Webcast: Learn How Ancestry.com Delivers Exceptional Online Customer Experience with Oracle WebCenterDate: Thursday, June 28, 2012Time: 10:00 AM PDT/ 1:00 PM EDT Ancestry.com is the world’s largest online family history resource, providing an engaging customer experience to more than 1.7 million members. With a wealth of learning resources and a worldwide community of family history enthusiasts, Ancestry.com helps people discover their roots and tell their family stories. Key to Ancestry.com’s success has been the delivery of an online customer experience that converts site visitors into paying subscribers and keeps them coming back. Register now to learn how Ancestry.com delivers an exception customer experience using Oracle WebCenter Sites. 

    Read the article

  • Installing Visual Studio Team Foundation Server Service Pack 1

    - by Martin Hinshelwood
    As has become customary when the product team releases a new patch, SP or version I like to document the install. Although I had no errors on my main computer, my netbook did have problems. Although I am not ready to call it a Service Pack problem just yet! Update 2011-03-10 – Running the Team Foundation Server 2010 Service Pack 1 install a second time worked As per Brian's post I am installing the Team Foundation Server Service Pack first and indeed as this is a single server local deployment I need to install both. If I only install one it will leave the other product broken. This however does not affect you if you are running Visual Studio and Team Foundation Server on separate computers as is normal in a production deployment. Main workhorse I will be installing the service pack first on my main computer as I want to actually use it here. Figure: My main workhorse I will also be installing this on my netbook which is obviously of significantly lower spec, but I will do that one after. Although, as always I had my fingers crossed, I was not really worried. Figure: KB2182621 Compared to Visual Studio there are not really a lot of components to update. Figure: TFS 2010 and SQL 2008 are the main things to update There is no “web” installer for the Team Foundation Server 2010 Service Pack, but that is ok as most people will be installing it on a production server and will want to have everything local. I would have liked a Web installer, but the added complexity for the product team is not work the capability for a 500mb patch. Figure: There is currently no way to roll SP1 and RTM together Figure: No problems with the file verification, phew Figure: Although the install took a while, it progressed smoothly   Figure: I always like a success screen Well, as far as the install is concerned everything is OK, but what about TFS? Can I still connect and can I still administer it. Figure: Service Pack 1 is reflected correctly in the Administration Console I am confident that there are no major problems with TFS on my system and that it has been updated to SP1. I can do all of the things that I used before with ease, and with the new features detailed by Brian I think I will be happy. Netbook The great god Murphy has stuck, and my poor wee laptop spat the Team Foundation Server 2010 Service Pack 1 out so fast it hit me on the back of the head. That will teach me for not looking… Figure: “Installation did not succeed” I am pretty sure should not be all caps! On examining the file I found that everything worked, except the actual Team Foundation Server 2010 serving step. Action: System Requirement Checks... 
Action complete Action: Downloading and/or Verifying Items c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp: Verifying signature for VS10-KB2182621.msp c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp Signature verified successfully for VS10-KB2182621.msp c:\757fe6efe9f065130d4838081911\DACFramework_enu.msi: Verifying signature for DACFramework_enu.msi c:\757fe6efe9f065130d4838081911\DACFramework_enu.msi Signature verified successfully for DACFramework_enu.msi c:\757fe6efe9f065130d4838081911\DACProjectSystemSetup_enu.msi: Verifying signature for DACProjectSystemSetup_enu.msi Exists: evaluating Exists evaluated to false c:\757fe6efe9f065130d4838081911\DACProjectSystemSetup_enu.msi Signature verified successfully for DACProjectSystemSetup_enu.msi c:\757fe6efe9f065130d4838081911\TSqlLanguageService_enu.msi: Verifying signature for TSqlLanguageService_enu.msi c:\757fe6efe9f065130d4838081911\TSqlLanguageService_enu.msi Signature verified successfully for TSqlLanguageService_enu.msi c:\757fe6efe9f065130d4838081911\SharedManagementObjects_x86_enu.msi: Verifying signature for SharedManagementObjects_x86_enu.msi c:\757fe6efe9f065130d4838081911\SharedManagementObjects_x86_enu.msi Signature verified successfully for SharedManagementObjects_x86_enu.msi c:\757fe6efe9f065130d4838081911\SharedManagementObjects_amd64_enu.msi: Verifying signature for SharedManagementObjects_amd64_enu.msi c:\757fe6efe9f065130d4838081911\SharedManagementObjects_amd64_enu.msi Signature verified successfully for SharedManagementObjects_amd64_enu.msi c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_x86_enu.msi: Verifying signature for SQLSysClrTypes_x86_enu.msi c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_x86_enu.msi Signature verified successfully for SQLSysClrTypes_x86_enu.msi c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_amd64_enu.msi: Verifying signature for SQLSysClrTypes_amd64_enu.msi c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_amd64_enu.msi Signature verified successfully for SQLSysClrTypes_amd64_enu.msi c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.cab: Verifying signature for vcruntime\Vc_runtime_x86.cab c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.cab Signature verified successfully for vcruntime\Vc_runtime_x86.cab c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.msi: Verifying signature for vcruntime\Vc_runtime_x86.msi c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.msi Signature verified successfully for vcruntime\Vc_runtime_x86.msi c:\757fe6efe9f065130d4838081911\SetupUtility.exe: Verifying signature for SetupUtility.exe c:\757fe6efe9f065130d4838081911\SetupUtility.exe Signature verified successfully for SetupUtility.exe c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.cab: Verifying signature for vcruntime\Vc_runtime_x64.cab c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.cab Signature verified successfully for vcruntime\Vc_runtime_x64.cab c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.msi: Verifying signature for vcruntime\Vc_runtime_x64.msi c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.msi Signature verified successfully for vcruntime\Vc_runtime_x64.msi c:\757fe6efe9f065130d4838081911\NDP40-KB2468871.exe: Verifying signature for NDP40-KB2468871.exe c:\757fe6efe9f065130d4838081911\NDP40-KB2468871.exe Signature verified successfully for NDP40-KB2468871.exe Action complete Action: Performing actions on all Items Entering Function: BaseMspInstallerT >::PerformAction Action: Performing Install on MSP: 
c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp targetting Product: Microsoft Team Foundation Server 2010 - ENU Returning IDOK. INSTALLMESSAGE_ERROR [Error 1935.An error occurred during the installation of assembly 'Microsoft.TeamFoundation.WebAccess.WorkItemTracking,version="10.0.0.0",publicKeyToken="b03f5f7f11d50a3a",processorArchitecture="MSIL",fileVersion="10.0.40219.1",culture="neutral"'. Please refer to Help and Support for more information. HRESULT: 0x80070005. ] Returning IDOK. INSTALLMESSAGE_ERROR [Error 1712.One or more of the files required to restore your computer to its previous state could not be found. Restoration will not be possible.] Patch (c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp) Install failed on product (Microsoft Team Foundation Server 2010 - ENU). Msi Log: MSI returned 0x643 Entering Function: MspInstallerT >::Rollback Action Rollback changes PerformMsiOperation returned 0x643 PerformMsiOperation returned 0x643 OnFailureBehavior for this item is to Rollback. Action complete Final Result: Installation failed with error code: (0x80070643), "Fatal error during installation. " (Elapsed time: 0 00:14:09). Figure: Error log for Team Foundation Server 2010 install shows a failure As there is really no information in this log as to why the installation failed so I checked the event log on that box. Figure: There are hundreds of errors and it actually looks like there are more problems than a failed Service Pack I am going to just run it again and see if it was because the netbook was slow to catch on to the update. Hears hoping, but even if it fails, I would question the installation of Windows (PDC laptop original install) before I question the Service Pack Figure: Second run through was successful I don’t know if the laptop was just slow, or what… Did you get this error? If you did I will push this to the product team as a problem, but unless more people have this sort of error, I will just look to write this off as a corrupted install of Windows and reinstall.

    Read the article

  • Parameter _rollback_segment_count can cause trouble

    - by Mike Dietrich
    Just a few weeks ago we learned that setting the hidden underscore parameter _rollback_segment_count may cause trouble during upgrade. This parameter is used in very rare cases to keep the specified number of undo segments online under all circumstances. During an upgrade this may result in massive latch contention due to bug 14226559 - and there's a patch available as well. The recommendation is to unset it during the upgrade. I don't think many people will hit this, as I personally haven't seen databases with this underscore parameter in their init.ora or spfiles. So take this post more or less as a reminder for myself.

    Read the article

  • MS Visual Studio 2008 Certified with Oracle EBS 12 on MS Windows Server (32-bit)

    - by Steven Chan
    Microsoft Visual Studio 2008 is now certified with Oracle E-Business Suite Release 12 (12.0.4 or higher, 12.1.1 or higher) as a release maintenance tool. Previously, Microsoft Visual Studio 2005 was required for E-Business Suite Release 12. The editions of Visual Studio 2008 covered by this announcement are: Microsoft Visual Studio 2008 Standard, Microsoft Visual Studio 2008 Professional, Microsoft Visual Studio 2008 Team, and Microsoft Visual C++ 2008 Express (part of Visual Studio 2008 Express Edition). The operating systems supported by Visual Studio 2008 on this platform are: Microsoft Windows Server 2003 (for EBS 12.0.4, 12.1.1) and Microsoft Windows Server 2008 (for EBS 12.1.1 only).

    Read the article

  • The Incremental Architect’s Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx The functionality of programs is entered via Entry Points. So what we´re talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases: Analyzing the requirements to find the Entry Points and their signatures Designing the functionality to be executed when those Entry Points get triggered Implementing the functionality according to the design aka coding I presume, you´re familiar with phase 1 in some way. And I guess you´re proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. “Designing functionality? What´s that supposed to mean?” you might already have thought. Here´s my definition: To design functionality (or functional design for short) means thinking about… well, functions. You find a solution for what´s supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution that is, because those functions only exist in your head (or on paper) during this phase. But you may have guess that, because it´s “design” not “coding”. And here is, what functional design is not: It´s not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). Also I consider calling external APIs as logic. It´s equally basic. It´s what code needs to do in order to deliver some functionality or quality. Logic is what´s doing that needs to be done by software. Transformations are either done through expressions or API-calls. And then there is alternative control flow depending on the result of some expression. Basically it´s just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do). But calling your own function is not logic. It´s not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn´t it. What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher unterstandability, easier to keep code consistent). That´s no small feat, however. Evolvable code can hardly be overestimated. That´s why to me functional design is so important. It´s at the core of software development. To sum this up: Functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design an logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need back out through refactoring. Functional design on the other hand is bloodless without actual code. It´s just a theory with no experiments to prove it. But how to do functional design? An example of functional design Let´s assume a program to de-duplicate strings. 
The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a. And the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line C:\>deduplicate "a, b, a, c, d, b, e, c, a" a, b, c, d, e …or by clicking on a GUI button. This leads to the Entry Point function to get called. It´s the program´s main function in case of the batch version or a button click event handler in the GUI version. That´s the physical Entry Point so to speak. It´s inevitable. What then happens is a three step process: Transform the input data from the user into a request. Call the request handler. Transform the output of the request handler into a tangible result for the user. Or to phrase it a bit more generally: Accept input. Transform input into output. Present output. This does not mean any of these steps requires a lot of effort. Maybe it´s just one line of code to accomplish it. Nevertheless it´s a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP). Interestingly the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it´s still on a meta-level. The application to the domain at hand is easy, though: Accept string list from command line De-duplicate Present de-duplicated strings on standard output And this concrete list of processing steps can easily be transformed into code:static void Main(string[] args) { var input = Accept_string_list(args); var output = Deduplicate(input); Present_deduplicated_string_list(output); } Instead of a big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you´re not so sure how to go about with the de-duplication for example. Then just implement what´s easy right now, e.g.private static string Accept_string_list(string[] args) { return args[0]; } private static void Present_deduplicated_string_list( string[] output) { var line = string.Join(", ", output); Console.WriteLine(line); } Accept_string_list() contains logic in the form of an API-call. Present_deduplicated_string_list() contains logic in the form of an expression and an API-call. And then repeat the functional design for the remaining processing step. What´s left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design you´re left with just functions. So which functions could make up the de-duplication? Here´s a suggestion: De-duplicate Parse the input string into a true list of strings. Register each string in a dictionary/map/set. That way duplicates get cast away. Transform the data structure into a list of unique strings. Processing step 2 obviously was the core of the solution. That´s where real creativity was needed. That´s the core of the domain. 
But now after this refinement the implementation of each step is easy again:private static string[] Parse_string_list(string input) { return input.Split(',') .Select(s => s.Trim()) .ToArray(); } private static Dictionary<string,object> Compile_unique_strings(string[] strings) { return strings.Aggregate( new Dictionary<string, object>(), (agg, s) => { agg[s] = null; return agg; }); } private static string[] Serialize_unique_strings( Dictionary<string,object> dict) { return dict.Keys.ToArray(); } With these three additional functions Main() now looks like this:static void Main(string[] args) { var input = Accept_string_list(args); var strings = Parse_string_list(input); var dict = Compile_unique_strings(strings); var output = Serialize_unique_strings(dict); Present_deduplicated_string_list(output); } I think that´s very understandable code: just read it from top to bottom and you know how the solution to the problem works. It´s a mirror image of the initial design: Accept string list from command line Parse the input string into a true list of strings. Register each string in a dictionary/map/set. That way duplicates get cast away. Transform the data structure into a list of unique strings. Present de-duplicated strings on standard output You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too. So much for an initial concrete example. Now it´s time for some theory. Because there is method to this madness ;-) The above has only scratched the surface. Introducing Flow Design Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic, a process on the other hand consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1] In its simplest form a process can be written as a bullet point list of steps, e.g. Get data from user Output result to user Transform data Parse data Map result for output Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next etc.? Get data from user Parse data Transform data Map result for output Output result to user That´s great for a start into functional design. It´s better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a functionn is too large. But how do you know beforehand without investing some thinking? And how to do this thinking in a systematic fashion? My recommendation: For any given function you´re supposed to implement first do a functional design. Then, once you´re confident you know the processing steps - which are pretty small - refine and code them using TDD. You´ll see that´s much, much easier - and leads to cleaner code right away. 
For more information on this approach, which I call “Informed TDD”, read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I´d say. It´s more in line with the KISS (Keep It Simple, Stupid) principle than returning constants or other trivial stuff TDD development often is started with. So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always simple enough to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their “dimensionality”: text just has one dimension, it´s sequential. Alternatives and parallelism are hard to encode in text. In addition the functional design using numbered lists lacks data. It´s not visible what the input, output, and state of the processing steps are. That´s why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient. Visualizing processes The building block of the functional design notation is a functional unit. I mostly draw it like this: Something is done, it´s clear what goes in, it´s clear what comes out, and it´s clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That´s why I call this approach to functional design Flow Design. It´s about data flow instead of control flow. Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That´s what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give an overview. But what about all the details? As Robert C. Martin rightly said: “Programming is about detail”. Detail is a matter of code. Once you start coding the processing steps you designed you can worry about all the detail you want. Functional design does not eliminate all the nitty-gritty details. It just postpones tackling them. To me that´s also an example of the SRP. Functional design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later coding has the responsibility to implement the solution down to the last detail (i.e. statement, API-call). TDD unfortunately mixes both responsibilities. It´s just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that´s one reason why TDD has failed to deliver on its promise for many developers. Using functional units as building blocks of functional design, processes can be depicted very easily. Here´s the initial process for the example problem: For each processing step draw a functional unit and label it. Choose a verb or an “action phrase” as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step. 
Finally think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean “no data is flowing”, but nevertheless a signal is sent. A name like “list” or “strings” in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like “String” or “Customer” on the other hand signifies a data type. If you like, you can also combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent. Flows wired-up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output. A functional unit without an output is possible. It´s like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does not make much sense. When should it start to work? What´s the trigger? That´s why in the above process even the first processing step has an input. If you like, view such 1D-flows as pipelines. Data is flowing through them from left to right. But as you can see, it´s not always the same data. It gets transformed along its passage: (args) becomes a (list) which is turned into (strings). The Principle of Mutual Oblivion A very characteristic trait of flows put together from functional units is: no functional unit knows another one. They are all completely independent of each other. Functional units don´t know where their input is coming from (or even when it´s gonna arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don´t know where their output is going. They just produce it in their own time independent of other functional units. That means at least conceptually all functional units work in parallel. Functional units don´t know their “deployment context”. They know nothing about the overall flow they are placed in. They are just consuming input from some upstream, and producing output for some downstream. That makes functional units very easy to test. At least as long as they don´t depend on state or resources. I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as an overall context/purpose. They are just parts of a whole focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units. By building software in such a manner, functional design interestingly follows nature. Nature´s building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell “controlling” a muscle cell for example:[2] The nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is “attached to”. Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell “attached to” it. Saying “the nerve cell is controlling the muscle cell” thus only makes sense when viewing both from the outside. “Control” is a concept of the whole, not of its parts. Control is created by wiring up parts in a certain way. Both cells are mutually oblivious. 
Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going and where it´s coming from, neither cell cares about. Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism. How and why this happened in nature is a mystery. For our software, though, it´s clear: functional and quality requirements need to be fulfilled. So we as developers have to become “intelligent designers” of “software cells” which we put together to form a “software organism” which responds in satisfying ways to triggers from its environment. My bet is: If nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler “machines”? So my rule is: Wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units. That´s what Flow Design is about. In that way it´s even universal, I´d say. Its notation can also be applied to biology: Never mind labeling the functional units with nouns. That´s ok in Flow Design. You´ll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware. Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 “software cells” (functional units). Both, though, follow the same basic principle. Translating functional units into code Moving from functional design to code is no rocket science. In fact it´s straightforward. There are two simple rules: Translate an input port to a function. Translate an output port either to a return statement in that function or to a function pointer visible to that function. The simplest translation of a functional unit is a function. That´s what you saw in the above example. Functions are mutually oblivious. That´s why Functional Programming likes them so much. It makes them composable. Which is the reason nature works according to the PoMO. Let´s be clear about one thing: There is no dependency injection in nature. For all of an organism´s complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks. Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called “stop words”. In the above picture the optionality of the output is signified by the asterisk outside the brackets. It means: Any number of (word) data items can flow from the functional unit for each input data item. It might be none or one or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return, because a function always needs to return a value. 
So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3] void filter_stop_words( string word, Action<string> onNoStopWord) { if (...check if not a stop word...) onNoStopWord(word); } If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you´re right. Conceptually, though, it´s not an injection. Because the subroutine is not functionally dependent on the continuation. Firstly, continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly, the name of the formal parameter is chosen in a way as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only. Translating output ports into function pointers helps keep functional units mutually oblivious in cases where output is optional or produced asynchronously. Either pass the function pointer to the function upon call. Or make it global by putting it on the encompassing class. Then it´s called an event. In C# that´s even an explicit feature: class Filter { public void filter_stop_words( string word) { if (...check if not a stop word...) onNoStopWord(word); } public event Action<string> onNoStopWord; } When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it´s packed together with others into classes. You´ll see examples further down the Flow Design road. Another example of 1D functional design Let´s see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much cited solution including an extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is: string WordWrap(string text, int maxLineLength) {...} That´s not an Entry Point since we don´t see an application with an environment and users. Nevertheless it´s a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than the maximum line length it can be split into multiple parts, each fitting in a line. Flow Design Let´s start by brainstorming the process to accomplish the feat of reformatting the text. What´s needed? Words need to be assembled into lines Words need to be extracted from the input text The resulting lines need to be assembled into the output text Words too long to fit in a line need to be split Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let´s tackle “average words” (words not longer than a line). Here´s the Flow Design for this increment: The first three bullet points are turned into functional units with explicit data added. As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That´s good to see because thereby the domain is clearly represented in the design. The requirements are talking about words and lines and here they are. But note the asterisk! It´s not outside the brackets but inside. 
That means it´s not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It´s an implementation detail. Does any processing step require further refinement? I don´t think so. They all look pretty “atomic” to me. And if not… I can always backtrack and refine a process step using functional design later once I´ve gained more insight into a sub-problem. Implementation The implementation is straightforward, as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls: easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it´s easy to evolve as you´ll see. Flow Design - Increment 2 So far only texts consisting of “average words” are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it´s there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it´s time to add it to deliver the full scope. Fortunately Flow Design automatically leads to code following the Open Closed Principle (OCP). It´s easy to extend it - instead of changing well tested code. How´s that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead about where extensions might need to be made in the future. I just “phase in” the remaining processing step: Since neither Extract words nor Reformat knows of its environment, neither needs to be touched due to the “detour”. The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step. Implementation - Increment 2 A trivial implementation - just checking the assumption that this works - does not do anything to split long words. The input is just passed on: Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with. Compare this to Robert C. Martin´s solution:[4] How does this solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach. Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin´s. At least it seems so. Because his solution does not cover all the “word wrap situations” the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then… Is a difference in LOC that important as long as it´s in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it´s also clarity in design. But don´t take my word for it. Try Flow Design on larger problems and compare for yourself. 
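Since the implementations of the word wrap increments are shown only as screenshots in the original post, here is a rough reconstruction of what the increment-1 flow might look like in C#. The function names and the details of line assembly are guesses for illustration, not code from the original post:

using System;
using System.Collections.Generic;

// A hedged reconstruction of the increment-1 word wrap flow.
// Function names and the line-assembly details are assumptions.
static class WordWrapper
{
    public static string WordWrap(string text, int maxLineLength)
    {
        var words = Extract_words(text);
        var lines = Assemble_lines(words, maxLineLength);
        return Reformat(lines);
    }

    static string[] Extract_words(string text)
    {
        // split the single input line into words separated by spaces
        return text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
    }

    static string[] Assemble_lines(string[] words, int maxLineLength)
    {
        // increment 1: every word is assumed to fit into a line
        var lines = new List<string>();
        var currentLine = "";
        foreach (var word in words)
        {
            if (currentLine.Length == 0)
                currentLine = word;
            else if (currentLine.Length + 1 + word.Length <= maxLineLength)
                currentLine += " " + word;
            else
            {
                lines.Add(currentLine);
                currentLine = word;
            }
        }
        if (currentLine.Length > 0) lines.Add(currentLine);
        return lines.ToArray();
    }

    static string Reformat(string[] lines)
    {
        // assemble the resulting lines into the output text
        return string.Join("\n", lines);
    }
}

In a sketch like this, increment 2 would phase in a Split_long_words step between Extract_words and Assemble_lines; neither neighbour would have to change.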
What´s the easier, more straightforward way to clean code? And keep in mind: You ain´t seen all yet ;-) There´s more to Flow Design than described in this chapter. In closing I hope I was able to give you an impression of functional design that makes you hungry for more. To me it´s an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that´s necessary read my blog article here. For now let me point out to you - if you haven´t already noticed - that Flow Design is a general purpose declarative language. It´s “programming by intention” (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps you become more productive. All team members can participate in functional design. This goes beyond collective code ownership. We´re talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.   PS: If you like what you read, consider getting my ebook “The Incremental Architect´s Napkin”. It´s where I compile all the articles in this series for easier reading. [1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also, to me it seems the real world is full of state and side effects. So why give them such a bad image? That´s why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don´t put too much of it into a single processing step. [2] Image taken from www.physioweb.org [3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type, matching functions with the signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you´re using a Functional Programming language it´s of course a no brainer. [4] Taken from his blog post “The Craftsman 62, The Dark Path”.

    Read the article

  • How to avoid the Portlet Skin mismatch

    - by Martin Deh
    There are probably many ongoing debates about whether to use portlets or taskflows in a WebCenter custom portal application.  Usually the main battle on which side to take in these debates is centered around which technology enables better performance.  The good news is that both of my colleagues, Maiko Rocha and George Maggessy, have posted their respective views on this topic so I will not have to further the discussion.  However, if you do plan to use portlets in a WebCenter custom portal application, this post will help you avoid the "portlet skin mismatch" issue.   An example of the presence of the mismatch can be viewed in the application's log: The skin customsharedskin.desktop specified on the requestMap will be used even though the consumer's skin's styleSheetDocumentId on the requestMap does not match the local skin's styleSheetDocument's id. This will impact performance since the consumer and producer stylesheets cannot be shared. The producer styleclasses will not be compressed to avoid conflicts. A reason the ids do not match may be the jars are not identical on the producer and the consumer. For example, one might have trinidad-skins.xml's skin-additions in a jar file on the class path that the other does not have. Notice that due to the mismatch the portlet's CSS will not be able to be compressed, which will most likely impact performance in the portlet's consuming portal. The first part of the blog will define the portlet mismatch and cover some debugging tips that can help you solve the portlet mismatch issue.  Following that I will give a complete example of creating, using and sharing a shared skin in both a portlet producer and the consumer application. Portlet Mismatch Defined  In general, when you consume/render an ADF page (or task flow) using the ADF Portlet bridge, the portlet (producer) would try to use the skin of the consumer page - this is called skin-sharing. When the producer cannot match the consumer skin, the portlet would generate its own stylesheet and reference it from its markup - this is called mismatched-skin. This can happen because: The consumer and producer use different versions of ADF Faces, or The consumer has additional skin-additions that the producer doesn't have or vice-versa, or The producer does not have the consumer skin For case (1) & (2) above, the producer still uses the consumer skin ID to render its markup. For case (3), the producer would default to using the portlet skin. If there is a skin mismatch then there may be a performance hit because: The browser needs to fetch this extra stylesheet (though it should be cached unless expires caching is turned off) The generated portlet markup uses uncompressed styles resulting in a larger markup It is often not obvious when a skin mismatch occurs, unless you look for either of these indicators: The log messages in the producer log, for example: The skin blafplus-rich.desktop specified on the requestMap will not be used because the styleSheetDocument id on the requestMap does not match the local skin's styleSheetDocument's id. It could mean the jars are not identical. For example, one might have trinidad-skins.xml's skin-additions in a jar file on the class path that the other does not have. 
View the portlet markup inside the iframe; there should be a <link> tag to the portlet stylesheet resource like this (note the CSS is proxied through the consumer's resourceproxy): <link rel=\"stylesheet\" charset=\"UTF-8\" type=\"text/css\" href=\"http:.../resourceproxy/portletId...252525252Fadf%252525252Fstyles%252525252Fcache%252525252Fblafplus-rich-portlet-d1062g-en-ltr-gecko.css... Using an HTTP monitoring tool (e.g. Firebug, HttpWatch), you can see a request is made to the portlet stylesheet resource (see URL above) There are a number of reasons for mismatched-skin. For skins to match, the producer and consumer must match the following configurations: The ADF Faces version (different versions may have different style selectors) Style compression, which is defined in the web.xml (default value is false, i.e. compression is ON) Tonal styles or themes, also defined in the web.xml via context-params The same skin additions (jars with skin) are available for both producer and consumer.  Skin additions are defined in the trinidad-skins.xml, using the <skin-addition> tags. These are then aggregated from all the jar files in the classpath. If there's any jar that exists on the producer but not the consumer, or vice versa, you get a mismatch. Debugging Tips  Ensure the style compression and tonal styles/themes match on the consumer and producer, by looking at the web.xml documents for the consumer & producer applications. It is a bit more involved to determine if the jars match.  However, you can enable Trinidad logging to show which skin-addition it is processing.  To enable this feature, update the logging.xml log level of both the producer and consumer WLS to FINEST.  For example, in the case of the WebLogic server used by JDeveloper: $JDEV_USER_DIR/system<version number>/DefaultDomain/config/fmwconfig/servers/DefaultServer/logging.xml Add a new entry: <logger name="org.apache.myfaces.trinidadinternal.skin.SkinUtils" level="FINEST"/> Restart WebLogic.  Run the consumer page and you should see the following logging in both the consumer and producer log files. Any entries that don't match are the cause of the mismatch.  
The following is an example of what the log will produce with this setting: [SRC_CLASS: org.apache.myfaces.trinidadinternal.skin.SkinUtils] [APP: WebCenter] [SRC_METHOD: _getMetaInfSkinsNodeList] Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/announcement-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/calendar-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/custComps-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/forum-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/page-service-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/peopleconnections-kudos-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/peopleconnections-wall-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/portlet-client-adf-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/rtc-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/serviceframework-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/smarttag-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/spaces-service-skins.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.composer/3yo7j/WEB-INF/lib/custComps-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/adf-richclient-impl-11.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/dvt-faces.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/dvt-trinidad.jar!/META-INF/trinidad-skins.xml   The Complete Example The first step is to create the shared library.  The WebCenter documentation covering this is located here in section 15.7.  In addition, our ADF guru Frank Nimphius also covers this in hes blog.  Here are my steps (in JDeveloper) to create the skin that will be used as the shared library for both the portlet producer and consumer. Create a new Generic Application Give application name (i.e. MySharedSkin) Give a project name (i.e. MySkinProject) Leave Project Technologies blank (none selected), and click Finish Create the trinidad-skins.xml Right-click on the MySkinProject node in the Application Navigator and select "New" In the New Galley, click on "General", select "File" from the Items, and click OK In the Create File dialog, name the file trinidad-skins.xml, and (IMPORTANT) give the directory path to MySkinProject\src\META-INF In the trinidad-skins.xml, complete the skin entry.  
for example: <?xml version="1.0" encoding="windows-1252" ?> <skins xmlns="http://myfaces.apache.org/trinidad/skin">   <skin>     <id>mysharedskin.desktop</id>     <family>mysharedskin</family>     <extends>fusionFx-v1.desktop</extends>     <style-sheet-name>css/mysharedskin.css</style-sheet-name>   </skin> </skins> Create CSS file In the Application Navigator, right click on the META-INF folder (where the trinidad-skins.xml is located), and select "New" In the New Gallery, select Web-Tier-> HTML, CSS File from the the Items and click OK In the Create Cascading Style Sheet dialog, give the name (i.e. mysharedskin.css) Ensure that the Directory path is the under the META-INF (i.e. MySkinProject\src\META-INF\css) Once the new CSS opens in the editor, add in a style selector.  For example, this selector will style the background of a particular panelGroupLayout: af|panelGroupLayout.customPGL{     background-color:Fuchsia; } Create the MANIFEST.MF (used for deployment JAR) In the Application Navigator, right click on the META-INF folder (where the trinidad-skins.xml is located), and select "New" In the New Galley, click on "General", select "File" from the Items, and click OK In the Create File dialog, name the file MANIFEST.MF, and (IMPORTANT) ensure that the directory path is to MySkinProject\src\META-INF Complete the MANIFEST.MF, where the extension name is the shared library name Manifest-Version: 1.1 Created-By: Martin Deh Implementation-Title: mysharedskin Extension-Name: mysharedskin.lib.def Specification-Version: 1.0.1 Implementation-Version: 1.0.1 Implementation-Vendor: MartinDeh Create new Deployment Profile Right click on the MySkinProject node, and select New From the New Gallery, select General->Deployment Profiles, Shared Library JAR File from Items, and click OK In the Create Deployment Profile dialog, give name (i.e.mysharedskinlib) and click OK In the Edit JAR Deployment dialog, un-check Include Manifest File option  Select Project Output->Contributors, and check Project Source Path Select Project Output->Filters, ensure that all items under the META-INF folder are selected Click OK to exit the Project Properties dialog Deploy the shared lib to WebLogic (start server before steps) Right click on MySkin Project and select Deploy For this example, I will deploy to JDeverloper WLS In the Deploy dialog, select Deploy to Weblogic Application Server and click Next Choose IntegratedWebLogicServer and click Next Select Deploy to selected instances in the domain radio, select Default Server (note: server must be already started), and ensure Deploy as a shared Library radio is selected Click Finish Open the WebLogic console to see the deployed shared library The following are the steps to create a simple test Portlet Create a new WebCenter Portal - Portlet Producer Application In the Create Portlet Producer dialog, select default settings and click Finish Right click on the Portlets node and select New IIn the New Gallery, select Web-Tier->Portlets, Standards-based Java Portlet (JSR 286) and click OK In the General Portlet information dialog, give portlet name (i.e. 
MyPortlet) and click Next 2 times, stopping at Step 3 In the Content Types, select the "view" node, in the Implementation Method, select the Generate ADF-Faces JSPX radio and click Finish Once the portlet code is generated, open the view.jspx in the source editor Based on the simple CSS entry, which sets the background color of a panelGroupLayout, replace the <af:form/> tag with the example code <af:form>         <af:panelGroupLayout id="pgl1" styleClass="customPGL">           <af:outputText value="background from shared lib skin" id="ot1"/>         </af:panelGroupLayout>  </af:form> Since this portlet is to use the shared library skin, in the generated trinidad-config.xml, remove both the skin-family tag and the skin-version tag In the Application Resources view, under Descriptors->META-INF, double-click to open the weblogic-application.xml Add a library reference to the shared skin library (note: the library-name must match the extension-name declared in the MANIFEST.MF):  <library-ref>     <library-name>mysharedskin.lib.def</library-name>  </library-ref> Notice that a reference to oracle.webcenter.skin exists.  This is important if this portlet is going to be consumed by a WebCenter Portal application.  If this tag is not present, the portlet skin mismatch will happen.  Configure the portlet for deployment Create Portlet deployment WAR Right click on the Portlets node and select New In the New Gallery, select Deployment Profiles, WAR file from Items and click OK In the Create Deployment Profile dialog, give name (i.e. myportletwar), click OK Keep all of the defaults, however, remember the Context Root entry (i.e. MyPortlet4SharedLib-Portlets-context-root, this will be needed to obtain the producer WSDL URL) Click OK, then OK again to exit from the Properties dialog Since the weblogic-application.xml has to be included in the deployment, the portlet must be deployed as a WAR, within an EAR In the Application dropdown, select Deploy->New Deployment Profile... By default EAR File has been selected, click OK Give Deployment Profile (EAR) a name (i.e. MyPortletProducer) and click OK In the Properties dialog, select Application Assembly and ensure that the myportletwar is checked Keep all of the other defaults and click OK For this demo, un-check the Auto Generate ..., and all of the Security Deployment Options, click OK Save All In the Application dropdown, select Deploy->MyPortletProducer In the Deployment Action, select Deploy to Application Server, click Next Choose IntegratedWebLogicServer and click Next Select Deploy to selected instances in the domain radio, select Default Server (note: server must be already started), and ensure Deploy as a standalone Application radio is selected The select deployment type (identifying the deployment as a JSR 286 portlet) dialog appears.  Keep default radio "Yes" selection and click OK Open the WebLogic console to see the deployed Portlet The last step is to create the test portlet consuming application.  This will be done using the OOTB WebCenter Portal - Framework Application.  Create the Portlet Producer Connection In the JDeveloper Deployment log, copy the URL of the portlet deployment (i.e. http://localhost:7101/MyPortlet4SharedLib-Portlets-context-root Open a browser and paste in the URL.  The Portlet information page should appear.  Click on the WSRP v2 WSDL link Copy the URL from the browser (i.e. 
http://localhost:7101/MyPortlet4SharedLib-Portlets-context-root/portlets/wsrp2?WSDL) In the Application Resources view, right click on the Connections folder and select New Connection->WSRP Connection Give the producer a name or accept the default, click Next Enter (paste in) the WSDL URL, click Next If connection to Portlet is succesful, Step 3 (Specify Additional ...) should appear.  Accept defaults and click Finish Add the portlet to a test page Open the home.jspx.  Note in the visual editor, the orange dashed border, which identifies the panelCustomizable tag. From the Application Resources. select the MyPortlet portlet node, and drag and drop the node into the panelCustomizable section.  A Confirm Portlet Type dialog appears, keep default ADF Rich Portlet and click OK Configure the portlet to use the shared skin library Open the weblogic-application.xml and add the library-ref entry (mysharedskin.lib.def) for the shared skin library.  See create portlet example above for the steps Since by default, the custom portal using a managed bean to (dynamically) determine the skin family, the default trinidad-config.xml will need to be altered Open the trinidad-config.xml in the editor and replace the EL (preferenceBean) for the skin-family tag, with mysharedskin (this is the skin-family named defined in the trinidad-skins.xml) Remove the skin-version tag Right click on the index.html to test the application   Notice that the JDeveloper log view does not have any reporting of a skin mismatch.  In addition, since I have configured the extra logging outlined in debugging section above, I can see the processed skin jar in both the producer and consumer logs: <SkinUtils> <_getMetaInfSkinsNodeList> Processing skin URL:zip:/JDeveloper/system11.1.1.6.38.61.92/DefaultDomain/servers/DefaultServer/upload/mysharedskin.lib.def/[email protected]/app/mysharedskinlib.jar!/META-INF/trinidad-skins.xml 

    Read the article

  • December release of Microsoft All-In-One Code Framework is available now.

    - by Jialiang
    The code samples in Microsoft All-In-One Code Framework are updated on 2010-12-13. Download address: http://1code.codeplex.com/releases/view/57459#DownloadId=185534 Updated code sample index categorized by technologies: http://1code.codeplex.com/wikipage?title=All-In-One%20Code%20Framework%20Sample%20Catalog (it also allows you to download individual code samples instead of the entire All-In-One Code Framework sample package.) If it’s the first time that you hear about Microsoft All-In-One Code Framework, please watch the introduction video on YouTube http://www.youtube.com/watch?v=cO5Li3APU58, or read the introduction on our homepage http://1code.codeplex.com/, and this Port25 article http://port25.technet.com/archive/2010/01/18/the-all-in-one-code-framework.aspx.  -------------- New ASP.NET Code Samples VBASPNETAJAXWebChat and CSASPNETAJAXWebChat Most of you have some experience in chatting with friends on the web. So you may want to know how to make a web chat application; it seems quite complicated. But ASP.NET gives you the power to build a chat room easily. In this code sample, we will construct our own web chat room with the amazing AJAX feature. The principle is relatively simple. As we all know, a basic chat application needs four basic controls: one List control to show the chat room members, one List control to show the message list, one TextBox control to input messages and one Button to send the message. The user first inputs his message in the textbox and then presses the Send button, which sends the message to the server. The message list is updated every 2 seconds to get the newest message list in the chat room from the server. We need to know that it is hard to make an AJAX web chat application behave like a Windows Forms application, because we cannot keep the connection open after a web request has ended. So a lot of events which communicate between the client side and the server side cannot be realized. The common workaround is to make web requests every few seconds to check whether the server side has been updated. Another technique called COMET makes such events possible, but it is different from AJAX and will not be discussed in detail in this KB. For more details about COMET, see the Reference.   CSASPNETCurrentOnlineUserList and VBASPNETCurrentOnlineUserList This sample demos a system that needs to display a list of current online users' information. As a matter of fact, the Membership.GetNumberOfUsersOnline method can get the number of online users, and there is a convenient approach to check whether a user is online by using the Membership.GetUser(string userName).IsOnline property. However, many ASP.NET projects are not using membership, so in this case the sample shows how to display a list of current online users' information without using a membership provider. It is not difficult to check whether a user is online by using session state. Many projects tend to use the "Session_End" event to mark a user as "Offline"; however, it may not be a good idea because it can't detect the user status accurately. In addition, the "Session_End" event is only available in the "InProc" session mode. If you are storing session states in the State Server or SQL Server, the "Session_End" event will never fire. To handle this issue, we need to save the user's online status to a global DataTable or database. 
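To make this idea concrete before looking at the sample itself, here is a minimal, hedged sketch of such a tracker. The class name, the two-minute timeout and the use of a Dictionary are illustrative assumptions only; the actual sample uses a global DataTable, as described below:

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative only: track each user's last-active timestamp in application-wide
// state and treat anyone silent for longer than the timeout as offline.
public static class OnlineUserTracker
{
    static readonly Dictionary<string, DateTime> lastSeen = new Dictionary<string, DateTime>();
    static readonly object syncRoot = new object();
    static readonly TimeSpan timeout = TimeSpan.FromMinutes(2);   // assumed value

    // Call this from the login page and from every XmlHttpRequest "heartbeat".
    public static void Touch(string userName)
    {
        lock (syncRoot)
        {
            lastSeen[userName] = DateTime.UtcNow;
        }
    }

    // Purge stale entries, then return the users still considered online.
    public static List<string> GetOnlineUsers()
    {
        lock (syncRoot)
        {
            var cutoff = DateTime.UtcNow - timeout;
            var stale = lastSeen.Where(p => p.Value < cutoff).Select(p => p.Key).ToList();
            foreach (var user in stale)
            {
                lastSeen.Remove(user);
            }
            return lastSeen.Keys.ToList();
        }
    }
}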
In the sample application, define a global DataTable to store the information of online users. Use XmlHttpRequest in the pages to update and check the user's last active time at intervals, and also retrieve information on how many users are still online. The sample project can automatically delete offline users' information from the global DataTable by checking users’ last active time. A step-by-step guide illustrating how to display a list of current online users' information without using a membership provider: 1. Login page. Let the user sign in and add the current user’s information to the global DataTable, while initializing the global DataTable which is used to store information about current online users. 2. Current online user list page. Use XmlHttpRequest in this page to update and check the user's last active time at intervals, and also retrieve information on how many users are still online. 3. If the user closes the page without clicking the sign-out link button, the sample project can automatically mark the user as offline and delete the offline user's information from the global DataTable by checking users’ last active time. Then the current online user list will be like this:   CSASPNETIPtoLocation This sample demonstrates how to find the geographical location from an IP address. As we know, it is not hard for us to get the IP address of visitors via the Request.ServerVariables collection, but it is really difficult for us to know where they come from. To achieve this feature, the sample uses a free third party web service from http://freegeoip.appspot.com/, which returns the information about an IP address we send to the server in the format of XML, JSON or CSV. It makes all things easier.   CSASPNETBackgroundWorker Sometimes we do an operation which needs a long time to complete. It will stop the response and the page is blank until the operation finishes. In this case, we want the operation to run in the background, and in the page, we want to display the progress of the running operation. That way, the user can know the operation is running and can know the progress. CSASPNETInheritingFromTreeNode In the Windows Forms TreeView, each tree node has a property called "Tag" which can be used to store a custom object. Many customers want to implement the same tag feature in the ASP.NET TreeView. This project creates a custom TreeView control named "CustomTreeView" to achieve this goal. CSASPNETRemoteUploadAndDownload and VBASPNETRemoteUploadAndDownload This code sample was created in response to a code sample request in our new code sample request function for customers. The code samples demonstrate uploading files to and downloading files from a remote HTTP or FTP server. In .NET Framework 2.0 and higher versions, there are some lightweight class libraries which support HTTP and FTP protocol transmission. By using these classes, we can achieve this programming requirement.   CSASPNETImageEditUpload and VBASPNETImageEditUpload This demo shows how to insert, edit and update a common image of type "jpg", "png", "gif" or "bmp". We mainly use two different SqlDataSources with the same database to bind to the GridView and FormView in order to establish the “cascading” effect. Besides, we apply our self-made ImageHandler to encode or decode images of different types, and use the context to output the stream of images. 
We will explicitly assign the binary streams of images through the event of “FormView_ItemInserting” or “Form_ItemUpdating” to synchronize the stream both in what we can see on an aspx page as well as in what’s really stored in the database.   WebBrowser Control, Network and other Windows General New Code Samples   CSWebBrowserSuppressError and VBWebBrowserSuppressError The sample demonstrates how to make WebBrowser suppress errors, such as script error, navigation error and so on.   CSWebBrowserWithProxy and VBWebBrowserWithProxy The sample demonstrates how to make WebBrowser use a proxy server.   CSWebDownloadProgress and VBWebDownloadProgress The sample demonstrates how to show progress during the download. It also supplies the features to Start, Pause, Resume and Cancel a download.   CppSetDesktopWallpaper, CSSetDesktopWallpaper and VBSetDesktopWallpaper This code sample application allows you select an image, view a preview (resized smaller to fit if necessary), select a display style among Tile, Center, Stretch, Fit (Windows 7 and later) and Fill (Windows 7 and later), and set the image as the Desktop wallpaper. CSWindowsServiceRecoveryProperty and VBWindowsServiceRecoveryProperty CSWindowsServiceRecoveryProperty example demonstrates how to use ChangeServiceConfig2 to configure the service "Recovery" properties in C#. This example operates all the options you can see on the service "Recovery" tab, including setting the "Enable actions for stops with errors" option in Windows Vista and later operating systems. This example also include how to grant the shut down privilege to the process, so that we can configure a special option in the "Recovery" tab - "Restart Computer Options...".   New Office Development Code Samples   CSOneNoteRibbonAddIn and VBOneNoteRibbonAddIn The code sample demonstrates a OneNote 2010 COM add-in that implements IDTExtensibility2. The add-in also supports customizing the Ribbon by implementing the IRibbonExtensibility interface. It is a skeleton OneNote add-in that developers can extend it to implement more functions. The code sample was requested by a customer in our code sample request service. We expect that this could help developers in the community.   New Windows Shell Code Samples   CppShellExtPreviewHandler, CSShellExtPreviewHandler and VBShellExtPreviewHandler In the past two months, we released the code samples of Windows Context Menu Handler, Infotip Handler, and Thumbnail Handler. This is the fourth part of the shell extension series: Preview Handler. The code samples demo the C++, C# and VB.NET implementation of a preview handler for a new file type registered with the .recipe extension. Preview handlers are called when an item is selected to show a lightweight, rich, read-only preview of the file's contents in the view's reading pane. This is done without launching the file's associated application. Windows Vista and later operating systems support preview handlers. To be a valid preview handler, several interfaces must be implemented. This includes IPreviewHandler (shobjidl.h); IInitializeWithFile, IInitializeWithStream, or IInitializeWithItem (propsys.h); IObjectWithSite (ocidl.h); and IOleWindow (oleidl.h). There are also optional interfaces, such as IPreviewHandlerVisuals (shobjidl.h), that a preview handler can implement to provide extended support. Windows API Code Pack for Microsoft .NET Framework makes the implementation of these interfaces very easy in .NET. The example preview handler provides previews for .recipe files. 
The .recipe file type is simply an XML file registered as a unique file name extension. It includes the title of the recipe, its author, difficulty, preparation time, cook time, nutrition information, comments, an embedded preview image, and so on. The preview handler extracts the title, comments, and the embedded image, and display them in a preview window.   In response to many customers' request, we added setup projects in every shell extension samples in this release. Those setup projects allow you to deploy the shell extensions to your end users' machines. ---------- Download address: http://1code.codeplex.com/releases/view/57459#DownloadId=185534 Updated code sample index categorized by technologies: http://1code.codeplex.com/wikipage?title=All-In-One%20Code%20Framework%20Sample%20Catalog (it also allows you to download individual code samples instead of the entire All-In-One Code Framework sample package.) If you have any feedback for us, please email: [email protected]. We look forward to your comments.

    Read the article

  • How about a new platform for your next API&hellip; a CMS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/05/22/how-about-a-new-platform-for-your-next-apihellip-a.aspx Say what? I’m seeing a type of API emerge which serves static or long-lived resources, which are mostly read-only and have a controlled process to update the data that gets served. Think of something like an app configuration API, where you want a central location for changeable settings. You could use this server side to store database connection strings and keep all your instances in sync, or it could be used client side to push changes out to all users (and potentially drive A/B or MVT testing). That’s a good candidate for a RESTful API which makes proper use of HTTP expiration and validation caching to minimise traffic, but really you want a front end UI where you can edit the current config that the API returns and publish your changes. Sounds like a Content Management System would be a good fit? I’ve been looking at that and it’s a great fit for this scenario. You get a lot of what you need out of the box, the amount of custom code you need to write is minimal, and you get a whole lot of extra stuff from using a CMS which is very useful, but probably not something you’d build if you had to put together a quick UI over your API content (like a publish workflow, fine-grained security and an audit trail). You typically use a CMS for HTML resources, but it’s simple to expose JSON instead – or to do content negotiation to support both, so you can open a resource in a browser and see a nice visual representation, or request it with Accept=application/json and get the same content rendered as JSON for the app to use. Enter Umbraco Umbraco is an open source .NET CMS that’s been around for a while. It has very good adoption, a lively community and a good release cycle. It’s easy to use, has all the functionality you need for a CMS-driven API, and it’s scalable (although you won’t necessarily put much scale on the CMS layer). In the rest of this post, I’ll build out a simple app config API using Umbraco. We’ll define the structure of the configuration resource by creating a new Document Type and setting custom properties; then we’ll build a very simple Razor template to return configuration documents as JSON; then create a resource and see how it looks. And we’ll look at how you could build this into a wider solution. If you want to try this for yourself, it’s ultra easy – there’s an Umbraco image in the Azure Website gallery, so all you need to do is create a new Website, select Umbraco from the image and complete the installation. It will create a SQL Azure database to store all the content, as well as a Website instance for editing and accessing content. They’re standard Azure resources, so you can scale them as you need. The default install creates a starter site for some HTML content, which you can use to learn your way around (or just delete). 1. Create Configuration Document Type In Umbraco you manage content by creating and modifying documents, and every document has a known type, defining what properties it holds. We’ll create a new Document Type to describe some basic config settings. In the Settings section from the left navigation (spanner icon), expand Document Types and Master, hit the ellipsis and select to create a new Document Type: This will base your new type off the Master type, which gives you some existing properties that we’ll use – like the Page Title which will be the resource URL. 
In the Generic Properties tab for the new Document Type, you set the properties you’ll be able to edit and return for the resource: Here I’ve added a text string where I’ll set a default cache lifespan, an image which I can use for a banner display, and a date which could show the user when the next release is due. This is the sort of thing that sits nicely in an app config API. It’s likely to change during the life of the product, but not very often, so it’s good to have a centralised place where you can make and publish changes easily and safely. It also enables A/B and MVT testing, as you can change the response each client gets based on your set logic, and their apps will behave differently without needing a release. 2. Define the response template Now we’ve defined the structure of the resource (as a document), in Umbraco we can define a C# Razor template to say how that resource gets rendered to the client. If you only want to provide JSON, it’s easy to render the content of the document by building each property in the response (Umbraco uses dynamic objects so you can specify document properties as object properties), or you can support content negotiation with very little effort. Here’s a template to render the document as HTML or JSON depending on the Accept header, using JSON.NET for the API rendering: @inherits Umbraco.Web.Mvc.UmbracoTemplatePage @using Newtonsoft.Json @{ Layout = null; } @if(UmbracoContext.HttpContext.Request.Headers["accept"] != null && UmbracoContext.HttpContext.Request.Headers["accept"] == "application/json") { Response.ContentType = "application/json"; @Html.Raw(JsonConvert.SerializeObject(new { cacheLifespan = CurrentPage.cacheLifespan, bannerImageUrl = CurrentPage.bannerImage, nextReleaseDate = CurrentPage.nextReleaseDate })) } else { <h1>App configuration</h1> <p>Cache lifespan: <b>@CurrentPage.cacheLifespan</b></p> <p>Banner Image: </p> <img src="@CurrentPage.bannerImage"> <p>Next Release Date: <b>@CurrentPage.nextReleaseDate</b></p> } That’s a rough-and-ready example of what you can do. You could make it completely generic and just render all the document’s properties as JSON, but having a specific template for each resource gives you control over what gets sent out. And the templates are evaluated at run-time, so if you need to change the output – or extend it, say to add caching response headers – you just edit the template and save, and the next client request gets rendered from the new template. No code to build and ship. 3. Create the content With your document type created, in the Content pane you can create a new instance of that document, where Umbraco gives you a nice UI to input values for the properties we set up on the Document Type: Here I’ve set the cache lifespan to an xs:duration value, uploaded an image for the banner and specified a release date. Each property gets the appropriate input control – text box, file upload and date picker. At the top of the page is the name of the resource – myapp in this example. That specifies the URL for the resource, so if I had a DNS entry pointing to my Umbraco instance, I could access the config with a URL like http://static.x.y.z.com/config/myapp. The setup is all done now, so when we publish this resource it’ll be available to access. 4. Access the resource Now if you open that URL in the browser, you’ll see the HTML version rendered: - complete with the image and formatted date. 
    Umbraco lets you save changes and preview them before publishing, so the HTML view could be a good way of showing editors their changes in a usable view, before they confirm them. If you browse the same URL from a REST client, specifying the Accept=application/json request header, you get this response: That’s the exact same resource, with a managed UI to publish it, being accessed as HTML or JSON with a tiny amount of effort. 5. The wider landscape If you have fairly stable content to expose as an API, I think this approach is really worth considering. Umbraco scales very nicely, but in a typical solution you probably wouldn’t need it to. When you have additional requirements, like logging API access requests – but doing it out-of-band so clients aren’t impacted – you can put a very thin API layer on top of Umbraco, and cache the CMS responses in your API layer: Here the API does a passthrough to the CMS, so the CMS still controls the content, but the API layer caches the response. If the response is cached for 1 minute, then Umbraco only needs to handle 1 request per minute (multiplied by the number of API instances), so if you need to support thousands of requests per second, you’re scaling a thin, simple API layer rather than having to scale the more complex CMS infrastructure (including the database). This diagram also shows an approach to logging, by asynchronously publishing a message to a queue (Redis in this case), which can be picked up later and persisted by a different process. Does it work? Beautifully. Using Azure, I spiked the solution above (including the Redis logging framework which I’ll blog about later) in half a day. That included setting up different roles in Umbraco to demonstrate a managed workflow for publishing changes, and a couple of document types representing different resources. Is it maintainable? We have three moving parts, which are all managed resources in Azure – an Azure Website for Umbraco which may need a couple of instances for HA (or may not, depending on how long the content can be cached), a message queue (Redis is in preview in Azure, but you can easily use Service Bus Queues if performance is less of a concern), and the Web Role for the API. Two of the components are off-the-shelf, from open source projects, and the only custom code is the API which is very simple. Does it scale? Pretty nicely. With a single Umbraco instance running as an Azure Website, and with 4x instances for my API layer (Standard sized Web Roles), I got just under 4,000 requests per second served reliably, with a Worker Role in the background saving the access logs. So we had a nice UI to publish app config changes, with a friendly Web preview and a publishing workflow, capable of supporting 14 million requests in an hour, with less than a day’s effort. Worth considering if you’re publishing long-lived resources through your API.
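    To make that thin passthrough layer a little more concrete, here is a rough sketch of what it could look like as an ASP.NET Web API controller. This is not from the original post: the CMS base URL, the route, and the one-minute cache window are illustrative assumptions, and the out-of-band Redis logging is left out.

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Runtime.Caching;
    using System.Text;
    using System.Threading.Tasks;
    using System.Web.Http;

    // Thin passthrough API over the Umbraco-hosted config resources.
    // Only one request per cache window actually reaches the CMS.
    public class ConfigController : ApiController
    {
        private const string CmsBaseUrl = "http://my-umbraco-site/config/"; // hypothetical CMS address
        private static readonly TimeSpan CacheFor = TimeSpan.FromMinutes(1);
        private static readonly MemoryCache Cache = MemoryCache.Default;
        private static readonly HttpClient CmsClient = CreateClient();

        private static HttpClient CreateClient()
        {
            var client = new HttpClient();
            client.DefaultRequestHeaders.Accept.Add(
                new MediaTypeWithQualityHeaderValue("application/json"));
            return client;
        }

        // GET api/config/{id} with the default Web API route, e.g. api/config/myapp
        public async Task<HttpResponseMessage> Get(string id)
        {
            var cached = Cache.Get(id) as string;
            if (cached == null)
            {
                // Cache miss: fetch the JSON rendering from Umbraco and keep it for a minute.
                cached = await CmsClient.GetStringAsync(CmsBaseUrl + id);
                Cache.Set(id, cached, DateTimeOffset.UtcNow.Add(CacheFor));
            }

            return new HttpResponseMessage
            {
                Content = new StringContent(cached, Encoding.UTF8, "application/json")
            };
        }
    }

    A single shared HttpClient and a per-resource cache key keep the layer stateless and cheap to scale out; Umbraco stays the source of truth and the API instances just absorb the read traffic.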

    Read the article

  • New Fusion Community, Community Name Changes and Upcoming Webcasts

    - by cwarticki
    Check out the new MOS Customer Relationship Management (CRM) community. This community has been featured in marketing events and is one of the more active communities so far. Support has also renamed the Fusion HCM community (now Human Capital Management (HCM)) and the Technical – FA community (now Fusion Applications Technology) in order to standardize our naming convention. Finally, we have two upcoming webcasts: 18-OCT-2012: Fusion Apps Security - User & Role Management using Oracle Identity Manager, featured in our Fusion Applications Technology community; 01-NOV-2012: Fusion Apps Security – Troubleshoot Data Role Issues, featured in our Fusion Applications Technology community. Check out our new Community. Attend our upcoming webcasts. Participate. Engage. Contribute. ~Chris

    Read the article

  • Oracle UCM Integration with WebCenter

    - by john.brunswick
    Portal deployments always contain some level of content that requires management. Like peanut butter and jelly, the yin and yang, they are inseparable. Unfortunately, unlike peanut butter and jelly, content and portals usually require that an extensive amount of work be completed to create a seamless experience for end users who will be serviced by the portal, as well as for users who will be contributing and managing the content. With WebCenter Suite, Oracle has understood this need and addressed it by including Universal Content Management (UCM, formerly Stellent) licensing to allow content to be delivered into the portal from a mature, class-leading content management technology. To unlock the most value from this content technology, WebCenter portal technology can leverage a series of integration strategies available through its open standards support, as well as a series of native components to enable content consumption from UCM. This has been done to enable IT teams to reduce solution deployment time and provide quick wins to their business stakeholders. The ongoing cost of ownership for the solution is also greatly reduced through these various integrations. Within this post we will explore various ways in which the content can be contributed through out-of-the-box interfaces, displayed natively within the portal (configuration), and exposed programmatically (development). The information below showcases how to quickly take advantage of WebCenter's marriage of content and portal technologies, then leverage various programmatic integrations available with UCM.
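    As a rough illustration of the "exposed programmatically" option (not part of the original post), content managed in UCM/Content Server can also be pulled over HTTP by any client stack using its service URL interface. The host, credentials, service name and parameters below are assumptions to verify against your UCM version's services reference; the sketch is in C# to match the other examples in these results.

    using System;
    using System.Net;

    class UcmContentFetch
    {
        static void Main()
        {
            // Hypothetical Content Server endpoint and content ID – adjust to your instance.
            const string ucmHost = "http://ucm.example.com/cs/idcplg";
            const string contentId = "MY_PORTAL_FRAGMENT";

            // GET_FILE for the latest released revision; check the exact service and
            // parameter names against your UCM documentation before relying on them.
            string url = ucmHost +
                "?IdcService=GET_FILE" +
                "&dDocName=" + Uri.EscapeDataString(contentId) +
                "&RevisionSelectionMethod=LatestReleased";

            using (var client = new WebClient())
            {
                client.Credentials = new NetworkCredential("portal_svc", "password"); // placeholder credentials
                string content = client.DownloadString(url);
                Console.WriteLine(content.Substring(0, Math.Min(200, content.Length)));
            }
        }
    }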

    Read the article

  • I’m sorry RPGs, it’s not you, it’s me: The birth of my game idea

    - by George Clingerman
    One of the things I’ve had to give up in order to have some development time at night is gaming. It’s something I refused to admit for years but I’ve just had to face the facts. I’m no longer a gamer. I just don’t have hours and hours of free time to pour into gaming and when I do have hours and hours of free time I want to pour them into game development. That doesn’t mean I don’t game at all! I play games pretty much every day. It just means I’ve moved more into the casual game realm. It’s all I have time for when juggling priorities in my life. That means that games like Gears of War 2 sit shrink wrapped on my shelf and although I popped Dragon Age into my Xbox 360 one time, I barely made it through the opening sequence and haven’t had time to sit down and play again. Instead I’m playing short games like Jamestown, Atom Zombie Smasher, Fortix or if I have time to jump in and play a few rounds maybe some Monday Night Combat or Team Fortress 2. These are games I can instantly get into and play for just a short period of time and then walk away. Breath of Death VII saved my life: Back in the day (way, way back in the day) I used to be a pretty big RPG fan. Not big by a lot of RPG gamers' standards (most of the RPGs RPG fans about I’ve never heard of) but I used to LOVE to play them on the NES, SNES and Genesis and considered that my genre. Final Fantasy, Shining in the Darkness, Bard’s Tale, Faxanadu, Shadowrun, Ultima, Dragon Warrior, Chrono Trigger, Phantasy Star, Shining Force and well the list could go on but those are the ones I remember off the top of my head. I loved playing RPGs and they were my games of choice. After my first son was born (this was just about 12 years ago), I tried to continue playing RPGs and purchased games like Baldur’s Gate I & II, Neverwinter Nights, Fable, then a few of the Final Fantasy’s then Kingdom Hearts. I kept buying these games and then only playing for about fifteen minutes and never getting back to them. I still loved RPGs but they just no longer fit into my life (I still haven’t accepted that since I still purchased Dragon Age II for some reason and convinced myself I’d find the time). Adding three more sons to the mix (that’s 4 total) didn’t help much to finding more RPG time (except for Breath of Death VII and other XBLIG RPG titles, thanks guys!) All work and no RPG: A few months ago as I was sitting thinking about the lack of RPGs in my life and talking to my wife about why I wish RPGs were different and easier for a dad like me to get into. She seemed like she was listening, so I started listing all the things that made them impossible for me to play. Here’s a short list I came up with. They take 15 billion hours to complete I have a few minutes at a time I can grab to play them if I want to have time to code. At that rate it would take me 9 trillion years to beat just one RPG. There’s such long spans of times between when I can play them I forget what I was even doing so I have to spend most of the playtime I have just figuring that out and then my play time is over. Repeat. I’ll never finish one and since it takes so long to get to the fun part in an RPG, I’m never having fun. RPGs aren’t fun if you don’t have hours to play them at a time. As you can see based on my science and math, RPGs aren’t fun for me any more. From there my brain started toying around with ideas of RPGs that would work for me. They would have to be a short RPG, you know one you could beat in a single play session. A dad sized play session. 
I started thinking, wouldn’t it be awesome if there was a fifteen minute RPG? That got me laughing and I took that as a good sign that it sounded fun and so I thought about it a little more. I immediately discarded the idea of doing a real RPG. I’m sure a short RPG like that could be done but it wasn’t the vibe that I had in my head. No this was going to be something that just had the core essence of an RPG. In reality what I’d be making would be more of an arcade style game. One with high scores and lots of crazy action on the screen. And that’s when it hit me. It would be a speed run RPG. That’s the basics of the game I’m working on.   The Elevator Pitch: It’s a 2D top down RPG themed arcade game focused on speed. It sounds like an RPG, smells like an RPG but it’s merely emulating an RPG. The game is focused on fun and mayhem in RPG form with players leveling up in seconds instead of hours and rushing to finish quests as quickly as possible because they’ve only got fifteen minutes before EVIL overtakes the world. If the player takes longer than fifteen minutes, it’s game over man. One to four player co-operative play to really see just how fast players can level up and beat the game. Gamers will compete on leaderboards for bragging rights for fastest 1, 2, 3, and 4 player speed runs, lowest leveled characters to beat the game, highest leveled characters to beat the game and so on. Times will be tracked for everything from how long a player sat distributing stats, equipping items, talking to NPCs to running around the level. These stats will be shown at the end of each quest/level so the players can work on improving their speed run for that part of the game next time around. It’s the perfect RPG for those of us who only have fifteen minutes of game time! Where I’m at: I’m still at the prototyping stage attempting to but all the basic framework pieces in place that will at minimum give me one level to rush through. I’ve been working on this prototype for about a month now though so I’m going to have to step it up a bit or I’m not going to get finished in time (remember I’ve only got 85 days left!) Lots of the game code is in place (although pretty sloppy) but I still can’t play through that first quest/level just yet. That’s my goal to finish up by the end of next Sunday (3/25/2012). You can all hold me to that and cheer me on or heckle me throughout the week. Either way that should help me stay a bit more motivated and focused. In my head this feels like it’s going to be a fun game so I’m looking forward to seeing how it actually plays!
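    Purely as an illustration of the speed-run bookkeeping described in the pitch (not code from the post – the class and member names are invented), the per-activity timing and the fifteen-minute limit could be tracked with something as small as this:

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    // Tracks total run time against the 15-minute limit and time spent per activity
    // (distributing stats, equipping items, NPC chat, running around), so the
    // breakdown can be shown on the end-of-quest screen.
    public class SpeedRunTracker
    {
        private static readonly TimeSpan RunLimit = TimeSpan.FromMinutes(15);
        private readonly Stopwatch _runClock = Stopwatch.StartNew();
        private readonly Dictionary<string, TimeSpan> _perActivity = new Dictionary<string, TimeSpan>();
        private Stopwatch _activityClock;
        private string _currentActivity;

        public void BeginActivity(string name)
        {
            EndActivity();
            _currentActivity = name;
            _activityClock = Stopwatch.StartNew();
        }

        public void EndActivity()
        {
            if (_currentActivity == null) return;
            TimeSpan soFar;
            _perActivity.TryGetValue(_currentActivity, out soFar);
            _perActivity[_currentActivity] = soFar + _activityClock.Elapsed;
            _currentActivity = null;
        }

        public bool GameOver { get { return _runClock.Elapsed >= RunLimit; } }
        public TimeSpan TimeRemaining { get { return GameOver ? TimeSpan.Zero : RunLimit - _runClock.Elapsed; } }
        public IDictionary<string, TimeSpan> Breakdown { get { return _perActivity; } }
    }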

    Read the article

  • An Open Letter from Lyle Ekdahl, Group Vice President and General Manager, Oracle's JD Edwards

    - by Brian Dayton
    From Lyle Ekdahl, Group Vice President and General Manager, Oracle's JD Edwards As you may have heard, we recently announced some changes to the way Oracle will offer licensing of technology products with JD Edwards EnterpriseOne. Specifically, we have withdrawn from new sales the product known as JD Edwards EnterpriseOne Technology Foundation ("Blue Stack"). Our motivation for this change is simply to streamline licensing for our customers. Going forward, customers will license Oracle products from Oracle and IBM products from IBM. Customers who are currently licensed for Technology Foundation will continue to receive support--unchanged--through September 30, 2016. This announcement affects how customers license these IBM products; it does not affect Oracle's certification roadmap for IBM products with JD Edwards EnterpriseOne. Customers who are currently running their JD Edwards EnterpriseOne infrastructure using IBM platform components can continue to do so regardless of whether they license these components via Technology Foundation or directly from IBM. New customers choosing to run JD Edwards EnterpriseOne on IBM technology should license JD Edwards EnterpriseOne Core Tools from Oracle while licensing Infrastructure and any licenses of IBM products from IBM. For more information about this announcement, customers should refer to My Oracle Support article 1232453.1 Questions included in the "Frequently Asked Questions" document on My Oracle Support: Is Oracle dropping support for IBM DB2 and IBM WebSphere with JD Edwards EnterpriseOne? No. This announcement affects how customers license these IBM products; it does not affect Oracle's certification roadmap for these products. The JD Edwards EnterpriseOne matrix of supported databases, web servers, and portals remains unchanged, including planned support for IBM DB2, IBM WebSphere Application Server, and IBM WebSphere Portal. Customers who are currently running their JD Edwards EnterpriseOne infrastructure using IBM platform components can continue to do so regardless of whether they license these components via Technology Foundation or directly from IBM. As always, the timing and versions of such third-party certifications remain at Oracle's discretion. Does this announcement mean that Oracle is withdrawing support for JD Edwards EnterpriseOne on the IBM i platform? Absolutely not. JD Edwards EnterpriseOne support on the IBM i platform remains unchanged. This announcement simply states that customers will acquire Oracle products from Oracle and IBM products from IBM. In fact, as evidenced by the recent "IBM i Solution Edition for JD Edwards" offering, IBM and the JD Edwards product teams continue to innovate and offer attractive, cost-competitive solutions to the ERP marketplace. For more information about this offering see: http://www-03.ibm.com/systems/i/advantages/oracle/. I hope this clarifies any concerns. Let me know if you have any additional questions or concerns. -Lyle

    Read the article

  • How TiVo is messing up customer support.

    - by James Fleming
    Ok,  So I've gotten a TiVo and overall, I'm happy, but there have been issues and I suspect I've a defective unit. - Now the nice folks after many service calls were happy to swap it out, and to ensure continuity of service, they sent me a new unit (after a $109 deposit).  That was yesterday. Today, when we go to watch a little TV, and wait for our replacement unit to arrive we find our TiVo service has been suspended. WTF? They have an exchange program, but your unit your waiting to exchange is as dead as a doornail until the replacement arrives. How hard is it to keep the old unit active for an extra week? Here is the exchange w/Tivo below... You are currently number 1 in the queue. We apologize for the delay. We will assign you to an agent as soon as one is available.The average amount of time a customer has to wait is 00:13.  Kaylene (Listening)  Kaylene: Thank you for contacting TiVo! My name is Kaylene. So that I may better assist you, are you an existing customer?  james Fleming: yes I am, but I'm now having second thoughts about being one    Kaylene: Thank you for verifying your information. How may I assist you today James?  james Fleming: I've been having issues w/a tivo box & I'm getting a replacement sent out to me (after paying an additional deposit) and now my current unit is no longer activated  Kaylene: I can help you today!  Kaylene: When we process an exchange we do transfer over the service to the replacement box so it is active and ready to go when you receive it.  james Fleming: which is to say you also make my current box worthless until such time I receive a new box?!?!?  Kaylene: I apologize that your original box was deactivated so we could activate your replacement box.  james Fleming: Why on Earth would I bother to pay in advance for a new box if you were going to kill my existing box.  Kaylene: What features are you needing to use on your current box?  james Fleming: I need to be able to access my netflix subscription (if I'm lucky enough to have it work without rebooting)  Kaylene: Can I have you verify the TiVo Service Number of your TiVo box please?  james Fleming: 7460011906979b4  Kaylene: We have your current box temporary service but not all features are available with temporary service as it is not paid for service.  Kaylene: If you like I can transfer your service back to your current box for now. Then once you receive the new box you will have to call in and have the service transferred back to the new box.  james Fleming: Not paid for? Let's see> one tivo box + 3 year service plan + monthly service + $109 deposit on a second box = what?  Kaylene: Would you like me to transfer your service back to your current box?  james Fleming: Yes - that would be helpful  Kaylene: All you will need to do is contact us again once you receive the new box so we can transfer it back.  Kaylene: I have put your service back on TiVo box 7460011906979b4.  james Fleming: What would also be helpful is your firm informing me to how you'd be cutting service in the interim.  james Fleming: Again - I opted to pay to have a second box delivered BEFORE returning the box I have - thus trying to have a continuity of service..  Kaylene: This is not something we normally do so it is important when you contact us to transfer the service back to the new box when you receive it that you reference this case number: 110622-006089.  Kaylene: I apologize about the inconvenience. You may need  force a few connections for the box to recognize the service again.  
james Fleming: If it's not something you normally do than WHY would you have a $109 fee and a term for the service.  james Fleming: I am not mad at you, but your company is not impressing me and I'm blogging about this experience  Kaylene: Again I apologize about the inconvenience but you should be good to go now. Is there anything else I can help you with today?  james Fleming: so I need to go through the re-actviate process or is that somethign you do  Kaylene: When you receive the new TiVo box you need to contact us so we can transfer the service to the new box for you.  james Fleming: sure  Kaylene: Is there anything else I can help you with today James?  james Fleming: Nope - please email this transcript to me  Kaylene: I apologize but we do not have the ability to e-mail you a copy of this transcript. You can view it online at  http://www.tivo.com when you sign into your account or you can copy and paste it now to save it.  Kaylene: Thank you for contacting TiVo today. Your reference number for our conversation is 110622-006089. You can save this for your records, and if necessary, provide this to a later agent to pull up what we discussed. There will be a brief satisfaction survey emailed to you. We would appreciate any feedback on your TiVo Chat Support experience today.  Kaylene: Thank you for using TiVo Chat and have a great day James! Good-bye.  Kaylene has disconnected.

    Read the article

  • How to deal with transport level security policy with OSB

    - by Jian Liang
    Recently, we received a use case for Oracle Service Bus (OSB) 11g PS4 to consume a Web Service which is secured by an HTTP transport-level security policy. The WSDL of the remote web service looks like the following, where the part marked in red shows the security policy:

    <?xml version='1.0' encoding='UTF-8'?>
    <definitions xmlns:wssutil="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
                 xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
                 xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
                 xmlns:tns="https://httpsbasicauth"
                 xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                 xmlns="http://schemas.xmlsoap.org/wsdl/"
                 targetNamespace="https://httpsbasicauth"
                 name="HttpsBasicAuthService">
      <wsp:UsingPolicy wssutil:Required="true"/>
      <wsp:Policy wssutil:Id="WSHttpBinding_IPartyServicePortType_policy">
        <wsp:ExactlyOne>
          <wsp:All>
            <ns1:TransportBinding xmlns:ns1="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy">
              <wsp:Policy>
                <ns1:TransportToken>
                  <wsp:Policy>
                    <ns1:HttpsToken RequireClientCertificate="false"/>
                  </wsp:Policy>
                </ns1:TransportToken>
                <ns1:AlgorithmSuite>
                  <wsp:Policy>
                    <ns1:Basic256/>
                  </wsp:Policy>
                </ns1:AlgorithmSuite>
                <ns1:Layout>
                  <wsp:Policy>
                    <ns1:Strict/>
                  </wsp:Policy>
                </ns1:Layout>
              </wsp:Policy>
            </ns1:TransportBinding>
            <ns2:UsingAddressing xmlns:ns2="http://www.w3.org/2006/05/addressing/wsdl"/>
          </wsp:All>
        </wsp:ExactlyOne>
      </wsp:Policy>
      <types>
        <xsd:schema>
          <xsd:import namespace="https://proxyhttpsbasicauth" schemaLocation="http://localhost:7001/WS/HttpsBasicAuthService?xsd=1"/>
        </xsd:schema>
        <xsd:schema>
          <xsd:import namespace="https://httpsbasicauth" schemaLocation="http://localhost:7001/WS/HttpsBasicAuthService?xsd=2"/>
        </xsd:schema>
      </types>
      <message name="echoString">
        <part name="parameters" element="tns:echoString"/>
      </message>
      <message name="echoStringResponse">
        <part name="parameters" element="tns:echoStringResponse"/>
      </message>
      <portType name="HttpsBasicAuth">
        <operation name="echoString">
          <input message="tns:echoString"/>
          <output message="tns:echoStringResponse"/>
        </operation>
      </portType>
      <binding name="HttpsBasicAuthSoapPortBinding" type="tns:HttpsBasicAuth">
        <wsp:PolicyReference URI="#WSHttpBinding_IPartyServicePortType_policy"/>
        <soap:binding transport="http://schemas.xmlsoap.org/soap/http" style="document"/>
        <operation name="echoString">
          <soap:operation soapAction=""/>
          <input>
            <soap:body use="literal"/>
          </input>
          <output>
            <soap:body use="literal"/>
          </output>
        </operation>
      </binding>
      <service name="HttpsBasicAuthService">
        <port name="HttpsBasicAuthSoapPort" binding="tns:HttpsBasicAuthSoapPortBinding">
          <soap:address location="https://localhost:7002/WS/HttpsBasicAuthService"/>
        </port>
      </service>
    </definitions>

    The security assertion in the WSDL (marked in red) indicates that this is the HTTP transport-level security policy which requires one-way SSL with default authentication (i.e. basic authentication with username/password). Normally, there are two ways to handle web service security policy with OSB 11g: use WebLogic 9.x policy, or use OWSM. Since OSB doesn't support the WebLogic 9.x WSSP transport-level assertion (except for WS transport), when we tried to create the business service based on the imported WSDL, OSB complained with the following message: [OSB Kernel:398133]The service is based on WSDL with Web Services Security Policies that are not natively supported by Oracle Service Bus. Please select OWSM Policies - From OWSM Policy Store option and attach equivalent OWSM security policy.
For the Business Service, either you can add the necessary client policies manually by clicking Add button or you can let Oracle Service Bus automatically pick and add compatible client policies by clicking Add Compatible button. Unfortunately, when tried with OWSM, we couldn’t find http_token_policy from OWSM since OSB PS4 doesn’t support OWSM http_token_policy. It seems that we ran into an unsupported situation that no appropriate policy can be used from both WebLogic and OWSM. As this security policy requires one way SSL with basic authentication at the transport level, a possible workaround is to meet the remote service's requirement at transport level without using web service policy. We can simply use OSB to establish SSL connection and provide username/password for authentication at the transport level to the remote web service. In this case, the business service within OSB will be transparent to the web service policy. However, we still need to deal with OSB console’s complaint related to unsupported security policy because the failure of WSDL validation prohibits OSB console to move forward. With the help from OSB Product Management team, we finally came up with the following solutions: Solution 1: OSB PS5 The good news is that the http_token_policy is made available in OSB PS5. With OSB PS5, you can simply add OWSM oracle/wss_http_token_over_ssl_client_policy to the business service. The simplest solution is to upgrade to OSB PS5 where the OWSM solution is provided out of the box. But if you are not in a position where upgrading is an immediate option, you might want to consider other two workaround solutions described below. Solution 2: Modifying WSDL This solution addresses OSB console’s complaint by removing the security policy from the imported WSDL within OSB. Without the security policy, OSB console allows the business service to be created based on modified WSDL.  Please bear in mind, modifying WSDL is done only for the OSB side via OSB console, no change is required on the remote Web Service. The main steps of this solution: Connect to OSB console import the remote WSDL into OSB remove security assertion (the red marked part) from the imported WSDL create a service account. In our sample, we simply take the user weblogic create the business service and check "Basic" for Authentication and select the created service account make sure that OSB consumes the web service via https. This solution requires modifying WSDL. It is suitable for any OSB version (10g or OSB 11g version) prior to PS5 without OWSM. However, modifying WSDL by hand is troublesome as it requires the user to remember that the original WSDL was edited.  It forces you to make the same edit each time you want to re-import the service WSDL when changes occur at the service level. This also prevents you from using UDDI to import WSDL.  Solution 3: Using original WSDL This solution keeps the WSDL intact and ignores the embedded policy by using OWSM. By design, OWSM doesn’t like WSDL with embedded security assertion. Since OWSM doesn’t provide the feature to explicitly ignore the embedded policy from a remote WSDL, in this solution, we use OWSM in a tricky way to ignore the embedded policy. 
Connect to OSB console import the remote WSDL into OSB create a service account create the business service in which check "Basic" for Authentication and select the created service account as the imported WSDL is intact, the OSB Kernel:398133 error is expected ignore this error message for the moment and navigate to the Policies Page of business service Select “From OWSM Policy Store” and click “Add” button, the list of policies will pop-up Here is the tricky part: select an arbitrary policy, and click “Cancel” Update and save By clicking “Cancel’ button, we didn’t add any OWSM policy to business service, but the embedded policy is ignored. Yes, this is tricky. According to Oracle OSB Product Manager, the future release of OWSM will add a button “None” which allows to ignore the embedded policy explicitly. This solution keeps the imported WSDL intact which is the big advantage over the solution 2. It is suitable for OSB 11g (version prior to PS5) domain with OWSM configured. This blog addressed the unsupported transport level web service security policy with OSB PS4. To summarize, if you are using OSB PS5 or in a position to upgrade to PS5, the recommendation is to use OWSM OOTB transport level security policy directly. With the release prior to 11g PS5, you can consider the solution 2 or 3 depending on if OWSM is configured.
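    Independently of which OSB workaround you pick, it can be handy to sanity-check what the policy actually demands at the wire level: one-way SSL plus an HTTP Basic Authorization header. Here is a rough C# sketch of such a check – it is not part of the original walkthrough, and the endpoint, credentials and SOAP payload shape are placeholders matching the sample WSDL above.

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;

    class TransportPolicyCheck
    {
        static void Main()
        {
            // Placeholder endpoint taken from the sample WSDL's soap:address.
            const string endpoint = "https://localhost:7002/WS/HttpsBasicAuthService";

            // For a test against a self-signed demo certificate only, you could relax validation:
            // ServicePointManager.ServerCertificateValidationCallback = (s, c, ch, e) => true; // test only

            using (var client = new HttpClient())
            {
                // Basic authentication: username/password (placeholders) in the Authorization header.
                var token = Convert.ToBase64String(Encoding.ASCII.GetBytes("weblogic:welcome1"));
                client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

                // Minimal, illustrative envelope – the real payload depends on the service schema.
                const string soapEnvelope =
                    "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\" " +
                    "xmlns:h=\"https://httpsbasicauth\">" +
                    "<soapenv:Body><h:echoString><arg>ping</arg></h:echoString></soapenv:Body>" +
                    "</soapenv:Envelope>";

                var content = new StringContent(soapEnvelope, Encoding.UTF8, "text/xml");
                HttpResponseMessage response = client.PostAsync(endpoint, content).Result;
                Console.WriteLine("HTTP status: " + (int)response.StatusCode);
            }
        }
    }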

    Read the article

  • TFS 2010 and SSL Configuration: Part 1 Certificates in Place

    - by Enrique Lima
    What is needed? For starters, an understanding of how to properly configure a certificate in IIS. Many people have found challenges in working with certificates in IIS 7. The first thing is to get your certificate created, and then proceed to add it to IIS 7. By clicking on Server Certificates, we will get to this view, and we will be able to see the certificate or certificates installed. What options do we have to get certificates? They can be generated “in-house” or purchased from certification authorities. If it is “in-house”, you will find those options in the Server Certificates area of IIS Manager.
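    One quick way to double-check that the certificate has landed where IIS expects it (the Local Machine / Personal store) is simply to list what is installed there. This is a small sketch, independent of the original post, using the standard .NET certificate store API:

    using System;
    using System.Security.Cryptography.X509Certificates;

    class ListServerCertificates
    {
        static void Main()
        {
            // IIS server certificates normally live in the Local Machine "Personal" (My) store.
            var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
            store.Open(OpenFlags.ReadOnly);
            try
            {
                foreach (X509Certificate2 cert in store.Certificates)
                {
                    Console.WriteLine("{0} | expires {1:d} | thumbprint {2}",
                        cert.Subject, cert.NotAfter, cert.Thumbprint);
                }
            }
            finally
            {
                store.Close();
            }
        }
    }

    If the certificate you created (or purchased) shows up here with a sensible subject and expiry date, the remaining work is the IIS binding itself, which the next part of this series picks up.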

    Read the article

< Previous Page | 49 50 51 52 53 54 55 56 57 58 59 60  | Next Page >