Search Results

Search found 19278 results on 772 pages for 'enum support'.


  • A Complete Customer Experience Solution (3 of 3 in 'No Customer Left Behind' Series)

    - by Kathryn Perry
    A guest post by David Vap, Group Vice President, Oracle Applications Product Development

    In my previous post, I talked about taking three concrete steps to improve your customers' overall experiences: 1) understand your customer, 2) empower your ecosystem, and 3) adapt your business. To do these effectively and efficiently, it's important to find the right technology that can bridge the gaps across your channels, interactions, departments, and repositories. Oracle has spent the past three years and more than six billion dollars acquiring and developing some of the world's best-of-breed applications. The result is the most comprehensive customer experience (CX) portfolio offering in the world, bar none:

    - ATG: best-in-class selling experiences
    - FatWire: best-in-class marketing experiences
    - InQuira: best-in-class support experiences
    - Endeca: best-in-class search experiences
    - RightNow: best-in-class service experiences
    - Vitrue & Involver: best-in-class social marketing
    - Collective Intellect: best-in-class social listening

    We don't expect organizations to eat the CX elephant in one bite, nor should they try to. There are key strategic initiatives within each of the four main pillars of our customer experience offering for which we deliver solutions:

    1. Customer Experience for Marketing: social listening and engagement, social marketing, marketing websites, demand generation and lead management, marketing and loyalty management
    2. Customer Experience for Commerce: search, navigation and content delivery; cross-channel commerce; targeting and product recommendations; social commerce; order management and fulfillment; retail store operations
    3. Customer Experience for Sales: sales force automation, social selling, territory and quota management, revenue forecasting, partner relationship management, quote to cash, incentive compensation
    4. Customer Experience for Service: cross-channel customer service, knowledge management, social customer service, eligibility management; contracts, assets, and entitlements; industry-specific solutions; eBilling

    Oracle's customer experience portfolio is socially infused at each layer of our pillars rather than simply bolted on as a side process. This combines with the power of the cloud, which runs the parts of the solution that need the access, efficiency, and agility of a managed infrastructure, while compliance control stays with the on-premise backbone infrastructure systems that run your business and don't change that often.

    Please take advantage of our teams of Oracle customer experience professionals and our key agency and technology partner ecosystem. They can help you develop strategic solution roadmaps that build and deliver customer experience and that are tailored to your business needs and objectives. No one has built a better customer experience portfolio to manage the entire customer journey than Oracle. It is backed by CX thought leadership programs, a commitment from our executives, and a worldview that your technology decisions must be driven by your customer experiences to succeed. If you'd like to follow up on this conversation, please leave a comment or contact me at [email protected]. You can get more information on Oracle's complete customer experience solution here.


  • Where do I find what in the OPN?

    - by A&C Redaktion
    Oracle partners have access to a wide range of tools, resources, and services that make day-to-day work easier and provide a significant competitive advantage. For our new partners, and perhaps also for some long-serving ones, here is a short guide to the most important offerings.

    Which resources can I use at which specialization level? An English-language overview of all offerings in the areas of enablement, development, marketing, sales, and support can be found here under "OPN Benefits Table Details".

    Where can I learn about specific Oracle products and continue my training? The Knowledge Zones are solution-oriented websites that serve as the entry point to specialization. There you will find detailed information on developing, selling, and implementing Oracle solutions, broken down by the topics database, middleware, applications, server and storage systems, as well as by industry. Depending on your interests and specialization, you can join specific Knowledge Zones.

    How can customers find me and my services as an Oracle partner and get in touch? That is what the Solutions Catalog is for: this platform is one of the most important tools for matching customers with the Oracle partner that is ideal for them. Every specialized partner worldwide has a search-engine-optimized profile in the Solutions Catalog, which the partner maintains and expands via the OPN. Customers filter the offerings by region and desired solution and make contact directly. Visits to the website are evaluated and can be used for individual lead generation.

    How can I use my Oracle specialization to win new customers? In the marketing area of the OPN portal you will find various options for advertising and demand generation. A few examples: The German-language Marketing Kits provide advertising material, templates, training material, and instructions for partner marketing. They help you run your own campaigns, e.g. mailings or telemarketing on specific topics (currently, for example, Exadata), and drive demand generation. With the Partner Logos you can advertise on your own website that, and how intensively, you work with Oracle; there are logos for every partner level as well as for every single certification in the Oracle universe. The Partner Event Publishing Service helps you present your events globally and publicly on the Oracle website. Here is how it works: simply download the Excel form, fill it in in German or English, and send it with your logo to the Event Publishing Team. Your event page is created and becomes searchable on the Oracle event portal. You receive the link for your promotion, and you have just reached a new circle of potential attendees.


  • Suggestions for connecting .NET WPF GUI with Java SE Server

    - by Sam Goldberg
    BACKGROUND

    We are building a Java (SE) trading application that will monitor market data and send trade messages based on the market data and on user-defined configuration parameters. We plan to provide the user with a thin client, built in .NET (WPF), for managing the parameters, controlling the server behavior, and viewing the current state of the trading. The client doesn't need real-time updates; it will instead refresh the view once every few seconds (or whatever interval the user configures).

    The client has about six different operations it needs to perform with the server, for example:

    - CRUD on configuration parameters
    - querying a subset of the data
    - receiving updates of current positions from the server

    It is possible that most of the operations (except receiving data) are just different flavors of managing the configuration parameters, but it's too early in our analysis for us to be sure. To connect the client with the server, we have been considering:

    - a SOAP web service
    - a RESTful service
    - building a custom TCP/IP-based API (text or XML); least preferred, but we use this approach with other applications we have

    As best as I understand them, the pros and cons of the different web service flavors are:

    SOAP
    - pro: totally automated in .NET (and Java); modifying the server-side interface requires no code changes in the communication layer, just refreshing the web service reference to regenerate the classes
    - con: more overhead in the communication layer (more text sent, etc.); we're not using a Java EE container, so maybe it doesn't work as well with Java SE

    REST
    - pro: lighter weight, less data, with good .NET and Java support (I don't have any real experience with this, so I don't know what other benefits it has)
    - con: the client will not automatically become aware of any new operations or properties added (?), so the communication layer needs to be updated by a developer if the server interface changes
    - con (both approaches): the server cannot really push updates to the client at regular intervals (?); however, we won't mind if the client polls the server to get updates (a sketch of that polling idea is below)

    QUESTION

    What are your opinions on the above options, or suggestions for other ways to connect the two parts? (Ideally, we don't want to put much work into the communication layer, because it's not the significant part of the application, so the more off-the-shelf and automated, the better.)
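    For what it's worth, here is a minimal sketch of the polling approach from the .NET side, assuming a hypothetical REST endpoint at /api/positions that returns XML; the URL, endpoint name, and 5-second interval are illustrative only, not part of any existing API:

        using System;
        using System.Net;
        using System.Timers;

        class PositionPoller
        {
            // Hypothetical endpoint; replace with the real server URL.
            private const string PositionsUrl = "http://tradingserver:8080/api/positions";
            private readonly Timer timer;

            public PositionPoller(double intervalMs)
            {
                timer = new Timer(intervalMs);
                timer.Elapsed += (s, e) => Poll();
            }

            public void Start() { timer.Start(); }

            private void Poll()
            {
                try
                {
                    using (var client = new WebClient())
                    {
                        // GET the current positions as XML; parsing is omitted here.
                        string xml = client.DownloadString(PositionsUrl);
                        Console.WriteLine("Received {0} bytes at {1}", xml.Length, DateTime.Now);
                    }
                }
                catch (WebException ex)
                {
                    // A real client would surface this in the UI instead.
                    Console.WriteLine("Poll failed: " + ex.Message);
                }
            }

            static void Main()
            {
                var poller = new PositionPoller(5000); // poll every 5 seconds
                poller.Start();
                Console.ReadLine(); // keep the process alive while polling
            }
        }

    Whichever transport is chosen, a periodic pull like this keeps the communication layer trivially small, which matches the stated goal of not investing much effort in it.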


  • ZTE USB Modem AC2736, connection not possible in Ubuntu 12.04.1 LTS

    - by Fredo
    It's a long post, but it covers nearly all my experiments and the changes I made to Network Manager. I hope the information is complete; if there are still questions, I can provide more.

    I have a ZTE AC2736 USB modem (a CDMA modem) which worked fine in Ubuntu 10.04 and 11.10. I recently switched to 12.04.1 (Precise Pangolin). After the switch, the first issue I faced was connecting to the internet using my USB modem (ISP: Reliance, brand: Netconnect). I tried the drivers provided by Reliance, but they are old and don't support kernels above 2.6.30. Since the source code was not included with the 12.04 ISO image, I couldn't compile the files provided with that driver.

    lsusb does detect it as a modem, with output similar to 19d2:fff1 ZTE CDMA Technologies Inc. (or similar; I didn't note it down). If it is detected as USB storage, it shows 19d2:fff5 (according to a few online forums; I may be wrong here).

    I used Network Manager and configured the modem to dial #777 (the default) with the ISP-provided username/password combination. It tries to connect to the internet (3-4 times automatically) but fails to get online. Once, in the morning hours, I was able to connect, and the message 'registered on CDMA home network' was flashed. I was able to run an update; the kernel was updated to something similar to 3.0.2-pae (I can find the exact version if required). I surfed the net for about 2 hours before restarting. After the restart, the modem was again unable to get me online. I kept trying many times and tried changing the settings in Network Manager. One evening after dark I was able to connect again, with a similar 'registered on CDMA home network' message (I'm not being precise about the wording here, sorry), and surfed the net for nearly 3 hours before I switched off my laptop. I have not been able to get online since that day; it's been 3 days now.

    Laptop configuration:
    - Make: Lenovo, Model: B480
    - Processor: Intel B950; RAM: 4 GB DDR3; HDD: 500 GB
    - Broadcom wireless/Bluetooth/Ethernet LAN
    - OS: FreeDOS / Ubuntu 12.04 LTS (dual boot), kernel 3.0.2-pae

    Observation: I'm able to connect to the internet in those hours when the speed is generally high (low usage by other wireless users), such as early mornings or late nights. This is strange, as the connection should not depend on bandwidth usage. I'll retest this observed late/early-hours pattern again soon.

    Any help would be appreciated in fixing this issue, before I decide to change the OS or the ISP.


  • ATG Live Webcast: Planning Your Oracle E-Business Suite Upgrade from 11i to 12.1 and Beyond

    - by BillSawyer
    I am pleased to announce the next ATG Live Webcast event on December 1, 2011: Planning Your Oracle E-Business Suite Upgrade from 11i to 12.1 and Beyond.

    Are you still on 11i and wondering about your next steps in the E-Business Suite lifecycle? Are you wondering what the upgrade considerations are going to be for 12.2? Do you want to know the best practices for upgrading E-Business Suite regardless of your version? If so, this is the webcast for you. Join Anne Carlson, Senior Director, Oracle E-Business Suite Product Strategy, for this one-hour webcast with Q&A. This session will give you a framework to make informed upgrade decisions for your E-Business Suite environment. This event is targeted to functional managers, EBS project planners, and implementers.

    The agenda includes the following topics:
    - Business Value of the Upgrade
    - Starting Your Upgrade Project
    - Planning Your Upgrade Approach
    - Preparing for Your Upgrade Execution
    - Extended Support for 11i
    - Additional Resources

    Date: Thursday, December 1, 2011
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenter: Anne Carlson, Senior Director, Oracle E-Business Suite Product Strategy
    Webcast Registration Link (preregistration is optional but encouraged)

    To hear the audio feed:
    - Domestic Participant Dial-In Number: 877-697-8128
    - International Participant Dial-In Number: 706-634-9568
    - Additional International Dial-In Numbers Link
    - Dial-In Passcode: 98515

    To see the presentation, the direct access web conference details are:
    - Website URL: https://ouweb.webex.com
    - Meeting Number: 271378459

    If you miss this webcast, or have missed any webcast, don't worry: we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay, and you can find our archive of past webcasts and training at http://blogs.oracle.com/stevenChan/entry/e_business_suite_technology_learning. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.


  • Framework 4 Features: User Propagation to the Database

    - by Anthony Shorten
    One of the features I mentioned in a previous entry was the ability of Oracle Utilities Application Framework V4 to automatically propagate the end user to the database connection. This bears more explanation.

    In past releases of the Oracle Utilities Application Framework, all database connections are pooled and shared within a channel of access. So, for example, the online connections on the Business Application Server share a common pool of connections, and the batch threads in a thread pool share a separate pool of connections. The connections are pooled for performance reasons (the most expensive part of a typical transaction is opening and closing connections, so we save time by having them ready beforehand). The idea is that when a business function needs some SQL executed, it takes a spare connection from the pool, executes the SQL, and then returns the connection to the pool for reuse. Unfortunately, having the pool started and ready before the transaction arrives means you need a shared userid (as you don't know beforehand which users will need connections). Therefore each connection uses the same database user to execute the SQL it needs. This is generally acceptable for executing transactions, but it does not allow the DBA or other tools to ascertain which end user is actually running the transaction.

    In Oracle Utilities Application Framework V4, we now set CLIENT_IDENTIFIER to the end userid (not the Login Id) when the connection is taken from the pool and used, and reset it back to blank when the connection is returned to the pool. CLIENT_IDENTIFIER is a feature present in the Oracle Database connection information. From a monitoring perspective, when a connection to the database is actively running SQL, the end user can now be determined by querying the CLIENT_IDENTIFIER on the session object within the database. This can be done in the DBA's favorite monitoring tool (even just some SQL on the v$session table is enough).

    This has other implications as well. Oracle sells a lot of security add-ons for the database, and so do third parties. If a site wants additional levels of security or auditing in the database, then the CLIENT_IDENTIFIER, where supported, is now available to be recorded or used by those products to provide additional levels of security. This facility was one of the most requested "nice to haves" that customers would ask us about, so we now allow it to be used for finer-grained monitoring and additional security facilities.

    Note: This facility is only available for customers using the Oracle Database versions of our products.
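    As an illustration only (this is not the framework's internal code), here is a minimal C# sketch of the same pattern using ODP.NET, whose OracleConnection class exposes the session's CLIENT_IDENTIFIER through its ClientId property; the connection string, end userid, and query are placeholders, and querying v$session requires the appropriate database privileges:

        using System;
        using Oracle.DataAccess.Client; // ODP.NET

        class ClientIdDemo
        {
            static void Main()
            {
                // Placeholder connection string; in the framework this would be a pooled connection.
                using (var conn = new OracleConnection("User Id=appuser;Password=secret;Data Source=orcl"))
                {
                    conn.Open();

                    // Tag the session with the end user before running SQL,
                    // the same idea the framework now applies automatically.
                    conn.ClientId = "FRED01";

                    // What a DBA would see from a monitoring session.
                    var sql = "SELECT sid, username, client_identifier FROM v$session " +
                              "WHERE client_identifier IS NOT NULL";
                    using (var cmd = new OracleCommand(sql, conn))
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0} {1} {2}", reader.GetValue(0),
                                reader.GetValue(1), reader.GetValue(2));
                    }

                    // Clear the tag before the connection goes back to the pool.
                    conn.ClientId = null;
                }
            }
        }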


  • Should I be looking for an alternative to Zen Cart as my business grows?

    - by MarkS
    I created a business website for a family business which is growing. It's my family, and I'm a software developer, but I don't want to reinvent wheels or become a shopping cart programmer. For this business, I need the web store to "just work", but... it gets complicated...

    There are two parts to this business website. One of them is driven by WordPress, and I use the awesome Thesis theme. This is modern and flexible, and it saves me a lot of time on custom coding and styling. I couldn't be more pleased with this arrangement. The other part of the site is a Zen Cart store. Its administration and its flexibility are frustrating and archaic Web 1.0. For the past few years, I keep hearing that the developers are working on a 2.0 version of Zen Cart, but they haven't communicated anything significant other than to say, "When it's ready, we'll let you know." For what I'm looking for in a cart, I would need to install 6-10 additional mods and would need to do a lot of custom coding. I'm now willing to pay for a top-notch e-commerce solution for a small business that we can grow into a larger business over time.

    Requirements:
    - Extremely flexible shipping that lets us set up rules per product/category, tables of rates, calculated rates, maximum package weights, etc. (flexibility like that available with the CEON Advance Shipping Module for Zen Cart)
    - Coupons and gift certificates
    - Manual order entry for phone orders
    - Multi-channel support (we also sell on Amazon and eBay and use Google Base, and we want to maintain one set of inventory and keep it current)
    - Decent SEO features
    - Reviews and star ratings on products
    - Easy social networking features for sharing, following, liking, etc.
    - Easy integration with AdWords and analytics tracking
    - Modern and very usable product and store administration (like I was saying, I'm spoiled by WordPress and Thesis)

    At the end of the day, I don't care if it's a hosted solution or if I have to host it myself. I just want something that is going to stay up to date, be regularly maintained and improved, and, if I have to update it, offer something like the one-click update present in WordPress. Professional webmasters, if you had to run a store/website but had to spend your time focusing on your sales and marketing efforts rather than diffing PHP files and copying and tweaking them to change even the slightest details of your site, what would you choose?


  • Convert DVDs and ISO Files to MKV with MakeMKV

    - by DigitalGeekery
    Looking for a quick and easy way to convert your DVDs or ISOs to MKV files? Today we take a look at the MakeMKV beta, which gets the job done very well.

    Installing and Using MakeMKV

    Download and install MakeMKV (see the download link below). If you are converting a DVD, place it into your optical drive. When you open MakeMKV you will be greeted by its minimalist interface. Click on the DVD-to-hard-drive button to open the DVD, or the folder icon on the top menu to browse for an ISO file. MakeMKV will open the disc or file. Once the disc or file is opened, you'll see the titles listed in the window on the left. Double-click on the titles to expand the tree structure. Remove any title or tracks you don't want to convert by unselecting the checkbox to the left. On the right side of the window, click the folder icon to browse for your file output directory. When ready, click the MakeMKV button to begin the conversion process. Conversion will proceed, and when it is finished, click OK. That's all there is to it! Your MKV file is ready to play.

    Conclusion

    MakeMKV is currently still in beta, and during the beta phase it will rip both DVD and Blu-ray for free. The DVD ripping functionality will always remain free; after 30 days, if you want to continue ripping Blu-ray discs, you'll need to purchase a license. DVD rips are very quick, typically around 15-20 minutes depending on the length of the movie. MakeMKV is available for Windows, Mac, and Linux and will rip and convert DVDs to MKV files. Not all media players natively support MKV playback, so if you're having trouble playing MKV files, try downloading VLC media player or the latest version of the DivX codec.

    Download MakeMKV


  • Running Teamsite User Admin tool IWUSERADM.exe from ASP.NET

    - by Narendra Tiwari
    This has really been a head-scratching task for me. I tried many options but nothing worked. Finally I found a workaround on Google to achieve this with the Task Scheduler.

    PROBLEM
    When we run the Teamsite user administration command line tool IWUSERADM.exe through ASP.NET, it gives the following error:

        Application popup: cmd.exe - Application Error : The application failed to initialize properly (0xc0000142). Click on OK to terminate the application.

    CAUSE
    No specific cause; it seems to be a bug, supposed to be resolved by this Microsoft patch: http://support.microsoft.com/kb/960266. There is nothing related to a permission issue; my web application is impersonated with an administrator account. Of course, running a batch file from an admin account is a potential security threat, but for this scenario let's confine our discussion to running the command line tool.

    RESOLUTION
    I have not tried the patch, as I was not permitted to run it on the server. Below are the steps to achieve the requirement.

    1. Create a batch file which runs IWUSERADM.exe:

        echo Add Teamsite User
        cd /d E:\Appli\GN00\iw-home\bin
        iwuseradm add-user %1

    2. Temporarily create a scheduled task and run the .bat file from ASP.NET code using the TaskScheduler library: http://www.codeproject.com/KB/cs/tsnewlib.aspx

    3. Here is the function:

        private int AddTeamsiteUser(string strBatchFilePath, string strUser)
        {
            // Get a ScheduledTasks object for the local computer.
            ScheduledTasks st = new ScheduledTasks();

            // Create a task.
            Task t;
            try
            {
                t = st.CreateTask("~AddTeamsiteUser");
            }
            catch
            {
                throw new Exception("Scheduled task ~AddTeamsiteUser already exists.");
            }

            // Point the task at the batch file and pass the user name as a parameter.
            t.ApplicationName = strBatchFilePath;
            t.Parameters = strUser;
            t.Comment = "Adding user to Teamsite Application";

            // Set the account under which the task should run.
            t.SetAccountInformation(yourLogin, yourPassword);

            t.Save();
            t.Run();
            Thread.Sleep(2000); // sync issue: give the task a moment to finish

            // Remove the scheduled task.
            st.DeleteTask("~AddTeamsiteUser");

            return t.ExitCode;
        }

    Below are a few resources related to the above scenario:
    - Task Scheduler Class Library for .NET: http://www.codeproject.com/KB/cs/tsnewlib.aspx
    - Run a .BAT file from ASP.NET: http://codebetter.com/blogs/brendan.tompkins/archive/2004/05/13/13484.aspx
    - TaskScheduler Class: http://msdn.microsoft.com/en-us/library/system.threading.tasks.taskscheduler.aspx
    - Application hangs while running iwuseradm.exe through ASP.NET: http://bytes.com/topic/asp-net/answers/733098-system-diagnostics-process-hangs


  • Fast Data: Go Big. Go Fast.

    - by J Swaroop
    Cross-posting Dain Hansen's excellent recap of the Big Data/Fast Data announcement during OOW:

    For those of you who may have missed it, today's second full day of Oracle OpenWorld 2012 started with a rumpus. Joe Tucci, from EMC, outlined the human face of big data with real examples of how big data is transforming our world. And not the usual tried-and-true weblog examples, but real stories about taxi cab drivers in Singapore using big data to better optimize their routes, as well as folks just trying to get a better haircut. Next we heard from Thomas Kurian, who talked at length about the important platform characteristics of Oracle's Cloud and, more specifically, Oracle's expanded Cloud Services portfolio. Especially interesting to our integration customers is the messaging support for Oracle's Cloud applications. What this means is that Oracle's Cloud applications now have a lightweight integration fabric that on-premise applications can communicate with via REST APIs using Oracle SOA Suite. It's an important element of our strategy at Oracle that supports the idea that whether your requirements are private or public, Oracle has a solution in the Cloud for all of your applications, and we give you more deployment choice than any vendor.

    If this wasn't enough to get the juices flowing, later that morning we heard from Hasan Rizvi, who outlined in his Fusion Middleware session the four most important enterprise imperatives: Social, Mobile, Cloud, and a brand new one: Fast Data. Today, Rizvi made an important step in the definition of this term, explaining that he believes it's a convergence of four essential technology elements:

    - Event processing for event filtering and business rules, with Oracle Event Processing
    - Data transformation and loading, with Oracle Data Integrator
    - Real-time replication and integration, with Oracle GoldenGate
    - Analytics and data discovery, with Oracle Business Intelligence

    Each of these four elements can be considered (and architected) together on a single integrated platform that can help customers integrate any type of data (structured or semi-structured), leveraging new styles of big data technologies (MapReduce, HDFS, Hive, NoSQL) to process more volume and variety of data at a faster velocity with greater results.

    Fast data processing (and especially real-time processing) has always been our credo at Oracle with each one of these Fusion Middleware products. For example, Oracle GoldenGate continues to get even faster with the recent 11g R2 release, which brings some even greater optimization for Oracle Database with Integrated Capture, as well as some new heterogeneity capabilities. With Oracle Data Integrator with Big Data Connectors, we're seeing much improved performance by running MapReduce transformations natively on Hadoop systems. And with Oracle Event Processing we're seeing some remarkable performance with customers like NTT Docomo. Check out their upcoming session at Oracle OpenWorld on Wednesday to hear more about how this customer is using event processing and big data together.

    If you missed any of these sessions and keynotes, not to worry: there are on-demand versions available on the Oracle OpenWorld website. You can also check out our upcoming webcast where we will outline some of these new breakthroughs in data integration technologies for Big Data, Cloud, and real-time in more detail.


  • Windows Azure Recipe: High Performance Computing

    - by Clint Edmonson
    One of the most attractive ways to use a cloud platform is for parallel processing. Commonly known as high-performance computing (HPC), this approach relies on executing code on many machines at the same time. On Windows Azure, this means running many role instances simultaneously, all working in parallel to solve some problem. Doing this requires some way to schedule applications, which means distributing their work across these instances. To allow this, Windows Azure provides the HPC Scheduler.

    This service can work with HPC applications built to use the industry-standard Message Passing Interface (MPI). Software that does finite element analysis, such as car crash simulations, is one example of this type of application, and there are many others. The HPC Scheduler can also be used with so-called embarrassingly parallel applications, such as Monte Carlo simulations. Whatever problem is addressed, the value this component provides is the same: it handles the complex problem of scheduling parallel computing work across many Windows Azure worker role instances.

    Drivers
    - Elastic compute and storage resources
    - Cost avoidance

    Solution
    Here's a sketch of a solution using our Windows Azure HPC SDK:

    Ingredients
    - Web role: hosts an HPC Scheduler web portal to allow web-based job submission and management. It also exposes an HTTP web service API so other tools (including Visual Studio) can post jobs as well.
    - Worker role: typically multiple worker roles are enlisted, including at least one head node that schedules jobs to be run among the remaining compute nodes.
    - Database: stores state information about the job queue and resource configuration for the solution.
    - Blobs, tables, queues, caching (optional): many parallel algorithms persist intermediate and/or permanent data as a result of their processing. These fast, highly reliable, parallelizable storage options are all available to all the jobs being processed.

    Training
    Here is a link to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: the entire Windows Azure Training Kit can also be downloaded for offline use.)

    Windows Azure HPC Scheduler (3 labs): The Windows Azure HPC Scheduler includes modules and features that enable you to launch and manage high-performance computing (HPC) applications and other parallel workloads within a Windows Azure service. The scheduler supports parallel computational tasks such as parametric sweeps, Message Passing Interface (MPI) processes, and service-oriented architecture (SOA) requests across your computing resources in Windows Azure. With the Windows Azure HPC Scheduler SDK, developers can create Windows Azure deployments that support scalable, compute-intensive, parallel applications.

    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
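    To make "embarrassingly parallel" concrete, here is a minimal C# sketch of a Monte Carlo estimate of pi using the Task Parallel Library on a single machine; the partition and sample counts are invented for the example. In an HPC Scheduler deployment, each partition would instead be a job unit dispatched by the head node to a compute node:

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class MonteCarloPi
        {
            static void Main()
            {
                const long samplesPerPartition = 1000000;
                const int partitions = 8; // stands in for 8 compute nodes
                long insideTotal = 0;

                // Partitions share nothing while sampling, which is what makes
                // the problem embarrassingly parallel.
                Parallel.For(0, partitions, p =>
                {
                    var rng = new Random(Guid.NewGuid().GetHashCode()); // per-partition seed
                    long inside = 0;
                    for (long i = 0; i < samplesPerPartition; i++)
                    {
                        double x = rng.NextDouble(), y = rng.NextDouble();
                        if (x * x + y * y <= 1.0) inside++;
                    }
                    Interlocked.Add(ref insideTotal, inside); // one combine per partition
                });

                double pi = 4.0 * insideTotal / (samplesPerPartition * partitions);
                Console.WriteLine("pi ~= {0}", pi);
            }
        }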


  • Mark Zuckerberg tops the list of 50 Highest Rated CEOs. 3 Indian CEOs feature in the list.

    - by Gopinath
    Mark Zuckerberg, the CEO of Facebook, is rated the best CEO according to a report released by the popular employee review website Glassdoor.com. 50,000 employee reviews submitted to Glassdoor over the past year were considered in preparing the ratings, and Zuckerberg topped the list with 99 percent approval on the question "Do you approve of the way your CEO is leading the company?". Wow! That's amazing support for Zuckerberg from his employees, even though the stock market and shareholders are not with him. Coincidentally, Facebook was also rated the best company to work for by Glassdoor in a recent survey.

    Here is the list of the top 10 CEOs:
    1. Mark Zuckerberg, Facebook; 99.3% approval
    2. Bill McDermott & Jim Hagemann Snabe, SAP; 99% approval
    3. Dominic Barton, McKinsey & Company; 97% approval
    4. Jim Turley, Ernst & Young; 96% approval
    5. John E. Schlifske, Northwestern Mutual; 96% approval
    6. Frank D'Souza, Cognizant Technology Solutions; 96% approval
    7. Joe Tucci, EMC; 96% approval
    8. Paul E. Jacobs, QUALCOMM; 95% approval
    9. Richard K. Davis, U.S. Bank; 95% approval
    10. Pierre Nanterme, Accenture; 95% approval

    3 Indian CEOs in the top 50 list: TCS, Wipro & MindTree
    The list features three Indian CEOs, all of whom lead software IT services organizations in India that create thousands of IT jobs. Natarajan Chandrasekaran, the CEO of TCS, is at 25th position; Krishnakumar Natarajan, the CEO of MindTree, is at 28th; and Wipro's T. K. Kurien is at 44th. Glad to see Indian CEOs joining the global ranks.

    Tech heavyweights Google, Apple, Amazon & Microsoft aren't in the top 10
    Another thing to note from this report is that the CEOs of technology heavyweights Google, Apple, Amazon, and Microsoft are not in the top 10 list; it looks like their employees are not really happy with their bosses, at least not as happy as their peers at Facebook. Google's CEO Larry Page is at 11th position, Jeff Bezos of Amazon at 16th, and Tim Cook of Apple at 18th. The Microsoft CEO is not even in the list of top 50!

    You can read the complete list of ratings at Glassdoor.com's blog. Photo credit: Andrew Feinberg


  • Wordpress Installation (on IIS and SQL Server)

    - by Davide Mauri
    To proceed with the installation of WordPress on SQL Server and IIS, first of all you need to do the following steps:

    1. Create a database on SQL Server that will be used by WordPress.
    2. Create a login that can access the just-created database, and put the user into the db_ddladmin, db_datareader, and db_datawriter roles (a scripted sketch of these two steps appears at the end of this post).
    3. Download and unpack the WordPress 3.3.2 (latest version as of 27 May 2012) zip file into a directory of your choice.
    4. Download the wp-db-abstraction 1.1.4 (latest version as of 27 May 2012) plugin from the wordpress.org website.

    Now that the basic actions have been done, you can set up and configure your WordPress installation. Unpack and follow the instructions in the README.TXT file to install the Database Abstraction Layer. Mainly you have to:

    - Upload wp-db-abstraction.php and the wp-db-abstraction directory to wp-content/mu-plugins. This should be parallel to your regular plugins directory. If the mu-plugins directory does not exist, you must create it.
    - Move the db.php file from inside the wp-db-abstraction directory to wp-content/db.php.

    Now you can create an application pool in IIS like the following one, then create a website, using the above application pool, that points to the folder where you unpacked the WordPress files. Be sure to give the "Write" permission to the IIS account, as pointed out in this (old, but still quite valid) installation manual: http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#iis

    Now you're ready to go. Point your browser to the configured website and the WordPress installation screen will be there for you. When you're requested to enter information to connect to the MySQL database, simply skip that page, leaving the default values. If you have installed the Database Abstraction Layer, another database installation screen will appear after the MySQL one, and here you can enter the configuration information needed to connect to SQL Server. After finishing the installation steps, you should be able to access and navigate your WordPress site. A final touch, and it's done: just add the needed rewrite rules (http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#urlrewrite) and that's it!

    Well, not really. Unfortunately the current (as of 27 May 2012) version of the Database Abstraction Layer (1.1.4) has some bugs. Luckily they can be quickly fixed:

    - Backslash fix: http://wordpress.org/support/topic/plugin-wp-db-abstraction-fix-problems-with-backslash-usage
    - "Select top 0" fix: make the change to the file .\wp-content\mu-plugins\wp-db-abstraction\translations\sqlsrv\translations.php suggested by "debettap": http://sourceforge.net/tracker/?func=detail&aid=3485384&group_id=315685&atid=1328061

    And now you have a 100% working WordPress installation on SQL Server! Since I also wanted to take advantage of SQL Server Full-Text Search, I've created a very simple WordPress plugin to set up full-text search and use it as the website search engine: http://wpfts.codeplex.com/

    Enjoy!
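    For reference, steps 1 and 2 above can be scripted; the database, login, and user names below are examples of mine, and the same statements can just as well be run directly in SQL Server Management Studio. A small C# sketch that executes them, assuming a sysadmin connection to the server:

        using System.Data.SqlClient;

        class WpDbSetup
        {
            static void Main()
            {
                // Adjust the connection string to your server.
                const string master = "Server=localhost;Database=master;Integrated Security=true";

                string[] statements =
                {
                    "CREATE DATABASE wordpress",
                    "CREATE LOGIN wp_login WITH PASSWORD = 'ChangeMe!123'",
                    "USE wordpress; CREATE USER wp_user FOR LOGIN wp_login",
                    "USE wordpress; EXEC sp_addrolemember 'db_ddladmin',   'wp_user'",
                    "USE wordpress; EXEC sp_addrolemember 'db_datareader', 'wp_user'",
                    "USE wordpress; EXEC sp_addrolemember 'db_datawriter', 'wp_user'"
                };

                using (var conn = new SqlConnection(master))
                {
                    conn.Open();
                    foreach (string sql in statements)
                        using (var cmd = new SqlCommand(sql, conn))
                            cmd.ExecuteNonQuery(); // one batch per statement
                }
            }
        }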


  • Thinktecture.IdentityServer RC

    - by Your DisplayName here!
    I just uploaded the RC of IdentityServer to CodePlex. This release is feature complete, and if I don't get any bug reports this is also pretty much the final V1.

    Changes since B1:
    - The configuration data access is now based on EF 4.1 code first. This makes it much easier to use different data stores. For RTM I will also provide a SQL script for SQL Server so you can move the configuration to a separate machine (e.g. for load balancing scenarios).
    - I included the ASP.NET Universal Providers in the download. This adds official support for SQL Azure, SQL Server, and SQL Compact for the membership, roles, and profile features. Unfortunately the Universal Providers use a different schema than the original ASP.NET providers (that sucks, btw!), so I made them optional. If you want to use them, go to web.config and uncomment the new provider.
    - The relying party registration entries now have extra fields for data that you want to couple with the RP. One use case could be giving the UI a hint of how the login experience should look per RP, which allows a different look and feel for different relying parties. I also included a small helper API that you can use to retrieve the RP record based on the incoming WS-Federation query string.
    - WS-Federation single sign-out now conforms to the spec.
    - Certificate-based endpoint identities for SSL endpoints are now optional.
    - Added an initial configuration "wizard". This sets up the signing certificate, issuer URI, and site title on the first run.

    Installation
    This is still a "developer" release; that means it ships with source code that you have to build, etc. But from that point it should be a little more straightforward than it used to be:

    1. Make sure SSL is configured correctly for IIS.
    2. Map the WebSite directory to a vdir in IIS.
    3. Run the web site. This should bring up the initial configuration.
    4. Make sure the worker process account has access to the signing certificate private key.
    5. Make sure all your users are in the "IdentityServerUsers" role in your role store. Administrators need the "IdentityServerAdministrators" role.

    That should be it. Proper documentation will hopefully be available soon (any volunteers?). Please provide feedback! thanks!


  • A Slice of Raspberry Pi

    - by Phil Factor
    Guest editorial for the ITPro/SysAdmin newsletter

    The Raspberry Pi Foundation has done a superb design job on their new $35 network-enabled Linux computer. This tiny machine, incorporating an ARM processor on a Broadcom BCM2835 multimedia chip, aims to put the fun back into learning computing. The public response has been overwhelmingly positive.

    Note that aim: "…to put the fun back". Education in Information Technology is in dire straits. It always has been, but it seems to have deteriorated further still, even in the face of improved provision of equipment. In many countries, the government controls the curriculum. It predicted a shortage in office-based IT skills, and so geared the ICT curriculum toward mind-numbing training in word-processing and spreadsheet skills. Instead, the shortage has turned out to be in people with an engineering mindset, who can solve problems with whatever technologies are available and learn new techniques quickly, in a rapidly changing field. In retrospect, the assumption that specific training was required rather than an education was an idiotic response to the arrival of mainstream information technology. As a result, ICT became a disaster area, which discouraged a generation of youngsters from a career in IT and thereby led directly to the shortage of people with the skills that are required to exploit the potential of information technology.

    Raspberry Pi aims to reverse the trend. This is a rig that is geared to fast graphics in high resolution. It is no toy. It should be a superb games machine. However, the use of Fedora, Debian, or Arch Linux ARM shows the more serious educational intent behind the Foundation's work. It looks like it will even do some office work too!

    So, get hold of any power supply that provides a 5VDC source at the required 700mA; an old BlackBerry charger will do, or, alternatively, it will run off four AA cells. You'll need a USB hub to support the mouse and keyboard, and maybe a hard drive. You'll want a DVI monitor (with audio out) or TV (sound and video). You'll also need to be able to cope with wired Ethernet 10/100, if you want networking. With this lot assembled, stick the paraphernalia on the back of the HDTV with Blu Tack, get a nice keyboard, and you have a classy Linux-based home computer. The major cost is in the TV and the keyboard. If you're not already writing software for this platform, then maybe, at a time when some countries are talking of orders in the millions, you should consider it.


  • How does I/O work for large graph databases?

    - by tjb1982
    I should preface this by saying that I'm mostly a front-end web developer, trained as a musician, but over the past few years I've been getting more and more into computer science. One idea I have for a fun toy project, to learn about data structures and C programming, is to design and implement my own very simple database that would manage an adjacency list of posts. I don't want SQL (maybe I'll do my own query language? I'm just having fun). It should support ACID. It should be capable of storing, let's say, 1 TB.

    So with that, I was trying to think about how a database even stores data, without regard to data structures necessarily. I'm working on Linux, and I've read that in that world "everything is a file," including hardware (like /dev/*), so I think that obviously has to apply to a database too, and it clearly does: whether it's MySQL or PostgreSQL or Neo4j, the database itself is a collection of files you can see in the filesystem. That said, there comes a point of scale where loading the entire database into primary memory just wouldn't work, so it doesn't make sense to design with that mindset (I assume). However, reading from secondary memory is much slower, and regardless, some portion of the database has to be in primary memory for you to be able to do anything with it.

    I read this post: "Why use a database instead of just saving your data to disk?" And I found it difficult to understand how other databases, like SQLite or Neo4j, read and write from secondary memory and are still very fast (faster, it would seem, than simply writing files to the filesystem, as the above question suggests). It seems the key is indexing. But even indexes need to be stored in secondary memory. They are inherently smaller than the database itself, but indexes in a very large database might be prohibitively large too.

    So my question is: how is I/O generally done with large databases like the one I described above, which would be at least 1 TB storing a big adjacency list? If indexing is more or less the answer, how exactly does indexing work, and what data structures should be involved?
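    As a toy illustration of why indexing changes the picture (the record layout, names, and in-memory index here are all invented for the example): keep the adjacency records in an append-only data file, and keep a much smaller index that maps a node id to the byte offset of its record, so a lookup costs one index probe plus one seek instead of a scan of the whole file. A real engine would persist the index itself in a disk-friendly structure such as a B-tree; here it lives in memory for brevity:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Text;

        class OffsetIndexedStore
        {
            // node id -> byte offset of its adjacency record in the data file
            private readonly Dictionary<long, long> index = new Dictionary<long, long>();
            private readonly string dataPath;

            public OffsetIndexedStore(string dataPath) { this.dataPath = dataPath; }

            // Append "nodeId: n1,n2,..." as one line and remember where it starts.
            public void WriteAdjacency(long nodeId, long[] neighbors)
            {
                using (var fs = new FileStream(dataPath, FileMode.Append, FileAccess.Write))
                {
                    index[nodeId] = fs.Position;
                    byte[] line = Encoding.UTF8.GetBytes(
                        nodeId + ": " + string.Join(",", neighbors) + "\n");
                    fs.Write(line, 0, line.Length);
                }
            }

            // Seek straight to the record instead of scanning the whole file.
            public string ReadAdjacency(long nodeId)
            {
                long offset;
                if (!index.TryGetValue(nodeId, out offset)) return null;
                using (var fs = new FileStream(dataPath, FileMode.Open, FileAccess.Read))
                {
                    fs.Seek(offset, SeekOrigin.Begin);
                    return new StreamReader(fs).ReadLine();
                }
            }

            static void Main()
            {
                var store = new OffsetIndexedStore("graph.dat");
                store.WriteAdjacency(1, new long[] { 2, 3 });
                store.WriteAdjacency(2, new long[] { 1 });
                Console.WriteLine(store.ReadAdjacency(2)); // prints "2: 1"
            }
        }

    The trade-off you noticed applies recursively: once the id-to-offset map no longer fits in RAM, it becomes a paged, disk-resident tree, which is essentially why B-trees (and LSM trees) dominate real database engines.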


  • Why lock statements don't scale

    - by Alex.Davies
    We are going to have to stop using lock statements one day, just like we had to stop using goto statements. The problem is similar: they're pretty easy to follow in small programs, but code with locks isn't composable. That means that small pieces of program that work in isolation can't necessarily be put together and work together. Of course, actors scale fine :)

    Why lock statements don't scale as software gets bigger

    Deadlocks. You have a program with lots of threads picking up lots of locks. You already know that if two of your threads both try to pick up a lock that the other already has, they will deadlock. Your program will come to a grinding halt, and there will be fire and brimstone. "Easy!" you say, "Just make sure all the threads pick up the locks in the same order." Yes, that works (a small sketch of the ordering discipline appears at the end of this post). But you've broken composability. Now, to add a new lock to your code, you have to consider all the other locks already in your code and check that they are taken in the right order. Algorithm buffs will have noticed this approach means it takes quadratic time to write a program. That's bad.

    Why lock statements don't scale as hardware gets bigger

    Memory bus contention. There's another headache, one that most programmers don't usually need to think about, but one that is going to bite us in a big way in a few years. Taking a lock needs exclusive use of the entire system's memory bus. That's not too bad for a single- or dual-core system, but already for quad-core systems it's a pretty large overhead. Have a look at this blog about the .NET 4 ThreadPool for some numbers and a weird analogy (see the author's comment). Not too bad yet, but I'm scared my 1000-core machine of the future is going to go slower than my machine today!

    I don't know the answer to this problem yet. Maybe some kind of per-core work queue system with hierarchical work stealing. Definitely hardware support. But what I do know is that using locks specifically prevents any solution to this. We should be abstracting our code away from the details of locks as soon as possible, so we can swap in whatever solution arrives when it does.

    NAct uses locks at the moment. But my advice is that you code using actors (which do scale well as software gets bigger). And when there's a better way of implementing actors that'll scale well as hardware gets bigger, only NAct needs to work out how to use it, and your program will go fast on its own.
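    For concreteness, here is a minimal C# sketch of the lock-ordering discipline described above; the account type and the use of Id as the global ordering key are invented for the example:

        using System;

        class Account
        {
            public readonly int Id; // doubles as the global lock-ordering key
            public decimal Balance;
            public Account(int id, decimal balance) { Id = id; Balance = balance; }
        }

        class Transfers
        {
            // Always lock the account with the smaller Id first. Any two threads
            // touching the same pair then acquire the locks in the same order and
            // cannot deadlock. The catch, as argued above: every lock added later
            // must respect this one global order, which is why it doesn't compose.
            public static void Transfer(Account from, Account to, decimal amount)
            {
                Account first = from.Id < to.Id ? from : to;
                Account second = from.Id < to.Id ? to : from;
                lock (first)
                lock (second)
                {
                    from.Balance -= amount;
                    to.Balance += amount;
                }
            }

            static void Main()
            {
                var a = new Account(1, 100m);
                var b = new Account(2, 50m);
                Transfer(a, b, 25m);
                Transfer(b, a, 10m); // opposite direction, same lock order
                Console.WriteLine("{0} {1}", a.Balance, b.Balance); // 85 65
            }
        }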


  • Where are my sub templates?

    - by Tim Dexter
    This one is for standalone/BIEE uses of Publisher. All the ERP/CRM/HCM folks are already catered for and can tuck into a nut cutlet and arugula salad. Sorry, I have just watched Food Inc, and even if only half of it is true, I'm still on a crusade in my house against mass-produced food. Wake up, world!

    If you have ventured into the world of sub templating, you'll be reaping some development benefit. In terms of shared report components and calculations, they are very useful. Just exporting all of your report headers and footers to a single sub template can potentially save you hours and hours of work and make you look like a star. If someone in management gets it into their head that they would like Comic Sans Serif font rather than Arial in their report headers, it's a 10-minute job rather than 100 hours!

    What about the rest of the report content? I hear you cry. It's coming in 11g: full master template support. Your management wants bright blue borders with yellow backgrounds for all the tables in your reports? A 5-minute job!

    Getting back to sub templates and my comment that all the ERP/CRM/HCM folks are catered for: in the standalone release there is no out-of-the-box directory where you can drop your sub templates. Dropping them into the main report directory would make sense, but they are not accessible there via a URL. An oversight on our part, and something that will be addressed in 11g. Sub templates are now a first-class citizen in the world of BIP; you can upload them and BIP will know what to do with them. But what do you do right now? The easiest place to put them where BIP can 'see' them is a directory created under the xmlpserver install directory in the J2EE container, e.g. $J2EE_HOME/xmlpserver/xmlpserver/subtemplates

    You can call it whatever you want, but when the server is started up, that directory is accessible via a URL, i.e. http://tdexter:9704/xmlpserver/subtemplates/mysub.rtf. You can therefore reference it at the top of your main templates and call the sub template:

        <?import: http://tdexter:9704/xmlpserver/subtemplates/mysub.rtf?>

    Of course, you can drop them anywhere you want; they just need to be in a web-server-mountable directory. Enjoy the arugula!
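    As a quick usage note (assuming the sub template file marks its shared content with the usual <?template:mysub?> declaration), after the import at the top of the main template you invoke it wherever the shared content should appear:

        <?import: http://tdexter:9704/xmlpserver/subtemplates/mysub.rtf?>
        <?call-template:mysub?>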


  • New Options for MySQL High Availability

    - by Mat Keep
    Data is the currency of today's web, mobile, social, enterprise, and cloud applications. Ensuring data is always available is a top priority for any organization; minutes of downtime will result in significant loss of revenue and reputation.

    There is no "one size fits all" approach to delivering high availability (HA). Unique application attributes, business requirements, operational capabilities, and legacy infrastructure can all influence HA technology selection. And technology is only one element in delivering HA: people and processes are just as critical as the technology itself. For this reason, MySQL Enterprise Edition is available supporting a range of HA solutions, fully certified and supported by Oracle. MySQL Enterprise HA is not some expensive add-on; it is included within the core Enterprise Edition offering, along with the management tools, consulting, and 24x7 support needed to deliver true HA.

    At the recent MySQL Connect conference, we announced new HA options for MySQL users running on both Linux and Solaris:
    - DRBD for MySQL
    - Oracle Solaris Clustering for MySQL

    DRBD (Distributed Replicated Block Device) is an open source Linux kernel module which leverages synchronous replication to deliver highly available database applications across local storage. DRBD synchronizes database changes by mirroring data from an active node to a standby node and supports automatic failover and recovery. Linux, DRBD, Corosync, and Pacemaker provide an integrated stack of mature and proven open source technologies. (DRBD stack: providing synchronous replication for the MySQL database with InnoDB.) Download the DRBD for MySQL whitepaper to learn more, including step-by-step instructions to install, configure, and provision DRBD with MySQL.

    Oracle Solaris Cluster provides high availability and load balancing to mission-critical applications and services in physical or virtualized environments. With Oracle Solaris Cluster, organizations have a scalable and flexible solution suited equally to small clusters in local datacenters and to larger multi-site, multi-cluster deployments that are part of enterprise disaster recovery implementations. The Oracle Solaris Cluster MySQL agent integrates seamlessly with MySQL, offering a selection of configuration options in the various Oracle Solaris Cluster topologies.

    Putting it all together: when you add MySQL Replication and MySQL Cluster into the HA mix, along with third-party solutions, users have extensive choice (and decisions to make) to deliver HA services built on MySQL. To make the decision process simpler, we have also published a new MySQL HA Solutions Guide. Exploring beyond just the technology, the guide presents a methodology for selecting the best HA solution for your new web, cloud, and mobile services, while also discussing the importance of people and process in ensuring service continuity. This is a subject recently presented at Oracle OpenWorld, and the slides are available here.

    Whatever your uptime requirements, you can be sure MySQL has an HA solution for your needs. Please don't hesitate to let us know of your HA requirements in the comments section of this blog. You can also contact MySQL consulting to learn more about their HA Jumpstart offering, which will help you scope out your scaling and HA requirements.


  • ADF @ Virtual Developer Day: Oracle Fusion Development;July 10th 2012

    - by JuergenKress
    Virtual Developer Day: Oracle Fusion Development. Register now for this FREE hands-on online workshop. Get up to date and learn everything you wanted to know about Oracle ADF and Fusion Development, plus live Q&A chats with Oracle technical staff.

    Oracle Application Development Framework (ADF) is the standards-based, strategic framework for Oracle Fusion Applications and Oracle Fusion Middleware. Oracle ADF's integration with the Oracle SOA Suite, Oracle WebCenter, and Oracle BI creates a complete, productive development platform for your custom applications. Join us at this FREE virtual event and learn the latest in Fusion Development, including:

    - Is Oracle ADF development faster and simpler than Forms, Apex, or .Net?
    - Mobile application development with ADF Mobile
    - Oracle ADF development with Eclipse
    - Oracle WebCenter Portal and ADF development
    - Application lifecycle management with ADF
    - Building process-centric applications with ADF and BPM
    - Oracle Business Intelligence and ADF integration
    - Live Q&A chats with Oracle technical staff

    Developer lead, manager, or architect: this event has something for everyone. Don't miss this opportunity.

    Tuesday, July 10, 2012
    - 9:00 a.m. PT - 1:00 p.m. PT
    - 11:00 a.m. CT - 3:00 p.m. CT
    - 12:00 p.m. ET - 4:00 p.m. ET
    - 1:00 p.m. BRT - 5:00 p.m. BRT

    Agenda (Track 1: Introduction to Fusion Development; Track 2: What's New in Fusion Development; Track 3: Fusion Development in the Enterprise):
    - 9:00 a.m. Opening
    - 9:30 a.m. Keynote: Oracle Fusion Development
    - 10:00 a.m. Track 1: Is Oracle ADF Development Faster and Simpler than Oracle Forms, APEX or .Net? / Track 2: Mobile Application Development with ADF Mobile / Track 3: Oracle WebCenter Portal and ADF Development
    - 11:00 a.m. Track 1: Rich Web UI Made Simple - an ADF Faces Overview / Track 2: Oracle Enterprise Pack for Eclipse - ADF Development / Track 3: Building Process Centric Applications with ADF and BPM
    - 12:00 noon Track 1: Next Generation Controller for JSF / Track 2: Application Lifecycle Management for ADF / Track 3: Oracle Business Intelligence and ADF Integration

    Session abstracts are available on the event page. Register online now for this FREE event!

    WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • WebCenter Customer Spotlight: College of American Pathologists

    - by me
    Author: Peter Reiser – Social Business Evangelist, Oracle WebCenter

    Solution Summary

    College of American Pathologists goes live with Oracle WebCenter – Imaging, AP Invoice Automation, and EBS Managed Attachment with support for imaging content. The College of American Pathologists (CAP) is the leading organization of board-certified pathologists, serving more than 18,000 physician members; more than 7,000 laboratories are accredited by the CAP, and approximately 22,000 laboratories are enrolled in the College's proficiency testing programs. The business objective was to content-enable their Oracle E-Business Suite (EBS) enterprise application by combining the best of the Imaging and Managed Attachment functionality, giving the business unprecedented access to both structured and unstructured content from within their enterprise application. The solution improves customer service turnaround time, provides better compliance, and improves maintenance and management of the technology infrastructure.

    Company Overview

    The College of American Pathologists (CAP), celebrating 50 years as the gold standard in laboratory accreditation, is a medical society serving more than 17,000 physician members and the global laboratory community. It is the world's largest association composed exclusively of board-certified pathologists and is the worldwide leader in laboratory quality assurance. The College advocates accountable, high-quality, and cost-effective patient care. Its more than 17,000 pathologist members represent board-certified pathologists and pathologists in training worldwide. More than 7,000 laboratories are accredited by the CAP, and approximately 23,000 laboratories are enrolled in the College's proficiency testing programs.

    Business Challenges

    The CAP business objective was to content-enable their Oracle E-Business Suite (EBS) enterprise application by combining the best of the Imaging and Managed Attachment functionality, giving the business unprecedented access to both structured and unstructured content from within their enterprise application:

    - Bring more flexibility to systems and programs in order to adapt quickly
    - Get a 360-degree view of the customer
    - Reduce the cost of running the business

    Solution Deployed

    With the help of Oracle Consulting, the customer implemented Oracle WebCenter Content as the centralized E-Business Suite document repository. The solution makes it possible to capture, present, and manage all unstructured content (PDFs, word processing documents, scanned images, etc.) related to Oracle E-Business Suite transactions, exposing the related content through the familiar EBS user interface.

    Business Results

    The CAP achieved the following benefits from the implemented solution.

    Managed Attachment solution:

    - Alignment with the strategic Oracle Fusion Middleware platform
    - Integration with CAP's existing data capture capabilities
    - A single user interface, provided by the Managed Attachment solution, for all content
    - Better compliance and improved collaboration

    Accounts Payable invoice processing imaging solution:

    - Automated invoice management, eliminating dependency on paper materials and improving compliance, collaboration, and accuracy
    - A single repository to house and secure scanned invoices and all supplemental documents
    - Greater management visibility of the invoice entry process

    Additional Information

    - CAP OpenWorld Presentation
    - Oracle WebCenter Content
    - Oracle WebCenter Capture
    - Oracle WebCenter Imaging
    - Oracle Consulting

    Read the article

  • JWT Token Security with Fusion Sales Cloud

    - by asantaga
    When integrating Sales Cloud with a 3rd party application, you often need to pass the user's identity to the 3rd party application so that:

    - The 3rd party application knows who the user is
    - The 3rd party application can make web service callbacks to Sales Cloud as that user

    Until recently, without using SAML this wasn't easily possible, and one workaround was to pass the username, potentially even the password, from Sales Cloud to the 3rd party application using URL parameters. With Oracle Fusion R8 we now have a proper solution, called "JWT Token support". This is based on the industry-standard JSON Web Token (JWT); for more information see here.

    JWT works by allowing the user to generate a token (valid for a short period of time) for a specific application. This token is then passed to the 3rd party application as a GET parameter. The 3rd party application can then call into Sales Cloud and use this token for all web service calls, which will be executed as the user who generated the token; alternatively, it can call a special HR web service (UserService's findSelfUserDetails()) with the token, and Fusion will respond with the user's details.

    Some more details

    The following walks through the scenario of embedding a 3rd party application within a web content frame (iFrame) on the opportunity screen.

    1. Define your application using the topology manager in Setup and Maintenance. See this documentation link on topology manager.

    2. From within the groovy script that defines the iFrame you wish to embed, write some code that looks like this:

        def thirdpartyapplicationurl = oracle.topologyManager.client.deployedInfo.DeployedInfoProvider.getEndPoint("My3rdPartyApplication")
        def crmkey = (new oracle.apps.fnd.applcore.common.SecuredTokenBean().getTrustToken())
        def url = thirdpartyapplicationurl + "param1=" + OptyId + "&jwt=" + crmkey
        return (url)

    This snippet generates a URL which contains:

    - The hostname/endpoint of the 3rd party application
    - Two parameters: the opportunity ID, stored in parameter "param1", and the JWT token, stored in parameter "jwt"

    3. From your 3rd party application you now have two options:

    - Execute a web service call by first setting the header parameter "Authentication" to the JWT token. The web service call will be executed against Fusion Applications as the user who generated the token.
    - To find out "who you are", set the same "Authentication" header parameter and call the special web service findSelfUserDetails() in the UserDetailsService.

    For more information

    - Oracle Sales Cloud documentation, specifically the chapter on JWT tokens
    - OTN samples, specifically the Rich UI With JWT Token sample
    - Oracle Fusion Applications general documentation
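    As an illustration of step 3, here is a minimal sketch of the 3rd party application's side in C#. It is my own example, not from the product documentation: the endpoint URL is hypothetical, many Sales Cloud web services are SOAP rather than plain HTTP GET, and the header name simply follows the description above, so treat this as the shape of the flow rather than a definitive implementation.

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class JwtCallbackSketch
        {
            // 'jwt' is the token the 3rd party application received from Sales Cloud
            // as the "jwt" GET parameter on the embedded iFrame URL.
            static async Task<string> CallBackAsUser(string jwt)
            {
                using (var client = new HttpClient())
                {
                    // Attach the token so the call executes as the user who generated it.
                    client.DefaultRequestHeaders.Add("Authentication", jwt);

                    // Hypothetical endpoint standing in for a Sales Cloud web service.
                    var response = await client.GetAsync("https://fusion-host.example.com/userDetailsService");
                    response.EnsureSuccessStatusCode();
                    return await response.Content.ReadAsStringAsync();
                }
            }
        }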

    Read the article

  • Adding Attributes to Generated Classes

    ASP.NET MVC 2 adds support for data annotations, implemented via attributes on your model classes. Depending on your design, you may be using an O/RM tool like Entity Framework or LINQ to SQL to generate your entity classes, and you may further be using these entities directly as your model. This is fairly common, and alleviates the need to map between POCO domain objects and such entities (though there are certainly pros and cons to using such entities directly).

    As an example, the current version of the NerdDinner application (available on CodePlex at nerddinner.codeplex.com) uses Entity Framework for its model. Thus, there is a NerdDinner.edmx file in the project, and a generated NerdDinner.Models.Dinner class. Fortunately, these generated classes are marked as partial, so you can extend their behavior via your own partial class in a separate file. However, if for instance the generated Dinner class has a property Title of type string, you can't then add your own Title of type string for the purpose of adding data annotations to it, like this:

        public partial class Dinner
        {
            [Required]
            public string Title { get; set; }
        }

    This will result in a compilation error, because the generated Dinner class already contains a definition of Title. How then can we add attributes to this generated code? Do we need to go into the T4 template and add a special case that says "if we're generating a Dinner class and it has a Title property, add this attribute"? Ick.

    MetadataType to the Rescue

    The MetadataType attribute can be used to define a type which contains attributes (metadata) for a given class. It is applied to the class you want to add metadata to (Dinner), and it refers to a totally separate class to which you're free to add whatever methods and properties you like. Using this attribute, our partial Dinner class might look like this:

        [MetadataType(typeof(Dinner_Validation))]
        public partial class Dinner
        {
        }

        public class Dinner_Validation
        {
            [Required]
            public string Title { get; set; }
        }

    In this case the Dinner_Validation class is public, but if you were concerned about muddying your API with such classes, it could instead have been created as a private class within Dinner. Having the validation attributes specified in their own class (with no other responsibilities) complies with the Single Responsibility Principle and makes it easy for you to test that the validation rules you expect are in place via these annotations/attributes.

    Thanks to Julie Lerman for her help with this. Right after she showed me how to do this, I realized it was also already being done in the project I was working on.
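    To see the buddy class take effect, here is a minimal sketch of an MVC controller action. It is my own illustration rather than NerdDinner's actual code, and the repository object is hypothetical: posting a Dinner without a Title causes the [Required] rule declared on Dinner_Validation to fail validation during model binding.

        using System.Web.Mvc;
        using NerdDinner.Models;

        public class DinnersController : Controller
        {
            // Hypothetical repository; NerdDinner's actual data access may differ.
            private readonly DinnerRepository repository = new DinnerRepository();

            [HttpPost]
            public ActionResult Create(Dinner dinner)
            {
                // The [Required] attribute on Dinner_Validation.Title is evaluated
                // during model binding, so a missing Title surfaces as a ModelState error.
                if (!ModelState.IsValid)
                {
                    return View(dinner);
                }

                repository.Add(dinner);
                repository.Save();
                return RedirectToAction("Index");
            }
        }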

    Read the article

  • More SharePoint 2010 Expression Builders

    - by Ricardo Peres
    Introduction

    Following my last post, I decided to publish the whole set of expression builders that I use with SharePoint. For those who don't know about expression builders: they allow us to employ a declarative approach, so that we don't have to write code for "gluing" things together, like getting a value from the query string, the page's underlying SPListItem, or the current SPContext and assigning it to a control's property. These expression builders cover some quite common scenarios; I use them quite often, and I hope you find them useful as well.

    SPContextExpression

    This expression builder allows us to specify an expression to be processed on the SPContext.Current property object. For example:

        <asp:Literal runat="server" Text="<%$ SPContextExpression:Site.RootWeb.Lists[0].Author.LoginName %>" />

    It is identical to having the following code:

        String authorName = SPContext.Current.Site.RootWeb.Lists[0].Author.LoginName;

    SPFarmProperty

    Returns a property stored at the farm level:

        <asp:Literal runat="server" Text="<%$ SPFarmProperty:SomeProperty %>" />

    Identical to:

        Object someProperty = SPFarm.Local.Properties["SomeProperty"];

    SPField

    Returns the value of a field of the page's underlying list item:

        <asp:Literal runat="server" Text="<%$ SPField:Title %>" />

    Does the same as:

        String title = SPContext.Current.ListItem["Title"] as String;

    SPIsInAudience

    Checks if the current user belongs to an audience:

        <asp:CheckBox runat="server" Checked="<%$ SPIsInAudience:SomeAudience %>" />

    Equivalent to:

        AudienceManager audienceManager = new AudienceManager(SPServiceContext.Current);
        Audience audience = audienceManager.Audiences["SomeAudience"];
        Boolean isMember = audience.IsMember(SPContext.Current.Web.User.LoginName);

    SPIsInGroup

    Checks if the current user belongs to a group:

        <asp:CheckBox runat="server" Checked="<%$ SPIsInGroup:SomeGroup %>" />

    The equivalent C# code is:

        SPContext.Current.Web.CurrentUser.Groups.OfType<SPGroup>().Any(x => String.Equals(x.Name, "SomeGroup", StringComparison.OrdinalIgnoreCase));

    SPProperty

    Returns the value of a user profile property for the current user:

        <asp:Literal runat="server" Text="<%$ SPProperty:LastName %>" />

    Where the same code in C# would be:

        UserProfileManager upm = new UserProfileManager(SPServiceContext.Current);
        UserProfile u = upm.GetUserProfile(false);
        Object property = u["LastName"].Value;

    SPQueryString

    Returns a value passed on the query string:

        <asp:GridView runat="server" PageIndex="<%$ SPQueryString:PageIndex %>" />

    Is equivalent to (no SharePoint code this time):

        Int32 pageIndex = (Int32) Convert.ChangeType(HttpContext.Current.Request.QueryString["PageIndex"], typeof(Int32));

    SPWebProperty

    Returns the value of a property stored at the site level:

        <asp:Literal runat="server" Text="<%$ SPWebProperty:__ImagesListId %>" />

    You can get the same result as:

        String imagesListId = SPContext.Current.Web.AllProperties["__ImagesListId"] as String;

    Code

    OK, let's move to the code.
    First, a common abstract base class, mainly for inheriting the conversion method:

        public abstract class SPBaseExpressionBuilder : ExpressionBuilder
        {
            // Converts the raw value to the type of the target control property.
            protected static Object Convert(Object value, PropertyInfo propertyInfo)
            {
                if (value != null)
                {
                    if (propertyInfo.PropertyType.IsAssignableFrom(value.GetType()) == false)
                    {
                        if (propertyInfo.PropertyType.IsEnum == true)
                        {
                            value = Enum.Parse(propertyInfo.PropertyType, value.ToString(), true);
                        }
                        else if (propertyInfo.PropertyType == typeof(String))
                        {
                            value = value.ToString();
                        }
                        else if ((typeof(IConvertible).IsAssignableFrom(propertyInfo.PropertyType) == true) && (typeof(IConvertible).IsAssignableFrom(value.GetType()) == true))
                        {
                            value = System.Convert.ChangeType(value, propertyInfo.PropertyType);
                        }
                    }
                }

                return (value);
            }

            // Emits a call to the derived class's GetValue method at page compile time.
            public override CodeExpression GetCodeExpression(BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                if (String.IsNullOrEmpty(entry.Expression) == true)
                {
                    return (new CodePrimitiveExpression(String.Empty));
                }
                else
                {
                    return (new CodeMethodInvokeExpression(new CodeMethodReferenceExpression(new CodeTypeReferenceExpression(this.GetType()), "GetValue"), new CodePrimitiveExpression(entry.Expression.Trim()), new CodePropertyReferenceExpression(new CodeArgumentReferenceExpression("entry"), "PropertyInfo")));
                }
            }

            public override Boolean SupportsEvaluate
            {
                get
                {
                    return (true);
                }
            }
        }

    Next, the code for each expression builder:

        [ExpressionPrefix("SPContext")]
        public class SPContextExpressionBuilder : SPBaseExpressionBuilder
        {
            public static Object GetValue(String expression, PropertyInfo propertyInfo)
            {
                SPContext context = SPContext.Current;
                Object expressionValue = DataBinder.Eval(context, expression.Trim().Replace('\'', '"'));

                return (Convert(expressionValue, propertyInfo));
            }

            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
        }

        [ExpressionPrefix("SPFarmProperty")]
        public class SPFarmPropertyExpressionBuilder : SPBaseExpressionBuilder
        {
            public static Object GetValue(String propertyName, PropertyInfo propertyInfo)
            {
                Object propertyValue = SPFarm.Local.Properties[propertyName];

                return (Convert(propertyValue, propertyInfo));
            }

            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
        }

        [ExpressionPrefix("SPField")]
        public class SPFieldExpressionBuilder : SPBaseExpressionBuilder
        {
            public static Object GetValue(String fieldName, PropertyInfo propertyInfo)
            {
                Object fieldValue = SPContext.Current.ListItem[fieldName];

                return (Convert(fieldValue, propertyInfo));
            }

            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
        }

        [ExpressionPrefix("SPIsInAudience")]
        public class SPIsInAudienceExpressionBuilder : SPBaseExpressionBuilder
        {
            public static Object GetValue(String audienceName, PropertyInfo info)
            {
                audienceName = audienceName.Trim();

                // Strip optional surrounding single quotes from the audience name.
                if ((audienceName.StartsWith("'") == true) && (audienceName.EndsWith("'") == true))
                {
                    audienceName = audienceName.Substring(1, audienceName.Length - 2);
                }

                AudienceManager manager = new AudienceManager();
                Object value = manager.IsMemberOfAudience(SPControl.GetContextWeb(HttpContext.Current).CurrentUser.LoginName, audienceName);

                if (info.PropertyType == typeof(String))
                {
                    value = value.ToString();
                }

                return (value);
            }

            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
        }

        [ExpressionPrefix("SPIsInGroup")]
        public class SPIsInGroupExpressionBuilder : SPBaseExpressionBuilder
        {
            public static Object GetValue(String groupName, PropertyInfo info)
            {
                groupName = groupName.Trim();

                // Strip optional surrounding single quotes from the group name.
                if ((groupName.StartsWith("'") == true) && (groupName.EndsWith("'") == true))
                {
                    groupName = groupName.Substring(1, groupName.Length - 2);
                }

                Object value = SPControl.GetContextWeb(HttpContext.Current).CurrentUser.Groups.OfType<SPGroup>().Any(x => String.Equals(x.Name, groupName, StringComparison.OrdinalIgnoreCase));

                if (info.PropertyType == typeof(String))
                {
                    value = value.ToString();
                }

                return (value);
            }

            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
        }

        [ExpressionPrefix("SPProperty")]
        public class SPPropertyExpressionBuilder : SPBaseExpressionBuilder
        {
            public static Object GetValue(String propertyName, PropertyInfo propertyInfo)
            {
                SPServiceContext serviceContext = SPServiceContext.GetContext(HttpContext.Current);
                UserProfileManager upm = new UserProfileManager(serviceContext);
                UserProfile up = upm.GetUserProfile(false);
                Object propertyValue = (up[propertyName] != null) ? up[propertyName].Value : null;

                return (Convert(propertyValue, propertyInfo));
            }

            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
        }

        [ExpressionPrefix("SPQueryString")]
        public class SPQueryStringExpressionBuilder : SPBaseExpressionBuilder
        {
            public static Object GetValue(String parameterName, PropertyInfo propertyInfo)
            {
                Object parameterValue = HttpContext.Current.Request.QueryString[parameterName];

                return (Convert(parameterValue, propertyInfo));
            }

            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
        }

        [ExpressionPrefix("SPWebProperty")]
        public class SPWebPropertyExpressionBuilder : SPBaseExpressionBuilder
        {
            public static Object GetValue(String propertyName, PropertyInfo propertyInfo)
            {
                Object propertyValue = SPContext.Current.Web.AllProperties[propertyName];

                return (Convert(propertyValue, propertyInfo));
            }

            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
        }

    Registration

    You probably know how to register them, but here it goes again: add the following snippet to your Web.config file, inside the configuration/system.web/compilation/expressionBuilders section:

        <add expressionPrefix="SPContext" type="MyNamespace.SPContextExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPFarmProperty" type="MyNamespace.SPFarmPropertyExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPField" type="MyNamespace.SPFieldExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPIsInAudience" type="MyNamespace.SPIsInAudienceExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPIsInGroup" type="MyNamespace.SPIsInGroupExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPProperty" type="MyNamespace.SPPropertyExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPQueryString" type="MyNamespace.SPQueryStringExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPWebProperty" type="MyNamespace.SPWebPropertyExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />

    I'll leave it up to you to figure out the best way to deploy this to your server!

    Read the article
