Search Results

Search found 47740 results on 1910 pages for 'oracle database appliance general'.

  • Ops Center zip documentation

    - by Owen Allen
    If you're operating in a dark site, or are otherwise without easy access to the internet, it can be tricky to get access to the docs. The readme comes along with the product, but that's not exactly the same as the whole doc library. Well, we've put a zip file with the whole doc library contents up on the main doc page. So, if you're at a site without internet access, you can get the zip, extract it, and have a portable version of the site, including the PDF and HTML versions of all of the docs.

  • DB DOC Enhancements for Oracle SQL Developer v4

    - by thatjeffsmith
    One of our more popular features is 'DB Doc.' It's like Javadoc for the database: pick a connection, right-click, and go. It will generate an HTML documentation set for that schema. For version 4, we've introduced a few enhancements based on user requests. That's right, you asked, and we listened:

    - Added support for package bodies
    - Added a parallelization option for larger doc sets
    - Enhanced the HTML formatting a bit

    We've changed the default selection of object types to be included, and there's also an option to auto-open the documentation set after it's been generated.

  • Top 10 Reasons to Use MySQL and MySQL Cluster as an Embedded Database

    - by Rob Young
    If you are considering using MySQL and/or MySQL Cluster as the embedded database solution for your application, you should join us for today's webcast, where we will discuss how you can cut costs, add flexibility, and benefit from the new performance and scalability enhancements now available in MySQL 5.6 and MySQL Cluster 7.2. We will cover the top 10 reasons that make MySQL and MySQL Cluster the best solutions for embedding in both shrink-wrapped and SaaS-delivered applications, how industry leaders leverage MySQL products, and how you can get started with the latest innovations and support offerings across the MySQL product line. You can learn more and reserve your seat here. As always, thanks for your support of MySQL!

  • Using Live Data in Database Development Work

    - by Phil Factor
    Guest Editorial for the Simple-Talk Newsletter... in which Phil Factor reacts with some exasperation on coming across a report that a majority of companies are still using financial and personal data for both developing and testing database applications.

    If you routinely test your development work using real production data that contains personal or financial information, you are probably being irresponsible, and at worst risking a heavy financial penalty for your company. Surprisingly, over 80% of financial companies still do this. Plenty of data breaches and fraud have resulted from the use of real data for testing, and a data breach is a nightmare for any organisation that suffers one. The average cost of a data breach, counting notification, escalation, credit monitoring, fines, litigation, legal costs, and lost business due to customer churn, is around $7.2 million in the US and £1.9 million in the UK, and 70% of data breaches originate from within the organisation.

    Real data can be exploited in a number of ways for malicious or criminal purposes. It isn't just the obvious use of items such as name and address, date of birth, social security number, and credit card and bank account numbers: data can be exploited in many subtle ways, so there are excellent reasons to give a high priority to the detection and prevention of data breaches. You'll never successfully guess all the ways that real data can be exploited maliciously, or the ease with which it can be accessed.

    It would be silly to argue that developers never need access to a copy of the database containing live data. Developers sometimes need to track a bug that can only be replicated on the data from the live database. However, it has to be done in a very restrictive harness. The law makes no distinction between development and production databases when a data breach occurs, so the data has to be held with all appropriate security measures in place. In Europe, the use of personal data for testing requires the explicit consent of the people whose data is being held. There are federal standards such as GLBA, PCI DSS and HIPAA, and most US states have privacy legislation. The task of ensuring compliance and tight security in such circumstances is an expensive and time-consuming overhead, and the developer is likely to face investigation if a data breach occurs, even if the company manages to stay in business.

    Ironically, the use of copies of live data isn't usually the most effective way to develop or test. Data is time-specific and usually no longer current by the time it is used for testing. Existing data doesn't help much for new functionality, and every time the data is refreshed from production, any test data is likely to be overwritten. Nor is it always going to exercise the 'edge' conditions that are likely to flush out bugs. You still have the task of simulating the dynamics of actual usage of the database, and here you have no alternative to creating 'spoofed' data.

    Because of the complexities of relational data, it used to be that there was no realistic alternative to developing and testing with live data. However, this is no longer the case. Real data can be obfuscated, or it can be created entirely from scratch. The latter used to be impractical, but there are now plenty of third-party tools to choose from. The process of obfuscation isn't risk-free: it must access the live data, and the success of the obfuscation has to be carefully monitored.
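    To make the obfuscation point concrete, here is a minimal sketch of column-level masking in SQL; the table and column names are hypothetical, and a production masking job would also have to preserve data distributions and referential integrity:

        -- Hypothetical table and columns, for illustration only
        UPDATE dbo.Customer
        SET    FirstName    = 'First' + CAST(CustomerID AS varchar(10)),  -- deterministic pseudonym
               LastName     = 'Last'  + CAST(CustomerID AS varchar(10)),
               DateOfBirth  = DATEADD(DAY, ABS(CHECKSUM(NEWID())) % 365, '1970-01-01'),  -- random plausible value
               CreditCardNo = NULL;   -- never carry real card numbers into dev or test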
    Database data security isn't an exciting topic to you or me, but to a hacker it can be an all-consuming obsession, especially if there is financial or political gain involved. This is not the sort of adversary one would wish for, and it is far better to accept, and work with, the security restrictions that exist for using live data in database development work, especially when the tools exist to create large, realistic database test data that can be better for several aspects of testing.

  • Webcast Replay Available: SOA Integration Options for E-Business Suite

    - by BillSawyer
    I am pleased to release the replay and presentation for the latest ATG Live Webcast: SOA Integration Options for E-Business Suite (Presentation).

    Abhishek Verma, Manager, Applications Technology Group, and Rajesh Ghosh, Group Manager, ATG Development, discussed the web service and SOA integration options for Oracle E-Business Suite. The presentation covered Oracle's integration tools and technologies, including the Oracle Applications Adapter and the Integrated SOA Gateway.

    Finding other recorded ATG webcasts: the catalog of ATG Live Webcast replays, presentations, and all ATG training materials is available in this blog's Webcasts and Training section.

  • Do you play Sudoku?

    - by Gilles Haro
    Did you know that an 11gR2 database can solve a Sudoku puzzle with a single query, most of the time in less than a second? The following query shows you how: simply pass it a flattened Sudoku grid and get the result almost instantaneously.

        col "Solution" format a9
        col "Problem" format a9

        with Iteration( InitialSudoku, Step, EmptyPosition ) as
        ( select InitialSudoku, InitialSudoku, instr( InitialSudoku, '-' )
            from ( select '--64----2--7-35--1--58-----27---3--4---------4--2---96-----27--7--58-6--3----18--' InitialSudoku
                     from dual )
          union all
          select InitialSudoku
               , substr( Step, 1, EmptyPosition - 1 ) || OneDigit || substr( Step, EmptyPosition + 1 )
               , instr( Step, '-', EmptyPosition + 1 )
            from Iteration
               , ( select to_char( rownum ) OneDigit from dual connect by rownum <= 9 ) OneDigit
           where EmptyPosition > 0
             and not exists
                 ( select null
                     from ( select rownum IsPossible from dual connect by rownum <= 9 )
                    where OneDigit = substr( Step, trunc( ( EmptyPosition - 1 ) / 9 ) * 9 + IsPossible, 1 )   -- each row must contain the digits 1-9
                       or OneDigit = substr( Step, mod( EmptyPosition - 1, 9 ) - 8 + IsPossible * 9, 1 )      -- each column must contain the digits 1-9
                       or OneDigit = substr( Step, mod( trunc( ( EmptyPosition - 1 ) / 3 ), 3 ) * 3           -- each 3x3 square must contain the digits 1-9
                                   + trunc( ( EmptyPosition - 1 ) / 27 ) * 27 + IsPossible
                                   + trunc( ( IsPossible - 1 ) / 3 ) * 6, 1 )
                 ) )
        select InitialSudoku "Problem", Step "Solution"
          from Iteration
         where EmptyPosition = 0;

    The magic behind this is called Recursive Subquery Factoring. The Oracle documentation gives the following definition: if a subquery_factoring_clause refers to its own query_name in the subquery that defines it, then the subquery_factoring_clause is said to be recursive. A recursive subquery_factoring_clause must contain two query blocks: the first is the anchor member and the second is the recursive member. The anchor member must appear before the recursive member, and it cannot reference query_name. The anchor member can be composed of one or more query blocks combined by the set operators UNION ALL, UNION, INTERSECT or MINUS. The recursive member must follow the anchor member and must reference query_name exactly once. You must combine the recursive member with the anchor member using the UNION ALL set operator.

    This new feature is a replacement for the old Hierarchical Query feature that has existed in Oracle since the days of Aladdin (well, at least since release 2 of the database, in 1979). Everyone remembers the old syntax:

        select empno, ename, job, mgr, level
          from emp
         start with mgr is null
        connect by prior empno = mgr;

    which could (and should, though not as often as it deserves) be rewritten as:

        with T_Emp (empno, name, job, mgr, hierlevel) as
           ( select empno, ename, job, mgr, level
               from emp
              start with mgr is null
            connect by prior empno = mgr )
        select * from T_Emp;

    using the "with" syntax, whose main advantage is the improved readability of the query. Although very efficient, this syntax had the disadvantage of not being ANSI SQL. The ANSI SQL version of hierarchical queries is called Recursive Subquery Factoring, and as of 11gR2 Oracle is compliant with ANSI SQL and supports it.
    It is basically an extension of the "with" clause that enables recursion. The new syntax for the query would be:

        with T_Emp (empno, name, job, mgr, hierlevel) as
           ( select E.empno, E.ename, E.job, E.mgr, 1
               from emp E
              where E.mgr is null
             union all
             select E.empno, E.ename, E.job, E.mgr, T.hierlevel + 1
               from emp E
               join T_Emp T on ( E.mgr = T.empno ) )
        select * from T_Emp;

    The anchor member is a replacement for the "start with" clause. The recursive member is processed through iterations: it joins the source table (EMP) with the result of the recursive query itself (T_Emp). Each iteration works with the results of all its preceding iterations:

    - Iteration 1 works on the results of the first query
    - Iteration 2 works on the results of iteration 1 and the first query
    - Iteration 3 works on the results of iterations 1 and 2 and the first query

    Knowing that, the Sudoku query is self-explanatory:

    - The anchor member contains the "problem": the initial Sudoku grid and the position of the first "hole" in it.
    - The recursive member tries to replace the considered hole with any of the 9 digits that satisfy the 3 rules of Sudoku.
    - Recursion progresses through the grid until it is complete.

    Another example, Fibonacci numbers: u(n) = u(n-1) + u(n-2)

        with Fib (u1, u2, depth) as
           ( select 1, 1, 1 from dual
             union all
             select u1 + u2, u1, depth + 1 from Fib where depth < 10 )
        select u1 from Fib;

    Conclusion: Oracle brings a new feature here (which, to be honest, already existed in competing systems) and extends the power of the database to new boundaries. It's now up to developers to try and test it and to find more useful applications than solving puzzles... But still, solving a Sudoku in less time than it takes to say it remains impressive.

    Interesting links covering different aspects of this feature: the Oracle documentation, Lucas Jellema's blog, Fibonacci numbers.

  • Top 5 Sites and Activities in San Francisco to Experience During Oracle OpenWorld

    - by kgee
    While Oracle OpenWorld may provide solutions and information on topics like how to simplify your IT, the importance of cloud, and what types of storage may satisfy your enterprise needs, who is going to tell you more about San Francisco? Here are some suggested sites and activities to experience after OpenWorld that aren't too far from the Moscone Center. Taking a cab is recommended for the sake of time, but the city's compact size makes for a quick trek to any of the following destinations:

    The Golden Gate Bridge. An image often associated with San Francisco, this bridge is one of the most impressive in the world. Whether you walk across it or view it from nearby Crissy Field, it is a sight that floors even the most veteran of San Franciscans.

    The Ferry Building. Located at the end of Market Street in the Embarcadero, the Ferry Building once served as a hub of water transport and trade. The building has a bay-front view and an array of food choices and restaurants. It is easily accessible via Muni, BART, the trolley or a cab. It is a must-see in San Francisco, and not too far from the Moscone Center.

    Ride the Trolley to the Castro. For only $2, you can go back in history for a moment on the trolley. Take the F-line from the Embarcadero and ride it all the way to the Castro district. During the ride, you will get an overview of the landscapes and cultures that are prevalent in San Francisco, but be aware that some areas may beg for an open mind more than others.

    Golden Gate Park. When you tire of the concrete jungle, the lucky part of being in San Francisco is that you can escape to a natural refuge, this park being one of the favorites. The park is known for its hiking trails, cultural attractions, monuments, lakes and gardens. It is one good reason to bring your sneakers to San Francisco, and is also a great place to picnic. It is easy to get lost, though, so it is advisable to bring a map (just in case) if you go.

    Haight-Ashbury. For a complete change of scenery, Haight-Ashbury is known as one of the places hippies used to live and the location of "The Summer of Love." It is now a more affluent neighborhood with boutique shops and the occasional drum circle. While it may be perceived as grungy in certain spots, it is one of the most photographed places in San Francisco and an integral part of San Franciscan history.

  • New ways for backup, recovery and restore of Essbase Block Storage databases – part 2 by Bernhard Kinkel

    - by Alexandra Georgescu
    After discussing new options for general Essbase backup and restore in the first part of this article, this second part deals with the also rather new feature of Transaction Logging and Replay, released in version 11.1, which enhances the existing restore options.

    Tip: Transaction logging and replay cannot be used for aggregate storage databases. Please refer to the Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1).

    Even if backups are done on a regular, frequent basis, subsequent data entries, loads or calculations would not be reflected in a restored database. Activating Transaction Logging can fill that gap and gives you an option to capture these post-backup transactions for later replay. (The Backup and Recovery Guide lists exactly which transaction types are captured when Transaction Logging is enabled.)

    To activate it, add a corresponding statement to the Essbase.cfg file using the TRANSACTIONLOGLOCATION command. The complete syntax reads:

        TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

    where appname and dbname are optional parameters that, in combination with the ENABLE or DISABLE keyword, give you the chance to set Transaction Logging for certain applications or databases, or to exclude them from being logged. If only an appname is specified, the setting applies to all databases in that particular application. If neither appname nor dbname is defined, all applications and databases are covered. LOGLOCATION specifies the directory to which the log is written, e.g. D:\temp\trlogs. This directory must already exist or needs to be created before log information can be written to it. NATIVE is a reserved keyword that shouldn't be changed.

    The following example first enables logging at a more general level for all databases in the application Sample, then disables it at a more granular level for only the Basic database in application Sample, hence excluding it from being logged:

        TRANSACTIONLOGLOCATION Sample Hyperion/trlog/Sample NATIVE ENABLE
        TRANSACTIONLOGLOCATION Sample Basic Hyperion/trlog/Sample NATIVE DISABLE

    Tip: After applying changes to the configuration file you must restart the Essbase server in order to initialize the settings.

    A replay of logged transactions after restoring a database, should it be required, can be done only by administrators. In Administration Services, select Replay Transactions on the right-click menu on the database. You can choose to replay transactions logged after the last replay request was originally executed or after the time of the last restored backup (whichever occurred later), or transactions logged after a specified time. Alternatively, you can replay transactions selectively based on a range of sequence IDs, which can be accessed using Display Transactions on the right-click menu on the database. These sequence IDs (0, 1, 2, and so on) are assigned to each logged transaction, indicating the order in which the transactions were performed. This helps to ensure the integrity of the restored data after a replay, as transactions are replayed in the same order in which they were originally performed. So, for example, a calculation originally run after a data load cannot be replayed before the data load has been replayed first. After a transaction is replayed, you can replay only transactions with a greater sequence ID.
    For example, replaying the transaction with sequence ID 4 includes all preceding transactions, and afterwards you can only replay transactions with a sequence ID of 5 or greater.

    Tip: After restoring a database from a backup you should always completely replay all logged transactions that were executed after the backup before executing any new transactions.

    Not only the transaction information itself needs to be logged and stored in the specified directory as described above. During transaction logging, Essbase also creates archive copies of data load and rules files in the following default directory:

        ARBORPATH/app/appname/dbname/Replay

    These files are then used during the replay of a logged transaction. By default, Essbase archives only data load and rules files for client data loads, but to specify the type of data to archive when logging transactions you can use the command TRANSACTIONLOGDATALOADARCHIVE as an additional entry in the Essbase.cfg file. The syntax for the statement is:

        TRANSACTIONLOGDATALOADARCHIVE [appname [dbname]] [OPTION]

    The [appname [dbname]] argument behaves just as it does for TRANSACTIONLOGLOCATION. The values for the OPTION argument are CLIENT (the default), SERVER, SERVER_CLIENT and NONE; make the setting that matches the location from which data loads usually take place. Selecting the NONE option prevents Essbase from saving the respective files, and the data load cannot be replayed; in that case you must first manually load the data before you can replay the transactions.

    Tip: If you use server or SQL data and the data and rules files are not archived in the Replay directory (for example, you did not use the SERVER or SERVER_CLIENT option), Essbase replays the data that is actually in the data source at the moment of the replay, which may or may not be the data that was originally loaded.
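    Putting the two settings together, a minimal Essbase.cfg sketch might look like the following; the application, database, and path names are illustrative, not taken from the article:

        ; log all transactions for every database in application Sample
        TRANSACTIONLOGLOCATION Sample Hyperion/trlog/Sample NATIVE ENABLE
        ; archive load data and rules files for both server- and client-side loads
        TRANSACTIONLOGDATALOADARCHIVE Sample SERVER_CLIENT

    Remember to restart the Essbase server after editing the file so that the settings take effect.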
    You can find more detailed information in the following documents: the Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1), the Oracle Essbase Online Documentation (rel. 11.1.2.1) and the Enterprise Performance Management System Documentation (including previous releases), or on the Oracle Technology Network.

    If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning, stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of the page), or in the OU Learning Paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected].

    About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products, he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Disclaimer: All methods and features mentioned in this article must be considered and tested carefully in relation to your environment, processes and requirements. As guidance, please always refer to the available software documentation. This article does not recommend or advise any explicit action or change, hence the author cannot be held responsible for any consequences due to the use or implementation of these features.

  • WebSocket Samples in GlassFish 4 build 66 - javax.websocket.* package: TOTD #190

    - by arungupta
    This blog has published a few posts on using the JSR 356 Reference Implementation (Tyrus) integrated in GlassFish 4 promoted builds:

    - TOTD #183: Getting Started with WebSocket in GlassFish
    - TOTD #184: Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark
    - TOTD #185: Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket
    - TOTD #186: Custom Text and Binary Payloads using WebSocket
    - TOTD #189: Collaborative Whiteboard using WebSocket in GlassFish 4

    The earlier blogs created a WebSocket endpoint as:

        import javax.net.websocket.annotations.WebSocketEndpoint;

        @WebSocketEndpoint("websocket")
        public class MyEndpoint { . . .

    Based upon discussion in the JSR 356 EG, the package names have changed to javax.websocket.*. So the updated endpoint definition looks like:

        import javax.websocket.WebSocketEndpoint;

        @WebSocketEndpoint("websocket")
        public class MyEndpoint { . . .

    The POM dependency is:

        <dependency>
            <groupId>javax.websocket</groupId>
            <artifactId>javax.websocket-api</artifactId>
            <version>1.0-b09</version>
        </dependency>

    And if you are using GlassFish 4 build 66, then you also need to provide a dummy EndpointFactory implementation as:

        import javax.websocket.WebSocketEndpoint;

        @WebSocketEndpoint(value="websocket", factory=MyEndpoint.DummyEndpointFactory.class)
        public class MyEndpoint { . . .

            class DummyEndpointFactory implements EndpointFactory {
                @Override
                public Object createEndpoint() { return null; }
            }
        }

    This is only an interim workaround and will be cleaned up in subsequent builds. But I've seen a couple of complaints about it already, so it deserves a short blog. Have you been tracking the latest Java EE 7 implementations in GlassFish 4 promoted builds?

  • Attention Extension Developers: Your input wanted!

    - by John 'JB' Brock
    Your Input Wanted! I've posted on a lot of different topics throughout 2011, and would really like to provide the info that is most important to you, the extension developer, as we head into 2012. What are the most important areas that you want to learn more about? Post your requests for examples and topics in the comments section. Let me know what you are struggling with, or something that you worked out but that took way too long to figure out. I'll take the list and do my best to provide samples over the coming months. Please include the version of JDeveloper that you want the topic to cover. Remember:

    - 11gR1 = 11.1.1.x (e.g. 11.1.1.5.0)
    - 11gR2 = 11.1.2.x (e.g. 11.1.2.1.0)

    Thanks in advance for your comments and suggestions. Let's get the JDev extension community going in 2012! --jb

    John "JB" Brock
    Oracle Product Manager - JDev ESDK

  • Experiencing the New Social Enterprise

    - by kellsey.ruppel(at)oracle.com
    Social media and networking tools, popularly known as Web 2.0 technologies, are rapidly transforming user expectations of enterprise systems. Many organizations are investing in these new tools to cultivate a modern user experience in an "Enterprise 2.0" environment that unlocks the full potential of traditional IT systems and fosters collaboration in key business processes. Is your organization a social enterprise? How are you using Web 2.0 and Enterprise 2.0 technologies? Read this white paper to learn how Oracle WebCenter Suite enables organizations to become social enterprises and is the modern user experience platform for the enterprise and the Web.

  • Developer Preview of Java SE 8 for ARM Now Available

    - by Tori Wieldt
    A Developer Preview of Java SE 8 including JavaFX (JDK 8) on Linux for ARM processors is now available for immediate download from Java.net. As Java Evangelist Stephen Chin says, "This is a great platform for doing small embedded projects, a low cost computing system for teaching, and great fun for hobbyists." This Developer Preview is provided to the community so that you can provide us with valuable feedback on the ongoing progress of the project. We wanted to get this release out to you as quickly as we can so you can start using this build of Java SE 8 on an ARM device, such as the Raspberry Pi (http://raspberrypi.org/).

    - Download JDK 8 for ARM
    - Read the documentation for this early access release

    Let Us Know What You Think! Use the forums to share your stories, comments and questions:

    - Java SE Snapshots: Project Feedback Forum
    - JavaFX Forum

    We are interested in both problems and success stories. If something does not work or behaves differently than what you expect, please check the list of known issues, and if yours is not listed there, then report a bug in the JIRA Bug Tracking System.

    More Resources:

    - JavaFX on Raspberry Pi - 3 Easy Steps by Stephen Chin
    - OTN Tech Article: Getting Started with Java SE Embedded on the Raspberry Pi by Bill Courington and Gary Collins
    - Java Magazine Article: Getting Started with Java SE for Embedded Devices on Raspberry Pi (free subscription required)
    - Video: Quickie Guide Getting Java Embedded Running on Raspberry Pi by Hinkmond Wong

  • Accenture Launches Smart Grid Data Management Platform

    - by caroline.yu
    Accenture announced today it has launched the Accenture Intelligent Network Data Enterprise (INDE), a data management platform to help utilities design, deploy and manage smart grids. INDE's functionality can be enabled by an array of third party technologies. In addition, Accenture plans to offer utilities the option of implementing the INDE solution based on a pre-configured suite of Oracle technologies. The Oracle-based version of INDE will accelerate the design of smart grids and help reduce the costs and risks associated with smart grid implementation. Stephan Scholl, Senior Vice President and General Manager of Oracle Utilities said, "Oracle and Accenture share a common vision of how the smart grid will enable more efficient energy choices for utilities and their customers. Our combined expertise in delivering mission-critical smart grid applications, security, data management and systems integration can help accelerate utilities toward a more intelligent network now and as future needs arise." For the full press release, click here.

  • WebSocket Applications using Java: JSR 356 Early Draft Now Available (TOTD #183)

    - by arungupta
    WebSocket provides a full-duplex, bi-directional communication protocol over a single TCP connection. JSR 356 is defining a standard API for creating WebSocket applications in the Java EE 7 platform. This Tip Of The Day (TOTD) provides an introduction to WebSocket and how the JSR is evolving to support the programming model.

    First, a little primer on WebSocket! WebSocket is a combination of the IETF RFC 6455 protocol and the W3C JavaScript API (still a Candidate Recommendation). The protocol defines an opening handshake and data transfer. The API enables Web pages to use the WebSocket protocol for two-way communication with the remote host. Unlike HTTP, there is no need to create a new TCP connection and send a chock-full of headers for every message exchange between client and server. The WebSocket protocol defines basic message framing, layered over TCP. Once the initial handshake happens using HTTP Upgrade, the client and server can send messages to each other, independent of the other. There are no pre-defined message exchange patterns of request/response or one-way between client and server; these need to be explicitly defined over the basic protocol.

    The communication between client and server is pretty symmetric, but there are two differences: a client initiates a connection to a server that is listening for a WebSocket request, and a client connects to one server using a URI, while a server may listen to requests from multiple clients on the same URI. Other than these two differences, the client and server behave symmetrically after the opening handshake; in that sense, they are considered "peers". After a successful handshake, clients and servers transfer data back and forth in conceptual units referred to as "messages". On the wire, a message is composed of one or more frames. Application frames carry payload intended for the application and can be text or binary data. Control frames carry data intended for protocol-level signaling.

    Now let's talk about the JSR! The Java API for WebSocket is being worked on as JSR 356 in the Java Community Process and will define a standard API for building WebSocket applications. The JSR will provide support for:

    - Creating WebSocket Java components to handle bi-directional WebSocket conversations
    - Initiating and intercepting WebSocket events
    - Creation and consumption of WebSocket text and binary messages
    - The ability to define WebSocket protocols and content models for an application
    - Configuration and management of WebSocket sessions, like timeouts, retries, cookies, connection pooling
    - Specification of how WebSocket applications will work within the Java EE security model

    Tyrus is the Reference Implementation for JSR 356 and is already integrated in GlassFish 4.0 promoted builds.

    And finally some code! The API allows WebSocket endpoints to be created using annotations and interfaces. This TOTD shows a simple sample using annotations; a subsequent blog will show more advanced samples. A POJO can be converted to a WebSocket endpoint by specifying @WebSocketEndpoint and @WebSocketMessage:

        @WebSocketEndpoint(path="/hello")
        public class HelloBean {

            @WebSocketMessage
            public String sayHello(String name) {
                return "Hello " + name + "!";
            }
        }

    @WebSocketEndpoint marks this class as a WebSocket endpoint listening at the URI defined by the path attribute. @WebSocketMessage identifies the method that will receive the incoming WebSocket message. The first method parameter is injected with the payload of the incoming message.
    In this case it is assumed that the payload is text-based. It can also be of type byte[] in case the payload is binary. A custom object may be specified if the decoders attribute is specified on @WebSocketEndpoint; this attribute provides a list of classes that define how a custom object can be decoded. The method can also take an optional Session parameter, injected by the runtime, which captures the conversation between two endpoints. The return type of the method can be String, byte[] or a custom object; the encoders attribute on @WebSocketEndpoint needs to define how a custom object can be encoded.

    The client side is an index.jsp with embedded JavaScript. The JSP body looks like:

        <div style="text-align: center;">
            <form action="">
                <input onclick="say_hello()" value="Say Hello" type="button">
                <input id="nameField" name="name" value="WebSocket" type="text"><br>
            </form>
        </div>
        <div id="output"></div>

    The code is relatively straightforward. It has an HTML form with a button that invokes the say_hello() method and a text field named nameField. A div placeholder is available for displaying the output. Now, let's take a look at some JavaScript code:

        <script language="javascript" type="text/javascript">
            var wsUri = "ws://localhost:8080/HelloWebSocket/hello";
            var websocket = new WebSocket(wsUri);
            websocket.onopen = function(evt) { onOpen(evt) };
            websocket.onmessage = function(evt) { onMessage(evt) };
            websocket.onerror = function(evt) { onError(evt) };

            function init() {
                output = document.getElementById("output");
            }

            function say_hello() {
                websocket.send(nameField.value);
                writeToScreen("SENT: " + nameField.value);
            }
            // the onOpen, onMessage, onError and writeToScreen helpers are
            // omitted in this excerpt
        </script>

    This application is deployed as "HelloWebSocket.war" (download here) on GlassFish 4.0 promoted build 57, so the WebSocket endpoint is listening at "ws://localhost:8080/HelloWebSocket/hello". A new WebSocket connection is initiated by specifying the URI to connect to. The JavaScript API defines callback methods that are invoked when the connection is opened (onopen), closed (onclose), an error is received (onerror), or a message from the endpoint arrives (onmessage). The client API has several send methods that transmit data over the connection. This particular script sends text data in the say_hello method using nameField's value from the HTML shown earlier. Each click on the button sends the textbox content to the endpoint over a WebSocket connection and receives a response based upon the implementation in the sayHello method shown above.

    How to test this out?

    1. Download the entire source project here, or just the WAR file.
    2. Download GlassFish 4.0 build 57 or later and unzip.
    3. Start GlassFish as "asadmin start-domain".
    4. Deploy the WAR file as "asadmin deploy HelloWebSocket.war".
    5. Access the application at http://localhost:8080/HelloWebSocket/index.jsp.

    After clicking the "Say Hello" button, the response appears in the output area of the page.

    Here are some references for you:

    - WebSocket - Protocol and JavaScript API
    - JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds)

    Subsequent blogs will discuss the following topics (not necessarily in that order):

    - Binary data as payload
    - Custom payloads using encoder/decoder
    - Error handling
    - Interface-driven WebSocket endpoints
    - Java client API
    - Client and server configuration
    - Security
    - Subprotocols
    - Extensions
    - Other topics from the API
    - Capturing WebSocket on-the-wire messages

  • Boost your infrastructure with Coherence into the Cloud

    - by Nino Guarnacci
    Authors: Nino Guarnacci & Francesco Scarano. The original article can be found at this URL: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost.

    Thinking about the enterprise cloud, many possible configurations and new opportunities in enterprise environments come to mind. The various customer needs that drive this new trend are often very different, but they are almost always united by two main objectives:

    - elasticity of the infrastructure, both hardware and software
    - investments that track the progressive needs of the current infrastructure

    In short: innovation and economy. A concrete use case that I worked on recently demanded the fulfillment of exactly these two requirements. The client needed to manage a variety of data caches that can process complex queries and parallel computational operations, keeping the caches in a consistent state across the different server instances on which the application was installed. In addition, the customer was looking for a solution that would let him handle the likely load peaks during certain times of the year. For this reason, the customer required a replication site to which part of the requests could be conveyed during peak periods; the desire, however, was to avoid tying up investments in owned hardware and software architectures, so the solution was to be based on cloud technologies and architectures already offered by the market.

    Coherence can already address the requirement of large caches spread across different nodes in the cluster, providing technology for search and parallel computing that uses all the resources of the hardware infrastructure simultaneously. Moreover, thanks to the "Push Replication" functionality, which can replicate and update the information contained in the cache even to a site hosted in the cloud, the need for a resilient infrastructure is satisfied, since it can also rest on nodes temporarily housed in cloud architectures.

    There are different types of configurations that can be realized using the "Push Replication" functionality of Coherence:

    - Active - Passive (Hub and Spoke)
    - Active - Active
    - Multi Master
    - Centralized Replication

    Since the architecture of this particular project consists of two sites (Site 1 and Site Cloud), of which only Site 1 is enabled to write into the cache, an Active-Passive (Hub and Spoke) configuration was chosen. If the requirement should change over time, it will be particularly easy to switch to an Active-Active configuration.

    Although very simple, the small sample in this post, inspired by the actual project, is effective for understanding the features and capabilities of Coherence and its configurations. Let's create two distinct Coherence clusters, miles apart, in two different domain contexts: one of them "hosted" at home (on-premise) and the other hosted by any cloud provider on the network (or just the same laptop, to test it :)). These two clusters, which we call Site 1 and Site Cloud, will contain the necessary information, so that a simple client can insert data only into Site 1. On both sites a listener will be subscribed that listens for changes to specific objects within the various caches.
    To implement these features, you need four simple classes:

    - CachedResponse.java - the POJO class that will be inserted into the cache; it holds useful information about the hypothetical link navigation (a rough sketch follows after this list)
    - ResponseSimulatorHelper.java - a link simulator, which randomly creates objects of type CachedResponse that will be added into the caches
    - CacheCommands.java - the model of our example, responsible for receiving instructions from the controller and performing basic operations against the cache, such as inserting, deleting, updating and listening to objects within the cache
    - Shell.java - our controller, which issues the commands to be executed against the caches of the two sites
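    The real implementations ship with the example download; as a sketch of what CachedResponse might look like (the field names here are assumptions, not the article's actual code):

        import java.io.Serializable;

        // Cached value object describing one simulated link navigation.
        // Field names are illustrative; the real class is in the sample download.
        public class CachedResponse implements Serializable {

            private String url;          // the navigated link
            private int    statusCode;   // simulated HTTP status
            private long   timestamp;    // when the navigation happened

            public CachedResponse() { }

            public CachedResponse(String url, int statusCode, long timestamp) {
                this.url = url;
                this.statusCode = statusCode;
                this.timestamp = timestamp;
            }

            public String getUrl()        { return url; }
            public int    getStatusCode() { return statusCode; }
            public long   getTimestamp()  { return timestamp; }
        }

    Objects placed in a distributed Coherence cache must be serializable (java.io.Serializable or, for better performance, Coherence's PortableObject), which is why the class implements Serializable.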
To start at least one "storage" node (which holds the data) for the "Cloud Site", we can run the standard class  provided OOTB by Oracle Coherence com.tangosol.net.DefaultCacheServer with the following parameters and values:-Xmx128m-Xms64m-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.cacheconfig=config/site2-prod-cache-config.xml-Dtangosol.coherence.clusterport=9002-Dtangosol.coherence.site=SiteCloud To start at least one "storage" node (which holds the data) for the "Site 1", we can perform again the standard class provided by Coherence  com.tangosol.net.DefaultCacheServer with the following parameters and values:-Xmx128m-Xms64m-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.cacheconfig=config/site1-prod-cache-config.xml-Dtangosol.coherence.clusterport=9001-Dtangosol.coherence.site=Site1 Then, we start the first client "Shell" for the "Cloud Site", launching the java class it.javac.Shell  using these parameters and values: -Xmx64m-Xms64m-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.cacheconfig=config/site2-shell-prod-cache-config.xml-Dtangosol.coherence.clusterport=9002-Dtangosol.coherence.site=SiteCloud Finally, we start the second client "Shell" for the "Site 1", re-launching a new instance of class  it.javac.Shell  using  the following parameters and values: -Xmx64m-Xms64m-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.cacheconfig=config/site1-shell-prod-cache-config.xml-Dtangosol.coherence.clusterport=9001-Dtangosol.coherence.site=Site1  And now, let’s execute some tests to validate and better understand our configuration. TEST 1The purpose of this test is to load the objects into the "Site 1" cache and seeing how many objects are cached on the "Site Cloud". Within the "Shell" launched with parameters to access the "Site 1", let’s write and run the command: load test/100 Within the "Shell" launched with parameters to access the "Site Cloud" let’s write and run the command: size passive-cache Expected result If all is OK, the first "Shell" has uploaded 100 objects into a cache named "test"; consequently the "push-replication" functionality has updated the "Site Cloud" by sending the 100 objects to the second cluster where they will have been posted into a respective cache, which we named "passive-cache". TEST 2The purpose of this test is to listen to deleting and adding events happening on the "Site 1" and that are replicated within the cache on "Cloud Site". 
In the "Shell" launched with parameters to access the "Site Cloud" let’s write and run the command: listen passive-cache/name like '%' or a "cohql" query, with your preferred parameters In the "Shell" launched with parameters to access the "Site 1" let’s write and run the following commands: load test/10 load test2/20 delete test/50 Expected result If all is OK, the "Shell" to Site Cloud let us to listen to all the add and delete events within the cache "cache-passive", whose objects satisfy the query condition "name like '%' " (ie, every objects in the cache; you could change the tests and create different queries).Through the Shell to "Site 1" we launched the commands to add and to delete objects on different caches (test and test2). With the "Shell" running on "Site Cloud" we got the evidence (displayed or printed, or in a log file) that its cache has been filled with events and related objects generated by commands executed from the" Shell "on" Site 1 ", thanks to "push-replication" feature.  Other tests can be performed, such as, for example, the subscription to the events on the "Site 1" too, using different "cohql" queries, changing the cache configuration,  to effectively demonstrate both the potentiality and  the versatility produced by these different configurations, even in the cloud, as in our case. More information on how to configure Coherence "Push Replication" can be found in the Oracle Coherence Incubator project documentation at the following link: http://coherence.oracle.com/display/INC10/Home More information on Oracle Coherence "In Memory Data Grid" can be found at the following link: http://www.oracle.com/technetwork/middleware/coherence/overview/index.html To download and execute the whole sources and configurations of the example explained in the above post,  click here to download them; After download the last available version of the Push-Replication Pattern library implementation from the Oracle Coherence Incubator site, and download also the related and required version of Oracle Coherence. For simplicity the required .jarS to execute the example (that can be found into the Push-Replication-Pattern  download and Coherence Distribution download) are: activemq-core-5.3.1.jar activemq-protobuf-1.0.jar aopalliance-1.0.jar coherence-commandpattern-2.8.4.32329.jar coherence-common-2.2.0.32329.jar coherence-eventdistributionpattern-1.2.0.32329.jar coherence-functorpattern-1.5.4.32329.jar coherence-messagingpattern-2.8.4.32329.jar coherence-processingpattern-1.4.4.32329.jar coherence-pushreplicationpattern-4.0.4.32329.jar coherence-rest.jar coherence.jar commons-logging-1.1.jar commons-logging-api-1.1.jar commons-net-2.0.jar geronimo-j2ee-management_1.0_spec-1.0.jar geronimo-jms_1.1_spec-1.1.1.jar http.jar jackson-all-1.8.1.jar je.jar jersey-core-1.8.jar jersey-json-1.8.jar jersey-server-1.8.jar jl1.0.jar kahadb-5.3.1.jar miglayout-3.6.3.jar org.osgi.core-4.1.0.jar spring-beans-2.5.6.jar spring-context-2.5.6.jar spring-core-2.5.6.jar spring-osgi-core-1.2.1.jar spring-osgi-io-1.2.1.jar At this URL could be found the original article: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost Authors: Nino Guarnacci & Francesco Scarano

  • JMSContext, @JMSDestinationDefintion, DefaultJMSConnectionFactory with simplified JMS API: TOTD #213

    - by arungupta
    "What's New in JMS 2.0" Part 1 and Part 2 provide comprehensive introduction to new messaging features introduced in JMS 2.0. The biggest improvement in JMS 2.0 is introduction of the "new simplified API". This was explained in the Java EE 7 Launch Technical Keynote. You can watch a complete replay here. Sending and Receiving a JMS message using JMS 1.1 requires lot of boilerplate code, primarily because the API was designed 10+ years ago. Here is a code that shows how to send a message using JMS 1.1 API: @Statelesspublic class ClassicMessageSender { @Resource(lookup = "java:comp/DefaultJMSConnectionFactory") ConnectionFactory connectionFactory; @Resource(mappedName = "java:global/jms/myQueue") Queue demoQueue; public void sendMessage(String payload) { Connection connection = null; try { connection = connectionFactory.createConnection(); connection.start(); Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); MessageProducer messageProducer = session.createProducer(demoQueue); TextMessage textMessage = session.createTextMessage(payload); messageProducer.send(textMessage); } catch (JMSException ex) { ex.printStackTrace(); } finally { if (connection != null) { try { connection.close(); } catch (JMSException ex) { ex.printStackTrace(); } } } }} There are several issues with this code: A JMS ConnectionFactory needs to be created in a application server-specific way before this application can run. Application-specific destination needs to be created in an application server-specific way before this application can run. Several intermediate objects need to be created to honor the JMS 1.1 API, e.g. ConnectionFactory -> Connection -> Session -> MessageProducer -> TextMessage. Everything is a checked exception and so try/catch block must be specified. Connection need to be explicitly started and closed, and that bloats even the finally block. The new JMS 2.0 simplified API code looks like: @Statelesspublic class SimplifiedMessageSender { @Inject JMSContext context; @Resource(mappedName="java:global/jms/myQueue") Queue myQueue; public void sendMessage(String message) { context.createProducer().send(myQueue, message); }} The code is significantly improved from the previous version in the following ways: The JMSContext interface combines in a single object the functionality of both the Connection and the Session in the earlier JMS APIs.  You can obtain a JMSContext object by simply injecting it with the @Inject annotation.  No need to explicitly specify a ConnectionFactory. A default ConnectionFactory under the JNDI name of java:comp/DefaultJMSConnectionFactory is used if no explicit ConnectionFactory is specified. The destination can be easily created using newly introduced @JMSDestinationDefinition as: @JMSDestinationDefinition(name = "java:global/jms/myQueue",        interfaceName = "javax.jms.Queue") It can be specified on any Java EE component and the destination is created during deployment. JMSContext, Session, Connection, JMSProducer and JMSConsumer objects are now AutoCloseable. This means that these resources are automatically closed when they go out of scope. This also obviates the need to explicitly start the connection JMSException is now a runtime exception. Method chaining on JMSProducers allows to use builder patterns. No need to create separate Message object, you can specify the message body as an argument to the send() method instead. Want to try this code ? Download source code! Download Java EE 7 SDK and install. 
    Want to try this code? Download the source code! Then:

    1. Download the Java EE 7 SDK and install it.
    2. Start GlassFish: bin/asadmin start-domain
    3. Build the WAR (in the unzipped source code directory): mvn package
    4. Deploy the WAR: bin/asadmin deploy <source-code>/jms/target/jms-1.0-SNAPSHOT.war
    5. Access the application at http://localhost:8080/jms-1.0-SNAPSHOT/index.jsp to send and receive a message using both the classic and the simplified API.

    A replay of the JMS 2.0 session from the Java EE 7 Launch Webinar provides complete details on what's new in this specification. Enjoy!

  • Add a database to use with locate command

    - by Pedro Teran
    I would like to know how I can create a locate database for a file system on my computer, so that I can select that database when searching for files on that file system efficiently. I ask because man locate says I can choose a database for a different file system. It would also be great if the /var/lib/mlocate/mlocate.db database could hold the data of two disks. Any approaches or ideas would be greatly appreciated.
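    One possible approach with mlocate's updatedb and locate (an illustrative, untested sketch; adjust the paths and pattern to your setup):

        # Build a separate database for a second file system mounted at /mnt/disk2
        sudo updatedb --database-root /mnt/disk2 --output /var/lib/mlocate/disk2.db

        # Search only that database
        locate --database /var/lib/mlocate/disk2.db 'report.pdf'

        # Search the default database and the new one together
        # (mlocate accepts a colon-separated list of databases)
        locate --database /var/lib/mlocate/mlocate.db:/var/lib/mlocate/disk2.db 'report.pdf'

    As for holding both disks in the single default mlocate.db: that is effectively what happens anyway when both disks are mounted and not excluded by PRUNEPATHS in /etc/updatedb.conf.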

  • Viewing at Impossible Angles

    - by kemer
    The picture of the little screwdriver with the Allen wrench head to the right is bound to invoke a little nostalgia for those readers who were Sun customers in the late 80s. This tool was a very popular give-away: it was essential for installing and removing Multibus (you youngsters will have to look that up on Wikipedia…) cards in our systems. Back then our mid-sized systems were gargantuan: it was routine for us to schlep around a 200 lb. desk-side box and 90 lb. monitor to demo a piece of software your smart phone will run better today. We were very close to the hardware, and the first thing a new field sales systems engineer had to learn was how to put together a system. If you were lucky, a grizzled service engineer might run you through the process once, then threaten your health and existence should you ever screw it up so that he had to fix it.

    Nowadays we make it much easier to learn the ins and outs of our hardware with simulations (3D animations) that take you through the process of putting together or replacing pieces of a system. Most recently, we have posted three sophisticated PDFs that take advantage of Acrobat 9 features to provide a really intelligent approach to documenting hardware installation and repair:

    - Sun Fire X4800/X4800 M2 Animations for Chassis Components
    - Sun Fire X4800/X4800 M2 Animations for Sub Assembly Module (SAM)
    - Sun Fire X4800/X4800 M2 Animations for CMOD

    Download one of these documents and take a close look at it. You can view the hardware from any angle, including impossible ones. Each document has a number of procedures, which break down into steps. Click on a procedure, then a step, and you will see it animated in the drawing. Of course, hardware design has generally eliminated the need for things like our old giveaway tools: components snap and lock in. Often you can replace redundant units while the system is hot, but for heaven's sake, you'll want to verify that you can do that before you try it! Meanwhile, we can all look forward to a growing portfolio of these intelligent documents. We would love to hear what you think about them. –Kemer

  • Unable to add users to Microsoft Dynamics CRM 4.0 after database restore

    - by Wes Weeks
    I was working with a client in our multi-tenant CRM environment who was doing a data migration into CRM; as part of the process, a backup of their Organization_MSCRM database was taken just prior to starting the migration, in case it needed to be restored and the migration run a second time. In this case it did, so I restored the database and let the client know he should be good to go. A few hours later I received a call that they were unable to add some new users: the users would appear as available when using the add-multiple-users wizard, but anyone added would not actually be added to CRM. It was also discussed that these users had originally been added to CRM after the database backup had been taken.

    I turned on tracing and tried to add the users through both the single-user form and the multiple-user interface, and was unable to do so. The error message in the logs wasn't much help:

        Unexpected error adding user [email protected]: Microsoft.Crm.CrmException: INVALID_WRPC_TOKEN: Validate WRPC Token: WRPCTokenState=Invalid, TOKEN_EXPIRY=4320, IGNORE_TOKEN=False

    Searching on Google or Bing didn't offer any assistance; apparently it is not a very common problem, or no one has been able to resolve it.

    I did some searching in the MSCRM_CONFIG database and found that there are several user tables there, and after getting my head around the structure I found that there were entries for users that were not part of the restored DB. It seems that new users are added to both Organization_MSCRM and MSCRM_CONFIG, and after the restore these were out of sync. I needed to remove the extra entries in order to address this. Restoring the MSCRM_CONFIG database was not an option, as other clients could have been adding users at this point and a restore would risk breaking their instances of CRM. Long story short, I was finally able to write a script to remove the bad entries, and when I tried to add the users again, I was successful. In case someone else out there finds themselves in a similar situation, here is the script I used to delete the bad entries:

        DECLARE @UsersToDelete TABLE (
            UserId uniqueidentifier
        )

        INSERT INTO @UsersToDelete (UserId)
        SELECT UserId
        FROM   [MSCRM_CONFIG].[dbo].[SystemUserOrganizations]
        WHERE  CrmUserId NOT IN (SELECT SystemUserId FROM Organization_MSCRM.dbo.SystemUserBase)
        AND    OrganizationId = '00000000-643F-E011-0000-0050568572A1' -- Id from the Organization table for this instance

        DELETE FROM [MSCRM_CONFIG].[dbo].[SystemUserAuthentication]
        WHERE  UserId IN (SELECT UserId FROM @UsersToDelete)

        DELETE FROM [MSCRM_CONFIG].[dbo].[SystemUserOrganizations]
        WHERE  UserId IN (SELECT UserId FROM @UsersToDelete)

        DELETE FROM [MSCRM_CONFIG].[dbo].[SystemUser]
        WHERE  Id IN (SELECT UserId FROM @UsersToDelete)

    Read the article

  • Sweden: Hot Java in the Winter

    - by Tori Wieldt
No, it's not global warming, but for some reason Sweden is a hotbed of great Java developers and great Java conferences in the winter. First, all three Swedish Java Champions are on Computer Sweden's 100 Best Swedish Developers list. You can read the full Sweden's Top 100 Developers article *if* you can read Swedish (or want to use Google Translate). Congratulations to:

Jonas Bonér, CTO, Typesafe. Skills: in recent years has worked on solutions for scalability and availability; before that, mostly middleware and compilers. Other qualifications: creator of the AspectWerkz framework and the Akka platform for developing parallel, scalable and fault-tolerant software in Scala and Java.

Rickard Oberg, Neo Technology. Skills: Java, Java EE frameworks, and graph databases. Other qualifications: founder of the open source projects XDoclet and WebWork (the latter is now called Struts 2). Rickard Oberg wrote the foundations of the JBoss application server, founded Senselogic, where he was architect of the CMS and portal product SiteVision, and launched the Qi4j framework. He has been a speaker at JavaZone, JavaPolis, Jfokus and Øredev.

Mattias Karlsson. Skills: Java, along with agile development methods and architecture. Sectors: telecom, banking, finance and insurance. Other qualifications: runs Javaforum Stockholm and arranges the Jfokus conference. Frequent speaker at major international conferences such as JavaOne. Holds the title of Java Champion.

Also, Sweden is home to some top-notch Java developer conferences during the winter:

jDays, Gothenburg, Sweden, Dec 3-5. jDays, a dynamic Java developer conference, comes to Gothenburg. In addition to the conference presentations, visitors can join courses in Java and related technologies for free.

Jfokus, Stockholm, Sweden, Feb 4-6. Jfokus is the largest annual conference for everyone who works with Java in Sweden. The conference is arranged together with Javaforum, the Stockholm JUG.

Thanks to everyone in the Java community who keeps Java hot in Sweden!

    Read the article

  • GP11.1

    - by user13334066
It's the Assen round of the 2011 MotoGP season, and Ducati have launched their GP11.1. The Ducati's front-end woes were highlighted quite efficiently throughout the 2010 season, with both Casey and Nicky regularly visiting the gravel traps. Now the question is: was it really a front-end issue? What's most probable is that the GP10 never had a front-end issue at all; it was the rear that was out. So what did Stoner's team do? They came up with setup changes that sorted out the rear end while transferring the problem to the front. And Casey has this brilliant ability to push beyond the limits of a vague and erratic front end... so, naturally, the real problem lay hidden. Like Kevin Cameron said: in human nature, our strengths are our weaknesses. Casey's pure speed came at the cost of fine feel for the machinery, which ultimately took Ducati's development in the wrong direction.

    Read the article

  • Arrow ECS: a VAD with Vision

    - by A&C Redaktion
Arrow ECS helps Oracle partners establish themselves successfully for the long term. As a Value Added Distributor (VAD) for the Oracle software and hardware portfolio, Arrow offers partners valuable added services, for example in consulting, sales, and product marketing. The advantage: partners can concentrate fully on their core business. Martin Wilhelm, Manager Business Unit Enterprise Solutions, Herbert Varga from Product Management, and Maria Keller, sales expert for Oracle products, explain in the video exactly how the cooperation works. Arrow ECS stands for competent and reliable collaboration with partners and has already been named Oracle Global Value Added Distributor of the Year several times.

    Read the article

  • June IOUG events

    - by Mandy Ho
Independent Oracle User Group (IOUG) Regional Events:

June 11-12, 2012 – Broomfield, CO. Two-day seminar, "High Performance PL/SQL & Oracle Database 11g New Features." Steven Feuerstein, generally considered the world's leading PL/SQL expert, will be presenting his all-new two-day "Higher Performance PL/SQL and Oracle 11g PL/SQL New Features" seminar on June 11 & 12 at Level 3 Communications in Broomfield, Colorado. This will be Steven's first Denver seminar in almost four years. Who knows when he will offer another? http://www.rmoug.org/

June 14, 2012 – Ottawa, Ontario. Pythian's Gwen Shapira puts on three great presentations focused on NoSQL, making OLTP run fast, and Big Data. http://www.oug-ottawa.org/pls/htmldb/f?p=327:27:1317735724699447::NO

June 21, 2012 – Calgary, Alberta. Big Data and Extreme Analytics Summit. http://coug.ab.ca/

June 22, 2012 – Westborough, MA. "10 Things You Probably Did Not Know," with Tom Kyte. PL/SQL turns 23 years old this year; it was first introduced in 1988 with Oracle6 Database. This session looks at five technical things about PL/SQL you probably did not know: under-the-covers features that make PL/SQL quite simply the most efficient language with which to process data in the database. http://noug.com/

June 28-29, 2012 – Plano, Texas. Jonathan Lewis Oracle Performance Seminars. The DOUG (Dallas Oracle Users Group) has invited SpeakTech to return to Dallas, and they're bringing Jonathan Lewis! Topics are "Beating the Oracle Optimizer" (June 28, 2012) and "Trouble Shooting & Tuning" (June 29, 2012). http://www.eventbrite.com/event/3082448687

    Read the article

  • Hot Java Content

    - by Tori Wieldt
It's August, summertime in the United States, and time for many of us to go on vacation. (You'll have to find my personal account to see more photos of the Monterey Bay Aquarium.) Here's some great Java content that you may have missed while I was gone:

Blogs:
Project Jigsaw: Late for the train: The Q&A
JSR 355 Final Release, and moves JCP to version 2.9
Oracle releases JDK for Linux ARM, JRE for Mac OS X
Architects and Architecture at JavaOne 2012
Java Champions at JavaOne 2012

Podcasts & Videos:
Java Spotlight Episode 96: Johan Vos on Glassfish and JavaFX
Java Spotlight Episode 94: Kirk Pepperdine on Java Performance Tuning
Java Spotlight Episode 93: Jonathan Giles on JavaFX 2.2 UI Controls
Video: JavaFX Canvas Node

July/August Java Magazine (free subscription):
Developer Power: Web-based Development Tools
Fork/Join Framework for Client Java Applications
Intro to Web Service Security
How to Modify javac
Oracle's Berkeley DB Java Edition's Java API
and more. Java Magazine is available on the App Store and the Android Market.

Get all this great Java content while it's as hot as a North American (non-San Franciscan) summer.

    Read the article

< Previous Page | 276 277 278 279 280 281 282 283 284 285 286 287  | Next Page >