Search Results

Search found 15216 results on 609 pages for 'oracle streams'.


  • Impact of Server Failure on Coherence Request Processing

    - by jpurdy
    Requests against a given cache server may be temporarily blocked for several seconds following the failure of other cluster members. This may cause issues for applications that cannot tolerate multi-second response times even during failover processing (ignoring for the moment that in practice there are a variety of issues that make such absolute guarantees challenging even when there are no server failures). In general, Coherence is designed around the principle that failures in one member should not affect the rest of the cluster if at all possible. However, it's obvious that if that failed member was managing a piece of state that another member depends on, the second member will need to wait until a new member assumes responsibility for managing that state. This transfer of responsibility is (as of Coherence 3.7) performed by the primary service thread for each cache service. The finest possible granularity for transferring responsibility is a single partition. So the question becomes how to minimize the time spent processing each partition. Here are some optimizations that may reduce this period:
        - Reduce the size of each partition (by increasing the partition count)
        - Increase the number of JVMs across the cluster (increasing the total number of primary service threads)
        - Increase the number of CPUs across the cluster (making sure that each JVM has a CPU core when needed)
        - Re-evaluate the set of configured indexes (as these will need to be rebuilt when a partition moves)
        - Make sure that the backing map is as fast as possible (in most cases this means running on-heap)
        - Make sure that the cluster is running on hardware with fast CPU cores (since the partition processing is single-threaded)
    As always, proper testing is required to make sure that configuration changes have the desired effect (and also to quantify that effect).
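    As a small aid for that testing, here is a minimal sketch (the cache name "orders" is made up for illustration) that reads back the partition count actually in effect on a distributed cache, so a change to the first item above can be verified at runtime:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.net.PartitionedService;

        public class PartitionCountCheck {
            public static void main(String[] args) {
                // "orders" is a hypothetical cache name; any partitioned (distributed) cache works.
                NamedCache cache = CacheFactory.getCache("orders");

                // The cache service backing a distributed cache is a PartitionedService,
                // which exposes the partition count currently in effect.
                PartitionedService service = (PartitionedService) cache.getCacheService();
                System.out.println("Partition count: " + service.getPartitionCount());
            }
        }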

    Read the article

  • Using Queries with Coherence Write-Behind Caches

    - by jpurdy
    Applications that use write-behind caching and wish to query the logical entity set have the option of querying the NamedCache itself or querying the database. In the former case, no particular restrictions exist beyond the limitations intrinsic to the Coherence query engine itself. In the latter case, queries may see partially committed transactions (e.g. with a parent-child relationship, the version of the parent may be different than the version of the child objects) and/or significant version skew (the query may see the current version of one object and a far older version of another object). This is consistent with "read committed" semantics, but the read skew may be far greater than would ever occur in a non-cached environment. As is usually the case, the application developer may choose to accept these limitations (with the hope that they are sufficiently infrequent), or they may choose to validate the reads (perhaps via a version flag on the objects). This also applies to situations where a third party application (such as a reporting tool) is querying the database. In many cases, the database may only be in a consistent state after the Coherence cluster has been halted.
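    To make the NamedCache route concrete, here is a minimal sketch of querying the logical entity set through the Coherence query engine; the cache name "orders" and the getStatus accessor are made-up names for illustration. Because the query runs against the cache rather than the database, it is unaffected by entries still sitting in the write-behind queue:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.filter.EqualsFilter;
        import java.util.Map;
        import java.util.Set;

        public class OrderQuery {
            public static void main(String[] args) {
                // Query the cache (the system of record for write-behind) rather than the database.
                NamedCache orders = CacheFactory.getCache("orders");

                // Hypothetical extractor method; any filter supported by the Coherence
                // query engine can be used here.
                Set results = orders.entrySet(new EqualsFilter("getStatus", "OPEN"));

                for (Object o : results) {
                    Map.Entry entry = (Map.Entry) o;
                    System.out.println(entry.getKey() + " -> " + entry.getValue());
                }
            }
        }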

    Read the article

  • Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket - (TOTD #185)

    - by arungupta
    The WebSocket API defines different send(xxx) methods that can be used to send text and binary data. This Tip Of The Day (TOTD) will show how to send and receive text and binary data using WebSocket. TOTD #183 explains how to get started with a WebSocket endpoint using GlassFish 4. A simple endpoint from that blog looks like:

        @WebSocketEndpoint("/endpoint")
        public class MyEndpoint {
            public void receiveTextMessage(String message) {
                . . .
            }
        }

    A method with the first parameter of type String is invoked when a text payload is received. The payload of the incoming WebSocket frame is mapped to this first parameter. An optional second parameter, Session, can be specified to map to the "other end" of this conversation. For example:

        public void receiveTextMessage(String message, Session session) {
            . . .
        }

    The return type is void, which means no response is returned to the client that invoked this endpoint. A response may be returned to the client in two different ways. First, set the return type to the expected type, such as:

        public String receiveTextMessage(String message) {
            String response = . . .
            . . .
            return response;
        }

    In this case a text payload is returned back to the invoking endpoint. The second way to send a response back is to use the mapped session to send a response using one of the sendXXX methods in Session, when and if needed:

        public void receiveTextMessage(String message, Session session) {
            . . .
            RemoteEndpoint remote = session.getRemote();
            remote.sendString(...);
            . . .
            remote.sendString(...);
            . . .
            remote.sendString(...);
        }

    This shows how duplex and asynchronous communication between the two endpoints can be achieved. This can be used to define different message exchange patterns between the client and server. The WebSocket client can send the message as:

        websocket.send(myTextField.value);

    where myTextField is a text field in the web page. Binary payload in the incoming WebSocket frame can be received if ByteBuffer is used as the first parameter of the method signature. The endpoint method signature in that case would look like:

        public void receiveBinaryMessage(ByteBuffer message) {
            . . .
        }

    From the client side, the binary data can be sent using Blob, ArrayBuffer, and ArrayBufferView. Blob is just raw data and the actual interpretation is left to the application. ArrayBuffer and ArrayBufferView are defined in the TypedArray specification and are designed to send binary data using WebSocket. In short, ArrayBuffer is a fixed-length binary buffer with no format and no mechanism for accessing its contents. These buffers are manipulated using one of the views defined by the subclasses of ArrayBufferView listed below:
        - Int8Array (signed 8-bit integer or char)
        - Uint8Array (unsigned 8-bit integer or unsigned char)
        - Int16Array (signed 16-bit integer or short)
        - Uint16Array (unsigned 16-bit integer or unsigned short)
        - Int32Array (signed 32-bit integer or int)
        - Uint32Array (unsigned 32-bit integer or unsigned int)
        - Float32Array (signed 32-bit float or float)
        - Float64Array (signed 64-bit float or double)
    WebSocket can send binary data using an ArrayBuffer with a view defined by a subclass of ArrayBufferView, or a subclass of ArrayBufferView itself. The WebSocket client can send the message using Blob as:

        blob = new Blob([myField2.value]);
        websocket.send(blob);

    where myField2 is a text field in the web page.
    The WebSocket client can send the message using ArrayBuffer as:

        var buffer = new ArrayBuffer(10);
        var bytes = new Uint8Array(buffer);
        for (var i = 0; i < bytes.length; i++) {
            bytes[i] = i;
        }
        websocket.send(buffer);

    A concrete implementation of receiving the binary message may look like:

        @WebSocketMessage
        public void echoBinary(ByteBuffer data, Session session) throws IOException {
            System.out.println("echoBinary: " + data);
            for (byte b : data.array()) {
                System.out.print(b);
            }
            session.getRemote().sendBytes(data);
        }

    This method just prints the binary data for verification, but you may actually be storing it in a database, converting it to an image, or doing something more meaningful. Be aware of TYRUS-51 if you are trying to send binary data from server to client using the method return type. Here are some references for you:
        - JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds)
        - TOTD #183 - Getting Started with WebSocket in GlassFish
        - TOTD #184 - Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark
    Subsequent blogs will discuss the following topics (not necessarily in that order): error handling, custom payloads using encoder/decoder, interface-driven WebSocket endpoints, the Java client API, client and server configuration, security, subprotocols, extensions, and other topics from the API.

    Read the article

  • HTML Tidy in NetBeans IDE

    - by Geertjan
    First step in integrating HTML Tidy (via its JTidy implementation) into NetBeans IDE: The reason why I started doing this is because I want to integrate this into the pluggable analyzer functionality of NetBeans IDE that I recently blogged about, i.e., where the FindBugs functionality is found. So a logical first step is to get it working in an Action class, after which I can port it into the analyzer infrastructure:

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import java.io.IOException;
        import java.io.PrintWriter;
        import java.io.StringWriter;
        import org.openide.awt.ActionID;
        import org.openide.awt.ActionReference;
        import org.openide.awt.ActionReferences;
        import org.openide.awt.ActionRegistration;
        import org.openide.cookies.EditorCookie;
        import org.openide.cookies.LineCookie;
        import org.openide.loaders.DataObject;
        import org.openide.text.Line;
        import org.openide.text.Line.ShowOpenType;
        import org.openide.util.Exceptions;
        import org.openide.util.NbBundle.Messages;
        import org.openide.windows.IOProvider;
        import org.openide.windows.InputOutput;
        import org.openide.windows.OutputEvent;
        import org.openide.windows.OutputListener;
        import org.openide.windows.OutputWriter;
        import org.w3c.tidy.Tidy;

        @ActionID(category = "Tools", id = "org.jtidy.TidyAction")
        @ActionRegistration(displayName = "#CTL_TidyAction")
        @ActionReferences({
            @ActionReference(path = "Loaders/text/html/Actions", position = 150),
            @ActionReference(path = "Editors/text/html/Popup", position = 750)
        })
        @Messages("CTL_TidyAction=Run HTML Tidy")
        public final class TidyAction implements ActionListener {

            private final DataObject context;
            private final OutputWriter writer;
            private EditorCookie ec = null;

            public TidyAction(DataObject context) {
                this.context = context;
                ec = context.getLookup().lookup(org.openide.cookies.EditorCookie.class);
                InputOutput io = IOProvider.getDefault().getIO("HTML Tidy", false);
                io.select();
                writer = io.getOut();
            }

            @Override
            public void actionPerformed(ActionEvent ev) {
                Tidy tidy = new Tidy();
                try {
                    writer.reset();
                    StringWriter stringWriter = new StringWriter();
                    PrintWriter errorWriter = new PrintWriter(stringWriter);
                    tidy.setErrout(errorWriter);
                    tidy.parse(context.getPrimaryFile().getInputStream(), System.out);
                    String[] split = stringWriter.toString().split("\n");
                    for (final String string : split) {
                        final int end = string.indexOf(" c");
                        if (string.startsWith("line")) {
                            writer.println(string, new OutputListener() {
                                @Override
                                public void outputLineAction(OutputEvent oe) {
                                    LineCookie lc = context.getLookup().lookup(LineCookie.class);
                                    int lineNumber = Integer.parseInt(string.substring(0, end).replace("line ", ""));
                                    Line line = lc.getLineSet().getOriginal(lineNumber - 1);
                                    line.show(ShowOpenType.OPEN, Line.ShowVisibilityType.FOCUS);
                                }
                                @Override
                                public void outputLineSelected(OutputEvent oe) {}
                                @Override
                                public void outputLineCleared(OutputEvent oe) {}
                            });
                        }
                    }
                } catch (IOException ex) {
                    Exceptions.printStackTrace(ex);
                }
            }
        }

    The string parsing above is ugly but gets the job done for now. A problem integrating this into the pluggable analyzer functionality is the limitation of its scope. The analyzer lets you select one or more projects, or individual files, but not a folder. So it doesn't work on folders in the Favorites window, for example, which is where I'd like to apply HTML Tidy, across multiple folders via the analyzer functionality. That's a bit of a bummer that I'm hoping to get around somehow.

    Read the article

  • New Write Flash SSDs and more disk trays

    - by Steve Tunstall
    In case you haven't heard, the write SSDs for the ZFSSA have been updated. Much faster now for the same price. Sweet. The new write-flash SSDs have a new part number, 7105026, so make sure you order the right ones. It's important to note that you MUST be on code level 2011.1.4.0 or higher to use these. They have increased in IOPS from 6,000 to 11,000, and increased throughput from 200MB/s to 350MB/s. Also, you can now add six SAS HBAs (up from four) to the 7420, allowing one to have three SAS channels with 12 disk trays each, for a new total of 36 disk trays. With 3TB drives, that's 2.5 petabytes. Is that enough for you? Make sure you add new cards to the correct slots. I've talked about this before, but here is the handy-dandy matrix again so you don't have to go find it. Remember the rules:
        - You can have six of any one kind of card (like six 10GigE cards), but you only really get 8 slots, since you have two SAS cards no matter what.
        - If you want more than 12 disk trays, you need two more SAS cards, so think about expansion later, too.
        - In fact, if you are going to have two different speeds of drives, in other words you want to mix 15K-speed and 7,200-speed drives in the same system, I would highly recommend two different SAS channels. So I would want four SAS cards in that system, no matter how many trays you have.
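    Quick sanity check on the capacity math above (assuming the usual 24-drive trays): 36 trays x 24 drives x 3 TB = 2,592 TB, which is indeed roughly 2.5 petabytes of raw capacity before any RAID or spares.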

    Read the article

  • My Obligatory IPad Post

    - by mark.wilcox
    I've had my iPad for about a week now, so I thought I'd write down some thoughts based on my initial experiences. Here are my initial take-aways:
        1 - Netflix OnDemand - I'm a movie junkie. I'm now more apt to just start a movie as background sound for my workday (I telecommute - so except for the occasional bark from my dog, it's awfully quiet here if I don't have something going).
        2 - The email client is really nice, and I'm as fast or faster typing when I have the wireless keyboard engaged. Even with the onscreen keyboard I'm already close to 75% of desktop speed.
        3 - The battery life is incredible - I think this is the first case where a mobile device actually under-promised on battery.
        4 - It has totally killed the notion of using a normal PC for my wife and mother-in-law - neither of whom had wanted an iPhone/iPod Touch or really any Apple device until they got to play with my iPad. The concept of instant on, easy to hold, and touch-based navigation has them hooked. Heck, it has me hooked.
    My ultimate goal is to be able to have it at least replace the need to take my netbook with me on the road. I haven't had a chance to complete my testing on that front yet - between work, my wife traveling (for a change) and now my wife home sick - I haven't had time to just play with it. But so far my only regret is that I haven't already bought two more for everyone else in my family who wants to use mine. Posted via email from Virtual Identity Dialogue

    Read the article

  • TXPAUSE : polite waiting for hardware transactional memory

    - by Dave
    Classic locks are an appropriate tool to prevent potentially conflicting operations A and B, invoked by different threads, from running at the same time. In a sense the locks cause either A to run before B or vice-versa. Similarly, we can replace the locks with hardware transactional memory, or use transactional lock elision to leverage potential disjoint access parallelism between A and B. But often we want A to wait until B has run. In a Pthreads environment we'd usually use locks in conjunction with condition variables to implement our "wait until" constraint. MONITOR-MWAIT is another way to wait for a memory location to change, but it only allows us to track one cache line and it's only available on x86. There's no similar "wait until" construct for hardware transactions. At the instruction-set level a simple way to express "wait until" in transactions would be to add a new TXPAUSE instruction that could be used within an active hardware transaction. TXPAUSE would politely stall the invoking thread, possibly surrendering or yielding compute resources, while at the same time continuing to track the transaction's address-set. Once a transaction has executed TXPAUSE it can only abort. Ideally that'd happen when some other thread modifies a variable that's in the transaction's read-set or write-set. And since we're aborting, all writes would be discarded. In a sense this gives us multi-location MWAIT but with much more flexibility. We could also augment TXPAUSE with a cycle-count bound to cap the time spent stalled. I should note that we can already enter a tight spin loop in a transaction to wait for updates to the address-set to cause an abort. Assuming that the implementation monitors the address-set via cache-coherence probes, by waiting in this fashion we actually communicate via the probes, and not via memory values. That is, the updating thread signals the waiter via probes instead of by traditional memory values. But TXPAUSE gives us a polite way to spin.

    Read the article

  • Inactive JSRs looking for Spec Leads

    - by heathervc
    You may have noticed that some JSRs have a classification of "Inactive" on their JSR page. The introduction of this term in 2009 was part of an effort to enable and encourage more transparency into the development of JSRs. You can read more about Inactive JSRs here and also in the JCP FAQ. The following JSR proposals have been Inactive since at least 2009. If you are a JCP Member and are interested in taking over the Specification Lead role for one of these JSRs, please contact the PMO at [email protected] on or before 23 April 2012. With that message, please include the following:
        - the subject line "Spec Lead for JSR ###," where '###' is the JSR number
        - which JCP Member you represent
        - why you wish to take over the Specification Lead role
    Here is the current list of Inactive JSRs for which Members can request to become Specification Leads:
        - JSR 122, JAIN JCAT
        - JSR 161, JAIN ENUM API Specification
        - JSR 182, JPay - Payment API for the Java Platform
        - JSR 210, OSS Service Quality Management API
        - JSR 241, The Groovy Programming Language
        - JSR 251, Pricing API
        - JSR 278, Resource Management API for Java ME
        - JSR 304, Mobile Telephony API v2
        - JSR 305, Annotations for Software Defect Detection
        - JSR 320, Services Framework

    Read the article

  • Don't Miss OPN Exchange @ OpenWorld: Register NOW!

    - by Cinzia Mascanzoni
    Don't miss the opportunity to register for OPN Exchange @ OpenWorld at the Early Bird price of $595. The promotion ends September 7th. OPN Exchange is the only pass that gives you access to more than 40 partner-dedicated sessions, held Monday through Thursday, as well as to the OPN Lounge and the OPN Test Fest. If you have registered with a Full Conference pass, here is what you can do to add OPN Exchange to your registration: go to My Account and add (in the add-on section) the OPN Exchange pass for $100. If you have registered for a Discover pass, contact the registration team and ask for the upgrade at Tel: +1.650.226.0812 (International), Monday through Friday, 6:00 a.m. to 6:00 p.m. (Pacific time), or Email: [email protected].

    Read the article

  • JCP Open EC Meeting on 30 September 2012

    - by heathervc
    The JCP Program Office and Executive Committee invite all Java Community members to attend the OPEN EC Meeting on Sunday, 30 September at the Clift Hotel in San Francisco. The meeting is adjacent to The Zone at JavaOne, but no JavaOne (or any other kind of) pass is required to attend. It is OPEN to all! Agenda topics include: JCP.Next status/overview of JSRs 355 and 358; improving communications between the EC and the community; open Q&A; and reminders of JCP events at JavaOne and the annual awards. Any other suggestions? This meeting is for you. Let us know your questions at pmo at jcp.org, or bring them with you. Details below.
        JCP Public Executive Committee Face-to-Face Meeting
        Open to Executive Committee Members and the Java Developer Community
        Location: Clift Hotel, 495 Geary Street, San Francisco - Rita Room (downstairs from Lobby)
        Date and Time: 9/30/12, 2:00 PM - 3:30 PM
    See you there. Check out all of the JCP @ JavaOne events.

    Read the article

  • Improved Maven Embedded GlassFish - deploy multiple apps

    - by alexismp
    Bhavani has some news over at java.net about the Maven Plugin for GlassFish and how it now supports the ability to deploy multiple applications. He also has a Tips, Tricks and Troubleshooting entry. Multiple deployments are done during the Maven pre-integration-test phase, but with a goal-specific configuration for app, contextRoot, etc... The :run (all-in-one) execution also now supports the admin and deploy goals. Note that these improvements will require a recent work-in-progress 4.0 version of GlassFish.

    Read the article

  • Feed Reader Fix

    - by Geertjan
    In the FeedReader sample (available in the New Projects window), there's this piece of code:

        private static Feed getFeed(Node node) {
            InstanceCookie ck = node.getLookup().lookup(InstanceCookie.class);
            if (ck == null) {
                throw new IllegalStateException("Bogus file in feeds folder: "
                        + node.getLookup().lookup(FileObject.class));
            }
            try {
                return (Feed) ck.instanceCreate();
            } catch (ClassNotFoundException ex) {
                Exceptions.printStackTrace(ex);
            } catch (IOException ex) {
                Exceptions.printStackTrace(ex);
            }
            return null;
        }

    Since 7.1, for some reason, the above doesn't work. What does work, and is simpler, is this, instead of the above:

        private static Feed getFeed(Node node) {
            Feed f = FileUtil.getConfigObject("RssFeeds/sample.instance", Feed.class);
            if (f == null) {
                throw new IllegalStateException("Bogus file in feeds folder: "
                        + node.getLookup().lookup(FileObject.class));
            }
            return f;
        }

    So, the code needs to be fixed in the sample.

    Read the article

  • Yet another GlassFish 3.1.1 promoted build

    - by alexismp
    Promoted build #9 for GlassFish 3.1.1 is available from the usual location. This is the "soft code freeze" build, with only the "hard code freeze" build left before the release candidate. So if you have bugs you'd like to see fixed, voice your opinion *now*. As a quick reminder, GlassFish offers Web Profile or Full Platform distributions in ZIP or installer flavors (some more details in this blog post from last year, but still relevant). If you've installed previous promoted builds or simply have the "dev" repository defined, the Update Center will simply update the existing installed bits. In addition to the earlier update on 3.1.1, it's probably safe to say that this version was carefully designed to be highly compatible with the previous 3.x versions, thus leaving you with few reasons not to upgrade as soon as it comes out this summer.

    Read the article

  • Making the Move to PeopleSoft Projects (ESA) 9.1

    The PeopleSoft Projects (ESA) 9.1 release is built to enable the successful selection and execution of projects. With distinct attention to both user and IT productivity, PeopleSoft Projects 9.1 unveils a new Web 2.0 user interface, facilities to enable rapid business process re-configuration, and tools to strengthen project governance. Join us to hear more about these topics and the industry-specific functionality that is compelling customers to make the move to PeopleSoft Projects 9.1.

    Read the article

  • PostgreSQL, Ubuntu, NetBeans IDE (Part 2)

    - by Geertjan
    Now let's create the start of a CRUD application on the NetBeans Platform, using Hibernate and PostgreSQL to do so. Here's what I see in NetBeans IDE after setting things up as outlined yesterday: The NetBeans Platform CRUD Tutorial should get you up and started creating the NetBeans Platform application. Open the generated "persistence.xml" in Design mode and then switch the persistence library to Hibernate. Here's the application structure: The Hibernate module that you see above has this content: Here's the result: And here's the source code: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.3/misc/NBPostgreSQL
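    As a rough illustration of where this ends up, here is a minimal JPA sketch of the kind of code such a Hibernate-backed persistence unit supports. The persistence unit name "NBPostgreSQLPU" and the Customer entity are hypothetical names for this sketch; the tutorial's actual names may differ:

        // Customer.java - a trivial mapped entity
        import javax.persistence.Entity;
        import javax.persistence.Id;

        @Entity
        class Customer {
            @Id
            private Long id;
            private String name;

            @Override
            public String toString() {
                return id + ": " + name;
            }
        }

        // CustomerQuery.java - look up the entities through the persistence unit
        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class CustomerQuery {
            public static void main(String[] args) {
                // The unit name comes from persistence.xml; "NBPostgreSQLPU" is assumed here.
                EntityManagerFactory emf = Persistence.createEntityManagerFactory("NBPostgreSQLPU");
                EntityManager em = emf.createEntityManager();
                try {
                    List<Customer> customers =
                            em.createQuery("SELECT c FROM Customer c", Customer.class).getResultList();
                    for (Customer c : customers) {
                        System.out.println(c);
                    }
                } finally {
                    em.close();
                    emf.close();
                }
            }
        }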

    Read the article

  • Nokia at JavaOne

    - by Tori Wieldt
    Nokia has long been a key partner for Java Mobile, and they continue investing significantly in Java technologies. Developers can learn more about Nokia's popular Asha phone and developer platform at JavaOne. In addition to interesting technical material, all Nokia sessions will include giveaways (hint: be engaged and ask questions!). Don't miss these great sessions:
        - CON4925 The Right Platform with the Right Technology for Huge Markets with Many Opportunities
        - CON11253 In-App Purchasing for Java ME Apps
        - BOF4747 Look Again: Java ME's New Horizons of User Experience, Service Model, and Internet Innovation
        - BOF12804 Reach the Next Billion with Engaging Apps: Nokia Asha Full Touch for Java ME Developers
        - CON6664 on Mobile Java, Asha, Full Touch, Maps APIs, LWUIT, new UI, new APIs and more
        - CON6494 Extreme Mobile Java Performance Tuning, User Experience, and Architecture
        - BOF6556 Mobile Java App Innovation in Nigeria

    Read the article

  • Project Jigsaw: On the next train

    - by Mark Reinhold
    I recently proposed to defer Project Jigsaw from Java 8 to Java 9. Feedback on the proposal was about evenly divided as to whether Java 8 should be delayed for Jigsaw, Jigsaw should be deferred to Java 9, or some other, usually less-realistic, option should be taken. The ultimate decision rested, of course, with the Java SE 8 (JSR 337) Expert Group. After due consideration, a strong majority of the EG agreed to my proposal. In light of this decision we can still make progress in Java 8 toward the convergence of the higher-end Java ME Platforms with Java SE. I previously suggested that we consider defining a small number of Profiles which would allow compact configurations of the SE Platform to be built and deployed. JEP 161 lays out a specific initial proposal for such Profiles. There is also much useful work to be done in Java 8 toward the fully-modular platform in Java 9. Alan Bateman has submitted JEP 162, which proposes some changes in Java 8 to smooth the eventual transition to modules, to provide new tools to help developers prepare for modularity, and to deprecate and then, in Java 9, actually remove certain API elements that are a significant impediment to modularization. Thanks to everyone who responded to the proposal with comments and questions. As I wrote initially, deferring Jigsaw to a Java 9 release in 2015 is by no means a pleasant decision. It does, however, still appear to be the best available option, and it is now the plan of record.

    Read the article

  • Simple HTML5 Friendly Markup Sample

    - by Geertjan
    From a demo done by David Heffelfinger (who has a great Java EE 7 screencast series here), on HTML5 friendly markup.

    index.xhtml:

        <?xml version='1.0' encoding='UTF-8' ?>
        <!DOCTYPE html>
        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:jsf="http://xmlns.jcp.org/jsf">
            <title>Data Entry Page</title>
            <body>
                <form method="POST" jsf:id='form'>
                    <table>
                        <tr>
                            <td>Name:</td>
                            <td><input jsf:id='name' type="text" jsf:value="${person.name}" /></td>
                        </tr>
                        <tr>
                            <td>City</td>
                            <th><input jsf:id='city' type="text" jsf:value="${person.city}"/></th>
                        </tr>
                        <tr>
                            <td><input type="submit" value="Submit" jsf:action="confirmation" /></td>
                        </tr>
                    </table>
                </form>
            </body>
        </html>

    confirmation.xhtml:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE html>
        <html xmlns="http://www.w3.org/1999/xhtml">
            <head>
                <title>Data Confirmation Page</title>
            </head>
            <body>
                <h1>#{person.name}</h1>
                from
                <h2>#{person.city}</h2>
            </body>
        </html>

    Person.java:

        package org.demo;

        import javax.enterprise.inject.Model;

        @Model
        public class Person {

            String name;
            String city;

            public String getName() {
                return name;
            }

            public void setName(String name) {
                this.name = name;
            }

            public String getCity() {
                return city;
            }

            public void setCity(String city) {
                this.city = city;
            }
        }

    Read the article

  • Adding an existing Control Domain to a Server Pool

    - by Owen Allen
    I got a question about LDoms: "Is it possible to move a Control Domain built through Ops Center with pre-existing LDoms into a server pool? If so, do I need to delete and recreate anything?" Yes, you can do this. You have to stop the LDom guests, and then you can add the CDom to a Server Pool. If the guests are using shared storage, you should be able to bring them up in the Server Pool. If the guests are not on shared storage, you can use the Migrate Storage option to bring their storage in.

    Read the article

  • Migrating Spring to Java EE 6 Article Series at OTN - Part 3

    - by arungupta
    The spring season is characterized by the migration of birds, whales, butterflies, frogs, and other animals for different reasons. If you use the Spring framework and are interested in migrating to a standards-based Java EE platform, for whatever reason, then we have a solution for you. David Heffelfinger, a renowned author and an ardent Java EE fan, has published the third part of the Spring to Java EE migration series at OTN. The article series takes a typical Spring application and shows how to migrate it to Java EE 6 using NetBeans. This new part builds upon part 1 and part 2 and also compares the generated WAR files and the LoC in XML configuration in the two environments. There is an interesting discussion on "Why Java EE 6 over Spring?" as well.

    Read the article

  • MultiSelectChoice: How to get underlying values selected

    - by Vijay Mohan
    Let's say you include a multiselectchoice component in your jspx/jsff page, which has <f:selectItem> or <af:forEach> bound to a VO iterator to populate the multiselectchoice, and whose value property is bound to a List attribute binding. When the user selects some items in that choice list, you want the actual values to be posted. You can set the value-passthrough flag to true, but many times it doesn't help and you end up getting the indexes of the selected values. Here is a way to get the actual values. Let's say in the bean you have a utility method to achieve this, and you associate a valueChangeListener with the multiselectchoice, as follows:

        public void onValueChangeOfLOV(ValueChangeEvent valueChangeEvent) {
            // get array of indexes of selected items in master list
            List valueIndexes = (List) valueChangeEvent.getNewValue();
            String concatCodes = returnSelectmanyChoiceValues(valueIndexes, "YourIterator", "YourAttribute");
        }

        public String returnSelectmanyChoiceValues(List valueIndexes, String iterName, String idAttrName) {
            DCBindingContainer dc = (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
            DCIteratorBinding iter = dc.findIteratorBinding(iterName);
            ViewObject vo = iter.getViewObject();
            String codes = "";
            for (Object index : valueIndexes) {
                String iIndex = (String) index;
                Row row = vo.getRowAtRangeIndex(Integer.parseInt(iIndex));
                codes = codes + (String) row.getAttribute(idAttrName) + ",";
            }
            // remove last ","
            if (codes.endsWith(",")) {
                codes = codes.substring(0, codes.lastIndexOf(","));
            }
            return codes;
        }

    This will return a comma-separated string of the values of the selected items. If you want, you can then store it in a List.

    Read the article

  • The latest version of the EJB 3.2 spec available on java.net project

    - by Marina Vatkina
    If you are not following us on the users alias, here is a quick update. Just before JavaOne, I uploaded the latest version of the EJB 3.2 Core document to the ejb-spec.java.net downloads. If you want to see the detailed changes, download it. If you are interested in the high-level list, or would like to know what to look for, this is the list of changes since the previous version (found on the same download page):
        - Specified that the SessionContext object in a singleton session bean is thread-safe
        - Clarified that the EJB timer distribution and failover rules apply only to persistent timers
        - Clarified that non-persistent timers returned by the getTimers and getAllTimers methods are from the same JVM as the caller
        - Fixed section numbering (left over after moving it to its own chapter) in Ch 17
        - Noted that only 3.0 and 3.1 deployment descriptors are required to be supported in EJB 3.2 Lite for prior versions of the applications
        - Fixes for EJB_SPEC-61 (Ambiguity in EJB Lite local view support) and EJB_SPEC-59 (Improve references to the component-defining annotations)
        - JMS/MDB changes: added new standard activation properties and the unique identifier, and rearranged sections for easier navigation
        - Fixed unresolved cross-references
        - Updated the rule: only local asynchronous session bean invocations are supported in EJB 3.2 Lite
        - Synchronized permissions in the table with the permissions listed for the EJB components in the Java EE Platform Specification, Table EE.6-2
        - Specified that during processing of the close() method, the embeddable container cancels all pending asynchronous invocations and non-persistent timers
        - Updated most of the referenced documents to their latest versions
    Happy reading!
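    To illustrate the JMS/MDB item above, here is a minimal sketch of a message-driven bean using the activation properties standardized in EJB 3.2 (destinationLookup, destinationType, connectionFactoryLookup). The JNDI names and the OrderListener class are hypothetical:

        import javax.ejb.ActivationConfigProperty;
        import javax.ejb.MessageDriven;
        import javax.jms.JMSException;
        import javax.jms.Message;
        import javax.jms.MessageListener;

        // The JNDI names below are made up for this sketch.
        @MessageDriven(activationConfig = {
            @ActivationConfigProperty(propertyName = "destinationLookup",
                                      propertyValue = "jms/OrderQueue"),
            @ActivationConfigProperty(propertyName = "destinationType",
                                      propertyValue = "javax.jms.Queue"),
            @ActivationConfigProperty(propertyName = "connectionFactoryLookup",
                                      propertyValue = "jms/OrderConnectionFactory")
        })
        public class OrderListener implements MessageListener {

            @Override
            public void onMessage(Message message) {
                try {
                    // getBody is a JMS 2.0 convenience method.
                    System.out.println("Received: " + message.getBody(String.class));
                } catch (JMSException e) {
                    throw new RuntimeException(e);
                }
            }
        }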

    Read the article

  • JEP 124: Enhance the Certificate Revocation-Checking API

    - by smullan
    Revocation checking is the mechanism to determine the revocation status of a certificate. If it is revoked, it is considered invalid and should not be used. Currently, as of JDK 7, the PKIX implementation of java.security.cert.CertPathValidator includes a revocation checking implementation that supports both OCSP and CRLs, the two main methods of checking revocation. However, there are very few options that allow you to configure the behavior. You can always implement your own revocation checker, but that's a lot of work. JEP 124 (Enhance the Certificate Revocation-Checking API) is one of the 11 new security features in JDK 8. This feature enhances the java.security.cert API to support various revocation settings such as best-effort checking, end-entity certificate checking, and mechanism-specific options and parameters. Let's describe each of these in more detail and show some examples. The features are provided through a new class named PKIXRevocationChecker. A PKIXRevocationChecker instance is returned by a PKIX CertPathValidator as follows:

        CertPathValidator cpv = CertPathValidator.getInstance("PKIX");
        PKIXRevocationChecker prc = (PKIXRevocationChecker) cpv.getRevocationChecker();

    You can now set various revocation options by calling different methods of the returned PKIXRevocationChecker object. For example, the best-effort option (called soft-fail) allows the revocation check to succeed if the status cannot be obtained due to a network connection failure or an overloaded server. It is enabled as follows:

        prc.setOptions(EnumSet.of(Option.SOFT_FAIL));

    When the SOFT_FAIL option is specified, you can still obtain any exceptions that may have been thrown due to network issues. This can be useful if you want to log this information or treat it as a warning. You can obtain these exceptions by calling the getSoftFailExceptions method:

        List<CertPathValidatorException> exceptions = prc.getSoftFailExceptions();

    Another new option, ONLY_END_ENTITY, allows you to check the revocation status of only the end-entity certificate. This can improve performance, but you should be careful using this option, as the revocation status of CA certificates will not be checked. To set more than one option, simply specify them together, for example:

        prc.setOptions(EnumSet.of(Option.SOFT_FAIL, Option.ONLY_END_ENTITY));

    By default, PKIXRevocationChecker will try to check the revocation status of a certificate using OCSP first, and then CRLs as a fallback. However, you can switch the order using the PREFER_CRLS option, or disable the fallback altogether using the NO_FALLBACK option. For example, here is how you would only use CRLs to check the revocation status:

        prc.setOptions(EnumSet.of(Option.PREFER_CRLS, Option.NO_FALLBACK));

    There are also a number of other useful methods which allow you to specify various options such as the OCSP responder URI, the trusted OCSP responder certificate, and OCSP request extensions. However, one of the most useful features is the ability to specify a cached OCSP response with the setOCSPResponse method. This can be quite useful if the OCSPResponse has already been obtained, for example in a protocol that uses OCSP stapling.
    After you have set all of your preferred options, you must add the PKIXRevocationChecker to your PKIXParameters object as one of your custom CertPathCheckers before you validate the certificate chain, as follows:

        PKIXParameters params = new PKIXParameters(keystore);
        params.addCertPathChecker(prc);
        CertPathValidatorResult result = cpv.validate(path, params);

    Early access binaries of JDK 8 can be downloaded from http://jdk8.java.net/download.html

    Read the article

  • Running external commands improved a bit

    - by Tomas Mysik
    Hi all, today we would like to show you one small improvement related to running external commands (e.g. generating documentation, running framework commands etc.) which will be available in NetBeans 7.3. First, have a look at the screenshot: As you can see, the first line represents the command that is being executed. In case of any error, this command can be easily copy & pasted to the console for deeper investigation (and proper bug reports ;). Also please notice that the Output window now supports background colors. That's all for today, as always, please test it and report all the issues or enhancements you find in NetBeans Bugzilla (component php, subcomponent Code).

    Read the article

  • Series On Embedded Development (Part 3) - Runtime Optionality

    - by Darryl Mocek
    What is runtime optionality? Runtime optionality means writing and packaging your code in such a way that all of the features are available at runtime, but aren't loaded and used if the feature isn't used. The code is separate, and you can even remove the code to save persistent storage if you know the feature will not be used. In native programming terms, it's splitting your application into separate shared libraries so you only have to load what you're using, which means it only impacts volatile memory when enabled at runtime. All the functionality is there, but if it's not used at runtime, it's not loaded. A good example of this in Java is JVMTI, Java's Virtual Machine Tool Interface. On smaller, embedded platforms, these libraries may not be there. If the libraries are not there, there's no effect on the runtime as long as you don't try to use the JVMTI features. There is a trade-off between size/performance and flexibility here. Putting code in separate libraries means loading that code will take longer and it will typically take up more persistent space. However, if the code is rarely used, you can save volatile memory by including it in a separate library. You can also use this method in Java by putting rarely-used code into one or more separate JARs. Loading a JAR and parsing it takes CPU cycles and volatile memory. Putting all of your application's code into a single JAR means more processing for that JAR. Consider putting rarely-used code in a separate library/JAR.
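    To make the Java side of this concrete, here is a minimal sketch of guarding an optional feature behind a reflective lookup, so the class (and the separate JAR carrying it) is only loaded if the feature is actually used. The ReportingFeature class name and its reporting.jar packaging are hypothetical:

        public class FeatureLoader {

            // Hypothetical optional feature, shipped in its own JAR (e.g. reporting.jar)
            // so it is only parsed and loaded when requested at runtime.
            private static final String FEATURE_CLASS = "com.example.reporting.ReportingFeature";

            public static Runnable loadReportingFeature() {
                try {
                    // The class is loaded (and its JAR touched) only when this method runs.
                    Class<?> clazz = Class.forName(FEATURE_CLASS);
                    return (Runnable) clazz.getDeclaredConstructor().newInstance();
                } catch (ClassNotFoundException e) {
                    // Feature JAR not installed: degrade gracefully instead of failing.
                    return () -> System.out.println("Reporting feature not available");
                } catch (ReflectiveOperationException e) {
                    throw new IllegalStateException("Reporting feature present but unusable", e);
                }
            }

            public static void main(String[] args) {
                loadReportingFeature().run();
            }
        }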

    Read the article
