Search Results

Search found 17227 results on 690 pages for 'oracle hcm cloud'.


  • Save Actions in NetBeans IDE 7.3

    - by Geertjan
    Several developers, especially those familiar with the equivalent functionality in Eclipse, have been asking for so-called "Save Actions", that is, support for actions that are performed automatically when a file is saved. Here's the related NetBeans issue: http://netbeans.org/bugzilla/show_bug.cgi?id=140719 In NetBeans IDE 7.3, the issue is resolved as follows. A new "On Save" tab is found in the "Editor" tab of the Options window. Defaults for all languages are set via the "All Languages" item in the drop-down; here you can specify what kind of formatting and space removal (all, none, or only modified lines) will occur automatically when a file is saved. The drop-down lists all the languages supported by the IDE, and you can pick a language and override the default On Save settings for it. Per language, there may be additional On Save settings. For example, for Java, you can specify that, when saving a Java file, unused import statements should be removed and/or the rules you've set for organizing import statements should be applied. There's also a set of new NetBeans IDE APIs for adding new On Save functionality via custom plugins: via MIME type registration of an OnSaveTask.Factory, you can register new On Save actions that will run for files conforming to the relevant MIME type, and there are extensions via the Editor Options API for registering new panels (one per language) in the On Save panel of the Options window. I'll demonstrate some examples of the APIs in upcoming blog entries.
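
    To make the plugin side of this concrete, here is a minimal sketch of an On Save action registered by MIME type. It assumes the 7.3 SPI shape named above (OnSaveTask with its nested Factory and Context in org.netbeans.spi.editor.document, registered via @MimeRegistration); the task itself, which appends a marker line to plain-text files on save, is purely hypothetical:

    import javax.swing.text.BadLocationException;
    import javax.swing.text.Document;
    import org.netbeans.api.editor.mimelookup.MimeRegistration;
    import org.netbeans.spi.editor.document.OnSaveTask;

    // Hypothetical On Save action: appends a marker line to plain-text
    // files whenever they are saved.
    public class MarkerOnSaveTask implements OnSaveTask {

        private final Document doc;
        private volatile boolean canceled;

        MarkerOnSaveTask(Context context) {
            this.doc = context.getDocument();
        }

        @Override
        public void performTask() {
            if (canceled) {
                return;
            }
            try {
                // The actual "save action": modify the document just before
                // it is written to disk.
                doc.insertString(doc.getLength(), "\n# saved", null);
            } catch (BadLocationException ex) {
                // ignored in this sketch
            }
        }

        @Override
        public void runLocked(Runnable run) {
            // No extra locking needed for this sketch; just run the task.
            run.run();
        }

        @Override
        public boolean cancel() {
            canceled = true;
            return true;
        }

        // MIME type registration of the factory, as described in the post.
        @MimeRegistration(mimeType = "text/plain", service = OnSaveTask.Factory.class)
        public static final class FactoryImpl implements OnSaveTask.Factory {
            @Override
            public OnSaveTask createTask(Context context) {
                return new MarkerOnSaveTask(context);
            }
        }
    }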

    Read the article

  • Reminder: JavaOne Call For Papers Closing April 9th, 11:59pm

    - by arungupta
    JavaOne 2012 Call For Papers is closing on April 9th. Make sure to get your submissions in on time and make the reviewers' job exciting. Submit now! Read tips for paper submission here, and an insight into the review process plus more tips here. The conference will be held in San Francisco from September 30th to October 4th, 2012. And between now and this JavaOne in San Francisco, the conference is also going to Japan, Russia, and India.

    Read the article

  • Dynamically Changing the Display Names of Menus and Popups

    - by Geertjan
    A very handy thing to know, when you need it, is that "menuText" and "popupText" (from org.openide.awt.ActionRegistration) can be changed dynamically, via "putValue", as shown below for "popupText". The Action class, in this case, needs to be eager, hence you won't receive the object of interest via the constructor, but you can easily use the global Lookup for that purpose instead, as also shown below.

    import java.awt.event.ActionEvent;
    import java.text.DateFormat;
    import java.text.SimpleDateFormat;
    import javax.swing.AbstractAction;
    import org.netbeans.api.project.Project;
    import org.netbeans.api.project.ProjectInformation;
    import org.netbeans.api.project.ProjectUtils;
    import org.openide.awt.ActionID;
    import org.openide.awt.ActionReference;
    import org.openide.awt.ActionRegistration;
    import org.openide.util.Utilities;

    @ActionID(category = "Project", id = "org.ptt.DemoProjectAction")
    @ActionRegistration(lazy = false, displayName = "NOT-USED")
    @ActionReference(path = "Projects/Actions", position = 0)
    public final class DemoProjectAction extends AbstractAction {

        private final ProjectInformation context;

        public DemoProjectAction() {
            putValue("popupText", "Select Me To See Current Time!");
            context = ProjectUtils.getInformation(
                    Utilities.actionsGlobalContext().lookup(Project.class));
        }

        @Override
        public void actionPerformed(ActionEvent e) {
            refresh();
        }

        protected void refresh() {
            DateFormat formatter = new SimpleDateFormat("HH:mm:ss");
            String formatted = formatter.format(System.currentTimeMillis());
            putValue("popupText", "Time: " + formatted + " (" + context.getDisplayName() + ")");
        }

    }

    Now, let's do something semi-useful and display, in the popup that is available when you right-click a project, the time since the last change was made anywhere in the project. That is, we listen recursively for any changes done within the project and then update the popup with the newly acquired information, dynamically:

    import java.awt.event.ActionEvent;
    import java.text.DateFormat;
    import java.text.SimpleDateFormat;
    import javax.swing.AbstractAction;
    import org.netbeans.api.project.Project;
    import org.netbeans.api.project.ProjectUtils;
    import org.openide.awt.ActionID;
    import org.openide.awt.ActionReference;
    import org.openide.awt.ActionRegistration;
    import org.openide.filesystems.FileAttributeEvent;
    import org.openide.filesystems.FileChangeListener;
    import org.openide.filesystems.FileEvent;
    import org.openide.filesystems.FileRenameEvent;
    import org.openide.util.Utilities;

    @ActionID(category = "Project", id = "org.ptt.TrackProjectTimerAction")
    @ActionRegistration(lazy = false, displayName = "NOT-USED")
    @ActionReference(path = "Projects/Actions", position = 0)
    public final class TrackProjectTimerAction extends AbstractAction implements FileChangeListener {

        private final Project context;
        private Long startTime;
        private Long changedTime;
        private DateFormat formatter;

        public TrackProjectTimerAction() {
            putValue("popupText", "Enable project time tracker");
            this.formatter = new SimpleDateFormat("HH:mm:ss");
            context = Utilities.actionsGlobalContext().lookup(Project.class);
            context.getProjectDirectory().addRecursiveListener(this);
        }

        @Override
        public void actionPerformed(ActionEvent e) {
            startTimer();
        }

        protected void startTimer() {
            startTime = System.currentTimeMillis();
            String formattedStartTime = formatter.format(startTime);
            putValue("popupText", "Timer started: " + formattedStartTime
                    + " (" + ProjectUtils.getInformation(context).getDisplayName() + ")");
        }

        @Override
        public void fileChanged(FileEvent fe) {
            changedTime = System.currentTimeMillis();
            formatter = new SimpleDateFormat("mm:ss");
            String formattedLapse = formatter.format(changedTime - startTime);
            putValue("popupText", "Time since last change: " + formattedLapse
                    + " (" + ProjectUtils.getInformation(context).getDisplayName() + ")");
            startTime = changedTime;
        }

        @Override
        public void fileFolderCreated(FileEvent fe) {}

        @Override
        public void fileDataCreated(FileEvent fe) {}

        @Override
        public void fileDeleted(FileEvent fe) {}

        @Override
        public void fileRenamed(FileRenameEvent fre) {}

        @Override
        public void fileAttributeChanged(FileAttributeEvent fae) {}

    }

    Read the article

  • Equipping the APEX "Map" Region with Custom Maps

    - by carstenczarski
    Since version 4.0, APEX has offered the "Map" chart type, which makes it very easy to integrate maps into an APEX application. As with all chart types, the rendering of the maps is based on AnyChart. Although APEX offers a wide variety of maps, in practice they are rarely sufficient, since requirements vary too widely; for Germany, only two maps are offered. So it is often necessary to extend the APEX distribution with your own maps. Our current community tip describes how to do that.

    Read the article

  • IRC News from #netbeans on FreeNode

    - by Geertjan
    I joined the #netbeans channel on FreeNode last week and the discussions there are really great. It's so cool not to have the endless back and forth of an e-mail exchange. Instead, you can hammer out a complete solution to a problem while chatting live in the channel. A case in point was yesterday, when someone named 'charmeleon' wanted to create a NetBeans Platform based application that includes the "image" module from the NetBeans IDE sources. That way, he'd have a starting point for his own image-oriented application, since he'd not only have the NetBeans Platform, but also the sources of the "image" module. Had we been communicating via e-mail, it would have taken weeks, at least, to come to a solution. Instead, we hashed it out together live, including some very specific problems that would have been hard to communicate about via e-mail. In the end, I made a movie showing exactly the scenario that charmeleon was interested in (embedded in the original post). And, right now, in the #netbeans channel, charmeleon said: "NetBeans RCP feels like cheating once you start getting over the hump." I'm sure the fact that the hump was handled within a few hours of chatting on IRC is a big contributor to that impression.

    Read the article

  • Type Conversion in JPA 2.1

    - by delabassee
    The Java Persistence 2.1 specification (JSR 338) adds support for various new features such as schema generation, stored procedure invocation, use of entity graphs in queries and find operations, unsynchronized persistence contexts, injection into entity listener classes, etc. JPA 2.1 also adds support for type conversion methods, sometimes called Type Converters. This new facility lets developers specify methods to convert between the entity attribute representation and the database representation for attributes of basic types. For additional details on type conversion, you can check the JSR 338 specification and its corresponding JPA 2.1 Javadocs. In addition, you can also check these two articles: the first ('How to implement a Type Converter') gives a short overview of type conversion, while the second ('How to use a JPA Type Converter to encrypt your data') implements a simple use case (encrypting data) to illustrate it. Mission-critical applications would probably rely on the transparent database encryption facilities provided by the database, but that's not the point here; this use case is easy enough to illustrate JPA 2.1 type conversion.
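
    As a concrete illustration, here is a minimal sketch (mine, not taken from the spec or the articles above) of a converter that stores a Boolean entity attribute as a single 'Y'/'N' character, using the standard JPA 2.1 javax.persistence.AttributeConverter contract:

    import javax.persistence.AttributeConverter;
    import javax.persistence.Converter;

    // Stores Boolean attributes as "Y"/"N" in the database column.
    @Converter(autoApply = true)
    public class BooleanToYNConverter implements AttributeConverter<Boolean, String> {

        @Override
        public String convertToDatabaseColumn(Boolean attribute) {
            // entity representation -> database representation
            return Boolean.TRUE.equals(attribute) ? "Y" : "N";
        }

        @Override
        public Boolean convertToEntityAttribute(String dbData) {
            // database representation -> entity representation
            return "Y".equals(dbData);
        }
    }

    With autoApply = true the converter applies to all Boolean basic attributes in the persistence unit; alternatively, drop autoApply and annotate individual attributes with @Convert(converter = BooleanToYNConverter.class).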

    Read the article

  • Extreme Portability: OpenJDK 7 and GlassFish 3.1.1 on Power Mac G5!

    - by MarkH
    Occasionally you hear someone grumble about platform support for some portion or combination of the Java product "stack". As you're about to see, this really is not as much of a problem as you might think. Our friend John Yeary was able to pull off a pretty slick feat with his vintage Power Mac G5. In his words: "Using a build script sent to me by Kurt Miller, build recommendations from Kelly O'Hair, and the great work of the BSD Port team... I created a new build of OpenJDK 7 for my PPC-based system using the Zero VM. The results are fantastic. I can run GlassFish 3.1.1 along with all my enterprise applications." I recently had the opportunity to pick up an old G5 for little money and passed on it. What would I do with it? At the time, I didn't think it would be more than a space-consuming novelty. Turns out... I could have had some fun and a useful piece of hardware at the same time. Maybe it's time to go bargain-hunting again. For more information about repurposing classic Apple hardware and learning a few JDK-related tricks in the process, visit John's site for the full article, available here. All the best, Mark

    Read the article

  • New RUP Patch for iSupplier Portal, Sourcing and Supplier Lifecycle Management (SLM)

    - by LuciaC
    Just released - the 12.1.3 Rollup (RUP) Patch 17525552:R12.PRC_PF.B for iSupplier Portal, Sourcing and Supplier Lifecycle Management (SLM). Who should apply this patch? Anyone on Release 12.1.3 who is using iSupplier Portal, Sourcing or Supplier Lifecycle Management (SLM) functionality. The following areas have had major fixes:
    - Prospective Supplier Guided Navigation: Train navigation is introduced for prospective supplier registration, so that prospective suppliers can see all the steps needed to register successfully.
    - Supplier Registration Workflow Enhancement: This release adds Approval Management Engine (AME) action-required notifications for supplier approval, so that all workflow-related features can be enabled: vacation rules can be set, approvals can be forwarded, and more information can be requested through the notification itself. Additionally, AME parallel approval support for Supplier Registration approvals has been added.
    - Reinstate Supplier Request: Buyers can reopen/reinstate a rejected supplier; the supplier can then access their previously rejected registration again, make changes, and resubmit the request.
    - Contact Address Association: The prospective supplier can associate addresses with contacts (including the primary contact) during the registration process.
    - Primary Contact Enhancement: The prospective supplier can be registered without creating a user account for the primary contact.
    - Mandatory Attributes: In the negotiation requirement creation page, the lookup meaning of 'Internal' has been changed to 'Internal Optional', and a new lookup value with the meaning 'Internal Required' has been added. The values available in the 'Type' dropdown are now Display Only, Internal Optional, Internal Required, Supplier Optional and Supplier Required. During supplier evaluations, an internal user's response can now be made mandatory by using the Internal Required type during requirement creation.
    - Notifications to Supplier: When the supplier saves and submits their registration request, a notification with a link to a registration status page is sent for further access. When the buyer approves, rejects or returns the request, the supplier is notified of the current status by email.
    There are also 10 major enhancements included in this RUP. For information about this RUP, including the fixes and enhancements included, how to access and apply the patch, performing an impact analysis on your system, and testing recommendations, see Doc ID 1591198.1. Don't delay; apply the patch today!

    Read the article

  • Project Jigsaw: On the next train

    - by Mark Reinhold
    I recently proposed to defer Project Jigsaw from Java 8 to Java 9. Feedback on the proposal was about evenly divided as to whether Java 8 should be delayed for Jigsaw, Jigsaw should be deferred to Java 9, or some other, usually less-realistic, option should be taken. The ultimate decision rested, of course, with the Java SE 8 (JSR 337) Expert Group. After due consideration, a strong majority of the EG agreed to my proposal. In light of this decision we can still make progress in Java 8 toward the convergence of the higher-end Java ME Platforms with Java SE. I previously suggested that we consider defining a small number of Profiles which would allow compact configurations of the SE Platform to be built and deployed. JEP 161 lays out a specific initial proposal for such Profiles. There is also much useful work to be done in Java 8 toward the fully-modular platform in Java 9. Alan Bateman has submitted JEP 162, which proposes some changes in Java 8 to smooth the eventual transition to modules, to provide new tools to help developers prepare for modularity, and to deprecate and then, in Java 9, actually remove certain API elements that are a significant impediment to modularization. Thanks to everyone who responded to the proposal with comments and questions. As I wrote initially, deferring Jigsaw to a Java 9 release in 2015 is by no means a pleasant decision. It does, however, still appear to be the best available option, and it is now the plan of record.

    Read the article

  • JSR updates - October 2013

    - by Heather VanCura
    A handful of JSRs have been making progress in the JCP program: Java SE, Java ME and Java EE JSRs. More to come in the next few weeks! Highlights and links to JSR material below. JSR 337, Java SE 8 Release Contents, published an Early Draft Review. JSR 351, Java Identity API, published an Early Draft Review. JSR 360, Connected Limited Device Configuration (CLDC) 8, passed the EC Public Review Ballot with 21 yes votes. JSR 361, Java ME Embedded Profile, passed the EC Public Review Ballot with 20 yes votes. JSR 107, JCACHE - Java Temporary Caching API, published an update to its JSR Community Update Page, where you can find schedule information (plans to submit a Proposed Final Draft very soon), Adopt-a-JSR suggestions, and presentation material from JavaOne.

    Read the article

  • Using Queries with Coherence Write-Behind Caches

    - by jpurdy
    Applications that use write-behind caching and wish to query the logical entity set have the option of querying the NamedCache itself or querying the database. In the former case, no particular restrictions exist beyond the limitations intrinsic to the Coherence query engine itself. In the latter case, queries may see partially committed transactions (e.g. with a parent-child relationship, the version of the parent may be different from the version of the child objects) and/or significant version skew (the query may see the current version of one object and a far older version of another object). This is consistent with "read committed" semantics, but the read skew may be far greater than would ever occur in a non-cached environment. As is usually the case, the application developer may choose to accept these limitations (with the hope that they are sufficiently infrequent), or they may choose to validate the reads (perhaps via a version flag on the objects). This also applies to situations where a third-party application (such as a reporting tool) is querying the database. In many cases, the database may only be in a consistent state after the Coherence cluster has been halted.
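
    To make the version-flag idea concrete, here is a minimal, Coherence-agnostic sketch; the classes and the version fields are hypothetical, not part of any Coherence API. Each child records the parent version it was last written against, so a reader can detect skew between a parent and its children and re-read rather than trust the result:

    // Hypothetical domain classes for read validation.
    final class Parent {
        final long id;
        final long version; // incremented on every committed change
        Parent(long id, long version) { this.id = id; this.version = version; }
    }

    final class Child {
        final long parentId;
        final long parentVersion; // parent version seen at the child's last write
        Child(long parentId, long parentVersion) {
            this.parentId = parentId;
            this.parentVersion = parentVersion;
        }
    }

    final class ReadValidator {
        // Returns true when every child was written against the parent
        // version actually read; false signals read skew, so the caller
        // should re-read (or escalate) instead of trusting the result set.
        static boolean consistent(Parent parent, Iterable<Child> children) {
            for (Child c : children) {
                if (c.parentVersion != parent.version) {
                    return false;
                }
            }
            return true;
        }
    }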

    Read the article

  • For encryption of business information and web traffic, T4 and Solaris 11 stand head and shoulders above the crowd

    - by rituchhibber
    Everyone is talking about encryption of business information and web traffic, and T4 and Solaris 11 stand head and shoulders above the crowd. Each T4 chip has 8 crypto accelerators inside the chip, which means there are 32 in a T4-4. These are faster and offer more algorithms than almost all standalone devices, and it is all free with T4! What are you waiting for? Please contact Lucy Hillman or Graham Scattergood for more details. Your weekly tea time soundbite of the latest UK news, updates and initiatives on the SPARC T Series servers. T4 good news, best practice and feedback is always welcome.

    Read the article

  • New Write Flash SSDs and more disk trays

    - by Steve Tunstall
    In case you haven't heard, the write-flash SSDs in the ZFSSA have been updated. They're much faster now for the same price. Sweet. The new write-flash SSDs have a new part number, 7105026, so make sure you order the right ones. It's important to note that you MUST be on code level 2011.1.4.0 or higher to use these. They have increased from 6,000 to 11,000 IOPS, and from 200MB/s to 350MB/s throughput. Also, you can now add six SAS HBAs (up from four) to the 7420, allowing one to have three SAS channels with 12 disk trays each, for a new total of 36 disk trays. With 3TB drives, that's about 2.5 petabytes (36 trays x 24 drives per tray x 3TB is roughly 2.6PB, assuming fully populated 24-slot trays). Is that enough for you? Make sure you add new cards to the correct slots. I've talked about this before, but the handy-dandy slot matrix is in the original post again so you don't have to go find it. Remember the rules: you can have six of any one kind of card (like six 10GigE cards), but you only really get 8 slots, since you have two SAS cards no matter what. If you want more than 12 disk trays, you need two more SAS cards, so think about expansion later, too. In fact, if you are going to have two different speeds of drives, in other words if you want to mix 15K-speed and 7,200-speed drives in the same system, I would highly recommend two different SAS channels. So I would want four SAS cards in that system, no matter how many trays you have.

    Read the article

  • PostgreSQL, Ubuntu, NetBeans IDE (Part 2)

    - by Geertjan
    Now let's create the start of a CRUD application on the NetBeans Platform, using Hibernate and PostgreSQL to do so. The NetBeans Platform CRUD Tutorial should get you up and running creating the NetBeans Platform application, on top of the setup outlined yesterday. Open the generated "persistence.xml" in Design mode and then switch the persistence library to Hibernate. (The original post includes screenshots of the resulting application structure, the content of the Hibernate module, and the running result.) And here's the source code: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.3/misc/NBPostgreSQL
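
    For context, here is a minimal sketch of what the finished setup gives you at runtime, with Hibernate acting as the JPA provider behind the standard javax.persistence bootstrap. The persistence-unit name and the Customer entity are hypothetical placeholders for whatever the wizard generated in your persistence.xml:

    import java.util.List;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class CustomerQuery {

        public static void main(String[] args) {
            // "CustomersPU" is a hypothetical persistence-unit name; use the
            // one defined in the generated persistence.xml.
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("CustomersPU");
            EntityManager em = emf.createEntityManager();
            try {
                // "Customer" stands in for an entity class generated from
                // the PostgreSQL database.
                List<?> customers = em.createQuery("SELECT c FROM Customer c").getResultList();
                System.out.println("Found " + customers.size() + " customers");
            } finally {
                em.close();
                emf.close();
            }
        }
    }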

    Read the article

  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses, aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers: in a cluster with 500 members, that means each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or, with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is not only expensive in terms of the network itself, but also in the CPU required to send (or receive) a message (due to things like copying the packet bytes, processing an interrupt, etc.). Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many:
    - Announcing the arrival or departure of a member
    - Updating partition assignment maps across the cluster
    - Creating or destroying a NamedCache
    - Invalidating a cache entry from a large number of client-side near caches
    - Distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors)
    - Invoking clear() on a NamedCache
    The first few of these are operations that are primarily routed through a single senior member and also occur infrequently, so they are usually not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc.). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation. In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow control for cluster-wide messages (compared to the multicast-enabled code path), which may also result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation it could become critical. For the non-operational concerns (near caches, queries, etc.), the application itself will determine how much load is placed on the cluster. Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly, since even with a constant request rate the underlying workload increases at roughly the same rate as the underlying resources are added. Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.

    Read the article

  • Java University Pre-Conference Training Is Back

    - by Tori Wieldt
    The Java University pre-conference training will be back again this year at JavaOne! Java University gives conference attendees a chance to get even more from the conference experience, with the option to attend a full day of in-depth Java training delivered by the experts on the Sunday prior to the conference. We are working to make this event even better in 2012 and want to find out which technical sessions you would like to see offered. Web Services? Developing Rich Client Applications with Java SE 7? Developing Portable Java EE Applications with the Enterprise JavaBeans 3.1 API and Java Persistence API 2.0? Performance Tuning? There are so many hot topics to choose from that we need your help to decide. Get a preview of the full list of sessions we are considering and tell us which ones pique your interest most in our short survey. There are only four questions, so it shouldn't take you much time, and it will help make for a better event in 2012.

    Read the article

  • Access Control Lists for Roles

    - by Kyle Hatlestad
    Back in an earlier post, I wrote about how to enable entity security (access control lists, aka ACLs) for UCM 11g PS3. There was actually an additional security option included in that release, but not fully supported yet (only for Fusion Applications): the ability to define Roles as ACLs on entities (documents and folders). Now, in PS5, this security option is fully supported.

    The benefit of defining Roles for ACLs is that those user roles come from the enterprise security directory (e.g. OID, Active Directory, etc.), so the WebCenter Content administrator does not need to define them as they do with ACL Groups (Aliases). It's a bit of the best of both worlds: users are managed through the LDAP repository and are automatically granted or denied access through their group membership, which is mapped to Roles in WCC. A different way to think about it is as being able to add multiple Accounts to content items, which I often get asked about. Because LDAP groups can map to Accounts, there has always been an association between LDAP groups and access to the entity in WCC, but that mapping had to define a specific level of access (RWDA), and you could only apply one Account per content item or folder. Roles for ACLs removes both of those restrictions by allowing users to define more than one Role and to define the level of access on the fly.

    To turn on ACLs for Roles, there is a component to enable. On the Component Manager page, click the 'advanced component manager' link in the description paragraph at the top. In the list of Disabled Components, enable the RoleEntityACL component, then restart. This assumes the other configuration settings for the other ACLs have been made, as described in the earlier post. Once enabled, a new metadata field called xClbraRoleList will be created. If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection.

    For Users and Groups, values are automatically picked up from the corresponding database tables. In the case of Roles, there is an explicitly defined list of choices that is made available, and these values must match the roles coming from the enterprise security repository. To add these values, go to Administration -> Admin Applets -> Configuration Manager. On the Views tab, edit the values for the ExternalRolesView. By default, 'guest' and 'authenticated' are added. Once added, you can assign the roles to your content or folder. If you can both access the Security Group for that item and you belong to that particular Role, you now have access to that item. If you don't belong to that Role, you won't!

    [Extra] Because the selection mechanism for the list is a type-ahead field, users may not even know the possible choices to start typing. To help them, you can add a placeholder field to the form which offers the entire list of roles as an option list they can scroll through (assuming it's a manageable size) to see what to type. Being a placeholder field, it won't need to be added to the custom metadata database table or the search engine.

    Read the article

  • Concurrency Utilities for Java EE Early Draft (JSR 236)

    - by arungupta
    Concurrency Utilities for Java EE is being worked on as JSR 236 and has released an Early Draft. It provides concurrency capabilities to Java EE application components without compromising container integrity. Simple (common) and advanced concurrency patterns are easily supported without sacrificing usability. Using Java SE concurrency utilities such as the java.util.concurrent API, java.lang.Thread and java.util.Timer in a Java EE application component such as an EJB or Servlet is problematic, since the container and server have no knowledge of these resources. JSR 236 enables concurrency largely by extending the Concurrency Utilities API developed under JSR 166. This also allows consistency between the Java SE and Java EE concurrency programming models. There are four main programming interfaces available: ManagedExecutorService, ManagedScheduledExecutorService, ContextService, and ManagedThreadFactory.

    ManagedExecutorService is a managed version of java.util.concurrent.ExecutorService. The implementations of this interface are provided by the container and accessible using a JNDI reference:

    <resource-env-ref>
      <resource-env-ref-name>
        concurrent/BatchExecutor
      </resource-env-ref-name>
      <resource-env-ref-type>
        javax.enterprise.concurrent.ManagedExecutorService
      </resource-env-ref-type>
    </resource-env-ref>

    and available as:

    @Resource(name="concurrent/BatchExecutor")
    ManagedExecutorService executor;

    It is recommended to bind the JNDI references in the java:comp/env/concurrent subcontext. The asynchronous tasks that need to be executed must implement the java.lang.Runnable or java.util.concurrent.Callable interface:

    public class MyTask implements Runnable {
        public void run() {
            // business logic goes here
        }
    }

    OR

    public class MyTask2 implements Callable<Date> {
        public Date call() {
            // business logic goes here
        }
    }

    The task is then submitted to the executor using one of the submit methods, which return a Future instance. The Future represents the result of the task and can also be used to check if the task is complete or to wait for its completion.

    Future<String> future = executor.submit(new MyTask(), "done");
    . . .
    String result = future.get();

    Another example of submitting tasks:

    class MyTask implements Callable<Long> { . . . }
    class MyTask2 implements Callable<Date> { . . . }

    ArrayList<Callable> tasks = new ArrayList<>();
    tasks.add(new MyTask());
    tasks.add(new MyTask2());
    List<Future<Object>> result = executor.invokeAll(tasks);

    The ManagedExecutorService may be configured with different properties, such as:
    - Hung Task Threshold: time in milliseconds that a task can execute before it is considered hung
    - Pool Core Size: number of threads to keep alive
    - Pool Maximum Size: maximum number of threads allowed in the pool
    - Keep Alive: time to allow threads to remain idle when the number of threads is greater than Core Size
    - Work Queue Capacity: number of tasks that can be stored in the inbound buffer
    - Thread Use: whether the application intends to run short- or long-running tasks; accordingly, pooled or daemon threads are picked

    ManagedScheduledExecutorService adds delay and periodic task-running capabilities to ManagedExecutorService. The implementations of this interface are provided by the container and accessible using a JNDI reference, declared like the resource above but with type javax.enterprise.concurrent.ManagedScheduledExecutorService, and available as:

    @Resource(name="concurrent/timedExecutor")
    ManagedScheduledExecutorService executor;

    The tasks are then submitted using the submit, invokeXXX or scheduleXXX methods.

    ScheduledFuture<?> future = executor.schedule(new MyTask(), 5, TimeUnit.SECONDS);

    This will create and execute a one-shot action that becomes enabled after a 5-second delay. More control is possible using one of the newly added methods:

    class MyTaskListener implements ManagedTaskListener {
        public void taskStarting(...) { . . . }
        public void taskSubmitted(...) { . . . }
        public void taskDone(...) { . . . }
        public void taskAborted(...) { . . . }
    }

    ScheduledFuture<?> future = executor.schedule(new MyTask(), 5, TimeUnit.SECONDS, new MyTaskListener());

    Here, ManagedTaskListener is used to monitor the state of a task's Future. ManagedThreadFactory provides a method for creating threads for execution in a managed environment. A simple usage:

    @Resource(name="concurrent/myThreadFactory")
    ManagedThreadFactory factory;
    . . .
    Thread thread = factory.newThread(new Runnable() { . . . });

    concurrent/myThreadFactory is a JNDI resource. There is a lot of interesting content in the Early Draft; download it and read it yourself. The implementation will be made available soon and will also be integrated into GlassFish 4. Some references for further exploration: Javadoc, Early Draft Specification, concurrency-ee-spec.java.net, [email protected]

    Read the article

  • JCP Open EC Meeting on 30 September 2012

    - by heathervc
    The JCP program office and Executive Committee invite all Java Community members to attend the OPEN EC Meeting on Sunday, 30 September at the Clift Hotel in San Francisco. The meeting is adjacent to The Zone at JavaOne, but no JavaOne (or any other) pass is required to attend. It is OPEN to all! Agenda topics include: JCP.Next (status/overview of JSRs 355 and 358), improving communications between the EC and the community, open Q&A, and reminders of JCP events at JavaOne and the annual awards. Any other suggestions? This meeting is for you. Let us know your questions at pmo at jcp.org, or bring them with you. Details below. JCP Public Executive Committee Face-to-Face Meeting, open to Executive Committee Members and the Java Developer Community. Location: Clift Hotel, 495 Geary Street, San Francisco - Rita Room (downstairs from the Lobby). Date and Time: 9/30/12, 2:00 PM - 3:30 PM. See you there. Check out all of the JCP @ JavaOne events.

    Read the article

  • Feed Reader Fix

    - by Geertjan
    In the FeedReader sample (available in the New Projects window), there's this piece of code:

    private static Feed getFeed(Node node) {
        InstanceCookie ck = node.getLookup().lookup(InstanceCookie.class);
        if (ck == null) {
            throw new IllegalStateException("Bogus file in feeds folder: "
                    + node.getLookup().lookup(FileObject.class));
        }
        try {
            return (Feed) ck.instanceCreate();
        } catch (ClassNotFoundException ex) {
            Exceptions.printStackTrace(ex);
        } catch (IOException ex) {
            Exceptions.printStackTrace(ex);
        }
        return null;
    }

    Since 7.1, for some reason, the above doesn't work. What does work, and is simpler, is this, instead of the above:

    private static Feed getFeed(Node node) {
        Feed f = FileUtil.getConfigObject("RssFeeds/sample.instance", Feed.class);
        if (f == null) {
            throw new IllegalStateException("Bogus file in feeds folder: "
                    + node.getLookup().lookup(FileObject.class));
        }
        return f;
    }

    So, the code needs to be fixed in the sample.

    Read the article

  • Single CAS web application in a cluster

    - by Dolf Dijkstra
    Recently a customer wanted to set up a cluster of CAS nodes to be used together with WebCenter Sites. In the process of setting this up, they realized that they needed to create a web application per managed server. They did not want this management burden and would rather have one web application deployed to multiple nodes. The reason there is a need for a unique application per node is that the web application contains information that needs to be unique per node: the postfix for the ticket id. My customer would like to externalize the node-specific configuration to either a specific classpath per managed server or to system properties set at startup.

    It turns out that the postfix for ticket ids is managed through a property, host.name, and that this property can be externalized. The host.name property is used in /webapps/cas/WEB-INF/spring-configuration/uniqueIdGenerators.xml. It is set in /webapps/cas/WEB-INF/spring-configuration/propertyFileConfigurer.xml in a PropertyPlaceholderConfigurer. The documentation for PropertyPlaceholderConfigurer (http://static.springsource.org/spring/docs/2.0.x/api/org/springframework/beans/factory/config/PropertyPlaceholderConfigurer.html) indicates that properties defined through the PropertyPlaceholderConfigurer can be externalized.

    To enable this externalization, you would need to change host.properties so it is generic for all the managed servers and thus can be reused by all of them: host.name=${cluster.node.id}. The next step is to change the startup scripts for the managed servers and add a system property: -Dcluster.node.id=<something unique and stable>. Voila, the postfix is externalized and the web application can be shared amongst the cluster nodes.

    Read the article

  • LOV-Based, Dynamic Form Quick Picks

    - by carstenczarski
    Quick picks have been around since the early days of Application Express, and the Application Builder itself uses them quite intensively. One click on a quick pick, and the entry is immediately selected in the select list or placed into the text field. Quick picks can also be used in your own applications: the properties of every form item include a Quick Picks section. Entries that end users need frequently are very good candidates for quick picks. However, quick picks are always configured as static entries, which brings several drawbacks: for any change, the APEX developer has to get involved; entries cannot be reused; the separator is always a comma and cannot be changed; and dynamically generated or even computed entries are only possible via the workaround of hidden APEX items. This tip presents an APEX plugin that enables dynamic quick picks, that is, quick picks based on a list of values or a SQL query.

    Read the article

  • QotD: Eben Upton on Raspberry Pi Model B Shipping With 512MB of RAM

    One of the most common suggestions we've heard since launch is that we should produce a more expensive "Model C" version of Raspberry Pi with extra RAM. This would be useful for people who want to use the Pi as a general-purpose computer, with multiple large applications running concurrently, and would enable some interesting embedded use cases (particularly using Java) which are slightly too heavyweight to fit comfortably in 256MB. The downside of this suggestion for us is that we're very attached to $35 as our highest price point. With this in mind, we're pleased to announce that from today all Model B Raspberry Pis will ship with 512MB of RAM as standard.
    Eben Upton, a founder and trustee of the Raspberry Pi Foundation, in a blog post announcing the change.

    Read the article

  • ndd on Solaris 10

    - by user12620111
    This is mostly a repost of LaoTsao's Weblog with some tweaks. Last time I tried to cut & paste directly off of his page, some of the XML was messed up. I run this from my MacBook. It should also work from your Windows laptop if you use cygwin.

    ================ If not already present, create a ssh key on your laptop ================

    # ssh-keygen -t rsa

    ================ Enable passwordless ssh from my laptop. You need to type in the root password for the remote machines once; after that, you no longer need to type in the password when you ssh or scp from your laptop to the servers. ================

    #!/usr/bin/env bash
    for server in `cat servers.txt`
    do
      echo root@$server
      cat ~/.ssh/id_rsa.pub | ssh root@$server "cat >> .ssh/authorized_keys"
    done

    ================ servers.txt ================

    testhost1
    testhost2

    ================ etc_system_addins ================

    set rpcmod:clnt_max_conns=8
    set zfs:zfs_arc_max=0x1000000000
    set nfs:nfs3_bsize=131072
    set nfs:nfs4_bsize=131072

    ================ ndd-nettune.txt ================

    #!/sbin/sh
    #
    # ident   "@(#)ndd-nettune.xml    1.0     01/08/06 SMI"

    . /lib/svc/share/smf_include.sh
    . /lib/svc/share/net_include.sh

    # Make sure that the libraries essential to this stage of booting can be found.
    LD_LIBRARY_PATH=/lib; export LD_LIBRARY_PATH

    echo "Performing Directory Server Tuning..." >> /tmp/smf.out

    #
    # Standard SuperCluster Tunables
    #
    /usr/sbin/ndd -set /dev/tcp tcp_max_buf 2097152
    /usr/sbin/ndd -set /dev/tcp tcp_xmit_hiwat 1048576
    /usr/sbin/ndd -set /dev/tcp tcp_recv_hiwat 1048576

    # Reset the library path now that we are past the critical stage
    unset LD_LIBRARY_PATH

    ================ ndd-nettune.xml ================

    <?xml version="1.0"?>
    <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
    <!-- ident "@(#)ndd-nettune.xml 1.0 04/09/21 SMI" -->
    <service_bundle type='manifest' name='SUNWcsr:ndd'>
      <service name='network/ndd-nettune' type='service' version='1'>
        <create_default_instance enabled='true' />
        <single_instance />
        <dependency name='fs-minimal' type='service' grouping='require_all' restart_on='none'>
          <service_fmri value='svc:/system/filesystem/minimal' />
        </dependency>
        <dependency name='loopback-network' grouping='require_any' restart_on='none' type='service'>
          <service_fmri value='svc:/network/loopback' />
        </dependency>
        <dependency name='physical-network' grouping='optional_all' restart_on='none' type='service'>
          <service_fmri value='svc:/network/physical' />
        </dependency>
        <exec_method type='method' name='start' exec='/lib/svc/method/ndd-nettune' timeout_seconds='3'> </exec_method>
        <exec_method type='method' name='stop' exec=':true' timeout_seconds='3'> </exec_method>
        <property_group name='startd' type='framework'>
          <propval name='duration' type='astring' value='transient' />
        </property_group>
        <stability value='Unstable' />
        <template>
          <common_name>
            <loctext xml:lang='C'> ndd network tuning </loctext>
          </common_name>
          <documentation>
            <manpage title='ndd' section='1M' manpath='/usr/share/man' />
          </documentation>
        </template>
      </service>
    </service_bundle>

    ================ system_tuning.sh ================

    #!/usr/bin/env bash
    for server in `cat servers.txt`
    do
      cat etc_system_addins | ssh root@$server "cat >> /etc/system"
      scp ndd-nettune.xml root@${server}:/var/svc/manifest/site/ndd-nettune.xml
      scp ndd-nettune.txt root@${server}:/lib/svc/method/ndd-nettune
      ssh root@$server chmod +x /lib/svc/method/ndd-nettune
      ssh root@$server svccfg validate /var/svc/manifest/site/ndd-nettune.xml
      ssh root@$server svccfg import /var/svc/manifest/site/ndd-nettune.xml
    done

    Read the article
