Search Results

Search found 14426 results on 578 pages for 'oracle procurement'.


  • EclipseLink does multitenancy. Today.

    - by alexismp
    So you heard Java EE 7 will be about the cloud, but that didn't mean a whole lot to you. Then it was characterized as PaaS, something in between IaaS and SaaS. And finally it all became clear when referenced as support for multitenancy. Or did it? When it comes to JPA and persistence in general, multitenancy is defined as the ability to share a database schema among various groups of users (i.e. tenants). This means that there is no database setup or reconfiguration required, as the data is co-located in the same database. EclipseLink 2.3 (the Indigo train release) lets you do just that by supporting tenant discriminator column(s) via annotations or XML, with applications providing values for these discriminators via an API or PU configuration. Check out the details here. EclipseLink 2.3 is scheduled to be the default and supported JPA provider for GlassFish 3.1.1. Another nice feature of this release is the ability to extend persistence units on the fly. The GlassFish Podcast has an interview up with EclipseLink's Doug Clarke. Expect more on multitenancy across the Java EE spectrum as the specification work progresses.
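
    To make the tenant discriminator support concrete, here is a minimal sketch of what it can look like in code, assuming EclipseLink 2.3 on the classpath. The entity, the persistence unit name ("ticketPU") and the property key ("tenant.id") are made-up examples; the annotations come from the org.eclipse.persistence.annotations package.

        import java.util.HashMap;
        import java.util.Map;
        import javax.persistence.Entity;
        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Id;
        import javax.persistence.Persistence;
        import org.eclipse.persistence.annotations.Multitenant;
        import org.eclipse.persistence.annotations.TenantDiscriminatorColumn;

        // Rows for all tenants share one table; EclipseLink adds a TENANT_ID
        // predicate to queries and populates the column on insert for this entity.
        @Entity
        @Multitenant
        @TenantDiscriminatorColumn(name = "TENANT_ID", contextProperty = "tenant.id")
        public class SupportTicket {
            @Id
            private Long id;
            private String summary;
            // getters and setters omitted
        }

        class TenantAccess {
            // For illustration only: each tenant supplies its discriminator value
            // when the EntityManagerFactory is created (it could also be set per
            // EntityManager or in the persistence unit configuration).
            static EntityManager managerFor(String tenantId) {
                Map<String, Object> props = new HashMap<String, Object>();
                props.put("tenant.id", tenantId); // matches the contextProperty above
                EntityManagerFactory emf =
                        Persistence.createEntityManagerFactory("ticketPU", props);
                return emf.createEntityManager();
            }
        }

    With this in place, persisting or querying SupportTicket through that EntityManager is automatically restricted to rows whose TENANT_ID matches the supplied value, so tenants share a schema without seeing each other's data.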

    Read the article

  • Schema Based Code Completion for NetBeans Platform Applications

    - by Geertjan
    Toni's recent blog entry provides, among several other interesting things, instructions for something I've been wanting to cover for a long time: schema-based code completion. The above is a sample I created via Toni's tutorial, using the schema described here: http://www.w3schools.com/schema/schema_example.asp
    The support for the Navigator ain't bad either, especially considering I didn't do any coding at all to get all this. And here's where you can find the whole sample: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.2/misc/ShipOrder

    Read the article

  • Reminder: GlassFish 3.1 Clustering Webinar Today!

    - by alexismp
    Quick reminder for those of you who missed the GlassFish Clustering Webinar from March: we have a new session today (June 28th, 2011). The session is planned for 10:00 a.m. PT / 1:00 p.m. ET / 19:00 CET and you'll need to register first. John Clingan, Principal Product Manager for GlassFish, will walk you through the various clustering features introduced and enhanced in version 3.1. This includes the SSH-based provisioning of clusters (never log in to any machine again), centralized administration, high availability and smart failover, the load-balancer, Domain Admin Server (DAS) performance improvements, cluster deployments, and more. Besides learning about these new product features, this is also your chance to ask John and other GlassFish team members questions. See you there!

    Read the article

  • How To: Drag & Drop in an APEX Calendar Region

    - by carstenczarski
    Did you know that you can move entries in an APEX calendar region using drag & drop - just like in familiar applications such as Microsoft Outlook or Mozilla Thunderbird? And you don't even have to write elaborate JavaScript for it - APEX provides almost everything you need out of the box. Read our latest tip to learn how to enable drag & drop in an APEX calendar region - in just a few minutes. By the way: have you already registered for the APEX-Entwicklertag...?

    Read the article

  • Look after your tribe of Pygmies with Java ME technology

    - by hinkmond
    Here's a game that is crossing over from the iDrone to the more lucrative Java ME cell phone market. See: Pocket God on Java ME. Here's a quote: Massive casual iPhone hit Pocket God has parted the format waves and walked over to the land of Java mobiles, courtesy of AMA. The game sees you take control of an omnipotent, omnipresent, and (possibly) naughty deity, looking after your tribe of Pygmies... Everyone knows that there are more Java ME feature phones than grains of sand on a Pocket God island beach. So, when iDrone games are done piddling around on a lesser platform, they move over to Java ME where things are really happening. Hinkmond

    Read the article

  • More Twig Improvements

    - by Ondrej Brejla
    Hi all! We are here again to introduce you to some of our new NetBeans 7.3 features. Today we'll show you some more Twig improvements. So let's start!

    Code Templates: The first feature is about Code Templates. We added some basic templates to improve your editor experience. You will be really fast with them! If you don't know what Code Templates are, they are pieces of code (snippets) which are inserted into the editor after typing their abbreviation and pressing the Tab key (or another key, which you can define in Tools -> Options -> Editor -> Code Templates -> Expand Template on) to expand them. All default Twig Code Templates can be found in Tools -> Options -> Editor -> Code Templates -> Twig Markup. You can add your custom templates there as well. Note: Twig Markup code templates have to be expanded inside Twig delimiters (i.e. { and }). If you try to expand them outside of the delimiters, it will not work, because then you are in the HTML context. If you want to add a template which will contain Twig delimiters too, you have to add it directly into Tools -> Options -> Editor -> Code Templates -> HTML/XHTML. Don't add them into Twig File; it will not work.

    Interpolation Coloring: The second, minor, feature is that we now colorize Twig interpolation. It's a small feature, but useful :-)

    And that's all for today. As usual, please test it and if you find something strange, don't hesitate to file a new issue (product php, component Twig). Thanks a lot!

    Read the article

  • Add-ons for Firefox - Java Plugin has been blocked for JRE versions below 1.6.0_31 or between 1.7.0 and 1.7.0_2

    - by user702295
    As Java 1.6u31 is not yet certified for use with EBS or Demantra, you may notice issues in relation to the Java plug-in. Demantra Development is currently working to certify Java 1.6u31 and recommends that you upgrade to that version. EBS customers should not install 1.6u31, as it is not certified. If you do upgrade your browser, you will either need to downgrade to a lower release of Firefox or find a way of allowing Firefox to use the older version of the Java plug-in.

    Read the article

  • Spotlight on GlassFish 4.1: #7 WebSocket Session Throttling and JMX Monitoring

    - by delabassee
    'Spotlight on GlassFish 4.1' is a series of posts that highlights specific enhancements of the upcoming GlassFish 4.1 release. It could be a new feature, a fix, a behavior change, a tip, etc.

    GlassFish 4.1 embeds Tyrus 1.8.1, which is compliant with the Maintenance Release of JSR 356 ("WebSocket API 1.1"). This release also brings additional features to the WebSocket support in GlassFish.

    JMX Monitoring: Tyrus now exposes WebSocket metrics through JMX. In GF 4.1, the following message statistics are monitored for both sent and received messages: messages count, messages count per second, average message size, smallest message size, and largest message size. Those statistics are collected independently of the message type (global count) and per specific message type (text, binary and control messages). In GF 4.1, Tyrus also monitors, and exposes through JMX, errors at the application and endpoint level. For more information, please check Tyrus JMX Monitoring.

    Session Throttling: To preserve resources on the server hosting WebSocket endpoints, Tyrus now offers ways to limit the number of open sessions. Those limits can be configured at different levels: per whole application, per endpoint, or per remote endpoint address (client IP address). For more details, check Tyrus Session Throttling.

    The next entry will focus on Tyrus's new client-side features.
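
    To illustrate what limiting open sessions means in practice, here is a minimal, hypothetical sketch written against the plain JSR 356 API rather than Tyrus's own configuration (which lets you declare such limits instead of hand-coding them); the endpoint path and the limit are invented:

        import java.io.IOException;
        import java.util.concurrent.atomic.AtomicInteger;
        import javax.websocket.CloseReason;
        import javax.websocket.OnClose;
        import javax.websocket.OnOpen;
        import javax.websocket.Session;
        import javax.websocket.server.ServerEndpoint;

        @ServerEndpoint("/notifications") // hypothetical endpoint
        public class ThrottledEndpoint {

            private static final int MAX_OPEN_SESSIONS = 100; // illustrative limit
            private static final AtomicInteger OPEN = new AtomicInteger();

            @OnOpen
            public void onOpen(Session session) throws IOException {
                if (OPEN.incrementAndGet() > MAX_OPEN_SESSIONS) {
                    // Refuse the session; onClose below decrements the counter
                    // for this rejected session as well.
                    session.close(new CloseReason(
                            CloseReason.CloseCodes.TRY_AGAIN_LATER,
                            "Too many open sessions"));
                }
            }

            @OnClose
            public void onClose(Session session) {
                OPEN.decrementAndGet();
            }
        }

    Tyrus applies the same idea at the application, endpoint and client-address levels and closes excess sessions for you, which is what the configuration described above controls.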

    Read the article

  • Getting Started with Amazon Web Services in NetBeans IDE

    - by Geertjan
    When you need to connect to Amazon Web Services, NetBeans IDE gives you a nice start. You can drag and drop the "itemSearch" service into a Java source file and then various Amazon files are generated for you. From there, you need to do a little bit of work because the request to Amazon needs to be signed before it can be used. Here are some references and places that got me started:

    http://associates-amazon.s3.amazonaws.com/signed-requests/helper/index.html
    http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html
    https://affiliate-program.amazon.com/gp/flex/advertising/api/sign-in.html

    You definitely need to sign up for the Amazon Associates program and also register/create an Access Key ID, which will also get you a Secret Key. Here's a simple Main class that I created that hooks into the generated RestConnection/RestResponse code created by NetBeans IDE:

        public static void main(String[] args) {
            try {
                String searchIndex = "Books";
                String keywords = "Romeo and Juliet";
                RestResponse result = AmazonAssociatesService.itemSearch(searchIndex, keywords);
                String dataAsString = result.getDataAsString();
                int start = dataAsString.indexOf("<Author>")+8;
                int end = dataAsString.indexOf("</Author>");
                System.out.println(dataAsString.substring(start,end));
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }

    Then I deleted the generated properties file and the authenticator and changed the generated AmazonAssociatesService.java file to the following:

        public class AmazonAssociatesService {
            private static void sleep(long millis) {
                try {
                    Thread.sleep(millis);
                } catch (Throwable th) {
                }
            }
            public static RestResponse itemSearch(String searchIndex, String keywords) throws IOException {
                SignedRequestsHelper helper;
                RestConnection conn = null;
                Map queryMap = new HashMap();
                queryMap.put("Service", "AWSECommerceService");
                queryMap.put("AssociateTag", "myAssociateTag");
                queryMap.put("AWSAccessKeyId", "myAccessKeyId");
                queryMap.put("Operation", "ItemSearch");
                queryMap.put("SearchIndex", searchIndex);
                queryMap.put("Keywords", keywords);
                try {
                    helper = SignedRequestsHelper.getInstance(
                            "ecs.amazonaws.com",
                            "myAccessKeyId",
                            "mySecretKey");
                    String sign = helper.sign(queryMap);
                    conn = new RestConnection(sign);
                } catch (IllegalArgumentException | UnsupportedEncodingException | NoSuchAlgorithmException | InvalidKeyException ex) {
                }
                sleep(1000);
                return conn.get(null);
            }
        }

    Finally, I copied this class into my application, which you can see is referred to above:

    http://code.google.com/p/amazon-product-advertising-api-sample/source/browse/src/com/amazon/advertising/api/sample/SignedRequestsHelper.java

    Here's the completed app, mostly generated via the drag/drop shown at the start, but slightly edited as shown above: That's all, now everything works as you'd expect.
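
    As an aside, if you are curious what the signing helper actually does, here is a condensed, hypothetical sketch of the idea: sort and URL-encode the query parameters, sign the canonical request with HMAC-SHA256 using your Secret Key, and append the Base64 result as a Signature parameter. The real SignedRequestsHelper linked above also adds a Timestamp parameter and uses stricter RFC 3986 encoding, so treat this only as an outline:

        import java.net.URLEncoder;
        import java.util.Base64;
        import java.util.Map;
        import java.util.SortedMap;
        import java.util.TreeMap;
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        public class SigningSketch {

            // Returns a signed request URL for the given host/path and parameters.
            public static String sign(Map<String, String> params, String secretKey,
                    String host, String path) throws Exception {
                // 1. Sort the parameters and build the canonical query string.
                SortedMap<String, String> sorted = new TreeMap<String, String>(params);
                StringBuilder query = new StringBuilder();
                for (Map.Entry<String, String> e : sorted.entrySet()) {
                    if (query.length() > 0) {
                        query.append('&');
                    }
                    query.append(URLEncoder.encode(e.getKey(), "UTF-8"))
                         .append('=')
                         .append(URLEncoder.encode(e.getValue(), "UTF-8"));
                }
                // 2. Sign "GET\nhost\npath\nquery" with HMAC-SHA256 and the Secret Key.
                String toSign = "GET\n" + host + "\n" + path + "\n" + query;
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(secretKey.getBytes("UTF-8"), "HmacSHA256"));
                String signature = Base64.getEncoder()
                        .encodeToString(mac.doFinal(toSign.getBytes("UTF-8")));
                // 3. Append the signature as one more query parameter.
                return "http://" + host + path + "?" + query
                        + "&Signature=" + URLEncoder.encode(signature, "UTF-8");
            }
        }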

    Read the article

  • Generated Methods with Type Hints

    - by Ondrej Brejla
    Hi all! Today we would like to introduce you to another feature from the upcoming NetBeans 7.3. It's about generating setters and constructors, and type hints for their parameters. For years, you have been able to use the Insert Code action to generate setters, getters, constructors and such. Nothing new. But from NetBeans 7.3 you can generate Fluent Setters! What does that mean? Simply that $this is returned from a generated setter. This is what it looks like:

    But that's not everything :) As you know, before a method is generated, you have to choose a field which will be associated with that method (in the case of constructors, you choose the fields which should be initialized by that constructor). And from NetBeans 7.3, type hints are generated automatically for these parameters! But only if a proper PHPDoc is used in the corresponding field declaration, of course. Here is what it looks like.

    And that's all for today. As usual, please test it and if you find something strange, don't hesitate to file a new issue (product php, component Editor). Thanks a lot!
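
    If you have not used fluent setters before, the pattern is easy to see in code. Here is the same idea expressed in Java purely for illustration (the NetBeans 7.3 generator produces the PHP equivalent, returning $this from each setter); the class and fields are made up:

        // Each setter returns "this", so calls can be chained.
        public class Customer {

            private String name;
            private String city;

            public Customer setName(String name) {
                this.name = name;
                return this;
            }

            public Customer setCity(String city) {
                this.city = city;
                return this;
            }
        }

        // Usage: new Customer().setName("Anna").setCity("Prague");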

    Read the article

  • SPARC Go To Market Webinar on June 21

    - by A&C Redaktion
    We would like to warmly invite you to the worldwide SPARC Go To Market webinar on June 21 at 17:00 CET. In this online event, our speaker, Bud Koch, Senior Principal Product Marketing Director, will give you an overview of SPARC / T4 marketing. He will present the current materials and show you what is planned for fiscal year 2013, giving you both insight and the right sales support. Further information about the webinar can be found here. Please dial in a few minutes early so that the webinar can start on time. If you cannot attend live, a recording will be made afterwards, which we will share here on the blog.

    Read the article

  • Tip #15: How To Debug Unit Tests During Maven Builds

    - by ByronNevins
    It must be really, really hard to step through unit tests in a debugger during a Maven build. Right? Wrong! Here is how I do it:

    1) Set up these environment variables:

        MAVEN_OPTS=-Xmx1024m -Xms256m -XX:MaxPermSize=512m
        MAVEN_OPTS_DEBUG=-Xmx1024m -Xms256m -XX:MaxPermSize=512m -Xdebug (no line break here!!) -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=9999
        MAVEN_OPTS_REG=-Xmx1024m -Xms256m -XX:MaxPermSize=512m

    2) Create two scripts or aliases like so:

        maveny.bat: set MAVEN_OPTS=%MAVEN_OPTS_DEBUG%
        mavenn.bat: set MAVEN_OPTS=%MAVEN_OPTS_REG%

    To debug, do this: run maveny.bat, run mvn install, and attach your debugger to port 9999 (set breakpoints, of course). When Maven gets to the unit test phase it will hit your breakpoint and wait for you. When you are done debugging, simply run mavenn.bat.

    Notes: If it takes a while to do the build, then you don't really need to set the suspend=y flag. If you set the suspend=n flag then you can just leave it -- but only one Maven build can run at a time because of the debug port conflict.

    Read the article

  • Integrating a Progress Bar into a Wizard

    - by Geertjan
    Normally, when you create a wizard, as described here, and you have your own iterator, you'll have a class signature like this:

        public final class MyWizardWizardIterator implements WizardDescriptor.InstantiatingIterator<WizardDescriptor> {

    Let's now imagine that you've got some kind of long running process your wizard needs to perform. Maybe the wizard needs to connect to something, which could take some time. Start by adding a new dependency on the Progress API, which gives you the classes that access the NetBeans Platform's progress functionality. Now all we need to do is change the class signature very slightly:

        public final class MyWizardWizardIterator implements WizardDescriptor.ProgressInstantiatingIterator<WizardDescriptor> {

    Take a look at the part of the signature above that changed, i.e., use WizardDescriptor.ProgressInstantiatingIterator instead of WizardDescriptor.InstantiatingIterator. Now you will need to implement a new instantiate method, one that receives a ProgressHandle. The other instantiate method, i.e., the one that already existed, should never be accessed anymore, and so you can add an assert to that effect:

        @Override
        public Set<?> instantiate() throws IOException {
            throw new AssertionError("instantiate(ProgressHandle) " //NOI18N
                    + "should have been called"); //NOI18N
        }

        @Override
        public Set instantiate(ProgressHandle ph) throws IOException {
            return Collections.emptySet();
        }

    OK. Let's now add some code to make our progress bar work:

        @Override
        public Set instantiate(ProgressHandle ph) throws IOException {
            ph.start();
            ph.progress("Processing...");
            try {
                //Simulate some long process:
                Thread.sleep(2500);
            } catch (InterruptedException ex) {
                Exceptions.printStackTrace(ex);
            }
            ph.finish();
            return Collections.emptySet();
        }

    And, maybe even more impressive, you can also do this:

        @Override
        public Set instantiate(ProgressHandle ph) throws IOException {
            ph.start(1000);
            ph.progress("Processing...");
            try {
                //Simulate some long process:
                ph.progress("1/4 complete...", 250);
                Thread.sleep(2500);
                ph.progress("1/2 complete...", 500);
                Thread.sleep(5000);
                ph.progress("3/4 complete...", 750);
                Thread.sleep(7500);
                ph.progress("Complete...", 1000);
                Thread.sleep(1000);
            } catch (InterruptedException ex) {
                Exceptions.printStackTrace(ex);
            }
            ph.finish();
            return Collections.emptySet();
        }

    The screenshots above show you what you should see when the Finish button is clicked in each case.

    Read the article

  • Automatically Refreshing APEX Reports

    - by carstenczarski
    Refreshing a report on an application page at regular intervals is quite easy: since APEX 4.0 you don't even have to write any JavaScript code for it; with an easy-to-use plugin from the APEX development team you can set it up in no time. In this tip we go a little further: for a table that contains a column with the time of the last change, we want to highlight the most recently changed values so that they are easier to spot.

    Read the article

  • ZFS for Database Log Files

    - by user12620111
    I've been troubled by drop outs in CPU usage in my application server, characterized by the CPUs suddenly going from close to 90% CPU busy to almost completely CPU idle for a few seconds. Here is an example of a drop out as shown by a snippet of vmstat data taken while the application server is under a heavy workload.

        # vmstat 1
         kthr      memory            page            disk          faults      cpu
         r b w   swap  free  re  mf pi po fr de sr s3 s4 s5 s6   in   sy   cs us sy id
         1 0 0 130160176 116381952 0 16 0 0 0 0  0  0  0  0  0 207377 117715 203884 70 21 9
        12 0 0 130160160 116381936 0 25 0 0 0 0  0  0  0  0  0 200413 117162 197250 70 20 9
        11 0 0 130160176 116381920 0 16 0 0 0 0  0  0  1  0  0 203150 119365 200249 72 21 7
         8 0 0 130160176 116377808 0 19 0 0 0 0  0  0  0  0  0 169826 96144 165194 56 17 27
         0 0 0 130160176 116377800 0 16 0 0 0 0  0  0  0  0  1 10245 9376 9164 2  1 97
         0 0 0 130160176 116377792 0 16 0 0 0 0  0  0  0  0  2 15742 12401 14784 4 1 95
         0 0 0 130160176 116377776 2 16 0 0 0 0  0  0  1  0  0 19972 17703 19612 6 2 92
        14 0 0 130160176 116377696 0 16 0 0 0 0  0  0  0  0  0 202794 116793 199807 71 21 8
         9 0 0 130160160 116373584 0 30 0 0 0 0  0  0 18  0  0 203123 117857 198825 69 20 11

    This behavior occurred consistently while the application server was processing synthetic transactions: HTTP requests from JMeter running on an external machine. I explored many theories trying to explain the drop outs, including:

    - Unexpected JMeter behavior
    - Network contention
    - Java Garbage Collection
    - Application Server thread pool problems
    - Connection pool problems
    - Database transaction processing
    - Database I/O contention

    Graphing the CPU %idle led to a breakthrough: several of the drop outs were 30 seconds apart. With that insight, I went digging through the data again, looking for other outliers that were 30 seconds apart. In the database server statistics, I found spikes in the iostat "asvc_t" (average response time of disk transactions, in milliseconds) for the disk drive that was being used for the database log files. Here is an example:

                            extended device statistics
            r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 2053.6    0.0 8234.3  0.0  0.2    0.0    0.1   0  24 c3t60080E5...F4F6d0s0
            0.0 2162.2    0.0 8652.8  0.0  0.3    0.0    0.1   0  28 c3t60080E5...F4F6d0s0
            0.0 1102.5    0.0 10012.8 0.0  4.5    0.0    4.1   0  69 c3t60080E5...F4F6d0s0
            0.0   74.0    0.0 7920.6  0.0 10.0    0.0  135.1   0 100 c3t60080E5...F4F6d0s0
            0.0  568.7    0.0 6674.0  0.0  6.4    0.0   11.2   0  90 c3t60080E5...F4F6d0s0
            0.0 1358.0    0.0 5456.0  0.0  0.6    0.0    0.4   0  55 c3t60080E5...F4F6d0s0
            0.0 1314.3    0.0 5285.2  0.0  0.7    0.0    0.5   0  70 c3t60080E5...F4F6d0s0

    Here is a little more information about my database configuration:

    - The database and application server were running on two different SPARC servers.
    - Storage for the database was on a storage array connected via 8 gigabit Fibre Channel.
    - Data storage and log files were on different physical disk drives.
    - Reliable low latency I/O is provided by battery backed NVRAM.
    - Highly available: two Fibre Channel links accessed via MPxIO, two mirrored cache controllers, and the log file physical disks were mirrored in the storage device.
    - Database log files were on a ZFS filesystem with cutting-edge technologies, such as copy-on-write and end-to-end checksumming.

    Why would I be getting service time spikes in my high-end storage?

    First, I wanted to verify that the database log disk service time spikes aligned with the application server CPU drop outs, and they did. At first, I guessed that the disk service time spikes might be related to flushing the write through cache on the storage device, but I was unable to validate that theory. After searching the WWW for a while, I decided to try using a separate log device:

        # zpool add ZFS-db-41 log c3t60080E500017D55C000015C150A9F8A7d0

    The ZFS log device is configured in a similar manner as described above: two physical disks mirrored in the storage array. This change to the database storage configuration eliminated the application server CPU drop outs. Here is the zpool configuration:

        # zpool status ZFS-db-41
          pool: ZFS-db-41
         state: ONLINE
          scan: none requested
        config:
                NAME                   STATE
                ZFS-db-41              ONLINE
                  c3t60080E5...F4F6d0  ONLINE
                logs
                  c3t60080E5...F8A7d0  ONLINE

    Now, the I/O spikes look like this:

                            extended device statistics
            r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1053.5    0.0 4234.1  0.0  0.8    0.0    0.7   0  75 c3t60080E5...F8A7d0s0
                            extended device statistics
            r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1131.8    0.0 4555.3  0.0  0.8    0.0    0.7   0  76 c3t60080E5...F8A7d0s0
                            extended device statistics
            r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1167.6    0.0 4682.2  0.0  0.7    0.0    0.6   0  74 c3t60080E5...F8A7d0s0
            0.0  162.2    0.0 19153.9 0.0  0.7    0.0    4.2   0  12 c3t60080E5...F4F6d0s0
                            extended device statistics
            r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1247.2    0.0 4992.6  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0
            0.0   41.0    0.0   70.0  0.0  0.1    0.0    1.6   0   2 c3t60080E5...F4F6d0s0
                            extended device statistics
            r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1241.3    0.0 4989.3  0.0  0.8    0.0    0.6   0  75 c3t60080E5...F8A7d0s0
                            extended device statistics
            r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1193.2    0.0 4772.9  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0

    We can see the steady flow of 4k writes to the ZIL device from O_SYNC database log file writes. The spikes are from flushing the transaction group.

    Like almost all problems that I run into, once I thoroughly understand the problem, I find that other people have documented similar experiences. Thanks to all of you who have documented alternative approaches. Saved for another day: now that the problem is obvious, I should try "zfs:zfs_immediate_write_sz" as recommended in the ZFS Evil Tuning Guide.

    References:
    - The ZFS Intent Log
    - Solaris ZFS, Synchronous Writes and the ZIL Explained
    - ZFS Evil Tuning Guide: Cache Flushes
    - ZFS Evil Tuning Guide: Tuning ZFS for Database Performance

    Read the article

  • Exalogic 2.0.1 Tea Break Snippets - Creating and using Distribution Groups

    - by The Old Toxophilist
    By default, running your Exalogic virtualised provides Cloud Users with what appears to be a single large resource: they can simply create vServers without caring about how those vServers are laid out on the underlying infrastructure. All the Cloud Users know is that they can create vServers. For example, if we have a Quarter Rack (8 nodes) and our Cloud User creates 8 vServers, those 8 vServers may run on 8 distinct nodes or may all run on the same node. Although in many cases we, as Cloud Users, may not be too worried about how the Virtualisation Algorithm decides where to place our vServers, there are cases where it is extremely important that vServers run on distinct physical compute nodes. For example, if we have a WebLogic Cluster we will want the servers within the cluster to run on distinct physical nodes to cover the situation where one physical node is lost. To achieve this, the Exalogic virtualised implementation provides Distribution Groups that define an anti-affinity policy that the underlying Virtualisation Algorithm takes into account when placing vServers. It should be noted that Distribution Groups must be created before you create vServers, because a vServer can only be added to a Distribution Group at creation time.

    Creating a Distribution Group: To create a Distribution Group we first need to select the Account in which we want the Distribution Group to be created. Once we have selected the account we will see the interface update, and Account-specific actions will be displayed within the Action panes. From the Action pane (or by right-clicking the Account), select the "Create Distribution Group" action. This initiates the create wizard as follows.

    Distribution Group Details: Within the first step of the wizard we can specify the name of the distribution group, which should be unique. In addition we can provide a detailed description of the group.

    Distribution Group Configuration: The second step of the wizard allows you to specify the number of elements required within this group; the maximum is the number of nodes within your Exalogic. At this point it is always better to specify a group with spare capacity, allowing for future expansion. As vServers are added to the group, the available slots decrease.

    Summary: Finally, the last step of the wizard displays a summary of the information entered.

    Read the article

  • Sortable & Filterable PrimeFaces DataTable

    - by Geertjan
    <h:form>
        <p:dataTable value="#{resultManagedBean.customers}" var="customer">
            <p:column id="nameHeader" filterBy="#{customer.name}" sortBy="#{customer.name}">
                <f:facet name="header">
                    <h:outputText value="Name" />
                </f:facet>
                <h:outputText value="#{customer.name}" />
            </p:column>
            <p:column id="cityHeader" filterBy="#{customer.city}" sortBy="#{customer.city}">
                <f:facet name="header">
                    <h:outputText value="City" />
                </f:facet>
                <h:outputText value="#{customer.city}" />
            </p:column>
        </p:dataTable>
    </h:form>

    That gives me this:

    And here's the filter in action:

    Behind this, I have:

        import com.mycompany.mavenproject3.entities.Customer;
        import java.io.Serializable;
        import java.util.List;
        import javax.annotation.PostConstruct;
        import javax.ejb.EJB;
        import javax.faces.bean.RequestScoped;
        import javax.inject.Named;

        @Named(value = "resultManagedBean")
        @RequestScoped
        public class ResultManagedBean implements Serializable {

            @EJB
            private CustomerSessionBean customerSessionBean;

            public ResultManagedBean() {
            }

            private List<Customer> customers;

            @PostConstruct
            public void init() {
                customers = customerSessionBean.getCustomers();
            }

            public List<Customer> getCustomers() {
                return customers;
            }

            public void setCustomers(List<Customer> customers) {
                this.customers = customers;
            }
        }

    And the above refers to the EJB below, which is a standard EJB that I create in all my Java EE 6 demos:

        import com.mycompany.mavenproject3.entities.Customer;
        import java.io.Serializable;
        import java.util.List;
        import javax.ejb.Stateless;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        @Stateless
        public class CustomerSessionBean implements Serializable {

            @PersistenceContext
            EntityManager em;

            public List getCustomers() {
                return em.createNamedQuery("Customer.findAll").getResultList();
            }
        }

    Only problem is that the columns are only sortable after the first time I use the filter.

    Read the article

  • Pragmas and exceptions

    - by Darryl Gove
    The compiler pragmas:

        #pragma no_side_effect(routinename)
        #pragma does_not_write_global_data(routinename)
        #pragma does_not_read_global_data(routinename)

    are used to tell the compiler more about the routine being called, and enable it to do a better job of optimising around the routine. If a routine does not read global data, then global data does not need to be stored to memory before the call to the routine. If the routine does not write global data, then global data does not need to be reloaded after the call. The no side effect directive indicates that the routine does no I/O, does not read or write global data, and the result only depends on the input. However, these pragmas should not be used on routines that throw exceptions. The following example indicates the problem:

        #include <iostream>

        extern "C" {
          int exceptional(int);
          #pragma no_side_effect(exceptional)
        }

        int exceptional(int a) {
          if (a==7) {
            throw 7;
          } else {
            return a+1;
          }
        }

        int a;
        int c=0;

        class myclass {
          public:
            int routine();
        };

        int myclass::routine() {
          for(a=0; a<1000; a++) {
            c=exceptional(c);
          }
          return 0;
        }

        int main() {
          myclass f;
          try {
            f.routine();
          } catch(...) {
            std::cout << "Something happened" << a << c << std::endl;
          }
        }

    The routine "exceptional" is declared as having no side effects, however it can throw an exception. The no side effects directive enables the compiler to avoid storing global data back to memory, and retrieving it after the function call, so the loop containing the call to exceptional is quite tight:

        $ CC -O -S test.cpp
        ...
        .L77000061:
        /* 0x0014 38 */  call     exceptional ! params = %o0 ! Result = %o0
        /* 0x0018 36 */  add      %i1,1,%i1
        /* 0x001c    */  cmp      %i1,999
        /* 0x0020    */  ble,pt   %icc,.L77000061
        /* 0x0024    */  nop

    However, when the program is run the result is incorrect:

        $ CC -O t.cpp
        $ ./a.out
        Something happened00

    If the code had worked correctly, the output would have been "Something happened77" - the exception occurs on the seventh iteration. Yet, the current code produces a message that uses the original values for the variables 'a' and 'c'. The problem is that the exception handler reads global data, and due to the no side effects directive the compiler has not updated the global data before the function call. So these pragmas should not be used on routines that have the potential to throw exceptions.

    Read the article

  • Skynet Big Data Demo Using Hexbug Spider Robot, Raspberry Pi, and Java SE Embedded (Part 3)

    - by hinkmond
    In Part 2, I described what connections you need to make for this demo using a Hexbug Spider Robot, a Raspberry Pi, and Java SE Embedded for programming. Here are some photos of me doing the soldering. Software engineers should not be afraid of a little soldering work. It's all good. See: Skynet Big Data Demo (Part 2)

    One thing to watch out for when you open the remote is that there may be some glue covering the contact points. Make sure to use an Exacto knife or small screwdriver to scrape away any glue or non-conductive material covering each place where you need to solder. And after you are done with your soldering and you have given the solder enough time to cool, make sure all your connections are marked so that you know which wire goes where. Give each wire a very light tug to make sure it is soldered correctly and is making good contact. There are lots of videos on the Web to help you if this is your first time soldering. Check out Lady Ada's (from adafruit.com) links on how to solder if you need some additional help: http://www.ladyada.net/learn/soldering/thm.html

    If everything looks good, zip everything back up and meet back here for how to connect these wires to your Raspberry Pi. That will be it for the hardware part of this project. See, that wasn't so bad. Hinkmond

    Read the article

  • Sources of NetBeans Gradle Plugin

    - by Geertjan
    Here is where you can find the sources of the latest and greatest NetBeans Gradle plugin: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.1/misc/GradleSupport
    To use it, download the sources above and open them in the IDE (which must be 7.1.1 or above); you'll then have a NetBeans module. Right-click it to run the module in a new instance of NetBeans IDE. In the Options window's Miscellaneous tab, there's a Gradle subtab for setting the Gradle location. In the New File dialog, in the Other category, you'll find a template named "Empty Gradle file". Make sure to name it "build" and to put it in the root directory of the application (by leaving the Folder field empty, you're specifying that it should be created in the root directory). You'll then be able to expand the build.gradle file: double-click a task to run it. When you open the file, it opens in the Groovy editor, if the Groovy editor is installed. When you make changes in the file, the list of tasks, shown above, is automatically recreated. It's at a really early stage of development and it would be great if developers out there were interested in adding more features to it.

    Read the article

  • Transparency call for Spec Leads and EC materials posted

    - by heathervc
    The materials and recording from the February 2012 call for JCP program Spec Leads are now available. This call features Martijn Verburg, alternate EC representative for the London Java Community, and includes information on the Adopt-a-JSR program. The materials and audio recording of the "Leveraging the Community" call can be found on the multimedia page of jcp.org. The EC meeting summaries from February and March 2012 have also been posted. Following the April 2012 EC Meeting this morning (minutes and materials will be posted soon), there are now four EC Members that have lost their voting privileges -- AT&T, SK Telecom, Samsung and Twitter. In order to regain their privileges, these EC Members must attend two EC meetings in a row, as detailed in the EC Standing Rules.

    Read the article

  • GNOME 3.4 released, with smooth & fast magnification

    - by Peter Korn
    The GNOME community released GNOME 3.4 today. This release contains several new accessibility features, along with a new set of custom high-contrast icons which improve the user experience for users needing improved contrast. This release also makes available the AEGIS-funded GNOME Shell Magnifier. This magnifier leverages the powerful graphics functionality built into all modern video cards for smooth and fast magnification in GNOME. You can watch a video of that magnifier (with the previous version of the preference dialog), which shows all of the features now available in GNOME 3.4. This includes full/partial screen magnification, a magnifier lens, full or partial mouse cross hairs with translucency, and several mouse tracking modes. Future improvements planned for GNOME 3.6 include focus & caret tracking, and a variety of color/contrast controls.

    Read the article
