Search Results

Search found 21184 results on 848 pages for 'oracle vm templates'.


  • Java Spotlight Episode 111: Bruno Souza @brjavaman and Fabiane Nardon @fabianenardon on StoryTroop @storytroop

    - by Roger Brinkley
    Interview with Bruno Souza and Fabiane Nardon on StoryTroop. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    News
    - End of Public Updates for JDK 6
    - Bean Validation 1.1 public review approved
    - Two key JSRs accepted in time for Java EE 7
    - Public JCP EC meeting audio and materials posted
    - Devoxx UK and Devoxx France CFP open
    - JPA 2.1 Schema Generation
    - WebSocket, Java EE 7, and GlassFish

    Events
    - Dec 3-5, jDays, Göteborg, Sweden
    - Dec 4-6, JavaOne Latin America, São Paulo, Brazil
    - Dec 14-15, IndicThreads, Pune, India
    - JCP Spec Lead Call December on Developing a TCK
    - JCP EC Face to Face Meeting, January 15-16, West Coast USA

    Feature Interview

    Bruno Souza is a Java Developer and Open Source Evangelist at Summa Technologies, and a Cloud Expert at ToolsCloud. Nurturing developer communities is a personal passion, and Bruno has worked actively with the Java, NetBeans, OpenSolaris, OFBiz, and many other open source communities. As founder and coordinator of SouJava (The Java Users Society), one of the world's largest Java User Groups, Bruno led the expansion of the Java movement in Brazil. As founder of the Worldwide Java User Groups Community, Bruno helped create and organize hundreds of JUGs worldwide. A Java developer since the early days, Bruno has participated in some of the largest Java projects in Brazil.

    Fabiane Nardon is a computer scientist who is passionate about creating software that will positively change the world we live in. She was the architect of the Brazilian Healthcare Information System, considered the largest Java EE application in the world and winner of the 2005 Duke's Choice Award. She has led several communities, including the JavaTools Community at java.net, where 800+ open source projects were born. She is a frequent speaker at conferences in Brazil and abroad, including JavaOne, OSCON, Jfokus, JustJava and more. She is also the author of several technical articles and a member of the program committee of several conferences, such as JavaOne, OSCON, and TDC. She was chosen as a Java Champion by Sun Microsystems in recognition of her contribution to the Java ecosystem. Currently, she works as a tools expert at ToolsCloud and in companies she co-founded, where she is helping to shape new disruptive Internet-based services.

    StoryTroop is a space where we combine multiple perspectives about a story. This creates an understanding of that story like never seen before. Pieces of a story are organized in time and space, and anyone can add a different perspective.

    What's Cool
    - Geek Bike Ride at JavaOne LAD
    - Devoxx UK (Mar 26, 27) and FR (Mar 27 - 29) CFP
    - jFokus schedule is firming up
    - Nashorn Blog
    - 1,500 @JavaSpotlight Twitter followers

    Read the article

  • Cluster Node Recovery Using Second Node in Solaris Cluster

    - by Onur Bingul
    Assumptions:
    - Node 0a is the cluster node that has crashed and can no longer boot.
    - Node 0b is the node still in the cluster and in production with services active.
    - Both nodes have their boot disks mirrored via SDS/SVM.

    We have several options for cloning the boot disk from node 0b:
    - make a copy over the network using the ufsdump command piped to ufsrestore
    - make a copy by inserting the disk locally on node 0b and creating a third mirror with SDS
    - make a copy by inserting the disk locally on node 0b and using the dd command

    In this procedure we are going to use the dd command (from my experience this is the best option). Bear in mind that in the examples provided we work on Sun Fire V240 systems, which have SCSI internal disks. In the case of Fibre Channel (FC) internal disks you must pay attention to the unique identifier, or World Wide Name (WWN), associated with each FC disk (in this case take a look at infodoc #40133 in order to recreate the device tree correctly).

    Procedure:

    On node 0b the boot disk is c1t0d0 (c1t1d0 is its mirror) and this is the VTOC:

        * Partition  Tag  Flags  First Sector  Sector Count  Last Sector  Mount Directory
              0       2    00               0       2106432      2106431
              1       3    01         2106432      74630784     76737215
              2       5    00               0     143349312    143349311
              4       7    00        76737216      50340672    127077887
              5       4    00       127077888      14683968    141761855
              6       0    00       141761856       1058304    142820159
              7       0    00       142820160        529152    143349311

    We will insert the new disk on node 0b, where it will be seen as c1t2d0.

    1) On node 0b, make a copy via dd from disk c1t0d0s2 to disk c1t2d0s2:
        # dd if=/dev/rdsk/c1t0d0s2 of=/dev/rdsk/c1t2d0s2 bs=8192k
    A copy of a 72GB disk will take approximately 45 minutes.
    Note: as an alternative, to make an identical copy of root over the network follow Document ID 47498, "Sun[TM] Cluster 3.0: How to Rebuild a node with Veritas Volume Manager".

    2) Perform an fsck on the c1t2d0 data slices:
        # fsck -o f /dev/rdsk/c1t2d0s0    (root)
        # fsck -o f /dev/rdsk/c1t2d0s4    (/var)
        # fsck -o f /dev/rdsk/c1t2d0s5    (/usr)
        # fsck -o f /dev/rdsk/c1t2d0s6    (/globaldevices)

    3) Mount the root file system in order to edit the following files and change the node name:
        # mount /dev/dsk/c1t2d0s0 /mnt
    Change the hostname from 0b to 0a:
        # cd /mnt/etc
        # vi hosts
        # vi hostname.bge0
        # vi hostname.bge2
        # vi nodename

    4) Change /mnt/etc/vfstab from the current contents:

        /dev/md/dsk/d201    -                   -                        swap   -  no   -
        /dev/md/dsk/d200    /dev/md/rdsk/d200   /                        ufs    1  no   -
        /dev/md/dsk/d205    /dev/md/rdsk/d205   /usr                     ufs    1  no   logging
        /dev/md/dsk/d204    /dev/md/rdsk/d204   /var                     ufs    1  no   logging
        #/dev/md/dsk/d206   /dev/md/rdsk/d206   /globaldevices           ufs    2  yes  logging
        swap                -                   /tmp                     tmpfs  -  yes  -
        /dev/md/dsk/d206    /dev/md/rdsk/d206   /global/.devices/node@2  ufs    2  no   global

    to this (unencapsulating the disk from SDS/SVM):

        /dev/dsk/c1t0d0s1   -                   -                        swap   -  no   -
        /dev/dsk/c1t0d0s0   /dev/rdsk/c1t0d0s0  /                        ufs    1  no   -
        /dev/dsk/c1t0d0s5   /dev/rdsk/c1t0d0s5  /usr                     ufs    1  no   logging
        /dev/dsk/c1t0d0s4   /dev/rdsk/c1t0d0s4  /var                     ufs    1  no   logging
        #/dev/md/dsk/d206   /dev/md/rdsk/d206   /globaldevices           ufs    2  yes  logging
        swap                -                   /tmp                     tmpfs  -  yes  -
        /dev/dsk/c1t0d0s6   /dev/rdsk/c1t0d0s6  /global/.devices/node@1  ufs    2  no   global

    It is important that the global device partition (slice 6) in the new vfstab points to the physical partition of the disk (in our case slice 6). Be careful with the name you use for the new disk. In this case we define it as c1t0d0 because we will insert it as target 0 in node 0a, but this could differ based on the configuration you are working on.

    5) Remove the following entry from /mnt/etc/system (part of the unencapsulation procedure):
        rootdev:/pseudo/md@0:0,200,blk

    6) Correct the link shared -> ../../global/.devices/node@2/dev/md/shared so that it points to the nodeid of node 0a (in our case nodeid 1):
        # cd /mnt/dev/md
    How it is now (node 0b has nodeid 2):
        lrwxrwxrwx   1 root     root          42 Mar 10  2005 shared -> ../../global/.devices/node@2/dev/md/shared
        # rm shared
        # ln -s ../../global/.devices/node@1/dev/md/shared shared
    How it is going to be (with nodeid 1 for node 0a):
        lrwxrwxrwx   1 root     root          42 Mar 10  2005 shared -> ../../global/.devices/node@1/dev/md/shared

    7) Change the nodeid (in our case from 2 to 1):
        # cd /mnt/etc/cluster
        # vi nodeid

    8) Change /mnt/etc/path_to_inst to reflect the correct nodeid for node 0a:
        # cd /mnt/etc
        # vi path_to_inst
    Change entries from node@2 to node@1 with the vi command ":%s/node@2/node@1/g"

    9) Write the bootblock to the disk, just in case:
        # /usr/sbin/installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/c1t2d0s0
    Now the disk is ready to be inserted in node 0a in order to boot up the node.

    10) Boot node 0a with the command "boot -sx"; this is because we need to make some changes to the ccr files in order to recreate the did environment.

    11) Modify the cluster ccr:
        # cd /etc/cluster/ccr
        # rm did_instances
        # rm did_instances.bak
        # vi directory    (remove the did_instances line)
        # /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/directory
        # grep ccr_gennum /etc/cluster/ccr/directory
        ccr_gennum -1
        # /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/infrastructure
        # grep ccr_gennum /etc/cluster/ccr/infrastructure
        ccr_gennum -1

    12) Bring node 0a down again to the ok prompt and then issue the command "boot -r". The node will now join the cluster, and with the scstat and metaset commands you can verify functionality. The next step is to encapsulate the boot disk in SDS/SVM and create the mirrors. In our case node 0b has metadevice names starting from d200; for this reason, on node 0a we need to create metadevices starting from d100. This is just an example, and you can use different names. The important thing to remember is that the boot disk metadevices have different names on each node.

    13) Remove the metadevices pointing to the boot and mirror disks (inherited from node 0b):
        # metaclear -r -f d200
        # metaclear -r -f d201
        # metaclear -r -f d204
        # metaclear -r -f d205
        # metaclear -r -f d206
    Verify with metastat that no metadevices are set for the boot and mirror disks.

    14) Encapsulate the boot disk:
        # metainit -f d110 1 1 c1t0d0s0
        # metainit d100 -m d110
        # metaroot d100

    15) Reboot node 0a.

    16) Create all the metadevices for the remaining slices on the boot disk:
        # metainit -f d111 1 1 c1t0d0s1
        # metainit d101 -m d111
        # metainit -f d114 1 1 c1t0d0s4
        # metainit d104 -m d114
        # metainit -f d115 1 1 c1t0d0s5
        # metainit d105 -m d115
        # metainit -f d116 1 1 c1t0d0s6
        # metainit d106 -m d116

    17) Edit the vfstab to specify the metadevices just created.

    old:

        /dev/dsk/c1t0d0s1   -                   -                        swap   -  no   -
        /dev/md/dsk/d100    /dev/md/rdsk/d100   /                        ufs    1  no   -
        /dev/dsk/c1t0d0s5   /dev/rdsk/c1t0d0s5  /usr                     ufs    1  no   logging
        /dev/dsk/c1t0d0s4   /dev/rdsk/c1t0d0s4  /var                     ufs    1  no   logging
        #/dev/md/dsk/d206   /dev/md/rdsk/d206   /globaldevices           ufs    2  yes  logging
        swap                -                   /tmp                     tmpfs  -  yes  -
        /dev/dsk/c1t0d0s6   /dev/rdsk/c1t0d0s6  /global/.devices/node@1  ufs    2  no   global

    new:

        /dev/md/dsk/d101    -                   -                        swap   -  no   -
        /dev/md/dsk/d100    /dev/md/rdsk/d100   /                        ufs    1  no   -
        /dev/md/dsk/d105    /dev/md/rdsk/d105   /usr                     ufs    1  no   logging
        /dev/md/dsk/d104    /dev/md/rdsk/d104   /var                     ufs    1  no   logging
        #/dev/md/dsk/d106   /dev/md/rdsk/d106   /globaldevices           ufs    2  yes  logging
        swap                -                   /tmp                     tmpfs  -  yes  -
        /dev/md/dsk/d106    /dev/md/rdsk/d106   /global/.devices/node@1  ufs    2  no   global

    18) Reboot node 0a in order to check the new SDS/SVM boot configuration.

    19) Label the mirror disk c1t1d0 with the VTOC of boot disk c1t0d0:
        # prtvtoc /dev/dsk/c1t0d0s2 > /var/tmp/VTOC_c1t0d0
        # fmthard -s /var/tmp/VTOC_c1t0d0 /dev/rdsk/c1t1d0s2

    20) Put DB replicas on slice 7 of disk c1t1d0:
        # metadb -a -c 3 /dev/dsk/c1t1d0s7

    21) Create the metadevices for mirror disk c1t1d0 and attach the new mirror sides:
        # metainit d120 1 1 c1t1d0s0
        # metattach d100 d120
        # metainit d121 1 1 c1t1d0s1
        # metattach d101 d121
        # metainit d124 1 1 c1t1d0s4
        # metattach d104 d124
        # metainit d125 1 1 c1t1d0s5
        # metattach d105 d125
        # metainit d126 1 1 c1t1d0s6
        # metattach d106 d126

    Read the article

  • libjpcap.so on Ubuntu for AT&T ARO

    - by Geertjan
    I now have AT&T ARO also running on Ubuntu, in addition to the Windows scenario I blogged about earlier. I managed to get it up and running thanks again to Doug Sillars, who pointed me here: http://developer.att.com/developer/forward.jsp?passedItemId=14100207&passedItemId=14100207 My plan is to make a screencast soon on how to port something like ARO to a plugin for NetBeans IDE, i.e., as an example of how to do something similar yourself, as a follow up to Five Simple Ways to Extend NetBeans IDE. Thanks again, Doug.

    Read the article

  • Neighboring Siblings?

    - by Ramkumar Menon
    Found an interesting observation on C. M. Sperberg-McQueen's blog: XPath 1.0 defines, amongst other axes, ones that allow access to the immediate parent and immediate child nodes, as well as access to ancestor and descendant node-sets, but it does not provide axes for immediate siblings. The only way to access these is via predicates, preceding-sibling::*[1] or following-sibling::*[1]; there are no explicit next-sibling and previous-sibling axes.
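
    To make the workaround concrete, here is a small sketch using the JDK's built-in javax.xml.xpath API (the element names are invented for illustration):

        import java.io.ByteArrayInputStream;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.xpath.XPath;
        import javax.xml.xpath.XPathConstants;
        import javax.xml.xpath.XPathFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Node;

        public class SiblingAxes {

            public static void main(String[] args) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                        .parse(new ByteArrayInputStream("<root><a/><b/><c/></root>".getBytes("UTF-8")));
                XPath xpath = XPathFactory.newInstance().newXPath();
                Node b = (Node) xpath.evaluate("/root/b", doc, XPathConstants.NODE);

                // There is no next-sibling axis, so the positional predicate stands in for it.
                Node next = (Node) xpath.evaluate("following-sibling::*[1]", b, XPathConstants.NODE);
                Node prev = (Node) xpath.evaluate("preceding-sibling::*[1]", b, XPathConstants.NODE);
                System.out.println(next.getNodeName());  // c
                System.out.println(prev.getNodeName());  // a
            }
        }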

    Read the article

  • Transparent Technology from Amazon

    - by David Dorf
    Amazon has been making some interesting moves again, this time in the augmented humanity area. Augmented humanity is about helping humans overcome their shortcomings using technology. Putting a powerful smartphone in your pocket helps you in many ways, like navigating streets, communicating with far-off friends, and accessing information. But the interface for smartphones is somewhat limiting and unnatural, so companies have been looking for ways to make the technology more transparent and therefore easier to use.

    When Apple helped us drop the stylus, we took a giant leap forward in simplicity. Using touchscreens with intuitive gestures was part of the iPhone's original appeal. People don't want to know that technology is there -- they just want the benefits. So what's the next leap beyond the touchscreen to make smartphones even easier to use?

    Two natural ways we interact with the world around us are sight and voice. Google and Apple have been using both in their mobile platforms for limited use cases. Nobody actually wants to type a text message, so why not just speak it? And if you want more information about a book, why not just snap a picture of the cover? That's much more accurate than trying to key in the title and/or author.

    So what's Amazon been doing? First, Amazon released a new iPhone app called Flow that allows iPhone users to see information about products in context. Yes, it's an augmented reality app that uses the phone's camera to view products and overlays data about the products on the screen. For the most part it requires the barcode to be visible to correctly identify the product, but I believe it can also recognize certain logos as well. Download the app and try it out, but don't expect perfection. It's good enough to demonstrate the concept, but it's not yet accurate enough. (MobileBeat did a pretty good review.) Extrapolate to the future and we might just have a heads-up display in our eyeglasses.

    The second interesting area is voice response, for which Siri is getting lots of attention. Amazon may have purchased a voice recognition company called Yap, although the deal is not confirmed. But it would make perfect sense, especially with the Kindle Fire in Amazon's lineup.

    I believe over the next 3-5 years the way in which we interact with smartphones will mature, and they will become more transparent yet more important to our daily lives. This will, of course, impact the way we shop, making information more readily accessible than it already is. Amazon seems to be positioning itself to be at the forefront of this trend, so we should be watching them carefully.

    Read the article

  • iPack - The iOS Application Packager

    - by user13277780
    iOS applications are distributed in .ipa archive files. These files are regular zip files which contain application resources and executables. To protect them from unauthorized modifications and to provide identification of their sources, the content of the archives is signed. The signature is included in the application executable of an .ipa archive and protects the executable file itself and the associated resource files. Apple provides native Mac OS tools for signing iOS executables (which are actually generic Mach-O code signing tools), but these tools are not generally available on other platforms. To provide a multi-platform development environment for JavaFX based iOS applications, we ported iOS signing and packaging to Java and created a dedicated ipack tool for it. The iPack tool can be used as the last step of creating an .ipa package on various operating systems. The prototype has been tested by creating a final distributable for a JavaFX application that runs on iPad, all done on Windows 7.

    Source Code

    The source code of the iPack tool is in the OpenJFX project repository. You can find it in:
        <openjfx root>/rt/tools/ios/Maven/ipack

    To build the iPack tool use:
        rt/tools/ios/Maven/ipack$ mvn package

    After building, you can run the tool:
        java -jar <path to ipack.jar> <arguments>

    Signing keystore

    The tool uses a Java key store to read the signing certificate and the associated private key. To prepare such a keystore users can use keytool from the JDK. One possible scenario is to import an existing private key and certificate from a key store used on Mac OS.

    To list the content of an existing key store and identify the source alias:
        keytool -list -keystore <src keystore>.p12 -storetype pkcs12 -storepass <src keystore password>

    To create a Java key store and import the private key with its certificate into the key store:
        keytool -importkeystore \
            -destkeystore <dst keystore> -deststorepass <dst keystore password> \
            -srckeystore <src keystore>.p12 -srcstorepass <src keystore password> -srcstoretype pkcs12 \
            -srcalias <src alias> -destalias <dst alias> -destkeypass <dst key password>

    Another scenario would be to generate a private / public key pair directly in a Java key store and create a certificate request from it. After sending the request to Apple one can then import the certificate response back into the Java key store and complete the signing certificate entry.

    In both scenarios the resulting alias in the Java key store will contain only a single (leaf) certificate. This can be verified with the following command:
        keytool -list -v -keystore <ipack keystore> -storepass <keystore password>

    When looking at the Certificate chain length entry, the number next to it is 1. When an executable file is signed on Mac OS, the resulting signature (in CMS format) includes the whole certificate chain up to the Apple Root CA. The ipack tool includes only the chain which is stored under the alias specified on the command line. So to have the whole chain in the signature we need to replace the single certificate entry under the alias with the corresponding full certificate chain. To do that we first need to create the chain in a separate file. It is easy to create such a chain when working with certificates in Base-64 encoded PEM format: a certificate chain can be created by concatenating the PEM certificates which should form the chain into a single file.

    For iOS signing we need the following certificates in our chain:
    - Apple Root CA
    - Apple Worldwide Developer Relations CA
    - Our signing leaf certificate

    To convert a certificate from the binary DER format (.der, .cer) to PEM format:
        keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert -file <certificate>.cer
        keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert -rfc -file <certificate>.pem

    To export the signing certificate into PEM format:
        keytool -exportcert -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -rfc -file SigningCert.pem

    After constructing a chain from AppleIncRootCertificate.pem, AppleWWDRCA.pem and SigningCert.pem, it can be imported back into the keystore with:
        keytool -importcert -noprompt -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -keypass <key password> -file SigningCertChain.pem

    To summarize, the following example shows the full certificate chain replacement process:
        keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert1 -file AppleIncRootCertificate.cer
        keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert1 -rfc -file AppleIncRootCertificate.pem
        keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert2 -file AppleWWDRCA.cer
        keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert2 -rfc -file AppleWWDRCA.pem
        keytool -exportcert -keystore ipack.ks -storepass keystorepwd -alias mycert -rfc -file SigningCert.pem
        cat SigningCert.pem AppleWWDRCA.pem AppleIncRootCertificate.pem > SigningCertChain.pem
        keytool -importcert -noprompt -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -file SigningCertChain.pem
        keytool -list -v -keystore ipack.ks -storepass keystorepwd

    Usage

    When the ipack tool is started with no arguments it prints the following usage information:

        Usage: ipack <archive> <signing opts> <application opts> [ <application opts> ... ]

        Signing options:
            -keystore <keystore>    keystore to use for signing
            -storepass <password>   keystore password
            -alias <alias>          alias for the signing certificate chain and the associated private key
            -keypass <password>     password for the private key

        Application options:
            -basedir <directory>    base directory from which to derive relative paths
            -appdir <directory>     directory with the application executable and resources
            -appname <file>         name of the application executable
            -appid <id>             application identifier

        Example:
            ipack MyApplication.ipa -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -basedir mysources/MyApplication/dist -appdir Payload/MyApplication.app -appname MyApplication -appid com.myorg.MyApplication
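
    If you are scripting this setup, the same chain-length check can also be done from Java. A minimal sketch, reusing the example keystore name, password and alias from above:

        import java.io.FileInputStream;
        import java.security.KeyStore;
        import java.security.cert.Certificate;

        public class ChainLengthCheck {

            public static void main(String[] args) throws Exception {
                KeyStore ks = KeyStore.getInstance("JKS");
                try (FileInputStream in = new FileInputStream("ipack.ks")) {
                    ks.load(in, "keystorepwd".toCharArray());
                }
                Certificate[] chain = ks.getCertificateChain("mycert");
                // Expect 3 after the full chain import: leaf, WWDR CA, Apple Root CA.
                System.out.println("Certificate chain length: "
                        + (chain == null ? 0 : chain.length));
            }
        }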

    Read the article

  • QCon SF 2011

    - by user12607987
    To San Francisco for QCon SF 2011, where I spoke on Java SE: Where We've Been, Where We're Going. QCon is much further "up the stack" than JavaOne, so it has far fewer talks about the "foundation", Java SE. I thought it was important to review the features delivered in Java SE 7 before discussing what's planned for Java SE 8. This worked out well, as most of the audience were using Java SE 6. The language changes in SE 7 look small, but examining merely two of them - precise rethrow and suppressed exceptions - reveals a new exception handling idiom applicable to many thousands of Java classes. And thumbs up to the QCon organizers for the instant feedback mechanism!
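
    For those who missed these two Java SE 7 features, a minimal sketch of the idiom (my own example, not from the talk): the compiler now tracks that only IOException can escape the try block, so a broad catch parameter can be rethrown under a narrow throws clause, and failures from close() surface as suppressed exceptions.

        import java.io.FileInputStream;
        import java.io.IOException;

        public class Se7Exceptions {

            // Precise rethrow: "throw e" compiles against "throws IOException"
            // even though e is declared as Exception, because the compiler can
            // prove that only IOException escapes the try block.
            static void copy(String path) throws IOException {
                try {
                    readFirstByte(path);
                } catch (Exception e) {
                    for (Throwable s : e.getSuppressed()) {
                        System.err.println("suppressed: " + s);  // e.g. a failed close()
                    }
                    throw e;  // rejected by the SE 6 compiler, legal in SE 7
                }
            }

            // try-with-resources: if the body and close() both fail, the close()
            // failure is recorded on the primary exception as a suppressed one.
            static void readFirstByte(String path) throws IOException {
                try (FileInputStream in = new FileInputStream(path)) {
                    System.out.println(in.read());
                }
            }
        }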

    Read the article

  • WebSocket Applications using Java: JSR 356 Early Draft Now Available (TOTD #183)

    - by arungupta
    WebSocket provides a full-duplex and bi-directional communication protocol over a single TCP connection. JSR 356 is defining a standard API for creating WebSocket applications in the Java EE 7 Platform. This Tip Of The Day (TOTD) will provide an introduction to WebSocket and how the JSR is evolving to support the programming model.

    First, a little primer on WebSocket!

    WebSocket is a combination of the IETF RFC 6455 Protocol and the W3C JavaScript API (still a Candidate Recommendation). The protocol defines an opening handshake and data transfer. The API enables Web pages to use the WebSocket protocol for two-way communication with the remote host. Unlike HTTP, there is no need to create a new TCP connection and send a full set of headers for every message exchange between client and server. The WebSocket protocol defines basic message framing, layered over TCP. Once the initial handshake happens using HTTP Upgrade, the client and server can send messages to each other, independent of the other. There are no pre-defined message exchange patterns of request/response or one-way between client and server; these need to be explicitly defined over the basic protocol.

    The communication between client and server is pretty symmetric, but there are two differences:

    - A client initiates a connection to a server that is listening for a WebSocket request.
    - A client connects to one server using a URI. A server may listen to requests from multiple clients on the same URI.

    Other than these two differences, the client and server behave symmetrically after the opening handshake. In that sense, they are considered "peers". After a successful handshake, clients and servers transfer data back and forth in conceptual units referred to as "messages". On the wire, a message is composed of one or more frames. Application frames carry payload intended for the application and can be text or binary data. Control frames carry data intended for protocol-level signaling.

    Now lets talk about the JSR!

    The Java API for WebSocket is being worked on as JSR 356 in the Java Community Process. This will define a standard API for building WebSocket applications. This JSR will provide support for:

    - Creating WebSocket Java components to handle bi-directional WebSocket conversations
    - Initiating and intercepting WebSocket events
    - Creation and consumption of WebSocket text and binary messages
    - The ability to define WebSocket protocols and content models for an application
    - Configuration and management of WebSocket sessions, like timeouts, retries, cookies, connection pooling
    - Specification of how WebSocket applications will work within the Java EE security model

    Tyrus is the Reference Implementation for JSR 356 and is already integrated in GlassFish 4.0 Promoted Builds.

    And finally some code!

    The API allows you to create WebSocket endpoints using annotations and interfaces. This TOTD will show a simple sample using annotations. A subsequent blog will show more advanced samples. A POJO can be converted to a WebSocket endpoint by specifying @WebSocketEndpoint and @WebSocketMessage.

        @WebSocketEndpoint(path="/hello")
        public class HelloBean {

            @WebSocketMessage
            public String sayHello(String name) {
                return "Hello " + name + "!";
            }
        }

    @WebSocketEndpoint marks this class as a WebSocket endpoint listening at the URI defined by the path attribute. @WebSocketMessage identifies the method that will receive the incoming WebSocket message. The first method parameter is injected with the payload of the incoming message. In this case it is assumed that the payload is text-based. It can also be of the type byte[] in case the payload is binary. A custom object may be specified if the decoders attribute is specified in @WebSocketEndpoint. This attribute provides a list of classes that define how a custom object can be decoded. The method can also take an optional Session parameter. This is injected by the runtime and captures a conversation between two endpoints. The return type of the method can be String, byte[] or a custom object. The encoders attribute on @WebSocketEndpoint needs to define how a custom object can be encoded.

    The client side is an index.jsp with embedded JavaScript. The JSP body looks like:

        <div style="text-align: center;">
            <form action="">
                <input onclick="say_hello()" value="Say Hello" type="button">
                <input id="nameField" name="name" value="WebSocket" type="text"><br>
            </form>
        </div>
        <div id="output"></div>

    The code is relatively straightforward. It has an HTML form with a button that invokes the say_hello() method and a text field named nameField. A div placeholder is available for displaying the output.

    Now, lets take a look at some JavaScript code:

        <script language="javascript" type="text/javascript">
            var wsUri = "ws://localhost:8080/HelloWebSocket/hello";
            var websocket = new WebSocket(wsUri);
            websocket.onopen = function(evt) { onOpen(evt) };
            websocket.onmessage = function(evt) { onMessage(evt) };
            websocket.onerror = function(evt) { onError(evt) };

            function init() {
                output = document.getElementById("output");
            }

            function say_hello() {
                websocket.send(nameField.value);
                writeToScreen("SENT: " + nameField.value);
            }

    This application is deployed as "HelloWebSocket.war" (download here) on GlassFish 4.0 promoted build 57, so the WebSocket endpoint is listening at "ws://localhost:8080/HelloWebSocket/hello". A new WebSocket connection is initiated by specifying the URI to connect to. The JavaScript API defines callback methods that are invoked when the connection is opened (onOpen), closed (onClose), an error is received (onError), or a message from the endpoint is received (onMessage). The client API has several send methods that transmit data over the connection. This particular script sends text data in the say_hello method using nameField's value from the HTML shown earlier. Each click on the button sends the textbox content to the endpoint over a WebSocket connection and receives a response based upon the implementation in the sayHello method shown above.

    How to test this out?

    - Download the entire source project here or just the WAR file.
    - Download GlassFish 4.0 build 57 or later and unzip.
    - Start GlassFish as "asadmin start-domain".
    - Deploy the WAR file as "asadmin deploy HelloWebSocket.war".
    - Access the application at http://localhost:8080/HelloWebSocket/index.jsp.

    After clicking on the "Say Hello" button, the output shows the messages sent to and received from the endpoint.

    Here are some references for you:

    - WebSocket - Protocol and JavaScript API
    - JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds)

    Subsequent blogs will discuss the following topics (not necessarily in that order):

    - Binary data as payload
    - Custom payloads using encoder/decoder
    - Error handling
    - Interface-driven WebSocket endpoint
    - Java client API
    - Client and Server configuration
    - Security
    - Subprotocols
    - Extensions
    - Other topics from the API
    - Capturing WebSocket on-the-wire messages
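
    A final caveat before trying the sample on a newer stack: the annotation names used here come from the early draft, and the final JSR 356 API renamed them. The same endpoint under the final names looks like this (a minimal sketch):

        import javax.websocket.OnMessage;
        import javax.websocket.server.ServerEndpoint;

        // In the final API, @WebSocketEndpoint became @ServerEndpoint and
        // @WebSocketMessage became @OnMessage; the path moves to the value attribute.
        @ServerEndpoint("/hello")
        public class HelloEndpoint {

            @OnMessage
            public String sayHello(String name) {
                return "Hello " + name + "!";
            }
        }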

    Read the article

  • Protecting PDF files and XDO.CFG

    - by Greg Kelly
    Security related properties can be overridden at runtime through PeopleCode, as all other XMLP properties can, using the SetRuntimeProperties() method on the ReportDefn class. This is documented in PeopleBooks. Basically this method needs to be called right before calling the ProcessReport() method:

        &asPropName = CreateArrayRept("", 0);
        &asPropValue = CreateArrayRept("", 0);
        &asPropName.Push("pdf-open-password");
        &asPropValue.Push("test");
        &oRptDefn.SetRuntimeProperties(&asPropName, &asPropValue);
        &oRptDefn.ProcessReport(&sTemplateId, %Language_User, &dAsOfDate, &sOutputFormat);

    Of course users should not hardcode the password value in the code; instead, if the password is stored encrypted in the database or somewhere else, they can use the Decrypt() API.

    Read the article

  • User Experience Guidance for Developers: Anti-Patterns

    - by ultan o'broin
    Picked this up from a recent Dublin Google Technology User Group meeting: Android App Mistakes: Avoiding the Anti-Patterns by Mark Murphy, CommonsWare. It's an interesting approach of "anti-patterns" aimed at mobile developers (in this case Android), looking at the best way to use code and what's in the SDK while combining it with UX guidance (the premise being that the developer does the lot). Interestingly, the idea came through that developers need to stop trying to make one O/S behave like another, on UX grounds. It's also pretty clear that a web-based paradigm is being promoted for Android (translators tell me that translating an Android app reminded them of translating web pages too). I haven't seen the "anti" approach before; developer cookbooks and design patterns, sure. Check out the slideshare presentation.

    Read the article

  • SOA Suite 11g Releases

    - by antony.reynolds
    A few years ago Mars renamed one of the most popular chocolate bars in England from Marathon to Snickers. Even today there are still some people confused by the name change who refer to them as Marathons. Well, last week we released SOA Suite 11.1.1.3 and BPM Suite 11.1.1.3 as well as OSB 11.1.1.3. It seems that some people are a little confused by the naming and how to install these new versions, probably the same Brits who call a Snickers a Marathon :-). Calling all the revisions 11g Release 1 has caused confusion. To help these people I have created a little diagram to show how you can get the latest version onto your machine. The dotted lines indicate dependencies.

    Note that SOA Suite 11.1.1.3 and BPM 11.1.1.3 are provided as a patch that is applied to SOA Suite 11.1.1.2. For a new install there is no need to run the 11.1.1.2 RCU; you can run the 11.1.1.3 RCU directly.

    All SOA & BPM Suite 11g installations are built on a WebLogic Server base. The WebLogic 11g Release 1 version is 10.3 with an additional number indicating the revision. Similarly the 11g Release 1 SOA Suite, Service Bus and BPM Suite have a version 11.1.1 with an additional number indicating the revision. The final revision number should match the final revision in the WebLogic Server version. The products are also sometimes identified by a Patch Set number, indicating whether this is the 11gR1 product with the first or second patch set. The table below shows the different revisions with their aliases.

        Product           Version    Base WebLogic   Alias
        SOA Suite 11gR1   11.1.1.1   10.3.1          Release 1 or R1
        SOA Suite 11gR1   11.1.1.2   10.3.2          Patch Set 1 or PS1
        SOA Suite 11gR1   11.1.1.3   10.3.3          Patch Set 2 or PS2
        BPM Suite 11gR1   11.1.1.3   10.3.3          Release 1 or R1
        OSB 11gR1         11.1.1.3   10.3.3          Release 1 or R1

    Hope this helps some people. If you find it useful you could always send me a Marathon bar, sorry, a Snickers!

    Read the article

  • Games Localization: Cultural Points

    - by ultan o'broin
    A great article about localization considerations, this time in the games space. Well worth checking out. It's rare to see such all-encompassing articles about localization considerations aimed at designers. That's a shame. The industry assumes all these things are known. The evidence from practice is that they're not, and they also need constant reinforcement. We're not in the games space in enterprise applications yet. However, there may be a role for these considerations in the training space, but also in CRM, for building relationships and contacts. Beyond the obvious considerations, check out the cultural aspects of games localization too. For example, Zynga's offerings, which you might have played on Facebook:

    Zynga, which can lay claim to the two most popular social games on Facebook - FarmVille and CityVille - has recently localized both games for international audiences, and while CityVille has seen only localization for European languages, FarmVille has been localized for China, which involved rebuilding the game from the ground up. This localization process involved taking into account cultural considerations including changing the color palette to be brighter and increasing the size of the farm plots, to appeal to Chinese aesthetics and cultural experience.

    All the more reason to conduct research in your target markets, worldwide.

    Read the article

  • At Collaborate 10 Next Week

    - by shay.shmeltzer
    I'm going to be at the Collaborate 10 conference next week, doing a couple of sessions and hanging out in the JDeveloper booth at the demo grounds. My sessions are on Monday morning, back to back:

        Developing Cutting Edge Web UI for Enterprise Applications - The Easy Way
        Monday, April 19, 10:45 am - 11:45 am, 401

        The Fusion Development Experience
        Monday, April 19, 12:00 pm - 12:30 pm, 404

    The first session will also be available for those watching the conference over the Web. If you want to see how Fusion applications are being built, how you can use the same approach to do custom development for your applications, or how to create rich UIs for your applications, then these would be good sessions to see. I'll also be doing shifts at the demo grounds in a JDeveloper/ADF booth - so if you have any questions, complaints, or suggestions - or if you just want to understand what this thing is good for - come on over and we'll talk.

    Read the article

  • JavaScript Support in ADF

    - by Vijay Mohan
    1. If you want JavaScript code in a jspx page:
    The <af:resource> tag, available among the ADF Faces UI components, has the best support for JavaScript. If you want to invoke a JS function on an ADF UI component, include an <af:clientListener> tag specifying the JS function name and the event type. If you want it to happen on a non-ADF, HTML-based component, you can specify an anchor tag with the JavaScript event type and the JS function name (with parameters, if any); as soon as the specified action happens on that component, the JS function is invoked.

    2. If you want JavaScript in an ADF page fragment (jsff):
    A jsff supports JavaScript wrapped in a <trh:script></trh:script> tag. The rest works the same way as in a jspx page.
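
    A third option, not covered above, is to queue JavaScript from Java rather than from markup. ADF Faces builds on Apache Trinidad, whose ExtendedRenderKitService supports this; a minimal sketch from a managed bean (the bean name and alert text are invented for illustration):

        import javax.faces.context.FacesContext;
        import org.apache.myfaces.trinidad.render.ExtendedRenderKitService;
        import org.apache.myfaces.trinidad.util.Service;

        public class AlertBean {

            // Queues a script on the current response; it runs on the client
            // once rendering completes.
            public void showAlert() {
                FacesContext fc = FacesContext.getCurrentInstance();
                ExtendedRenderKitService erks =
                        Service.getRenderKitService(fc, ExtendedRenderKitService.class);
                erks.addScript(fc, "alert('Hello from the server side');");
            }
        }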

    Read the article

  • Morgan Stanley chooses Solaris 11 to run cloud file services

    - by Frederic Pariente
    At the EAKC2012 Conference last week in Edinburgh, Robert Milkowski, Unix engineer at Morgan Stanley, presented on deploying OpenAFS on Solaris 11. It makes a great proof point of how ZFS and DTrace give Solaris a definite advantage over Linux for running AFS distributed file system services, the "cloud file system" as he calls it in his blog. He used ZFS to achieve a 2-3x compression ratio on data and greatly lower the TCA and TCO of the storage subsystem, and DTrace to root-cause scalability bottlenecks and improve performance. As future ideas, he is looking at leveraging more Solaris features like Zones, ZFS Dedup, SSD for ZFS, etc.

    Read the article

  • Copy Formatting in Word

    - by Ahamad Patan
    Many times you may need to copy formatting in Word. The "Copy Format" feature lets you quickly and easily copy all the formatting characteristics from one group of selected text to another. This is helpful when you have several headings for which you want consistent formatting. Here are the steps for copying formatting:

    1. Select, or highlight, the text containing the format you wish to copy.
    2. Office 2003 - Click on the Format Painter button in the Standard Toolbar (it looks like a paintbrush).
       Office 2007 - The Format Painter button is located on the Home tab (it looks like a paintbrush).
       In Office 2003, an I-beam with a small cross to the left will appear as you move your mouse; in Office 2007, an I-beam with a small paintbrush.
    3. Select the text you wish to copy the formatting to.
    4. The formatting of the selected text will automatically change.

    For multiple formatting changes, double-click the Format Painter button in step 2. Remember, you'll have to click it again to deselect it, or press Esc.

    Read the article

  • JMSContext, @JMSDestinationDefinition, DefaultJMSConnectionFactory with simplified JMS API: TOTD #213

    - by arungupta
    "What's New in JMS 2.0" Part 1 and Part 2 provide comprehensive introduction to new messaging features introduced in JMS 2.0. The biggest improvement in JMS 2.0 is introduction of the "new simplified API". This was explained in the Java EE 7 Launch Technical Keynote. You can watch a complete replay here. Sending and Receiving a JMS message using JMS 1.1 requires lot of boilerplate code, primarily because the API was designed 10+ years ago. Here is a code that shows how to send a message using JMS 1.1 API: @Statelesspublic class ClassicMessageSender { @Resource(lookup = "java:comp/DefaultJMSConnectionFactory") ConnectionFactory connectionFactory; @Resource(mappedName = "java:global/jms/myQueue") Queue demoQueue; public void sendMessage(String payload) { Connection connection = null; try { connection = connectionFactory.createConnection(); connection.start(); Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); MessageProducer messageProducer = session.createProducer(demoQueue); TextMessage textMessage = session.createTextMessage(payload); messageProducer.send(textMessage); } catch (JMSException ex) { ex.printStackTrace(); } finally { if (connection != null) { try { connection.close(); } catch (JMSException ex) { ex.printStackTrace(); } } } }} There are several issues with this code: A JMS ConnectionFactory needs to be created in a application server-specific way before this application can run. Application-specific destination needs to be created in an application server-specific way before this application can run. Several intermediate objects need to be created to honor the JMS 1.1 API, e.g. ConnectionFactory -> Connection -> Session -> MessageProducer -> TextMessage. Everything is a checked exception and so try/catch block must be specified. Connection need to be explicitly started and closed, and that bloats even the finally block. The new JMS 2.0 simplified API code looks like: @Statelesspublic class SimplifiedMessageSender { @Inject JMSContext context; @Resource(mappedName="java:global/jms/myQueue") Queue myQueue; public void sendMessage(String message) { context.createProducer().send(myQueue, message); }} The code is significantly improved from the previous version in the following ways: The JMSContext interface combines in a single object the functionality of both the Connection and the Session in the earlier JMS APIs.  You can obtain a JMSContext object by simply injecting it with the @Inject annotation.  No need to explicitly specify a ConnectionFactory. A default ConnectionFactory under the JNDI name of java:comp/DefaultJMSConnectionFactory is used if no explicit ConnectionFactory is specified. The destination can be easily created using newly introduced @JMSDestinationDefinition as: @JMSDestinationDefinition(name = "java:global/jms/myQueue",        interfaceName = "javax.jms.Queue") It can be specified on any Java EE component and the destination is created during deployment. JMSContext, Session, Connection, JMSProducer and JMSConsumer objects are now AutoCloseable. This means that these resources are automatically closed when they go out of scope. This also obviates the need to explicitly start the connection JMSException is now a runtime exception. Method chaining on JMSProducers allows to use builder patterns. No need to create separate Message object, you can specify the message body as an argument to the send() method instead. Want to try this code ? Download source code! Download Java EE 7 SDK and install. 
Start GlassFish: bin/asadmin start-domain Build the WAR (in the unzipped source code directory): mvn package Deploy the WAR: bin/asadmin deploy <source-code>/jms/target/jms-1.0-SNAPSHOT.war And access the application at http://localhost:8080/jms-1.0-SNAPSHOT/index.jsp to send and receive a message using classic and simplified API. A replay of JMS 2.0 session from Java EE 7 Launch Webinar provides complete details on what's new in this specification: Enjoy!
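
    The post shows the sending side; for completeness, here is a minimal sketch of a matching receiver using the same simplified API (the synchronous receiveBody() call and the class name are my own illustration, not from the original sample; imports from javax.jms, javax.ejb, javax.inject and javax.annotation as in the sender):

        @Stateless
        public class SimplifiedMessageReceiver {

            @Inject
            JMSContext context;

            @Resource(mappedName = "java:global/jms/myQueue")
            Queue myQueue;

            // receiveBody() blocks until a message arrives and returns its
            // payload directly, with no intermediate Message object.
            public String receiveMessage() {
                return context.createConsumer(myQueue).receiveBody(String.class);
            }
        }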

    Read the article

  • My integer overfloweth

    - by darcy
    While certain classes like java.lang.Integer and java.lang.Math have been in the platform since the beginning, that doesn't mean there aren't more enhancements to be made in such places! For example, earlier in JDK 8, library support was added for unsigned integer arithmetic. More recently, my colleague Roger Riggs pushed a changeset to support integer overflow detection, that is, to provide methods which throw an ArithmeticException on overflow instead of returning a wrapped result. Besides being helpful for various programming tasks in Java, methods like those for integer overflow can be used to implement runtimes supporting other languages, as has been requested at a past JVM language summit. This year's language summit is coming up in July and I hope to get some additional suggestions there for helpful library additions as part of the general discussions of the JVM and Java libraries as a platform.
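
    A quick illustration of the difference, using the exact-arithmetic methods that landed in java.lang.Math in JDK 8:

        public class OverflowDemo {

            public static void main(String[] args) {
                // Plain int arithmetic silently wraps around on overflow...
                System.out.println(Integer.MAX_VALUE + 1);   // prints -2147483648

                // ...while the exact methods fail fast with an ArithmeticException.
                try {
                    Math.addExact(Integer.MAX_VALUE, 1);
                } catch (ArithmeticException e) {
                    System.out.println(e.getMessage());      // reports the overflow
                }
            }
        }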

    Read the article

  • Enabling SSL Requests on JDev's Integrated WebLogic

    - by Christian David Straub
    Often you will want to enable SSL access for such things as secure login or secure signup. By default, the integrated WLS that ships with JDev does not listen for SSL requests. However, this is easily fixed.

    Just navigate to http://127.0.0.1:7101/console. This will deploy the console app where you can configure WLS. By default the login credentials are:

        username: weblogic
        password: weblogic1

    Then go to Environment -> Servers -> DefaultServer. Check the "SSL Listen Port Enabled" box and your server will now listen for SSL requests (just make sure to use the listen port that is specified).

    For added security, you can always check while processing your request that it is going through an SSL connection by first checking HttpServletRequest.isSecure().
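
    One way to act on that check is a servlet filter that redirects plain HTTP traffic to the SSL port. A minimal sketch; the /secure/* mapping in web.xml and the 7102 SSL port are assumptions, so substitute whatever listen port you enabled in the console:

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Map this filter to a pattern such as /secure/* in web.xml.
        public class RequireSslFilter implements Filter {

            public void init(FilterConfig config) throws ServletException { }

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                HttpServletResponse response = (HttpServletResponse) res;
                if (request.isSecure()) {
                    // Already on an SSL connection, let the request through.
                    chain.doFilter(req, res);
                } else {
                    // Re-issue the request over HTTPS on the configured SSL port.
                    response.sendRedirect("https://" + request.getServerName()
                            + ":7102" + request.getRequestURI());
                }
            }

            public void destroy() { }
        }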

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6, we continued our development on InnoDB Memcached and completed a few widely desirable features that make InnoDB Memcached competitive in more scenarios. Notably, they are:

    1) Support for multiple table mapping
    2) A background thread to auto-commit long running transactions
    3) Enhancements in binlog performance

    Let's go over each of these features one by one. In the last section, we will go over a couple of internally performed performance tests.

    Support multiple table mapping

    In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users might want to use the InnoDB Memcached features on different tables. Thus, being able to support access to different tables at run time, and having different mappings for different connections, becomes a very desirable feature. In this GA release, we allow users to do both. We will discuss the key concepts and key steps in using this feature.

    1) "Mapping name" in the "get" and "set" commands

    In order to allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached will be able to look up such tables when they are referred to. Each such registered table will have a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, users include the "registration name" in their get or set commands, in the form of "get @@new_mapping_name.key"; the prefix "@@" is required for signaling a mapped table change. The key and the "mapping name" are separated by a configurable delimiter; by default, it is ".". So the syntax is:

        get [@@mapping_name.]key_name
        set [@@mapping_name.]key_name

    or

        get @@mapping_name
        set @@mapping_name

    Here is an example. Let's set up three tables in the "containers" table. The first is a map to the InnoDB table "test/demo_test" with mapping name "setup_1":

        INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");

    Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and for table "my_database/my_demo" with name "setup_3":

        INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x");
        INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx");

    To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user will do:

        get @@setup_3.key_a

    (this will also output the value corresponding to key "key_a") or simply:

        get @@setup_3

    Once this is done, the connection will switch to the "my_database/my_demo" table until another table mapping switch is requested. So it can continue to issue regular commands like:

        get key_b
        set key_c 0 0 7

    These DMLs will all be directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).

    2) Delimiter

    For the delimiter "." that separates the "mapping name" and the key value, we also added a configuration option in the "config_options" system table with the name "table_map_delimiter":

        INSERT INTO config_options VALUES ("table_map_delimiter", ".");

    So if users want to change to a different delimiter, they can change it in the config_options table.

    3) Default mapping

    Once we have multiple table mappings, there should always be a "default" mapping setting. For this, we decided that if there exists a mapping name of "default", then it will be chosen as the default mapping; otherwise, the first row of the containers table will be chosen as the default setting. Please note, user tables can be repeated in the "containers" table (for example, a user may want to access different columns of the table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index.

    4) bind command

    In addition, we also extended the protocol and added a bind command. Its usage is fairly straightforward. To switch to the "setup_3" mapping above, you simply issue:

        bind setup_3

    This will switch this connection's InnoDB table to "my_database/my_demo".

    In summary, with this feature you can now directly access different tables from different sessions, and even within a single connection you can query different tables.

    Background thread to auto-commit long running transactions

    This feature is related to the "batch" concept we discussed in earlier blogs. The "batch" feature allows us to batch read and write operations, and commit them only after a certain number of calls. The "batch" size is controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance. However, it also comes with some disadvantages: for example, you will not be able to view "uncommitted" operations from the SQL end unless you set the transaction isolation level to READ UNCOMMITTED, and in addition, it will hold certain row locks for an extended period of time, which might reduce concurrency.

    To deal with this, we introduced a background thread that "auto-commits" transactions if they have been idle for a certain amount of time (the default is 5 seconds). The background thread wakes up every second, loops through every "connection" opened by Memcached, and checks for idle transactions. If such a transaction has been idle longer than a certain limit and is not being used, the thread will commit it. This limit is configurable via "innodb_api_bk_commit_interval"; its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds. With the help of this background thread, you will not need to worry about long running uncommitted transactions when setting daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to a large number. This also reduces the number of locks that could be held due to long running transactions, and thus further increases concurrency.

    Enhancement in binlog performance

    As you may know, binlog operations are not done by the InnoDB storage engine; rather, they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we had to artificially create some MySQL constructs in order to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation. This required us to set the "batch" size to 1 whenever the binlog was on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" were configured to. This put a big restriction on our ability to scale, and there was also quite a bit of overhead in creating and destroying such constructs that bogged the performance down.

    With this release, we made the necessary changes to keep the MySQL constructs as long as they are valid for a particular connection, so there are no longer repeated and redundant open and close (table) calls. Now, even with the binlog option enabled (with innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale write/read performance. Although there is still overhead that keeps InnoDB Memcached from performing as fast as when the binlog is turned off, it is much better compared to the previous release, and we are continuing to optimize the solution in this area to improve performance as much as possible.

    Performance Study

    Amerandra of our System QA team has conducted some performance studies on queries through our InnoDB Memcached connection and through the plain SQL end, and they show some interesting results. The tests were conducted on a Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64) machine with 16 GB memory, an Intel Xeon 2.0 GHz x86_64 CPU (2 CPUs, 4 cores each), and 2 RAID disks (1027 GB, 733.9 GB). The results are described in the following tables.

    Table 1: Performance comparison on set operations

        Connections   Memcached plugin Set (QPS)***   5.6.7-RC* Set (QPS)**   X faster
        8             30,000                          5,600                   5.36
        32            59,000                          13,000                  4.54
        128           68,000                          8,000                   8.50
        512           63,000                          6,800                   9.23

    * mysql-5.6.7-rc-linux2.6-x86_64
    ** The "set" operation, when implemented in InnoDB Memcached, involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted, and if it does exist, the "value" field of the matching row (by key) is updated. So when used in the above query, it is a precompiled stored procedure, and the query just executes such procedures.
    *** added "--daemon_memcached_option=-t8" (the default is 4 threads)

    So we can see that with this "set" query, InnoDB Memcached runs 4.5 to 9 times faster than the MySQL server.

    Table 2: Performance comparison on get operations

        Connections   Memcached plugin Get (QPS)      5.6.7-RC Get (QPS)      X faster
        8             42,000                          27,000                  1.56
        32            101,000                         55,000                  1.83
        128           117,000                         52,000                  2.25
        512           109,000                         52,000                  2.10

    With the "get" query (or the select query), Memcached performs 1.5 to 2 times faster than normal SQL.

    Summary

    In summary, we added several much-desired features to InnoDB Memcached in this release, allowing users to operate on different tables through the Memcached interface. We now also provide a background commit thread to commit long running idle transactions, thus allowing users to configure large batch writes/reads without worrying about a large number of rows being locked or not being able to see (uncommitted) data. We also greatly enhanced the performance when the binlog is enabled. We will continue making efforts in both the performance and functionality areas to make InnoDB Memcached a good demo case for our InnoDB APIs.

    Jimmy Yang, September 29, 2012
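
    As a postscript to the multiple table mapping feature above: to see the mapping switch from Java, here is a minimal sketch assuming the open-source spymemcached client (key and mapping names reuse the examples above; InnoDB Memcached listens on the standard port 11211 by default):

        import java.net.InetSocketAddress;
        import net.spy.memcached.MemcachedClient;

        public class MappingSwitchDemo {

            public static void main(String[] args) throws Exception {
                MemcachedClient client =
                        new MemcachedClient(new InetSocketAddress("localhost", 11211));

                // "@@" plus the registered mapping name switches this connection
                // to my_database/my_demo and reads key_a in one call.
                System.out.println(client.get("@@setup_3.key_a"));

                // Subsequent commands stay bound to the same table.
                client.set("key_c", 0, "value_c");
                System.out.println(client.get("key_b"));

                client.shutdown();
            }
        }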

    Read the article

  • The Social Business Thought Leaders - Esteban Kolsky

    - by kellsey.ruppel
    Esteban Kolsky's presentation at the Social Business Forum 2012 was meaningfully titled "Everything you wanted to know about Customer Service using Social but had no one to ask". A recent survey by ThinkJar, Kolsky's independent analyst firm, reported that more than 90% of the interviewed companies consider embracing social channels in customer service the right thing to do for the business and its customers. These numbers shouldn't be too surprising given the popularity of services such as Twitter and Facebook (59% and 60% respectively in the survey) among organizations, the power consumers are gaining online, and the 40% of consumers who prefer to escalate issues on social services. Moreover, both large enterprises and small businesses are realizing that customer retention is cheaper and easier than customer acquisition. Many companies are looking at communities and social networks as an opportunity to drive loyalty, satisfaction and word of mouth. However, in this early phase the way they are preparing to launch social support appears to be lacking at best:

    - 66% have no defined processes for customer service over social channels
    - 68% were not able to estimate ROI before deploying social in customer service
    - Only 8% found the expected ROI
    - Most of the projects are stuck in the pilot or testing phase

    In his interview for the Social Business Thought-Leaders, Esteban discusses how to turn social media hype into business gains by touching upon some of the hottest topics organizations face when approaching social support:

    - How to go from social media monitoring to actionable insights
    - How Social CRM should best be positioned with regard to traditional CRM
    - The importance of integrating social data with transactional data

    Conversations with customer service organizations point to 2012 as the year of "understanding what social means for supporting customers". Will 2013 be the year it all becomes reality? We invite you to listen to Esteban Kolsky's interview to understand how to most effectively develop cross-channel strategies that include social channels and improve both customer satisfaction and the overall customer experience.

    Read the article
