Search Results


  • Two Free Training Webcasts Open for Registration

    - by KKline
    We've got two sessions that you need to sign up for right away. The upcoming webcast for Oracle-oriented folks has huge registration numbers. So get in while you still can before we hit the limit of what LiveMeeting can handle. Pain of the Week: SQL Server for the Oracle DBA Webcast: SQL Server for the Oracle DBA Date: Thursday, May 27, 2010 (Just a couple days hence!) Time: 8 a.m. Pacific / 11 a.m. Eastern / 4 p.m. United Kingdom / 5 p.m. Central Europe Duration: 45-60 minutes Cost: FREE In enterprise...(read more)

    Read the article

  • Interactive Master Detail Report Just A Few Minutes Away!

    - by kanichiro.nishida
    Oracle BI Publisher 11G has not just made Master Detail report development much easier and quicker, but also made it more interactive and fun, without any coding or scripting. I’ve just created a short video that shows how to create such a Master Detail report within a few minutes, so please take a look if you’re interested! With 11G, you can now create such a report very quickly using only your browser, and your report audience will not only be able to interact with the report but also view it in a pixel-perfect way in many different formats such as PDF, Excel, Word, PPT, etc. Happy Master Detail report development and design! Please share any feedback you have on the Interactive Viewer and Layout Editor with us!

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs. This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced. This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it, because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real objects, stub objects would not actually require any dependencies, and so all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

    # DATA(i386) __iob 0x3c0
    # DATA(amd64,sparcv9) __iob 0xa00
    # DATA(sparc) __iob 0x140
    iob;

    A further problem then became clear: if we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
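    To make the validation idea concrete, here is a rough sketch of that comparison in Java rather than the perl actually used: collect the global symbols each object exports via elfdump and compare the sets. The elfdump -s column positions assumed here are approximate, and the real script also checked versioning, symbol sizes, and bss placement as described above.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.Set;
    import java.util.TreeSet;

    // Sketch: compare the global symbols two objects export, in the spirit
    // of the elfdump-based validation script described above.
    public class StubCheck {
        // Approximate elfdump -s parsing: the bind column is assumed to be
        // field 4 and the symbol name the last whitespace-separated field.
        static Set<String> globals(String object) throws Exception {
            Set<String> syms = new TreeSet<String>();
            Process p = new ProcessBuilder("elfdump", "-s", object).start();
            BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()));
            for (String line; (line = r.readLine()) != null; ) {
                String[] f = line.trim().split("\\s+");
                if (f.length >= 9 && "GLOB".equals(f[4])) {
                    syms.add(f[f.length - 1]); // keep global symbol names only
                }
            }
            p.waitFor();
            return syms;
        }

        public static void main(String[] args) throws Exception {
            Set<String> real = globals(args[0]); // the real object
            Set<String> stub = globals(args[1]); // its stub
            System.out.println(real.equals(stub)
                    ? "linking interfaces match"
                    : "interface mismatch: " + symmetricDiff(real, stub));
        }

        // Symbols present in one object but not the other.
        static Set<String> symmetricDiff(Set<String> a, Set<String> b) {
            Set<String> union = new TreeSet<String>(a);
            union.addAll(b);
            Set<String> both = new TreeSet<String>(a);
            both.retainAll(b);
            union.removeAll(both);
            return union;
        }
    }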
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had it in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010: 6916788 ld version 2 mapfile syntax, PSARC/2009/688 Human readable and extensible ld mapfile syntax. In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010: 6916796 OSnet mapfiles should use version 2 link-editor syntax. That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

    % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
        -ztext -zdefs -Bdirect ...
    real 0.019708910
    user 0.010101680
    sys  0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target called stub is added to library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target called stubinstall is added to the Makefiles, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with: 6993877 ld should produce stub objects, PSARC/2010/397 ELF Stub Objects. This was followed by the work to convert the ON consolidation in snv_161 (February 2011) with: 7009826 OSnet should use stub objects, 4631488 lib/Makefile is too patient: .WAITs should be reduced. This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward. Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • CMS DITA North America Conference / Agile Doc

    - by ultan o'broin
    I attended and presented, along with a colleague, at the Content Management Strategies DITA North America Conference 2010 in Santa Clara this week. It was touch and go whether I would make it across the Atlantic, but as usual the Irish always got through! Our presentation was about DITA and Writing Patterns, and there were three other presentations from Oracle folks too, all very well delivered and received. The interaction with other companies was superb, and the sparks of innovation that flew as a result left me with three use case ideas for UX investigation and implementation. My colleague had a similar experience. Well worth attending! One of the last sessions was about Authoring in an Agile environment, presented by Julio Vasquez. This was an excellent, common sense, and forthright no-nonsense delivery that made complete sense to me. If you're interested, I'd encourage you to check out Julio's white paper on the subject too, available from the SDI website.

    Read the article

  • Use WLST to Delete All JMS Messages From a Destination

    - by james.bayer
    I got a question today about whether WebLogic Server has any tools to delete all messages from a JMS queue.  It just so happens that the WLS Console has this capability already.  It’s available on the screen after the “Show Messages” button is clicked on a destination’s Monitoring tab, as seen in the screen shot below. The console is great for something ad hoc, but what if I want to automate this?  Well, it turns out that the console is just a WebLogic application layered on top of the JMX management interface.  If you look at the MBean Reference, you’ll find a JMSDestinationRuntimeMBean that includes the operation deleteMessages, which takes a JMS message selector as an argument.  If you pass an empty string, that is essentially a wild card that matches all messages. Coding a stand-alone JMX client for this is kind of lame, so let’s do something more suitable to scripting.  In addition to the console, the WebLogic Scripting Tool (WLST), based on Jython, is another way to browse and invoke MBeans, so an equivalent interactive shell session to delete messages from a destination would look like this:

    D:\Oracle\fmw11gr1ps3\user_projects\domains\hotspot_domain\bin>setDomainEnv.cmd
    D:\Oracle\fmw11gr1ps3\user_projects\domains\hotspot_domain>java weblogic.WLST

    Initializing WebLogic Scripting Tool (WLST) ...

    Welcome to WebLogic Server Administration Scripting Shell

    Type help() for help on available commands

    wls:/offline> connect('weblogic','welcome1','t3://localhost:7001')
    Connecting to t3://localhost:7001 with userid weblogic ...
    Successfully connected to Admin Server 'AdminServer' that belongs to domain 'hotspot_domain'.

    Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.

    wls:/hotspot_domain/serverConfig> serverRuntime()
    Location changed to serverRuntime tree. This is a read-only tree with ServerRuntimeMBean as the root.
    For more help, use help(serverRuntime)

    wls:/hotspot_domain/serverRuntime> cd('JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0')
    wls:/hotspot_domain/serverRuntime/JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0> ls()
    dr--   DurableSubscribers

    -r--   BytesCurrentCount              0
    -r--   BytesHighCount                 174620
    -r--   BytesPendingCount              0
    -r--   BytesReceivedCount             253548
    -r--   BytesThresholdTime             0
    -r--   ConsumersCurrentCount          0
    -r--   ConsumersHighCount             0
    -r--   ConsumersTotalCount            0
    -r--   ConsumptionPaused              false
    -r--   ConsumptionPausedState         Consumption-Enabled
    -r--   DestinationInfo                javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=DestinationInfo, ...), contents={..., Queue=true, SerializedDestination=rO0ABXNyACN3ZWJsb2dpYy5qbXMuY29tbW9uLkRlc3RpbmF0aW9uSW1wbFSmyJ1qZfv8DAAAeHB3kLZBABZTeXN0ZW1Nb2R1bGUtMCFRdWV1ZS0wAAtKTVNTZXJ2ZXItMAAOU3lzdGVtTW9kdWxlLTABAANBbGwCAlb6IS6T5qL/AAAACgEAC0FkbWluU2VydmVyAC2EGgJW+iEuk+ai/wAAAAsBAAtBZG1pblNlcnZlcgAthBoAAQAQX1dMU19BZG1pblNlcnZlcng=, ServerName=JMSServer-0, Topic=false, VersionNumber=1})
    -r--   DestinationType                Queue
    -r--   DurableSubscribers             null
    -r--   InsertionPaused                false
    -r--   InsertionPausedState           Insertion-Enabled
    -r--   MessagesCurrentCount           0
    -r--   MessagesDeletedCurrentCount    3
    -r--   MessagesHighCount              2
    -r--   MessagesMovedCurrentCount      0
    -r--   MessagesPendingCount           0
    -r--   MessagesReceivedCount          3
    -r--   MessagesThresholdTime          0
    -r--   Name                           SystemModule-0!Queue-0
    -r--   Paused                         false
    -r--   ProductionPaused               false
    -r--   ProductionPausedState          Production-Enabled
    -r--   State                          advertised_in_cluster_jndi
    -r--   Type                           JMSDestinationRuntime

    -r-x   closeCursor                    Void : String(cursorHandle)
    -r-x   deleteMessages                 Integer : String(selector)
    -r-x   getCursorEndPosition           Long : String(cursorHandle)
    -r-x   getCursorSize                  Long : String(cursorHandle)
    -r-x   getCursorStartPosition         Long : String(cursorHandle)
    -r-x   getItems                       javax.management.openmbean.CompositeData[] : String(cursorHandle),Long(start),Integer(count)
    -r-x   getMessage                     javax.management.openmbean.CompositeData : String(cursorHandle),Long(messageHandle)
    -r-x   getMessage                     javax.management.openmbean.CompositeData : String(cursorHandle),String(messageID)
    -r-x   getMessage                     javax.management.openmbean.CompositeData : String(messageID)
    -r-x   getMessages                    String : String(selector),Integer(timeout)
    -r-x   getMessages                    String : String(selector),Integer(timeout),Integer(state)
    -r-x   getNext                        javax.management.openmbean.CompositeData[] : String(cursorHandle),Integer(count)
    -r-x   getPrevious                    javax.management.openmbean.CompositeData[] : String(cursorHandle),Integer(count)
    -r-x   importMessages                 Void : javax.management.openmbean.CompositeData[],Boolean(replaceOnly)
    -r-x   moveMessages                   Integer : String(java.lang.String),javax.management.openmbean.CompositeData,Integer(java.lang.Integer)
    -r-x   moveMessages                   Integer : String(selector),javax.management.openmbean.CompositeData
    -r-x   pause                          Void :
    -r-x   pauseConsumption               Void :
    -r-x   pauseInsertion                 Void :
    -r-x   pauseProduction                Void :
    -r-x   preDeregister                  Void :
    -r-x   resume                         Void :
    -r-x   resumeConsumption              Void :
    -r-x   resumeInsertion                Void :
    -r-x   resumeProduction               Void :
    -r-x   sort                           Long : String(cursorHandle),Long(start),String[](fields),Boolean[](ascending)

    wls:/hotspot_domain/serverRuntime/JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0> cmo.deleteMessages('')
    2

    Here the domain name is “hotspot_domain”, the JMS server name is “JMSServer-0”, the queue name is “Queue-0” and the system module is named “SystemModule-0”.  To invoke the operation, I use the “cmo” object, which is the “Current Management Object” that represents the currently navigated-to MBean.  The 2 indicates that two messages were deleted.  Combining this WLST code with a recent post by my colleague Steve that shows you how to use an encrypted file to store the authentication credentials, you could easily turn this into a secure automated script.  If you need help with that step, a long while back I blogged about some WLST basics.  Happy scripting.
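    And if you ever do need that stand-alone JMX client (say, to embed the cleanup inside a Java tool), a rough equivalent follows. The deleteMessages operation comes from the MBean Reference discussed above; the URL, credentials, and ObjectName pattern are assumptions to adapt to your own domain.

    import java.util.Hashtable;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;
    import javax.naming.Context;

    // Sketch: connect to the runtime MBean server and invoke
    // deleteMessages("") on a JMS destination runtime MBean.
    public class DeleteAllMessages {
        public static void main(String[] args) throws Exception {
            // t3 access requires the WebLogic client library on the classpath
            // (e.g. wlfullclient.jar); host, port and credentials are examples.
            JMXServiceURL url = new JMXServiceURL("t3", "localhost", 7001,
                    "/jndi/weblogic.management.mbeanservers.runtime");
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.SECURITY_PRINCIPAL, "weblogic");
            env.put(Context.SECURITY_CREDENTIALS, "welcome1");
            env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                    "weblogic.management.remote");
            JMXConnector jmx = JMXConnectorFactory.connect(url, env);
            try {
                MBeanServerConnection mbs = jmx.getMBeanServerConnection();
                // The name pattern is an assumption; adjust module!queue to yours.
                ObjectName pattern = new ObjectName(
                        "com.bea:Type=JMSDestinationRuntime,Name=SystemModule-0!Queue-0,*");
                for (ObjectName dest : mbs.queryNames(pattern, null)) {
                    // An empty selector matches, and so deletes, all messages.
                    Object deleted = mbs.invoke(dest, "deleteMessages",
                            new Object[] { "" },
                            new String[] { String.class.getName() });
                    System.out.println(dest + ": deleted " + deleted);
                }
            } finally {
                jmx.close();
            }
        }
    }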

    Read the article

  • Save Points

    - by raghu.yadav
    Explicit save point: requires an end user action before a bounded or unbounded task flow creates a save point. For example, an end user clicks a button that invokes a method call activity that, in turn, creates a save point. Implicit save point: can only originate from a bounded task flow, if 1) a session times out due to end user inactivity, 2) an end user logs out without saving the data, 3) an end user closes the only browser window, thus logging out of the application, or 4) an end user navigates away from the current application using control flow rules (for example, uses a goLink component to go to an external URL) while having unsaved data. Good use cases and examples of implicit save points are given by Frank Nimphius and Edwin Biemond:
    http://www.oracle.com/technology/products/jdev/tips/fnimphius/cancelForm/cancelForm_wsp.html
    http://biemond.blogspot.com/2008/04/automatically-save-transactions-with.html
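    As an illustration of the explicit case, the method behind such a button could look roughly like this, assuming the ADF Controller save point API (ControllerContext and its SavePointManager); treat the exact names as an assumption and confirm them in the Fusion Developer's Guide.

    import oracle.adf.controller.ControllerContext;
    import oracle.adf.controller.savepoint.SavePointManager;

    // Sketch: create an explicit save point from a managed bean method,
    // e.g. invoked by a button through a method call activity.
    public class SavePointBean {
        public String saveForLater() {
            // API names assumed from the ADF Controller documentation.
            SavePointManager mgr =
                    ControllerContext.getInstance().getSavePointManager();
            String id = mgr.createSavePoint(); // persist current task flow state
            System.out.println("Created save point: " + id);
            return "continue"; // illustrative navigation outcome
        }
    }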

    Read the article

  • Agent versus Agentless management

    - by Owen Allen
    I got a couple of questions about Agentless asset management: "What does agentless management do for an asset?" Agentless management is one of the two ways that you can manage an operating system. Rather than installing an Agent Controller on the OS, agentless management uses SSH to regularly check the system and gather monitoring data. Many of the actions that would be available on an agent-managed system are available on an agentless system, but actions such as running reports or updating an Oracle Solaris 10 or Linux OS are not available. A table showing the capabilities of agentless management is here. "What permissions does agentless management require?" Agentless management still requires root credentials. If you can't log into the system as root, you can provide one set of credentials for the login, and then a set of root credentials to switch to.
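    Ops Center's agentless implementation is internal to the product, but the general pattern described above (log in over SSH and gather monitoring data with read-only commands) can be sketched with a third-party SSH library such as JSch. This is purely illustrative, not how Ops Center itself is written; the host, user, and command are placeholders.

    import com.jcraft.jsch.ChannelExec;
    import com.jcraft.jsch.JSch;
    import com.jcraft.jsch.Session;
    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Illustrative agentless-style poll: connect over SSH and run a
    // read-only probe, the way data is gathered without an installed agent.
    public class AgentlessPoll {
        public static void main(String[] args) throws Exception {
            JSch jsch = new JSch();
            Session session = jsch.getSession("root", "host.example.com", 22);
            session.setPassword("secret");                    // demo only
            session.setConfig("StrictHostKeyChecking", "no"); // demo only
            session.connect();
            ChannelExec channel = (ChannelExec) session.openChannel("exec");
            channel.setCommand("uptime"); // any read-only monitoring command
            BufferedReader out = new BufferedReader(
                    new InputStreamReader(channel.getInputStream()));
            channel.connect();
            for (String line; (line = out.readLine()) != null; ) {
                System.out.println(line);
            }
            channel.disconnect();
            session.disconnect();
        }
    }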

    Read the article

  • CFP for Java Embedded @ JavaOne

    - by Tori Wieldt
    Java Embedded @ JavaOne is designed to provide business and technical decision makers, as well as Java embedded ecosystem partners, a unique occasion to come together and learn how they can use Java Embedded technologies for new business opportunities. The call for papers (CFP) for Java Embedded @ JavaOne is now open. Interested speakers are invited to make business submissions: best practices, case studies and panel discussions on emerging opportunities in the Java embedded space. Submit a paper. Due to high interest, event organizers are also asking for technical submissions for the JavaOne conference, for the "Java ME, Java Card, Embedded and Devices" track (this track ONLY). The timeline for the CFP is the same for both business and technical submissions:

    CFP opens – June 18th
    Deadline for submissions – July 18th
    Notifications (accepts/declines) – week of July 29th
    Deadline for speakers to accept speaker invitation – August 10th
    Presentations due for review – August 31st

    Attendees of both JavaOne and Oracle OpenWorld can attend Java Embedded @ JavaOne by purchasing a $100.00 USD upgrade to their full conference pass. Rates for attending Java Embedded @ JavaOne alone are here.

    Read the article

  • The Threats are Outside, the Risks are Inside

    - by Naresh Persaud
    In the past few years we have seen the threats against the enterprise increase dramatically. The number of attacks originating externally has outpaced the number of attacks driven by insiders. During the CSO Summit at Open World, Sonny Singh examined the phenomenon and shared Oracle's security story. While the threats are largely external, the risks are largely inside. Criminals are going after our sensitive customer data. In some cases the attacks are advanced. In most cases the attacks are very simple. Taking a security inside-out approach can provide a cost-effective way to secure an organization's most valuable assets. Slides: Cso oow12-summit-sonny-sing hv4, from OracleIDM.

    Read the article

  • New Replication, Optimizer and High Availability features in MySQL 5.6.5!

    - by Rob Young
    As the Product Manager for the MySQL database it is always great to announce when the MySQL Engineering team delivers another great product release.  As a field DBA and developer it is even better when that release contains improvements and innovation that I know will help those currently using MySQL for apps that range from modest intranet sites to the most highly trafficked web sites on the web.  That said, it is my pleasure to take my hat off to MySQL Engineering for today's release of the MySQL 5.6.5 Development Milestone Release ("DMR"). The new highlighted features in MySQL 5.6.5 are discussed here:

    New Self-Healing Replication Clusters: The 5.6.5 DMR improves MySQL Replication by adding Global Transaction Ids and automated utilities for self-healing Replication clusters.  Prior to 5.6.5 this has been somewhat of a pain point for MySQL users, with most developing custom solutions or looking to costly, complex third-party solutions for these capabilities.  With 5.6.5 these shackles are all but removed by a solution that is included with the GPL version of the database and supporting GPL tools.  You can learn all about the details of the great, problem solving Replication features in MySQL 5.6 in Mat Keep's Developer Zone article.

    New Replication Administration and Failover Utilities: As mentioned above, the new Replication features, Global Transaction Ids specifically, are now supported by a set of automated GPL utilities that leverage the new GTIDs to provide administration and manual or automatic failover to the most up to date slave (that is the default, but user configurable if needed) in the event of a master failure. The new utilities, along with links to Engineering related blogs, are discussed in detail in the DevZone article noted above.

    Better Query Optimization and Throughput: The MySQL Optimizer team continues to amaze with the latest round of improvements in 5.6.5. Along with much refactoring of the legacy code base, the Optimizer team has improved complex query optimization and throughput by adding these functional improvements:

    - Subquery optimizations: subqueries are now included in the Optimizer path for runtime optimization.  Better throughput of nested queries enables application developers to simplify and consolidate multiple queries and result sets into a single unit of work.
    - The Optimizer now uses CURRENT_TIMESTAMP as the default for DATETIME columns. For simplification, this eliminates the need for application developers to assign this value when a column of this type is blank by default.
    - Optimizations for range based queries: the Optimizer now uses ready statistics vs index based scans for queries with multiple range values.
    - Optimizations for queries using filesort and ORDER BY: the optimization criteria/decision on the execution method is now made at the optimization vs parsing stage.
    - Print EXPLAIN in JSON format for hierarchical readability and Enterprise tool consumption.

    You can learn the details about these new features as well as all of the Optimizer based improvements in MySQL 5.6 by following the Optimizer team blog. You can download and try the MySQL 5.6.5 DMR here (look under "Development Releases").  Please let us know what you think!  The new HA utilities for Replication Administration and Failover are available as part of the MySQL Workbench Community Edition, which you can download here.

    Also New in MySQL Labs: As has become our tradition when announcing DMRs, we also like to provide "Early Access" development features to the MySQL Community via the MySQL Labs.
    Today is no exception, as we are also releasing the following to Labs for you to download, try, and let us know your thoughts on where we need to improve:

    InnoDB Online Operations: MySQL 5.6 now provides Online ADD Index, FK Drop and Online Column RENAME.  These operations are non-blocking and will continue to evolve in future DMRs.  You can learn the grainy details by following John Russell's blog.

    InnoDB data access via Memcached API ("NotOnlySQL"), an improved refresh of an earlier feature release: Similar to Cluster 7.2, MySQL 5.6 provides direct NotOnlySQL access to InnoDB data via the familiar Memcached API. This provides the ultimate in flexibility for developers who need fast, simple key/value access and complex query support commingled within their applications.

    Improved Transactional Performance and Scale: The InnoDB Engineering team has once again under promised and over delivered in the area of improved performance and scale.  These improvements are also included in the aggregated Spring 2012 labs release. InnoDB CPU cache performance improvements for modern, multi-core/CPU systems show great promise, with internal tests showing: 2x throughput improvement for read only activity; 6x throughput improvement for SELECT range; Read/Write benchmarks are in progress. More details on the above are available here. You can download all of the above in an aggregated "InnoDB 2012 Spring Labs Release" binary from the MySQL Labs. You can also learn more about these improvements and about related fixes to mysys mutex and hash sort by checking out the InnoDB team blog.

    MySQL 5.6.5 is another installment in what we believe will be the best release of the MySQL database ever.  It also serves as a shining example of how the MySQL Engineering team at Oracle leads in MySQL innovation. You can get the overall Oracle message on the MySQL 5.6.5 DMR and Early Access labs features here. As always, thanks for your continued support of MySQL, the #1 open source database on the planet!
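    As a quick taste of the new Optimizer output mentioned above, the hierarchical JSON plan can be fetched from any client. A minimal JDBC sketch follows; the connection URL, credentials, and table are placeholders, and Connector/J is assumed on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Sketch: fetch the hierarchical JSON query plan new in MySQL 5.6.
    public class ExplainJson {
        public static void main(String[] args) throws Exception {
            Connection c = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/test", "user", "password");
            try {
                Statement s = c.createStatement();
                // FORMAT=JSON is the new 5.6 EXPLAIN output format.
                ResultSet rs = s.executeQuery(
                        "EXPLAIN FORMAT=JSON SELECT * FROM t WHERE id BETWEEN 10 AND 20");
                while (rs.next()) {
                    System.out.println(rs.getString(1)); // the JSON plan
                }
            } finally {
                c.close();
            }
        }
    }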

    Read the article

  • Boston: Free Java Developer Event March 3rd!

    - by Jacob Lehrbaum
    Attention Boston area developers!  Oracle has been running a series of free one-day Java Developer events in the US, Europe, and Asia since last November, and on March 3rd this highly popular series is coming to the Westin Copley Place in Boston.  The Java Developer Day will include four tracks of sessions and hands-on labs designed for developers interested in Server, Desktop, Embedded, and core Java SE platform topics.  Technologies covered include Java EE, Java ME and Java SE (including the JDK).  From the event page:

    Come to this free event if you are interested in:
    - Evaluating the Java platform
    - Using other languages on the JVM
    - Building server-side Java
    - Constructing Rich Web or Desktop Applications
    - Understanding the JVM and its built-in diagnostics
    - Making Smart Devices even smarter

    Check out the event page to read more and/or register.  The event is free, but space is limited so register today!

    Read the article

  • Introducing UPK 3.6 Simulation Help (You Say It and We Do It!)

    - by marc.santosusso
    We would like to thank everyone who participated in the recent documentation survey that was conducted over the last several months. Your feedback is valuable and we appreciate the time you took to provide it. Many of you commented that you would like to have "UPKs for UPK" in the documentation. In response, we are pleased to announce the availability of Simulation Help. This unique help system is a blending of the text-based Developer help and a collection of approximately 200 simulations that show authors how to create, record, refine, localize, and publish content using the Developer. You can access Simulation Help at any time using the following link: http://download.oracle.com/technology/products/upk/index.html Save this link as a favorite or bookmark in your browser for easy access anytime. We have also provided a link to a short one-question survey so you can tell us what you think of the new Simulation Help: http://www.surveymonkey.com/s/BJT7LV6 Thanks again for your valuable feedback on the product documentation!

    Read the article

  • JPA and NoSQL using EclipseLink - MongoDB supported

    - by arungupta
    EclipseLink 2.4 has added JPA support for NoSQL databases; MongoDB and Oracle NoSQL are the first ones to make the cut. The support can be extended to other NoSQL databases by adding an EclipseLink EISPlatform class and a JCA adapter. A Java class can be mapped to a NoSQL datasource using the @NoSQL annotation or <no-sql> XML element. Even a subset of JPQL and the Criteria API is supported, dependent on the NoSQL database's query support. The connection properties are specified in "persistence.xml". A complete sample showing how the JPA annotations are mapped and how @NoSQL is used is explained here. The MongoDB version of the source code can also be checked out from the SVN repository. EclipseLink 2.4 is scheduled to be released with Eclipse Juno in June 2012, and the complete set of supported features is described on their wiki. The milestone and nightly builds are already available. Do you want to try it with GlassFish and let us know?
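    For a flavor of the mapping, a minimal entity might look like the following sketch, modeled on the EclipseLink MongoDB examples. The annotation names (from the org.eclipse.persistence.nosql.annotations package) and the MAPPED data format are assumptions to verify against the wiki, since the release was still in milestone form.

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import org.eclipse.persistence.nosql.annotations.DataFormatType;
    import org.eclipse.persistence.nosql.annotations.Field;
    import org.eclipse.persistence.nosql.annotations.NoSql;

    // Sketch: a JPA entity stored in MongoDB via EclipseLink 2.4.
    @Entity
    @NoSql(dataFormat = DataFormatType.MAPPED) // map fields to a BSON document
    public class Order {
        @Id
        @GeneratedValue
        @Field(name = "_id") // MongoDB's primary key field
        private String id;

        @Field(name = "description")
        private String description;

        public String getId() { return id; }
        public String getDescription() { return description; }
        public void setDescription(String d) { this.description = d; }
    }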

    Read the article

  • Impact of Truncate or Drop Table When Flashback Database is Enabled

    - by alejandro.vargas
    Recently I was working on a VLDB, on the implementation of a disaster recovery environment configured with a Data Guard physical standby and fast-start failover. One of the questions that came up was about the overhead of truncating and dropping tables. There are daily jobs on the database that truncate extremely large partitions, and as note 565535.1 explains, we knew there is an overhead for these operations. But the information in the note was not clear enough, so with the additional information I got from senior Oracle colleagues I compiled this document, "Impact of Truncate or Drop Table When Flashback Database is Enabled", which explains the case further.

    Read the article

  • SF Bay Area Event, November 1st: SPARC 25th Anniversary at Computer History Museum

    - by Larry Wake
    For those of you in the Bay Area, there's going to be what promises to be a very interesting event at the Computer History Museum on Thursday, November 1st at 11 AM: "SPARC at 25: Past, Present and Future". The panel event will feature Sun Microsystems founders Bill Joy and Andy Bechtolsheim, SPARC luminaries such as Anant Agrawal and David Patterson, former Sun VP Bernard Lacroute, plus Oracle executives Mark Hurd, John Fowler and Rick Hetherington. For those of you who can't attend, we expect to have video of the event afterward, but if you can make it in person, this is a unique opportunity to hear from industry pioneers, as well as get insights into future SPARC innovations. Plus, you can see SPARC (and non-SPARC) related exhibits from both the Computer History Museum and the personal collections of some of the panel participants. I hope you can join us; Register today.

    Read the article

  • Hands-On Experience with the Database Machine!

    - by Fekete Zoltán
    Two weeks ago we gained hands-on experience with the data warehouse of a company operating in Hungary, in a real Database Machine / Exadata environment, on a Sun Oracle Database Machine Half Rack configuration. The results are truly impressive. The exciting and unique features of Exadata Storage (Smart Scan, Smart Flash Cache, Storage Index, and compression with Exadata Hybrid Columnar Compression), together with an incredible degree of parallelism, give the user a performance advantage that produces wide eyes, cheers, and a long-lasting inner smile. Is this good for the health of you and your database environment? :) Ask your doctor, your pharmacist, the white papers, and the published customer stories. For the technical and commercial details, ask me. :) These hands-on experiences can also be heard in person, from an authentic source, at tomorrow's session (Wednesday, March 24, 2010) at the HOUG Conference. I wish everyone similarly beautiful days!

    Read the article

  • Podcast Show Notes: SOA Made Simple

    - by Bob Rhubart
    My guests for the latest OTN ArchBeat Podcast are Lonneke Dikmans and Ronald van Luttikhuizen, managing partners at Vennster (http://www.vennster.nl/), an IT consultancy based in the Netherlands. Lonneke and Ronald are Oracle ACE Directors, very active members of the OTN architect community, and they have participated as panelists in previous ArchBeat podcasts. But given their collaboration on an upcoming book on service oriented architecture, I thought it was time to let them have the program to themselves. Listen to Part 1. Listen to Part 2 (Nov 30). Listen to Part 3 (Dec 7). Get Connected: Lonneke and Ronald are very active in social media. Strike up your own conversation with them via the following links: Lonneke Dikmans, Ronald van Luttikhuizen. Coming Soon: A panel discussion with three members of the product team behind the upcoming release of WebLogic Server 12c. Stay tuned: RSS

    Read the article

  • First Blog Entry & OracleWebLogic YouTube Channel

    - by Jeffrey West
    This is my first blog post ever!  I'll be blogging about WebLogic, Exalogic and other... logics... In the meantime, check out our Oracle WebLogic YouTube Channel!  We have 50+ subscribers and growing!  We really want to hear feedback from our WebLogic users, so let us know how we are doing.  Leave a comment on our WebLogic channel, comment on one of our videos or comment on our blogs and let us know what you want to see from us!

    Read the article

  • OracleWebLogic YouTube Channel

    - by Jeffrey West
    James Bayer and I have been working on content for an Oracle WebLogic YouTube channel to host demos and overviews of WebLogic features.  The goal is to provide short educational overviews and demos of new, useful, or 'hidden gem' WLS features that may be underutilized.  We currently have 26 videos, including Advanced JMS features, WLST and JRockit Mission Control.  We also have a few videos about our JRockit Virtual Edition software that is pretty neat. We will be making ongoing updates to the content.  We really do want people to give us feedback on what they want to see with regard to WebLogic.  Whether it's how you achieve a certain architectural goal with WLS or a demonstration and sample code for a feature, all requests related to WLS are welcome! You can find the channel here: http://www.YouTube.com/OracleWebLogic.  Please comment on the channel or our WebLogic Server blog to let us know what you think.  Thanks!

    Read the article

  • Webcast - Social BPM: Integrating Enterprise 2.0 with Business Applications

    - by peggy.chen
    In today's fast-paced marketplace, successful companies rely on agile business processes and collaborative work environments to stay ahead of the competition. By making your application-based business processes visible, shareable, and flexible through dynamic, process-aware user interfaces, you can ensure that your team's best ideas are heard and implemented quickly. Join us for this complimentary live Webcast and learn how Oracle's business process management (BPM) solution with integrated Enterprise 2.0 capabilities will enable your team to:

    - Embed ad hoc collaboration into your structured processes and gain a unified view of enterprise information, across business functions, for effective and efficient decision-making
    - Reach out to an expanded network for expert input in resolving exceptions in business workflows
    - Add social feedback loops to your enterprise applications and continuously improve business processes

    Join us for this LIVE Webcast tomorrow as we discuss how business process management with integrated Enterprise 2.0 collaboration improves business responsiveness and enhances overall enterprise productivity. Take your business to the next level with a unified solution that fosters process-based collaboration between employees, partners, and customers. Register for the webcast now!

    Read the article

  • Meet us at Devoxx!

    - by terrencebarr
    It’s Devoxx time again! If you’re at Devoxx, be sure to check the schedule for a whole range of exciting Java and Oracle topics: JavaFX, OpenJDK, JDK 7, Java Embedded, Java EE, JCP, NetBeans, Greenfoot, as well as Java Duchess and JUG meetings. Talks, labs, BOFs, demos, and more. Embedded Java will also play a prominent role. Want to see Java on Raspberry Pi in action? Find out what’s happening with Java in the IoT (Internet of Things)? Play with NetBeans and Tinkerforge? Check out the full Devoxx schedule. Why do I think Java has the most exciting part of its future still ahead of it? Catch up with me at my talk on Wed 14:00: “Small, Smart, Connected: Java in the Internet of Things”. Cheers, – Terrence Filed under: Mobile & Embedded Tagged: embedded, Embedded Java, Java, Java Embedded, JavaFX, NetBeans, OpenJDK

    Read the article

  • OWB 11gR2 for Windows Standalone Installer Now Available!

    - by antonio romero
    The 11gR2 Windows 32-bit standalone is out: http://www.oracle.com/technology/software/products/warehouse/index.html Tips: You may have to clear your browser cache to get the version of the page with the download link. Windows 7 is not specifically supported at this time. If you are on Windows 7, we have anecdotal accounts of Design Center running quite well in XP Mode.  On other 64-bit Windows platforms, we recommend a virtual machine installation of a certified Windows platform. Come and get it! Join our OWB linkedin group: http://www.linkedin.com/groups?gid=140609

    Read the article

  • iPack -The iOS Application Packager

    - by user13277780
    iOS applications are distributed in .ipa archive files. These files are regular zip files which contain application resources and executables. To protect them from unauthorized modifications and to provide identification of their sources, the content of the archives is signed. The signature is included in the application executable of an .ipa archive and protects the executable file itself and the associated resource files. Apple provides native Mac OS tools for signing iOS executables (which are actually generic Mach-O code signing tools), but these tools are not generally available on other platforms. To provide a multi-platform development environment for JavaFX based iOS applications, we ported iOS signing and packaging to Java and created a dedicated ipack tool for it. The iPack tool can be used as the last step of creating an .ipa package on various operating systems. The prototype has been tested by creating a final distributable for a JavaFX application that runs on iPad, all done on Windows 7.

    Source Code

    The source code of the iPack tool is in the OpenJFX project repository. You can find it in: <openjfx root>/rt/tools/ios/Maven/ipack

    To build the iPack tool use:

    rt/tools/ios/Maven/ipack$ mvn package

    After building, you can run the tool:

    java -jar <path to ipack.jar> <arguments>

    Signing keystore

    The tool uses a Java key store to read the signing certificate and the associated private key. To prepare such a keystore, users can use keytool from the JDK. One possible scenario is to import an existing private key and certificate from a key store used on Mac OS.

    To list the content of an existing key store and identify the source alias:

    keytool -list -keystore <src keystore>.p12 -storetype pkcs12 -storepass <src keystore password>

    To create a Java key store and import the private key with its certificate into the key store:

    keytool -importkeystore \
        -destkeystore <dst keystore> -deststorepass <dst keystore password> \
        -srckeystore <src keystore>.p12 -srcstorepass <src keystore password> -srcstoretype pkcs12 \
        -srcalias <src alias> -destalias <dst alias> -destkeypass <dst key password>

    Another scenario would be to generate a private/public key pair directly in a Java key store and create a certificate request from it. After sending the request to Apple, one can then import the certificate response back into the Java key store and complete the signing certificate entry. In both scenarios the resulting alias in the Java key store will contain only a single (leaf) certificate. This can be verified with the following command:

    keytool -list -v -keystore <ipack keystore> -storepass <keystore password>

    When looking at the Certificate chain length entry, the number next to it is 1. When an executable file is signed on Mac OS, the resulting signature (in CMS format) includes the whole certificate chain up to the Apple Root CA. The ipack tool includes only the chain which is stored under the alias specified on the command line. So to have the whole chain in the signature, we need to replace the single certificate entry under the alias with the corresponding full certificate chain. To do that, we first need to create the chain in a separate file. It is easy to create such a chain when working with certificates in Base-64 encoded PEM format: a certificate chain can be created by concatenating the PEM certificates which should form the chain into a single file.
    For iOS signing we need the following certificates in our chain: the Apple Root CA, the Apple Worldwide Developer Relations CA, and our signing leaf certificate.

    To convert a certificate from the binary DER format (.der, .cer) to PEM format:

    keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert -file <certificate>.cer
    keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert -rfc -file <certificate>.pem

    To export the signing certificate into PEM format:

    keytool -exportcert -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -rfc -file SigningCert.pem

    After constructing a chain from AppleIncRootCertificate.pem, AppleWWDRCA.pem and SigningCert.pem, it can be imported back into the keystore with:

    keytool -importcert -noprompt -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -keypass <key password> -file SigningCertChain.pem

    To summarize, the following example shows the full certificate chain replacement process:

    keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert1 -file AppleIncRootCertificate.cer
    keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert1 -rfc -file AppleIncRootCertificate.pem
    keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert2 -file AppleWWDRCA.cer
    keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert2 -rfc -file AppleWWDRCA.pem
    keytool -exportcert -keystore ipack.ks -storepass keystorepwd -alias mycert -rfc -file SigningCert.pem
    cat SigningCert.pem AppleWWDRCA.pem AppleIncRootCertificate.pem > SigningCertChain.pem
    keytool -importcert -noprompt -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -file SigningCertChain.pem
    keytool -list -v -keystore ipack.ks -storepass keystorepwd

    Usage

    When the ipack tool is started with no arguments it prints the following usage information:

    Usage: ipack <archive> <signing opts> <application opts> [ <application opts> ... ]

    Signing options:
    -keystore <keystore>    keystore to use for signing
    -storepass <password>   keystore password
    -alias <alias>          alias for the signing certificate chain and the associated private key
    -keypass <password>     password for the private key

    Application options:
    -basedir <directory>    base directory from which to derive relative paths
    -appdir <directory>     directory with the application executable and resources
    -appname <file>         name of the application executable
    -appid <id>             application identifier

    Example:

    ipack MyApplication.ipa -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -basedir mysources/MyApplication/dist -appdir Payload/MyApplication.app -appname MyApplication -appid com.myorg.MyApplication
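    If you prefer to verify the chain programmatically rather than eyeballing keytool -list -v, the standard java.security API is sufficient. A small sketch, mirroring the chain-length check above:

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import java.security.cert.Certificate;
    import java.security.cert.X509Certificate;

    // Sketch: print the certificate chain length stored under an alias,
    // mirroring the "Certificate chain length" line of keytool -list -v.
    public class ChainLength {
        public static void main(String[] args) throws Exception {
            // args: <keystore> <storepass> <alias>
            KeyStore ks = KeyStore.getInstance("JKS");
            FileInputStream in = new FileInputStream(args[0]);
            try {
                ks.load(in, args[1].toCharArray());
            } finally {
                in.close();
            }
            Certificate[] chain = ks.getCertificateChain(args[2]);
            System.out.println("Certificate chain length: "
                    + (chain == null ? 0 : chain.length));
            if (chain != null) {
                for (Certificate c : chain) {
                    System.out.println("  "
                            + ((X509Certificate) c).getSubjectX500Principal());
                }
            }
        }
    }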

    Read the article

  • Hot: Pre-built Developer VMs from OTN

    - by Justin Kestelyn
    For those of you who haven't already played with it, Oracle VM VirtualBox is an awfully cool desktop virtualization tool. Even better, it's free. We OTN-ers like it even more than you will though because it allows us to freeze-dry entire software stacks into VM images. Developers can simply download a few files, assemble them with a script we provide, and then import and run the resulting pre-built VM in VirtualBox. Voila, instant (insert name here) stack. These VMs are particularly handy in support of in-person workshops, but there's no reason we can't make them available for everyone. Which we have done, in Java, database, and SOA/BPM flavors. (All "ingredients" are listed at the referenced link, and they are extensive.) Now that we have the kinks worked out, other flavors are sure to become available in 2011. Now go get 'em!

    Read the article

  • Webcast Reminder: Implementing IDM in Healthcare, September 19th @10:00 am PST

    - by Darin Pendergraft
    Join me and Rex Thexton from PwC tomorrow (September 19th) as we review an IDM project that Rex and his team completed for a large healthcare organization.  Rex will talk through the IT environment and business drivers that led to the project, and then we will go through the planning, design and implementation of the Oracle Identity Management products that PwC and the customer chose to complete the project. This will be a great opportunity to hear about the trends that are driving IT Healthcare, and to get your Identity Management questions answered. If you haven't already registered - Register Here!

    Read the article
