Search Results

Search found 20275 results on 811 pages for 'general performance'.


  • How to avoid big and clumsy UITableViewController on iOS?

    - by Johan Karlsson
    I have a problem when implementing the MVC pattern on iOS. I have searched the Internet but cannot seem to find a nice solution to this problem. Many UITableViewController implementations seem to be rather big. Most examples I have seen let the UITableViewController implement <UITableViewDelegate> and <UITableViewDataSource>. These implementations are a big reason why UITableViewController is getting big. One solution would be to create separate classes that implement <UITableViewDelegate> and <UITableViewDataSource>. Of course these classes would have to have a reference to the UITableViewController. Are there any drawbacks to using this solution? In general I think you should delegate the functionality to other "helper" classes or similar, using the delegate pattern. Are there any well-established ways of solving this problem? I do not want the model to contain too much functionality, nor the view. I believe that the logic should really be in the controller class, since this is one of the cornerstones of the MVC pattern. But the big question is: how should you divide the controller of an MVC implementation into smaller, manageable pieces? (This applies to MVC on iOS in this case.) There might be a general pattern for solving this, although I am specifically looking for a solution for iOS. Please give an example of a good pattern for solving this issue, and provide an argument for why your solution is awesome.

    Read the article

  • Insurance Outlook: Just Right of Center

    - by Chuck Johnston Admin
    On Tuesday, June 21st, PwC led a session at the International Insurance Society meeting in Toronto focused on the opportunity in insurance. The scenarios focusing on globalization, regulation and new areas of insurance opportunity were well defined and thought-provoking, but the most interesting part of the session was the audience participation. PwC used a favorite strategic planning tool of mine, scenario planning, to highlight the important financial, political, social and technological dimensions that impact the insurance industry. Using wireless polling keypads, the audience was able to participate in scoring a range of possibilities across each dimension using a 1 to 5 ranking; 1 being generally negative or highly pessimistic scenarios and 5 being very positive or more confident scenarios. The results were then displayed on a screen with a line or "center" in the middle. "Left of center" was defined as being highly cautious and conservative, while "right of center" was defined as a more optimistic outlook for the industry's future. This session was attended by insurance carriers' senior leadership, leading insurance academics, senior regulators, and the occasional insurance technology executive. In general, the average answer fell just right of center, i.e. a little more positive or optimistic than center. Three years ago, after the 2008 financial crisis, I suspect the answers would have skewed more sharply to the left of center. This sense that things are generally getting better for insurers and that there is the potential for positive change pervaded the conference. There is still caution and concern around economic factors, regulation (especially the potential pitfalls of regulatory convergence with banking) and talent management, but in general, the industry outlook is more positive than it has been in several years. Chuck Johnston is vice president of industry strategy, Oracle Insurance.

    Read the article

  • Oracle SPARC SuperCluster and US DoD Security guidelines

    - by user12611852
    I've worked in the past to help our government customers understand how best to secure Solaris. For my customer base that means complying with Security Technical Implementation Guides (STIGs) from the Defense Information Systems Agency (DISA). I recently worked with a team to apply both the Solaris and Oracle 11gR2 database STIGs to a SPARC SuperCluster. The results have been published in an Oracle white paper. The SPARC SuperCluster is a highly available, high performance platform that incorporates: SPARC T4-4 servers, Exadata Storage Servers and software, the ZFS Storage Appliance, an InfiniBand interconnect, Flash Cache, Oracle Solaris 11, Oracle VM for SPARC, and Oracle Database 11gR2. It is targeted towards large, mission-critical database, middleware and general purpose workloads. Using the Oracle Solution Center, we configured an SSC, applied DoD security guidance, and confirmed functionality and performance of the system. The white paper reviews our findings and includes a number of security recommendations. In addition, customers can contact me for the itemized spreadsheets with our detailed STIG reports. Some notes: There is no DISA STIG documentation for Solaris 11. Oracle is working to help DISA create one using their new process. As a result, our report follows the Solaris 10 STIG document and applies it to Solaris 11 where applicable. In my conversations over the years with the DISA Field Security Office, they have repeatedly told me, "The absence of a DISA-written STIG should not prevent a product from being used. Customers may apply vendor or industry security recommendations to receive accreditation." Thanks to the core team: Kevin Rohan, Gary Jensen and Rich Qualls, as well as the staff of the Oracle Solution Center and Glenn Brunette, for their help in creating the document.

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should be more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects, are all examples of things that might profitably go into the stub proto. Without a stub proto, these items were handled in a variety of ad hoc ways: If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this: Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one. These interdependencies are not obvious to the make, utility, and can lead to races. They are not obvious to the human reader, who may therefore not realize that they exist, and break them. Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems: It requires a long list of exceptions to silence our normal unused proto item error checking. In the past, we have accidentally shipped files that we did not intend to deliver to the end user. Mixing cruft with valuable items makes it hard to discern which is which. The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and for the elimination of all existing special case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight — it will be a long term case by case project, but the overall trend is clear. The Stub Proto, -z assert_deflib, And The End Of Accidental System Object Linking We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: How to ensure that we're linking to the OS bits we're building instead of to those from the running system. 
The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures: Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server. Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server. These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so, the second form of error was still possible. 
While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors: Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib). Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch. Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults. Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately. All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated 7021198 (ld option to warn when link accesses a library via default path, PSARC/2011/068 ld -z assert-deflib option) into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor: -z assert-deflib=[libname] Enables warning messages for libraries specified with the -l command line option that are found by examining the default search paths provided by the link-editor. If a libname value is provided, the default library warning feature is enabled, and the specified library is added to a list of libraries for which no warnings will be issued. Multiple -z assert-deflib options can be specified in order to specify multiple libraries for which warnings should not be issued. The libname value should be the name of the library file, as found by the link-editor, without any path components. For example, the following enables default library warnings, and excludes the standard C library: ld ... -z assert-deflib=libc.so ... -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. It is not intended for general use. Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exceptions to this are dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with 7021896 (Prevent OSnet from accidentally linking to build system), which integrated into snv_162 (March 2011).
Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult they can be to prevent without an automatic system in place to catch them. Conclusions: The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

    Read the article

  • Automated texture mapping

    - by brandon
    I have a set of seamless tiling textures. I want to be able to take an arbitrary model and create a UV map with these properties:
    - No stretching (all textures tile appropriately so there is no stretching or shearing of the texture).
    - The textures display on the correct axis relative to the model they are mapped to (if you look at the example, you can see some of the letters on the front are tilted; the y axis of the texture should match up with the y axis of the object. Some other faces have upside-down letters too).
    - The texture is as continuous as possible on the surface of the model (if two faces are adjacent, the texture continues on the adjacent face where it left off).
    - The model is closed (all faces are completely enclosed by other faces).
    A few notes: this mapping will occur before triangulation. I realize there are ways to do this by hand and it's probably a hard problem to automatically map textures in general, but since these textures are seamless and I just need uniform coverage, it seems like an easier problem. I'm looking for an algorithmic approach to this that I can apply in general, not a tool that does it. What approach would work for this, and is there an existing one? (I assume so.)

    Read the article

  • Oracle University New Courses (Week 42)

    - by rituchhibber
    Last week, Oracle University published the following new courses (or new versions of them):
    Database
    - Oracle Enterprise Manager Cloud Control 12c: Install & Upgrade (Training On Demand)
    - MySQL Performance Tuning (Training On Demand)
    - Oracle Database 11g: New Features for Administrators (Training On Demand - In German)
    - Oracle Database 11g: Professioneller Einstieg in SQL (Training On Demand - In German)
    Fusion Middleware
    - Oracle GoldenGate 11g Management Pack: Overview (1 day - In German)
    - Oracle GoldenGate 11g Fundamentals for Oracle (4 days)
    - Oracle WebCenter Content 11g: Site Studio Essentials (5 days)
    - Oracle WebCenter Portal 11g: Build Portals with Spaces (3 days)
    Business Intelligence
    - Oracle BI 11g R1: Create Analyses and Dashboards (4 days)
    SOA & BPM
    - SOA Adoption and Architecture Fundamentals (3 days)
    eBusiness Suite
    - R12 Oracle Using and Maintaining Approvals Management - Self-Study Course
    - R12 Oracle HRMS Advanced Benefits Fundamentals - Self-Study Course
    WebLogic
    - Oracle WebLogic Server 11g: Monitor and Tune Performance (Training On Demand)
    Financial
    - Oracle Project Financial Planning 11.1.2: Create Projects (3 days)
    Tuxedo
    - Oracle Tuxedo 12c: Application Administration (5 days)
    Java
    - Java SE 7: The Platform Evolves - Self-Study Course
    Primavera
    - Primavera Client/Server Partner Trainer Course - Self-Study Course
    - Primavera Progress Reporter 8.2 - Self-Study Course
    Identity Management
    - Oracle Identity Manager 11g: Essentials (4 days - In German)
    If you would like further details or information about course dates, simply contact your local Oracle University team.

    Read the article

  • Best practices for caching search queries

    - by David Esteves
    I am trying to improve the performance of my ASP.NET Web API by adding a data cache, but I am not sure exactly how to go about it, as it seems to be more complex than most caching scenarios. For example, I have a table of Locations and an API to retrieve locations via search, for an autocomplete: /api/location/Londo, and the query would be something like SELECT * FROM Locations WHERE Name like 'Londo%'. These locations change very infrequently, so I would like to cache them to prevent needless trips to the database and improve the response time. Looking at caching options, I am using the Windows Azure AppFabric system; the problem is that it's just a key/value cache. Since I can only retrieve items based on keys, I couldn't actually use it for this scenario as far as I'm aware. Is what I am trying to do a bad use of a caching system? Should I look into a NoSQL DB which could possibly run as a cache for something like this to improve performance? Should I just cache the entire table/collection under a single key, with a specific data structure that could assist with the searching, and then do the search upon retrieval of the data?

    Read the article

  • Best Persistence choice for J2EE-App with frequently changing Data Model

    - by Ben-G
    Whenever I develop a J2EE application, I at some point decide to switch from my dummy persistence (simply using lists and other data structures) to some sort of database persistence, usually when I hope the data model is more or less complete. From this point on, changes to the data model become exhausting, but unluckily they occur rather often. I've used different object-relational mappers (iBatis, Hibernate) for my projects. They definitely reduce the pain that comes with data model changes, but they still make me adjust code/configuration in 3 or 4 places for every single change. To me, that's cumbersome and error prone. I had a better experience with DB4O, which simply persists Java objects as they are, but I believe its performance does not scale for huge applications. Is there any way to maintain performance while leaving out all the ugly configuration work? I'm seeking a performant framework which really hides persistence from my code. Wishful thinking? Or am I missing out on THE technology? Hope you can help.
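    One way to cut down on the number of places a model change touches, sketched below under assumptions not taken from the question: annotation-based JPA keeps the mapping inside the entity class itself, and Hibernate's hbm2ddl.auto=update setting can regenerate a development schema, so adding a field means editing only one file. The entity name and fields here are made up for illustration, and schema auto-update is a development convenience only, not something to rely on in production.

        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;

        // Hypothetical entity: the mapping lives entirely in this class.
        @Entity
        public class Customer {

            @Id
            @GeneratedValue
            private Long id;

            private String name;

            // Newly added field: with hibernate.hbm2ddl.auto=update in the dev
            // configuration, the column is created on the next startup and no
            // separate mapping file or DDL script needs to change.
            private String email;

            public Long getId() { return id; }
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public String getEmail() { return email; }
            public void setEmail(String email) { this.email = email; }
        }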

    Read the article

  • Upstart: best way for shutdown hook?

    - by Binarus
    Hi, since Ubuntu has relied on Upstart for some time now, I would like to use an Upstart job to gracefully shut down certain applications on system shutdown or reboot. It is essential that the system's shutdown or reboot is stalled until these applications are shut down. The applications will be started manually on occasion, and on system shutdown should automatically be ended by a script (which I already have). Since the applications can't be ended reliably without (nearly all) other services running, ending the applications has to be done before the rest of the shutdown begins. I think I can solve this with an Upstart job which will be triggered on shutdown, but I am unsure which events I should use and in which manner. So far, I have read the following (partly contradictory) statements:
    - There is no general shutdown event in Upstart.
    - Use a stanza like "start on starting shutdown" in the job definition.
    - Use a stanza like "start on runlevel [06S]" in the job definition.
    - Use a stanza like "start on starting runlevel [06S]" in the job definition.
    - Use a stanza like "start on stopping runlevel [!06S]" in the job definition.
    From these recommendations, the following questions arise:
    - Is there or is there not a general shutdown event in Ubuntu's Upstart?
    - What is the recommended way to implement a "shutdown hook"?
    - When are the runlevel [x] events triggered: once the runlevel has been entered, or while it is being entered?
    - Can we use something like "start on starting runlevel [x]" or "start on stopping runlevel [x]"?
    - What would be the best solution for my problem?
    Thank you very much, Binarus
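    One pattern that is often suggested for this kind of hook (not taken from the question; the job name and script path below are illustrative) is a blocking task that starts while the rc job for runlevels 0 and 6 is starting, so the rest of the shutdown waits until the script finishes:

        # /etc/init/stop-my-apps.conf -- illustrative job name and script path
        description "Stop custom applications before the shutdown rc scripts run"

        # Run as a blocking task while the rc job for runlevel 0 or 6 is starting,
        # so the remainder of the shutdown waits for this script to complete.
        start on starting rc RUNLEVEL=[06]
        task

        exec /usr/local/bin/stop-my-apps.sh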

    Read the article

  • Requiring multithreading/concurrency for implementation of scripting language

    - by Ricky Stewart
    Here's the deal: I'm looking at designing my own scripting/interpreted language for fun. I'm only in the planning stages right now; I want to make sure I have a very firm grasp of exactly how I will implement everything before I start coding. What I'm currently struggling with is concurrency. It seems to me that an easy way to avoid the unpredictable performance that comes with garbage collection would be to put the garbage collector in its own thread, and have it run concurrently with the interpreter itself. (To be clear, I don't plan to allow the scripts to be multithreaded themselves; I would simply put a garbage collector to work in a different thread from the interpreter.) This doesn't seem to be a common strategy for many popular scripting languages, probably for portability reasons; I would probably write the interpreter against the UNIX/POSIX threading framework initially and then port it to other platforms (Windows, etc.) if need be. Does anyone have any thoughts on this issue? Would whatever gains I receive by exploiting concurrency be nullified by the portability issues that will inevitably arise? (On that note, am I really correct in my assumption that I would experience great performance gains with a concurrent garbage collector?) Should I move forward with this strategy or step away from it?

    Read the article

  • Attend my Tech Ed 2014 session: Debugging Tips and Tricks

    - by Daniel Moth
    Just a week away, at Tech Ed 2014 NA in Houston Texas, I will be giving a demo presentation that you will not want to miss (assuming you code in Visual Studio). Add it to your calendar now: DEV-B352 Debugging Tips and Tricks in Visual Studio 2013 (link) Monday, May 12 1:15-2:30 PM, Room: General Assembly C As a developer, regardless of your programming language or the platform that you target, you use the debugger on a daily basis. Come to this all-demo session to learn how to make the most of the Visual Studio debugger, and hence be more productive and effective in your everyday development. We tour almost all of the debugger surface and many of its commands, throwing in tips and tricks as we go along, and also calling out what is brand new in the latest version of the debugger in Microsoft Visual Studio 2013. Whatever your experience level, you are guaranteed to leave with new knowledge of debugger features that you will want to use immediately when you are back at your computer!   I am also co-presenting another session later in the week. DEV-B313 Diagnosing Issues in Windows Phone 8.1 XAML Applications Using Visual Studio 2013 (link) Thursday, May 15 10:15-11:30 AM, Room: 340 Come to this demo-driven session to learn how to use the latest diagnostic tools in Visual Studio 2013 to make your Windows Phone 8.1 XAML apps reliable, fast, and efficient. Learn how to make the most of existing capabilities in the debugger as well as new debugging features for diagnosing correctness issues. Also, see the Visual Studio Performance and Diagnostics hub in action with its performance analysis tools for diagnosing CPU usage, memory usage, and energy consumption. The techniques covered in this session apply equally well for Windows Store apps as well as Windows Phone Store apps, so all your device development needs will be covered.   Links to both sessions from my Tech Ed speaker page. See you there! Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Summary of the Solaris 11 webcast's livechat QnA session

    - by Karoly Vegh
    This is a followup post to the previous summary on the "What's new with Solaris 11 since the launch" webcast. That webcast had a chat room running for a live questions-and-answers session. I went through the archive and compiled a list of some of the (IMHO) most relevant and most frequently asked questions that I'd like to share. This is the first part, covering the QnA of Sessions I and II of the webcast; in a follow-up post we can have a look at the rest of the sessions if required - let me know in the comments. Also, should you have questions, as usual, feel free to ask them there, too. ...and here come the answered questions:
    When will Exadata be based on Solaris in place of Oracle Enterprise Linux? Exadata offers both Solaris 11 and Oracle Enterprise Linux. The choice can be made at deployment time based on your OS needs.
    What are all the other benefits and features available in Solaris 11 (cloud OS) compared to cloud-based Red Hat Linux and Windows? We suggest you check out our cloud white paper for a view of this. Also the OTN Solaris 11 page has some good articles. Here are the links: http://www.oracle.com/technetwork/server-storage/solaris11/documentation/o11-106-sol11-cloud-501066.pdf http://www.oracle.com/technetwork/server-storage/solaris11/overview/index.html
    Will 11.1 have a more complete IPS repository for Oracle and FOSS software? Yes, we are adding additional packages to the various package repositories. Since Solaris 11 was launched, both the Oracle Solaris Studio tools as well as Oracle Solaris Cluster have been made available along with numerous new FOSS packages. We will continue to be adding additional Oracle products and open source packages in the future.
    Will Exadata be based on SPARC in place of Intel/AMD x86 in the near future? We can't publicly discuss futures, but we actually have a SPARC version of Exadata today; it's called SuperCluster. This is such a powerful multipurpose system that it actually has multiple personalities built into one system: Exadata, Exalogic, and it can be a general purpose platform if you want.
    Have I understood this right? Livepatching KSplice-style is coming to Solaris 11 too? We're looking at that for certain types of Solaris patches in the future.
    Will there be a security framework like SST/JASS for Solaris 11? We can't talk about future projects on a public forum, but we recognize the need for SST/JASS and want to address this as soon as possible. On the other side, there are a whole bunch of "best practices" that are now embedded into Solaris 11 by default, so out of the box Solaris 11 should already address part of what SST/JASS gave you. (For example, we did a lot of work on improving the auditing performance so that we can now have it turned on by default.)
    On x86, can I install VirtualBox in a Zone and use that to host other OSes? Yes, this was one of the first things we made sure would work when we acquired VirtualBox, back when we were still Sun Microsystems.
    If I have a Solaris 11 Control Domain on a T-series, can I run a Solaris 10 LDom with Solaris 8 branded containers? Yes, you can.
    Is Oracle Solaris free or do we need to purchase it? Solaris is free; the entitlement to run it comes either with a Sun system (new or historical) or, for 3rd party systems, with a support contract. Note that for production use you will be expected to get a support contract. If you don't want to use the Solaris system (Sun or 3rd party) for production use (i.e. development), you can get an OTN license on the Oracle Technology Network website.
    Will encryption and deduplication both work on a share? This should work at the same time.
    What approaches does Solaris use to monitor usage? There are many different tools in Solaris to monitor usage. The main ones are the "stats" (vmstat, mpstat, prstat, ...), the kstat interface, and DTrace (to get details you couldn't see before). And then there are layered tools that can interface with these tools (Ops Center, BMC, CA, Tivoli, ...).
    Apart from little-endian/big-endian differences, how easy is it to port Solaris applications from SPARC to x86 and vice versa? Very easy. Except for certain hardware-specific applications (those that utilize hardware-specific drivers), all of the same Oracle Solaris APIs exist for all architectures.
    Is IPS-based patching aware of the fact that zones can reside on ZFS and move from one physical server to another? IPS is definitely aware of zones and uses ZFS to support boot environments for non-global zones in the same way that's used for the global zone. With respect to moving a zone from one physical server to another, Solaris 11 supports the same zone attach/detach method that was introduced in Solaris 10.
    Is vnic support in LDoms planned? This is currently being investigated for a future LDom release.
    Is it possible with the new patching system to build a system later with the same patch level as a system built a few months earlier? Yes, you can choose/define exactly which version should go to the system and it will always put the same bits in place. The technical answer is that you choose the version of the "entire" package you want on the system and the rest flows from there.
    Are there plans to allow adding/removing zpools to running zones dynamically in future updates? Work in this area is currently under investigation.
    Any plans to release the Solaris 11 source code, i.e. OpenSolaris? We currently can't comment on publicly releasing the source code. If you need/want this access, please let your Oracle account team know.
    What about VirtualBox and Solaris 11 for virtualization? Solaris 11 works great with VirtualBox, as both a client and a host system.
    Will Oracle DB software eventually be supplied as IPS packages? When? We don't have a date yet, but this is actively being worked on.
    What are the new artifacts in Oracle Solaris 11 compared to previous versions? There are quite a few, actually. The best start is to look at our "Evaluate Solaris 11" page, where you can also find a Transition Guide. http://www.oracle.com/technetwork/server-storage/solaris11/overview/evaluate-1530234.html
    So, this seems just like RedHat's YUM environment? IPS offers certain features beyond those in YUM or other packaging systems. For example, IPS works with ZFS and Solaris Boot Environments to provide a safe environment for software lifecycle management, so that changes can be reverted by switching to an older boot environment.
    With Zones on Solaris 11, can I do paravirtualization? The great thing about zones is you don't *need* paravirtualization. You're making the same direct kernel calls that you would outside of a zone. It's an incredibly significant performance win over hypervisor-based virtualization.
    Are zones/containers officially supported to run Oracle Databases? EBIZ? Hi Calvin, the answer is yes; here is the support matrix for DB: http://www.oracle.com/technetwork/database/virtualizationmatrix-172995.html
    I've found some nasty bugs in Solaris 11 (one of them today) that have been fixed in community forks (i.e., Illumos). Will Oracle ever restart collaboration with the community? We continue to work with the community, just not as openly on all projects as we did before (for example, IPS is an open project), and the source of more than half of the Solaris packages is posted on our open source websites. I can't comment on what we will do in the future. And with regards to bugs, please file them through the support organization and we will get them resolved.
    Is zpool vdev removal on the fly now possible? This is actively being investigated, although we don't have a date for when this feature will be available.
    Is pgstat now the official replacement for corestat? It's intended to provide similar functionality.
    Where are the open source websites? For Oracle Solaris, visit http://www.oracle.com/technetwork/opensource/systems-solaris-1562786.html
    As a cloud-scale virtualization, is it going to be easier to move zones between machines, maybe even automatically in case of a hardware failure? Hi Gashaw, we already have customers that have implemented what they refer to as "flying zones" that they can move around very easily. They use Solaris Cluster to do this.
    What about a VMware vMotion-like feature? We have secure live migration with both Logical Domains on SPARC T-series systems, and with Oracle VM on x86 systems.
    When running Solaris 10/11 on an enterprise server with a lot of zones, what are the best-practice commands to show that the system is running fine (has enough hardware resources), for example CPU / memory / I/O / system load, and what are the recommended values? For Solaris 11, look into the new zonestat(1M) command, which provides a great deal of information about zone utilization. In addition, there is new work underway in providing additional observability in areas such as per-zone file system I/O.
    Java optimizations done with Solaris 11? For x86 platforms too? Where can I find more detail about this? There is a lot of work that goes into optimizing Java for Oracle Solaris 10 & 11 on both SPARC and x86. See http://www.oracle.com/technetwork/articles/servers-storage-dev/solarisforjavadevelop-168642.pdf
    What is meant by "ZFS Shadow Migration"? It's a way to migrate data from another file system to ZFS: http://docs.oracle.com/cd/E23824_01/html/E24456/filesystem-3.html
    Is flash archive available with S11? Flash archive is not. There is a procedure for disaster recovery, and we're working on a modern archive-based deployment tool for a future update. The disaster recovery tool is here: http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-091-sol-dis-recovery-489183.html You can also use Distribution Constructor to build common golden images.
    Will Solaris 11 be available on the ODA soon? The idea's under evaluation -- we'll share your interest with the team.
    What steps can be taken to ensure that breaches of security are identified quickly? There are a number of tools, including the "bart" tool and "pkg verify", to ensure that software has not been compromised. Solaris Audit can also be used to detect unauthorized access. You can also use Immutable Zones to protect against compromise. There are a wide variety of security tools, and I've covered only a few.
    What is the relationship between Solaris and Java 7 speed optimizations? There is constant work done between the Oracle Solaris and Java teams on performance optimizations. See http://docs.oracle.com/javase/7/docs/technotes/guides/vm/performance-enhancements-7.html for examples.
    What is the difference in the Solaris 11 installation compared to Solaris 10? Where can I find the document describing basic repository concepts? The best place to start is: http://www.oracle.com/technetwork/server-storage/solaris11/index.html
    Hope you found the post useful. For questions, input, or requests for the second half of the QnA, please use the comment section below. -- charlie
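    As a small illustration of the boot environment safety net mentioned in the IPS answers above (the commands are standard Solaris 11 tools, but the boot environment name is made up), an update can be wrapped in a manually created boot environment and rolled back if needed:

        # Illustrative sketch: keep a fallback boot environment around an update.
        beadm create before-update      # snapshot the current boot environment
        pkg update                      # apply package updates

        # If the updated system misbehaves, boot back into the old environment:
        beadm activate before-update
        reboot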

    Read the article

  • What kinds of demos are good to make for a software engineer job

    - by user23012
    I have created my CV site and sent out my demos for a while now, but most of my demos are either from my course or games related, since my course was a games programming course. I was wondering what kind of demos are good to show off my skills in programming in general. These are what I already have:
    - Pennies: just a simple game, the first coursework I did.
    - Compiler: coursework for a compiler writing module.
    - Pongout: a basic Pong game in 68k using colour detection.
    - Snake: Snake in 68k, same thing as the Pong game.
    - Game Cube Maze: GameCube work.
    - BeatmyBot: basic AI.
    - Basic platformer game: 2D game with different types of collision.
    - Turing Lambda Simulation: my dissertation, a Turing machine simulated in Miranda, with alpha and beta reduction and SKI calculus simulated in the Turing machine.
    What I am asking here is what kind of demos are good to add or have. I have been looking and have hit a tough spot; I can't think of anything to make other than games. So for a general graduate software engineer, what types would be good examples? EDIT: responding to the comments below about languages, my main one would be C++, followed by Java, Erlang and a bit of Haskell.

    Read the article

  • links for 2011-02-14

    - by Bob Rhubart
    Glenn Fawcett: Solaris Eye for the Linux Guy, or how I learned to stop worrying about Linux and Love Solaris (Part 1) Glenn says: "This entry goes out to my Oracle techie friends that have been in the Linux camp for sometime now and are suddenly finding themselves needing to know more about Solaris… hmmmm… I wonder if this has anything to do with Solaris now being an available option with Exadata?"  (tags: linux solaris oracle) Enterprise Software Development with Java: High Performance JPA with GlassFish and Coherence - Part 2 Oracle ACE Director Markus Eisele describes "the steps you have to take to configure a JPA backed Cache with Coherence and how you could use it from within GlassFish as a high performance data store." (tags: oracle otn oracleace java glassfish coherence) TOGAF a Registered Trademark and Surpasses 15k Certifications EA Blogs Mike Walker relays news on the TOGAF standard. (tags: entarch togaf) Weblogic or wait? | Capping IT Off | Capgemini "So when would you move over to the new Oracle Technology?" asks Arjan Kramer. " Well, as always there can be several reasons..." (tags: oracle capgemini weblogic) Random Monday Thoughs (Art of SOA Governance) "Governance is what insurance is to new cars, be it to SOA, IT transformations and software development. Governance is a insurance policy against risk of failure." - Terry Goldman (tags: oracle otn soa soagovernance)

    Read the article

  • format/build raid 5 with one 4k drive, three 512b

    - by skidawgz
    I have 4 WD 1TB drives which I want to set up as a 4x1TB RAID 5. I am not sure what course of action to take next. How do I configure my 4th drive (sde) to align with the rest? Will this affect performance? I received this message (which brings me here to ask these questions): "The device presents a logical sector size that is smaller than the physical sector size. Aligning to a physical sector (or optimal I/O) size boundary is recommended, or performance may be impacted." fdisk -l shows:

    Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
    81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xf324ba09
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1            2048  1953525167   976761560   fd  Linux raid autodetect

    Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
    81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x38bcc1f0
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdc1            2048  1953525167   976761560   fd  Linux raid autodetect

    Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
    81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x570f77e7
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdd1            2048  1953525167   976761560   fd  Linux raid autodetect

    Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0xeb665e7b
       Device Boot      Start         End      Blocks   Id  System
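    One way to address the warning, sketched below (device names match the question, but treat the commands as illustrative, and note they destroy any data on /dev/sde): start the new partition on a 1 MiB boundary, which is a multiple of the 4096-byte physical sector, just as the existing partitions already start at sector 2048, and then build the array:

        # Repartition the 4K-sector drive with its first partition aligned to 1MiB.
        parted -a optimal /dev/sde mklabel msdos
        parted -a optimal /dev/sde mkpart primary 1MiB 100%
        parted /dev/sde set 1 raid on

        # Assemble the four partitions into the RAID 5 array.
        mdadm --create /dev/md0 --level=5 --raid-devices=4 \
              /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1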

    Read the article

  • Oracle and Cavium to work together on Java SE 8 on 64-bit ARMv8

    - by Henrik Stahl
    We have been working for some time on a standard Oracle JDK 8 port for the upcoming 64-bit servers based on the new ARMv8 microarchitecture. At ARM TechCon 2013 in Santa Clara, California, we announced a roadmap with an expected GA in 2015. This project is going very well and is ahead of schedule. We will soon be at the point where we will make binaries available outside of Oracle - first in a managed beta program with select customers/partners, and sometime during the fall of 2014 as a public early access program. Unless something changes, we are looking at an early 2015 GA. We should be able to share a detailed ramp-down and GA plan by JavaOne 2014. One of the things we (obviously) need to produce a high-quality port is hardware for development and QA. We are therefore happy to announce that we will be collaborating with Cavium on this project. Cavium has been a supporter of the Java ecosystem for a long time, and we have numerous joint customers running various Java versions on Cavium MIPS and ARM-based hardware. Cavium has now agreed to provide us with development hardware and engineering resources so that we can certify and optimize the initial Oracle JDK 8 release on Cavium's ThunderX hardware. This is expected to improve the quality and performance of JDK 8 on ARMv8 in general, as well as on Cavium's hardware. For more information, see the Cavium announcement on the ThunderX product family and the Cavium announcement on the Oracle collaboration. As a reminder, we plan to release the Oracle JDK 8 port for 64-bit ARMv8 under the royalty-free (for general purpose servers, etc.) Binary Code License, but we have no current plans to open source it.

    Read the article

  • Skanska Builds Global Workforce Insight with Cloud-Based HCM System

    - by HCM-Oracle
    By David Baum - Originally posted on Profit Peter Bjork grew up building things. He started his work life learning all sorts of trades at his father’s construction company in the northern part of Sweden. So in college, it was natural for him to pursue a bachelor’s degree in construction engineering—but he broke new ground when he added a master’s degree in finance to his curriculum vitae. Written on a traditional résumé, Bjork’s current title (vice president of information systems strategies) doesn’t reveal the diversity of his experience—that he’s adept with hammer and nails as well as rows and columns. But a big part of his current job is to work with his counterparts in human resources (HR) designing, building, and deploying the systems needed to get a complete view of the skills and potential of Skanska’s 22,000-strong white-collar workforce. And Bjork believes that complete view is essential to Skanska’s success. “Our business is really all about people,” says Bjork, who has worked with Skanska for 16 years. “You can have equipment and financial resources, but to truly succeed in a business like ours you need to have the right people in the right places. That’s what this system is helping us accomplish.” In a global HR environment that suffers from a paradox of high unemployment and a scarcity of skilled labor, managers need to have a complete understanding of workforce capabilities to develop management skills, recruit for open positions, ensure that staff is getting the training they need, and reduce attrition. Skanska’s human capital management (HCM) systems, based on Oracle Talent Management Cloud, play a critical role delivering that understanding. “Skanska’s philosophy of having great people, encouraging their development, and giving them the chance to move across business units has nurtured a culture of collaboration, but managing a diverse workforce spread across the globe is a monumental challenge,” says Annika Lindholm, global human resources system owner in the HR department at Skanska’s headquarters just outside of Stockholm, Sweden. “We depend heavily on Oracle’s cloud technology to support our HCM function.” Construction, Workers For Skanska’s more than 60,000 employees and contractors, managing huge construction projects is an everyday job. Beyond erecting signature buildings, management’s goal is to build a corporate culture where valuable talent can be sought out and developed, bringing in the right mix of people to support and grow the business. “Of all the companies in our space, Skanska is probably one of the strongest ones, with a laser focus on people and people development,” notes Tom Crane, chief HR and communications officer for Skanska in the United States. “Our business looks like equipment and material, but all we really have at the end of the day are people and their intellectual capital. Without them, second only to clients, of course, you really can’t achieve great things in the high-profile environment in which we work.” During the 1990s, Skanska entered an expansive growth phase. A string of successful acquisitions paved the way for the company’s transformation into a global enterprise. “Today the company’s focus is on profitable growth,” continues Crane. 
“But you can’t really achieve growth unless you are doing a very good job of developing your people and having the right people in the right places and driving a culture of growth.” In the United States alone, Skanska has more than 8,000 employees in four distinct business units: Skanska USA Building, also known as the Construction Manager, builds everything at ground level and above—hospitals, educational facilities, stadiums, airport terminals, and other massive projects. Skanska USA Civil does everything at ground level and below, such as light rail, water treatment facilities, power plants or power industry facilities, highways, and bridges. Skanska Infrastructure Development develops public-private partnerships—projects in which Skanska adds equity and also arranges for outside financing. Skanska Commercial Development acts like a commercial real estate developer, acquiring land and building offices on spec or build-to-suit for its clients. Skanska's international portfolio includes construction of the new Meadowlands Stadium. Getting the various units to operate collaboratatively helps Skanska deliver high value to clients and shareholders. “When we have this collaboration among units, it allows us to enrich each of the business units and, at the same time, develop our future leaders to be more facile in operating across business units—more accepting of a ‘one Skanska’ approach,” explains Crane. Workforce Worldwide But HR needs processes and tools to support managers who face such business dynamics. Oracle Talent Management Cloud is helping Skanska implement world-class recruiting strategies and generate the insights needed to drive quality hiring practices, internal mobility, and a proactive approach to building talent pipelines. With their new cloud system in place, Skanska HR leaders can manage everything from recruiting, compensation, and goal and performance management to employee learning and talent review—all as part of a single, cohesive software-as-a-service (SaaS) environment. Skanska has successfully implemented two modules from Oracle Talent Management Cloud—the recruiting and performance management modules—and is in the process of implementing the learn module. Internally, they call the systems Skanska Recruit, Skanska Talent, and Skanska Learn. The timing is apropos. With high rates of unemployment in recent years, there have been many job candidates on the market. However, talent scarcity continues to frustrate recruiters. Oracle Taleo Recruiting Cloud Service, one of the applications in the Oracle Talent Management cloud portfolio, enables Skanska managers to create more-intelligent recruiting strategies, pulling high-performer profile statistics to create new candidate profiles and using multitiered screening and assessments to ensure that only the best-suited candidate applications make it to the recruiter’s desk. Tools such as applicant tracking, interview management, and requisition management help recruiters and hiring managers streamline the hiring process. Oracle’s cloud-based software system automates and streamlines many other HR processes for Skanska’s multinational organization and delivers insight into the success of recruiting and talent-management efforts. “The Oracle system is definitely helping us to construct global HR processes,” adds Bjork. “It is really important that we have a business model that is decentralized, so we can effectively serve our local markets, and interact with our global ERP [enterprise resource planning] systems as well. 
We would not be able to do this without a really good, well-integrated HCM system that could support these efforts.” A key piece of this effort is something Skanska has developed internally called the Skanska Leadership Profile. Core competencies, on which all employees are measured, are used in performance reviews to determine weak areas but also to discover talent, such as those who will be promoted or need succession plans. This global profiling system brings consistency to the way HR professionals evaluate and review talent across the company, with a consistent set of ratings and a consistent definition of competencies. All salaried employees in Skanska are tied to a talent management process that gives opportunity for midyear and year-end reviews. Using the performance management module, managers can align individual goals with corporate goals; provide clear visibility into how each employee contributes to the success of the organization; and drive a strategic, end-to-end talent management strategy with a single, integrated system for all talent-related activities. This is critical to a company that is highly focused on ensuring that every employee has a development plan linked to his or her succession potential. “Our approach all along has been to deploy software applications that are seamless to end users,” says Crane. “The beauty of a cloud-based system is that much of the functionality takes place behind the scenes so we can focus on making sure users can access the data when they need it. This model greatly improves their efficiency.” The employee profile not only sets a competency baseline for new employees but is also integrated with Skanska’s other back-office Oracle systems to ensure consistency in the way information is used to support other business functions. “Since we have about a dozen different HR systems that are providing us with information, we built a master database that collects all the information,” explains Lindholm. “That data is sent not only to Oracle Talent Management Cloud, but also to other systems that are dependent on this information.” Collaboration to Scale Skanska is poised to launch a new Oracle module to link employee learning plans to the review process and recruitment assessments. According to Crane, connecting these processes allows Skanska managers to see employees’ progress and produce an updated learning program. For example, as employees take classes, supervisors can consult the Oracle Talent Management Cloud portal to monitor progress and align it to each individual’s training and development plan. “That’s a pretty compelling solution for an organization that wants to manage its talent on a real-time basis and see how the training is working,” Crane says. Rolling out Oracle Talent Management Cloud was a joint effort among HR, IT, and a global group that oversaw the worldwide implementation. Skanska deployed the solution quickly across all markets at once. In the United States, for example, more than 35 offices quickly got up to speed on the new system via webinars for employees and face-to-face training for the HR group. “With any migration, there are moments when you hold your breath, but in this case, we had very few problems getting the system up and running,” says Crane. Lindholm adds, “There has been very little resistance to the system as users recognize its potential. Customizations are easy, and a lasting partnership has developed between Skanska and Oracle when help is needed. 
They listen to us.” Bjork elaborates on the implementation process from an IT perspective. “Deploying a SaaS system removes a lot of the complexity,” he says. “You can downsize the IT part and focus on the business part, which increases the probability of a successful implementation. If you want to scale the system, you make a quick phone call. That’s all it took recently when we added 4,000 users. We didn’t have to think about resizing the servers or hiring more IT people. Oracle does that for us, and they have provided very good support.” As a result, Skanska has been able to implement a single, cost-effective talent management solution across the organization to support its strategy to recruit and develop a world-class staff. Stakeholders are confident that they are providing the most efficient recruitment system possible for competent personnel at all levels within the company—from skilled workers at construction sites to top management at headquarters. And Skanska can retain skilled employees and ensure that they receive the development opportunities they need to grow and advance.

    Read the article

  • How to design highly scalable web services in Java?

    - by Kshitiz Sharma
    I am creating some web services that would have 2,000 concurrent users. The services are offered for free and are hence expected to get a large user base. In the future it may be required to scale up to 50,000 users. There are already a few other questions that address the issue, like "Building highly scalable web services"; however, my requirements differ from that question. For example, my application does not have a user interface, so images, CSS and JavaScript are not an issue, and it is in Java, so suggestions like using HipHop to translate PHP to native code are useless. Hence I decided to ask my question separately. This is my project setup:
    - REST-based web services using Apache CXF
    - Hibernate 3.0 (with relevant optimizations like lazy loading and custom HQL for tuning)
    - Tomcat 6.0
    - MySQL 5.5
    My questions are:
    1. Are there alternatives to MySQL that offer better performance for what I'm trying to do?
    2. What are some general things to abide by in order to scale a Java-based web application? I am thinking of putting my application in two Tomcat instances with httpd redirecting each request to the appropriate Tomcat on the basis of load. Is this the right approach? Separate Tomcat instances can help, but then doesn't the database become the bottleneck, since both instances access the same database?
    3. I am a programmer, not a DB admin; how difficult would it be to cluster a MySQL database (or whatever database is offered as an alternative in question 1)?
    4. How effective are caching solutions like EHCache? (A rough sketch of the idea follows below.)
    5. Any other general best practices?
    Some clarifications: Could you partition the data? Yes, we could, but we're trying to avoid it. We need to run a lot of data mining algorithms, and the design will evolve over time, so we can't be sure where the lines of partition should be.
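    On the EHCache point, here is a minimal sketch of the usual read-through pattern; the cache name, key scheme and class are made up for illustration, and it assumes the Ehcache 2.x API with a "locations" cache defined in ehcache.xml:

        import net.sf.ehcache.Cache;
        import net.sf.ehcache.CacheManager;
        import net.sf.ehcache.Element;

        // Hypothetical read-through wrapper: check the cache before hitting MySQL.
        public class LocationCache {

            private final CacheManager manager = CacheManager.create();
            private final Cache cache = manager.getCache("locations"); // from ehcache.xml

            public Object findByPrefix(String prefix) {
                Element hit = cache.get(prefix);
                if (hit != null) {
                    return hit.getObjectValue();        // served from memory
                }
                Object result = queryDatabase(prefix);  // fall back to the database
                cache.put(new Element(prefix, result)); // TTL/size come from ehcache.xml
                return result;
            }

            private Object queryDatabase(String prefix) {
                // The Hibernate/HQL lookup would go here.
                return null;
            }
        }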

    Read the article

  • Exalytics Increases Customer Revenue, and Saves Time, Risk & Cost

    - by Mike.Hallett(at)Oracle-BI&EPM
    We are getting some great proof-point stories now from our customers who are succeeding with the Exalytics in-memory system for OBI and Essbase. See below for some recent testimony: San Diego Unified School District Harnesses Attendance, Procurement, and Operational Data with Oracle Exalytics, Generating $4.4 Million in Savings: according to an independent assessment by Mainstay Salire, the district is on track to achieve substantial benefits from the Oracle Exalytics solution, including an $8.25 million increase in attendance revenue, $75,000 a year in operational efficiency savings, and $1 million in hardware cost avoidance. NilsonGroup chooses Oracle Exalytics In-Memory Machine as its solution for accessing the critical data that keeps its stores competitive with real-time Mobile BI: it took only “3 days to get up and running” with Exalytics. Video Nykredit, in the Danish financial sector, describes its experience from testing the Exalytics Business Intelligence Machine: “it was up and running within 4 days” with “more intuitive dashboards”, “up to 70x better performance”, and “cheaper maintenance and lower total cost of ownership”. Video Sodexo chose Oracle Exalytics as its business analytics platform, accelerating Essbase performance “more than 8x” for more than 2,000 Excel add-in users and “significantly changing how people in information management now deal with data”. Video Polk, Savvis, Nykredit, and Key Energy describe testing of the Oracle Exalytics In-Memory Machine: to “reach more users than we ever have before”, “to fly through the data without impeding the analytic process”, to “drive our enterprise groups into this tool instead of having departmental solutions”, and for the “advanced visualisation this product enables”. Video

    Read the article

  • Best ways to collect location-based user input

    - by user359650
    I'm working on a website where users will be able to register and provide information about their location. To prevent users from entering incorrect data, we don't want free-text input; instead, users should choose from predefined values as much as possible. We see two ways of providing those values: use an API from an external service provider, or build our own local database. APIs (some resources: https://developers.facebook.com/docs/reference/ads-api/get-autocomplete-data/ and http://developer.yahoo.com/geo/geoplanet/ ). Pros: accuracy and completeness of data; no maintenance related to data updates, as this is taken care of by the API provider; easier/faster to get started (no need to create a local database, just implement the API). Cons: degraded performance when the external API has availability issues; outages due to changes in the external API (until your code is updated to reflect those changes); lock-in with the external provider. Local database (some resources: http://developer.yahoo.com/geo/geoplanet/data/ , http://www.maxmind.com/app/geolitecity , and http://download.geonames.org/export/dump/ ). Pros: no external dependency, so improved stability and performance. Cons: more work to get started (you need to create the database and the code to interact with it); risk of inaccurate/incomplete data, either initially or over time; more maintenance work to keep the database up to date. Assuming the depth of information requested from users is as follows: country (interested in the value; also used to narrow down the list of regions), region (state in the US, county in the UK...; not interested in the value itself, only used to narrow down the list of cities), city (interested in the value, which can be used to work out the related region should we need regional statistics), and address (interested in the value, although optional). Which option (API or local database) would you choose? What tips would you give for the implementation? What other resources can you share?
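
    For the local-database option, the country → region → city narrowing maps naturally onto a simple lookup table. The following Java/JDBC sketch is hedged: the geo_city table and its columns are hypothetical placeholders for whatever schema you import from GeoNames or GeoLite, and the connection URL is an example only.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.util.ArrayList;
        import java.util.List;

        // Sketch of narrowing cities by country and region against a local table.
        // Table/column names (geo_city, country_code, region_code, name) are hypothetical.
        public class CityLookup {

            private final Connection conn;

            public CityLookup(Connection conn) {
                this.conn = conn;
            }

            // Returns city names for a given country and region, e.g. to fill a dropdown.
            public List<String> citiesFor(String countryCode, String regionCode) throws Exception {
                String sql = "SELECT name FROM geo_city "
                           + "WHERE country_code = ? AND region_code = ? ORDER BY name";
                List<String> cities = new ArrayList<String>();
                PreparedStatement ps = conn.prepareStatement(sql);
                try {
                    ps.setString(1, countryCode);
                    ps.setString(2, regionCode);
                    ResultSet rs = ps.executeQuery();
                    while (rs.next()) {
                        cities.add(rs.getString("name"));
                    }
                } finally {
                    ps.close();
                }
                return cities;
            }

            public static void main(String[] args) throws Exception {
                // Example connection; adjust the URL and credentials for your environment.
                Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost/geo", "user", "password");
                System.out.println(new CityLookup(conn).citiesFor("US", "CA"));
            }
        }

    A common compromise is to keep country/region/city in a local table like this and fall back to an external API only for optional address-level validation.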

    Read the article

  • Oracle Named a Leader by Forrester in Enterprise Business Intelligence Platforms

    - by Paulo Folgado
    According to an October 2010 report from independent analyst firm Forrester Research, Inc., Oracle is a leader in enterprise business intelligence (BI) platforms. Forrester Research defines BI as a set of methodologies, processes, architectures, and technologies that transform raw data into meaningful and useful information, which can then be used to enable more effective strategic, tactical, and operational insights and decision-making. Written by Forrester vice president and principal analyst Boris Evelson, The Forrester Wave: Enterprise Business Intelligence Platforms, Q4 2010 states that "Oracle has built new metadata-level [Oracle Business Intelligence Enterprise Edition 11g] integration with Oracle Fusion Middleware and Oracle Fusion Applications and continues to differentiate with its versatile ROLAP engine." The report goes on: "And in addition to closing some gaps it had in 10.x versions such as lack of RIA functionality, [the Oracle Business Intelligence Enterprise Edition 11g] actually leapfrogs the competition with the Common Enterprise Information Model (CEIM)--including the ability to define actions and execute processes right from BI metadata across BI and ERP applications." "We're pleased that the Forrester Wave recognizes Oracle Business Intelligence as a leading enterprise BI platform," said Paul Rodwick, vice president of product management, Oracle Business Intelligence.

    Key Innovations in Oracle Business Intelligence 11g

    Released in August 2010, Oracle Business Intelligence 11g represents the industry's most complete, integrated, and scalable suite of BI products. Encompassing thousands of new features and enhancements, the latest release offers three key areas of innovation. * A unified environment: the industry's first unified environment for accessing and analyzing data across relational, OLAP, and XML data sources. * Enhanced usability: a new, integrated scorecard application, plus innovations in reporting, visualization, search, and collaboration. * Enhanced performance, scalability, and security: deeper integration with Oracle Enterprise Manager 11g and other components of Oracle Fusion Middleware provides lower management costs and increased performance, scalability, and security. Read the entire Forrester Wave Report.

    Read the article

  • Best way to make a shutdown hook?

    - by Binarus
    Since Ubuntu has relied on Upstart for some time now, I would like to use an Upstart job to gracefully shut down certain applications on system shutdown or reboot. It is essential that the shutdown or reboot is stalled until these applications have been shut down. The applications are started manually on occasion, and on system shutdown they should be ended automatically by a script (which I already have). Since the applications can't be ended reliably unless (nearly all) other services are still running, ending them has to happen before the rest of the shutdown begins. I think I can solve this with an Upstart job that is triggered on shutdown, but I am unsure which events to use and how. So far I have read the following (partly contradictory) statements: there is no general shutdown event in Upstart; use a stanza like "start on starting shutdown" in the job definition; use "start on runlevel [06S]"; use "start on starting runlevel [06S]"; use "start on stopping runlevel [!06S]". From these recommendations, the following questions arise: Is there or is there not a general shutdown event in Ubuntu's Upstart? What is the recommended way to implement a "shutdown hook"? When are the runlevel [x] events triggered: after the runlevel has been entered, or while it is being entered? Can we use something like "start on starting runlevel [x]" or "start on stopping runlevel [x]"? What would be the best solution for my problem? Thank you very much.
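
    For reference, one pattern that comes up for this is a task job started on the 'starting rc' event for runlevels 0 and 6, so that it blocks the rc job (and with it the rest of the shutdown sequence) until the stop script has finished. The job file below is a hedged sketch only: the job name and script path are hypothetical, and the exact event and blocking behavior should be verified against the Upstart documentation for your Ubuntu release.

        # /etc/init/stop-my-apps.conf  (hypothetical job name and script path)
        description "Stop custom applications before shutdown or reboot proceeds"

        # 'starting rc' fires just before the rc job runs the /etc/rc0.d or rc6.d scripts;
        # matching RUNLEVEL restricts this to halt (0) and reboot (6).
        start on starting rc RUNLEVEL=[06]

        # A task blocks the event that started it until its script exits,
        # which is what stalls the remainder of the shutdown.
        task

        script
            /usr/local/bin/stop-my-apps.sh
        end script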

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-01

    - by Bob Rhubart
    Complexity of Social Computing - Is it a Consideration for EA's? | Pat Shepherd blogs.oracle.com Pat Shepherd asks, "Does Enterprise Architecture need to consider Social Computing in its scope?" Who should own the Enterprise Architecture? | Michael Glas blogs.oracle.com "Instead of looking at just who owns the architecture," suggests Michael Glas, "think about what the person/role/organization should do." The Application Architecture Domain | Michael Glas blogs.oracle.com Michael Glas asks—and answers: "As an Enterprise Architect, what do I need to consider when looking at/defining/designing the Application Architecture Domain?" CAP Twelve Years Later: How the "Rules" Have Changed | Eric Brewer www.infoq.com The CAP theorem asserts that any networked shared-data system can have only two of three desirable properties. However, by explicitly handling partitions, designers can optimize consistency and availability, thereby achieving some trade-off of all three. Oracle DB with OEM in Amazon Cloud | Dr. Frank Munz www.munzandmore.com Dr. Frank Munz shares a screencast that explains "how to create an Oracle DB instance in AWS, how to enable OEM...and how to connect to your cloud instance with a local installation of NetBeans." Sample External Login.jsp page for Oracle Access Manager 11g | Brian Eidelman fusionsecurity.blogspot.com A-Team blogger Brian Eidelman expands on a previous post dealing with configuring OAM 11g to use an externally hosted custom login page. Bay Area Coherence Special Interest Group (BACSIG) Meeting June 7 coherence.oracle.com Date: Thursday, June 7, 2012 Time: 5:30pm – 9:00pm PT Where: Oracle Conference Center, Room 103, 350 Oracle Parkway, Redwood Shores, CA Presentations: 6:00 p.m. - Coherence 101, The Evolution of Distributed Caching - Noah Arliss (Oracle) 7:00 p.m. - Optimizing Performance for Oracle Coherence and TopLink Grid at OOCL - Matt Rosen, Leo Limqueco (OOCL) 8:00 p.m. - Oracle Coherence Message Bus - Extreme Performance on Oracle Exalogic - Ballav Bihani (Oracle) Thought for the Day "I can't be left unsupervised." — Ron Wood (born June 1, 1947) Source: Brainy Quote

    Read the article

  • Webcast - Set Your Sights on Enterprise 2.0 in the Cloud

    - by [email protected]
    To gain a competitive edge in your market, you need your business processes to be more collaborative, agile, and flexible to meet growing business demands. How can you make that happen? One way is to deploy portal, content management, and Enterprise 2.0 capabilities on a cloud infrastructure. According to top industry analysts, Enterprise 2.0 and cloud computing are two of the top three CIO initiatives in 2010. What are some of the advantages associated with deploying your Enterprise 2.0 initiatives in a cloud environment? Learn about the security, performance, and flexibility benefits that are available to you. Watch our complimentary live Webcast, Cloud Computing and Enterprise 2.0: Gain a Competitive Advantage, to get the answers you're looking for. Find out how Oracle pioneered the highly scalable and highly secure solutions that will enable you to: quickly deploy on a cloud computing infrastructure that can scale as projects go viral; accelerate business processes, such as new product introduction, customer service, and new-employee onboarding; and take advantage of best practices in cloud computing and Enterprise 2.0 implementations. Join us for this live Webcast tomorrow as we show you how to achieve a higher level of performance and flexibility with Enterprise 2.0 and cloud computing. Register today for the live Webcast.

    Read the article

  • 2013 U.S. GAAP Financial Reporting Taxonomy Available for Public Review and Comment

    - by Theresa Hickman
    FASB recently released the proposed 2013 U.S. GAAP Reporting Taxonomy. Comments are due October 29, 2012, and the taxonomy is expected to be finalized and published in early 2013. The proposed 2013 U.S. GAAP taxonomy and instructions on how to submit comments are available at the FASB’s XBRL page. In previous blog entries, I talked about how Oracle Hyperion Disclosure Management supports the latest taxonomy, enabling financial managers to easily comply with the latest filing requirements. The taxonomy is a list of computer-readable tags in XBRL that allows companies to annotate the voluminous financial data that is included in typical long-form financial statements and related footnote disclosures. The tags allow computers to automatically search for, assemble, and process data so it can be readily accessed and analyzed by investors, analysts, journalists, and regulators. You do not have to have Oracle Hyperion Financial Management, used for consolidating financial results, to generate XBRL. You just need Oracle Hyperion Disclosure Management to generate XBRL instance documents from financial applications, such as Oracle E-Business Suite, Oracle PeopleSoft, Oracle JD Edwards EnterpriseOne, and Oracle Fusion General Ledger. To generate XBRL tags and complete SEC filings using your existing financial applications with Oracle Hyperion Disclosure Management, here are the steps: (1) Download the XBRL taxonomy from the SEC or XBRL website into Hyperion Disclosure Management to create a company taxonomy. (2) Publish financial statements from the general ledger to Microsoft Excel or Microsoft Word. (3) Create the SEC filing in the Microsoft programs and perform the XBRL tag mapping in Oracle Hyperion Disclosure Management. (4) Ensure that the SEC filing meets XBRL and SEC EDGAR Filer Manual validation requirements. (5) Validate and submit the company taxonomy and XBRL instance document to the SEC. Get more details about Oracle Hyperion Disclosure Management.

    Read the article
