Search Results

Search found 17227 results on 690 pages for 'oracle hcm cloud'.

Page 535 of 690

  • Using Solaris pkg to list all setuid or setgid programs

    - by darrenm
    $ pkg contents -a mode=4??? -a mode=2??? -t file -o pkg.name,path,mode
    We can also add a package name on the end to restrict it to just that single package, eg:
    $ pkg contents -a mode=4??? -a mode=2??? -t file -o pkg.name,path,mode core-os
    PKG.NAME        PATH                       MODE
    system/core-os  usr/bin/amd64/newtask      4555
    system/core-os  usr/bin/amd64/uptime       4555
    system/core-os  usr/bin/at                 4755
    system/core-os  usr/bin/atq                4755
    system/core-os  usr/bin/atrm               4755
    system/core-os  usr/bin/crontab            4555
    system/core-os  usr/bin/mail               2511
    system/core-os  usr/bin/mailx              2511
    system/core-os  usr/bin/newgrp             4755
    system/core-os  usr/bin/pfedit             4755
    system/core-os  usr/bin/su                 4555
    system/core-os  usr/bin/tip                4511
    system/core-os  usr/bin/write              2555
    system/core-os  usr/lib/utmp_update        4555
    system/core-os  usr/sbin/amd64/prtconf     2555
    system/core-os  usr/sbin/amd64/swap        2555
    system/core-os  usr/sbin/amd64/sysdef      2555
    system/core-os  usr/sbin/amd64/whodo       4555
    system/core-os  usr/sbin/prtdiag           2755
    system/core-os  usr/sbin/quota             4555
    system/core-os  usr/sbin/wall              2555
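    If you would rather consume that listing programmatically than eyeball it, a thin wrapper over the same command is enough. The sketch below is my own illustration, not part of darrenm's post: it runs the same pkg query via ProcessBuilder and labels each row as setuid (mode 4xxx) or setgid (mode 2xxx). It assumes the three-column PKG.NAME/PATH/MODE output shown above.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        // Hypothetical helper: runs the pkg query from the post and labels each row.
        public class SetuidAudit {
            public static void main(String[] args) throws Exception {
                ProcessBuilder pb = new ProcessBuilder(
                        "pkg", "contents", "-a", "mode=4???", "-a", "mode=2???",
                        "-t", "file", "-o", "pkg.name,path,mode");
                pb.redirectErrorStream(true);
                Process p = pb.start();
                try (BufferedReader r = new BufferedReader(
                        new InputStreamReader(p.getInputStream()))) {
                    String line;
                    while ((line = r.readLine()) != null) {
                        String[] cols = line.trim().split("\\s+");
                        if (cols.length != 3 || cols[2].equals("MODE")) {
                            continue; // skip the header and anything unexpected
                        }
                        String kind = cols[2].startsWith("4") ? "setuid" : "setgid";
                        System.out.printf("%-7s %-20s %s%n", kind, cols[0], cols[1]);
                    }
                }
                p.waitFor();
            }
        }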

    Read the article

  • MDM 2010 Summit in San Francisco

    - by Tony Ouk
    Since 2006, the MDM Global Summit Series has brought master data expertise to more than 5,000 delegates worldwide. The Series is designed to reinforce the importance of data governance as a key factor to your MDM program's success while providing real-world experience and all-in-one access to solutions providers. Come join us June 2-3, 2010 at the Hyatt Regency in San Francisco.  For more information including registration details, visit the MDM Global Summit Series website.

    Read the article

  • Article about Sun ZFS Storage Appliances

    - by Owen Allen
    Sun ZFS Storage Appliances are versatile storage systems. Discovering and managing them in Ops Center, though, makes them even more versatile. If you discover a Sun ZFS Storage Appliance in Ops Center 12c, you can create iSCSI and Fibre Channel LUNs and make the LUNs available to server pools and virtualization hosts as a storage library. Barbara Higgins has written an excellent article that walks you through the process of setting up a Sun ZFS Storage Appliance and discovering and managing it in Ops Center. If you're looking into ways to make a Sun ZFS Storage Appliance work for you, it's worth a look.

    Read the article

  • How To: Use Monitoring Rules and Policies

    - by Owen Allen
    One of Ops Center's most useful features is its asset monitoring capability. When you discover an asset - an operating system, say, or a server - a default monitoring policy is applied to it, based on the asset type. This policy contains rules that specify what properties are monitored and what thresholds are considered significant. Ops Center will send a notification if a monitored asset passes one of the specified thresholds. But sometimes you want different assets to be monitored in different ways. For example, you might have a group of mission-critical systems, for which you want to be notified immediately if their file system usage rises above a specific threshold. You can do so by creating a new monitoring policy and applying it to the group. You can also apply monitoring policies to individual assets, and edit them to meet the requirements of your environment. The Tuning Monitoring Rules and Policies How-To walks you through all of these procedures.
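    The policies themselves are configured in the Ops Center UI, but the underlying idea, comparing a monitored property against a per-policy threshold and raising a notification when it is crossed, is easy to picture in code. The sketch below is purely conceptual and uses no Ops Center API; the class, asset names, and threshold values are invented for the example.

        // Purely conceptual: not the Ops Center API. Names and thresholds are invented.
        public class ThresholdRule {
            private final String property;   // e.g. "file system usage (%)"
            private final double threshold;  // value considered significant for this policy

            public ThresholdRule(String property, double threshold) {
                this.property = property;
                this.threshold = threshold;
            }

            // Returns an alert message when the monitored value crosses the threshold, else null.
            public String evaluate(String asset, double observedValue) {
                if (observedValue > threshold) {
                    return "ALERT: " + asset + " " + property + " = " + observedValue
                            + " exceeds policy threshold of " + threshold;
                }
                return null;
            }

            public static void main(String[] args) {
                // A stricter policy for a hypothetical mission-critical group.
                ThresholdRule missionCritical = new ThresholdRule("file system usage (%)", 90.0);
                String alert = missionCritical.evaluate("db-prod-01", 92.5);
                if (alert != null) {
                    System.out.println(alert); // in Ops Center this would be a notification
                }
            }
        }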

    Read the article

  • Deloitte IFRS Seminar for Oil and Gas Industries

    - by Theresa Hickman
    What: Deloitte will be giving an educational program that explores IFRS in the Oil & Gas industry. This two-day event will be more of a technical training on how to implement IFRS from an accounting perspective, where participants will work through journal entries. This training will provide CPE credits and include breakout sessions. They will cover the following IFRS topics: Derivatives & Financial Instruments; Income Taxes; Regulatory Update; State of the Industry; Asset Retirement Obligations; Joint Ventures; and Revenue Recognition. When: June 16 and 17, 2010. Where: Omni Houston Hotel (Houston, TX). To learn more and register for this exciting event, visit this webpage.

    Read the article

  • Create Your CRM Style

    - by Ruth
    Company branding can create a sense of spirit, belonging, familiarity, and fun. CRM On Demand has long offered company branding options, but now, with Release 17, those options have become quicker, easier, and more flexible. Themes (also known as Skins) allow you to customize the appearance of the CRM On Demand application for your entire company, or for individual roles. Users may also select the theme that works best for them. You can create a new theme in 5 minutes or less, but if you're anything like me, you may enjoy tinkering with it for a while longer. Before you begin tinkering, I recommend spending a few moments coming up with a design plan. If you have specific colors or logos you want for your theme, gather those first...that will move the process along much faster. If you want to match the color of an existing Web site or application, you can use tools, like Pixie, to match the HEX/HTML color values. Logos must be in a JPEG, JPG, PNG, or GIF file format. Header logos must be approximately 70 pixels high by 1680 pixels wide. Footer logos must be no more than 200 pixels wide. And, of course, you must have permission to use the images that you upload for your theme. Creating the theme itself is the simple part. Here are a few simple steps (note: you must have the Manage Themes privilege to create custom themes): 1. Click the Admin global link. 2. Navigate to Application Customization > Themes. 3. Click New (you may also choose to copy and edit an existing theme). 4. Enter information for the following fields: Theme Name - enter a name for your new theme; Show Default Help Link - online help holds valuable information for all users, so I recommend selecting this check box; Show Default Training and Support Link - the Training and Support Center holds valuable information for all users, so I recommend selecting this check box; Description - enter a description for your new theme. 5. Click Save. Once you click Save, the Theme Detail page opens. From there, you can design your theme. The preview shows the Home, Detail, and List pages, with the new theme applied. For more detailed information about themes, click the Help link from any page in CRM On Demand Release 17, then search or browse to find the Creating New Themes page (Administering CRM On Demand > Application Customization > Creating New Themes). Click the Show Me link on that Help page to access the Creating Custom Themes quick guide. This quick guide shows how each of the page elements is defined.
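    Because the logo rules above are easy to get wrong, a small, unofficial pre-flight check can save an upload round-trip. The sketch below is my own; it reads the images with javax.imageio and compares them against the dimensions quoted above (header roughly 70 pixels high by 1680 wide, footer no more than 200 pixels wide). The file names and the tolerance around "approximately" are assumptions, and CRM On Demand itself remains the final arbiter.

        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;

        public class ThemeLogoCheck {
            public static void main(String[] args) throws Exception {
                BufferedImage header = ImageIO.read(new File("header-logo.png")); // hypothetical file
                BufferedImage footer = ImageIO.read(new File("footer-logo.png")); // hypothetical file

                // Header logos should be approximately 70 px high by 1680 px wide.
                int tolerance = 5; // assumed wiggle room around "approximately"
                boolean headerOk = Math.abs(header.getHeight() - 70) <= tolerance
                        && Math.abs(header.getWidth() - 1680) <= tolerance;

                // Footer logos must be no more than 200 px wide.
                boolean footerOk = footer.getWidth() <= 200;

                System.out.println("Header logo " + header.getWidth() + "x" + header.getHeight()
                        + (headerOk ? " looks fine" : " is outside the recommended size"));
                System.out.println("Footer logo " + footer.getWidth() + "x" + footer.getHeight()
                        + (footerOk ? " looks fine" : " is wider than 200 px"));
            }
        }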

    Read the article

  • Protect Data and Save Money? Learn How Best-in-Class Organizations do Both

    - by roxana.bradescu
    Databases contain nearly two-thirds of the sensitive information that must be protected as part of any organization's overall approach to security, risk management, and compliance. Solutions for protecting data housed in databases vary from encrypting data at the application level to defense-in-depth protection of the database itself. So is there a difference? Absolutely! According to new research from the Aberdeen Group, Best-in-Class organizations experience fewer data breaches and audit deficiencies, at lower cost, by deploying database security solutions. And the results are dramatic: Aberdeen found that organizations encrypting data within their databases achieved 30% fewer data breaches and 15% greater audit efficiency with 34% less total cost when compared to organizations encrypting data within applications. Join us for a live webcast with Derek Brink, Vice President and Research Fellow at the Aberdeen Group, next week to learn how your organization can become Best-in-Class.

    Read the article

  • Don't Miss What Procurement Experts Are Talking About. Join the Webcasts starting next week!

    - by LuciaC
    The Procurement team has three Advisor Webcasts scheduled in December, with information about new features, tips and tricks, and troubleshooting guidance.
    New Features and Enhancements Incorporated in the Procurement Rollup Patch 14254641:R12.PRC_PF.B - December 4, 2012 at 14:00 London / 16:00 Egypt / 06:00 am Pacific / 7:00 am Mountain / 9:00 am Eastern. This session is recommended for technical and functional users who need to know about the new features and enhancements incorporated in the Procurement Rollup Patch. Topics will include: GCPA Enable All Sites, E-Mail PO - .LANGUAGE, Read Only BWC, Validate Document GBPA, OSP Items, GL Date Defaulting, Cancel Refactoring, and Action History Cleanup. Click here to register for this event.
    Approval Management Engine (AME) New Features, Setup and Use for Purchase Orders - December 6, 2012 at 14:00 London / 16:00 Egypt / 06:00 am Pacific / 7:00 am Mountain / 9:00 am Eastern. This session is recommended for functional users and application technical users who work in the Procurement module, including Purchasing and iProcurement, and would like to know more about how to set up and use the Approval Management Engine (AME) for purchase orders. Topics will include: scope and limitations of AME functionality for purchase orders; setup and use of AME for purchase orders; PO Review and PO E-Sign new features; and a demonstration of scenarios using the new features. Click here to register for this event.
    How to Solve Approval Errors in Procurement - December 18, 2012 at 4:00 pm Egypt / 2:00 pm London / 6:00 am Pacific / 7:00 am Mountain / 9:00 am Eastern. This session is recommended for technical and functional users who need to know how to diagnose and troubleshoot common approval errors. Topics will include: basic mandatory setups for approvals of PO documents; differences between the Purchase Order Approval and Requisition Approval processes; troubleshooting of approval errors; and basic setup of AME for use in the Requisition Approval process. Click here to register for this event.
    You can see a listing of all scheduled and archived webcasts from Doc ID 740966.1. Select the product you are interested in (such as E-Business Suite Procurement) and this will take you to the webcast listing for the product.

    Read the article

  • GlassFish Clustering with DCOM on Windows

    - by ByronNevins
    DCOM - Distributed COM, a Microsoft protocol for communicating with Windows machines. Why use DCOM? In GlassFish 3.1, SSH is used as the standard way to run commands on remote nodes for clustering. It is very difficult for users to get SSH configured properly on Windows. SSH does not come with Windows, so we have to depend on third-party tools, and then the user is forced to install and configure these tools -- which can be tricky. DCOM is available on all supported platforms. It is built in to Windows. The idea is to use DCOM to communicate with remote Windows nodes. This has the huge advantage that the user has to do minimal, if any, configuration on the Windows nodes.
    Implementation highlights: two open source libraries have been added to GlassFish: Jcifs, a SAMBA implementation in Java, and J-interop, a Java implementation for making DCOM calls to remote Windows computers. Note that any supported platform can use DCOM to work with Windows nodes -- not just Windows. E.g. you can have a Linux DAS work with Windows remote instances. All existing SSH commands now have a corresponding DCOM command -- except for setup-ssh, which isn't needed for DCOM. validate-dcom is an all-new command.
    New DCOM commands: create-node-dcom, delete-node-dcom, install-node-dcom, list-nodes-dcom, ping-node-dcom, uninstall-node-dcom, update-node-dcom, validate-dcom, and setup-local-dcom (this last is only available via Update Center for GlassFish 3.1.2). These commands are in place in the trunk (4.0) and in the branch (3.1.2).
    Windows configuration challenges: there are an infinite number of possible configurations of Windows if you look at it as a combination of main release, service pack, special drivers, software, configuration, etc. Later versions of Windows err on the side of tightening security by default. This means that the Windows host may need to have configuration changes made. These configuration changes mostly need to be made by the user. setup-local-dcom will assist you in making required changes to the Windows Registry. See the reference blogs for details.
    The validate-dcom command: validate-dcom is a crucial command. It should be run before any other commands. If it does not run successfully, then there is no point in running other commands. The validate-dcom command must be used from a DAS machine to test a different Windows machine. If validate-dcom runs successfully, you can be confident that all the DCOM commands will work. Conversely, the opposite is also true: if validate-dcom fails, then no DCOM commands will work.
    What validate-dcom does: it verifies that the remote host is not the local machine, resolves the remote host name, checks that the remote DCOM ports are being listened on (135, 139), checks that the remote host's File Sharing is enabled (port 445), copies a file (a script) to the remote host to verify that SAMBA is working and authorization is correct, and runs the script that it copied on-the-fly to the remote host. (A rough port-reachability check along these lines is sketched below.)
    Tips and tricks: the bread-and-butter commands that use DCOM are existing commands like create-instance, start-instance, etc. All of the commands that have dcom in their name are for dealing with the actual nodes. The way the software works is to call asadmin.bat on the remote machine and run a command. This means that you can track these commands easily on the remote machine with the usual tools, e.g. using AS_LOGFILE, looking at log files, etc. It's easy to attach a debugger to the remote asadmin process, "just in time", if necessary.
    How to debug the remote commands: edit the asadmin.bat file that is in the glassfish/bin folder (use glassfish/lib/nadmin.bat in GlassFish 4.0+) and add these options to the java call: -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1234. Now if you run, say, start-instance on DAS, you can attach your debugger, at your leisure, to the remote machine's port 1234. It will be running start-local-instance and patiently waiting for you to attach.
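    As noted above, a rough reachability test of the ports validate-dcom cares about (135 and 139 for DCOM, 445 for File Sharing) can save some head-scratching before you run the real command. This is my own sketch, not GlassFish code, and it checks far less than validate-dcom does; the host name is a placeholder.

        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.net.Socket;

        public class WindowsNodePortCheck {
            public static void main(String[] args) {
                String host = args.length > 0 ? args[0] : "win-node-1"; // hypothetical host name
                int[] ports = {135, 139, 445}; // DCOM endpoint mapper, NetBIOS session, SMB file sharing
                for (int port : ports) {
                    try (Socket s = new Socket()) {
                        s.connect(new InetSocketAddress(host, port), 3000); // 3 second timeout
                        System.out.println(host + ":" + port + " is reachable");
                    } catch (IOException e) {
                        System.out.println(host + ":" + port + " is NOT reachable: " + e.getMessage());
                    }
                }
            }
        }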

    Read the article

  • EPM 11.1.2 - R&A DATABASE CONNECTIONS DISAPPEAR FROM THE "DATABASE CONNECTION MANAGER"

    - by Powder
    When accessing the database connection panel through Reporting and Analysis, all previously entered database connections do not appear. This is due to a bug in the Windows SMB2 protocol; to work around it you have to disable the protocol. On Windows 2008 the protocol is automatically enabled, and this needs to be done on both the servers and the clients. Note that “server” is the server which hosts the RAF repository service and the RM1 folder, and “client” is the server which hosts the replicated Repository service that accesses repository files via the network, i.e. \\<server_host>\RM1. In order to disable SMB 2.0 on the server side, follow these steps: 1. Run "regedit" on the Windows Server 2008 based computer. 2. Expand and locate the subtree HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters. 3. Add a new REG_DWORD value with the name "Smb2" (without quotation marks): Value name: Smb2, Value type: REG_DWORD, 0 = disabled, 1 = enabled. 4. Set the value to 0 to disable SMB 2.0, or set it to 1 to re-enable SMB 2.0. 5. Reboot the server. To disable SMB 2.0 for Windows Vista or Windows Server 2008 systems that are the “client” systems, run the following commands: sc config lanmanworkstation depend= bowser/mrxsmb10/nsi and sc config mrxsmb20 start= disabled. Note there's an extra " " (space) after the "=" sign. To re-enable SMB 2.0 for Windows Vista or Windows Server 2008 systems that are the “client” systems, run the following commands: sc config lanmanworkstation depend= bowser/mrxsmb10/mrxsmb20/nsi and sc config mrxsmb20 start= auto. Again, note there's an extra " " (space) after the "=" sign.
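    If you prefer to script the server-side change rather than click through regedit, the same registry value can be written with the stock reg.exe tool. The sketch below simply shells out to it from Java; it is an illustration of the workaround above, not an Oracle utility. Run it as Administrator on the Windows Server 2008 host and remember the reboot afterwards.

        public class DisableSmb2 {
            public static void main(String[] args) throws Exception {
                // Writes Smb2=0 (disabled) under the LanmanServer parameters, as described above.
                ProcessBuilder pb = new ProcessBuilder(
                        "reg", "add",
                        "HKLM\\System\\CurrentControlSet\\Services\\LanmanServer\\Parameters",
                        "/v", "Smb2", "/t", "REG_DWORD", "/d", "0", "/f");
                pb.inheritIO();
                int exit = pb.start().waitFor();
                System.out.println(exit == 0
                        ? "Smb2 set to 0 - reboot the server for the change to take effect"
                        : "reg.exe returned " + exit + " - run this as Administrator");
            }
        }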

    Read the article

  • Nimbus Tweaking Help Needed

    - by Geertjan
    I was reading this new article on Synthetica and NetBeans RCP this morning, when I remembered this screenshot from Henry Arousell from Sweden: Here, Nimbus is being used heavily, highlighting 6 areas where Henry would really benefit from any help regarding how the foreground properties should be set: (1) the color of the main menu (and its subsequent unfolded menu options); (2) the TopComponent tab colors (as you can see from the screenshot, they've managed to change the foreground colors of the ones in the editor mode by setting the Nimbus property "TextText", but they cannot manipulate TopComponents in other modes); (3) table header foreground colors; (4) the foreground color of any provided composite component, like a JFileChooser or the ICEPdf viewer or any other panel with label components; (5) the progress bar message color; and (6) the status bar message color. A Nimbus expert is needed to help here, though it seems to me that some of the solutions have already been identified, or are similar, in the article pointed out above.
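    For anyone who wants to experiment along the same lines, the snippet below shows the general shape of a Nimbus foreground tweak: install the look and feel, then override UIManager defaults before any components are created. The specific keys and colors are examples only (including "TextText", the property Henry's team reported setting); finding the right key for each of the six areas above is exactly the open question.

        import java.awt.Color;
        import javax.swing.JFrame;
        import javax.swing.JLabel;
        import javax.swing.SwingUtilities;
        import javax.swing.UIManager;

        public class NimbusForegroundDemo {
            public static void main(String[] args) throws Exception {
                // Install Nimbus explicitly.
                for (UIManager.LookAndFeelInfo info : UIManager.getInstalledLookAndFeels()) {
                    if ("Nimbus".equals(info.getName())) {
                        UIManager.setLookAndFeel(info.getClassName());
                        break;
                    }
                }
                // Override defaults before any components are created.
                // "text" is a standard Nimbus color key; "TextText" is the property mentioned
                // above -- whether a given key reaches a given component is the open question.
                Color fg = new Color(0, 51, 102);
                UIManager.put("text", fg);
                UIManager.put("TextText", fg);

                SwingUtilities.invokeLater(() -> {
                    JFrame frame = new JFrame("Nimbus foreground tweak");
                    frame.add(new JLabel("Foreground driven by UIManager defaults"));
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.pack();
                    frame.setVisible(true);
                });
            }
        }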

    Read the article

  • Embedding BIP just got easier!

    - by Tim Dexter
    To make up for yesterday's documentation faux pas, when I referenced Kan's blog as a source for the debug feature over Leslie's fine official documentation that had come out some time before Kan's blog (apologies again, Leslie!), I noticed another new feature that I was unaware of in the New Features guide for 10.1.3.4.1: the ability to remove the BI Publisher header from the user interface, so that you get a clean, header-less page instead of the standard one. Useful? If you wanted to host BIP inside another application such as a portal or custom app, you can make the BIP UI look much more integrated. By default you still get our 'blue' look and feel, but I have documented elsewhere how you can change that. For instance, you can now have BIP hosted inside a BIEE instance and provide access to all the reports a user might wish to run with very little effort, rather than picking and choosing what to bubble up to the dashboard. How do you do it? Get on over to the New Features guide and find out; there are some other goodies there too. Note to self, RTFM!

    Read the article

  • InfiniBand Enabled Diskless PXE Boot

    - by Neeraj Gupta
    If you ever need to bring up a computer with InfiniBand networking capabilities and diagnostic tools, without even going through any installation on its hard disk, then please read on. In this article, I am going to talk about how to boot a computer over the network using PXE and have IPoIB enabled. Of course, the computer must have a compatible InfiniBand Host Channel Adapter (HCA) installed and connected to your IB network already. [ Read More ]

    Read the article

  • The Latest JD Edwards World Technical Enhancements

    Tom Carrell, Principal Product Strategy Manager, and Mike Jepkes, Senior Technical Development Manager for JD Edwards World products, discuss with Cliff how customers can take full advantage of web enablement and service enablement, along with many other new JD Edwards World technical enhancements.

    Read the article

  • Inverted schedctl usage in the JVM

    - by Dave
    The schedctl facility in Solaris allows a thread to request that the kernel defer involuntary preemption for a brief period. The mechanism is strictly advisory - the kernel can opt to ignore the request. Schedctl is typically used to bracket lock critical sections. That, in turn, can avoid convoying -- threads piling up on a critical section behind a preempted lock-holder -- and other lock-related performance pathologies. If you're interested, see the man pages for schedctl_start() and schedctl_stop() and the schedctl.h include file. The implementation is very efficient. schedctl_start(), which asks that preemption be deferred, simply stores into a thread-specific structure -- the schedctl block -- that the kernel maps into user-space. Similarly, schedctl_stop() clears the flag set by schedctl_start() and then checks a "preemption pending" flag in the block. Normally, this will be false, but if set, schedctl_stop() will yield to politely grant the CPU to other threads. Note that you can't abuse this facility for long-term preemption avoidance as the deferral is brief. If your thread exceeds the grace period the kernel will preempt it and transiently degrade its effective scheduling priority. Further reading: US05937187 and various papers by Andy Tucker. We'll now switch topics to the implementation of the "synchronized" locking construct in the HotSpot JVM. If a lock is contended then on multiprocessor systems we'll spin briefly to try to avoid context switching. Context switching is wasted work and inflicts various cache and TLB penalties on the threads involved. If context switching were "free" then we'd never spin to avoid switching, but that's not the case. We use an adaptive spin-then-park strategy. One potentially undesirable outcome is that we can be preempted while spinning. When our spinning thread is finally rescheduled the lock may or may not be available. If not, we'll spin and then potentially park (block) again, thus suffering a 2nd context switch. Recall that the reason we spin is to avoid context switching. To avoid this scenario I've found it useful to enable schedctl to request deferral while spinning. But while spinning I've arranged for the code to periodically check or poll the "preemption pending" flag. If that's found set we simply abandon our spinning attempt and park immediately. This avoids the double context-switch scenario above. One annoyance is that the schedctl blocks for the threads in a given process are tightly packed on special pages mapped from kernel space into user-land. As such, writes to the schedctl blocks can cause false sharing on other adjacent blocks. Hopefully the kernel folks will make changes to avoid this by padding and aligning the blocks to ensure that one cache line underlies at most one schedctl block at any one time.
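    schedctl itself is a native Solaris facility and is not reachable from plain Java, but the spin-then-park shape described above, spin briefly, poll for a reason to give up early, then park, is easy to sketch. The class below is only a conceptual illustration of that pattern, with an ordinary volatile flag standing in for the schedctl "preemption pending" bit; it is not the HotSpot synchronized implementation.

        import java.util.concurrent.ConcurrentLinkedQueue;
        import java.util.concurrent.atomic.AtomicBoolean;
        import java.util.concurrent.locks.LockSupport;

        // Conceptual spin-then-park lock: spin briefly, bail out early if asked to, then park.
        public class SpinThenParkLock {
            private final AtomicBoolean locked = new AtomicBoolean(false);
            private final ConcurrentLinkedQueue<Thread> waiters = new ConcurrentLinkedQueue<>();
            // Stand-in for the schedctl "preemption pending" hint polled while spinning.
            public volatile boolean abandonSpin = false;

            public void lock() {
                int spins = 1000;                        // brief, bounded spin before blocking
                while (!locked.compareAndSet(false, true)) {
                    if (spins-- > 0 && !abandonSpin) {
                        Thread.onSpinWait();             // Java 9+; keep trying while a switch seems avoidable
                        continue;
                    }
                    Thread self = Thread.currentThread();
                    waiters.add(self);
                    // Re-check after enqueueing so an unlock cannot slip past unnoticed.
                    if (locked.compareAndSet(false, true)) {
                        waiters.remove(self);
                        return;
                    }
                    LockSupport.park(this);              // give up the CPU instead of spinning uselessly
                    waiters.remove(self);
                    spins = 1000;                        // woke up: allow another short spin
                }
            }

            public void unlock() {
                locked.set(false);
                Thread next = waiters.peek();
                if (next != null) {
                    LockSupport.unpark(next);            // wake one waiter; it will retry the CAS
                }
            }
        }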

    Read the article

  • Long Running Service Request or COLD CASE?

    - by chris.warticki
    What's going on? Why is it taking so long? Is anyone out there? Resolving Service Requests can seem to take forever. If your Service Request is taking more than a few days, moving into weeks or months, here are a few things to consider. Details here. Comments welcome. -Chris Warticki, twittering @cwarticki. Join one of the Twibes - http://twibes.com/OracleSupport or http://twibes.com/MyOracleSupport

    Read the article

  • How Do You Make Your Animated GIFs?

    - by thatjeffsmith
    I get this question a lot. The question tells me a few things: (1) you LIKE the animated GIFs here on thatjeffsmith – cool, I'll keep doing more; (2) you want to make your own – awesome, I'm helping make the world a more animated place; (3) that's pretty much it, I should have said a couple of things, oh well. I use Camtasia Studio 7 from TechSmith (there's a GIF of me making a GIF in the original post). If you want a more official 'answer' to this question, the cool folks at TechSmith have their own blog post on the subject.

    Read the article

  • The Social Business Thought Leaders - Steve Denning

    - by kellsey.ruppel
    How is the average organization doing? Not very well, according to a number of recent books and reports. A few indicators paint quite a gloomy picture: return on assets and invested capital has dropped to 25% of its 1965 value across the entire US market (see The Shift Index by John Hagel); firms are dying faster and faster, with the average lifespan of companies listed in the S&P 500 index falling from 67 years in the 1920s to 15 years today (see Creative Disruption by Richard Foster); and the employee engagement ratio, a high-level indicator of an organization's health proven to affect performance outcomes, does not exceed 20%-30% on average (see Employee Engagement, Gallup, or The Engagement Gap, Towers Perrin). In one of the most enjoyable keynotes of the Social Business Forum 2012, Steve Denning (author of Radical Management and independent management consultant) explained why this is happening and, especially, what leaders should do to reverse the worrying trends. In this Social Business Thought Leaders series, we asked Steve to condense some key suggestions into a 2-minute video that we strongly recommend. Steve discusses traditional management, that set of principles and practices born in the early 20th century and largely inspired by thinkers such as Frederick Taylor and Henry Ford, as mainly responsible for the declining performance of modern organizations. While so many things have changed in the last 100 or so years, most companies are in fact still primarily focused on maximizing profits and efficiency, cutting costs, and coordinating individuals top-down through command and control. The issue is, in a knowledge-intensive, customer-centred, turbulent market like the one we are experiencing, such concepts are not just alienating employees' passion but also destroying the last source of competitive differentiation left: creativity and innovative potential. According to Steve Denning, in a phase change from the old industrial economy to a creative, collaborative, knowledge economy, the answer is hidden in a whole new business ecosystem that puts the individual (both the employee and the customer) at the center of the organization. He calls this new paradigm Radical Management, and in the video interview he articulates the huge challenges and amazing rewards our enterprises are facing during this inevitable transition.

    Read the article

  • VADs, Get Specialized

    - by swalker
    Effective November 1, 2011, VADs holding a duly executed Value Added Distributor agreement will no longer be required to meet the customer reference requirements set out in the business criteria section in order to earn the Specialized designation. They must, however, continue to meet all the other business and competency criteria laid out in the relevant Knowledge Zone before their specialization can be validated.

    Read the article

  • The Unspoken - The Why of GC Ergonomics

    - by jonthecollector
    Do you use GC ergonomics, -XX:+UseAdaptiveSizePolicy, with the UseParallelGC collector? The gist of GC ergonomics for that collector is that it tries to grow or shrink the heap to meet a specified goal. The goals that you can choose are maximum pause time and/or throughput. Don't get too excited there. I'm speaking about UseParallelGC (the throughput collector), so there are definite limits to what pause goals can be achieved. When you say out loud "I don't care about pause times, give me the best throughput I can get" and then say to yourself "Well, maybe 10 seconds really is too long", then think about a pause time goal. By default there is no pause time goal and the throughput goal is high (98% of the time doing application work and 2% of the time doing GC work). You can get more details on this in my very first blog, GC ergonomics. The UseG1GC collector has its own version of GC ergonomics, but I'll be talking only about the UseParallelGC version. If you use this option and want to know what it (GC ergonomics) is thinking, try -XX:AdaptiveSizePolicyOutputInterval=1. This will print out information every i-th GC (above, i is 1) about what GC ergonomics is trying to do. For example:
    UseAdaptiveSizePolicy actions to meet *** throughput goal ***
    GC overhead (%)
    Young generation: 16.10 (attempted to grow)
    Tenured generation: 4.67 (attempted to grow)
    Tenuring threshold: (attempted to decrease to balance GC costs) = 1
    GC ergonomics tries to meet (in order): the pause time goal, the throughput goal, and minimum footprint. The first line says that it's trying to meet the throughput goal ("UseAdaptiveSizePolicy actions to meet *** throughput goal ***"). This run has the default pause time goal (i.e., no pause time goal) so it is trying to reach a 98% throughput. The "Young generation" and "Tenured generation" lines say that we're currently spending about 16% of the time doing young GC's and about 5% of the time doing full GC's. These percentages are a decaying, weighted average (earlier contributions to the average are given less weight). The source code is available as part of the OpenJDK so you can take a look at it if you want the exact definition. GC ergonomics is trying to increase the throughput by growing the heap (so says the "attempted to grow"). The last line says that the ergonomics is trying to balance the GC times between young GC's and full GC's by decreasing the tenuring threshold. During a young collection the younger objects are copied to the survivor spaces while the older objects are copied to the tenured generation. Younger and older are defined by the tenuring threshold. If the tenuring threshold is 4, an object that has survived fewer than 4 young collections (and has remained in the young generation by being copied to the part of the young generation called a survivor space) is younger and is copied again to a survivor space. If it has survived 4 or more young collections, it is older and gets copied to the tenured generation. A lower tenuring threshold moves objects more eagerly to the tenured generation and, conversely, a higher tenuring threshold keeps copying objects between survivor spaces longer. The tenuring threshold varies dynamically with the UseParallelGC collector. That is different than our other collectors, which have a static tenuring threshold.
    GC ergonomics tries to balance the amount of work done by the young GC's and the full GC's by varying the tenuring threshold. Want more work done in the young GC's? Keep objects longer in the survivor spaces by increasing the tenuring threshold. This is an example of the output when GC ergonomics is trying to achieve a pause time goal:
    UseAdaptiveSizePolicy actions to meet *** pause time goal ***
    GC overhead (%)
    Young generation: 20.74 (no change)
    Tenured generation: 31.70 (attempted to shrink)
    The pause goal was set at 50 millisecs and the last GC was
    0.415: [Full GC (Ergonomics) [PSYoungGen: 2048K->0K(26624K)] [ParOldGen: 26095K->9711K(28992K)] 28143K->9711K(55616K), [Metaspace: 1719K->1719K(2473K/6528K)], 0.0758940 secs] [Times: user=0.28 sys=0.00, real=0.08 secs]
    The full collection took about 76 millisecs, so GC ergonomics wants to shrink the tenured generation to reduce that pause time. The previous young GC was
    0.346: [GC (Allocation Failure) [PSYoungGen: 26624K->2048K(26624K)] 40547K->22223K(56768K), 0.0136501 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    so the pause time there was about 14 millisecs and no changes are needed. If trying to meet a pause time goal, the generations are typically shrunk. With a pause time goal in play, watch the GC overhead numbers and you will usually see the cost of setting a pause time goal (i.e., throughput goes down). If the pause goal is too low, you won't achieve your pause time goal and you will spend all your time doing GC. GC ergonomics is meant to be simple because it is meant to be used by anyone. It was not meant to be mysterious, and so this output was added. If you don't like what GC ergonomics is doing, you can turn it off with -XX:-UseAdaptiveSizePolicy, but be forewarned that you then have to manage the size of the generations explicitly. If UseAdaptiveSizePolicy is turned off, the heap does not grow; the size of the heap (and the generations) at the start of execution is always the size of the heap. I don't like that and tried to fix it once (with some help from an OpenJDK contributor) but it unfortunately never made it out the door. I still have hope, though. Just a side note: with the default throughput goal of 98% the heap often grows to its maximum value and stays there. Definitely reduce the throughput goal if footprint is important. Start with -XX:GCTimeRatio=4 for a more modest throughput goal (20% of the time spent in GC). A higher value means a smaller amount of time in GC (as the throughput goal).
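    If you want to see this output for yourself, any allocation-heavy toy program will do. The class below is my own throwaway driver, not something from the post; run it with the flags discussed above, for example java -XX:+UseParallelGC -XX:+UseAdaptiveSizePolicy -XX:AdaptiveSizePolicyOutputInterval=1 -XX:GCTimeRatio=4 -Xmx256m GcErgoDemo (add -XX:+PrintGCDetails on JDK 8 if you also want GC lines like the ones shown above), and watch the "UseAdaptiveSizePolicy actions" output appear as the generations are resized.

        import java.util.ArrayList;
        import java.util.List;

        // Throwaway allocation driver to make the parallel collector work hard enough
        // that the -XX:AdaptiveSizePolicyOutputInterval=1 output becomes interesting.
        public class GcErgoDemo {
            public static void main(String[] args) {
                List<byte[]> retained = new ArrayList<>();
                for (long i = 0; i < 2_000_000; i++) {
                    byte[] chunk = new byte[64 * 1024];    // short-lived garbage for the young gen
                    if (i % 50 == 0) {
                        retained.add(chunk);               // promote some objects to the tenured gen
                    }
                    if (retained.size() > 2000) {
                        retained.subList(0, 1000).clear(); // let full GCs reclaim old data too
                    }
                }
                System.out.println("done; retained " + retained.size() + " chunks");
            }
        }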

    Read the article
