Search Results

Search found 7954 results on 319 pages for 'behavior driven developme'.

Page 84/319 | < Previous Page | 80 81 82 83 84 85 86 87 88 89 90 91  | Next Page >

  • Oracle VM Moves into Challenger Position in the Latest Gartner Magic Quadrant

    - by Monica Kumar
    Oracle innovations boost Oracle VM into the Challenger position in the Gartner x86 Server Virtualization Infrastructure Magic Quadrant. Oracle VM's placement in the just-published Gartner x86 Server Virtualization Infrastructure Magic Quadrant affirms Oracle's strategy and is supported by strong gains in customer momentum. Optimizations delivered in Oracle VM releases over the last year, along with easy software access and low-cost licensing, have moved Oracle into the Challenger quadrant in a very short time. Oracle continues to focus on delivering strong, integrated virtualization with Oracle VM and the managed stack in the following areas: integrated management of Oracle VM and all layers of the Oracle stack, from hardware to virtualization to cloud; application-driven virtualization with Oracle VM templates for rapid enterprise application deployment; certified Oracle applications on Oracle VM; and a complete stack solution offering more value to customers. Get a copy of the Magic Quadrant for x86 Server Virtualization Infrastructure report to read more about how Oracle VM rapidly moved up into its new position.

    Read the article

  • Android Layout Preview for NetBeans IDE

    - by Geertjan
    More often than not, the reason that Eclipse has more plugins than NetBeans IDE is that Eclipse has far fewer features out of the box. For example, thanks to its out-of-the-box support, NetBeans IDE doesn't need a Maven plugin and it doesn't need a Java EE plugin, which are two of the most popular plugins for Eclipse. However, what would be great for NetBeans IDE to have is support for Android. It has existed for a while, thanks to the community-driven NBAndroid project, but without much of the desired GUI functionality. Today, the project announced a leap forward: early results in providing a layout preview. Looking forward to more GUI functionality for this project!

    Read the article

  • Application layer vs domain layer?

    - by Louis Rhys
    I am reading Domain-Driven Design by Evans and I am at the part discussing the layered architecture. I just realized that the application and domain layers are different and should be separate. In the project I am working on they are kind of blended, and I couldn't tell the difference until I read the book (and I can't say it's very clear to me even now). My question: since both of them concern the logic of the application and are supposed to be clean of technical and presentation aspects, what are the advantages of drawing a boundary between these two?
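
    For instance, a minimal Java sketch of the separation (all names here are illustrative, not taken from the book) might keep the business rule in a domain object and leave only use-case coordination in an application service:

        // Domain layer: the business rule itself, free of persistence, transaction and UI concerns.
        class Account {
            private long balanceInCents;

            Account(long balanceInCents) { this.balanceInCents = balanceInCents; }

            void withdraw(long amountInCents) {
                if (amountInCents > balanceInCents) {
                    throw new IllegalStateException("insufficient funds");
                }
                balanceInCents -= amountInCents;
            }
        }

        // Port to infrastructure; the application layer depends only on this interface.
        interface AccountRepository {
            Account findById(String accountId);
            void save(Account account);
        }

        // Application layer: coordinates the use case (load, delegate, persist) but holds no business rules.
        class WithdrawalService {
            private final AccountRepository accounts;

            WithdrawalService(AccountRepository accounts) { this.accounts = accounts; }

            void withdraw(String accountId, long amountInCents) {
                Account account = accounts.findById(accountId);
                account.withdraw(amountInCents); // the domain object decides what is allowed
                accounts.save(account);
            }
        }

    One advantage of the boundary is visible even in this sketch: the rule in Account can be tested and reused on its own, while the coordination in WithdrawalService can change (transactions, security checks, batching) without touching the rule.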

    Read the article

  • Stopping duplicate H1 and title from dynamic content

    - by codemonkey
    I have a web site with lots of dynamically created (database-driven) pages. These pages are basically used to show uploaded images. The pages look a bit like this: URL: http://www.mywebsite.com/page-id/page-title/ and H1: View from the sea. This is a big issue because I might have 10 other pages with the title 'View from the sea'. I know the simple solution would be to make sure the pages are named differently, but I have lots of users on the web site, so it's not that simple. What do you guys think of putting the page-id with the page-title in the H1 tag? It might then read '437 - View from the sea'. I need to differentiate the H1 titles. I think using the page-id would help, but if anyone has a better solution that would be great! Thanks in advance.

    Read the article

  • How can I get the business analysts more involved in BDD?

    - by Robert S.
    I am a proponent of Behavior Driven Development, mainly with Cucumber and RSpec, and at my current gig (a Microsoft shop) I am introducing SpecFlow as a tool to help with testing. I'd like to get the business analysts on my team involved in writing the features and scenarios, but they are put off by the "technical" aspect of it, meaning creating the files in Visual Studio (or even having Visual Studio on their machines). They want to know if we can put all the scenarios for a feature in Jira. What I'm looking for is suggestions for a workflow that will work well with BA types who are accustomed to project management/work tracking tools like Jira (we also use GreenHopper).

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale.

    However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members).

    The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance.

    As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). The finest granularity of this recovery is a single partition, and the single service thread cannot accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors, but in many cases this may be challenging.

    Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level.

    In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.

    Read the article

  • 2 New Resources Added to IT Strategies from Oracle Library

    - by Bob Rhubart
    IT Strategies from Oracle, an authorized library of guidelines and reference architectures, has just been updated to include two new documents:

    A Pragmatic Approach to Cloud Adoption - For enterprises that seek to transform their own IT capabilities and avoid adverse disruption in the process, a structured and pragmatic approach to Cloud computing is required. This practitioner guide details a framework that can be used within any organization for developing such an approach to Cloud adoption.

    Oracle's Approach to Cloud - Successful adoption of Cloud computing requires the definition of an approach that aligns with business drivers and operational capabilities. This is why Oracle has developed a pragmatic approach, based on experience with numerous companies, to help customers successfully adopt Cloud. This data sheet provides an executive overview of Oracle's proven approach to Cloud.

    These two new resources join a collection of dozens of documents covering Service-Oriented Architecture, Event-Driven Architecture, Business Process Management, and Cloud Computing. Registration is required to access the material, but it's all free.

    Read the article

  • How can we reduce downtime at the end of an iteration?

    - by Anna Lear
    Where I work we practice scrum-driven agile with 3-week iterations. Yes, it'd be nice if the iterations were shorter, but changing that isn't an option at the moment. At the end of the iteration, I usually find that the last day goes very slowly. The actual work has already been completed and accepted. There are a couple meetings (the retrospective and the next iteration planning), but other than that not much is going on. What sort of techniques can we as a team use to maintain momentum through the last day? Should we address defects? Get an early start on the next iteration's work anyway? Something else?

    Read the article

  • Pair programming and unit testing

    - by TheSilverBullet
    My team follows the Scrum development cycle. We have received feedback that our unit testing coverage is not very good. A team member is suggesting the addition of an external testing team to assist the core team, but I feel this will backfire in a bad way. I am thinking of suggesting a pair programming approach. I have a feeling that this should help the code be more "test-worthy", and soon the team can move to test-driven development! What are the potential problems that might arise out of pair programming?

    Read the article

  • Dynamic tests with mstest and T4

    - by Victor Hurdugaci
    If you have used mstest and NUnit, you might be aware that the former doesn't support dynamic, data-driven test cases. For example, the following scenario cannot be achieved with out-of-the-box mstest: given a dataset, create distinct test cases for each entry in it, using a predefined generic test case. The best result that can be achieved using mstest is a single test case that iterates through the dataset. That has one disadvantage: if the test fails for one entry in the dataset, the whole test case fails. So, in order to overcome the previously mentioned limitation, I decided to create a text template that will generate the test cases for me. As an example, I will write some tests for an integer multiplication function that has 2 bugs in it: Read more >> [Cross post from victorhurdugaci.com]

    Read the article

  • MediaWiki: how to make DISPLAYTITLE be used in categories listings

    - by Konstantin Boyandin
    The problem: a MediaWiki-driven site uses subpages to build a page hierarchy. When I add something like Page1/Page2/Subpage, exactly that string appears in listings and looks clumsy. I can't efficiently use a short subpage title (Subpage in this example), since it can appear in different contexts and could confuse users. I can use the DISPLAYTITLE magic word, with proper values of $wgRestrictDisplayTitle and $wgAllowDisplayTitle, to reassign the page title and make it show on the page. However, when I look at a category listing this page, I will still see "Page1/Page2/Subpage" instead of the title assigned. Is there a simple way (through a 'hack' or via a relevant extension) to make the new title appear in every listing as well?

    Read the article

  • Increase productivity, accelerate work-to-cash cycles, and reduce overall firm and client risk with

    Law firms around the world are faced with increasing pressure to do business faster and more efficiently. Learn how firms can automate manual, paper-driven processes, ensure regulatory compliance, integrate systems and offices brought together by mergers and acquisitions, and take on new business quickly and efficiently. Understand how firms can automate manual tasks with Oracle's Whitehill One, get invoices out the door faster with Whitehill Enterprise, and go green with Whitehill Pre-Bill. In this session, you will hear about Oracle's new legal services offerings that accelerate work-to-cash cycles, increase productivity, and reduce overall firm and client risk.

    Read the article

  • Consuming ASMX and WCF Services using jQuery

    - by bipinjoshi
    In the previous part I demonstrated how jQuery animations can add some jazz to your web forms. Now let's look at one of the most important features of jQuery, one that you will probably use in all data-driven websites: accessing server data. In the previous articles you used jQuery methods such as $.get() to make a GET request to the server. A more powerful feature, however, is making AJAX calls to ASP.NET Web Services, Page Methods, and WCF services. The $.ajax() method of jQuery allows you to access these services. In fact, the $.get() method you used earlier internally makes use of $.ajax() but restricts itself to GET requests. The $.ajax() method provides more control over how the services are called. http://www.bipinjoshi.net/articles/479571df-7786-4c50-8db6-a798f195471a.aspx

    Read the article

  • How-do-I Script Sample Videos

    - by Jialiang
    http://blogs.technet.com/b/onescript/archive/2012/10/14/how-do-i-script-sample-videos.aspx All-In-One Script Framework features customer-driven script samples. Each sample demonstrates how to automate one specific IT task that is frequently asked about in TechNet forums, Microsoft support calls, and social media. In order to give readers a better and quicker learning experience, the team has started creating short 5- to 10-minute videos that visually demonstrate some script samples. These videos show you how to accomplish the task by running the script sample and illustrate some key script snippets in the sample project. We sincerely hope that the IT Pro community will love our effort. The first how-do-I video has been published. It demonstrates one of our recently released Windows 8 script samples: Get Network Adapter Properties in Windows 8. The video is embedded in the sample introduction page.

    Read the article

  • Right mix of planning and programing on a new project

    - by WarrenFaith
    I am about to start a new project (a game, but that's unimportant). The basic idea is in my head, but not all the details. I don't want to start programming without planning, but I am seriously fighting my urge to just do it. I want to do some planning up front to avoid having to refactor the whole app just because a new feature I think of later requires it. On the other hand, I don't want to spend multiple months (of spare time) planning before I start, because I fear I would lose my motivation in that time. What I am looking for is a way of combining both without one dominating the other. Should I run the project the Scrum way? Should I create user stories and then implement them? Should I work feature-driven? (I have some experience in Scrum and the classic "specification to code" way.)

    Read the article

  • Programming methodologies at stackoverflow

    - by Prototype Stark
    I am in the middle of starting up a software company where we will use ASP.NET MVC and ASP.NET WebAPI extensively. We will be a group of 4, and no more than 10 will work on any particular project at any point in time (these are ground rules). I would like to know what programming methodologies best suit a small (guerrilla) team. Specifically, I would also like to know which ones are being used at famous ASP.NET MVC shops like Stack Overflow. The ones I know are Scrum and Waterfall (I know it's bad). What's the recommended way of developing for a smaller group of 9-10 people? Also, will Test Driven Development help such a team produce quality software? Are there any other techniques the team will have to know to be good at producing quality software?

    Read the article

  • If you had two projects with the same specification and only one was developed using TDD how could you tell?

    - by Andrew
    I was asked this question in an interview and it has been bugging me ever since. You have two projects, both with the same specification, but only one of these projects was developed using Test Driven Development. You are given the source for both, but with the tests removed from the TDD project. How can you tell which was developed using TDD? All I was able to muster was something about the classes being more 'broken up' into smaller chunks and having more visible APIs, which was not my proudest moment. I would be very interested to hear a good answer to this question.
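
    For what it's worth, one commonly cited tell (illustrative Java below, not from the interview) is how dependencies are wired: code grown test-first tends to accept its collaborators from the outside so they can be substituted, whereas code written without tests tends to reach for them directly.

        import java.time.Clock;
        import java.time.Instant;

        // Harder to test in isolation: the clock is reached for directly and cannot be substituted.
        class ReceiptPrinterHardWired {
            String print(long amountInCents) {
                return "Paid " + amountInCents + " cents at " + Instant.now();
            }
        }

        // The shape test-first code tends to take: the collaborator is injected,
        // so a test can pass Clock.fixed(...) and assert the exact output.
        class ReceiptPrinter {
            private final Clock clock;

            ReceiptPrinter(Clock clock) { this.clock = clock; }

            String print(long amountInCents) {
                return "Paid " + amountInCents + " cents at " + Instant.now(clock);
            }
        }

    It is only a heuristic, of course; dependency injection can be practiced without TDD, and vice versa.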

    Read the article

  • Are your merchandise systems limiting growth? Oracle Retail's Merchandise Operations Management could be the answer

    - by user801960
    In this video, Lara Livgard, Director of Oracle Retail Strategy, introduces Oracle Retail Merchandise Operations Management (MOM), a set of integrated, modular solutions that support buying, pricing, inventory management and inventory valuation across a retailer’s channels, countries, and business models. MOM is the backbone of successful retail operations, providing timely and accurate visibility across the entire enterprise and enabling efficient supply-chain execution driven by plans and forecasts. Its modular architecture facilitates tailored and high-value implementations, giving retailers the information they need in order to offer a quality customer experience through a truly integrated multi-channel approach. Further information is available on the Oracle Retail website regarding Merchandise Operations Management.

    Read the article

  • The Three-Legged Milk Stool - Why Oracle Fusion Incentive Compensation makes the difference!

    - by Richard Lefebvre
    During the London Olympics, we were exposed to dozens of athletes who worked with sports psychologists to maximize their performance. Executives often hire business psychologists to coach their teams to excellence. In the same vein, Fusion Incentive Compensation can be used to get people to change their sales behavior so we can make our numbers. But what about using incentive compensation solutions in a non-sales scenario to drive change? Recently, I was working an opportunity where a company was having a low user adoption rate for Salesforce.com, which was causing problems for them. I suggested they use Fusion Incentive Comp to change the reps' behavior. We tossed around the idea of tracking user adoption by creating a variable bonus for reps based on how well they forecasted revenues in the new system. Another thought was to reward the reps for how often they logged into the system or for the percentage of leads that became opportunities and turned into revenue. A new twist on a great product.

    Fusion CRM's Sweet Spot

    I'm excited about the sales performance management (SPM) tools in Fusion CRM. This trio of Incentive Compensation, Territory Management, and Quota Management sets us apart from the competition because Oracle is the only vendor that provides all three of these capabilities on a single tech stack, in a single application, and with a single look and feel. The niche vendors offer standalone territory or incentive compensation solutions, but then the customer has to custom build the other tools and can end up with a Frankenstein-type environment. On average, companies overpay sales commissions by three to eight percent. You calculate that number for a company the size of Oracle for one quarter and it makes a pretty air-tight financial case for using SPM tools to figure accurate commissions. Plus when sales reps get the right compensation, they can be out selling rather than spending precious time figuring out what they didn't get paid or looking for another job. And one more thing ... Oracle knows incentive comp. We have been a Gartner Market Scope leader in this space for the last five years. Our solution gets high marks because of its scalability and because of its interoperability with other technologies. And now that we're leading with Fusion, our incentive compensation offering includes the innovations that the Fusion team built, plus enhancements from the E-Business Suite Incentive Comp team. It's a case of making a good thing even better. (See product video.)

    The "Wedge" Apps

    In a number of accounts that I'm working on, there is a non-Oracle CRM system of record. That gives me the perfect opportunity to introduce the benefits of our SPM tools and to get the customer using Fusion. Then the door is wide open for the company to uptake more of Fusion CRM, especially since all the integrations they need are out of the box. I really believe that implementing this wedge of SPM tools is the ticket to taking market share away from other vendors. It allows us to insert ourselves in an environment where no other CRM solution in the market has the extending capabilities of Fusion.

    Not Just Your Usual Suspects

    Usually the stakeholders that I talk to for Territory Management are tightly aligned with the sales management team. When I sell the quota planning tool, I'm talking to finance people on the ERP side of the house who are measuring quotas and forecasting revenue. And then Incentive Comp is of most interest to the sales operations people, and generally these people roll up to either HR or the payroll department. I think of our Fusion SPM tools as a three-legged stool straddling an organization's Sales, Finance, and HR departments. So when you're prospecting for opportunities -- yes, people with a CRM perspective will be very interested -- but don't limit yourselves to that constituency. You might find stakeholders in accounting, revenue planning, or HR compensation teams. You just might discover, as I did at United Airlines, that the HR organization is spearheading the CRM project because incentive compensation is what they need ... and they're the ones with the budget.

    Jason Loh
    Global Solutions Manager, Fusion CRM Sales Planning
    Oracle Corporation

    Read the article

  • How to handle loading and keeping many bitmaps in an Android 2D game

    - by Lumis
    In an Android 2D game which is using SurfaceView, where onDraw is driven by a loop from a Thread, I use many bitmap sprites (sprite sheets) and two background-size bitmaps, which are all loaded into memory at the start. It all works fine; however, when the activity is in onPause, or after reloading it a few times, Android shows a tendency to wipe out only the big bitmaps, probably to free memory. Sometimes this happens even in the middle of loading this very activity. In order to counter this, I added a check in the onDraw method that tests whether the big bitmaps are still there and reloads them, before drawing them on the Canvas, if they have been forcefully recycled by Android. This solution may not be the most stable, and since I know that there are much more accomplished Android game programmers here than me, I hope you can reveal some tricks or secrets, or at least provide some good hints, on how to overcome this.
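
    A minimal sketch of the check-and-reload workaround described above might look like this (Java/Android; the class, field and resource names such as R.drawable.background are placeholders, not the poster's actual code):

        import android.content.Context;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;
        import android.graphics.Canvas;
        import android.view.SurfaceView;

        public class GameView extends SurfaceView {
            private Bitmap background;

            public GameView(Context context) {
                super(context);
                background = BitmapFactory.decodeResource(getResources(), R.drawable.background);
            }

            private void ensureBackgroundLoaded() {
                // Reload the large bitmap if it was never loaded or if Android has recycled it.
                if (background == null || background.isRecycled()) {
                    background = BitmapFactory.decodeResource(getResources(), R.drawable.background);
                }
            }

            @Override
            protected void onDraw(Canvas canvas) {
                ensureBackgroundLoaded(); // guard runs on every frame drawn by the game loop
                canvas.drawBitmap(background, 0, 0, null);
            }
        }

    This mirrors the workaround in the question; the cost is that the decode happens on the drawing path whenever Android has reclaimed the bitmap, which is why the poster is asking for a more robust approach.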

    Read the article

  • Process development lifecycle in Oracle BPM 11g

    - by mesriniv
    The Oracle BPM 11g platform provides two modeling tools tailored to different audiences. The BPM Process Composer component is a web-based, role-driven, collaborative platform for discovery, design and documentation of business processes, aimed at a business audience. It empowers the business user to participate in the definition, feedback and design of business processes. The other modeling tool is Oracle BPM Studio, which runs in the JDeveloper IDE. Irrespective of the tool used, the same BPMN and related artifacts are authored; that is, this is not import/export, but multiple tools working with the same assets. In addition to BPMN 2.0, both tools provide editors for process data, organizational roles, human tasks (including assignment and user interface), and business rules. The Oracle BPM design-time repository (Oracle Metadata Services Repository) is the glue that facilitates a shared work environment across multiple BPM Composer and Studio clients. This document explains how to create snapshots and versions of your BPM projects and captures best practices for a shared process development lifecycle. http://java.net/projects/oraclebpmsuite11g/downloads/directory/Samples/bpm-122-processdevelopment-lifecycle

    Read the article

  • migrating product and team from startup race to quality development

    - by thevikas
    This is year 3 and the product is selling well enough. Now we need to enforce good software development practices. The goal is to monitor and reduce incoming bug reports, keep delivering a never-ending stream of features, and get ready for scaling 10x. The phrases "test-driven development" and "continuous integration" are not even understood by the team, because they were all caught up in the first two years' product race. The tech team size is 5. The questions are: how to sell/convince the team and management on TDD/unit testing/coding standards/documentation, in economic terms; how to train the team to do more than just feature coding and to start writing unit tests alongside, which looks like more work and therefore needs more time; and how to plan for creating unit tests for all the backlog of production code.

    Read the article

  • Can it be useful to build an application starting with the GUI?

    - by Grant Palin
    The trend in application design and development seems to be starting with the "guts": the domain, then data access, then infrastructure, etc. The GUI usually seems to come later in the process. I wonder if it could ever be useful to build the GUI first. My rationale is that by building at least a prototype GUI, you gain a better idea of what needs to happen behind the scenes, and so are in a better position to start work on the domain and supporting code. I can see an issue with this practice: if the supporting code is not yet written, there won't be much for the GUI layer to actually do. Perhaps building mock objects or throwaway classes (somewhat like what is done in unit testing) would provide just enough of a foundation to build the GUI on initially. Might this be a feasible idea for a real project? Maybe we could add GDD (GUI Driven Development) to the acronym stable...
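
    As an illustration of that mock-object idea (a hypothetical sketch in Java; none of these names come from the question), the GUI prototype could be written against an interface whose first implementation is a canned stub:

        import java.util.ArrayList;
        import java.util.List;

        // The contract the GUI is coded against from day one.
        interface TaskService {
            List<String> openTasks();
            void complete(String taskName);
        }

        // Throwaway stub: just enough behavior to let the GUI prototype run
        // before the real domain and data-access code exist.
        class FakeTaskService implements TaskService {
            private final List<String> tasks = new ArrayList<>(List.of("Draft report", "Review budget"));

            @Override
            public List<String> openTasks() {
                return List.copyOf(tasks); // read-only snapshot for the GUI to display
            }

            @Override
            public void complete(String taskName) {
                tasks.remove(taskName);
            }
        }

    Once the real domain layer exists, it can replace FakeTaskService behind the same interface, so the GUI that was built first does not have to change.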

    Read the article

  • E-Business Suite : Role of CHUNK_SIZE in Oracle Payroll

    - by Giri Mandalika
    Different batch processes in the Oracle Payroll flow have the ability to spawn multiple child processes (or threads) to complete the work in hand. The number of child processes to fork is controlled by the THREADS parameter in the APPS.PAY_ACTION_PARAMETERS view.

    THREADS parameter

    The default value for the THREADS parameter is 1, which is fine for a single-processor system but not optimal for modern multi-core, multi-processor systems. Setting the THREADS parameter to a value equal to or less than the total number of [virtual] processors available on the system may improve the performance of payroll processing. On the down side, however, since multiple child processes operate against the same set of payroll tables in the HR schema, the database may experience undesired consequences such as buffer busy waits and index contention, which give up some of the gains achieved by using multiple child processes/threads to process the work. A couple of other action parameters, CHUNK_SIZE and CHUNK_SHUFFLE, help alleviate the database contention.

    e.g., set a value for the THREADS parameter as shown below:

        CONNECT APPS/APPS_PASSWORD

        UPDATE PAY_ACTION_PARAMETERS
           SET PARAMETER_VALUE = DESIRED_VALUE
         WHERE PARAMETER_NAME = 'THREADS';

        COMMIT;

    (I am not aware of any maximum value for the THREADS parameter.)

    CHUNK_SIZE parameter

    The size of each commit unit for the batch process is controlled by the CHUNK_SIZE action parameter. In other words, chunking is the act of splitting the assignment actions into commit groups of the desired size, represented by the CHUNK_SIZE parameter. The default value is 20, and each thread processes one chunk at a time -- which means each child process inserts or processes 20 assignment actions at any time. When multiple threads are configured, each thread picks up a chunk to process, completes the assignment actions and then picks up another chunk. This is repeated until all the chunks are exhausted. It is possible to use different chunk sizes in different batch processes. During the initial phase of processing, CHUNK_SIZE number of assignment actions are inserted into the relevant table(s). When multiple child processes are inserting data at the same time into the same set of tables, as explained earlier, the database may experience contention. The default value of 20 is mostly optimal in such a case. Experiment with different values for the initial phase by +/-10 for the CHUNK_SIZE parameter and observe the performance impact. A larger value may make sense during the main processing phase. Again, experimentation is the key to finding the suitable value for your environment. Start with a large value such as 2000 for the chunk size, then increment or decrement the size by 500 at a time until an optimal value is found.

    e.g., set a value for the CHUNK_SIZE parameter as shown below:

        CONNECT APPS/APPS_PASSWORD

        UPDATE PAY_ACTION_PARAMETERS
           SET PARAMETER_VALUE = DESIRED_VALUE
         WHERE PARAMETER_NAME = 'CHUNK_SIZE';

        COMMIT;

    The CHUNK_SIZE action parameter accepts a value that is as low as 1 or as high as 16000.

    CHUNK SHUFFLE parameter

    By default, chunks of assignment actions are processed sequentially by all threads -- which may not be a good thing, especially given that all child processes/threads perform similar actions against the same set of tables at almost the same time. By "not a good thing", I mean that the default behavior leads to contention in the database (in data blocks, for example). It is possible to relieve some of that database contention by randomizing the processing order of chunks of assignment actions. This behavior is controlled by the CHUNK SHUFFLE action parameter. Chunk processing is not randomized unless explicitly configured.

    e.g., set chunk shuffling as shown below:

        CONNECT APPS/APPS_PASSWORD

        UPDATE PAY_ACTION_PARAMETERS
           SET PARAMETER_VALUE = 'Y'
         WHERE PARAMETER_NAME = 'CHUNK SHUFFLE';

        COMMIT;

    Finally, I recommend checking out the following document for additional details and additional pay action tunable parameters that may speed up the processing of Oracle Payroll:

    My Oracle Support Doc ID 226987.1 - Oracle 11i & R12 Human Resources (HRMS) & Benefits (BEN) Tuning & System Health Checks

    Also experiment with different combinations of parameters and values until the right set of action parameters and values is found for your deployment.

    Read the article

  • Java EE@Princeton Java Meetup

    - by reza_rahman
    On November 28th, I spoke at the Princeton Java Meetup Group. It's a well-organized group led by veteran Java champion Yakov Fain, and I have spoken there numerous times. I did my Java EE 6 DDD talk, "Domain Driven Design with Java EE 6" (the same one from Java2Days 2012). The code examples are available here: https://blogs.oracle.com/reza/resource/dddsample.zip. Give me a shout if you would like to get it up and running. The talk went very well -- the official RSVP shows 33 attended. I gave away a few GlassFish T-shirts, laptop stickers and Arun Gupta's Java EE 6 pocket guide. More details on the talk are available here. I most certainly look forward to speaking there again.

    Read the article
