Search Results

Search found 23103 results on 925 pages for 'performance issues and ha'.

  • Are "Compile to JavaScript" Frameworks Hostile to Continuous Integration?

    - by joshin4colours
    Lately we've been looking at ways to improve automated testing and related tooling for our enterprise-level GWT web app. I've realized that in some ways GWT is a bit hostile to automated testing, mainly because of its long compile times from Java to JS. This makes unit testing somewhat challenging, and it also puts up roadblocks for testing in a CI environment. I've also found that some of our build and deployment processes are complicated by the nature of GWT's compile process. Is this a general problem for "compile to JS" web frameworks? I don't have much experience with them, but I can see some potential problems for automated testing, continuous integration, and deployment. Some issues I see:

    - Long build and compile times preventing quick deployments
    - The language the app is developed in != JS, preventing good unit testing
    - Obfuscated JS in the actual app makes it more like an executable than a web app

    Are these issues present in other similar frameworks, or is this more a GWT issue?

    Read the article

  • SQL Saturday #294 - Philadelphia

    SQL Saturday is coming to Philadelphia on June 7, 2014. This event is a free day of training and networking for SQL Server Professionals, organized by the Philadelphia SQL Server User Group. The event also features two paid-for Precons, one presented by Allan Hirt and the other presented jointly by Joseph D'Antoni and Stacia Misner. Register while space is available.

    Read the article

  • Crystal Reports: 3 New Uses For Sub Reports

    I hate sub reports and always consider them the last resort in any reporting solution. The negative effect on performance and maintainability is just not worth the easy ride they give the report writer. Nine times out of ten, reporting requirements can be met using a little forethought and planning (and a solid understanding of formulas). With that said, there are a few novel ways of using sub reports that will not affect performance and actually prove a boon to the developer.

    Read the article

  • Need some help implementing VBO's with Frustum Culling

    - by Isracg
    I'm currently developing my first 3D game for a school project. The game world is completely inspired by Minecraft (a world made entirely of cubes). I'm trying to improve performance by implementing vertex buffer objects, but I'm stuck. I already have frustum culling, drawing only exposed faces, and distance culling implemented, but I have the following doubts:

    - I currently have about 2^24 cubes in my world, divided into 1024 chunks of 16*16*64 cubes, and right now I'm using immediate-mode rendering, which works well with frustum culling. If I implement one VBO per chunk, do I have to update that VBO each time I move the camera (to update the frustum)? Is there a performance hit with this?
    - Can I dynamically change the size of each VBO, or do I have to make each one the biggest possible size (the chunk completely filled with cubes)?
    - Would I have to keep each visited chunk in memory, or could I efficiently delete its VBO and recreate it when needed?
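
    Not from the original thread, but to make the VBO lifecycle concrete: a per-chunk buffer never depends on the camera, so frustum culling only changes which buffers are drawn, not their contents. A minimal sketch in TypeScript/WebGL (the same createBuffer/bufferData calls exist in desktop OpenGL as glGenBuffers/glBufferData); the Chunk shape and the 8-floats-per-vertex format are assumptions for illustration:

        // Sketch: one VBO per chunk, rebuilt only when the chunk's blocks change.
        // Camera movement changes the draw list (via frustum culling), never the buffers.

        interface Chunk {
          vbo: WebGLBuffer | null;
          vertexCount: number;
          dirty: boolean;          // set when a block is added or removed
        }

        function uploadChunk(gl: WebGLRenderingContext, chunk: Chunk, mesh: Float32Array): void {
          if (chunk.vbo === null) {
            chunk.vbo = gl.createBuffer();
          }
          gl.bindBuffer(gl.ARRAY_BUFFER, chunk.vbo);
          // bufferData reallocates the store, so the VBO can shrink or grow to fit
          // exactly the exposed faces -- no need to size it for a completely full chunk.
          gl.bufferData(gl.ARRAY_BUFFER, mesh, gl.STATIC_DRAW);
          chunk.vertexCount = mesh.length / 8; // assuming 8 floats per vertex
          chunk.dirty = false;
        }

        function releaseChunk(gl: WebGLRenderingContext, chunk: Chunk): void {
          // Far-away chunks can drop their GPU memory and be re-uploaded on revisit.
          if (chunk.vbo !== null) {
            gl.deleteBuffer(chunk.vbo);
            chunk.vbo = null;
          }
        }

    Under these assumptions, camera movement costs only the visibility test per chunk; re-uploads happen on block edits, and releasing far chunks trades GPU memory for a rebuild on revisit.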

    Read the article

  • Arçelik A.S. Uses Advanced Analytics to Improve Product Development

    - by Sylvie MacKenzie, PMP
    "Oracle’s Primavera P6 Enterprise Project Portfolio Management’s advanced analytics gives us better insight into the product development process by helping us to identify potential roadblocks.” – Iffet Iyigun Meydanli, Innovation and System Development Manager, R&D Center, Arçelik A.S. Founded in 1955, Arçelik A.S. is now the leading household appliance manufacturer in Turkey, and the third-largest household appliance company in Europe. It operates 14 production facilities in five countries (Turkey, Romania, Russia, China, and South Africa), with international sales and marketing offices in 20 countries. Additionally, the company manages 10 brands (Arçelik, Beko, Grundig, Blomberg, Elektrabregenz, Arctic, Leisure, Flavel, Defy, and Altus). The company has a household presence in more than 100 countries, including China and the United States. Arçelik’s Beko brand is among the top-10 household appliance brands in world, as a market leader for refrigerators, freezers, and washing machines in the United Kingdom. Arçelik implemented Oracle’s Primavera P6 Enterprise Project Portfolio Management for improved management of its design and manufacturing projects. With the solution, Arelik has improved its research and development (R&D) with the ability to evaluate technology risks when planning its projects. Also, it is now more easy to make plans for several locations, monitor all resources, and plan for future projects.  Challenges Improve monitoring of R&D resources?including human resources and critical laboratory equipment?to optimize management of the company’s R&D project portfolio Establish a transparent project platform to enable better product and process planning, gain insight into product performance, and facilitate advanced analytics that support R&D and overall business decisions Identify potential roadblocks for better risk management Solutions Worked with Oracle Partner PRM to implement Oracle’s Primavera P6 Enterprise Project Portfolio Management to manage the entire household-appliance, R&D project portfolio lifecycle, enabling managers and project leaders to better track and monitor resources and deliverables in real time Improved risk analysis and evaluation abilities for R&D projects Supported long-term planning needs Used advanced reporting features to capture data needed for budgeting and other project details, including employee performance evaluations Improved monitoring abilities and insight into the overall performance of products postproduction Enabled flexible, fast, and customized reporting with the P6 dashboard on a centralized platform to meet custom reporting needs for project leaders and support on-time and on-budget deliverables Integrated with other corporate departments, such as accounts payable, to upload project invoice data into the Primavera solution and the company’s e-mail system, so that project leaders will be alerted about milestones and other project related information Partner“Oracle Partner PRM provided us with a quick, reliable, and solution-focused approach to its support,” said Iffet Iyigun Meydanli, innovation and system development manager, R&D Center, Arçelik A.S. “The company’s service covered the entire spectrum of our needs, including implementation, training, configuration, problem solving, and integration.”

    Read the article

  • New Exadata and Exalogic public references

    - by Javier Puerta
    The following are new public references for Exadata and Exalogic:

    - Allegis Accelerates HR Processing for 130,000 Contractors. Oracle customer Allegis describes how Oracle Exadata and Oracle Exalogic helped consolidate and optimize critical processes running in Oracle's PeopleSoft.
    - Hyundai Motor Company Cuts Document Repository Management and Access Times by Approximately 85%, Saves More Than US$1 Million in Yearly Printing and Paper Costs. The company implemented Oracle Exalogic Elastic Cloud, Oracle Exadata Database Machine, Oracle WebLogic, and Oracle WebCenter Content 11g to ensure high performance and stability for its new document-centralization system.
    - University of Minnesota Reduces Data Center Footprint while Enhancing Performance and Manageability with Oracle Exadata Database Machine. The leading research institution consolidated more than 200 databases to approximately 20 while maximizing availability for thousands of users.
    - SThree Prepares to Triple in Size with a Cloud-Based Architecture and a Consolidated, Stable, and Scalable Global Platform. By consolidating 68 databases into a single Oracle Exadata Database Machine, SThree achieved the stability and scalability it needed to support its growth targets. Further enhancements to the organization’s core systems include a planned upgrade for Siebel Contact Center and improved integration with Oracle Fusion Middleware.

    Read the article

  • R12.0 Cash Management Consolidated Patch Collection (CPC) And R12.1 Cash Management Recommended Patch Collection (RPC)

    - by user793553
    If you have Oracle E-Business Suite's Cash Management (CE) application installed, you'll want to be sure to install the latest CPC (Consolidated Patch Collection) if you are using an R12.0 version of the apps, or the latest RPC (Recommended Patch Collection) for the R12.1 version. These collections give you all the fixes currently available for known issues in the specified versions of the application, including all of the latest Root Cause Analysis fixes (RCAs)!

    What is an "RPC" (for R12.1 users)? Since the release of 12.1, a number of recommended patches for Oracle Cash Management have been made available as standalone patches to help address important business process issues. Adoption of these patches was highly recommended at the time, but not always implemented, so to further facilitate adoption, Oracle consolidated them into product-specific Recommended Patch Collections (RPCs). They were created by Oracle Development with the following goals in mind:

    - Stability: Address data integrity issues that have been identified by Oracle Development and Oracle Software Support as having the potential to interfere with the normal completion of important business processes (such as period close).
    - Root cause fixes (RCAs): Make available root cause fixes for known data integrity issues.
    - Compact: Keep the file footprint as small as possible to help facilitate the install process and minimize testing.
    - Granular: Compile the collection of patches based on functional areas, allowing a customer to apply multiple RPCs at once, or in phases (based on individual needs and goals).

    Where to start: ALL R12 Cash Management users (R12.0 and R12.1) should start with the following note on My Oracle Support (MOS): Doc ID 1367845.1, "R12: Cash Management Recommended Patch Collections". It's a great place for important implementation information about both sets of critical patch collections!

    For R12.1.x users: R12.1 users should also take a look at the documents below for even more information about the RPC for the R12.1.x versions of the Cash Management application, and other related available RPCs:

    - Doc ID 1489997.1: Master Troubleshooting Guide for CE: Reconciliation & Clearing [VIDEO]
    - Doc ID 954704.1: EBS: R12.1 Oracle Financials Recommended Patch Collections (RPCs)
    - Doc ID 1316506.1: R12: Oracle CE: Upgrading from R11i to R12.1: Latest Recommended Patches

    Patch Wizard utility: While a patch may contain several hundred files, the impact on your system may actually be minimal. Patches contain hard prerequisites that are intended to make a patch work on a very low code baseline. The Patch Wizard utility will give you a detailed analysis of the patch's impact on your instance BEFORE it's applied, so you'll know exactly what to expect from the application. Please refer to Doc ID 976188.1 for more information on this important utility.

    Read the article

  • Live Webcast: Private Cloud Database Consolidation with Oracle Exadata

    - by kimberly.billings
    Thursday, January 20th, 2011 at 9:00 am PT. In this webcast, you'll learn how Oracle Exadata, Oracle Database 11g, and Oracle Real Application Clusters enable you to consolidate multiple applications on clustered server and storage pools to achieve extreme performance and lower your IT costs. You'll also learn how to maximize the efficiencies of private clouds, including:

    - Multitenancy
    - Rapid provisioning
    - Pay-for-use infrastructure

    Join us for this live webcast and discover how Oracle Exadata delivers key cloud capabilities, providing elastic database services that can be quickly provisioned on demand. Register today! To learn more about how customers are consolidating on private clouds with Exadata, watch this video about how Commonwealth Bank of Australia consolidated multiple database services, including OLTP applications such as PeopleSoft Financials, onto an Exadata platform for improved performance and resilience and faster time-to-market.

    Read the article

  • How to change the IP address on my Ubuntu virtual NIC

    - by DextrousDave
    I have a physical Windows system, on which I run VMware Workstation, and on VMware I run Ubuntu 12.10 with a webserver set up via LAMP. Problem: the local IP address for my virtual NIC (the one Ubuntu is using) is 192.168.159.7, but my home router only issues IP addresses in the 10.0.0.x range. So if I want to port forward to the virtual NIC, which has a 192.168 address, I cannot, and my LAMP webserver cannot be accessed externally, since the router does not know to send the packets on port 80 to the virtual NIC... How do I fix that? The only way I can think of is to assign a 10.0.0.x IP address to the virtual NIC, but how do I do that? I tried to do it on the host Windows machine with 'Get my IP address automatically', but it issues a 192.168 address every time... Thank you

    Read the article

  • When should an API favour optimization over readability and ease-of-use?

    - by jmlane
    I am in the process of designing a small library, where one of my design goals is to use as much of the native domain language as possible in the API. While doing so, I've noticed that there are some cases in the API outline where a more intuitive, readable attribute/method call requires some functionally unnecessary encapsulation. Since the final product will not necessarily require high performance, I am unconcerned about making the decision to favour ease-of-use in my current project over the most efficient implementation of the code in question. I know not to assume readability and ease-of-use are paramount in all expected use-cases, such as when performance is required. I would like to know if there are more general reasons that argue for an API design preferring (marginally) more efficient implementations?

    Read the article

  • When should code favour optimization over readability and ease-of-use?

    - by jmlane
    I am in the process of designing a small library, where one of my design goals is that the API should be as close to the domain language as possible. While working on the design, I've noticed that there are some cases in the code where a more intuitive, readable attribute/method call requires some functionally unnecessary encapsulation. Since the final product will not necessarily require high performance, I am unconcerned about making the decision to favour ease-of-use in my current project over the most efficient implementation of the code in question. I know not to assume readability and ease-of-use are paramount in all expected use-cases, such as when performance is required. I would like to know if there are more general reasons that argue for a design preferring more efficient implementations—even if only marginally so?

    Read the article

  • New Content: Partner News and Workforce Management Special Report

    - by user462779
    Two new bits of content available on Profit Online: Oracle partner Edgewater Ranzal worked with customer High Sierra Energy to integrate Oracle Hyperion Enterprise Performance Management solutions with Oracle E-Business Suite and simplify an increasingly complex financial reporting system. "They needed to eliminate the older processes where 80% of the time was spent on collecting data and only 20% on analyzing the data.” --Bob Sanders, business development manager, Edgewater Ranzal. In a special report about Workforce Management, Profit wraps up a collection of recent content on the subject and looks at Oracle's recent agreement to acquire SelectMinds. “By adding SelectMinds to Oracle’s Talent Management Cloud, Oracle can help customers with a complete talent management solution, enabling streamlined recruiting practices, more quality referrals, faster employee on-boarding, and better performance.” --Thomas Kurian, Executive Vice President, Oracle Development More updates to come as we continue to add content to Profit Online on a regular basis. Thanks for reading!

    Read the article

  • Recent Solaris Studio how-to articles

    - by unixman
    There were a few Oracle Solaris Studio articles published recently, check 'em out!

    - How to Develop Code from a Remote Desktop with Oracle Solaris Studio: This article describes the remote desktop feature of the Oracle Solaris Studio IDE, and how to use it to compile, run, debug, and profile your code running on remote servers.
    - How to Use Remote Development in the IDE: This article describes the modes of remote development available in the Oracle Solaris Studio 12.3 IDE and how to choose the best one for your development environment.
    - Performance Tips for the Oracle Solaris Studio IDE: This article describes some tips and tricks to help you improve the performance of the Oracle Solaris Studio IDE.

    Read the article

  • BI&EPM in Focus Oct 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    Customers

    - Iluka Resources Improves Business Insight into Mining Operations Through Significantly Faster, Customized Analyses
    - Banco do Brasil Monitors Budgets in Real Time, Generates Financial Reports in Minutes Instead of Months
    - General Dynamics Improves Budgeting and Planning and Accelerates Rate Changes by Using Integrated Enterprise Performance Management Suite
    - Facebook achieves worldwide automation of financial close task tracking and management of account reconciliations with Oracle Hyperion Financial Close Management (link)
    - Hess Consolidates Multiple SAP General Ledgers with Oracle Hyperion (link)
    - Navistar Leads with Cutting Edge Hyperion Platform, Including HSF, HPCM (link)

    Enterprise Performance Management

    - Oct 10: Navistar Leverages DRM (Rolta Solutions) (link)
    - Replay: Integrated Business Planning, Featuring Leggett & Platt (link)

    Business Intelligence

    - Report: From Overload to Impact: An Industry Scorecard on Big Data Business Challenges (link | press release)
    - Oct 10: The Top Five Things You Should Know When Migrating from an Old BI Technology to Oracle Business Intelligence Enterprise Edition (Performance Architects) (link)

    Read the article

  • How much configurability to give to users regarding concurrency?

    - by rwong
    This question is a narrowing-down of these related questions:

    - How much effort should we spend to programming for multiple cores?
    - Concurrency: How do you approach the design and debug the implementation?

    Given that each user's computer may have different performance characteristics with respect to calculations, memory, disk I/O bandwidth and network I/O bandwidth, and that it is difficult to implement an automated self-tuning system in your software, how much configurability should we give to end-users so that they can find ways (by trial and error?) to improve our software's efficiency? If we give users the ability to change these settings, how do we give visual feedback to users so they can measure the performance changes?
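
    Not part of the original question, but one concrete shape the answer often takes: a single bounded setting with an auto-detected default, plus a cheap timing readout for feedback. A sketch in TypeScript; all names and bounds below are illustrative assumptions:

        // Sketch: a user-tunable worker count with auto-detection and clamping.
        // navigator.hardwareConcurrency gives the browser's logical core count;
        // everything else here (names, bounds) is an illustrative assumption.

        interface TuningSettings {
          workerCount?: number; // user override; leave undefined for auto-detect
        }

        function resolveWorkerCount(settings: TuningSettings): number {
          const detected =
            typeof navigator !== "undefined" && navigator.hardwareConcurrency
              ? navigator.hardwareConcurrency
              : 4; // conservative fallback when detection is unavailable
          const requested = settings.workerCount ?? detected;
          // Clamp so a bad user setting degrades gracefully instead of thrashing.
          return Math.max(1, Math.min(requested, detected * 2));
        }

        // Visual feedback: time the workload under the current setting so users
        // can see the effect of each change (the question's last point).
        function timeRun(label: string, work: () => void): number {
          const start = performance.now();
          work();
          const elapsed = performance.now() - start;
          console.log(`${label}: ${elapsed.toFixed(1)} ms`);
          return elapsed;
        }

    The design choice embodied here: users tune one knob by trial and error, while the clamp keeps any setting they pick from making things pathologically worse.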

    Read the article

  • Does anyone know of any work being done on the Eee Transformer?

    - by Matthew
    I recently got a few Nexus 7s to install and enjoy Ubuntu on. Which is great and all, but from what I've read online and the issues I have experienced myself, the Nexus 7 has way too many serious defects, such as:

    - Audio jack not working
    - Screen lifting
    - Screen ghosting out (the very first one)
    - Instant drop in battery life (happened to one of mine)
    - Internal memory malfunctions (the latest issue I've had; the internal memory went completely bad)

    If you need to read other horror stories, you can simply check out the XDA Developers forum; lots of people are having issues. I'd really like to enjoy Ubuntu on a different device, and I think the Transformer Prime would make way more sense (usability- and stability-wise). Have there been any hacks/mods to get it running on this device?

    Read the article

  • HTML5-Canvas: worth using ImpactJS or other framework?

    - by John
    I've been making an HTML5 game without any type of external framework; I haven't found a reason to use one so far. However, there is one thing I'm wondering about. On my Galaxy Nexus I get about ~40fps. While that would usually be a decent framerate, my game is a rather fast-paced game with a gamepad, so it feels very unsatisfying to play when not capped at 60fps. Are there frameworks out there that can improve performance without toning down the graphics? Or is there something I could do myself without necessarily having to use a framework? I've looked over the basic things, such as sticking to integer coordinates, but I didn't see an increase in performance whatsoever. I did some testing with jsperf and the results were virtually identical. Does this depend more on the browser?
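
    For concreteness, here is roughly what two framework-free staples look like: delta-time stepping (which doesn't raise the framerate, but keeps a 40fps game feeling correct) and integer snapping at draw time. A TypeScript sketch; the player/sprite names are hypothetical:

        // Sketch: a requestAnimationFrame loop that scales movement by elapsed
        // time, plus integer snapping at draw time to avoid subpixel filtering.

        let lastTime = performance.now();

        function frame(now: number): void {
          const dt = Math.min((now - lastTime) / 1000, 0.1); // seconds, clamped
          lastTime = now;
          update(dt);
          render();
          requestAnimationFrame(frame);
        }

        function update(dt: number): void {
          // Move in units/second so 40fps and 60fps feel like the same speed.
          player.x += player.vx * dt;
        }

        function render(): void {
          // Snap to whole pixels; fractional coordinates force the canvas to
          // resample the sprite, which costs time on mobile GPUs.
          ctx.drawImage(sprite, Math.round(player.x), Math.round(player.y));
        }

        // Hypothetical game state, assumed for illustration.
        declare const ctx: CanvasRenderingContext2D;
        declare const sprite: HTMLImageElement;
        declare const player: { x: number; y: number; vx: number };

        requestAnimationFrame(frame);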

    Read the article

  • Oracle Honors Hitachi Data Systems with 2012 Taleo Customer Innovation Award

    - by Scott Ewart
    High-Tech Leader Recognized at Taleo World for its Strategic Initiative Aligning Talent, Performance and Revenues. Oracle awarded the 2012 Taleo Customer Innovation Award to Hitachi Data Systems (HDS), a wholly owned subsidiary of Hitachi, Ltd., for transforming performance management within its global sales organization with Oracle Taleo talent management solutions. The Taleo Innovation Awards honor and recognize Oracle Taleo customers that advance talent management initiatives using innovation, leadership and best practices. Oracle honored HDS along with finalists National Heritage Academies and CACI at a ceremony held September 13 at Taleo World in Chicago. Josh Bersin, President and CEO of Bersin & Associates, was the emcee for the ceremony. The honorees were selected from dozens of global submissions by a panel of influential industry analysts with expertise in talent management. To view the full story and press release, click here.

    Read the article

  • Learning low latency C++ and Java?

    - by user997112
    I'm currently in a role where I don't get to write any C++ or Java. However, the role is good because it provides me with exposure to the business side (I'm interested in finance). Eventually I would like to get into high-frequency trading infrastructure. Therefore, outside of work hours I'd like to maximise the knowledge I can gain about high-performance Java and C++. I already have the Java Performance Tuning book, which is OK but not impressive. Can people recommend any more low-latency blogs/books/websites for learning about making C++/C/Java or even Unix very fast? Or perhaps making the network parts of the OS faster (if rewriting Unix components)? EDIT: Or perhaps we could make this THE thread for advice on writing fast code.

    Read the article

  • Who Are the BI Users in Your Neighborhood?

    - by Brian Dayton
    Forrester's Boris Evelson recently wrote a blog titled "Who are the BI Personas?" that I enjoyed for a number of reasons. It's a quick read, easy to grasp and (refreshingly) focuses on the users of technology vs. the technology. As Evelson admits, he meant to keep the reference chart at a high level because there are too many different permutations and additional sub-categories to make such a chart useful. For me, I wouldn't head into the technical permutations but more the contextual use of BI and the issues that users experience. My thoughts brought up more questions than answers, such as:

    Context:
    - HOW: With the exception of the "Power User" persona, likely some sort of business or operations analyst?
    - WHEN: Are they using the information to make real-time decisions on the front lines (a customer service manager or shipping/logistics VP), or are they using this information for cumulative analysis and business planning? Or both?
    - WHERE: What areas of the business are more or less likely to rely on BI across an organization? Human Resources, Operations, Facilities, Finance... and why are some more prone to use data-driven analysis than others?

    Issues:
    - DELAYS & DRAG ON IT: One of the persona characteristics Evelson calls out is a reliance on IT. Every persona except for the "Power User" has a heavy reliance on IT for support. What business issues or delays does that cause to users? What is the drag on IT resources who could potentially be creating instead of reporting?
    - HOW MANY CLICKS: If BI is being used within the context of a transaction (a sales manager looking for upsell opportunities, as an example), is that person getting the information within the context of that action or transaction? Or are they minimizing screens, logging into another application or reporting tool, running queries, etc.?

    Who are the BI users in your neighborhood or line of business? Do Evelson's personas resonate, and do the tools that he calls out (he refers to it as "BI Style") resonate with what your personas have or need? Finally, I'm very interested in whether BI use is viewed as a bolt-on... or an integrated part of your daily enterprise processes?

    Read the article

  • 2D fighting bounding boxes

    - by user36420
    I'm prototyping a 2D platformer/brawler game for uni and I'm having some trouble with creating collision/bounding boxes. This is most likely going to end up on a Vita, so I do have some library constraints as well as performance implications. None of this has yet been implemented; it is all theory. My idea was to have the artist create a sprite sheet for the character animation, and then a second, identical sprite sheet with the corresponding collisions in solid colours (e.g. green for where the character can be hit and red for dealing damage, near the foot if kicking, etc.). With this, I would then parse the collision sheet and generate the various collisions required, storing them in the character model. This is the point I feel would be most inefficient. While I think this is a possible solution, I was wondering if there is a more standard way of doing this, or a more efficient way, as I feel this would have severe performance problems.
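
    One way to bound the inefficiency worried about here: scan the collision sheet once at build or load time, cache one box per colour per frame, and never touch pixels during gameplay. A TypeScript/canvas sketch with hypothetical names; for a Vita target the same scan could live in an offline build tool instead:

        // Sketch: derive a bounding box from a solid-colour region of a collision
        // sheet frame. Run once per animation frame at build/load time and cache
        // the result; never scan pixels during gameplay.

        interface Box { x: number; y: number; w: number; h: number; }

        // Scans an RGBA pixel block (e.g. from canvas getImageData) for pixels
        // matching a predicate and returns the tightest enclosing box, or null.
        function boundingBox(
          pixels: ImageData,
          matches: (r: number, g: number, b: number) => boolean
        ): Box | null {
          const { width, height, data } = pixels;
          let minX = width, minY = height, maxX = -1, maxY = -1;
          for (let y = 0; y < height; y++) {
            for (let x = 0; x < width; x++) {
              const i = (y * width + x) * 4;
              if (data[i + 3] > 0 && matches(data[i], data[i + 1], data[i + 2])) {
                if (x < minX) minX = x;
                if (y < minY) minY = y;
                if (x > maxX) maxX = x;
                if (y > maxY) maxY = y;
              }
            }
          }
          if (maxX < 0) return null;
          return { x: minX, y: minY, w: maxX - minX + 1, h: maxY - minY + 1 };
        }

        // Example predicates for the green "can be hit" and red "deals damage" regions.
        const isHurtbox = (r: number, g: number, b: number) => g > 200 && r < 64 && b < 64;
        const isHitbox = (r: number, g: number, b: number) => r > 200 && g < 64 && b < 64;

    Caching one hurtbox/hitbox pair per frame keeps the runtime data tiny (four numbers per box), so the per-frame cost during play is just a rectangle overlap test.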

    Read the article

  • Configuring MySQL Cluster Data Nodes

    - by Mat Keep
    In my previous blog post, I discussed the enhanced performance and scalability delivered by extensions to the multi-threaded data nodes in MySQL Cluster 7.2. In this post, I'll share best practices on the configuration of data nodes to achieve optimum performance on the latest generations of multi-core, multi-thread CPU designs.

    Configuring the Data Nodes

    The configuration of data node threads can be managed in two ways via the config.ini file:

    - Simply set MaxNoOfExecutionThreads to the appropriate number of threads to be run in the data node, based on the number of threads presented by the processors used in the host or VM.
    - Use the new ThreadConfig variable that enables users to configure both the number of each thread type to use and also which CPUs to bind them to.

    The flexible configuration afforded by the multi-threaded data node enhancements means that it is possible to optimise data nodes to use anything from a single CPU/thread up to a 48 CPU/thread server. Co-locating the MySQL Server with a single data node can fully utilize servers with 64-80 CPU/threads. It is also possible to co-locate multiple data nodes per server, but this is now only required for very large servers with 4+ CPU sockets and dense multi-core processors.

    24 Threads and Beyond!

    An example of how to make best use of a 24 CPU/thread server box is to configure the following:

    - 8 ldm threads
    - 4 tc threads
    - 3 recv threads
    - 3 send threads
    - 1 rep thread for asynchronous replication

    Each of those threads should be bound to a CPU. It is possible to bind the main thread (schema management domain) and the IO threads to the same CPU in most installations. In the configuration above, we have bound threads to 20 different CPUs. We should also protect these 20 CPUs from interrupts by using the IRQBALANCE_BANNED_CPUS configuration variable in /etc/sysconfig/irqbalance and setting it to 0x0FFFFF. The reason for doing this is that MySQL Cluster generates a lot of interrupt and OS kernel processing, so it is recommended to separate activity across CPUs to ensure conflicts with the MySQL Cluster threads are eliminated.

    When booting a Linux kernel it is also possible to provide the option isolcpus=0-19 in grub.conf. The result is that the Linux scheduler won't use these CPUs for any task; only by using CPU affinity syscalls can a process be made to run on those CPUs. By using this approach, together with binding MySQL Cluster threads to specific CPUs and banning IRQ processing on these CPUs, a very stable performance environment is created for a MySQL Cluster data node.

    On a 32 CPU/thread server:

    - Increase the number of ldm threads to 12
    - Increase tc threads to 6
    - Provide 2 more CPUs for the OS and interrupts
    - The number of send and receive threads should, in most cases, still be sufficient

    On a 40 CPU/thread server, increase ldm threads to 16, tc threads to 8, and increment send and receive threads to 4.

    On a 48 CPU/thread server it is possible to optimize further by using:

    - 12 tc threads
    - 2 more CPUs for the OS and interrupts
    - Avoiding use of the IO threads and main thread on the same CPU
    - 1 more receive thread

    Summary

    As both this and the previous post seek to demonstrate, the multi-threaded data node extensions not only serve to increase performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs. A big thanks to Mikael Ronstrom, Senior MySQL Architect at Oracle, for his work in developing these enhancements and best practices.

    You can download MySQL Cluster 7.2 today and try out all of these enhancements. The Getting Started guides are an invaluable aid to quickly building a Proof of Concept. Don't forget to check out the MySQL Cluster 7.2 New Features whitepaper to discover everything that is new in the latest GA release.
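
    To put the 24 CPU/thread example above into config.ini terms, a sketch of the relevant fragment. The CPU ids are illustrative, and the exact ThreadConfig grammar (including the io entry) should be verified against the MySQL Cluster 7.2 documentation for your version:

        [ndbd default]
        # Option 1: the simple knob -- let the data node apportion thread types.
        # MaxNoOfExecutionThreads=24

        # Option 2: explicit thread counts and CPU binding (illustrative CPU ids;
        # CPU 0 and CPUs 21-23 are left for the OS and interrupts).
        ThreadConfig=ldm={count=8,cpubind=1,2,3,4,5,6,7,8},tc={count=4,cpubind=9,10,11,12},recv={count=3,cpubind=13,14,15},send={count=3,cpubind=16,17,18},rep={cpubind=19},main={cpubind=20},io={cpubind=20}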

    Read the article

  • Database Partitioning and Multiple Data Source Considerations

    - by Jeffrey McDaniel
    With the release of P6 Reporting Database 3.0, partitioning was added as a feature to help with performance and data management. Careful investigation of requirements should be conducted prior to installation to help improve overall performance throughout the lifecycle of the data warehouse, preventing future maintenance that would result in data loss. Before installation, try to determine how many data sources and partitions will be required, along with the ranges. In P6 Reporting Database 3.0, any adjustments outside of defaults must be made in the scripts, and changes will require new ETL runs for each data source.

    Considerations:

    1. Standard Edition or Enterprise Edition of Oracle Database. If you aren't using Oracle Enterprise Edition Database, the partitioning feature is not available. Multiple data sources are only supported on Enterprise Edition of Oracle Database.

    2. Number of data source IDs for partitioning during configuration. This setting specifies how many partitions will be allocated for tables containing data source information, and it requires some evaluation prior to installation, as there are repercussions if you don't estimate correctly. For example, suppose you configured the software for only 2 data sources and the partition setting was set to 2, but along came a 3rd data source. The necessary steps to accommodate this change are as follows:

    a) By default, 3 partitions are configured in the Reporting Database scripts. Edit the create_star_tables_part.sql script located in <installation directory>\star\scripts and search for "partition". You'll see P1, P2, P3. Add additional partitions and sub-partitions for P4 and so on; these will appear in several areas. (See the P6 Reporting Database 3.0 Installation and Configuration guide for more information on this and how to adjust partition ranges.)

    b) Run starETL -r. This will recreate each table with the new partition key. The effect of this step is that all table data will be lost except for history-related tables.

    c) Run starETL for each of the 3 data sources (with the data source number, e.g. starETL.bat "-s2", as defined in the P6 Reporting Database 3.0 Installation and Configuration guide).

    The best strategy for this setting is to overestimate based on possible growth. If during implementation it is deemed that there are at least 2 data sources with a possibility for growth, it is a better idea to set this setting to 4 or 5, allowing room for the future and preventing a 'start over' scenario.

    3. The number of partitions and the number of months per partition are not specific to multi-data source. These settings work in accordance with a sub-partitioning of larger tables with regard to time-related data, and they are dataset-specific optimizations. The number of months per partition is self-explanatory: optimally, the smaller the partition, the better the query performance, so if the dataset has an extremely large number of spread/history records, a lower number of months is optimal. Working in accordance with this setting is the number of partitions, which determines how many "buckets" will be created per the number-of-months setting.

    For example, if you kept the default of 3 partitions and selected 2 months for each partition, you would end up with:

    - 1st partition: 2 months
    - 2nd partition: 2 months
    - 3rd partition: all the remaining records

    Therefore, with regard to this setting, it is important to analyze your source database's spread ranges and history settings when determining the proper number of months per partition and number of partitions to optimize performance. Also be aware that the DBA will need to monitor when these partition ranges will fill up and when additional partitions will need to be added. If you reach the final range partition and there are no additional range partitions, all data will be included in the last partition.

    Read the article
