Search Results

Search found 19055 results on 763 pages for 'high performance'.

  • Evolution Of High Definition TV Viewing

    - by Gopinath
    The following guest post is written by Rob, who also blogs on entertainment technology topics at iwantsky.com.

    Gone are the days when you needed to squint to see the emotions on the faces of Humphrey Bogart and Ingrid Bergman as the lovers bid each other adieu in the classic film Casablanca. These days, watching an ordinary ant painstakingly carry a leaf on Animal Planet can be an exhilarating experience, as you get to see not only the slightest movement but also the demarcation line between the insect’s head, thorax and abdomen. This crystal clear imagery was made possible by the sharp minds and tinkering hands of the scientists who designed the modern world’s HDTV.

    What is HDTV, and what makes people so agog to have this new innovation in TV watching? HDTV stands for High Definition TV. Television viewing has indeed made a big leap: from grainy black and white, TV viewing moved to color TVs, progressed to SD TVs and now to HDTV. HDTV is the emerging trend in TV viewing, as it delivers bigger and clearer pictures and better audio. Viewers can have a cinema-like viewing experience in the comfort of their own home.

    HDTV also gives the viewer a better viewing range. With Standard Definition (SD) TV, the viewer has to sit at a distance of 3 to 6 times the size of the screen. HDTV delivers sharper and clearer images, making it possible to sit at a distance of only 1.5 to 3 times the size of the screen without noticing any image pixelation.

    Although HDTV appears to be a fairly new innovation, the system has actually existed in various forms for years. Development of HDTV started in Europe as early as the 1940s. However, NTSC and PAL/SECAM, the two analog TV standards, became dominant and popular worldwide. Analog TV was replaced by the digital TV platform in the 1990s. Even during the analog era, attempts were made to develop HDTV: Japan came out with the MUSE system, but due to concerns over its channel bandwidth requirements the program was shelved.

    The entry of four organizations into the HDTV market spurred the development of a beneficial coalition: AT&T, ATRC, MIT and Zenith combined forces, and in 1993 the Grand Alliance, a group of researchers and HDTV manufacturers, was formed. A common standard for the HDTV broadcast system was developed, and in 1995 the system was tested and found successful. With the higher screen resolution of HDTV, viewing has never been more enjoyable. [Image courtesy: samsung]

    This article, titled Evolution Of High Definition TV Viewing, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • 3 Day Level 400 SQL Tuning Workshop 15 March in London, early bird and referral offer

    - by sqlworkshops
    I want to inform you that we have organized the "3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop" in London, United Kingdom during March 15-17, 2011. This is a truly level 400 hands-on workshop, and you can find the agenda, prerequisites, goals of the workshop and registration information at www.sqlworkshops.com/ruk. Charges are GBP 1800 (VAT excl.), with an early bird discount of GBP 125 until 18 February. We are also introducing a new referral plan: if you refer someone who participates in the workshop, you will receive an Amazon gift voucher for GBP 125.

    Feedback from one of the participants who attended our November London workshop, Andrew, Senior SQL Server DBA from UBS, UK (www.ubs.com), November 26, 2010. Rating on a scale of 1 to 5, where 1 = Poor and 5 = Excellent:

    - Overall I was satisfied with the workshop: 5
    - Instructor maintained the focus of the course: 5
    - Mix of theory and practice was appropriate: 5
    - Instructor answered the questions asked: 5
    - The training facility met the requirement: 5
    - How confident are you with SQL Server 2008 performance tuning: 5

    Additional comments from Andrew: "The course was expertly delivered and backed up with practical examples. At the end of the course I felt my knowledge of SQL Server had been greatly enhanced and was eager to share with my colleagues. I felt there was one prerequisite missing from the course description: an open mind, since the course changed some of my core product beliefs." For additional workshop feedback refer to www.sqlworkshops.com/feedbacks.

    I will be delivering the Level 300-400 1 Day Microsoft SQL Server 2008 Performance Monitoring and Tuning Seminar in Istanbul and Ankara, Turkey during March. This event is organized by Microsoft Turkey; let me know if you are in Turkey and would like to attend. During September 2010 I delivered this seminar in Zurich, Switzerland, organized by Microsoft Switzerland; the feedback was 4.85 out of 5 from about 100 participants. During November 2010, when I delivered the seminar in Lisbon, Portugal, organized by Microsoft Portugal, the feedback was 8.30 out of 9 from 130 participants.

    Our mission: empower customers to fully realize the performance potential of Microsoft SQL Server without increasing the total cost of ownership (TCO), and achieve high customer satisfaction in every consulting engagement and workshop delivery. Our business plan: provide useful content in webcasts, articles and seminars to gain visibility for consulting engagements and workshop delivery opportunities. Help us by forwarding this email to your SQL Server friends and colleagues.

    Looking forward,
    R Meyyappan & Team @ www.SQLWorkshops.com
    LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Performance Improvement: Session State

    Performance is critical to today's successful applications and web sites. If you design with an awareness of the challenges of session state management, you can always change your strategies to match your performance needs.
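
    To make that point concrete, here is a minimal sketch, not from the article, of what a swappable session-state strategy can look like. It is written in Python for brevity (the article's context is ASP.NET), and names such as InProcessSessionStore and OutOfProcessSessionStore are hypothetical: the idea is simply that application code depends only on save/load, so the backend can change as performance needs change.

        import uuid

        class InProcessSessionStore:
            """Fastest option, but state is lost on restart and cannot be
            shared across a web farm."""
            def __init__(self):
                self._sessions = {}

            def save(self, session_id, data):
                self._sessions[session_id] = dict(data)

            def load(self, session_id):
                return self._sessions.get(session_id, {})

        class OutOfProcessSessionStore:
            """Stand-in for a shared store (state server, database, cache):
            survives restarts and supports multiple web servers, at the cost
            of serialization and a network hop."""
            def __init__(self, backend):
                self._backend = backend  # e.g. a cache or database client

            def save(self, session_id, data):
                self._backend.set(session_id, data)

            def load(self, session_id):
                return self._backend.get(session_id) or {}

        # Application code is written against save/load only, so swapping
        # stores is a configuration change rather than a rewrite.
        store = InProcessSessionStore()
        sid = str(uuid.uuid4())
        store.save(sid, {"user": "alice", "cart_items": 3})
        print(store.load(sid))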

    Read the article

  • FREE eBook: .NET Performance Testing and Optimization (Part 1)

    In this first part of a complete guide to performance profiling, Paul Glavich and Chris Farrell explain why performance testing is a good idea and walk you through everything you need to know to set up a test environment. This comprehensive getting-started guide is an essential handbook for any programmer looking to set up a .NET testing environment and get the best results out of it. Download your free copy now.
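
    The book targets .NET, but the discipline it teaches is language-agnostic: warm up first, then time many runs and report a summary statistic rather than trusting a single measurement. A minimal sketch of such a harness, written in Python and not taken from the book (measure and work_under_test are illustrative names):

        import statistics
        import time

        def measure(func, warmup=5, runs=30):
            """Time func after warm-up; return (median, stdev) in milliseconds."""
            for _ in range(warmup):          # let caches and JIT-like effects settle
                func()
            samples = []
            for _ in range(runs):
                start = time.perf_counter()
                func()
                samples.append((time.perf_counter() - start) * 1000.0)
            return statistics.median(samples), statistics.stdev(samples)

        def work_under_test():
            sum(i * i for i in range(100_000))

        median_ms, stdev_ms = measure(work_under_test)
        print(f"median={median_ms:.2f} ms  stdev={stdev_ms:.2f} ms")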

    Read the article

  • Oracle Coherence 3.5: Create Internet-scale applications using Oracle's high-performance data grid

    - by frederic.michiara
    Oracle Coherence provides replicated and distributed (partitioned) data management and caching services on top of a reliable, highly scalable peer-to-peer clustering protocol. Coherence has no single point of failure; it automatically and transparently fails over and redistributes its clustered data management services when a server becomes inoperative or is disconnected from the network. When a new server is added, or when a failed server is restarted, it automatically joins the cluster and Coherence fails services back to it, transparently redistributing the cluster load. Coherence includes network-level fault tolerance features and a transparent soft restart capability that enable servers to self-heal.

    For those looking for an easy read and a good first approach to Oracle Coherence, I would recommend the book Oracle Coherence 3.5:

    - Build scalable web sites and enterprise applications using a market-leading data grid product
    - Design and implement your domain objects to work most effectively with Coherence, and apply Domain Driven Design (DDD) to Coherence applications
    - Leverage Coherence events and continuous queries to provide real-time updates to client applications
    - Successfully integrate various persistence technologies, such as JDBC, Hibernate, or TopLink, with Coherence
    - Filled with numerous examples that provide best practice guidance, and a number of classes you can readily reuse within your own applications

    This book is targeted at architects and developers, and as our team is more about solutions architects than developers, I found the book valuable because it helps in understanding Oracle Coherence and its value. The only point on which I may not agree with the authors is that Oracle Coherence is not an alternative to Oracle RAC for providing high availability; rather, combining Oracle RAC and Oracle Coherence will help architects and customers reach a higher level of service and high availability.

    The book is available at https://www.packtpub.com/oracle-coherence-3-5/book
    Table of contents: https://www.packtpub.com/toc/oracle-coherence-35-table-contents
    Sample chapter: https://www.packtpub.com/sites/default/files/6125_Oracle%20Coherence_SampleChapter.pdf

    Articles from the authors on http://www.packtpub.com/:

    - Working with Aggregators in Oracle Coherence 3.5
    - Working with Value Extractors and Simplifying Queries in Oracle Coherence 3.5
    - Querying the Data Grid in Coherence 3.5: Obtaining Query Results and Using Indexes
    - Installing Coherence 3.5 and Accessing the Data Grid: Part 1
    - Installing Coherence 3.5 and Accessing the Data Grid: Part 2

    For more information on Oracle Coherence:

    - What Oracle Coherence Can Do for You: http://www.oracle.com/technology/products/coherence/coherencedatagrid/coherence_solutions.html
    - Oracle Coherence on OTN: http://www.oracle.com/technology/products/coherence/index.html
    - Oracle Coherence Knowledge Base: http://coherence.oracle.com/display/COH/Oracle+Coherence+Knowledge+Base+Home
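
    To illustrate the partitioned data management idea in a language-neutral way, here is a conceptual Python sketch, not Coherence's actual Java API: keys are hashed across cluster members, so each member owns a slice of the data and load spreads with cluster size. Real Coherence additionally keeps backup copies and rebalances partitions when membership changes, which this toy omits.

        import hashlib

        class PartitionedCache:
            """Toy partitioned cache: each key is owned by exactly one member."""

            def __init__(self, members):
                self.members = list(members)                  # cluster node names
                self.storage = {m: {} for m in self.members}  # one slice per member

            def _owner(self, key):
                digest = hashlib.md5(str(key).encode()).hexdigest()
                return self.members[int(digest, 16) % len(self.members)]

            def put(self, key, value):
                self.storage[self._owner(key)][key] = value

            def get(self, key):
                return self.storage[self._owner(key)].get(key)

        cache = PartitionedCache(["node-1", "node-2", "node-3"])
        for i in range(9):
            cache.put(f"order:{i}", {"total": i * 10})
        print({m: len(kv) for m, kv in cache.storage.items()})  # keys spread across members
        print(cache.get("order:4"))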

    Read the article

  • Windows Azure Use Case: High-Performance Computing (HPC)

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: High-Performance Computing (also called Technical Computing) at its most simplistic is a layout of computer workloads where a “head node” accepts work requests and parses them out to “worker nodes”. This is useful in cases such as scientific simulations, drug research, MatLab work and other large compute loads. It’s not the immediate-result type of computing many are used to; instead, a “job” or group of work requests is sent to a cluster of computers, and the worker nodes work on individual parts of the calculations and return the work to the scheduler or head node for the requestor in a batch-request fashion. This is typical of the way many mainframe computing use cases work. You can use commodity-based computers to create an HPC cluster, such as with the Linux application called Beowulf, and Microsoft has a server product for HPC using standard computers, called the Windows Compute Cluster, that you can read more about here.

    The issue with HPC (from any vendor) that some organizations have is the number of compute nodes they need. Having too many results in excess infrastructure, including computers, buildings, storage, heat and so on. Having too few means that the work is slower and takes longer to return a result to the calling application. Unless there is a consistent level of work requested, predicting the number of nodes is problematic.

    Implementation: Recently, Microsoft announced an internal partnership between the HPC group (now called the Technical Computing Group) and Windows Azure. You now have two options for implementing an HPC environment using Windows. You can extend the current infrastructure you have for HPC by adding Compute Nodes in Windows Azure, using a “Broker Node”. You can then purchase time for adding machines, and stop paying for them when the work is completed. This is a common pattern in groups that have a constant need for HPC but need to “burst” that load count under certain conditions. The second option is to install only a Head Node and a Broker Node onsite, and host all Compute Nodes in Windows Azure. This is often the pattern for organizations that need HPC on a scheduled and periodic basis, such as financial analysis or actuarial table calculations.

    References: Blog entry on hybrid HPC with Windows Azure: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/12/13/high-performance-computing-on-premise-and-in-the-windows-azure-cloud.aspx. Links for further research on HPC, including Windows Azure information: http://blogs.msdn.com/b/ncdevguy/archive/2011/02/16/handy-links-for-hpc-and-azure.aspx
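
    A toy sketch of the head-node/worker-node layout described above, written in Python with local processes standing in for compute nodes (this is a conceptual illustration, not the Windows HPC Scheduler API): the head node queues independent pieces of a job, workers pull and compute them, and the results come back in batch fashion rather than interactively.

        from multiprocessing import Process, Queue

        def worker_node(jobs, results):
            # Each worker pulls independent pieces until it sees the sentinel.
            while True:
                piece = jobs.get()
                if piece is None:
                    break
                lo, hi = piece
                results.put(sum(x * x for x in range(lo, hi)))

        if __name__ == "__main__":
            jobs, results = Queue(), Queue()
            workers = [Process(target=worker_node, args=(jobs, results)) for _ in range(4)]
            for w in workers:
                w.start()
            # The "head node" splits one large request into independent pieces...
            pieces = [(lo, lo + 250_000) for lo in range(0, 1_000_000, 250_000)]
            for piece in pieces:
                jobs.put(piece)
            for _ in workers:
                jobs.put(None)       # one sentinel per worker
            # ...then gathers the batch of partial results for the requestor.
            total = sum(results.get() for _ in range(len(pieces)))
            for w in workers:
                w.join()
            print(total)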

    Read the article

  • Improved Performance on PeopleSoft Combined Benchmark using SPARC T4-4

    - by Brian
    Oracle's SPARC T4-4 server running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark achieved a world record: 18,000 concurrent users experiencing subsecond response time while executing a PeopleSoft Payroll batch job of 500,000 employees in 32.4 minutes. This result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running the PeopleSoft HCM 9.1 application server, and a SPARC T4-2 server running Oracle WebLogic Server in the web tier. The SPARC T4-4 server running the application tier used Oracle Solaris Zones, which provide a flexible, scalable and manageable virtualization environment. The average CPU utilization was 17% on the SPARC T4-2 server in the web tier, 59% on the SPARC T4-4 server in the application tier, and 47% on the SPARC T4-4 server in the database tier (online and batch), leaving significant headroom for additional processing across the three tiers. The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2 using Oracle Automatic Storage Management (ASM) for database file management, with I/O performance equivalent to raw devices.

    Performance Landscape

    Results are presented for the PeopleSoft HRMS Self-Service and Payroll combined benchmark. The new result with 128 streams shows significant improvement in the payroll batch processing time with little impact on the self-service component response time.

    PeopleSoft HRMS Self-Service and Payroll Benchmark
    Systems                                              Users   Avg Search (sec)  Avg Save (sec)  Batch Time (min)  Streams
    SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)  18,000  0.988             0.539           32.4              128
    SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)  18,000  0.944             0.503           43.3              64

    The following results are for the PeopleSoft HRMS Self-Service benchmark that was run previously. They are not directly comparable with the combined results because they do not include the payroll component.

    PeopleSoft HRMS Self-Service 9.1 Benchmark
    Systems                                                 Users   Avg Search (sec)  Avg Save (sec)  Batch Time (min)  Streams
    SPARC T4-2 (web), SPARC T4-4 (app), 2x SPARC T4-2 (db)  18,000  1.048             0.742           N/A               N/A

    The following results are for the PeopleSoft Payroll benchmark that was run previously. They are not directly comparable with the combined results because they do not include the self-service component.

    PeopleSoft Payroll (N.A.) 9.1 - 500K Employees (7 Million SQL PayCalc, Unicode)
    Systems          Users  Avg Search (sec)  Avg Save (sec)  Batch Time (min)  Streams
    SPARC T4-4 (db)  N/A    N/A               N/A             30.84             96

    Configuration Summary

    Application Configuration:
    - 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz
    - 512 GB memory
    - Oracle Solaris 11 11/11
    - PeopleTools 8.52
    - PeopleSoft HCM 9.1
    - Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031
    - Java Platform, Standard Edition Development Kit 6 Update 32

    Database Configuration:
    - 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz
    - 256 GB memory
    - Oracle Solaris 11 11/11
    - Oracle Database 11g Release 2
    - PeopleTools 8.52
    - Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031
    - Micro Focus Server Express (COBOL v 5.1.00)

    Web Tier Configuration:
    - 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz
    - 256 GB memory
    - Oracle Solaris 11 11/11
    - PeopleTools 8.52
    - Oracle WebLogic Server 10.3.4
    - Java Platform, Standard Edition Development Kit 6 Update 32

    Storage Configuration:
    - 1 x Sun Server X2-4 as a COMSTAR head for data (4 x Intel Xeon X7550, 2.0 GHz, 128 GB memory)
    - 1 x Sun Storage F5100 Flash Array (80 flash modules)
    - 1 x Sun Storage F5100 Flash Array (40 flash modules)
    - 1 x Sun Fire X4275 as a COMSTAR head for redo logs
    - 12 x 2 TB SAS disks with Niwot RAID controller

    Benchmark Description

    This benchmark combines the PeopleSoft HCM 9.1 HR Self-Service online workload and the PeopleSoft Payroll batch workload, running against a unified database deployed on Oracle Database 11g Release 2. The PeopleSoft HR Self-Service benchmark kit is an Oracle standard benchmark kit run by all platform vendors to measure performance. It is an OLTP benchmark whose database SQL statements are moderately complex. The results are certified by Oracle and a white paper is published.

    PeopleSoft HR Self-Service defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate employees, managers and HR administrators. The benchmark consists of 14 scenarios that emulate users performing typical HCM transactions, such as viewing a paycheck, promoting and hiring employees, and updating an employee profile. All these transactions are well defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. The benchmark metric is the weighted average search/save response time across all the transactions.

    The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents large batch runs typical of an ERP environment during a mass update. The benchmark measures the run times of five application business processes for a database representing a large organization: Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice Forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation and Payroll Confirmation business application processes.

    The benchmark metrics are taken for each respective benchmark while running simultaneously on the same database back end. Specifically, the payroll batch processes are started when the online workload reaches steady state (the maximum number of online users) and overlap with online transactions for the duration of the steady state.

    Key Points and Best Practices

    Two PeopleSoft domain sets with 200 application servers each were hosted in two separate Oracle Solaris Zones on a SPARC T4-4 server to demonstrate consolidation of multiple application servers, ease of administration and performance tuning. Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (120 threads in total). The default set (one core from the first and third processor sockets, 16 threads in total) was used for network and disk interrupt handling. This was done to improve performance by reducing memory access latency, using the physical memory closest to the processors, and by offloading I/O interrupt handling to default-set threads, freeing up CPU resources for application server threads and balancing the application workload across 240 threads. A total of 128 PeopleSoft streams server processes were used on the database node to complete the payroll batch job of 500,000 employees in 32.4 minutes.

    See Also

    - Oracle PeopleSoft Benchmark White Papers (oracle.com)
    - SPARC T4-2 Server (oracle.com OTN)
    - SPARC T4-4 Server (oracle.com OTN)
    - PeopleSoft Enterprise Human Capital Management (oracle.com OTN)
    - PeopleSoft Enterprise Human Capital Management (Payroll) (oracle.com OTN)
    - Oracle Solaris (oracle.com OTN)
    - Oracle Database 11g Release 2 (oracle.com OTN)

    Disclosure Statement

    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 8 November 2012.

    Read the article

  • Google I/O 2012 - Building High Performance Mobile Web Applications

    Ryan Fioravanti. Learn what it takes to build an HTML5 mobile app that will wow your users. This session focuses on speed, offline support, UI layouts, and the tools necessary to set up a productive development environment. Come to this session if you're looking to make a killer mobile web app that stands out amongst the competition. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers. Time: 49:43.

    Read the article

  • Testing your network's performance with Iperf, a tutorial by Nicolas Hennion

    Hello everyone! The Networks section brings you an article by nicolargo explaining how to test network performance with Iperf: Testing your network's performance with Iperf. Quote: "Iperf is one of the indispensable tools for any self-respecting network administrator. This network performance measurement software, available on many platforms (Linux, BSD, Mac, Windows...), takes the form of a command line to be run on two machines..."
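
    A small sketch of the two-machine measurement the article describes, driving the real iperf CLI from Python. The host address is a placeholder, and iperf must be installed on both machines; the -s, -c, -t and -P flags are standard iperf options.

        import subprocess

        SERVER_HOST = "192.0.2.10"  # placeholder: address of the machine running "iperf -s"

        # On the server machine, start the listener first (TCP port 5001 by default):
        #     iperf -s
        # Then, on the client machine, run a 10-second test with 4 parallel streams:
        result = subprocess.run(
            ["iperf", "-c", SERVER_HOST, "-t", "10", "-P", "4"],
            capture_output=True,
            text=True,
            check=False,
        )
        print(result.stdout)  # per-stream and aggregate bandwidth report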

    Read the article

  • A Perspective on Database Performance Tuning

    Fundamentally, database performance tuning is done for two basic reasons: to reduce response time and to reduce resource usage, either of which can apply in any given situation. Julian Stuhler looks at database performance tuning, and why it remains one of the most important topics for any DBA, developer or systems administrator.

    Read the article

  • Talend vs. SSIS: A Simple Performance Comparison

    With all of the ETL tools in the marketplace, which one is best? Jeff Singleton brings us a simple performance comparison pitting SSIS against open source powerhouse Talend.

    Read the article

  • Data Quality Services Performance Best Practices Guide

    This guide details the high-level performance numbers to expect and a set of best practices for getting optimal performance when using Data Quality Services (DQS) in SQL Server 2012 with Cumulative Update 1.

    Read the article

  • Windows Azure Recipe: High Performance Computing

    - by Clint Edmonson
    One of the most attractive ways to use a cloud platform is for parallel processing. Commonly known as high-performance computing (HPC), this approach relies on executing code on many machines at the same time. On Windows Azure, this means running many role instances simultaneously, all working in parallel to solve some problem. Doing this requires some way to schedule applications, which means distributing their work across these instances. To allow this, Windows Azure provides the HPC Scheduler. This service can work with HPC applications built to use the industry-standard Message Passing Interface (MPI). Software that does finite element analysis, such as car crash simulations, is one example of this type of application, and there are many others. The HPC Scheduler can also be used with so-called embarrassingly parallel applications, such as Monte Carlo simulations. Whatever problem is addressed, the value this component provides is the same: it handles the complex problem of scheduling parallel computing work across many Windows Azure worker role instances.

    Drivers:
    - Elastic compute and storage resources
    - Cost avoidance

    Solution: here’s a sketch of a solution using our Windows Azure HPC SDK.

    Ingredients:
    - Web Role – hosts an HPC scheduler web portal to allow web-based job submission and management. It also exposes an HTTP web service API to allow other tools (including Visual Studio) to post jobs as well.
    - Worker Role – typically multiple worker roles are enlisted, including at least one head node that schedules jobs to be run among the remaining compute nodes.
    - Database – stores state information about the job queue and resource configuration for the solution.
    - Blobs, Tables, Queues, Caching (optional) – many parallel algorithms persist intermediate and/or permanent data as a result of their processing. These fast, highly reliable, parallelizable storage options are all available to all the jobs being processed.

    Training: here is a link to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: the entire Windows Azure Training Kit can also be downloaded for offline use.) Windows Azure HPC Scheduler (3 labs): the Windows Azure HPC Scheduler includes modules and features that enable you to launch and manage high-performance computing (HPC) applications and other parallel workloads within a Windows Azure service. The scheduler supports parallel computational tasks such as parametric sweeps, Message Passing Interface (MPI) processes, and service-oriented architecture (SOA) requests across your computing resources in Windows Azure. With the Windows Azure HPC Scheduler SDK, developers can create Windows Azure deployments that support scalable, compute-intensive, parallel applications.

    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
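
    The recipe mentions embarrassingly parallel work such as Monte Carlo simulations: each task needs no data from the others, which is exactly what makes it easy to spread across worker role instances. A minimal local sketch in Python, with multiprocessing standing in for Azure workers (estimate_pi is an illustrative name, not part of any Azure SDK):

        import random
        from multiprocessing import Pool

        def estimate_pi(samples):
            """One worker's independent estimate of pi by random sampling."""
            random.seed()  # re-seed per worker so forked processes do not share a stream
            inside = sum(
                1
                for _ in range(samples)
                if random.random() ** 2 + random.random() ** 2 <= 1.0
            )
            return 4.0 * inside / samples

        if __name__ == "__main__":
            workers = 8
            with Pool(workers) as pool:      # stand-ins for worker role instances
                estimates = pool.map(estimate_pi, [200_000] * workers)
            print(sum(estimates) / workers)  # head process combines independent results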

    Read the article

  • Does Windows performance degrade past a certain level of CPU utilization?

    - by Mike Taylor
    Is there a recommended average CPU threshold for running Windows boxes, based on experience in other shops? Background: we are running the Windows Server 2003 32-bit OS. The servers are handling a major enterprise-level web application suite with a high frequency of small transactions mixed in with much larger transactions; the overall average is 13 ms. The average overall CPU utilization of the Windows servers is ~60% during prime shift, and we question at what level the Windows OS begins to shimmy on the CPU scheduling road. Thanks.
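
    Not part of the question, but a small way to gather the utilization numbers under discussion: sample CPU usage over time with the third-party psutil library (pip install psutil), so the ~60% average can be tracked and correlated with response times rather than eyeballed.

        import psutil  # third-party: pip install psutil

        samples = []
        for _ in range(60):                            # one minute of 1-second samples
            samples.append(psutil.cpu_percent(interval=1))

        avg = sum(samples) / len(samples)
        print(f"avg={avg:.1f}%  peak={max(samples):.1f}%")
        if avg > 60:                                   # the threshold under discussion
            print("sustained load above 60% - correlate with response times")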

    Read the article

  • Will using FAT32 provide better pagefile performance than NTFS?

    - by llazzaro
    Hello, I was discussing with my other personalities, and a conflict came up. According to http://technet.microsoft.com/en-us/library/cc938440.aspx, FAT32 is faster when using smaller volumes. Granted, a separate disk will give more performance than the same disk, but did anyone test this? Scenario 1: separate hard disk, FAT32 (small volume). Scenario 2: separate hard disk, NTFS. Which one will win, and what is the minimum gain?
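
    One hedged way to run the comparison yourself: time identical sequential write/read passes against a file on each volume. The drive letters below are placeholders for the FAT32 and NTFS volumes, and since real pagefile I/O is mostly small random access, treat this sketch as a rough proxy rather than a verdict.

        import os
        import time

        def time_io(path, size_mb=256, block=4096):
            """Rough sequential write/read timing for the volume holding path."""
            data = os.urandom(block)
            start = time.perf_counter()
            with open(path, "wb") as f:
                for _ in range(size_mb * 1024 * 1024 // block):
                    f.write(data)
                f.flush()
                os.fsync(f.fileno())   # force the data to disk, not just the cache
            write_s = time.perf_counter() - start

            start = time.perf_counter()
            with open(path, "rb") as f:
                while f.read(block):
                    pass
            read_s = time.perf_counter() - start
            os.remove(path)
            return write_s, read_s

        # Placeholder paths: one file on the FAT32 volume, one on the NTFS volume.
        for path in (r"F:\bench.tmp", r"N:\bench.tmp"):
            w, r = time_io(path)
            print(path, f"write={w:.2f}s read={r:.2f}s")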

    Read the article

  • High Performance Storage Systems for SQL Server

    Rod Colledge turns his pessimistic mindset to storage systems, and describes the best way to configure the storage systems of SQL Servers for both performance and reliability. Even Rod gets a glint in his eye when he then goes on to describe the dazzling speed of solid-state storage, though he is quick to identify the risks.

    Read the article

  • Profiling SharePoint with ANTS Performance Profiler 5.2

    Using ANTS Performance Profiler with SharePoint has previously been possible, but not easy. Version 5.2 of ANTS Performance Profiler changes all that, and Chris Allen has put together a straightforward guide to profiling SharePoint, demonstrating just how much easier it has become.

    Read the article
