Search Results

Search found 10331 results on 414 pages for 'stress testing'.

Page 183/414

  • ASP.NET 3.5 Debugging Using Visual Web Developer Express 2008

    One of the most important features of Visual Web Developer Express 2008 for developing ASP.NET 3.5 websites is the debugger. Having a debugger is important in troubleshooting source code and application-related problems, and it will save you a lot of time if you can find and fix problems during the design and testing stage. This article covers basic debugging in ASP.NET using Visual Web Developer Express; the information will give you an important tool for designing and creating ASP.NET websites...

    Read the article

  • Is gstreamer the best encoder for vorbis or is there a better encoding engine I should use?

    - by sayth
    I have Sound Juicer installed and I want to rip to vorbis.ogg. Is gstreamer the best encoder for Vorbis, or is there a better encoding engine I should use? The default gstreamer profile is audio/x-raw-float,rate=44100,channels=2 ! vorbisenc name=enc quality=0.5 ! oggmux I am going to raise the quality to 0.7, but that's all for nothing if gstreamer isn't the best encoder. Any suggestions for high-quality ripping? Edit: a good answer to this will also be the top search result in Google for "best vorbis encoding engine". Double Edit: It appears oggenc itself is the best encoder, which rules out using Sound Juicer to rip CDs, as it uses gstreamer. I have installed oggenc and am testing the command-line ripper abcde. Found a good configuration for it here: oggenc config for abcde

    Read the article

  • Matrix Pattern Recognition Algorithm

    - by Andres
    I am designing a logic analyzer and I would like to implement a matrix algorithm. I have several channels, each one represented by a row in the matrix, and every element in a column would be the state, for example:

    Channel 1: 1 0 0 1 0 1 1 0 1
    Channel 2: 1 1 0 1 1 0 0 1 1
    Channel 3: 0 1 0 1 1 0 1 0 0
    Channel 4: 0 0 1 0 0 1 0 0 1

    I would like to detect whether, and where, a given sub-matrix or pattern exists inside my matrix, for example:

    1 0
    1 1

    I think it can be accomplished by testing element by element, but there should be a better way of doing it. Is there any Java API or any other way to do it? An API optimized for ARM NEON instructions would be great as well, but it is not mandatory. Thank you very much in advance.
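
    No specific library is required for small matrices: the element-by-element scan mentioned above can be written directly in plain Java. A minimal brute-force sketch follows; the class, method, and variable names are illustrative only and not taken from any existing API.

      public final class PatternSearch {

          /** Returns {row, col} of the first match, or null if the pattern is absent. */
          public static int[] find(int[][] matrix, int[][] pattern) {
              int rows = matrix.length, cols = matrix[0].length;
              int pRows = pattern.length, pCols = pattern[0].length;
              for (int r = 0; r + pRows <= rows; r++) {
                  for (int c = 0; c + pCols <= cols; c++) {
                      if (matches(matrix, pattern, r, c)) {
                          return new int[] {r, c};
                      }
                  }
              }
              return null;
          }

          // Compare the pattern against the matrix region whose top-left corner is (r, c).
          private static boolean matches(int[][] matrix, int[][] pattern, int r, int c) {
              for (int i = 0; i < pattern.length; i++) {
                  for (int j = 0; j < pattern[0].length; j++) {
                      if (matrix[r + i][c + j] != pattern[i][j]) {
                          return false;
                      }
                  }
              }
              return true;
          }

          public static void main(String[] args) {
              int[][] channels = {
                  {1, 0, 0, 1, 0, 1, 1, 0, 1},   // Channel 1
                  {1, 1, 0, 1, 1, 0, 0, 1, 1},   // Channel 2
                  {0, 1, 0, 1, 1, 0, 1, 0, 0},   // Channel 3
                  {0, 0, 1, 0, 0, 1, 0, 0, 1}    // Channel 4
              };
              int[][] pattern = { {1, 0}, {1, 1} };
              int[] hit = find(channels, pattern);
              System.out.println(hit == null ? "no match"
                      : "match at row " + hit[0] + ", column " + hit[1]);
          }
      }

    For the matrix in the question this prints "match at row 0, column 0". The brute-force scan is O(rows x cols x pRows x pCols), which is usually fine for logic-analyzer channel counts; for very long captures, 2D adaptations of string-matching algorithms (for example Rabin-Karp over row hashes) are the usual next step.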

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T series of microprocessors, first released in 2005 by Sun Microsystems and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited for pure single-threaded workloads. The new SPARC T4 processor fills that gap by offering 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the “Visionaries” quadrant of the “Magic Quadrant for Data Integration Tools”, we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first-hand whether this new processor lives up to its promises. Several tests were performed, mainly focused on:
    - Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
    - Overall throughput of the SPARC T4-1 server using multiple threads
    The tests consisted of reading large amounts of data --tens of gigabytes--, processing it, and writing it back to a file or an Oracle 11gR2 database table. They are CPU-, memory- and IO-bound tests. Given the main focus of this project --CPU performance--, bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was placed in the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations.
    Multi-thread: testing throughput on the Oracle T4-1. The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks and one single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1 GB file containing 12.5M lines with the following structure:
    customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
    1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
    2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
    3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
    4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
    […]
    The following graphs present the results of our tests. Unsurprisingly, up to 16 threads all files fit in the ZFS cache (a.k.a. L2ARC): once the cache is hot there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck, so having a good IO subsystem is key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, performance is almost constant with just a slow decline. For the database load tests, only the best IO configuration --using external storage devices-- was used, hosting the Oracle tablespaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.
    Single-thread: testing single-thread performance. Seven different tests were performed on both servers. Since only one thread, and thus only one file, was read, no IO bottleneck was involved; all data was served from the ZFS cache.
    - Read File → Filter → Write File: read a file, filter the data, write the filtered data to a new file. The filter is set on the “Status” column: only lines with status set to “A” are selected. This limits each output file to about 500 MB.
    - Read File → Load Database Table: read a file, insert into a single Oracle table.
    - Average: read a file, compute the average of a numeric column, write the result to a new file.
    - Division & Square Root: read a file, perform a division and square root on a numeric column, write the resulting data to a new file.
    - Oracle DB Dump: dump the content of an Oracle table (12.5M rows) into a CSV file.
    - Transform: read a file, transform it, write the resulting data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    - Sort: read a file, sort a numeric and an alphanumeric column, write the resulting data to a new file.
    The following table and graph present the final results of the tests. The throughput unit is thousands of lines per second processed (K lines/second). Improvement is the percentage of improvement between the T5140 and the T4-1.

    Test                     T4-1 (Time s.)   T5140 (Time s.)   Improvement   T4-1 (Throughput)   T5140 (Throughput)
    Read/Filter/Write                   125               806          645%                  100                   16
    Read/Load Database                  195              1111          570%                   64                   11
    Average                              96               557          580%                  130                   22
    Division & Square Root              161              1054          655%                   78                   12
    Oracle DB Dump                      164               945          576%                   76                   13
    Transform                           159              1124          707%                   79                   11
    Sort                                251              1336          532%                   50                    9

    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way toward filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application! "As described in this benchmark, Talend Enterprise Data Integration has overperformed on the T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row-by-row insertion in Oracle DB is faster, with more than 650,000 rows per second, without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.

    Read the article

  • Transparent Data Encryption Helps Customers Address Regulatory Compliance

    - by Troy Kitch
    Regulations such as the Payment Card Industry Data Security Standards (PCI DSS), U.S. state security breach notification laws, HIPAA HITECH and more, call for the use of data encryption or redaction to protect sensitive personally identifiable information (PII). From the outset, Oracle has delivered the industry's most advanced technology to safeguard data where it lives—in the database. Oracle provides a comprehensive portfolio of security solutions to ensure data privacy, protect against insider threats, and enable regulatory compliance for both Oracle and non-Oracle Databases. Organizations worldwide rely on Oracle Database Security solutions to help address industry and government regulatory compliance. Specifically, Oracle Advanced Security helps organizations like Educational Testing Service, TransUnion Interactive, Orbitz, and the National Marrow Donor Program comply with privacy and regulatory mandates by transparently encrypting sensitive information such as credit cards, social security numbers, and personally identifiable information (PII). By encrypting data at rest and whenever it leaves the database over the network or via backups, Oracle Advanced Security provides organizations the most cost-effective solution for comprehensive data protection. Watch the video and learn why organizations choose Oracle Advanced Security with transparent data encryption.

    Read the article

  • Do private static methods in C# hurt anything?

    - by fish
    I created a private validation method for a certain validation that happens multiple times in my class (I can't store the validated data for various reasons). Now, ReSharper suggests that the method could be made static. I'm a little reluctant to do so due to known problems with static methods. It would be a private static method. My question is: can private static methods cause coupling and testing problems similar to those caused by public static methods? Is it a bad practice? I would guess not, but I'm not sure if there is a pitfall here.

    Read the article

  • SSIS Expression Tester Tool

    - by Davide Mauri
    Thanks to my friend Doug's blog I've found a very nice tool made by fellow MVP Darren Green which really helps to make SSIS developers' lives easier: http://expressioneditor.codeplex.com/Wikipage?ProjectName=expressioneditor In brief, the tool allows you to evaluate and test SSIS expressions before using them in SSIS packages. Cool and useful!

    Read the article

  • Don't Use Static? [closed]

    - by Joshiatto
    Possible Duplicate: Is static universally “evil” for unit testing and if so why does resharper recommend it? Heavy use of static methods in a Java EE web application? I submitted an application I wrote to some other architects for code review. One of them almost immediately wrote me back and said "Don't use "static". You can't write automated tests with static classes and methods. "Static" is to be avoided." I checked and fully 1/4 of my classes are marked "static". I use static when I am not going to create an instance of a class because the class is a single global class used throughout the code. He went on to mention something involving mocking, IOC/DI techniques that can't be used with static code. He says it is unfortunate when 3rd party libraries are static because of their un-testability. Is this other architect correct?
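
    The testability objection the reviewing architect is raising usually comes down to substitution: a static call is bound at compile time, while a dependency passed in through an interface can be replaced by a fake in a test. A minimal Java sketch of that contrast follows; all names here are hypothetical and not taken from the application under review.

      // Static version: every caller is hard-wired to the real clock,
      // so a unit test cannot substitute a deterministic one.
      final class StaticClock {
          static long now() { return System.currentTimeMillis(); }
      }

      class OrderServiceStatic {
          boolean isExpired(long orderTimestamp, long ttlMillis) {
              return StaticClock.now() - orderTimestamp > ttlMillis;   // untestable boundary
          }
      }

      // Injected version: the dependency is an interface, so a test can pass a fixed clock.
      interface Clock {
          long now();
      }

      class OrderService {
          private final Clock clock;

          OrderService(Clock clock) { this.clock = clock; }

          boolean isExpired(long orderTimestamp, long ttlMillis) {
              return clock.now() - orderTimestamp > ttlMillis;
          }
      }

      class OrderServiceDemo {
          public static void main(String[] args) {
              // In a test, a deterministic clock stands in for the real one.
              OrderService service = new OrderService(() -> 10_000L);
              System.out.println(service.isExpired(1_000L, 5_000L));   // true: 9s elapsed > 5s TTL
          }
      }

    Stateless, pure helper methods (string formatting, math) are generally safe to leave static; the pain shows up when static code touches time, files, databases, or other shared state.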

    Read the article

  • Temporarily share/deploy a python (flask) application

    - by Jeff
    Goal: Temporarily (1 month?) deploy/share a python (flask) web app without expensive/complex hosting.
    More info: I've developed a basic mobile web app for the non-profit I work for. It's written in python and uses flask as its framework. I'd like to share this with other employees and beta testers (<25 people). Ideally, I could get some sort of simple hosting space/service and push regular updates to it while we test and iterate on this app. Think something along the lines of dropbox, which of course would not work for this purpose. We do have a website, and hosting services for it, but I'm concerned about using this resource as our website is mission critical and this app is very much pre-alpha at this point.
    Options I've researched / considered:
    - Self host from local machine/network (slow, unreliable)
    - Purchase hosting space (with limited non-profit resources, I'm concerned this is overkill)
    - Using our current web server / hosting (not appropriate for testing)
    Thanks very much for your time!

    Read the article

  • Showing schema.org aggregate rating in Google rich snippets

    - by nickh
    After adding schema.org microdata markup for reviews and aggregate ratings, I expected review and rating information to show up in rich snippets. Unfortunately, neither is being shown. Google's Structured Data Testing Tool finds the microdata, and there are no errors or warnings on the page. Any idea what's wrong with the microdata markup?
    Example 1:
    Live Page: http://www.shelflife.net/ljn-thundercats-series-3/bengali
    Google Test: http://www.google.com/webmasters/tools/richsnippets?q=http%3A%2F%2Fwww.shelflife.net%2Fljn-thundercats-series-3%2Fbengali
    Example 2:
    Live Page: http://www.shelflife.net/star-wars-mighty-muggs/asajj-ventress
    Google Test: http://www.google.com/webmasters/tools/richsnippets?q=http%3A%2F%2Fwww.shelflife.net%2Fstar-wars-mighty-muggs%2Fasajj-ventress

    Read the article

  • QotD: Roger Yeung on Oracle's Java Uninstall Applet

    We have a build of an Applet that will assist in the removal of older versions of the JRE. The Applet is available for testing on http://java.com/uninstall-tool . At this stage the Applet only targets the Windows platform, as it represents the largest installed base and the need for platform-specific elements made Windows the logical starting point. We are deliberately not providing documentation on how to use the applet - we want feedback on the tool standing on its own. The intent of making this build available is to gather feedback: ideas, suggestions, comments, good and bad, what works, what does not work, what could be improved, etc. Please try it out and give us feedback to ensure a smooth release. Roger Yeung, in a post with more details on providing feedback.

    Read the article

  • Oracle VM Templates Enable Oracle Real Application Clusters Deployment Up to 10 Times Faster than VMware vSphere

    - by Angela Poth
    Oracle VM - Quantifying the Value of Application-Driven Deployment. The recently published report by the Evaluator Group, “Oracle VM – Quantifying The Value of Application-Driven Virtualization,“ validates Oracle’s ease of use and time savings over VMware vSphere 5. The report found that users can deploy Oracle Real Application Clusters up to 10 times faster with Oracle VM Templates than with VMware vSphere. Using Oracle VM Templates, users can deploy Oracle E-Business Suite nearly 7 times faster than with VMware vSphere. "Oracle VM’s application-driven architecture was built to enable rapid deployment of enterprise applications, with simplified integrated lifecycle management, to fully support Oracle applications. Our testing has demonstrated 90 percent improvement deploying Oracle Real Application Clusters 11g R2 using Oracle VM Templates and a similar improvement of 85 percent for the Oracle E-Business Suite 12.1.1. This clearly shows that Templates can provide real value to customers and should become an essential component for enabling rapid enterprise application deployment and management," said Leah Schoeb, Senior Partner, Evaluator Group. Read the Press Release. Get a copy of the Evaluator Group report: Oracle VM – Quantifying The Value of Application-Driven Virtualization

    Read the article

  • Tester that doesn't test

    - by George
    What should I do about a tester that does not test? We have a complicated dry-run scenario that takes a lot of time to execute. Mostly this tester executes his tests in a very slow way... checking emails, the internet, etc. He reports just a few bugs, but! Whenever the official dry-run begins (these are logged with TestLink) the tester starts to open new bugs that were not discovered before. Is he not doing his job correctly? Or am I just overlooking how tests work? I'm not his supervisor, but he is testing code that I wrote.

    Read the article

  • Which simple Java JPA ORM tool to use?

    - by Guillaume
    What Java ORM library implementing JPA that matches the following criteria would you recommend, and why?
    - free & open source
    - alive (at least bug fixes and a mailing list)
    - with good documentation
    - simple
    - not Hibernate (this one is well-known :-) )
    I need to select a simple ORM tool that can be set up in minutes and is easy to understand for writing simple CRUD DAOs. A query builder will be an interesting plus. Later I may have to move to Hibernate; that's why being JPA compliant is a must. I have found some candidates on the web, but not much feedback, so I will gladly take your advice on the topic.
    ----- EDIT ---------
    I have been successfully testing Ebean/Avaje with a small test case. Does anyone have feedback on using this tool in production?
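
    Whichever JPA implementation ends up being chosen (EclipseLink, the Ebean/Avaje JPA layer, or Hibernate later on), the DAO code itself can stay provider-neutral by sticking to the javax.persistence API. A rough sketch of a simple CRUD DAO follows; the Customer entity and the "demo" persistence-unit name are assumptions for illustration, not part of the original question.

      import javax.persistence.Entity;
      import javax.persistence.EntityManager;
      import javax.persistence.EntityManagerFactory;
      import javax.persistence.GeneratedValue;
      import javax.persistence.Id;
      import javax.persistence.Persistence;

      // Hypothetical entity; the table name is derived from the class name by default.
      @Entity
      public class Customer {
          @Id @GeneratedValue
          private Long id;
          private String name;

          protected Customer() {}                    // no-arg constructor required by JPA
          public Customer(String name) { this.name = name; }
          public Long getId() { return id; }
          public String getName() { return name; }
      }

      class CustomerDao {
          // "demo" must match a persistence-unit defined in persistence.xml.
          private final EntityManagerFactory emf =
                  Persistence.createEntityManagerFactory("demo");

          Customer save(Customer c) {
              EntityManager em = emf.createEntityManager();
              try {
                  em.getTransaction().begin();
                  em.persist(c);
                  em.getTransaction().commit();
                  return c;
              } finally {
                  em.close();
              }
          }

          Customer findById(Long id) {
              EntityManager em = emf.createEntityManager();
              try {
                  return em.find(Customer.class, id);
              } finally {
                  em.close();
              }
          }
      }

    Because only standard JPA annotations and the EntityManager API are used, a later move to Hibernate should mostly be a matter of swapping the provider in persistence.xml.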

    Read the article

  • In which cases Robolectric is a relevant solution?

    - by Francis Toth
    As you may know, Robolectric is a framework that provides stubs for Android objects, in order to make tests runnable outside the Dalvik environment. My concern is that, by doing this, one can fake a third-party library, which is, I believe, not a good practice (it should be encapsulated instead). If you make assumptions about an interface you don't own, and that interface changes after your test has been written, you won't always be notified of the modifications. This can lead to a mismatch between your implementations and the interface they depend on. In addition, Android mostly uses inheritance over interfaces, which limits contract testing. So here's my question: are there situations where Robolectric is the way to go? Here are some links you can check for further information: test-doubles-with-mockito in-brief-contract-tests
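
    The alternative the poster alludes to - encapsulating the third-party or platform API behind an interface you own instead of faking the library directly - can be sketched in plain Java without any Android types. The names below are hypothetical; the production adapter that wraps the real Android class is omitted and would be covered by a thin integration test.

      // The app depends on a contract it owns, not on the platform class directly,
      // so unit tests fake the owned contract and never need a Dalvik environment.
      interface PreferencesStore {
          String read(String key, String fallback);
          void write(String key, String value);
      }

      // Plain-JVM fake used in unit tests for everything that depends on PreferencesStore.
      class InMemoryPreferencesStore implements PreferencesStore {
          private final java.util.Map<String, String> data = new java.util.HashMap<>();
          public String read(String key, String fallback) { return data.getOrDefault(key, fallback); }
          public void write(String key, String value) { data.put(key, value); }
      }

      class GreetingFeature {
          private final PreferencesStore prefs;
          GreetingFeature(PreferencesStore prefs) { this.prefs = prefs; }

          String greeting() {
              return "Hello, " + prefs.read("username", "guest");
          }
      }

      class GreetingFeatureDemo {
          public static void main(String[] args) {
              PreferencesStore fake = new InMemoryPreferencesStore();
              fake.write("username", "Francis");
              System.out.println(new GreetingFeature(fake).greeting());   // Hello, Francis
          }
      }

    With this split, something like Robolectric remains mainly useful for the thin Android-facing layer (activities, views, resources) where the platform cannot be pushed behind an interface you own.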

    Read the article

  • QotD - Nicolas de Loof on AdoptOpenJDK

    The AdoptOpenJDK program is an initiative to get as many Java users as possible to try the OpenJDK 8 preview builds, so that feedback is collected before JDK 8 is officially released. There are many ways to contribute to this program (as explained on the wiki), but the most basic one is to start testing your own project on the Java 8 platform. CloudBees can help you there, as we just made OpenJDK 8 (preview) available on DEV@cloud so that you can configure a build job to check project compatibility. We will upgrade the JDK for all recent preview builds until JDK 8 is final. Nicolas de Loof, Support Engineer at CloudBees, in a blog post on AdoptOpenJDK.

    Read the article

  • Webcast Replay Available: Scrambling Sensitive Data in E-Business Suite Release 12 Cloned Environments

    - by BillSawyer
    I am pleased to release the replay and presentation for the ATG Live Webcast Scrambling Sensitive Data in EBS 12 Cloned Environments (Presentation). Eric Bing, Senior Director, Jagan Athreya, Enterprise Manager Product Management, and Elke Phelps, Senior Principal Product Manager, discussed the Oracle E-Business Suite Template for Data Masking Pack, and how it can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table. Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers. (July 2012) Finding other recorded ATG webcasts: The catalog of ATG Live Webcast replays, presentations, and all ATG training materials is available in this blog's Webcasts and Training section.

    Read the article

  • Tab Sweep - More OSGi, Coherence, Oracle Java moves, JMS 2.0 and more

    - by alexismp
    Recent Tips and News on Java, Java EE 6, GlassFish & more:
    • Why I will use Java EE (JEE, and not J2EE) instead of Spring in new Enterprise Java Projects (Kai)
    • What is Happening vs. What is Interesting (Geertjan)
    • Oracle Coherence & Oracle Service Bus: REST API Integration (Nino)
    • Oracle's Top 10 Java Moves of 2011 (eWeek)
    • JEP 122: Remove the Permanent Generation (OpenJDK.org)
    • JEE6 – Glassfish 3.1, Clustering & Failover (Xebia.fr)
    • Testing LAZY mechanism in EJB 3 (e-blog-java)
    • Discoing with Vorpal (Chuk)
    • Devoxx : les évolutions de JMS 2.0 (Ippon.fr)
    • More OSGi... (Jarda)
    • Practical Migration to Java 7 - Small Codeexamples (FOSSLC)
    • Coherence Part III : Filtres (Zenika.com)

    Read the article

  • Need clarification concerning Windows Azure

    - by SnOrfus
    I basically need some confirmation and clarification concerning Windows Azure with respect to a Silverlight application using RIA Services. In a normal Silverlight app that uses RIA Services you have 2 projects: App and App.Web, where App is the default client-side Silverlight and App.Web is the server-side code where your RIA services go. If you create a Windows Azure app and add a WCF Web Services Role, you get: App (Azure project) and App.Services (WCF Services project). In App.Services, you add your RIA DomainService(s). You would then add another project to this solution that would be the client-side Silverlight that accesses the RIA services in the App.Services project. You can then add the entity model to App.Services, or to another project that is referenced by App.Services (if that division is required for unit testing etc.), and connect that entity model to either a SQL Server db or a SQL Azure instance. Is this correct? If not, what is the general 'layout' for building an application with the following tiers: UI (Silverlight 4), Services (RIA Services), Entity/Domain (EF 4), Data (SQL Server)?

    Read the article

  • Load Balancer impact on web development

    - by confusedGeek
    This question has its roots in a SharePoint site that I am helping with. Background on the issue I dealt with: the dev box and integration server are not set up behind a load balancer. The links were being built using the HttpRequest.Url value from the current context. Note that the links weren't relative links but full URIs. Once we deployed to testing (which has a LB, amongst other things) we received errors on the links being built, since the server had an address of "http://some.site.org:999" while the address at the LB was "https://site.org" (SSL was off-loaded at the LB). The fix was easy enough by using relative URIs. The Question: since this is the first site I've worked with that's behind a load balancer, I'm wondering if there are other gotchas that I need to consider when developing a site behind one?

    Read the article

  • Junior software developer - How to understand web applications in depth?

    - by nat_gr
    I am currently a junior developer in web applications, specifically in ASP.NET MVC. My problem is that the senior C# developer in the company has no experience with this technology, so I am trying to learn without any guidance. I went through all the tutorials (e.g. the music store), CodePlex projects, and also read Pro ASP.NET MVC 4. However, most of the examples are about CRUD and e-commerce applications. What I don't understand is how dependency injection fits in web applications (I have realized it is not only used to facilitate unit testing), or when I should use a custom model binder, or how to model the business logic when there is already a database schema in place. I read the forum quite often, and it would be very helpful if some experienced developer could give me some insight into how to proceed. Do I need to read some books to understand the overall idea behind web applications? And what kind of application should I start building myself? I don't think it would be useful to create examples similar to the tutorials.

    Read the article

  • Upgrade 11g Seminar

    - by Lajos Sárecz
    On June 9 we are holding a seminar on Oracle Database 11g upgrades in Budapest, with Mike Dietrich participating! If you don't know Mike yet and you are planning an Oracle Database upgrade, now is the time to get to know him, and the event on June 9 is an excellent opportunity to do so: Mike is Oracle's foremost upgrade expert. He gives numerous upgrade seminars and, not least, has an excellent blog on the subject: http://blogs.oracle.com/UPGRADE/ The event will focus on upgrade tips & tricks and on how to avoid the pitfalls that can come up during an upgrade. During the seminar we will give an overview of the Oracle Database 11gR2 upgrade process and the necessary preparatory steps. Over the course of the day we will discuss upgrade strategies that can be carried out with minimal downtime, and we will pay particular attention to performance tuning using SQL Plan Management and two features of Real Application Testing: SQL Performance Analyzer and Database Replay. To close, we will share the experiences of a few customers with you. The venue will be the Ramada Plaza Budapest, where we warmly welcome all our customers and partners. Registration is available on the event's website.

    Read the article

  • What Counts For a DBA: Simplicity

    - by Louis Davidson
    Too many computer processes do an apparently simple task in a bizarrely complex way. They remind me of this strip by one of my favorite artists: Rube Goldberg. In order to keep the boss from knowing one was late, a process is devised whereby the cuckoo clock kisses a live cuckoo bird, who then pulls a string, which triggers a hat flinging, which in turn lands on a rod that removes a typewriter cover…and so on. We rely on creating automated processes to keep on top of tasks. DBAs have a lot of tasks to perform: backups, performance tuning, data movement, system monitoring, and of course, avoiding being noticed. Every day, there are many steps to perform to maintain the database infrastructure, including checking physical structures, re-indexing tables where needed, backing up the databases, checking those backups, running the ETL, and preparing the daily reports. And yes, all of these processes have to complete before you can call it a day, and probably before many others have started that same day. Some of these tasks are just naturally complicated on their own. Other tasks become complicated because the database architecture is excessively rigid, and we often discover during "production testing" that certain processes need to be changed because the written requirements barely resembled the actual customer requirements. Then, with no time to change that rigid structure, we are forced to heap layer upon layer of code onto the problematic processes. Instead of a slight table change and a new index, we end up with 4 new ETL processes, 20 temp tables, 30 extra queries, and 1000 lines of SQL code. Report writers then need to build reports and make magical numbers appear from those toxic data structures that are overly complex and probably filled with inconsistent data. What starts out as a collection of fairly simple tasks turns into a Goldbergian nightmare of daily processes that are likely to cause your dinner to be interrupted by the smartphone doing the vibration dance that signifies trouble at the mill. So what to do? Well, if it is at all possible, simplify the problem by either going into the code and refactoring the complex code into something simple, or taking all of the processes and simplifying them into small, independent, easily tested steps. The former approach usually requires agreement on changing underlying structures, which requires countless mind-numbing meetings; the latter can generally be done to any complex process without the same frustration or anger. Though it will still leave you with lots of steps to complete, the ability to test each step independently will definitely increase the quality of the overall process (and with each step reporting status back, finding an actual problem within the process will definitely be less unpleasant). We all know the principle behind simplifying a sequence of processes because we learned it in math classes in our early years of school, starting with elementary school. In my 4 years (ok, 9 years) of undergraduate work, I remember pretty much one thing from my many math classes that I apply daily to my career as a data architect, data programmer, and occasional indentured DBA: "show your work". This process of showing your work was my first lesson in simplification. Each step in the process was, in fact, far simpler than the entire process.
    When you were working an equation that took both sides of 4 sheets of paper, showing your work was important because the teacher could see every step, judge it, and mark it accordingly. So often I would make an error in the first few lines of a problem, which meant that the rest of the work was actually moving me closer to a very wrong answer, no matter how correct the math was in the subsequent steps. Yet, when I got my grade back, I would sometimes be pleasantly surprised. I passed, yet missed every problem on the test. But why? While I got the fact that 1+1=2 wrong in every problem, the teacher could see that I was using the right process. A computer process is very similar. We take complex processes, show our work by storing intermediate values, and test each step independently. When a process has 100 steps, each step becomes a simple step that is tested and verified, such that there will be 100 places where data is stored, validated, and can be checked off as complete. If you get step 1 of 100 wrong, you can fix it and be confident (assuming you did a better job of testing the other steps than the one you had to repair) that the rest of the process works. If you have 100 steps and store the state of the process exactly once, the resulting testable chunk of code will be far more complex, and finding the error will require checking all 100 steps as one; it would usually be easier to find a specific needle in a stack of similarly shaped needles. The goal is to strive for simplicity either in the solution, or at least by breaking every process down into as many independent, testable, simple tasks as possible. For the tasks that really can't be done completely independently, at a minimum take those tasks and break them down into simpler steps that can be tested independently. Like working out division problems longhand, have each step of the larger problem verified and tested.

    Read the article

  • Thoughts of Cloud Development/Google App Engine

    - by jiewmeng
    I use mainly PHP for web development, but recently I started thinking about using Google App Engine. It doesn't use PHP, which I am already familiar with, so there will be a steeper learning curve, probably using Python/Django. But I think it may be worthwhile. Some advantages I see:
    - Focus on the app/development. No need to set up or maintain a server... no more server configs
    - Scales automatically
    - Pay for what you use. Free for low usage
    - Reliable, it's Google after all
    Some concerns though:
    - Does a database with no joins pose a problem for those who have used App Engine before?
    - Do I have to upload to Google just to test? Will it be slow compared to testing locally?
    What are your thoughts and opinions? Why would you use or not use App Engine?

    Read the article

  • Implementing Oracle Exadata for Oracle Utilities Customer Care And Billing

    - by ACShorten
    In association with our performance team, a new whitepaper has been released for Oracle Utilities Customer Care and Billing that outlines the best practices for using Oracle Exadata with that product. The advice in the whitepaper is based on certification and performance testing performed by our internal performance teams to assist sites implementing the database component of Oracle Utilities Customer Care and Billing on an Oracle Exadata platform. It is recommended that the contents of this whitepaper be used alongside existing best practices for the Oracle Exadata platform. The whitepaper is available from My Oracle Support under Implementing Oracle Exadata with Oracle Utilities Customer Care and Billing (Doc Id: 1486886.1)

    Read the article
