Search Results

Search found 4783 results on 192 pages for 'tests'.

  • Are "TDD Tests" different to Unit Tests?

    - by asgeo1
    I read this article about TDD and unit testing: http://stephenwalther.com/blog/archive/2009/04/11/tdd-tests-are-not-unit-tests.aspx I think it was an excellent article. The author makes a distinction between what he calls "TDD Tests" and unit testing. They appear to be different tests to him. Previous to reading this article I thought unit tests were a by-product of TDD. I didn't realise you might also create "TDD tests". The author seems to imply that creating unit tests is not enough for TDD as the granularity of a unit test is too small for what we are trying to achieve with TDD. So his TDD tests might test a few classes at once. At the end of the article there is some discussion from the author with some other people about whether there really is a distinction between "TDD Tests" and unit testing. Seems to be some contention around this idea. The example "TDD tests" the author showed at the end of the article just looked like normal MVC unit tests to me - perhaps "TDD tests" vs unit tests is just a matter of semantics? I would like to hear some more opinions on this, and whether there is / isn't a distinction between the two tests.
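    For illustration only, here is a minimal NUnit sketch of the granularity difference being debated; TaxRule and OrderCalculator are made-up classes, not code from the article. The first fixture is a classic single-class unit test, the second is a coarser, behaviour-level test that drives two collaborating classes through one entry point, which is roughly what the author calls a "TDD test":

    using NUnit.Framework;

    // Hypothetical classes, invented for this illustration.
    public class TaxRule
    {
        public decimal Apply(decimal amount) { return amount * 1.20m; }
    }

    public class OrderCalculator
    {
        private readonly TaxRule _taxRule;
        public OrderCalculator(TaxRule taxRule) { _taxRule = taxRule; }
        public decimal Total(decimal subtotal) { return _taxRule.Apply(subtotal); }
    }

    [TestFixture]
    public class TaxRuleTests
    {
        // Classic unit test: exercises a single class in isolation.
        [Test]
        public void Apply_AddsTwentyPercentTax()
        {
            Assert.AreEqual(120m, new TaxRule().Apply(100m));
        }
    }

    [TestFixture]
    public class OrderBehaviourTests
    {
        // Coarser, behaviour-level test: drives two collaborating classes
        // through a single entry point, closer to what the article calls a "TDD test".
        [Test]
        public void Total_IncludesTax()
        {
            var calculator = new OrderCalculator(new TaxRule());
            Assert.AreEqual(120m, calculator.Total(100m));
        }
    }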

    Read the article

  • Integrating JavaScript Unit Tests with Visual Studio

    - by Stephen Walther
    Modern ASP.NET web applications take full advantage of client-side JavaScript to provide better interactivity and responsiveness. If you are building an ASP.NET application in the right way, you quickly end up with lots and lots of JavaScript code. When writing server code, you should be writing unit tests. One big advantage of unit tests is that they provide you with a safety net that enable you to safely modify your existing code – for example, fix bugs, add new features, and make performance enhancements -- without breaking your existing code. Every time you modify your code, you can execute your unit tests to verify that you have not broken anything. For the same reason that you should write unit tests for your server code, you should write unit tests for your client code. JavaScript is just as susceptible to bugs as C#. There is no shortage of unit testing frameworks for JavaScript. Each of the major JavaScript libraries has its own unit testing framework. For example, jQuery has QUnit, Prototype has UnitTestJS, YUI has YUI Test, and Dojo has Dojo Objective Harness (DOH). The challenge is integrating a JavaScript unit testing framework with Visual Studio. Visual Studio and Visual Studio ALM provide fantastic support for server-side unit tests. You can easily view the results of running your unit tests in the Visual Studio Test Results window. You can set up a check-in policy which requires that all unit tests pass before your source code can be committed to the source code repository. In addition, you can set up Team Build to execute your unit tests automatically. Unfortunately, Visual Studio does not provide “out-of-the-box” support for JavaScript unit tests. MS Test, the unit testing framework included in Visual Studio, does not support JavaScript unit tests. As soon as you leave the server world, you are left on your own. The goal of this blog entry is to describe one approach to integrating JavaScript unit tests with MS Test so that you can execute your JavaScript unit tests side-by-side with your C# unit tests. The goal is to enable you to execute JavaScript unit tests in exactly the same way as server-side unit tests. You can download the source code described by this project by scrolling to the end of this blog entry. Rejected Approach: Browser Launchers One popular approach to executing JavaScript unit tests is to use a browser as a test-driver. When you use a browser as a test-driver, you open up a browser window to execute and view the results of executing your JavaScript unit tests. For example, QUnit – the unit testing framework for jQuery – takes this approach. The following HTML page illustrates how you can use QUnit to create a unit test for a function named addNumbers(). 
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <title>Using QUnit</title> <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" /> </head> <body> <h1 id="qunit-header">QUnit example</h1> <h2 id="qunit-banner"></h2> <div id="qunit-testrunner-toolbar"></div> <h2 id="qunit-userAgent"></h2> <ol id="qunit-tests"></ol> <div id="qunit-fixture">test markup, will be hidden</div> <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script> <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script> <script type="text/javascript"> // The function to test function addNumbers(a, b) { return a+b; } // The unit test test("Test of addNumbers", function () { equals(4, addNumbers(1,3), "1+3 should be 4"); }); </script> </body> </html> This test verifies that calling addNumbers(1,3) returns the expected value 4. When you open this page in a browser, you can see that this test does, in fact, pass. The idea is that you can quickly refresh this QUnit HTML JavaScript test driver page in your browser whenever you modify your JavaScript code. In other words, you can keep a browser window open and keep refreshing it over and over while you are developing your application. That way, you can know very quickly whenever you have broken your JavaScript code. While easy to set up, this approach to executing JavaScript unit tests has several big disadvantages: You must view your JavaScript unit test results in a different location than your server unit test results. The JavaScript unit test results appear in the browser and the server unit test results appear in the Visual Studio Test Results window. Because all of your unit test results don’t appear in a single location, you are more likely to introduce bugs into your code without noticing it. Because your unit tests are not integrated with Visual Studio – in particular, MS Test -- you cannot easily include your JavaScript unit tests when setting up check-in policies or when performing automated builds with Team Build. A more sophisticated approach to using a browser as a test-driver is to automate the web browser. Instead of launching the browser and loading the test code yourself, you use a framework to automate this process. There are several different testing frameworks that support this approach: Selenium – Selenium is a very powerful framework for automating browser tests. You can create your tests by recording a Firefox session or by writing the test driver code in server code such as C#. You can learn more about Selenium at http://seleniumhq.org/. LTAF – The ASP.NET team uses the Lightweight Test Automation Framework to test JavaScript code in the ASP.NET framework. You can learn more about LTAF by visiting the project home at CodePlex: http://aspnet.codeplex.com/releases/view/35501 jsTestDriver – This framework uses Java to automate the browser. jsTestDriver creates a server which can be used to automate multiple browsers simultaneously. This project is located at http://code.google.com/p/js-test-driver/ TestSwarm – This framework, created by John Resig, uses PHP to automate the browser. Like jsTestDriver, the framework creates a test server. You can open multiple browsers that are automated by the test server. 
Learn more about TestSwarm by visiting the following address: https://github.com/jeresig/testswarm/wiki Yeti – This is the framework introduced by Yahoo for automating browser tests. Yeti uses server-side JavaScript and depends on Node.js. Learn more about Yeti at http://www.yuiblog.com/blog/2010/08/25/introducing-yeti-the-yui-easy-testing-interface/ All of these frameworks are great for integration tests – however, they are not the best frameworks to use for unit tests. In one way or another, all of these frameworks depend on executing tests within the context of a “living and breathing” browser. If you create an ASP.NET Unit Test then Visual Studio will launch a web server before executing the unit test. Why is launching a web server so bad? It is not the worst thing in the world. However, it does introduce dependencies that prevent your code from being tested in isolation. One of the defining features of a unit test -- versus an integration test – is that a unit test tests code in isolation. Another problem with launching a web server when performing unit tests is that launching a web server can be slow. If you cannot execute your unit tests quickly, you are less likely to execute your unit tests each and every time you make a code change. You are much more likely to fall into the pit of failure. Launching a browser when performing a JavaScript unit test has all of the same disadvantages as launching a web server when performing an ASP.NET unit test. Instead of testing a unit of JavaScript code in isolation, you are testing JavaScript code within the context of a particular browser. Using the frameworks listed above for integration tests makes perfect sense. However, I want to consider a different approach for creating unit tests for JavaScript code. Using Server-Side JavaScript for JavaScript Unit Tests A completely different approach to executing JavaScript unit tests is to perform the tests outside of any browser. If you really want to test JavaScript then you should test JavaScript and leave the browser out of the testing process. There are several ways that you can execute JavaScript on the server outside the context of any browser: Rhino – Rhino is an implementation of JavaScript written in Java. The Rhino project is maintained by the Mozilla project. Learn more about Rhino at http://www.mozilla.org/rhino/ V8 – V8 is the open-source Google JavaScript engine written in C++. This is the JavaScript engine used by the Chrome web browser. You can download V8 and embed it in your project by visiting http://code.google.com/p/v8/ JScript – JScript is the JavaScript Script Engine used by Internet Explorer (up to but not including Internet Explorer 9), Windows Script Host, and Active Server Pages. Internet Explorer is still the most popular web browser. Therefore, I decided to focus on using the JScript Script Engine to execute JavaScript unit tests. Using the Microsoft Script Control There are two basic ways that you can pass JavaScript to the JScript Script Engine and execute the code: use the Microsoft Windows Script Interfaces or use the Microsoft Script Control. The difficult and proper way to execute JavaScript using the JScript Script Engine is to use the Microsoft Windows Script Interfaces. You can learn more about the Script Interfaces by visiting http://msdn.microsoft.com/en-us/library/t9d4xf28(VS.85).aspx The main disadvantage of using the Script Interfaces is that they are difficult to use from .NET. 
There is a great series of articles on using the Script Interfaces from C# located at http://www.drdobbs.com/184406028. I picked the easier alternative and used the Microsoft Script Control. The Microsoft Script Control is an ActiveX control that provides a higher level abstraction over the Window Script Interfaces. You can download the Microsoft Script Control from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac After you download the Microsoft Script Control, you need to add a reference to it to your project. Select the Visual Studio menu option Project, Add Reference to open the Add Reference dialog. Select the COM tab and add the Microsoft Script Control 1.0. Using the Script Control is easy. You call the Script Control AddCode() method to add JavaScript code to the Script Engine. Next, you call the Script Control Run() method to run a particular JavaScript function. The reference documentation for the Microsoft Script Control is located at the MSDN website: http://msdn.microsoft.com/en-us/library/aa227633%28v=vs.60%29.aspx Creating the JavaScript Code to Test To keep things simple, let’s imagine that you want to test the following JavaScript function named addNumbers() which simply adds two numbers together: MvcApplication1\Scripts\Math.js function addNumbers(a, b) { return 5; } Notice that the addNumbers() method always returns the value 5. Right-now, it will not pass a good unit test. Create this file and save it in your project with the name Math.js in your MVC project’s Scripts folder (Save the file in your actual MVC application and not your MVC test application). Creating the JavaScript Test Helper Class To make it easier to use the Microsoft Script Control in unit tests, we can create a helper class. This class contains two methods: LoadFile() – Loads a JavaScript file. Use this method to load the JavaScript file being tested or the JavaScript file containing the unit tests. ExecuteTest() – Executes the JavaScript code. Use this method to execute a JavaScript unit test. Here’s the code for the JavaScriptTestHelper class: JavaScriptTestHelper.cs   using System; using System.IO; using Microsoft.VisualStudio.TestTools.UnitTesting; using MSScriptControl; namespace MvcApplication1.Tests { public class JavaScriptTestHelper : IDisposable { private ScriptControl _sc; private TestContext _context; /// <summary> /// You need to use this helper with Unit Tests and not /// Basic Unit Tests because you need a Test Context /// </summary> /// <param name="testContext">Unit Test Test Context</param> public JavaScriptTestHelper(TestContext testContext) { if (testContext == null) { throw new ArgumentNullException("TestContext"); } _context = testContext; _sc = new ScriptControl(); _sc.Language = "JScript"; _sc.AllowUI = false; } /// <summary> /// Load the contents of a JavaScript file into the /// Script Engine. /// </summary> /// <param name="path">Path to JavaScript file</param> public void LoadFile(string path) { var fileContents = File.ReadAllText(path); _sc.AddCode(fileContents); } /// <summary> /// Pass the path of the test that you want to execute. 
/// </summary> /// <param name="testMethodName">JavaScript function name</param> public void ExecuteTest(string testMethodName) { dynamic result = null; try { result = _sc.Run(testMethodName, new object[] { }); } catch { var error = ((IScriptControl)_sc).Error; if (error != null) { var description = error.Description; var line = error.Line; var column = error.Column; var text = error.Text; var source = error.Source; if (_context != null) { var details = String.Format("{0} \r\nLine: {1} Column: {2}", source, line, column); _context.WriteLine(details); } } throw new AssertFailedException(error.Description); } } public void Dispose() { _sc = null; } } } Notice that the JavaScriptTestHelper class requires a Test Context to be instantiated. For this reason, you can use the JavaScriptTestHelper only with a Visual Studio Unit Test and not a Basic Unit Test (These are two different types of Visual Studio project items). Add the JavaScriptTestHelper file to your MVC test application (for example, MvcApplication1.Tests). Creating the JavaScript Unit Test Next, we need to create the JavaScript unit test function that we will use to test the addNumbers() function. Create a folder in your MVC test project named JavaScriptTests and add the following JavaScript file to this folder: MvcApplication1.Tests\JavaScriptTests\MathTest.js /// <reference path="JavaScriptUnitTestFramework.js"/> function testAddNumbers() { // Act var result = addNumbers(1, 3); // Assert assert.areEqual(4, result, "addNumbers did not return right value!"); } The testAddNumbers() function takes advantage of another JavaScript library named JavaScriptUnitTestFramework.js. This library contains all of the code necessary to make assertions. Add the following JavaScriptUnitTestFramework.js file to the same folder as the MathTest.js file: MvcApplication1.Tests\JavaScriptTests\JavaScriptUnitTestFramework.js var assert = { areEqual: function (expected, actual, message) { if (expected !== actual) { throw new Error("Expected value " + expected + " is not equal to " + actual + ". " + message); } } }; There is only one type of assertion supported by this file: the areEqual() assertion. Most likely, you would want to add additional types of assertions to this file to make it easier to write your JavaScript unit tests. Deploying the JavaScript Test Files This step is non-intuitive. When you use Visual Studio to run unit tests, Visual Studio creates a new folder and executes a copy of the files in your project. After you run your unit tests, your Visual Studio Solution will contain a new folder named TestResults that includes a subfolder for each test run. You need to configure Visual Studio to deploy your JavaScript files to the test run folder or Visual Studio won’t be able to find your JavaScript files when you execute your unit tests. You will get an error that looks something like this when you attempt to execute your unit tests: You can configure Visual Studio to deploy your JavaScript files by adding a Test Settings file to your Visual Studio Solution. It is important to understand that you need to add this file to your Visual Studio Solution and not a particular Visual Studio project. Right-click your Solution in the Solution Explorer window and select the menu option Add, New Item. Select the Test Settings item and click the Add button. After you create a Test Settings file for your solution, you can indicate that you want a particular folder to be deployed whenever you perform a test run. 
Select the menu option Test, Edit Test Settings to edit your test configuration file. Select the Deployment tab and select your MVC test project’s JavaScriptTest folder to deploy. Click the Apply button and the Close button to save the changes and close the dialog. Creating the Visual Studio Unit Test The very last step is to create the Visual Studio unit test (the MS Test unit test). Add a new unit test to your MVC test project by selecting the menu option Add New Item and selecting the Unit Test project item (Do not select the Basic Unit Test project item): The difference between a Basic Unit Test and a Unit Test is that a Unit Test includes a Test Context. We need this Test Context to use the JavaScriptTestHelper class that we created earlier. Enter the following test method for the new unit test: [TestMethod] public void TestAddNumbers() { var jsHelper = new JavaScriptTestHelper(this.TestContext); // Load JavaScript files jsHelper.LoadFile("JavaScriptUnitTestFramework.js"); jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js"); jsHelper.LoadFile("MathTest.js"); // Execute JavaScript Test jsHelper.ExecuteTest("testAddNumbers"); } This code uses the JavaScriptTestHelper to load three files: JavaScriptUnitTestFramework.js – Contains the assert functions. Math.js – Contains the addNumbers() function from your MVC application which is being tested. MathTest.js – Contains the JavaScript unit test function. Next, the test method calls the JavaScriptTestHelper ExecuteTest() method to execute the testAddNumbers() JavaScript function. Running the Visual Studio JavaScript Unit Test After you complete all of the steps described above, you can execute the JavaScript unit test just like any other unit test. You can use the keyboard combination CTRL-R, CTRL-A to run all of the tests in the current Visual Studio Solution. Alternatively, you can use the buttons in the Visual Studio toolbar to run the tests: (Unfortunately, the Run All Impacted Tests button won’t work correctly because Visual Studio won’t detect that your JavaScript code has changed. Therefore, you should use either the Run Tests in Current Context or Run All Tests in Solution options instead.) The results of running the JavaScript tests appear side-by-side with the results of running the server tests in the Test Results window. For example, if you Run All Tests in Solution then you will get the following results: Notice that the TestAddNumbers() JavaScript test has failed. That is good because our addNumbers() function is hard-coded to always return the value 5. If you double-click the failing JavaScript test, you can view additional details such as the JavaScript error message and the line number of the JavaScript code that failed: Summary The goal of this blog entry was to explain an approach to creating JavaScript unit tests that can be easily integrated with Visual Studio and Visual Studio ALM. I described how you can use the Microsoft Script Control to execute JavaScript on the server. By taking advantage of the Microsoft Script Control, we were able to execute our JavaScript unit tests side-by-side with all of our other unit tests and view the results in the standard Visual Studio Test Results window. 
You can download the code discussed in this blog entry from here: http://StephenWalther.com/downloads/Blog/JavaScriptUnitTesting/JavaScriptUnitTests.zip Before running this code, you need to first install the Microsoft Script Control which you can download from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac

    Read the article

  • What label of tests are BizUnit tests?

    - by charlie.mott
    BizUnit is defined as a "Framework for Automated Testing of Distributed Systems". However, I've never seen a catchy label to describe what sort of tests we create using this framework. They are not really “Unit Tests”, that's for sure. "Integration Tests" might be a good definition, but I want a label that clearly separates them from the manual "System Integration Testing" phase of a project where real instances of the integrated systems are used. Among some colleagues, we brainstormed some suggestions: Automated Integration Tests Stubbed Integration Tests Sandbox Integration Tests Localised Integration Tests All give a good view of the sorts of tests that are being done. I think "Stubbed Integration Tests" is the most catchy and descriptive. So I will use that until someone comes up with a better idea.

    Read the article

  • django: failing tests from django.contrib.auth

    - by gruszczy
    When I run my django test I get following errors, that are outside of my test suite: ====================================================================== ERROR: test_known_user (django.contrib.auth.tests.remote_user.RemoteUserCustomTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/django/contrib/auth/tests/remote_user.py", line 160, in test_known_user super(RemoteUserCustomTest, self).test_known_user() File "/usr/lib/pymodules/python2.6/django/contrib/auth/tests/remote_user.py", line 67, in test_known_user self.assertEqual(response.context['user'].username, 'knownuser') TypeError: 'NoneType' object is unsubscriptable ====================================================================== ERROR: test_last_login (django.contrib.auth.tests.remote_user.RemoteUserCustomTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/django/contrib/auth/tests/remote_user.py", line 87, in test_last_login self.assertNotEqual(default_login, response.context['user'].last_login) TypeError: 'NoneType' object is unsubscriptable ====================================================================== ERROR: test_no_remote_user (django.contrib.auth.tests.remote_user.RemoteUserCustomTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/django/contrib/auth/tests/remote_user.py", line 33, in test_no_remote_user self.assert_(isinstance(response.context['user'], AnonymousUser)) TypeError: 'NoneType' object is unsubscriptable ====================================================================== ERROR: test_unknown_user (django.contrib.auth.tests.remote_user.RemoteUserCustomTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/django/contrib/auth/tests/remote_user.py", line 168, in test_unknown_user super(RemoteUserCustomTest, self).test_unknown_user() File "/usr/lib/pymodules/python2.6/django/contrib/auth/tests/remote_user.py", line 51, in test_unknown_user self.assertEqual(response.context['user'].username, 'newuser') TypeError: 'NoneType' object is unsubscriptable ====================================================================== ERROR: test_known_user (django.contrib.auth.tests.remote_user.RemoteUserNoCreateTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/django/contrib/auth/tests/remote_user.py", line 67, in test_known_user self.assertEqual(response.context['user'].username, 'knownuser') TypeError: 'NoneType' object is unsubscriptable ====================================================================== ERROR: test_last_login (django.contrib.auth.tests.remote_user.RemoteUserNoCreateTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/django/contrib/auth/tests/remote_user.py", line 87, in test_last_login self.assertNotEqual(default_login, response.context['user'].last_login) TypeError: 'NoneType' object is unsubscriptable ====================================================================== ERROR: test_no_remote_user (django.contrib.auth.tests.remote_user.RemoteUserNoCreateTest) ---------------------------------------------------------------------- 
Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/django/contrib/auth/tests/remote_user.py", line 33, in test_no_remote_user self.assert_(isinstance(response.context['user'], AnonymousUser)) TypeError: 'NoneType' object is unsubscriptable ====================================================================== ERROR: test_unknown_user (django.contrib.auth.tests.remote_user.RemoteUserNoCreateTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/django/contrib/auth/tests/remote_user.py", line 118, in test_unknown_user self.assert_(isinstance(response.context['user'], AnonymousUser)) TypeError: 'NoneType' object is unsubscriptable ====================================================================== ERROR: test_known_user (django.contrib.auth.tests.remote_user.RemoteUserTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/django/contrib/auth/tests/remote_user.py", line 67, in test_known_user self.assertEqual(response.context['user'].username, 'knownuser') TypeError: 'NoneType' object is unsubscriptable ====================================================================== ERROR: test_last_login (django.contrib.auth.tests.remote_user.RemoteUserTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/django/contrib/auth/tests/remote_user.py", line 87, in test_last_login self.assertNotEqual(default_login, response.context['user'].last_login) TypeError: 'NoneType' object is unsubscriptable ====================================================================== ERROR: test_no_remote_user (django.contrib.auth.tests.remote_user.RemoteUserTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/django/contrib/auth/tests/remote_user.py", line 33, in test_no_remote_user self.assert_(isinstance(response.context['user'], AnonymousUser)) TypeError: 'NoneType' object is unsubscriptable ====================================================================== ERROR: test_unknown_user (django.contrib.auth.tests.remote_user.RemoteUserTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/django/contrib/auth/tests/remote_user.py", line 51, in test_unknown_user self.assertEqual(response.context['user'].username, 'newuser') TypeError: 'NoneType' object is unsubscriptable ====================================================================== FAIL: test_current_site_in_context_after_login (django.contrib.auth.tests.views.LoginTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/django/contrib/auth/tests/views.py", line 190, in test_current_site_in_context_after_login self.assertEquals(response.status_code, 200) AssertionError: 302 != 200 Could anyone explain me, what am I doing wrong or what I should set to get those tests pass?

    Read the article

  • Where should I draw the line between unit tests and integration tests? Should they be separate?

    - by Earlz
    I have a small MVC framework I've been working on. Its code base definitely isn't big, but it's no longer just a couple of classes. I finally decided to take the plunge and start writing tests for it (yes, I know I should've been doing that all along, but its API was super unstable up until now). Anyway, my plan is to make it extremely easy to test, including integration tests. An example integration test would go something along these lines: fake HTTP request object - MVC framework - HTTP response object - check the response is correct (see the sketch below). Because this is all doable without any state or special tools (browser automation, etc.), I could actually do this with ease with regular unit test frameworks (I use NUnit). Now the big question. Where exactly should I draw the line between unit tests and integration tests? Should I only test one class at a time (as much as possible) with unit tests? Also, should integration tests be placed in the same test project as my unit tests?
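    As a purely illustrative sketch of the kind of end-to-end-in-memory test described above: FakeHttpRequest, FakeHttpResponse and FrontController below are hypothetical stand-ins for whatever the framework under test actually exposes, and NUnit is assumed as the runner.

    using NUnit.Framework;

    // Minimal stand-ins for the framework's own types; in a real suite these
    // would come from the MVC framework under test, not from the test project.
    public class FakeHttpRequest
    {
        public string Method;
        public string Path;
        public FakeHttpRequest(string method, string path) { Method = method; Path = path; }
    }

    public class FakeHttpResponse
    {
        public int StatusCode;
        public string Body;
    }

    public static class FrontController
    {
        // Placeholder for the framework's real entry point (routing, controller
        // dispatch, view rendering) executed entirely in memory.
        public static FakeHttpResponse Process(FakeHttpRequest request)
        {
            return new FakeHttpResponse { StatusCode = 200, Body = "<h1>Home</h1>" };
        }
    }

    [TestFixture]
    public class FrameworkIntegrationTests
    {
        [Test]
        public void HomeIndex_ReturnsOkAndRendersTheHomeView()
        {
            var request = new FakeHttpRequest("GET", "/home/index");

            var response = FrontController.Process(request);   // run the whole pipeline, no web server

            Assert.AreEqual(200, response.StatusCode);
            StringAssert.Contains("<h1>Home</h1>", response.Body);
        }
    }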

    Read the article

  • What are the tangible advantages of proper Unit Tests over Functional Tests called unit tests?

    - by Jackie
    A project I am working on has a bunch of legacy tests that were not properly mocked out. Because of this, the only dependency it has is EasyMock, which doesn't support statics, constructors with arguments, etc. The tests instead rely on database connections and such to "run" the tests. Adding PowerMock to handle these cases is being shot down as cost-prohibitive due to the need to upgrade the existing project to support it (another discussion). My questions are, what are the real-world, tangible benefits of proper unit testing I can use to push back? Are there any? Am I just being a stickler by saying that bad unit tests (even if they work) are bad? Is code coverage just as effective?

    Read the article

  • Too many unit tests may be hurting agile development; they are said to be favoured over integration tests

    Too many unit tests may be hurting agile development; they are said to be favoured over integration tests. Agile development very often relies on test-driven development (TDD). Today, Mark Balbes, one of the most prominent members of Asynchrony Solutions and an expert in software development and agile project management, shares his view of the facts regarding TDD. The expert believes that agile development currently makes excessive use of TDD, and that...

    Read the article

  • Naming your unit tests

    - by kerry
    When you create a test for your class, what kind of naming convention do you use for the tests? How thorough are your tests? I have lately switched from the conventional camel case test names to lower case letters with underscores. I have found this increases readability and causes me to write better tests. A simple utility class: public class ArrayUtils { public static <T> T[] gimmeASlice(T[] anArray, Integer start, Integer end) { // implementation (feeling lazy today) } } I have seen some people who would write a test like this: public class ArrayUtilsTest { @Test public void testGimmeASliceMethod() { // do some tests } } A more thorough and readable test would be: public class ArrayUtilsTest { @Test public void gimmeASlice_returns_appropriate_slice() { // ... } @Test public void gimmeASlice_throws_NullPointerException_when_passed_null() { // ... } @Test public void gimmeASlice_returns_end_of_array_when_slice_is_partly_out_of_bounds() { // ... } @Test public void gimmeASlice_returns_empty_array_when_slice_is_completely_out_of_bounds() { // ... } } Looking at this test, you have no doubt what the method is supposed to do. And, when one fails, you will know exactly what the issue is.

    Read the article

  • A new name for unit tests

    - by Will
    I never used to like unit testing. I always thought it increased the amount of work I had to do. Turns out, that's only true in terms of the actual number of lines of code you write, and furthermore, this is completely offset by the increase in the number of lines of useful code that you can write in an hour with tests and test driven development. Now I love unit tests, as they allow me to write useful code that quite often works first time! (knock on wood) I have found that people are reluctant to do unit tests or start a project with test driven development if they are under strict timelines or in an environment where others don't do it, so they don't. Kinda like a cultural refusal to even try. I think one of the most powerful things about unit testing is the confidence that it gives you to undertake refactoring. It also gives new-found hope that I can give my code to someone else to refactor/improve, and if my unit tests still work, I can use the new version of the library that they modified, pretty much, without fear. It's this last aspect of unit testing that I think needs a new name. The unit test is more like a contract of what this code should do now, and in the future. When I hear the word testing, I think of mice in cages, with multiple experiments done on them to see the effectiveness of a compound. This is not what unit testing is: we're not trying out different code to see what is the most effective approach, we're defining what outputs we expect with what inputs. In the mice example, unit tests are more like the definitions of how the universe will work as opposed to the experiments done on the mice. Am I on crack or does anyone else see this refusal to do testing, and do they think it's a similar reason they don't want to do it? What reasons do you / others give for not testing? What do you think their motivations are in not unit testing? And as a new name for unit testing that might get over some of the objections, how about jContract? (A bit Java-centric, I know :), or Unit Contracts?

    Read the article

  • Colleague unwilling to use unit tests "as it's more to code"

    - by m.edmondson
    A colleague is unwilling to use unit tests, instead opting for a quick test, passing it to the users, and if all is well it is published live. Needless to say, some bugs do get through. I mentioned we should think about using unit tests - but she was all against it once it was realised more code would have to be written. This leaves me in the position of modifying something and not being sure the output is the same, especially as her code is spaghetti and I try to refactor it when I get a chance. So what's the best way forward for me?

    Read the article

  • Unit Testing in QTestLib - running single test / tests in class / all tests

    - by Dave
    I'm just starting to use QTestLib. I have gone through the manual and tutorial. Although I understand how to create tests, I'm just not getting how to make those tests convenient to run. My unit test background is NUnit and MSTest. In those environments, it was trivial (using a GUI, at least) to alternate between running a single test, or all tests in a single test class, or all tests in the entire project, just by clicking the right button. All I'm seeing in QTestLib is either you use the QTEST_MAIN macro to run the tests in a single class, then compile and test each file separately; or use QTest::qExec() in main() to define which objects to test, and then manually change that and recompile when you want to add/remove test classes. I'm sure I'm missing something. I'd like to be able to easily: Run a single test method Run the tests in an entire class Run all tests Any of those would call the appropriate setup / teardown functions.

    Read the article

  • Run unit tests in Jenkins / Hudson in automated fashion from dev to build server

    - by Kevin Donde
    We are currently running a Jenkins (Hudson) CI server to build and package our .NET web projects and database projects. Everything is working great, but I want to start writing unit tests and then only pass the build if the unit tests pass. We are using the built-in MSBuild task to build the web project, with the following arguments ... MsBuild Version .NET 4.0 MsBuild Build File ./WebProjectFolder/WebProject.csproj Command Line Arguments ./target:Rebuild /p:Configuration=Release;DeployOnBuild=True;PackageLocation=".\obj\Release\WebProject.zip";PackageAsSingleFile=True We need to run automated tests over our code that run automatically when we build on our machines (post build event possibly) but also run when Jenkins does a build for that project. If you run it like this it doesn't build the unit test project because the web project doesn't reference the test project. The test project would reference the web project, but I'm pretty sure that would be butchering our automated builds as they exist primarily to build and package our deployments. Running these tests should be a step in that automated build and package process. Options ... Create two Jenkins jobs: one to run the tests ... if the tests pass, another build is triggered which builds and packages the web project. Put the post build event on the test project. Build the solution instead of the project (make sure the solution contains the required tests) and put post build events on any test projects that would run the NUnit console to run the tests. Then use the command line to copy all the required files from each of the bin and content directories into a package. Just build the test project in Jenkins instead of the web project in Jenkins. The test project would reference the web project (depending on what you're testing) and build it. Problems ... There are two jobs and not one. Two things to debug, not one: one to see if the tests passed and one to build and compile the web project. The tests could pass but the build could fail if it's something that isn't used by what you're testing ... This requires us to know exactly what goes into the build. Right now MSBuild does it all for us. If you have multiple teams working on a project, every time an extra folder is created you have to worry about the possibly brittle command line statements. This seems like a corruption of our main purpose here. The tests should be a step in this process, not the overriding most important thing in it. I'm also not 100% sure that a triggered build is the same as a normal build. Does it do all the same things as a normal build: move all the correct files in the same way, move them all into the same directories, etc.? Initial problem: we want to run our tests whenever our main project is built. But adding a post build event to the web project that runs against the test project doesn't work because the web project doesn't reference the test project and won't trigger a build of this project. I could go on ... but that's enough ... We've spent about a week trying to make this work nicely but haven't succeeded. Feel free to edit this if you feel you can get a better response ...

    Read the article

  • Splitting a test to a set of smaller tests

    - by mkorpela
    I want to be able to split a big test to smaller tests so that when the smaller tests pass they imply that the big test would also pass (so there is no reason to run the original big test). I want to do this because smaller tests usually take less time, less effort and are less fragile. I would like to know if there are test design patterns or verification tools that can help me to achieve this test splitting in a robust way. I fear that the connection between the smaller tests and the original test is lost when someone changes something in the set of smaller tests. Another fear is that the set of smaller tests doesn't really cover the big test. An example of what I am aiming at: //Class under test class A { public void setB(B b){ this.b = b; } public Output process(Input i){ return b.process(doMyProcessing(i)); } private InputFromA doMyProcessing(Input i){ .. } .. } //Another class under test class B { public Output process(InputFromA i){ .. } .. } //The Big Test @Test public void theBigTest(){ A systemUnderTest = createSystemUnderTest(); // <-- expect that this is expensive Input i = createInput(); Output o = systemUnderTest.process(i); // <-- .. or expect that this is expensive assertEquals(o, expectedOutput()); } //The splitted tests @PartlyDefines("theBigTest") // <-- so something like this should come from the tool.. @Test public void smallerTest1(){ // this method is a bit too long but its just an example.. Input i = createInput(); InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow Output expected = expectedOutput(); // this should be the same in both tests and it should be ensured somehow B b = mock(B.class); when(b.process(x)).thenReturn(expected); A classUnderTest = createInstanceOfClassA(); classUnderTest.setB(b); Output o = classUnderTest.process(i); assertEquals(o, expected); verify(b).process(x); verifyNoMoreInteractions(b); } @PartlyDefines("theBigTest") // <-- so something like this should come from the tool.. @Test public void smallerTest2(){ InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow Output expected = expectedOutput(); // this should be the same in both tests and it should be ensured somehow B classUnderTest = createInstanceOfClassB(); Output o = classUnderTest.process(x); assertEquals(o, expected); }

    Read the article

  • Why don't xUnit frameworks allow tests to run in parallel?

    - by Xavier Nodet
    Do you know of any xUnit framework that allows running tests in parallel, to make use of the multiple cores in today's machines? I don't... If none (or so few) of them do it, maybe there is a reason... Is it that tests are usually so quick that people simply don't feel the need to parallelize them? Is there something deeper that precludes distributing (at least some of) the tests over multiple threads? Thanks!

    Read the article

  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing Stubbed Integration Tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggests the use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests. Stubs Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems: In contract-first scenarios, the external system interface will have been defined. But the interface may not have been set up or even developed yet for the BizTalk developers to work with. By the time you open the target location to see the data BizTalk has sent, it may have been swept away. If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed or it may be scheduled to be processed later. Learning how to use the source\target systems and investigations into where things go wrong in these systems will slow down the BizTalk development effort. By the time the data is visible in a UI it may have undergone further transformations. In larger development teams working together, do you all use the same source and target instances? How do you know which data was created by whose tests? How do you know which event log error messages are whose? Another developer may have “cleaned up” your data. It is harder to write BizUnit tests that clean up the data\logs after each test run. What if your B2B partners' source or target system cannot support the sort of testing you want to do? They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT\UAT teams. There may be licensing costs of setting up an instance of the external system. The stubs I like to use are generic stubs that can accept\return any message type. Usually I need to create one per protocol. They should be driven by BizUnit steps to: validate the data received; and select a response message (or error response). Once built, they can be re-used for many integration tests and from project to project. I’m not saying that developers should never test against a real instance. Every so often, you still need to connect to real developer or test instances of the source and target endpoints\services. The interface developers may ask you to send them some data to see if everything still works. Or you might want some messages sent to BizTalk to get confidence that everything still works beyond BizTalk. Tests Automated “Stubbed Integration Tests” are usually built using the BizUnit framework. These facilitate testing of the entire integration process from source stub to target stub. This will ensure that all of the BizTalk components are configured together correctly to meet all the requirements. More fine-grained unit testing of individual BizTalk components is still encouraged. But BizUnit provides by far the easiest way to test some component types (e.g. Orchestrations). Using BizUnit with the Behaviour Driven Development approach described by Mike Stephenson delivers the following benefits: source: http://biztalkbddsample.codeplex.com – Video 1. 
Requirements can be easily defined using Given/When/Then Requirements are close to the code so easier to manage as features and scenarios Requirements are defined in domain language The feature files can be used as part of the documentation The documentation is accurate to the build of code and can be published with a release The scenarios are effective to document the scenarios and are not over excessive The scenarios are maintained with the code There’s an abstraction between the intention and implementation of tests making them easier to understand The requirements drive the testing These same tests can also be used to drive load testing as described here. If you don't do this ... If you don't follow the above “Stubbed Integration Tests” approach, the developer will need to manually trigger the tests. This has the following risks: Developers are unlikely to check all the scenarios each time and all the expected conditions each time. After the developer leaves, these manual test steps may be lost. What test scenarios are there?  What test messages did they use for each scenario? There is no mechanism to prove adequate test coverage. A test team may attempt to automate integration test scenarios in a test environment through the triggering of tests from a source system UI. If this is a replacement for BizUnit tests, then this carries the following risks: It moves the tests downstream, so problems will be found later in the process. Testers may not check all the expected conditions within the BizTalk infrastructure such as: event logs, suspended messages, etc. These automated tests may also get in the way of manual tests run on these environments.

    Read the article

  • How to solve timing problems in automated UI tests with C# and Visual Studio?

    - by Lernkurve
    Question What is the standard approach to solve timing problems in automated UI tests? Concrete example I am using Visual Studio 2010 and Team Foundation Server 2010 to create automated UI tests and want to check whether my application has really stopped running: [TestMethod] public void MyTestMethod() { Assert.IsTrue(!IsMyAppRunning(), "App shouldn't be running, but is."); StartMyApp(); Assert.IsTrue(IsMyAppRunning(), "App should have been started and should be running now."); StopMyApp(); //Pause(500); Assert.IsTrue(!IsMyAppRunning(), "App was stopped and shouldn't be running anymore."); } private bool IsMyAppRunning() { foreach (Process runningProcesse in Process.GetProcesses()) { if (runningProcesse.ProcessName.Equals("Myapp")) { return true; } } return false; } private void Pause(int pauseTimeInMilliseconds) { System.Threading.Thread.Sleep(pauseTimeInMilliseconds); } StartMyApp() and StopMyApp() have been recorded with MS Test Manager 2010 and reside in UIMap.uitest. The last assert fails because the assertion is executed while my application is still in the process of shutting down. If I put a delay after StopApp() the test case passes. The above is just an example to explain my problem. What is the standard approach to solve these kinds of timing issues? One idea would be to wait with the assertion until I get some event notification that my app has been stopped.
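    One common way to handle this kind of race, without a fixed sleep, is to poll for the expected state with a timeout and only then assert. A minimal sketch (the helper and its timeout values are illustrative, not part of MS Test or any other framework):

    using System;

    // Illustrative helper: polls a condition until it is true or a timeout expires,
    // instead of sleeping for a fixed amount of time.
    public static class Wait
    {
        public static bool Until(Func<bool> condition, int timeoutInMilliseconds, int pollIntervalInMilliseconds)
        {
            var watch = System.Diagnostics.Stopwatch.StartNew();
            while (watch.ElapsedMilliseconds < timeoutInMilliseconds)
            {
                if (condition())
                {
                    return true;
                }
                System.Threading.Thread.Sleep(pollIntervalInMilliseconds);
            }
            return condition();   // one final check after the timeout
        }
    }

    // Usage in the test, in place of the fixed Pause(500):
    //   StopMyApp();
    //   Assert.IsTrue(Wait.Until(() => !IsMyAppRunning(), 5000, 100),
    //       "App was stopped and shouldn't be running anymore.");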

    Read the article

  • Separate Action from Assertion in Unit Tests

    - by DigitalMoss
    Setup Many years ago I took to a style of unit testing that I have come to like a lot. In short, it uses a base class to separate out the Arrangement, Action and Assertion of the test into separate method calls. You do this by defining method calls in [Setup]/[TestInitialize] that will be called before each test run. [Setup] public void Setup() { before_each(); //arrangement because(); //action } This base class usually includes the [TearDown] call as well for when you are using this setup for Integration tests. [TearDown] public void Cleanup() { after_each(); } This often breaks out into a structure where the test classes inherit from a series of Given classes that put together the setup (i.e. GivenFoo : GivenBar : WhenDoingBazz) with the Assertions being one-line tests with a descriptive name of what they are covering: [Test] public void ThenBuzzShouldBeTrue() { Assert.IsTrue(result.Buzz); } The Problem There are very few tests that wrap around a single action, so you end up with lots of classes; recently I have taken to defining the action in a series of methods within the test class itself: [Test] public void ThenBuzzShouldBeTrue() { because_an_action_was_taken(); Assert.IsTrue(result.Buzz); } private void because_an_action_was_taken() { //perform action here } This results in several "action" methods within the test class but allows grouping of similar tests (i.e. class == WhenTestingDifferentWaysToSetBuzz). The Question Does someone else have a better way of separating out the three 'A's of testing? Readability of tests is important to me so I would prefer that, when a test fails, the very naming structure of the tests communicates what has failed. If someone can read the Inheritance structure of the tests and have a good idea why the test might be failing then I feel it adds a lot of value to the tests (i.e. GivenClient : GivenUser : WhenModifyingUserPermissions : ThenReadAccessShouldBeTrue). I am aware of Acceptance Testing but this is more on a Unit (or series of units) level with boundary layers mocked. EDIT : My question is asking if there is an event or other method for executing a block of code before individual tests (something that could be applied to specific sets of tests without it being applied to all tests within a class like [Setup] currently does). Barring the existence of this event, which I am fairly certain doesn't exist, is there another method for accomplishing the same thing? Using [Setup] for every case presents a problem either way you go. Something like [Action("Category")] (a setup method that applies to specific tests within the class) would be nice but I can't find any way of doing this.
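    One possible way to approximate the [Action("Category")] idea with NUnit is to resolve a custom attribute by reflection from a base-class [SetUp]. The following is only a rough sketch, assuming an NUnit version where TestContext.CurrentContext is available during SetUp; the attribute, base class and Result type are all invented for illustration:

    using System;
    using System.Reflection;
    using NUnit.Framework;

    // Invented for illustration: marks a test with the name of the "because" method to run first.
    [AttributeUsage(AttributeTargets.Method)]
    public class ActionAttribute : Attribute
    {
        public string MethodName { get; private set; }
        public ActionAttribute(string methodName) { MethodName = methodName; }
    }

    public abstract class ActionTestBase
    {
        [SetUp]
        public void RunDeclaredAction()
        {
            // Find the currently executing test method and, if it carries an
            // [Action] attribute, invoke the named action method before the test body runs.
            var testMethod = GetType().GetMethod(TestContext.CurrentContext.Test.Name);
            if (testMethod == null) return;

            foreach (ActionAttribute action in testMethod.GetCustomAttributes(typeof(ActionAttribute), true))
            {
                GetType().InvokeMember(action.MethodName,
                    BindingFlags.InvokeMethod | BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,
                    null, this, null);
            }
        }
    }

    public class Result { public bool Buzz { get; set; } }

    [TestFixture]
    public class WhenTestingDifferentWaysToSetBuzz : ActionTestBase
    {
        private readonly Result result = new Result();   // Result is a stand-in for the real system under test

        [Test, Action("because_an_action_was_taken")]
        public void ThenBuzzShouldBeTrue()
        {
            Assert.IsTrue(result.Buzz);
        }

        private void because_an_action_was_taken()
        {
            result.Buzz = true;   // perform the real action here
        }
    }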

    Read the article

  • Learning about tests for junior programmers

    - by RHaguiuda
    I'm not sure if it's okay to ask this on Stack Overflow. I've been reading a lot about tests, unit tests, test frameworks, mocks and so on, but as a junior programmer I don't know anything about tests, not even where to start! Can anyone explain to young programmers about tests: how they're run, where and what to test, and what unit testing, integration testing and automated tests are? How much to test? And more important: how much testing is enough? I believe this would be very helpful. If possible, recommend a few books on these subjects too. Thanks
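    Not part of the original question, but to make the terms concrete for a junior reader, here is a minimal example of a unit test using NUnit (the StringUtils class exists only for illustration):

    using NUnit.Framework;

    // The code under test: one small, isolated piece of behaviour.
    public static class StringUtils
    {
        public static string Reverse(string value)
        {
            var chars = value.ToCharArray();
            System.Array.Reverse(chars);
            return new string(chars);
        }
    }

    // The unit test: calls the code with a known input and asserts the expected output.
    // A test runner (NUnit here) finds these methods and reports which ones pass or fail.
    [TestFixture]
    public class StringUtilsTests
    {
        [Test]
        public void Reverse_ReturnsCharactersInOppositeOrder()
        {
            Assert.AreEqual("cba", StringUtils.Reverse("abc"));
        }
    }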

    Read the article

  • Disable selected automated tests at runtime

    - by squig
    Is it possible to disable selected automated tests at runtime? I'm using VSTS and Rhino Mocks and have some integration tests that require an external dependency to be installed (MQ). Not all the developers on my team have this installed. Currently all the tests that require MQ inherit from a base class that checks if MQ is installed and, if it is not, sets the test result to inconclusive. This works as it stops the tests from running, but it marks the test run as unsuccessful and can hide other failures. Any ideas?
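    The base-class pattern the question describes looks roughly like the sketch below (MSTest assumed; the MQ detection check is a placeholder for whatever check is actually used):

    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Base class for tests that need a local MQ installation.
    public abstract class RequiresMqTestBase
    {
        [TestInitialize]
        public void CheckMqIsAvailable()
        {
            if (!IsMqInstalled())
            {
                // Marks the test as inconclusive rather than failed, as described in the question.
                Assert.Inconclusive("MQ is not installed on this machine; skipping integration test.");
            }
        }

        private static bool IsMqInstalled()
        {
            // Placeholder check: substitute whatever detection you already use
            // (e.g. looking for the MQ service or a registry key).
            return Environment.GetEnvironmentVariable("MQ_INSTALLED") == "1";
        }
    }

    [TestClass]
    public class QueueIntegrationTests : RequiresMqTestBase
    {
        [TestMethod]
        public void CanSendAndReceiveMessage()
        {
            // test body that talks to MQ goes here
        }
    }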

    Read the article

  • Integration tests in Continuous Integration environment: Database and filesystem state

    - by dario_ramos
    I'm trying to implement automated integration tests for my application. It's a very complex monster. You could say that its database and part of the filesystem are part of its state, because it saves image files on the hard drive, and references to those in the DB. The software needs all those, in a coherent state, to work properly. Back to writing tests: to run any relevant test, I need some image files in the filesystem, and certain records filled in the database. I thought of putting all of these in a separate folder called TestEnvironmentData in the repository, and retrieving them from the Continuous Integration server (TeamCity), but a colleague said the repo is quite full as it is, and that I should set up a special directory, and databases, only on the Continuous Integration server. I don't like that because the tests' success depends on me manually maintaining stuff on the server, and restoring the initial state before every test becomes cumbersome. What do you guys do when you need to write integration tests for an app like this? The main goal is having an automated test harness to approach a large-scale refactoring. There's lots of spaghetti code and the app's current architecture is hardly unit-testable, which is why I decided on integration tests first. Any alternative approach is welcome.

    Read the article

  • How to organize integrity tests and code unit tests?

    - by karlthorwald
    I have several files with code that tests code (which uses a "unittest" class). Later I found it would be nice to test database integrity also. I put this into a separate directory tree. (Things like: the keys have the correct format, parent and child nodes point to each other correctly, and such.) I use the same unittest class for the integrity tests. Now I wonder if it really makes sense to keep this separate. To test the integrity of data I often duplicate parts of the code that I use to test the code that handles the data. But it is not the same. The code tests use test databases (that get deleted after each test) and the integrity tests connect to the live data and analyze it. I want to call the integrity tests from cron and send an alarm if something goes wrong in the live database. How would you handle that? Are there standards for such a setup? What is your experience? My tendency is to put everything in the same file, which would result in the code tests also being executed by the cron on the production environment.

    Read the article

  • Creating parallel selenium tests in C# and using Nunit as the runner application

    - by damianmartin
    I am writing a new test suite for the company to test a very complex ASP.NET application which is heavily AJAX-driven. We have decided to use Selenium (Grid & Remote Control) and NUnit to run these tests. The actual tests are created dynamically at run time from a spreadsheet: each column in an Excel spreadsheet relates to a new test, and each row relates to a Selenium command (but in plain English; the dll converts this into Selenium code). The problem I have at the moment is getting the tests to run in parallel. There will be 1000+ tests, so it is too time-consuming to run one test at a time. Selenium Grid and the Selenium Remote Control(s) are set up correctly, because I can run their demo. From what I have read I need to use PNUnit, but I cannot find any documentation on what a PNUnit test should look like. NUnit tests are [SetUp], [TearDown], [Test]. Can anyone point me in the right direction? Thanks in advance.
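    As far as I can tell, PNUnit runs ordinary NUnit fixtures and takes its parallel grouping from the runner's own configuration file, so the dynamically generated tests themselves can stay plain NUnit. One way to express "one spreadsheet column = one test case" so that any parallelizing runner can distribute them is a TestCaseSource, sketched below; SpreadsheetReader, TestColumn, and RemoteBrowserFactory are hypothetical stand-ins for the existing dll, and the [Parallelizable] attribute shown is the NUnit 3+ built-in alternative rather than something the original setup had.

        using System.Collections.Generic;
        using NUnit.Framework;

        [TestFixture]
        [Parallelizable(ParallelScope.All)]   // NUnit 3+ built-in parallelism; with NUnit 2 + PNUnit the grouping lives in the runner config instead
        public class SpreadsheetDrivenUiTests
        {
            // Each spreadsheet column becomes an individual, independently runnable test case.
            private static IEnumerable<TestCaseData> Columns()
            {
                foreach (var column in SpreadsheetReader.LoadColumns("tests.xlsx"))    // hypothetical reader
                    yield return new TestCaseData(column).SetName("Column_" + column.Name);
            }

            [TestCaseSource(nameof(Columns))]
            public void RunColumn(TestColumn column)
            {
                using (var browser = RemoteBrowserFactory.CreateGridSession())         // hypothetical: one grid session per test
                {
                    foreach (var step in column.Steps)
                        step.ExecuteAgainst(browser);                                  // each spreadsheet row = one Selenium command
                }
            }
        }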

    Read the article

  • How to configure Visual Studio 2010 code coverage for ASP.NET MVC unit tests

    - by DigiMortal
    I just got Visual Studio 2010 code coverage to work with ASP.NET MVC application unit tests. Everything is simple once you have spent some time with forums, blogs and Google. To save your valuable time, I wrote this posting to guide you through the process of making code coverage work with ASP.NET MVC application unit tests. After some fighting with Visual Studio I got everything to work as expected. I am still not very sure why users must deal with this mess, but okay – I survived it. Before you start configuring Visual Studio, I expect your solution to meet the following needs: there is at least one library that will be tested, there is at least one library that contains the tests to be run, there are some classes and some tests for them, and, of course, you are using a version of Visual Studio 2010 that supports tests (I have Visual Studio 2010 Ultimate). Now open the screenshot (Visual Studio 2010 Test Settings window) in a separate window and follow the steps below. Double-click Local.testsettings under Solution Items; the test settings window opens. Select "Data and Diagnostics" from the left pane. Check the "ASP.NET Profiler" and "Code Coverage" checkboxes. Move the cursor to the "Code Coverage" line and press the Configure button, or double-click the line; the assembly selection window opens. Check the assemblies you want code coverage reports for and apply the settings. Save your project and close Visual Studio. Run Visual Studio as Administrator and run the tests. NB! Select Test => Run => Tests in Current Context from the menu. When the tests have run, you can open the code coverage results by selecting Test => Windows => Code Coverage Results from the menu. Here you can see my example test results (Visual Studio 2010 Test Results window) – all my tests passed this time. :) And here are the code coverage results (Visual Studio 2010 Code Coverage Results) – I need a lot more tests for sure. As you can see, everything was pretty simple, but it took me some time to figure out how to get everything working as expected. Problems? You may face some problems when making code coverage work. Here is my short list of possible problems. Make sure you have all assemblies available for code coverage; in some cases more libraries need to be referenced than you currently have (for example, I had to add some more Enterprise Library assemblies to my project). You can use Event Viewer to discover errors that were raised during testing. Make sure you selected all testable assemblies in the Code Coverage settings as shown above, otherwise you may get empty results. Tests with code coverage are slower because the ASP.NET profiler is needed, so if your machine slows down, try to free up more resources.
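    For reference, the tests that the coverage run measures are plain MSTest unit tests; a minimal example against the stock HomeController from the ASP.NET MVC project template (an assumption here – your controllers will differ) looks roughly like the sketch below, and the assembly containing the controller is the one you tick in the Code Coverage configuration step described above.

        using System.Web.Mvc;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using MyMvcApplication.Controllers;   // hypothetical project namespace

        [TestClass]
        public class HomeControllerTests
        {
            [TestMethod]
            public void Index_ReturnsAViewResult()
            {
                // Arrange: the template's HomeController has no dependencies to set up.
                var controller = new HomeController();

                // Act
                var result = controller.Index() as ViewResult;

                // Assert: coverage for HomeController.Index() is recorded when this runs.
                Assert.IsNotNull(result);
            }
        }

    The same Local.testsettings file should also be usable outside the IDE, for example by passing it to MSTest.exe with the /testsettings switch on a build server, so the coverage configuration does not have to be repeated there.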

    Read the article

  • unit/integration testing web service proxy client

    - by cori
    I'm rewriting a PHP client/proxy library that provides an interface to a SOAP-based .NET web service, and in the process I want to add some unit and integration tests so future modifications are less risky. The work the library performs is to marshal the calls to the web service and do a little reorganizing of the responses to present a slightly more object-oriented interface to the underlying service. Since this library is little more than a thin layer on top of web service calls, my basic assumption is that I'll really be writing integration tests more than unit tests – for example, I don't see any reason to mock away the web service; the work performed by the code I'm working on is very light, almost passing the response from the service right back to its consumer. Most of the calls are basic CRUD operations: CreateRole(), CreateUser(), DeleteUser(), FindUser(), etc. I'll be starting from a known database state – the system I'm using for these tests is isolated for testing purposes, so the results will be more or less predictable. My question is this: is it natural to use web service calls to confirm the results of operations within the tests and to reset the state of the application within the scope of each test? Here's an example: one test might be createUserReturnsValidUserId() and might go like this:

        public function createUserReturnsValidUserId()
        {
            // we're assuming a global connection to the service
            $newUserId = $client->CreateUser("user1");
            assertNotNull($newUserId);
            assertNotNull($client->FindUser($newUserId));
            $client->DeleteUser($newUserId);
        }

    So I'm creating a user, making sure I get an ID back and that it represents a user in the system, and then cleaning up after myself (so that later tests don't rely on the success or failure of this test w/r/t the number of users in the system, for example). However, this still seems pretty fragile – lots of dependencies and opportunities for tests to fail and affect the results of later tests, which I definitely want to avoid. Am I missing some options or ways to decouple these tests from the system under test, or is this really the best I can do? I think this is a fairly general unit/integration testing question, but if it matters I'm using PHPUnit for the testing framework.

    Read the article
