Search Results

Search found 4783 results on 192 pages for 'tests'.

Page 7/192 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • What to do when TDD tests reveal new functionality that is needed that also needs tests?

    - by Joshua Harris
    What do you do when you are writing a test, get to the point where you need to make it pass, and realize that you need an additional piece of functionality that should be separated into its own function? That new function needs to be tested as well, but the TDD cycle says to make a test fail, make it pass, then refactor. If I am on the step where I am trying to make my test pass, I'm not supposed to go off and start another failing test for the new functionality I need to implement. For example, I am writing a Point class that has a function CollidesWithLine(LineSegment):

        public class Point
        {
            // Point data and constructor ...

            public bool CollidesWithLine(LineSegment lineSegment)
            {
                Vector pointEndOfMovement = new Vector(Position.X + Velocity.X, Position.Y + Velocity.Y);
                LineSegment pointPath = new LineSegment(Position, pointEndOfMovement);
                if (lineSegment.Intersects(pointPath))
                    return true;
                return false;
            }
        }

    I was writing a test for CollidesWithLine when I realized that I would need a LineSegment.Intersects(LineSegment) function. But should I just stop what I am doing in my test cycle to go create this new functionality? That seems to break the "Red, Green, Refactor" principle. Or should I write the code that detects that line segments intersect inside the CollidesWithLine function and refactor it out after it is working? That would work in this case, since I can access the data from LineSegment, but what about cases where that kind of data is private?
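
    One way out of this bind, sketched below as a minimal NUnit example (the LineSegment constructor taking (x1, y1, x2, y2) is assumed for illustration, since the question doesn't show one), is to park the CollidesWithLine test, run a nested red-green-refactor cycle for LineSegment.Intersects, and then come back to the outer test:

        using NUnit.Framework;

        [TestFixture]
        public class LineSegmentTests
        {
            // Inner TDD cycle: specify the helper the outer test exposed a need for.
            [Test]
            public void Intersects_ReturnsTrue_WhenSegmentsCross()
            {
                var horizontal = new LineSegment(0, 1, 2, 1);
                var vertical = new LineSegment(1, 0, 1, 2);

                Assert.IsTrue(horizontal.Intersects(vertical));
            }

            [Test]
            public void Intersects_ReturnsFalse_WhenSegmentsAreParallel()
            {
                var lower = new LineSegment(0, 0, 2, 0);
                var upper = new LineSegment(0, 1, 2, 1);

                Assert.IsFalse(lower.Intersects(upper));
            }
        }

    Once Intersects is green, the CollidesWithLine test can be resumed; nothing in red-green-refactor forbids nesting the cycle for a collaborator you discover along the way.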

    Read the article

  • Should developers be responsible for tests other than unit tests, if so which ones are the most common?

    - by Jackie
    I am currently working on a rather large project, and I have used JUnit and EasyMock to fairly extensively unit test functionality. I am now interested in what other types of testing I should worry about. As a developer, is it my responsibility to worry about things like functional or regression testing? Is there a good way to integrate these usefully into tools such as Maven/Ant/Gradle? Are these better suited for a Tester or BA? Are there other useful types of testing that I am missing?

    Read the article

  • iOS - Unit tests for KVO/delegate codes

    - by ZhangChn
    I am going to design an MVC pattern. It could be designed either with a delegate pattern or with Key-Value Observing (KVO) to notify the controller about changes to the models. The project requires certain quality control procedures to conform to its verification documents. My questions: does the delegate pattern fit unit testing better than KVO? If KVO fits better, would you please suggest some sample code?

    Read the article

  • Food For Tests: 7u12 Build b05, 8 with Lambda Preview b68

    This week brought along new developer preview releases of the JDK and related projects. On the JDK 7 side, the Java™ Platform, Standard Edition 7 Update 12 Developer Preview Releases have been updated to 7u12 Build b05. On the JDK 8 side, as Mike Duigou announced on the lambda-dev mailing list, "A new promotion (b68) of preview binaries for OpenJDK Java 8 with lambda extensions is now available at http://jdk8.java.net/lambda/." Happy testing!

    Read the article

  • Generating Data for Database Tests

    It is more and more essential for developers to work on development databases that have realistic data in both type and quantity, but without using real data. It isn't exactly easy, even with third-party tools to hand. Phil Factor shows how it can be done, taking the classic PUBS database and giving it a more realistic set of data.
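
    As a taste of the general idea (not Phil Factor's actual scripts), here is a small C# sketch that generates plausible but entirely fake rows for a hypothetical authors table shaped loosely like the PUBS one; the fixed seed keeps the generated data set repeatable:

        using System;
        using System.Linq;

        class FakeAuthorData
        {
            // Fixed seed so every run produces the same "realistic" data set.
            static readonly Random Rng = new Random(42);

            static readonly string[] FirstNames = { "Ann", "Burt", "Cheryl", "Dean", "Innes", "Marjorie" };
            static readonly string[] LastNames = { "Dull", "Gringlesby", "Carson", "Straight", "Greene", "Hunter" };

            static void Main()
            {
                // Emit INSERT statements; the column names are assumptions for the example.
                var rows = Enumerable.Range(1, 20).Select(i => string.Format(
                    "INSERT INTO authors (au_id, au_fname, au_lname) VALUES ('{0:000-00-0000}', '{1}', '{2}');",
                    Rng.Next(100000000, 999999999),
                    FirstNames[Rng.Next(FirstNames.Length)],
                    LastNames[Rng.Next(LastNames.Length)]));

                foreach (var row in rows)
                    Console.WriteLine(row);
            }
        }

    Real tooling goes much further (referential integrity, realistic distributions, volume), which is what the article covers.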

    Read the article

  • SQL Server 2008 R2 'Madison' Undergoing Final Tests

    Microsoft on Friday announced that it had released the final Parallel Data Warehouse version of SQL Server 2008 R2 to testers....

    Read the article

  • TeamCity and pending Git merge branch commit keeps build with failed tests

    - by Vladimir
    We use TeamCity for continuous integration and Git for source control. Generally it works pretty well - convenient, modern, and it gives us quick feedback when tests fail. But there is some strange behaviour related to how Git merges work. Here are the steps of the case:
    1. The first developer pulls from the master repo.
    2. The second developer pulls from the master repo.
    3. The first developer makes commit A locally.
    4. The second developer makes commit B locally and pushes it.
    5. The first developer wants to push commit A but can't, because he has to pull commit B first.
    6. The first developer pulls from the remote repository.
    7. The first developer pushes commit A and the generated merge-branch commit.
    The history in the master repo is now: B (second developer), A (first developer), merge branch (first developer). Now assume the second developer fixed some failing tests in his commit B. What TeamCity does is the following: commit B arrives and TeamCity makes build #1 with all tests passing; commit A arrives and TeamCity makes build #2 (without commit B) and the test bar goes red. TeamCity considers the pending "merge branch" commit to contain no changes (no new files) - but it actually does contain the merge of commit B - so TeamCity doesn't want to make a new build there and turn the tests green again. So there are two problems: 1. failing tests come back in the second commit (commit A); 2. TeamCity doesn't make a new build that would turn the tests green again. Does anybody know how to fix both of these problems? I'm looking for a reasonably general approach.

    Read the article

  • What is the value of checking in broken unit tests?

    - by Adam W.
    While there are ways of keeping unit tests from being executed, what is the value of checking in broken unit tests? I will use a simple example: case sensitivity. The current code is case sensitive. A valid input into the method is "Cat" and it would return an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive, so if the method were passed "cat" it could return something like Animal.Null instead of Animal.Cat, and the unit test would fail. Though a simple code change would fix this particular case, a more complex issue may take weeks to fix; identifying the bug with a unit test, however, could be a far less complex task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some just need explicit implementation documentation (e.g. case sensitive or not), or the bug is never executed by the way the code is currently called. But unit tests can be created that execute specific scenarios, with valid inputs, that will cause the bug to be seen. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should such a unit test be flagged with ignore, priority, category, etc., to determine whether a build was successful based on the tests executed? Eventually the unit test should be enabled to execute the code once someone fixes it. On one hand, it shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding through the ones that are expected to fail vs. failures caused by a code check-in would be difficult.
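
    For what it's worth, one common middle ground (sketched here with NUnit attributes; the enum, parser, and bug number are invented for the example) is to check the test in but mark it as a known bug, so the gap stays visible in source control without turning every build red:

        using NUnit.Framework;

        public enum Animal { Null, Cat, Dog }

        public static class AnimalParser
        {
            // Current behaviour: case sensitive, which is the bug under discussion.
            public static Animal Parse(string name)
            {
                return name == "Cat" ? Animal.Cat
                     : name == "Dog" ? Animal.Dog
                     : Animal.Null;
            }
        }

        [TestFixture]
        public class AnimalParserTests
        {
            [Test]
            public void Parse_ReturnsCat_ForExactCaseMatch()
            {
                Assert.AreEqual(Animal.Cat, AnimalParser.Parse("Cat"));
            }

            // Documents the desired behaviour but is excluded from the default run
            // until someone picks up the bug; remove the Ignore when fixing it.
            [Test]
            [Category("KnownBug")]
            [Ignore("Placeholder bug reference: parsing should be case insensitive")]
            public void Parse_IgnoresCase()
            {
                Assert.AreEqual(Animal.Cat, AnimalParser.Parse("cat"));
            }
        }

    A CI server can then report the KnownBug category separately instead of burying it among genuine regressions.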

    Read the article

  • What is the value of checking in failing unit tests?

    - by Adam W.
    While there are ways of keeping unit tests from being executed, what is the value of checking in failing unit tests? I will use a simple example: case sensitivity. The current code is case sensitive. A valid input into the method is "Cat" and it would return an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive, so if the method were passed "cat" it could return something like Animal.Null instead of Animal.Cat, and the unit test would fail. Though a simple code change would fix this particular case, a more complex issue may take weeks to fix; identifying the bug with a unit test, however, could be a far less complex task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some just need explicit implementation documentation (e.g. case sensitive or not), or the bug is never executed by the way the code is currently called. But unit tests can be created that execute specific scenarios, with valid inputs, that will cause the bug to be seen. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should such a unit test be flagged with ignore, priority, category, etc., to determine whether a build was successful based on the tests executed? Eventually the unit test should be enabled to execute the code once someone fixes it. On one hand, it shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding through the ones that are expected to fail vs. failures caused by a code check-in would be difficult.

    Read the article

  • Is it a good idea to mock/stub in integration tests?

    - by ez
    Say there are multiple requests in an integration test, and some of them are Sphinx calls (a locator, for example). Should we just stub out the entire response of these Sphinx calls, or, since it is an integration test, should we exercise the whole thing without stubbing? If the latter, how do we keep the test independent in situations where Sphinx fails, there is no internet connection, or a third-party server is unresponsive? Please give reasons. Thanks
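
    The question is asked in a Rails/Sphinx setting, but the trade-off is language-agnostic; a rough C# sketch of the "stub only the third-party boundary" option could look like this (the interface and classes are invented for illustration):

        using System.Collections.Generic;
        using NUnit.Framework;

        // The single seam that talks to the external search service.
        public interface ISearchClient
        {
            IList<int> FindProductIds(string query);
        }

        // Canned stand-in, so the test needs no network and no live search daemon.
        public class StubSearchClient : ISearchClient
        {
            public IList<int> FindProductIds(string query) { return new List<int> { 1, 2 }; }
        }

        public class Catalog
        {
            private readonly ISearchClient _search;
            public Catalog(ISearchClient search) { _search = search; }

            // Real application logic under test; only its external dependency is faked.
            public int CountMatches(string query) { return _search.FindProductIds(query).Count; }
        }

        [TestFixture]
        public class CatalogIntegrationTests
        {
            [Test]
            public void CountMatches_UsesSearchResults()
            {
                var catalog = new Catalog(new StubSearchClient());
                Assert.AreEqual(2, catalog.CountMatches("boots"));
            }
        }

    Everything inside the application still runs for real; the only piece replaced is the one that would make the test flaky when Sphinx, the network, or a third-party server is down.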

    Read the article

  • Testing Workflows – Test-First

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-first.aspx
    This is the second of two posts on some common strategies for approaching the job of writing tests. The previous post covered test-after workflows, whereas this one focuses on test-first. Each workflow presented is a method of attack for adding tests to a project. The more tools in your tool belt the better. So here is a partial list of some test-first methodologies.

    Ping Pong
    Ping Pong is a methodology commonly used in pair programming. One developer writes a new failing test. Then they hand the keyboard to their partner. The partner writes the production code to get the test passing. The partner then writes the next test before passing the keyboard back to the original developer. The reasoning behind this testing methodology is to facilitate pair programming. That is to say that this testing methodology shares all the benefits of pair programming, including ensuring multiple team members are familiar with the code base (i.e. a low bus number).

    Test Blazer
    Test Blazing, in some respects, is also a pairing strategy. The developers don't work side by side on the same task at the same time. Instead one developer is dedicated to writing tests at their own desk. They write failing test after failing test, never touching the production code. With these tests they are defining the specification for the system. The developer most familiar with the specifications would be assigned this task. The next day, or later in the same day, another developer fetches the latest test suite. Their job is to write the production code to get those tests passing. Once all the tests pass they fetch the latest version of the test project from source control to get the newer tests. This methodology has some of the benefits of pair programming, namely lowering the bus number. It can be a good way of adding an extra developer to a project without slowing it down too much. The production coder isn't slowed down writing tests. The tests are in a different project from the production code, so there shouldn't be any merge conflicts despite two developers working on the same solution. This methodology is also a good test for the tests: can another developer figure out what the system should do just by reading the tests? That question will be answered as the production coder works their way through the test blazer's tests.

    Test Driven Development (TDD)
    TDD is a highly disciplined practice that calls for a new test and new production code to be written every few minutes. There are strict rules for when you should be writing test or production code. You start by writing a failing (red) test, then write the simplest production code possible to get the test passing (green), then you clean up the code (refactor). This is known as the red-green-refactor cycle. The goal of TDD isn't the creation of a suite of tests; however, that is an advantageous side effect. The real goal of TDD is to follow a practice that yields a better design. The practice is meant to push the design toward small, decoupled, modularized components. This is generally considered a better design than a large, highly coupled ball of mud. TDD accomplishes this through the refactoring cycle. Refactoring is only possible to do safely when tests are in place. In order to use TDD, developers must be trained in how to look for and repair code smells in the system. Through repairing these sections of smelly code (i.e. a refactoring) the design of the system emerges. For further information on TDD, I highly recommend the series "Is TDD Dead?". It discusses its pros and cons and when it is best used.

    Acceptance Test Driven Development (ATDD)
    Whereas TDD focuses on small unit tests that concentrate on a small piece of the system, acceptance tests focus on the larger integrated environment. Acceptance tests usually correspond to user stories, which come directly from the customer. The unit tests focus on the inputs and outputs of smaller parts of the system, which are too low level to be of interest to the customer. ATDD generally uses the same tools as TDD. However, ATDD uses fewer mocks and test doubles than TDD. ATDD often complements TDD; they aren't competing methods. A full test suite will usually consist of a large number of unit tests (created via TDD) and a smaller number of acceptance tests.

    Behaviour Driven Development (BDD)
    BDD is more about audience than workflow. BDD pushes the testing realm out towards the client. Developers, managers and the client all work together to define the tests. Typically different tooling is used for BDD than for acceptance and unit testing. This is done because the audience is not just developers. Tools using the Gherkin family of languages allow test scenarios to be described in an English format. Other tools such as MSpec or FitNesse also strive for highly readable behaviour driven test suites. Because these tests are public facing (viewable by people outside the development team), the terminology usually changes. You can't get away with the same technobabble you can with unit tests written in a programming language that only developers understand. For starters, they usually aren't called tests. Usually they're called "examples", "behaviours", "scenarios", or "specifications". This may seem like a very subtle difference, but I've seen this small terminology change have a huge impact on the acceptance of the process. Many people have a bias that testing is something that comes at the end of a project. When you say we need to define the tests at the start of the project, many people will immediately give that a lower priority on the project schedule. But if you say we need to define the specification or behaviour of the system before we can start, you'll get more cooperation.

    Keep these test-first and test-after workflows in your tool belt. With them you'll be able to find new opportunities to apply them.
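
    As a concrete illustration of the red-green-refactor cycle described above (my own minimal example, not taken from the post), one micro-iteration in C# with NUnit might look like this:

        using NUnit.Framework;

        [TestFixture]
        public class OrderTotalTests
        {
            // RED: written first; it fails until Order.Total() exists and behaves.
            [Test]
            public void Total_AppliesTenPercentDiscount_WhenSubtotalExceeds100()
            {
                var order = new Order(subtotal: 200m);
                Assert.AreEqual(180m, order.Total());
            }
        }

        public class Order
        {
            private readonly decimal _subtotal;
            public Order(decimal subtotal) { _subtotal = subtotal; }

            // GREEN: the simplest code that passes the test above.
            // REFACTOR: as more tests pin down the pricing rules, the magic
            // numbers get pulled out into a named discount policy.
            public decimal Total()
            {
                return _subtotal > 100m ? _subtotal * 0.9m : _subtotal;
            }
        }

    The next failing test then drives the next small piece of behaviour, and the design grows out of the repeated refactor step.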

    Read the article

  • How to do integrated testing?

    - by Enthusiastic Programmer
    So I have been reading up on a lot of books about testing, but all the books I've read share the same flaw: they will tell you the definitions of testing, but I have not found a single book that guides you into integration testing (or pretty much anything higher than unit testing). Is integration testing that elusive, or am I reading the wrong books? I'm a hands-on person, so I would appreciate it if someone could help me with a simple program. Let's say you need to make some sort of calculation program that calculates something (doesn't matter what) and exports it to a *.txt file. Let's assume we use the Model View Controller design principle, with one class for the actual calculating, which you'll use in the model, and one for writing the text file. So we have the View, the Controller, and the Model, with a CalculationClass and a FileClass behind the model. For unit testing, you'd test the CalculationClass; I'd personally focus most of my unit tests there and spend less time unit testing the View/Controller/FileClass. I personally wouldn't see the use of unit testing those unless you want a really robust program. Integration testing: now this is where I run into a wall. What would I have to test to call it an integration test? I could stub the view and feed the controller data, which it would pass on to the model and so forth, and then check what the view gets back in the end. But couldn't I just run the (in this case small) program and test it manually? Would that be considered an integration test too, or does it have to be automated? Also, can I check multiple items to see if they are correct? I cannot seem to find any book that offers a hands-on approach to methods of integration testing.
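
    To make that concrete, an automated integration test for the little program described above could wire the real pieces together and assert on the observable result, the exported file; the class shapes below are assumptions, since the question doesn't define them:

        using System.IO;
        using NUnit.Framework;

        public class CalculationClass
        {
            public double Add(double a, double b) { return a + b; }
        }

        public class FileClass
        {
            public void Write(string path, string content) { File.WriteAllText(path, content); }
        }

        public class CalculationModel
        {
            private readonly CalculationClass _calc;
            private readonly FileClass _file;

            public CalculationModel(CalculationClass calc, FileClass file)
            {
                _calc = calc;
                _file = file;
            }

            public void ExportSum(double a, double b, string path)
            {
                _file.Write(path, _calc.Add(a, b).ToString());
            }
        }

        [TestFixture]
        public class ExportIntegrationTests
        {
            // Integration test: no stubs - the real model, calculator and file
            // writer collaborate, and the assertion is on the file a user would see.
            [Test]
            public void ExportSum_WritesResultToTextFile()
            {
                var path = Path.Combine(Path.GetTempPath(), "calc-result.txt");
                var model = new CalculationModel(new CalculationClass(), new FileClass());

                model.ExportSum(2, 3, path);

                Assert.AreEqual("5", File.ReadAllText(path));
            }
        }

    Running the program by hand and eyeballing the file exercises the same path, but only an automated version like this can run on every build.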

    Read the article

  • Automated testing in Android development

    - by Sara
    I have an ordinary project with JUnit tests that are connected to the classes in my Android project. I want my server to run the JUnit tests in my test project every time I commit code from my Android project. Is there a best practice for doing this? So far I have only managed to run the tests when they are part of a single project, whereas here the JUnit tests and the Android classes are separated into 2 different projects, since JUnit runs on the JVM and Android runs in an emulator on the DVM (Dalvik Virtual Machine).

    Read the article

  • ServiceLocator not initialized in Tests project

    - by Carl Bussema
    When attempting to write a test related to my new Tasks (MVC3, S#arp 2.0), I get this error when I try to run the test:

        MyProject.Tests.MyProject.Tasks.CategoryTasksTests.CanConfirmDeleteReadiness:
        SetUp : System.NullReferenceException : ServiceLocator has not been initialized; I was trying to retrieve SharpArch.NHibernate.ISessionFactoryKeyProvider
          ----> System.NullReferenceException : Object reference not set to an instance of an object.
        at SharpArch.Domain.SafeServiceLocator`1.GetService()
        at SharpArch.NHibernate.SessionFactoryKeyHelper.GetKeyFrom(Object anObject)
        at SharpArch.NHibernate.NHibernateRepositoryWithTypedId`2.get_Session()
        at SharpArch.NHibernate.NHibernateRepositoryWithTypedId`2.Save(T entity)
        at MyProject.Tests.MyProject.Tasks.CategoryTasksTests.Setup() in C:\code\MyProject\Solutions\MyProject.Tests\MyProject.Tasks\CategoryTasksTests.cs:line 36
        --NullReferenceException
        at Microsoft.Practices.ServiceLocation.ServiceLocator.get_Current()
        at SharpArch.Domain.SafeServiceLocator`1.GetService()

    Other tests which do not involve the new class (e.g., generate/confirm database mappings) run correctly. My ServiceLocatorInitializer is as follows:

        public class ServiceLocatorInitializer
        {
            public static void Init()
            {
                IWindsorContainer container = new WindsorContainer();

                container.Register(
                    Component
                        .For(typeof(DefaultSessionFactoryKeyProvider))
                        .ImplementedBy(typeof(DefaultSessionFactoryKeyProvider))
                        .Named("sessionFactoryKeyProvider"));

                container.Register(
                    Component
                        .For(typeof(IEntityDuplicateChecker))
                        .ImplementedBy(typeof(EntityDuplicateChecker))
                        .Named("entityDuplicateChecker"));

                ServiceLocator.SetLocatorProvider(() => new WindsorServiceLocator(container));
            }
        }
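
    The error says the locator has not been initialized by the time SetUp runs, so one thing worth checking (a sketch of the idea, not a confirmed fix) is whether the fixture actually invokes the initializer before any repository call:

        using NUnit.Framework;

        [TestFixture]
        public class CategoryTasksTests
        {
            [SetUp]
            public void Setup()
            {
                // Hypothetical placement: make sure the Windsor-backed locator exists
                // before anything in this SetUp calls repository.Save(...).
                ServiceLocatorInitializer.Init();

                // ... existing arrangement code from line 36 onwards ...
            }
        }

    If Init() is already being called, the next thing to compare is whether the test project registers the same components (e.g. the ISessionFactoryKeyProvider implementation) as the web project does.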

    Read the article

  • Problem with Authlogic and Unit/Functional Tests in Rails

    - by mmacaulay
    I'm learning how unit testing is done in Rails, and I've run into a problem involving Authlogic. According to the documentation there are a few things required to use Authlogic in your tests. In test_helper.rb:

        require "authlogic/test_case"
        class ActiveSupport::TestCase
          setup :activate_authlogic
        end

    Then in my functional tests I can log in users:

        UserSession.create(users(:tester))

    The problem seems to stem from the setup :activate_authlogic line in test_helper.rb; whenever that is included, I get the following errors when running functional tests:

        NoMethodError: undefined method `request=' for nil:NilClass
        authlogic (2.1.3) lib/authlogic/controller_adapters/abstract_adapter.rb:63:in `send'
        authlogic (2.1.3) lib/authlogic/controller_adapters/abstract_adapter.rb:63:in `method_missing'

    If I remove setup :activate_authlogic and instead add

        Authlogic::Session::Base.controller = Authlogic::ControllerAdapters::RailsAdapter.new(self)

    to test_helper.rb, my functional tests seem to work but now my unit tests fail:

        NoMethodError: undefined method `params' for ActiveSupport::TestCase:Class
        authlogic (2.1.3) lib/authlogic/controller_adapters/abstract_adapter.rb:30:in `params'
        authlogic (2.1.3) lib/authlogic/session/params.rb:96:in `params_credentials'
        authlogic (2.1.3) lib/authlogic/session/params.rb:72:in `params_enabled?'
        authlogic (2.1.3) lib/authlogic/session/params.rb:66:in `persist_by_params'
        authlogic (2.1.3) lib/authlogic/session/callbacks.rb:79:in `persist'
        authlogic (2.1.3) lib/authlogic/session/persistence.rb:55:in `persisting?'
        authlogic (2.1.3) lib/authlogic/session/persistence.rb:39:in `find'
        authlogic (2.1.3) lib/authlogic/acts_as_authentic/session_maintenance.rb:96:in `get_session_information'
        authlogic (2.1.3) lib/authlogic/acts_as_authentic/session_maintenance.rb:95:in `each'
        authlogic (2.1.3) lib/authlogic/acts_as_authentic/session_maintenance.rb:95:in `get_session_information'
        /test/unit/user_test.rb:23:in `test_should_save_user_with_email_password_and_confirmation'

    What am I doing wrong?

    Read the article

  • Silverlight tests not working unless RDP connection open

    - by Duncan Bayne
    I have a few Silverlight UI tests that I'm automating with White. These tests are subsequently run by a TFS build agent, which is running interactively so it can access the desktop. The build passes if I have a Remote Desktop connection open to the build agent as the tests are run; I can see the mouse pointer moving around. When the test clicks on a HyperlinkButton navigation takes place, and is subsequently verified by assertions within the test. The build fails if I do not have a Remote Desktop connection open to the build agent as the tests are run. The Internet Explorer window is created and the Silverlight app loads, but no clicks happen; the application remains on the initial page and test assertions subsequently fail. Has anyone out there found a solution to this problem?

    Read the article

  • Using Unit Tests While Developing Static Libraries in Obj-C

    - by macinjosh
    I'm developing a static library in Obj-C for a CocoaTouch project. I've added unit testing to my Xcode project by using the built-in OCUnit framework. I can run tests successfully upon building the project and everything looks good. However, I'm a little confused about something. Part of what the static library does is connect to a URL and download the resource there. I constructed a test case that invokes the method that creates a connection and ensures the connection is successful. However, when my tests run, a connection is never made to my testing web server (where the connection is set to go). It seems my code is not actually being run when the tests happen? Also, I make some NSLog calls in the unit tests and in the code they exercise, but I never see that output. I'm new to unit testing, so I'm obviously not fully grasping what is going on here. Can anyone help me out? P.S. By the way, these are "Logical Tests" as Apple calls them, so they are not linked against the library; instead the implementation files are included in the testing target.

    Read the article

  • Running NUnit tests in Visual Studio 2010 with code coverage

    - by adrianbanks
    We have recently upgraded from Visual Studio 2008 to Visual Studio 2010. As part of our code base, we have a very large set of NUnit tests. We would like to be able to run these unit tests within Visual Studio, but with code coverage enabled. We have ReSharper, so we can run the tests within Visual Studio, but that does not allow the code coverage tool to do its thing and generate the coverage statistics. Is there any way to make this work, or will we have to convert the tests over to MSTest?

    Read the article

  • Visual Studio 2010 does not discover new unit tests

    - by driis
    I am writing some unit tests in Visual Studio 2010. I can run all tests by using "Run all Tests in Current Context". However, if I write a new unit test, it does not get picked up by the environment - in other words, I am not able to find it in the Test List Editor, by running all tests, or anywhere else. If I unload the project and then reload it, the new test is available to run. When I am adding a unit test, I simply add a new method to an already existing TestClass and decorate it with the [TestMethod] attribute - nothing fancy. What might be causing this behaviour, and how do I make it work?

    Read the article

  • Should tests be self written in TDD?

    - by martin
    We are running a project which we want to approach with test-driven development, and I thought about some questions that came up when initiating the project. One question was: who should write the unit test for a feature? Should the unit test be written by the feature-implementing programmer, or should it be written by another programmer, who defines what a method should do, while the feature-implementing programmer implements the method until the tests pass? If I understand the concept of TDD correctly, the feature-implementing programmer has to write the test himself, because TDD is a procedure with mini-iterations, so it would be too complex to have the tests written by another programmer. What would you say: should the tests in TDD be written by the programmer himself, or should another programmer write the tests that describe what a method should do?

    Read the article

  • Is it possible to have mspec & NUnit tests in a single project

    - by Mike Scott
    I've got a unit test project using NUnit. When I add the mspec (machine.specifications) assembly to the references, both ReSharper and TestDriven.Net stop running the NUnit tests and only run the mspec tests. Is there a way or setting that allows both NUnit & mspec tests to co-exist and run in the same project using R# & TD.Net test runners?

    Read the article

  • Problem with SQLite related nUnit-tests after upgrade to VS2010 and Re#5

    - by stiank81
    After converting to Visual Studio 2010 with ReSharper 5, some of my unit tests started failing. More specifically, this applies to all unit tests that use NHibernate with SQLite; the unit tests that do not involve NHibernate and SQLite are still running fine. The exception is as follows:

        NHibernate.HibernateException : Could not create the driver from NHibernate.Driver.SQLite20Driver, NHibernate, Version=2.1.2.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4.
          ----> System.Reflection.TargetInvocationException : Exception has been thrown by the target of an invocation.
          ----> NHibernate.HibernateException : The IDbCommand and IDbConnection implementation in the assembly System.Data.SQLite could not be found. Ensure that the assembly System.Data.SQLite is located in the application directory or in the Global Assembly Cache. If the assembly is in the GAC, use <qualifyAssembly/> element in the application configuration file to specify the full name of the assembly.
        TearDown : System.NullReferenceException : Object reference not set to an instance of an object.

    The final exception is the NullReferenceException on TearDown, thrown when cleaning up NHibernate objects that weren't successfully created, but the underlying problem seems to be related to SQLite somehow. I run my unit tests through ReSharper, but I get the same exception when running them directly through the NUnit.exe application. However, running them through the x86 variant (NUnit-x86.exe), all tests run fine. Can it be related to some mixing of 64-bit and 32-bit DLLs? It still runs fine with VS2008 + ReSharper 4.5. Note that the target framework of my projects is still .NET 3.5. Has anyone seen this problem before?

    Read the article

  • Visual Studio 2010 failed tests throw exceptions

    - by Dave Hanson
    In Visual Studio 2010 Ultimate RC I cannot figure out how to suppress {"CollectionAssert.AreEqual failed. (Element at index 0 do not match.)"} from Microsoft.VisualStudio.TestTools.UnitTesting.AssertFailedException. If I press Ctrl+Alt+E I get the Exceptions dialog; however, that exception doesn't seem to be in there to be suppressed. Does anyone else have any experience with this? I don't remember having to suppress these assert failures in Visual Studio 2008 when running unit tests. My tests would fail and I could just click on the test results to see which tests failed, instead of fighting through these dialogs. For now I guess I'll just run my tests through the command window.

    Read the article
