Search Results

Search found 544 results on 22 pages for 'tdd'.

Page 12/22 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • How has test first development changed the way you write software?

    - by Toran Billups
    I've started to find that I can't write software without writing a test first. I ask this subjective question because I want to hear what others in the community think about the reasons I can't go back to writing production code without a test first:
      - If you can't write a test for something, you don't understand it
      - Without a regression test you can't clean the code
      - You are going to test it anyway, so spend the time to do it right
      - Evolutionary design is possible without fear
      - You actually write less code yourself
      - Fast feedback cycles save time and money
      - Job security (fewer bugs make your boss happy)
      - It actually makes my work more enjoyable

    Read the article

  • How to (unit-)test a data-intensive PL/SQL application

    - by doom2.wad
    Our team is willing to unit-test new code written under a running project extending an existing huge Oracle system. The system is written solely in PL/SQL and consists of thousands of tables and hundreds of stored procedure packages, mostly getting data from tables and/or inserting/updating other data. Our extension is no exception. Most functions return data from a quite complex SELECT statement over many mutually bound tables (with a little added logic before returning it) or transform one complicated data structure into another (complicated in a different way). What is the best approach to unit-test such code? There are no unit tests for the existing code base. To make things worse, only packages, triggers and views are source-controlled; table structures (including "alter table" statements and the necessary data transformations) are deployed via a channel other than version control. There is no way to change this within our project's scope. Maintaining a testing data set seems impossible, since new code is deployed to the production environment on a weekly basis, usually without prior notice, often changing the data structure (add a column here, remove one there). I'd be glad for any suggestion or reference to help us. Some team members are getting tired just figuring out where to even start, since our experience with unit testing does not cover data-intensive PL/SQL legacy systems (only those "from-the-book" greenfield Java projects).

    Read the article

  • Nmock2 and Event Expectations

    - by Kildareflare
    I'm in the process of writing a test for a small application that follows the MVP pattern. Technically, I know I should have written the test before the code, but I needed to knock up a demo app quickly, so I'm now going back to the test before moving on to the real development. In short, I am attempting to test the presenter, but I cannot even get an empty test to run due to an Internal.ExpectationException. The exception is raised on an unexpected invocation of an event subscription. Here is the presenter class:

        public LBCPresenter(IView view, IModel model)
        {
            m_model = model;
            m_model.BatteryModifiedEvent += new EventHandler(m_model_BatteryModifiedEvent);
        }

    The model interface:

        public interface IModel
        {
            event EventHandler BatteryModifiedEvent;
        }

    And here is the test class. I can't see what I'm missing; I've told NMock to expect the event...

        [TestFixture]
        public class MVP_PresenterTester
        {
            private Mockery mocks;
            private IView _mockView;
            private IViewObserver _Presenter;
            private IModel _mockModel;

            [SetUp]
            public void SetUp()
            {
                mocks = new Mockery();
                _mockView = mocks.NewMock<IView>();
                _mockModel = mocks.NewMock<IModel>();
                _Presenter = new LBCPresenter(_mockView, _mockModel);
            }

            [Test]
            public void TestClosingFormWhenNotDirty()
            {
                Expect.Once.On(_mockModel).EventAdd("BatteryModifiedEvent", NMock2.Is.Anything);

                //makes no difference if following line is commented out or not
                //mocks.VerifyAllExpectationsHaveBeenMet();
            }
        }

    Every time I run the test I get the same expectation exception. Any ideas?
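
    One likely culprit, sketched below as an assumption rather than a confirmed fix: dynamic mocks throw an ExpectationException on any call they were not told to expect, and here the presenter subscribes to BatteryModifiedEvent inside its constructor, which runs in SetUp, before the test method registers the EventAdd expectation. A minimal rearrangement using only the NMock2 calls already shown in the post would be to set the expectation first and construct the presenter afterwards:

        [Test]
        public void TestClosingFormWhenNotDirty()
        {
            // Register the expectation first (assumption: the exception comes
            // from the constructor's += running before any expectation exists).
            Expect.Once.On(_mockModel).EventAdd("BatteryModifiedEvent", NMock2.Is.Anything);

            // Construct the presenter here instead of in SetUp, so the event
            // subscription happens after the expectation is in place.
            var presenter = new LBCPresenter(_mockView, _mockModel);

            mocks.VerifyAllExpectationsHaveBeenMet();
        }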

    Read the article

  • Cucumber and Silverlight 4

    - by weijiajun
    I am wondering if anyone is familiar with, or has done any work with, Cucumber and Silverlight. I currently have a template directory and build file that will create RSpec tests using Bacon (a lightweight RSpec). I have been looking into SpecFlow and Cuke2Nuke, but almost everything I have seen works with general .NET code, not Silverlight code. Thanks.

    Read the article

  • Unit testing ASP.NET code-behind

    - by user102533
    I've been reading about MVC, in which the authors suggest that testability is one of its major strengths. They go on to compare it with ASP.NET WebForms and how difficult it is to test the code-behind in WebForms. I understand it's difficult, but can someone explain how unit tests were written to test code-behind logic in the old days?
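
    For context, one widely used workaround (a sketch, not the only approach, and all names below are illustrative): since a Page class is hard to instantiate outside the web server, teams would extract the logic into a plain class with no System.Web dependency and unit-test that, leaving the code-behind as a thin adapter.

        // Plain class holding the logic; testable without a web server.
        public class OrderCalculator
        {
            public decimal Total(decimal subtotal, decimal taxRate)
            {
                return subtotal * (1 + taxRate);
            }
        }

        // The code-behind shrinks to wiring only (totalLabel is a hypothetical control).
        public partial class OrderPage : System.Web.UI.Page
        {
            protected void Page_Load(object sender, System.EventArgs e)
            {
                var calc = new OrderCalculator();
                totalLabel.Text = calc.Total(129.99m, 0.08m).ToString("C");
            }
        }

        // The unit test targets the extracted class directly.
        [TestClass]
        public class OrderCalculatorTests
        {
            [TestMethod]
            public void AddsTaxToSubtotal()
            {
                Assert.AreEqual(108m, new OrderCalculator().Total(100m, 0.08m));
            }
        }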

    Read the article

  • Reflection in unit tests for checking code coverage

    - by Gary
    Here's the scenario. I have VOs (Value Objects) or DTOs that are just containers for data. When I take those and split them apart for saving into a DB that (for lots of reasons) doesn't map to the VOs elegantly, I want to test whether each field is successfully written to the database and successfully read back to rebuild the VO. Is there a way I can verify that my tests cover every field in the VO? I had an idea about using reflection to iterate through the fields of the VOs as part of the solution, but maybe you have solved this problem before? I want this test to fail when I add fields to the VO and forget to add checks for them in my tests.
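
    A minimal sketch of the reflection idea, with all names hypothetical: save the VO, read it back, then iterate its public properties and compare each one, so a newly added field fails the test automatically until the mapping handles it.

        [TestMethod]
        public void EveryVoFieldSurvivesARoundTrip()
        {
            var original = BuildFullyPopulatedVo();      // hypothetical helper
            repository.Save(original);                   // hypothetical repository
            var reloaded = repository.Load(original.Id);

            // Reflection walks every public property, so a field added to the VO
            // is compared automatically, with no per-field assert to forget.
            foreach (var prop in typeof(CustomerVo).GetProperties())
            {
                Assert.AreEqual(prop.GetValue(original, null),
                                prop.GetValue(reloaded, null),
                                "Field not round-tripped: " + prop.Name);
            }
        }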

    Read the article

  • Assistance with classifying tests

    - by amateur
    I have a .NET C# library that I created and am currently writing unit tests for, at present for a cache provider class. Being new to writing unit tests, I have two questions:
      1. My cache provider class is the abstraction layer over my distributed cache, AppFabric. Testing aspects of the class such as adding to the cache or removing from it involves communicating with AppFabric. Are such tests still categorised as unit tests, or are they integration tests?
      2. Because the methods above interact with AppFabric, I would like to time them; if they take longer than a specified benchmark, the tests fail. Can this performance benchmark test be classified as a unit test?
    The way I have my tests set up, I want to group all unit tests together, all integration tests together, and so on, hence these questions. I would appreciate any input.
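
    Whatever label the timing check ends up with, a minimal sketch of one (the class name and the 200 ms budget are illustrative) is just a Stopwatch around the call:

        [TestMethod]
        public void AddToCacheStaysWithinBudget()
        {
            var cache = new CacheProvider();   // hypothetical class under test
            var watch = System.Diagnostics.Stopwatch.StartNew();

            cache.Add("key", "value");

            watch.Stop();
            // Fails when the call exceeds the benchmark; the threshold is arbitrary here.
            Assert.IsTrue(watch.ElapsedMilliseconds < 200,
                          "Add took " + watch.ElapsedMilliseconds + " ms");
        }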

    Read the article

  • C# Visual Studio Unit Test, Mocking up a client IP address

    - by Jimmy
    Hey guys, I am writing some unit tests and I'm getting an exception thrown from my real code when trying to do the following:

        string IPaddress = HttpContext.Current.Request.UserHostName.ToString();

    Is there a way to mock up an IP address without rewriting my code to accept the IP address as a parameter? Thanks!
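
    One seam that avoids threading the address through as a parameter (a sketch; the delegate and class names are invented for illustration): route the lookup through a swappable Func that defaults to HttpContext, and overwrite it in tests.

        public static class ClientAddress
        {
            // Production default; tests replace this delegate with a canned value.
            public static Func<string> GetUserHostName =
                () => HttpContext.Current.Request.UserHostName;
        }

        // Production code:
        string IPaddress = ClientAddress.GetUserHostName();

        // In a test:
        ClientAddress.GetUserHostName = () => "192.168.0.42";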

    Read the article

  • How to run only the latest/a given test using RSpec?

    - by marcgg
    Let's say I have a big spec file with 20 tests in it, because I'm testing a large model and had no other way of doing it:

        describe Blah do
          it "should do X" do
            ...
          end

          it "should do Y" do
            ...
          end

          ...

          it "should do Z" do
            ...
          end
        end

    Running a single file is faster than running the whole test suite, but it's still pretty long. Is there a way to run only the last one (i.e. the one at the end of the file, here "should do Z")? If this is not possible, is there a way to specify which test in the file I want to run?

    Read the article

  • When is it appropriate to do interaction based testing as opposed to state based testing?

    - by Praneeth
    Hi, when I use EasyMock (or a similar mocking framework) to implement my unit tests, I'm forced to do interaction-based testing (as I don't get to assert on the state of my dependencies, or am I mistaken?). On the other hand, if I use a hand-written stub (instead of EasyMock) I can implement state-based testing. I'm quite unclear on whether I want to go with interaction-based or state-based testing. I'm biased and want to use EasyMock, but I'm not sure if there would be any side effects I'd have to face in the future. Can anyone please shed some light on this? Thanks in advance!
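
    To make the distinction concrete, here's a hand-rolled sketch in C# (all types are invented for illustration): the state-based test asserts on what the dependency ended up containing, while the interaction-based test asserts on how it was called.

        using System.Collections.Generic;

        public interface IMailer { void Send(string to, string body); }

        // Hand-written fake that records both state and interactions.
        public class FakeMailer : IMailer
        {
            public List<string> Recipients = new List<string>();
            public int SendCount;

            public void Send(string to, string body)
            {
                Recipients.Add(to);
                SendCount++;
            }
        }

        [TestMethod]
        public void StateBased_AssertsOnResultingState()
        {
            var mailer = new FakeMailer();
            new Notifier(mailer).NotifyAll(new[] { "a@x", "b@x" }); // Notifier is hypothetical
            Assert.IsTrue(mailer.Recipients.Contains("a@x"));       // what the world looks like now
        }

        [TestMethod]
        public void InteractionBased_AssertsOnHowItWasCalled()
        {
            var mailer = new FakeMailer();
            new Notifier(mailer).NotifyAll(new[] { "a@x", "b@x" });
            Assert.AreEqual(2, mailer.SendCount);                   // how the dependency was used
        }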

    Read the article

  • Rhino Mocks - Do we really need stubs?

    - by Marcelo Oliveira
    If it's possible to change mock behaviour in Rhino Mocks using mock.Stub().Return(), why do we need stubs anyway? What do we lose by always using MockRepository.GenerateMock()? One big benefit of using mocks instead of stubs is that we can reuse the same instance across all the tests, keeping them cleaner and more straightforward. The Moq framework works in a similar way... we don't have different objects for mocks and stubs. (Please don't answer with a link to Fowler's "Mocks aren't stubs" article.)
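
    For what it's worth, a sketch of the practical difference as I understand Rhino Mocks' AAA syntax (IQuote and Portfolio are invented for illustration): a stub only feeds values to the code under test and is never verified, while a mock's expectations can fail the test.

        public interface IQuote { decimal Price(string symbol); }

        [TestMethod]
        public void MockVerifiesTheInteraction()
        {
            // Stub: supplies canned data; verification is a no-op.
            var stubQuote = MockRepository.GenerateStub<IQuote>();
            stubQuote.Stub(q => q.Price("MSFT")).Return(30m);

            // Mock: same canned data, but the call itself becomes an assertion.
            var mockQuote = MockRepository.GenerateMock<IQuote>();
            mockQuote.Expect(q => q.Price("MSFT")).Return(30m);

            new Portfolio(mockQuote).TotalValue(); // Portfolio is hypothetical
            mockQuote.VerifyAllExpectations();     // fails if Price was never called
        }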

    Read the article

  • How to write a JUnit test for this class?

    - by flash
    Hi, I would like to know the best approach to test the method pushEvent() in the following class with a JUnit test. My problem is that the private method callWebsite() always requires a connection to the network. How can I avoid this requirement, or refactor my class so that I can test it without a connection to the network?

        class MyClass {

            public String pushEvent(Event event) {
                //do something here
                String url = constructURL(event); //construct the website url
                String response = callWebsite(url);
                return response;
            }

            private String callWebsite(String url) {
                try {
                    URL requestURL = new URL(url);
                    HttpURLConnection connection = null;
                    connection = (HttpURLConnection) requestURL.openConnection();
                    String responseMessage = responseParser.getResponseMessage(connection);
                    return responseMessage;
                } catch (MalformedURLException e) {
                    e.printStackTrace();
                    return e.getMessage();
                } catch (IOException e) {
                    e.printStackTrace();
                    return e.getMessage();
                }
            }
        }

    Read the article

  • Multiple asserts in a single test?

    - by Gern Blandston
    Let's say I want to write a function that validates an email address with a regex. I write a little test to check my function and write the actual function. Make it pass. However, I can come up with a bunch of different ways to test the same function ([email protected]; [email protected]; test.test.com, etc). Do I put all the incantations that I need to check in the same, single test with several ASSERTS or do I write a new test for every single thing I can think of? Thanks!
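
    One middle ground, sketched here with NUnit's [TestCase] attribute (the validator and the sample addresses are illustrative): each input/expectation pair runs and reports as its own test, but there is only one test body to maintain.

        [TestCase("user@example.com", true)]
        [TestCase("user@example.co.uk", true)]
        [TestCase("test.test.com", false)]   // no @, should be rejected
        public void ValidatesEmailAddresses(string input, bool expected)
        {
            Assert.AreEqual(expected, EmailValidator.IsValid(input)); // hypothetical validator
        }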

    Read the article

  • Is there value in unit testing auto-implemented properties

    - by ahsteele
    It seems exceptionally heavy-handed, but going by the rule that anything publicly visible should be tested, should auto-implemented properties be tested? Customer class:

        public class Customer
        {
            public string EmailAddr { get; set; }
        }

    Tested by:

        [TestClass]
        public class CustomerTests : TestClassBase
        {
            [TestMethod]
            public void CanSetCustomerEmailAddress()
            {
                //Arrange
                Customer customer = new Customer();

                //Act
                customer.EmailAddr = "[email protected]";

                //Assert
                Assert.AreEqual("[email protected]", customer.EmailAddr);
            }
        }

    Read the article

  • Convincing others that testing is good

    - by FireAphis
    Hello, in my team of real-time embedded C/C++ developers, most people don't have any culture of testing their code beyond casual manual sanity checks. I personally strongly believe in the advantages of autonomous automated tests, but when I try to convince people I get some recurring arguments:
      1. We will spend more time writing the tests than writing the code.
      2. It takes a lot of effort to maintain the tests.
      3. Our code is spaghetti; there's no way we can unit-test it.
      4. Our requirements are not sealed; we'll have to rewrite all the tests every time the requirements change.
    Now, I'd gladly hear any convincing tips and advice, but what I am really looking for are references to research, articles, books or serious surveys that show (preferably in numbers) how testing is worth the effort. Something like "We at IBM/Microsoft/Google, surveying 3475 active projects, found that putting 50% more development time into testing decreased the time spent on fixing bugs by 75%" or "after half a year, the time needed to write code with tests was only marginally longer than it used to take without tests". Any ideas? P.S.: I'm adding the C++ tag too, in case someone has specific experience convincing this, usually elitist, type of developer :-)

    Read the article

  • Convert C# unit test names to English (testdox style)

    - by Igor Zevaka
    I have a whole bunch of unit tests written in MbUnit and I would like to generate plain-English sentences from the test names. The concept is introduced here: http://dannorth.net/introducing-bdd This is from the article:

        public class CustomerLookupTest extends TestCase {
            testFindsCustomerById() {
                ...
            }
            testFailsForDuplicateCustomers() {
                ...
            }
            ...
        }

    renders something like this:

        CustomerLookup
        - finds customer by id
        - fails for duplicate customers
        - ...

    Unfortunately the tool quoted in the article (testdox) is Java-based. Is there one for .NET? This sounds like something pretty simple to write, but I simply don't have the bandwidth and want to use something already written.
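
    For a sense of scale, the core transformation really is small. A sketch in C# (the prefix-stripping rule is an assumption about the naming convention):

        using System.Text.RegularExpressions;

        static class TestDox
        {
            // "testFailsForDuplicateCustomers" -> "fails for duplicate customers"
            public static string ToSentence(string testName)
            {
                // Drop the JUnit-style "test" prefix (assumed convention).
                string name = Regex.Replace(testName, "^test", "");
                // Insert a space before each capital that follows a lowercase letter or digit.
                string spaced = Regex.Replace(name, "(?<=[a-z0-9])([A-Z])", " $1");
                return spaced.ToLowerInvariant().Trim();
            }
        }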

    Read the article

  • What is the purpose of unit testing an interface repository

    - by ahsteele
    I am unit testing an ICustomerRepository interface used for retrieving objects of type Customer.
      1. As a unit test, what value am I gaining by testing the ICustomerRepository in this manner?
      2. Under what conditions would the test below fail?
      3. For tests of this nature, is it advisable to write tests that I know should fail (i.e. look up id 4 when I know I've only placed id 5 in the repository)?
    I am probably missing something obvious, but it seems the integration tests of the class that implements ICustomerRepository will be of more value.

        [TestClass]
        public class CustomerTests : TestClassBase
        {
            private Customer SetUpCustomerForRepository()
            {
                return new Customer()
                {
                    CustId = 5,
                    DifId = "55",
                    CustLookupName = "The Dude",
                    LoginList = new[]
                    {
                        new Login { LoginCustId = 5, LoginName = "tdude" },
                        new Login { LoginCustId = 5, LoginName = "tdude2" }
                    }
                };
            }

            [TestMethod]
            public void CanGetCustomerById()
            {
                // arrange
                var customer = SetUpCustomerForRepository();
                var repository = Stub<ICustomerRepository>();

                // act
                repository.Stub(rep => rep.GetById(5)).Return(customer);

                // assert
                Assert.AreEqual(customer, repository.GetById(5));
            }
        }

    Test base class:

        public class TestClassBase
        {
            protected T Stub<T>() where T : class
            {
                return MockRepository.GenerateStub<T>();
            }
        }

    ICustomerRepository and IRepository:

        public interface ICustomerRepository : IRepository<Customer>
        {
            IList<Customer> FindCustomers(string q);
            Customer GetCustomerByDifID(string difId);
            Customer GetCustomerByLogin(string loginName);
        }

        public interface IRepository<T>
        {
            void Save(T entity);
            void Save(List<T> entity);
            bool Save(T entity, out string message);
            void Delete(T entity);
            T GetById(int id);
            ICollection<T> FindAll();
        }

    Read the article

  • A standard event messaging system with AJAX?

    - by Gutzofter
    Are there any standards or messaging frameworks for AJAX? Right now I have a single page that loads content using AJAX. Because I have a complex data-entry form as part of my content, I need to validate certain events that can occur in it. So, after some adjustments driven by my tests:

        asyncShould("search customer list click", 3, function() {
            stop(1000);
            $('#content').show();

            var forCustomerList = newCustomerListRequest();
            var forShipAndCharge = newShipAndChargeRequest(forCustomerList);

            forCustomerList.page = '../../vt/' + forCustomerList.page;
            forShipAndCharge.page = 'helpers/helper.php';
            forShipAndCharge.data = { 'action': 'shipAndCharge', 'DB': '11001' };

            var originalComplete = forShipAndCharge.complete;
            forShipAndCharge.complete = function(xhr, status) {
                originalComplete(xhr, status);
                ok($('#customer_edit').is(":visible"), 'Shows customer editor');
                $('#search').click();
                ok($('#customer_list').is(":visible"), 'Shows customer list');
                ok($('#customer_edit').is(":hidden"), 'Does not show customer editor');
                start();
            };

            testController.getContent(forShipAndCharge);
        });

    Here is the controller for getting content:

        getContent: function (request) {
            $.ajax({
                type: 'GET',
                url: request.page,
                dataType: 'json',
                data: request.data,
                async: request.async,
                success: request.success,
                complete: request.complete
            });
        },

    And here is the request event:

        function newShipAndChargeRequest(serverRequest) {
            var that = {
                serverRequest: serverRequest,
                page: 'nodes/orders/sc.php',
                data: 'customer_id=-1',
                complete: errorHandler,
                success: function(msg) {
                    shipAndChargeHandler(msg);
                    initWhenCustomer(that.serverRequest);
                },
                async: true
            };
            return that;
        }

    And here is a success handler:

        function shipAndChargeHandler(msg) {
            $('.contentContainer').html(msg.html);
            if (msg.status == 'flash') {
                flash(msg.flash);
            }
        }

    On the server side I end up with a JSON structure that looks like this:

        $message['status'] = 'success';
        $message['data'] = array();
        $message['flash'] = '';
        $message['html'] = '';
        echo json_encode($message);

    So now loading content consists of several parts:
      1. HTML: the presentation of the form.
      2. DATA: any data that needs to be loaded for the form.
      3. FLASH: any validation or server errors.
      4. STATUS: tells the client what happened on the server.
    My question is: is this a valid way to handle event messaging on the client side, or am I going down a path of heartache and pain?

    Read the article

  • How do you tell that your unit tests are correct?

    - by Jacob Adams
    I've only done minor unit testing at various points in my career. Whenever I start diving into it again, it always troubles me how to prove that my tests are correct. How can I tell that there isn't a bug in my unit test? Usually I end up running the app, proving it works, then using the unit test as a sort of regression test. What is the recommended approach, and/or what is the approach you take to this problem? Edit: I also realize that you could write small, granular unit tests that would be easy to understand. However, if you assume that small, granular code is flawless and bulletproof, you could just write small, granular programs and not need unit testing. Edit2: For the arguments "unit testing is for making sure your changes don't break anything" and "this will only happen if the test has the exact same flaw as the code", what if the test overfits? It's possible to pass both good and bad code with a bad test. My main question is: what good is unit testing, if, given that your tests can be flawed, you can't really improve your confidence in your code, can't really prove your refactoring worked, and can't really prove that you met the specification?

    Read the article

  • Question about learning TDD

    - by Gandalf StormCrow
    What are the best books for learning about JUnit, JMock, and testing in general? Currently I'm reading Pragmatic Unit Testing in Java; I'm on chapter 6, and it's good, but it gets complicated. Is there a book that builds up from the bottom? From your experience, which book helped you grasp the testing concept?

    Read the article

  • What Test Environment Setup do Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from those of you who are passionate about, and versed in, Ruby testing. By the end of the day (6am PST?) I would like to be able to:
      1. Type a single command to run the test suite for ANY project I find on GitHub.
      2. Run autotest for ANY GitHub project so I can fork and make TESTABLE contributions.
      3. Build gems from the ground up with autotest and Shoulda.
    For one reason or another, I hardly ever run tests for projects I clone from GitHub. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems, writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose. So the questions are:
      1. What is your (the SO reader's and GitHub committer's) test environment setup using autotest, so that whenever you git clone a gem, you can run the tests and autotest-develop it if desired?
      2. What are the people writing the Paperclip tests and the Authlogic tests doing? What is their setup?
    Thanks for the insight. I'm looking for answers that will make me a more effective tester.

    Read the article

  • Do isolation frameworks (Moq, Rhino Mocks, etc.) lead to test over-specification?

    - by Marius
    In Osherove's great book "The Art of Unit Testing", one of the test anti-patterns is over-specification, which is basically the same as testing the internal state of the object instead of some expected output. In my experience, using isolation frameworks can cause the same unwanted side effects as testing internal behavior, because one tends to implement only the behavior necessary to make the stub interact with the object under test. Now if your implementation changes later on (but the contract remains the same), your test will suddenly break because you are expecting some data from the stub that was never implemented. So what do you think is the best approach to counter this?
      1. Implement your stubs/mocks fully; this has the negative side effect of potentially making your test less readable and of specifying more than necessary to make your test pass.
      2. Favor manual, fully implemented fakes.
      3. Implement your stubs/fakes so that they make your test just barely pass, and then deal with the brittleness that this might introduce.
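
    As a concrete illustration of the trade-off (a sketch using Moq, with all domain types invented): the minimal stub below knows only about the one call the current implementation happens to make, so an internal refactoring that fetches by name instead of by id breaks the test even though the contract still holds.

        public interface ICustomerStore
        {
            Customer GetById(int id);
            Customer GetByName(string name);
        }

        [TestMethod]
        public void MinimalStubIsCoupledToTheImplementation()
        {
            var store = new Mock<ICustomerStore>();

            // Only the call the *current* implementation makes is stubbed.
            store.Setup(s => s.GetById(5)).Returns(new Customer { Name = "The Dude" });

            var greeting = new Greeter(store.Object).GreetingFor(5); // Greeter is hypothetical

            // If Greeter is refactored to call GetByName instead, GetByName returns
            // null (unstubbed) and this test breaks despite the contract being intact.
            Assert.AreEqual("Hello, The Dude", greeting);
        }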

    Read the article

  • BDD-testing using a UI driver (e.g. Selenium for a web application)

    - by jonathanconway
    Can BDD (Behavior-Driven Development) tests be implemented using a UI driver? For example, given a web application, instead of writing tests for the back-end and then more tests in JavaScript for the front-end, should I write the tests as Selenium macros, which simulate mouse clicks etc. in the actual browser? The advantages I see in doing it this way are:
      1. The tests are written in one language, rather than several.
      2. They're focussed on the UI, which gets developers thinking outside-in.
      3. They run in the real execution environment (the browser), which allows us to test different browsers, test different servers, and get insight into real-world performance.
    Thoughts?
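
    For illustration, a behavior-named UI test might look like this sketch in C# with Selenium WebDriver (the URL, element ids, and the Given/When/Then naming are all assumptions):

        using OpenQA.Selenium;
        using OpenQA.Selenium.Firefox;

        [TestClass]
        public class SearchBehavior
        {
            [TestMethod]
            public void GivenACustomer_WhenSearchingByName_ThenTheCustomerIsListed()
            {
                using (IWebDriver browser = new FirefoxDriver())
                {
                    browser.Navigate().GoToUrl("http://localhost:8080/customers"); // assumed URL

                    browser.FindElement(By.Id("query")).SendKeys("The Dude");      // assumed ids
                    browser.FindElement(By.Id("search")).Click();

                    // The behavior, asserted through the UI alone.
                    Assert.IsTrue(browser.PageSource.Contains("The Dude"));
                }
            }
        }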

    Read the article

  • How do you unit-test a method with complex input-output

    - by Dan
    When you have a simple method, like for example sum(int x, int y), it is easy to write unit tests. You can check that the method sums two sample integers correctly, for example that 2 + 3 returns 5, and then check the same for some "extraordinary" numbers, for example negative values and zero. Each of these should be a separate unit test, as a single unit test should contain a single assert. What do you do when you have complex input and output? Take an XML parser, for example. You can have a single method parse(String xml) that receives a String and returns a Dom object. You can write separate tests that check that a certain text node is parsed correctly, that attributes are parsed OK, that a child node belongs to its parent, etc. For all of these I can write a simple input, for example <root><child/></root>, that will be used to check parent-child relationships between nodes, and so on for the rest of the expectations. Now take a look at the following XML:

        <root>
            <child1 attribute11="attribute 11 value" attribute12="attribute 12 value">Text 1</child1>
            <child2 attribute21="attribute 21 value" attribute22="attribute 22 value">Text 2</child2>
        </root>

    In order to check that the method worked correctly, I need to check many complex conditions, like that attribute11 and attribute12 belong to child1, that Text 1 belongs to child1, etc. I do not want to put more than one assert in my unit test. How can I accomplish that?
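
    One way to keep it to a single assert, sketched in C# with LINQ to XML (Parser.Parse stands in for whatever the method under test is): build the expected document explicitly and compare the whole parsed structure in one structural comparison.

        using System.Xml.Linq;

        [TestMethod]
        public void ParsesAttributesTextAndNesting()
        {
            string input =
                "<root>" +
                "<child1 attribute11=\"attribute 11 value\" attribute12=\"attribute 12 value\">Text 1</child1>" +
                "<child2 attribute21=\"attribute 21 value\" attribute22=\"attribute 22 value\">Text 2</child2>" +
                "</root>";

            XDocument expected = new XDocument(
                new XElement("root",
                    new XElement("child1",
                        new XAttribute("attribute11", "attribute 11 value"),
                        new XAttribute("attribute12", "attribute 12 value"),
                        "Text 1"),
                    new XElement("child2",
                        new XAttribute("attribute21", "attribute 21 value"),
                        new XAttribute("attribute22", "attribute 22 value"),
                        "Text 2")));

            XDocument actual = Parser.Parse(input); // hypothetical method under test

            // One assert covering attributes, text nodes and parent-child structure.
            Assert.IsTrue(XNode.DeepEquals(expected, actual));
        }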

    Read the article

  • How do I enable code coverage in Visual Studio 2005?

    - by CandlesOfThe
    I have looked at this question, http://stackoverflow.com/questions/2872158/, and the F1 page, but they don't help me much. I have turned profiling on and rebuilt, but I can't find the 'Data and Diagnostics' page, or see anything that resembles a coverage data file in the project folder. What I am trying to do is get an equivalent of 'gcov' on the Linux platform: a chart of how much code is being missed by the test suite. I'm using Visual Studio 2005 Professional Edition and UnitTest++ as the test framework. Any help would be most welcome.

    Read the article
