Search Results

Search found 5709 results on 229 pages for 'persistence unit'.

Page 3 of 229

  • Are unit tests really used as documentation?

    - by stijn
    I cannot count the number of times I have read statements along the lines of 'unit tests are a very important source of documentation for the code under test'. I do not deny that they are. But personally I have never found myself using them as documentation. For the typical frameworks I use, the method declarations document their behaviour and that's all I need. And I assume the unit tests back up everything stated in that documentation, plus likely some more internal details, so on one side they duplicate the documentation while on the other they might add material that is irrelevant. So the question is: when are unit tests used as documentation? When the comments do not cover everything? By developers extending the source? And what do they expose that is useful and relevant that the documentation itself cannot expose?
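
    A test can document edge-case behaviour that a method declaration alone cannot. A minimal JUnit 5 sketch of that idea (SlugGenerator and its rules are invented purely for illustration, not taken from any particular framework):

    ```java
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    import org.junit.jupiter.api.Test;

    // Tiny stand-in implementation so the example is self-contained.
    class SlugGenerator {
        static String toSlug(String title) {
            if (title == null) throw new IllegalArgumentException("title must not be null");
            return java.text.Normalizer.normalize(title, java.text.Normalizer.Form.NFD)
                    .replaceAll("[^\\p{ASCII}]", "")   // drop accents and other non-ASCII
                    .toLowerCase()
                    .replaceAll("[^a-z0-9]+", "-")     // collapse separators to single dashes
                    .replaceAll("(^-|-$)", "");        // trim leading/trailing dashes
        }
    }

    // The test names and bodies spell out behaviour that the signature
    // "String toSlug(String title)" cannot express on its own.
    class SlugGeneratorTest {

        @Test
        void lowercasesAndReplacesSpacesWithDashes() {
            assertEquals("hello-world", SlugGenerator.toSlug("Hello World"));
        }

        @Test
        void stripsCharactersThatAreNotUrlSafe() {
            assertEquals("cafe-menu", SlugGenerator.toSlug("Café: Menu!"));
        }

        @Test
        void rejectsNullInsteadOfSilentlyReturningAnEmptyString() {
            assertThrows(IllegalArgumentException.class, () -> SlugGenerator.toSlug(null));
        }
    }
    ```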

    Read the article

  • Is static universally "evil" for unit testing and if so why does ReSharper recommend it?

    - by Vaccano
    I have found that there are only three ways to unit test (mock/stub) static dependencies in C#/.NET: Moles, TypeMock, and JustMock. Given that two of these are not free and one has not hit release 1.0, mocking static members is not easy. Does that make static methods and the like "evil" (in the unit-testing sense)? And if so, why does ReSharper want me to make anything that can be static, static? (Assuming ReSharper is not also "evil".) Clarification: I am talking about the scenario where you want to unit test a method and that method calls a static method in a different unit/class. By most definitions of unit testing, if you just let the method under test call the static method in the other unit/class then you are not unit testing, you are integration testing. (Useful, but not a unit test.)
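
    The usual alternative to mocking the static call itself is to put a thin instance seam in front of it and inject that; in C# the shape is an interface plus constructor injection, and the same idea is sketched below in Java (all names are hypothetical):

    ```java
    // Hypothetical static dependency that is awkward to stub directly.
    class TaxTable {
        static double rateFor(String region) {
            return "EU".equals(region) ? 0.20 : 0.10; // stand-in for a real lookup
        }
    }

    // Thin instance seam that the rest of the code depends on.
    interface TaxRates {
        double rateFor(String region);
    }

    class StaticTaxRates implements TaxRates {
        @Override
        public double rateFor(String region) {
            return TaxTable.rateFor(region); // the only line that touches the static member
        }
    }

    class InvoiceCalculator {
        private final TaxRates rates;

        InvoiceCalculator(TaxRates rates) {
            this.rates = rates;
        }

        double gross(double net, String region) {
            return net * (1.0 + rates.rateFor(region));
        }
    }

    // Unit test: stub the seam, so the static method is never invoked.
    class InvoiceCalculatorTest {
        @org.junit.jupiter.api.Test
        void appliesTheInjectedRate() {
            TaxRates fixedRate = region -> 0.25;
            org.junit.jupiter.api.Assertions.assertEquals(
                    125.0, new InvoiceCalculator(fixedRate).gross(100.0, "EU"), 1e-9);
        }
    }
    ```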

    Read the article

  • What is the value of checking in broken unit tests?

    - by Adam W.
    While there are ways of keeping unit tests from being executed, what is the value of checking in broken unit tests? I will use a simple example: case sensitivity. The current code is case sensitive. A valid input into the method is "Cat" and it would return an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive, so if the method were passed "cat" it could possibly return something like Animal.Null instead of Animal.Cat, and the unit test would fail. Though a simple code change would make this work, a more complex issue might take weeks to fix, yet identifying the bug with a unit test could be a far simpler task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some just need explicit implementation documentation (e.g. case sensitive or not), or the code does not trigger the bug given how it is currently called. But unit tests can be created for specific, valid inputs that make the bug visible. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should such a unit test be flagged with ignore, priority, category, etc., to determine whether a build was successful based on the tests executed? Eventually the unit test should run against the code once someone fixes it. On one hand, it shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding out the ones that are expected to fail from failures caused by a new check-in would be difficult.
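
    One common compromise is to check the test in but mark it as a known, skipped failure so it documents the bug without breaking the build. A rough JUnit 5 sketch of the case-sensitivity example (AnimalParser, its behaviour and the issue number are all made up):

    ```java
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Disabled;
    import org.junit.jupiter.api.Test;

    enum Animal { Cat, Dog, Null }

    // Minimal stand-in for the method under discussion: today's case-sensitive behaviour.
    class AnimalParser {
        static Animal parse(String name) {
            return "Cat".equals(name) ? Animal.Cat : Animal.Null;
        }
    }

    class AnimalParserTest {

        @Test
        void parsesExactCaseInput() {
            assertEquals(Animal.Cat, AnimalParser.parse("Cat")); // passes today
        }

        @Test
        @Disabled("Known bug #123: parsing should be case-insensitive; re-enable once fixed")
        void parsesLowerCaseInput() {
            assertEquals(Animal.Cat, AnimalParser.parse("cat")); // currently returns Animal.Null
        }
    }
    ```

    The skipped test then shows up in reports as a documented gap rather than as noise among genuine regressions.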

    Read the article

  • Is the JPA persistence.xml located via the classpath?

    - by Vinnie
    Here's what I'm trying to do. I'm using JPA persistence in a web application, but I have a set of unit tests that I want to run outside of a container. I have my primary persistence.xml in the META-INF folder of my main app and it works great in the container (Glassfish). I placed a second persistence.xml in the META-INF folder of my test-classes directory; this contains a separate persistence unit that I want to use for tests only. In Eclipse, I placed this folder higher in the classpath than the default folder and it seems to work. But when I run the Maven build directly from the command line and it attempts to run the unit tests, the persistence.xml override is ignored. I can see the override in the META-INF folder of the Maven-generated test-classes directory and I expected the tests to use this file, but they don't. My Spring test configuration overrides, achieved in a similar fashion, are working. I'm confused as to whether persistence.xml is located through the classpath. If it were, my override should work like the Spring override, since the Maven Surefire plugin documentation says "[The test class directory] will be included at the beginning of the test classpath". Did I wrongly anticipate how the persistence.xml file is located? I could (and did) create a second persistence unit in the production persistence.xml file, but it feels dirty to place test configuration into this production file. Any other ideas on how to achieve my goal are welcome.
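
    For what it's worth, JPA providers do locate META-INF/persistence.xml through the classpath, but when two copies are visible at once the result is provider-dependent rather than a clean first-wins override, so a common workaround is to give the test unit its own name and reference it explicitly from the tests. A rough sketch, where the unit name "testPU" is an assumed name declared in src/test/resources/META-INF/persistence.xml:

    ```java
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    import org.junit.jupiter.api.AfterAll;
    import org.junit.jupiter.api.Assertions;
    import org.junit.jupiter.api.BeforeAll;
    import org.junit.jupiter.api.Test;

    class PersistenceSmokeTest {

        static EntityManagerFactory emf;

        @BeforeAll
        static void createFactory() {
            // The provider scans every META-INF/persistence.xml on the classpath and picks
            // the unit with this name, so the test unit cannot be shadowed by the
            // production one as long as the names differ.
            emf = Persistence.createEntityManagerFactory("testPU");
        }

        @AfterAll
        static void closeFactory() {
            emf.close();
        }

        @Test
        void canOpenAndCloseAnEntityManager() {
            EntityManager em = emf.createEntityManager();
            Assertions.assertTrue(em.isOpen());
            em.close();
        }
    }
    ```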

    Read the article

  • Mock Objects for Unit Testing

    - by user9009
    Hello. How often are QA engineers responsible for developing mock objects for unit testing, or is dealing with mock objects just the developer's job? The reason I ask is that I'm interested in QA as a career and am learning tools like JUnit, TestNG and a couple of frameworks. I just want to know up to what level unit testing is done by the developer, and at what point the QA engineer takes over testing for better test coverage? Thanks

    Read the article

  • Quality of Code in unit tests?

    - by m3th0dman
    Is it worth spending time, when writing unit tests, to make the test code itself good quality and easy to read? When writing these kinds of tests I very often break the Law of Demeter, to write faster and to avoid using so many variables. Technically, unit tests are not reused directly - they are strictly bound to the code - so I do not see any reason to spend much time on them; they only need to be functional.

    Read the article

  • How have Guava unit tests been generated automatically?

    - by dzieciou
    Guava has many of its unit test cases generated automatically: "Guava has staggering numbers of unit tests: as of July 2012, the guava-tests package includes over 286,000 individual test cases. Most of these are automatically generated, not written by hand, but Guava's test coverage is extremely thorough, especially for com.google.common.collect." How were they generated? What techniques and technologies were used to design and generate them?
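
    Most of that generation comes from guava-testlib, which ships with Guava and turns one collection implementation plus a list of declared features into a full contract suite of JUnit cases. A rough sketch of how such a suite is declared (class and suite names are illustrative):

    ```java
    import java.util.Arrays;
    import java.util.LinkedList;
    import java.util.List;

    import com.google.common.collect.testing.ListTestSuiteBuilder;
    import com.google.common.collect.testing.TestStringListGenerator;
    import com.google.common.collect.testing.features.CollectionFeature;
    import com.google.common.collect.testing.features.CollectionSize;
    import com.google.common.collect.testing.features.ListFeature;

    import junit.framework.Test;
    import junit.framework.TestSuite;

    public class LinkedListGeneratedTests {

        // JUnit picks up this suite() method; guava-testlib expands it into hundreds of
        // generated cases, one per combination of List-contract rule, declared feature
        // and collection size.
        public static Test suite() {
            TestSuite suite = new TestSuite("Generated List contract tests");
            suite.addTest(
                    ListTestSuiteBuilder
                            .using(new TestStringListGenerator() {
                                @Override
                                protected List<String> create(String[] elements) {
                                    return new LinkedList<>(Arrays.asList(elements));
                                }
                            })
                            .named("LinkedList")
                            .withFeatures(
                                    ListFeature.GENERAL_PURPOSE,
                                    CollectionFeature.ALLOWS_NULL_VALUES,
                                    CollectionSize.ANY)
                            .createTestSuite());
            return suite;
        }
    }
    ```

    Guava's own com.google.common.collect tests are built in broadly this way, which is how a handful of generators expand into the six-figure case count quoted above.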

    Read the article

  • Unit-testing code that relies on untestable 3rd party code

    - by DudeOnRock
    Sometimes, especially when working with third-party code, I write unit-test-specific code in my production code. This happens when the third-party code uses singletons, relies on constants, accesses the file system or a resource I don't want to touch in a test situation, or overuses inheritance. The form my unit-test-specific code takes is usually the following: if accessing or importing a certain resource fails, I assume this is a test run and load a mock object instead. Is this poor form, and if it is, what is normally done when writing tests for code that uses untestable third-party code?
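
    A more common shape than branching on a failed resource load is to hide the third-party call behind a small adapter and inject it, so production code never carries a test-only path. A minimal sketch (every name, including the vendor API, is invented):

    ```java
    // Stand-in for the untestable third-party API: a singleton that hits the file system.
    class VendorLicenseManager {
        private static final VendorLicenseManager INSTANCE = new VendorLicenseManager();

        static VendorLicenseManager getInstance() {
            return INSTANCE;
        }

        String loadKeyFromDisk() {
            return "REAL-KEY-FROM-DISK"; // imagine real file-system access here
        }
    }

    // Seam that the rest of the code depends on.
    interface LicenseStore {
        String readLicenseKey();
    }

    // Production adapter: the only class that touches the vendor singleton.
    class VendorLicenseStore implements LicenseStore {
        @Override
        public String readLicenseKey() {
            return VendorLicenseManager.getInstance().loadKeyFromDisk();
        }
    }

    // Code under test takes the seam, not the vendor class.
    class ActivationService {
        private final LicenseStore store;

        ActivationService(LicenseStore store) {
            this.store = store;
        }

        boolean isActivated() {
            String key = store.readLicenseKey();
            return key != null && !key.isEmpty();
        }
    }

    // In tests, a hand-rolled fake replaces the adapter; no "am I in a test?" check is needed.
    class FakeLicenseStore implements LicenseStore {
        @Override
        public String readLicenseKey() {
            return "TEST-KEY";
        }
    }
    ```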

    Read the article

  • unit/integration testing web service proxy client

    - by cori
    I'm rewriting a PHP client/proxy library that provides an interface to a SOAP-based .Net web service, and in the process I want to add some unit and integration tests so future modifications are less risky. The work the library performs is to marshal the calls to the web service and do a little reorganizing of the responses to present a slightly more object-oriented interface to the underlying service. Since this library is little else than a thin layer on top of web service calls, my basic assumption is that I'll really be writing integration tests more than unit tests - for example, I don't see any reason to mock away the web service - the work performed by the code I'm working on is very light; it almost passes the response from the service right back to its consumer. Most of the calls are basic CRUD operations: CreateRole(), CreateUser(), DeleteUser(), FindUser(), etc. I'll be starting from a known database state - the system I'm using for these tests is isolated for testing purposes, so the results will be more or less predictable. My question is this: is it natural to use web service calls to confirm the results of operations within the tests and to reset the state of the application within the scope of each test? Here's an example: one test might be createUserReturnsValidUserId() and might go like this: public function createUserReturnsValidUserId() { // we're assuming a global connection to the service $newUserId = $client->CreateUser("user1"); assertNotNull($newUserId); assertNotNull($client->FindUser($newUserId)); $client->DeleteUser($newUserId); } So I'm creating a user, making sure I get an ID back and that it represents a user in the system, and then cleaning up after myself (so that later tests don't rely on the success or failure of this test w/r/t the number of users in the system, for example). However this still seems pretty fragile - lots of dependencies and opportunities for tests to fail and affect the results of later tests, which I definitely want to avoid. Am I missing some ways to decouple these tests from the system under test, or is this really the best I can do? I think this is a fairly general unit/integration testing question, but if it matters I'm using PHPUnit for the testing framework.

    Read the article

  • how to architect this to make it unit testable

    - by SOfanatic
    I'm currently working on a project where I'm receiving an object via web service (WSDL). The overall process is the following: receive the object, add/delete/update parts (or all) of it, and return the object with the changes made. The thing is that sometimes these changes are complicated and there is some logic involved - other databases, other web services, etc. - so to facilitate this I'm creating a custom object that mimics the original one but has some enhanced functionality to make some things easier. So I'm trying to have this process: receive the original object, convert/copy it to the custom object, add/delete/update, convert/copy it back to the original object, and return the original object. Example: public class Row { public List<Field> Fields { get; set; } public string RowId { get; set; } public Row() { this.Fields = new List<Field>(); } } public class Field { public string Number { get; set; } public string Value { get; set; } } So for example, one of the "actions" to perform on this would be to find all Fields in a Row whose Value equals something, and update them with some other value. I have a CustomRow class that represents the Row class; how can I make this class unit testable? Do I have to create an interface ICustomRow to mock it in the unit test? If one of the actions is to sum all of the Values in the Fields that have a Number equal to 10, like the sample function below, how can I design the custom class to facilitate unit tests? Sample function: public int Sum(FieldNumber number) { return row.Fields.Where(x => x.FieldNumber.Equals(number)).Sum(x => x.FieldValue); } Am I approaching this the wrong way?

    Read the article

  • GH-Unit for unit testing Objective-C code, why am I getting linking errors?

    - by djhworld
    Hi there, I'm trying to dive into the quite frankly terrible world of unit testing using Xcode (it seems such a convoluted process). Basically I have this test class, attempting to test my Show class: #import <GHUnit/GHUnit.h> #import "Show.h" @interface ShowTest : GHTestCase { } @end @implementation ShowTest - (void)testShowCreate { Show *s = [[Show alloc] init]; GHAssertNotNil(s,@"Was nil."); } @end However when I try to build and run my tests it moans with this error: Undefined symbols: "_OBJC_CLASS_$_Show", referenced from: __objc_classrefs__DATA@0 in ShowTest.o ld: symbol(s) not found collect2: ld returned 1 exit status Now I'm presuming this is a linking error. I tried following every step in the instructions located here: http://github.com/gabriel/gh-unit/blob/master/README.md And step 2 of these instructions confused me: "In the Target 'Tests' Info window, General tab: Add a linked library, under Mac OS X 10.5 SDK section, select GHUnit.framework. Add a linked library, select your project. Add a direct dependency, and select your project. (This will cause your application or framework to build before the test target.)" How am I supposed to add my project to the linked library list when all it accepts is .dylib, .framework and .o files? I'm confused! Thanks for any help.

    Read the article

  • ASP.NET MVC unit testing

    - by Simon Lomax
    Hi, I'm getting started with unit testing and trying to do some TDD. I've read a fair bit about the subject and written a few tests. I just want to know if the following is the right approach. I want to add the usual "contact us" facility on my web site. You know the thing: the user fills out a form with their email address, enters a brief message and hits a button to post the form back. The model binders do their stuff and my action method accepts the posted data as a model. The action method would then parse the model and use SMTP to send an email to the web site administrator informing him/her that somebody filled out the contact form on their site. Now for the question... In order to test this, would I be right in creating an interface IDeliver that has a method Send(emailAddress, message) to accept the email address and message body? Implement the interface in a concrete class and let that class deal with the SMTP stuff and actually send the mail. If I add the interface as a parameter to my controller constructor I can then use DI and IoC to inject the concrete class into the controller. But when unit testing I can create a fake or mock version of my IDeliver and do assertions on that. The reason I ask is that I've seen other examples of people generating interfaces for SmtpClient and then mocking that. Is there really any need to go that far, or am I not understanding this stuff?
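
    That is the usual approach: mock only the seam you own (IDeliver) rather than SmtpClient itself. The same shape is sketched below in Java with JUnit and Mockito, purely to illustrate the pattern (all names are made up; the C# version is an interface plus constructor injection, exactly as described above):

    ```java
    import static org.mockito.ArgumentMatchers.contains;
    import static org.mockito.ArgumentMatchers.eq;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;

    import org.junit.jupiter.api.Test;

    // Delivery seam, mirroring the IDeliver interface from the question.
    interface Deliverer {
        void send(String emailAddress, String message);
    }

    // Controller-like class that depends on the seam, never on an SMTP client directly.
    class ContactController {
        private final Deliverer deliverer;

        ContactController(Deliverer deliverer) {
            this.deliverer = deliverer;
        }

        void submit(String fromAddress, String body) {
            // validation and formatting would live here
            deliverer.send("admin@example.com", "Contact from " + fromAddress + ": " + body);
        }
    }

    class ContactControllerTest {

        @Test
        void submitForwardsTheMessageToTheDeliverer() {
            Deliverer deliverer = mock(Deliverer.class);

            new ContactController(deliverer).submit("user@example.com", "Hello");

            verify(deliverer).send(eq("admin@example.com"), contains("user@example.com"));
        }
    }
    ```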

    Read the article

  • Creating mock objects in PHPUnit

    - by Mike
    Hi, I've searched but can't quite find what I'm looking for, and the manual isn't much help in this respect. I'm fairly new to unit testing, so I'm not sure if I'm on the right track at all. Anyway, on to the question. I have a class: <?php class testClass { public function doSomething($array_of_stuff) { return AnotherClass::returnRandomElement($array_of_stuff); } } ?> Now, clearly I want AnotherClass::returnRandomElement($array_of_stuff); to return the same thing every time. My question is: in my unit test, how do I mock up this object? I've tried adding the AnotherClass to the top of the test file, but when I want to test AnotherClass I get the "Cannot redeclare class" error. I think I understand factory classes, but I'm not sure how I would apply that in this instance. Would I need to write an entirely separate AnotherClass class which contains test data, and then use the Factory class to load that instead of the real AnotherClass? Or is using the Factory pattern just a red herring? I tried this: $RedirectUtils_stub = $this->getMockForAbstractClass('RedirectUtils'); $o1 = new stdClass(); $o1->id = 2; $o1->test_id = 2; $o1->weight = 60; $o1->data = "http://www.google.com/?ffdfd=fdfdfdfd?route=1"; $RedirectUtils_stub->expects($this->any()) ->method('chooseRandomRoot') ->will($this->returnValue($o1)); $RedirectUtils_stub->expects($this->any()) ->method('decodeQueryString') ->will($this->returnValue(array())); in the setUp() function, but these stubs are ignored and I can't work out whether it's something I'm doing wrong, or the way I'm accessing the AnotherClass methods. Help! This is driving me nuts.

    Read the article

  • The importance of Unit Testing in BI

    - by Davide Mauri
    One of the main steps in the process we internally use to develop a BI solution is the implementation of unit tests of your BI data. As you may already know, I've created a simple (for now) tool that leverages NUnit to allow us to quickly create unit tests without having to resort to using Visual Studio Database Professional: http://queryunit.codeplex.com/ Once you have a tool like this one, you can also start to make sure that your BI solution (DWH and CUBE) is not only structurally sound (I mean, the cube or the report gets processed correctly), but you can also check that the logical integrity of your business rules is enforced. For example, let's say that the customer tells you that they will never create an invoice for a specific product line in 2010, since that product line is discontinued and will never be sold again. OK, we know that in theory this is true, but a lot of this business rule's effectiveness depends on people not making mistakes while inserting new orders/invoices and on the ERP implementing a check for this business logic. Unfortunately these last two hypotheses are not always true, so you may find yourself really having some invoices for a product line that doesn't exist anymore. Maybe this kind of situation will in future be solved using Master Data Management but, meanwhile, how can you give an idea of the data quality to your customers? How can you check that the logical integrity of the analytical data you produce is exactly what you expect? Well, unit testing of a DWH or a CUBE can be a solution. Once you have defined your test suite, by writing SQL and MDX queries that check that your data is what you expect it to be, and if you use NUnit (and QueryUnit does), you can then use a tool like NUnit2Report to create a nice HTML report that can be shipped via email to give information about data quality. In addition to that, since NUnit produces an XML file as a result, you can also import it into a SQL Server database and then monitor the quality of data over time. I'll be speaking about this approach (and more in general about how to "engineer" a BI solution) at the next European SQL PASS, Adaptive BI Best Practices: http://www.sqlpass.org/summit/eu2010/Agenda/ProgramSessions/AdaptiveBIBestPratices.aspx I'll enjoy discussing all of this with you, so see you there! And remember: "if it ain't tested, it's broken!" (Sorry, I don't remember who said that in the first place :-))

    Read the article

  • Unit testing in Django

    - by acjohnson55
    I'm really struggling to write effective unit tests for a large Django project. I have reasonably good test coverage, but I've come to realize that the tests I've been writing are definitely integration/acceptance tests, not unit tests at all, and I have critical portions of my application that are not being tested effectively. I want to fix this ASAP. Here's my problem. My schema is deeply relational and heavily time-oriented, giving my model objects high internal coupling and lots of state. Many of my model methods query based on time intervals, and I've got a lot of auto_now_add going on in timestamped fields. So take a method that looks like this for example: def summary(self, startTime=None, endTime=None): # ... logic to assign a proper start and end time # if none was provided, probably using datetime.now() objects = self.related_model_set.manager_method.filter(...) return sum(object.key_method(startTime, endTime) for object in objects) How does one approach testing something like this? Here's where I am so far. It occurs to me that the unit-testing objective should be: given some mocked behavior of key_method on its arguments, does summary correctly filter/aggregate to produce a correct result? Mocking datetime.now() is straightforward enough, but how can I mock out the rest of the behavior? I could use fixtures, but I've heard pros and cons of using fixtures for building my data (poor maintainability being a con that hits home for me). I could also set up my data through the ORM, but that can be limiting, because then I have to create related objects as well. And the ORM doesn't let you mess with auto_now_add fields manually. Mocking the ORM is another option, but not only is it tricky to mock deeply nested ORM methods, the logic in the ORM code gets mocked out of the test, and mocking seems to make the test really dependent on the internals and dependencies of the function under test. The toughest nuts to crack seem to be functions like this, which sit on a few layers of models and lower-level functions and are very dependent on time, even though these functions may not be super complicated. My overall problem is that no matter how I slice it, my tests look way more complex than the functions they are testing.

    Read the article

  • Unit testing time-bound code

    - by maasg
    I'm currently working on an application that does a lot of time-bound operations. That is, based on long now = System.currentTimeMillis(); and combined with a scheduler, it calculates periods of time that parameterize the execution of some operations, e.g.: public void execute(...) { // executed by a scheduler every x minutes final int now = (int) TimeUnit.MILLISECONDS.toSeconds(System.currentTimeMillis()); final int alignedTime = now - now % getFrequency(); final int startTime = alignedTime - 2 * getFrequency(); final int endTimeSecs = alignedTime - getFrequency(); uploadData(target, startTime, endTimeSecs); } Most parts of the application are unit-tested independently of time (in this case, uploadData has a natural unit test), but I was wondering about best practices for testing the time-bound parts that rely on System.currentTimeMillis()?
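
    A widely used practice is to stop calling System.currentTimeMillis() directly and inject a time source instead (java.time.Clock on Java 8+, or a tiny hand-rolled interface on older JVMs), so a test can pin "now" to a known instant. A rough sketch along the lines of the code above (names and numbers are illustrative):

    ```java
    import java.time.Clock;
    import java.time.Instant;
    import java.time.ZoneOffset;

    import org.junit.jupiter.api.Assertions;
    import org.junit.jupiter.api.Test;

    // The time-bound logic asks an injected Clock for "now" instead of the system clock.
    class UploadWindow {
        private final Clock clock;
        private final int frequencySecs;

        UploadWindow(Clock clock, int frequencySecs) {
            this.clock = clock;
            this.frequencySecs = frequencySecs;
        }

        int startTimeSecs() {
            int now = (int) (clock.millis() / 1000);
            int aligned = now - now % frequencySecs;
            return aligned - 2 * frequencySecs;
        }
    }

    class UploadWindowTest {

        @Test
        void alignsTheWindowToTheFrequencyBoundary() {
            // Production code would pass Clock.systemUTC(); the test pins time to a known instant.
            Clock fixed = Clock.fixed(Instant.ofEpochSecond(1_000_007), ZoneOffset.UTC);

            // 1_000_007 aligned down to a 300 s boundary is 999_900; minus two periods is 999_300.
            Assertions.assertEquals(999_300, new UploadWindow(fixed, 300).startTimeSecs());
        }
    }
    ```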

    Read the article

  • How to fix “Unit Test Runner failed to load test assembly”

    - by ybbest
    I encountered this issue a couple of times during my recent project, and every time I forgot what actually caused it. Therefore, I decided to write a quick blog post to make sure I can identify the issue quickly. Problem: I ran unit tests using a test runner and received a "Unit Test Runner failed to load test assembly" exception. Analysis: Basically, I had changed some code and started the test runner to run the tests. The same DLL had already been deployed to the GAC, so the test runner was actually trying to use the old version of the assembly and thus could not load it. Solution: Deploy the current version of the DLL to the GAC and re-run your tests; it works like a charm.

    Read the article

  • How important are unit tests in software development?

    - by Lo Wai Lun
    We do software testing by testing a lot of I/O cases, so developers and system analysts can open reviews and test their committed code within a given time period (e.g. one week). But when it comes to extracting information from a database, how should we choose the test cases and the methodology to start with? Admittedly this is more of a case study, because unit testing depends on the project involved, which is usually very specific and particular. What is a general overview of the steps and precautions for unit testing?
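
    For the database-extraction part specifically, one common precaution is to run the query code against a throwaway in-memory database seeded inside each test, so every case starts from a known state. A rough JUnit sketch using H2 (the table and query are invented for illustration; the H2 driver must be on the test classpath):

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    import org.junit.jupiter.api.Test;

    import static org.junit.jupiter.api.Assertions.assertEquals;

    class CustomerQueryTest {

        @Test
        void countsOnlyActiveCustomers() throws Exception {
            // In-memory database that disappears when the connection closes.
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:testdb")) {
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE customer (id INT PRIMARY KEY, active BOOLEAN)");
                    st.execute("INSERT INTO customer VALUES (1, TRUE), (2, FALSE)");
                }
                // The query under test would normally live in a DAO/repository class.
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM customer WHERE active = TRUE")) {
                    rs.next();
                    assertEquals(1, rs.getInt(1));
                }
            }
        }
    }
    ```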

    Read the article

  • How does one unit test an algorithm?

    - by Asa Baylus
    I was recently working on a JS slideshow which rotates images using a weighted average algorithm. Thankfully, timgilbert has written a weighted list script which implements the exact algorithm I needed. However, in his documentation he has noted under todos: "unit tests!". I'd like to know how one goes about unit testing an algorithm. In the case of a weighted average, how would you prove that the averages are accurate when there is an element of randomness? Code samples of something similar would be very helpful to my understanding.
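
    Two standard tactics for randomized algorithms are to inject (or seed) the random source so a run is reproducible, and to assert statistical properties over many draws with a tolerance instead of exact values. A rough Java sketch of both ideas (the picker below is illustrative, not timgilbert's script):

    ```java
    import java.util.List;
    import java.util.Random;

    import org.junit.jupiter.api.Assertions;
    import org.junit.jupiter.api.Test;

    // Weighted picker that takes its randomness as a dependency instead of creating it internally.
    class WeightedPicker {
        private final Random random;

        WeightedPicker(Random random) {
            this.random = random;
        }

        int pickIndex(List<Integer> weights) {
            int total = weights.stream().mapToInt(Integer::intValue).sum();
            int roll = random.nextInt(total);
            for (int i = 0; i < weights.size(); i++) {
                roll -= weights.get(i);
                if (roll < 0) {
                    return i;
                }
            }
            throw new IllegalStateException("weights must be positive");
        }
    }

    class WeightedPickerTest {

        @Test
        void heavierWeightsArePickedProportionallyMoreOften() {
            // A fixed seed makes the whole run reproducible.
            WeightedPicker picker = new WeightedPicker(new Random(42));

            int[] hits = new int[2];
            for (int i = 0; i < 10_000; i++) {
                hits[picker.pickIndex(List.of(1, 9))]++;
            }

            // Expect roughly a 1:9 split; assert with a generous tolerance rather than an exact count.
            Assertions.assertTrue(hits[1] > 8_500 && hits[1] < 9_500);
        }
    }
    ```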

    Read the article

  • Azure price through Unit Testing

    - by mrtentje
    For a project I am trying to find a way to estimate the costs of an Azure application through unit testing. I will likely extend the Visual Studio unit testing framework (another solution is also possible, as long as it can run side by side with the Visual Studio testing framework: when Visual Studio runs some tests, the Azure solution must also run, if it is an Azure project). A Visual Studio extension will be built so it can be reused for future projects. Does anyone have any experience or ideas on how this can be achieved? Thanks in advance

    Read the article

  • Introduce unit testing when codebase is already available

    - by McMannus
    I've been working on a project in Flex for three years now without unit testing. The simple reason for that is that I just didn't realize the importance of unit testing at the beginning of my studies at university. Now my attitude towards testing has changed completely, and therefore I want to introduce it to the existing project (about 25,000 LOC). In order to do that, there are two approaches to choose from: 1) discard the existing codebase and start from scratch with TDD; 2) write the tests and try to make them pass by changing the existing code. Well, I would appreciate not having to write everything from scratch, but I think that by doing so the design would be much better. What would you advise me to do? Thanks in advance for your replies! Jan

    Read the article
