Search Results

Search found 7802 results on 313 pages for 'unit tests'.

Page 2 of 313

  • Is unit testing development or testing?

    - by Rubio
    I had a discussion with a testing manager about the role of unit and integration testing. She requested that developers report what they have unit and integration tested, and how. My perspective is that unit and integration testing are part of the development process, not the testing process. Beyond semantics, what I mean is that unit and integration tests should not be included in the testing reports, and systems testers should not be concerned with them.

    My reasoning is based on two things. First, unit and integration tests are always planned and performed against an interface and a contract. Regardless of whether you use formalized contracts, you still test what, for example, a method is supposed to do, i.e. a contract. In integration testing you test the interface between two distinct modules. The interface and the contract determine when the test passes, but you always test a limited part of the whole system. Systems testing, on the other hand, is planned and performed against the system specifications, and the spec determines when the test passes.

    Second, I don't see any value in communicating the breadth and depth of unit and integration tests to the (systems) tester. Suppose I write a report that lists what kind of unit tests are performed on a particular business layer class. What is he or she supposed to take away from that? Deciding what should and shouldn't be system-tested based on that report would be a mistake, because the system may still not function the way the specs require even though all unit and integration tests pass.

    This might seem like a useless academic discussion, but if you work in a strictly formal environment as I do, it actually matters for how we do things. Anyway, am I totally wrong? (Sorry for the long post.)

    Read the article

  • Excessive unit testing may be harming agile development; unit tests reportedly favored over integration tests

    Excessive unit testing may be harming agile development; unit tests would reportedly be favored over integration tests. Agile development very often relies on test-driven development (TDD). Mark Balbes, one of the most prominent members of Asynchrony Solutions and an expert in software development and agile project management, shares his view of TDD. In his opinion, agile development currently makes excessive use of TDD, ...

    Read the article

  • Does unit testing lead to premature generalization (specifically in the context of C++)?

    - by Martin
    Preliminary notes: I'll not go into the distinction between the different kinds of test; there are already a few questions on these sites about that. I'll take what's there, which says: unit testing in the sense of "testing the smallest isolatable unit of an application", from which this question actually derives.

    The isolation problem: what is the smallest isolatable unit of a program? Well, as I see it, it (highly?) depends on what language you are coding in. Michael Feathers talks about the concept of a seam [WEwLC, p31]: "A seam is a place where you can alter behavior in your program without editing in that place." Without going into the details, I understand a seam -- in the context of unit testing -- to be a place in a program where your "test" can interface with your "unit".

    Examples: unit tests -- especially in C++ -- require the code under test to add more seams than would strictly be called for by a given problem. For example: adding a virtual interface where a non-virtual implementation would have been sufficient; splitting -- generalizing(?) -- a (smallish) class further "just" to facilitate adding a test; splitting a single-executable project into seemingly "independent" libs, "just" to facilitate compiling them independently for the tests.

    The question -- I'll try a few versions that hopefully ask about the same point: Is the way that unit tests require one to structure an application's code "only" beneficial for the unit tests, or is it actually beneficial to the application's structure? Is the generalization the code needs to exhibit to be unit-testable useful for anything but the unit tests? Does adding unit tests force one to generalize unnecessarily? Is the shape unit tests force on code "always" also a good shape for the code in general, as seen from the problem domain?

    I remember a rule of thumb that said don't generalize until you need to / until there's a second place that uses the code. With unit tests, there's always a second place that uses the code -- namely the unit test. So is this reason enough to generalize?
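
    For readers unfamiliar with the kind of seam being described, here is a minimal sketch of the trade-off -- in C# rather than C++, and with invented names (IMailer, OrderService, RecordingMailer) -- where an interface exists solely so that a test can substitute a fake, even though production code only ever has one implementation:

        using System.Collections.Generic;

        // The seam: an interface that exists mainly so tests can substitute a fake.
        public interface IMailer
        {
            void Send(string to, string body);
        }

        // Production code only ever has this one implementation...
        public class SmtpMailer : IMailer
        {
            public void Send(string to, string body) { /* talks to a real SMTP server */ }
        }

        // ...which OrderService now receives through the generalized interface.
        public class OrderService
        {
            private readonly IMailer mailer;
            public OrderService(IMailer mailer) { this.mailer = mailer; }

            public void Confirm(string customer)
            {
                mailer.Send(customer, "Your order is confirmed.");
            }
        }

        // The unit test becomes the "second user" of the interface.
        public class RecordingMailer : IMailer
        {
            public readonly List<string> Sent = new List<string>();
            public void Send(string to, string body) { Sent.Add(to + ": " + body); }
        }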

    Read the article

  • Learning about tests for junior programmers

    - by RHaguiuda
    I'm not sure if it's okay to ask this on Stack Overflow. I've been reading a lot about tests, unit tests, test frameworks, mocks and so on, but as a junior programmer I don't know anything about tests, not even where to start! Can anyone explain to young programmers about tests: how they're run, where and what to test, and what unit testing, integration testing, and automated tests are? How much should be tested? And more important: how much testing is enough? I believe this would be very helpful. If possible, indicate a few books about these subjects too. Thanks

    Read the article

  • Oracle 64-bit assembly throws BadImageFormatException when running unit tests

    - by pjohnson
    We recently upgraded to the 64-bit Oracle client. Since then, Visual Studio 2010 unit tests that hit the database (I know, unit tests shouldn't hit the database -- they're not perfect) all fail with this error message:

    Test method MyProject.Test.SomeTest threw exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.BadImageFormatException: Could not load file or assembly 'Oracle.DataAccess, Version=4.112.3.0, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    I resolved this by changing the test settings to run tests in 64-bit. From the Test menu, go to Edit Test Settings, and pick your settings file. Go to Hosts, and change the "Run tests in 32 bit or 64 bit process" dropdown to "Run tests in 64 bit process on 64 bit machine". Now your tests should run.

    This fix makes me a little nervous. Visual Studio 2010 and earlier seem to change that file for no apparent reason, add more settings files, etc. If you're not paying attention, you could have TestSettings1.testsettings through TestSettings99.testsettings sitting there and never notice the difference. So it's worth making a note of how to change it in case you have to redo it, and being vigilant about files VS tries to add.

    I'm not entirely clear on why this was even a problem. Isn't that the point of an MSIL assembly, that it's not specific to the hardware it runs on? An IL disassembler can open the Oracle.DataAccess.dll in question, and in its Runtime property I see the value "v4.0.30319 / x64". So I guess the assembly was specifically built to target 64-bit platforms only, possibly due to a 64-bit-specific difference in the external Oracle client upon which it depends. Most other assemblies, especially in the .NET Framework, list "msil", and a couple list "x86". So I guess this is another entry in the long list of ways Oracle refuses to play nice with Windows and .NET.

    If this doesn't solve your problem, you can read others' research into this error, and where to change the same test setting in Visual Studio 2012.

    Read the article

  • Can unit tests verify software requirements?

    - by Peter Smith
    I have often heard that unit tests help programmers build confidence in their software. But are they enough for verifying that software requirements are met? I am losing confidence that software is working just because the unit tests pass. We have experienced some failures in production deployment due to untested/unverified execution paths. These failures are sometimes quite large, impact business operations, and often require an immediate fix. The failure is very rarely traced back to a failing unit test. We have large unit test bodies with reasonable line coverage, but almost all of them focus on individual classes and not on their interactions. Manual testing seems to be ineffective because the software being worked on is typically large, with many execution paths and many integration points with other software. It is very painful to manually test all of the functionality, and it never seems to flush out all the bugs. Are we doing unit testing wrong when it seems we still fail to verify the software correctly before deployment? Or do most shops have another layer of automated testing in addition to unit tests?

    Read the article

  • How to test the tests?

    - by Ryszard Szopa
    We test our code to make it more correct (actually, less likely to be incorrect). However, the tests are also code -- they can also contain errors. And if your tests are buggy, they hardly make your code better. I can think of three possible types of errors in tests: (1) logical errors, when the programmer misunderstood the task at hand, and the tests do what he thought they should do, which is wrong; (2) errors in the underlying testing framework (e.g. a leaky mocking abstraction); (3) bugs in the tests: the test is doing something slightly different from what the programmer thinks it is. Type (1) errors seem impossible to prevent (unless the programmer just... gets smarter). However, (2) and (3) may be tractable. How do you deal with these types of errors? Do you have any special strategies to avoid them? For example, do you write some special "empty" tests that only check the test author's presuppositions? Also, how do you approach debugging a broken test case?
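
    As one illustration of the "empty" presupposition-checking test mentioned above, here is a minimal NUnit-style sketch (the fixture and its contents are invented): it exercises no production code and only pins down an assumption the real tests rely on, which helps narrow down type (3) errors when something breaks.

        using NUnit.Framework;

        [TestFixture]
        public class SortTestPresuppositions
        {
            // Shared sample input used by the (hypothetical) real sorting tests.
            internal static int[] SampleInput() { return new[] { 3, 1, 2 }; }

            // An "empty" test: it only checks the test author's assumption that the
            // sample really is unsorted. If someone later "fixes" the fixture to
            // { 1, 2, 3 }, the real tests would silently stop proving anything.
            [Test]
            public void SampleInputIsActuallyUnsorted()
            {
                var input = SampleInput();
                CollectionAssert.AreNotEqual(new[] { 1, 2, 3 }, input);
                CollectionAssert.AreEquivalent(new[] { 1, 2, 3 }, input);
            }
        }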

    Read the article

  • Automated unit testing, integration testing or acceptance testing

    - by bjarkef
    TDD and unit testing seem to be the big rave at the moment. But are they really that useful compared to other forms of automated testing? Intuitively I would guess that automated integration testing is way more useful than unit testing. In my experience most bugs seem to be in the interaction between modules, and not so much in the actual (usually limited) logic of each unit. Also, regressions often happened because of changing interfaces between modules (and changed pre- and post-conditions). Am I misunderstanding something, or why is unit testing getting so much focus compared to integration testing? Is it simply because it is assumed that integration testing is something you already have, and unit testing is the next thing we need to learn to apply as developers? Or maybe unit testing simply yields the highest gain compared to the complexity of automating it? What is your experience with automated unit testing, automated integration testing, and automated acceptance testing, and which has yielded the highest ROI in your experience? And why? If you had to pick just one form of testing to be automated on your next project, which would it be? Thanks in advance.

    Read the article

  • CppUnit for unit-testing executable files?

    - by hagubear
    I am not sure if anyone has done this. I am trying to do something that is, in general, uncommon, i.e. unit-testing executables (Windows) or ELFs (Linux). I know that CppUnit provides a good unit testing facility, but I have never used it for unit-testing (I used UnitTest++). I hear rumours that you can unit-test executables too. Does anyone have experience with this? A relevant post regarding the philosophy of it was here.
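
    One common approach, rather than linking the executable's code into a test binary, is to treat the program as a black box: launch it from a test, feed it arguments or input, and assert on its exit code and output. A hedged sketch follows -- in C# for brevity rather than CppUnit/UnitTest++, with an invented path and flag -- but the pattern translates directly:

        using System.Diagnostics;
        using NUnit.Framework;

        [TestFixture]
        public class MyToolExecutableTests
        {
            [Test]
            public void PrintsVersionAndExitsCleanly()
            {
                var startInfo = new ProcessStartInfo
                {
                    FileName = @"C:\build\mytool.exe",   // hypothetical binary under test
                    Arguments = "--version",             // hypothetical flag
                    RedirectStandardOutput = true,
                    UseShellExecute = false
                };

                using (var process = Process.Start(startInfo))
                {
                    string output = process.StandardOutput.ReadToEnd();
                    process.WaitForExit();

                    Assert.AreEqual(0, process.ExitCode);
                    StringAssert.StartsWith("mytool ", output);
                }
            }
        }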

    Read the article

  • Link between tests and user stories

    - by Sardathrion
    I have not seen these links explicitly stated in the Agile literature I have read, so I was wondering if this approach is correct. Let a story be defined as "In order to [RESULT], [ROLE] needs to [ACTION]". Then: RESULT generates system tests; ROLE generates acceptance tests; ACTION generates component and unit tests. The definitions are the ones used in xUnit Patterns, which to be fair are fairly standard. Is this a correct interpretation, or did I misunderstand something?

    Read the article

  • Quality of Code in unit tests?

    - by m3th0dman
    Is it worth spending time, when writing unit tests, to make sure the code written there is of good quality and very easy to read? When writing these kinds of tests I very often break the Law of Demeter, to write faster and avoid using so many variables. Technically, unit tests are not reused directly -- they are strictly bound to the code -- so I do not see any reason for spending much time on them; they only need to be functional.

    Read the article

  • Isolated Unit Tests and Fine Grained Failures

    - by Winston Ewert
    One of the reasons often given for writing unit tests which mock out all dependencies, and are thus completely isolated, is to ensure that when a bug exists, only the unit tests for that bug will fail. (Obviously, an integration test may fail as well.) That way you can readily determine where the bug is. But I don't understand why this is a useful property. If my code were undergoing spontaneous failures, I could see why it's useful to readily identify the failure point. But if I have a failing test, it's either because I just wrote the test or because I just modified the code under test. In either case, I already know which unit contains the bug. What is the use in ensuring that a test only fails due to bugs in the unit under test? I don't see how it gives me any more precision in identifying the bug than I already had.
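
    For concreteness, a small NUnit-style sketch of the isolation being debated (all names invented): the test below can only fail because of InvoiceCalculator itself, whereas an unisolated variant that constructed the real rate provider could also fail because of that collaborator or its wiring.

        using NUnit.Framework;

        public interface ITaxRates { decimal RateFor(string region); }

        public class InvoiceCalculator
        {
            private readonly ITaxRates rates;
            public InvoiceCalculator(ITaxRates rates) { this.rates = rates; }
            public decimal Total(decimal net, string region) { return net * (1 + rates.RateFor(region)); }
        }

        // Hand-rolled stub standing in for the real rate provider.
        public class FixedRates : ITaxRates
        {
            public decimal RateFor(string region) { return 0.20m; }
        }

        [TestFixture]
        public class InvoiceCalculatorTests
        {
            [Test]  // isolated: a failure here points at InvoiceCalculator only
            public void Total_AddsTheRateReturnedByTheCollaborator()
            {
                var calc = new InvoiceCalculator(new FixedRates());
                Assert.AreEqual(120m, calc.Total(100m, "anywhere"));
            }
        }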

    Read the article

  • Naming your unit tests

    - by kerry
    When you create a test for your class, what kind of naming convention do you use for the tests? How thorough are your tests? I have lately switched from the conventional camel case test names to lower case letters with underscores. I have found this increases the readability and causes me to write better tests. A simple utility class:

        public class ArrayUtils {
            public static <T> T[] gimmeASlice(T[] anArray, Integer start, Integer end) {
                // implementation (feeling lazy today)
            }
        }

    I have seen some people who would write a test like this:

        public class ArrayUtilsTest {
            @Test
            public void testGimmeASliceMethod() {
                // do some tests
            }
        }

    A more thorough and readable test would be:

        public class ArrayUtilsTest {
            @Test
            public void gimmeASlice_returns_appropriate_slice() {
                // ...
            }

            @Test
            public void gimmeASlice_throws_NullPointerException_when_passed_null() {
                // ...
            }

            @Test
            public void gimmeASlice_returns_end_of_array_when_slice_is_partly_out_of_bounds() {
                // ...
            }

            @Test
            public void gimmeASlice_returns_empty_array_when_slice_is_completely_out_of_bounds() {
                // ...
            }
        }

    Looking at this test, you have no doubt what the method is supposed to do. And, when one fails, you will know exactly what the issue is.

    Read the article

  • Are there any language agnostic unit testing frameworks?

    - by Bringer128
    I have always been skeptical of rewriting working code -- porting code is no exception to this. However, with the advent of TDD and automated testing it is much more reasonable to rewrite and refactor code. Does anyone know if there is a TDD tool that can be used for porting old code? Ideally you could do the following:

    1. Write up language-agnostic unit tests for the old code that pass (or fail if you find bugs!).
    2. Run the unit tests on your other code base, where they fail.
    3. Write code in your new language that passes the tests without looking at the old code.

    The alternative would be to split step 1 into "write up unit tests in language 1" and "port unit tests to language 2", which significantly increases the effort required and is difficult to justify if the old code base is going to stop being maintained after the port (that is, you don't get the benefit of continuous integration on this code base). EDIT: It's worth noting this question on StackOverflow.
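
    One pragmatic way to approximate language-agnostic tests, sketched below under assumptions of my own (the file name, its "input => expected" line format, and the LegacyParser stand-in are all invented): keep the test cases as plain data and write a thin runner per language, so that only the runner -- not the cases -- has to be ported.

        using System;
        using System.IO;
        using System.Linq;
        using NUnit.Framework;

        [TestFixture]
        public class PortedParserGoldenTests
        {
            // Each line of the shared vectors file reads "input => expected".
            // The same file can drive an equally thin runner in the new language.
            [Test]
            public void AllGoldenVectorsPass()
            {
                foreach (var line in File.ReadAllLines("parser_vectors.txt").Where(l => l.Contains(" => ")))
                {
                    var parts = line.Split(new[] { " => " }, StringSplitOptions.None);
                    Assert.AreEqual(parts[1], LegacyParser.Normalize(parts[0]),
                        "vector failed: " + line);
                }
            }
        }

        // Stand-in for the code under test; in practice this is the old (or ported) implementation.
        public static class LegacyParser
        {
            public static string Normalize(string input) { return input.Trim().ToLowerInvariant(); }
        }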

    Read the article

  • Should I demand unit-testing from programmers?

    - by Morten
    I work at a place where we buy a lot of IT projects. We are currently producing a standard for system requirements for the procurement of future projects. In that process, we are discussing whether or not we can demand automated unit testing from our suppliers. I firmly believe that proper automated unit testing is the only way to document the quality and stability of the code. Everyone else seems to think that unit testing is an optional method that concerns the supplier alone. Thus, we will make no demands for automated unit testing, continuous testing, coverage reports, inspections of unit tests or anything of the kind. I find this policy extremely frustrating. Am I totally out of line here? Please provide me with arguments for either opinion.

    Read the article

  • Do you enjoy 'unit testing'? [closed]

    - by jibin
    Possible Duplicate: How have you made unit testing more enjoyable? I mean, we are all developers and we love coding. I love learning new stuff (languages, frameworks, even new domains like mobile/tablet development). But testing? As a newbie to the corporate environment, I just can't digest it. (We follow a 'write-then-manually-test' pattern.) Is that even unit testing? Usually a single developer handles a module (from design to code and unit testing), so is it practical? Can somebody tell me how to make unit testing fun, or just how to do it properly? Do we try all possibilities manually -- say, a unit test for a webpage with a lot of JavaScript validations? PS: The projects are all web applications.

    Read the article

  • Term for unit testing that separates test logic from test result data

    - by mario
    So I'm not doing any unit testing yet. But I've had an idea to make it more appropriate for my field of use. Yet it's not clear if something like this exists, and if it does, what it would be called.

    Ordinary unit tests combine the test logic and the expected outcome. In essence the testing framework only checks booleans (did this match, did the expected result occur). To generalize, the test code itself references the audited functions and also spells out the expected result values, like so:

        unit::assert( test_me() == 17 )

    What I'm looking for is a separation of concerns. The test itself should only contain the tested logic. The outcome and result data should be handled by the unit testing or assertion framework. As an example:

        unit::probe( test_me() )

    Here the probe actually doubles as a collector on the first run, and afterwards as a verification method. The expected 17 is not mentioned in the test code, but stored or managed elsewhere.

    What is this scheme called? Or how would you call it? I hope I can find some actual implementations with the proper terminology. Obviously such a pattern is unfit for TDD; it's strictly for regression testing. Also obviously, it cannot be used for all cases. Only the simpler test subjects can be analyzed that way; for anything else the ordinary unit test setup and assertion steps are required. And yes, this could be accomplished manually by crafting a ResultWhateverObject, but that would still require hardwiring it to the test logic. Also keep in mind that I'm asking about scripting languages, not Java. I'm aware that the xUnit pattern originates there, and why it's hence as elaborate as it is.

    Btw, I've discovered one test execution framework which allows shortening simple test notations to:

        test_me(); // 17

    While the result data is thus no longer coded in (it's a comment), that's still not a complete separation and of course it would work only for scalar results.
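
    To make the probe/collector idea concrete, here is a hedged sketch of how the described record-then-verify behaviour could be implemented by hand. Probe, its storage file, and the key scheme are invented for illustration and are not an existing framework (and, contrary to the question's setting, the sketch is in C# rather than a scripting language, to match the other examples on this page).

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        // First run: records the observed value under a key (collector).
        // Later runs: compares against the recorded value (verification).
        // The expected "17" lives in probes.txt, not in the test code.
        public static class Probe
        {
            private const string Store = "probes.txt";

            public static void Check(string key, object actual)
            {
                var recorded = File.Exists(Store)
                    ? File.ReadAllLines(Store)
                          .Where(line => line.Contains("="))
                          .Select(line => line.Split(new[] { '=' }, 2))
                          .ToDictionary(parts => parts[0], parts => parts[1])
                    : new Dictionary<string, string>();

                string value = Convert.ToString(actual);

                if (!recorded.ContainsKey(key))
                    File.AppendAllText(Store, key + "=" + value + Environment.NewLine);
                else if (recorded[key] != value)
                    throw new Exception("Probe '" + key + "': expected " + recorded[key] + ", got " + value);
            }
        }

        // Usage inside a test body, mirroring unit::probe(test_me()):
        //     Probe.Check("test_me", TestMe());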

    Read the article

  • Unit Testing And Starting MongoDb Server

    - by azamsharp
    I am running some unit tests that persist documents into the MongoDB database. For these unit tests to succeed, the MongoDB server must be started. I do this by calling Process.Start("mongod.exe"). It works, but sometimes mongod takes a while to start, and the unit tests try to run before it is ready and FAIL. The unit tests fail and complain that the MongoDB server is not running. What should I do in such a situation?
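
    One common fix, sketched below on the assumption that mongod listens on the default port 27017 on localhost: after starting the process, poll the port until it accepts a TCP connection (with a timeout), and only let the tests run once it does, e.g. from the test fixture's setup method.

        using System;
        using System.Diagnostics;
        using System.Net.Sockets;
        using System.Threading;

        public static class MongoTestServer
        {
            // Starts mongod and blocks until it accepts connections, or throws on timeout.
            public static Process StartAndWait(TimeSpan timeout)
            {
                var process = Process.Start("mongod.exe");
                var deadline = DateTime.UtcNow + timeout;

                while (DateTime.UtcNow < deadline)
                {
                    try
                    {
                        using (var client = new TcpClient())
                        {
                            client.Connect("localhost", 27017);   // assumed default mongod port
                            return process;                       // server is accepting connections
                        }
                    }
                    catch (SocketException)
                    {
                        Thread.Sleep(250);                        // not ready yet; retry
                    }
                }

                throw new TimeoutException("mongod did not become ready within " + timeout);
            }
        }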

    Read the article

  • Development processes, the use of version control, and unit-testing

    - by ct01
    Preface: I've worked at quite a few "flat" organizations in my time. Most of the version control policy/process has been "only commit after it's been tested". We were constantly committing at each place to "trunk" (cvs/svn). The same was true with unit testing: it's always been a "we need to do this" mentality, but it never really materializes in a substantive form because there is no institutional knowledge base to do it -- no mentorship.

    Version control: the emphasis for version control management at one place was a very strict protocol for commit messages (format and content). The other places let employees just do "whatever". The branching, tagging, committing, rolling back, and merging aspects of things were always ill-defined and almost never used. This sort of leaves the version control system in the position of being a fancy file-storage mechanism with a metadata component that never really gets accessed or utilized. (The same was true for unit testing and committing code to the source tree.)

    Unit tests: there seems to be a prevailing "we must/should do this" mentality in most places I've worked. As a policy or standard operating procedure it never gets implemented, because there seems to be a very ill-defined understanding of what that means, what is going to be tested, and how to do it.

    Summary: most places I've been to think version control and unit testing are "important" because the trendy trade journals say they are, but if there's very little mentorship to use these tools or any real business policies, then the full power of version control and unit testing is never really expressed. So grunts, like myself, never really get a complete understanding of the point beyond "it's a good thing" and "we should do it".

    Question: I was wondering if there are blogs, books, white papers, or online journals about what one could call the business process, "standard operating procedures", or use cases for version control and unit testing? I want to know more than the trade journals tell me and get serious about doing these things.

    PS: @Henrik Hansen had a great comment about the lack of definition in the question. I'm not interested in a specific unit-testing/versioning product or methodology (like XP) -- my interest is more about workflow at the individual team/developer level than evangelism. This is more or less a by-product of the management situation I've operated under, rather than a lack of reading software engineering books or magazines about development processes. A lot of what I've seen/read is more marketing-oriented material than any specifically enumerated description of "well, this is how our shop operates".

    Read the article

  • How to organize integrity tests and code unit tests?

    - by karlthorwald
    I have several files of code that tests code (using a "unittest" class). Later I found it would be nice to also test database integrity -- things like keys having the correct format, and parent and child nodes pointing to each other correctly. I put this into a separate directory tree, and I use the same unittest class for the integrity tests.

    Now I wonder if it really makes sense to keep these separate. To test the integrity of data I often duplicate parts of the code that I use to test the code that handles the data. But it is not the same: the code tests use test databases (which get deleted after each test), while the integrity tests connect to the live data and analyze it. I want to call the integrity tests from cron and send an alarm if something goes wrong in the live database.

    How would you handle that? Are there standards for such a setup? What is your experience? My tendency is to put everything in the same file, which would result in the code tests also being executed by cron in the production environment.
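
    If both kinds of tests do end up in the same project, one way to keep cron from also running the code tests is to tag the integrity checks and filter at run time. The question's "unittest" class suggests Python, but to stay consistent with the other examples on this page here is the same idea as a hedged NUnit/C# sketch; the category name, fixtures, and console-runner invocation are one possible setup, not a standard.

        using NUnit.Framework;

        // Ordinary code test: runs on every build against a throwaway test database.
        [TestFixture]
        public class KeyFormatTests
        {
            [Test]
            public void NewlyCreatedKeysHaveTheExpectedFormat()
            {
                Assert.IsTrue(KeyFactory.Create().StartsWith("node-"));   // hypothetical code under test
            }
        }

        // Integrity check: same assertion style, but it talks to live data,
        // so it is tagged and excluded from the normal run.
        [TestFixture, Category("Integrity")]
        public class LiveDatabaseIntegrityTests
        {
            [Test]
            public void EveryChildNodePointsToAnExistingParent()
            {
                // connect to the live database and verify parent/child links here
                Assert.Pass();
            }
        }

        public static class KeyFactory
        {
            public static string Create() { return "node-" + System.Guid.NewGuid(); }
        }

        // Normal build:  nunit3-console Tests.dll --where "cat != Integrity"
        // From cron:     nunit3-console Tests.dll --where "cat == Integrity"  (alert on a non-zero exit code)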

    Read the article

  • Is this method of writing Unit Tests correct?

    - by aspdotnetuser
    I have created a small C# project to help me learn how to write good unit tests. I know that one important rule of unit testing is to test the smallest 'unit' of code possible, so that if it fails you know exactly what part of the code needs to be fixed. I need help with the following before I continue to implement more unit tests for the project: if I have a Car class, for example, whose various attributes are calculated when its constructor is called, would the two following tests be considered overkill? Should there instead be one test that checks all calculated attributes of the Car object?

        [Test]
        public void CarEngineCalculatedValue()
        {
            BusinessObjects.Car car = new BusinessObjects.Car();
            Assert.GreaterOrEqual(car.Engine, 1);
        }

        [Test]
        public void CarNameCalculatedValue()
        {
            BusinessObjects.Car car = new BusinessObjects.Car();
            Assert.IsNotNull(car.Name);
        }

    Should I have the above two test methods to test these things, or should I have one test method that first asserts the Car object has been created and then tests these things in the same test method?

    Read the article

  • Unit testing: explicit tests of variable state in dynamically typed languages

    - by kris welsh
    I have heard that a desirable quality of unit tests is that they test for each scenario independently. I realised whilst writing tests today that when you compare a variable with another value in a statement like:

        assertEquals("foo", otherObject.stringFoo);

    you are really testing three things:

    1. The variable you are testing exists and is within scope.
    2. The variable you are testing is of the expected type.
    3. The variable's value is what you expect it to be.

    Which to me raises the question of whether you should test for each of these explicitly, so that a test failure would occur on the specific line that tests for that particular problem:

        assertTrue(stringFoo);
        assertTrue(stringFoo.typeOf() == "String");
        assertEquals("foo", otherObject.stringFoo);

    For example, if the variable were an integer instead of a string, the test failure would be on line 2, which would give you more feedback on what went wrong. Should you test for this kind of thing explicitly, or am I overthinking this?

    Read the article

  • Are unit tests really used as documentation?

    - by stijn
    I cannot count the number of times I have read statements along the lines of "unit tests are a very important source of documentation of the code under test". I do not deny they are true. But personally I haven't found myself using them as documentation, ever. For the typical frameworks I use, the method declarations document their behaviour and that's all I need. And I assume the unit tests back up everything stated in that documentation, plus likely some more internal stuff, so on one side they duplicate the documentation while on the other they might add some more that is irrelevant. So the question is: when are unit tests used as documentation? When the comments do not cover everything? By developers extending the source? And what do they expose that can be useful and relevant that the documentation itself cannot?

    Read the article

  • Advancing Code Review and Unit Testing Practice

    - by Graviton
    As a team lead managing a group of developers with no experience of (and who see no need for) code review and unit testing, how can you advance code review and unit testing practice? How do you create a way for code review and unit testing to fit naturally into the developers' flow? One source of resistance in both areas is that "we are always tight on deadlines, so there is no time for code review and unit testing". Another source of resistance for code review is that we currently don't know how to do it. Should we review the code on every check-in, or review it on a specified date?

    Read the article
