Search Results

Search found 10595 results on 424 pages for 'ab testing'.


  • Dog-1 testing as a synonym for integration / system test

    - by SkeetJon
    In the company I'm working for, the phrase 'Dog-1' is used for the software testing scenario where the first piece of data passes through the complete system. Has anyone heard this phrase in a similar context? There's nothing on Google about it in a software development context, so I'm guessing it's an idiosyncrasy of my current environment. Is there a recognized phrase in the industry for this kind of test? I don't think it's simply a 'system test' / 'integration test', as 'Dog-1' refers specifically to the first item/unit through the system.

    Read the article

  • unit testing on ARM

    - by NomadAlien
    We are developing application-level code that runs on an ARM processor. The BSP (low-level code) is being delivered by a third party, so our code sits just on top of this abstraction layer (the code is written in C++). To do unit testing, I assume we will have to mock/stub out the BSP library (essentially abstracting out the HW), but what I'm not sure of is: if I write/run the unit tests on my PC, do I compile them with, for example, GCC? Normally we use the RealView compiler to compile our code for the ARM. Can I assume that if I compile and run the code with an x86 compiler and the unit tests pass, they will also pass when compiled with the RealView compiler? I'm not sure how much difference the compiler makes, and whether you can trust that if the x86-compiled code passes the unit tests, the RealView-compiled code is OK as well.
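
    For what it's worth, the usual host-side approach is link-time substitution: compile the application code with GCC for x86 and link it against a stub BSP instead of the real one. A minimal sketch (all BSP names here are invented for illustration; substitute your vendor's actual header and functions):

        // test_stubs/bsp_gpio_stub.cpp -- linked into the host test build only
        #include <map>

        static std::map<int, int> g_fakePins;  // pin state controlled by the test

        extern "C" int bsp_gpio_read(int pin)  // same signature as the (hypothetical) real BSP call
        {
            return g_fakePins[pin];            // return whatever the test arranged
        }

        void test_set_pin(int pin, int value)  // helper for tests to arrange "hardware" state
        {
            g_fakePins[pin] = value;
        }

    Note that green tests under GCC/x86 mainly validate the logic; they cannot catch RealView- or target-specific issues such as differing type sizes, alignment, or endianness, so a smaller set of on-target tests is still worthwhile.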

    Read the article

  • Testing controller logic that uses ISession directly

    - by Rippo
    I have just read this blog post from Jimmy Bogard and was drawn to this comment: "Where this falls down is when a component doesn't support a given layering/architecture. But even with NHibernate, I just use the ISession directly in the controller action these days. Why make things complicated?" I then commented on the post and asked this question: what options do you have for testing the controller logic if you do not mock out the NHibernate ISession? I am curious what options we have if we use the ISession directly in the controller.
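
    One hedged option, if the ISession stays in the controller, is to test against a real ISession backed by an in-memory SQLite database rather than a mock. A rough sketch using NHibernate 2.x-era configuration keys (the controller name and mappings are placeholders for your own):

        // Build a throwaway in-memory database for each test.
        var cfg = new NHibernate.Cfg.Configuration()
            .SetProperty("dialect", "NHibernate.Dialect.SQLiteDialect")
            .SetProperty("connection.driver_class", "NHibernate.Driver.SQLite20Driver")
            .SetProperty("connection.connection_string", "Data Source=:memory:;Version=3;New=True");
        // cfg.AddAssembly(...) or your fluent mappings go here.

        using (var session = cfg.BuildSessionFactory().OpenSession())
        {
            // Create the schema on the open in-memory connection.
            new NHibernate.Tool.hbm2ddl.SchemaExport(cfg)
                .Execute(false, true, false, session.Connection, null);

            var controller = new ProductsController(session);  // illustrative controller
            var result = controller.Index();
            // ...assert on the view model in result...
        }

    The trade-off is speed: these are really small integration tests rather than pure unit tests, but they exercise the real query behaviour that a mocked ISession would hide.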

    Read the article

  • Cross-Platform Automated Mobile Application UI Testing

    - by thetaspark
    My dissertation is about developing a tool for testing mobile applications through the GUI. The primary platform is Android, but it should also support BlackBerry, iOS, etc. Using Google I found some existing frameworks, e.g. MonkeyTalk. I am not so sure, but what I want to develop might be a mini MonkeyTalk with minimal functionality, focused on the GUI of the application(s) to be tested. My questions: Which framework(s) should I build on? I am good with Java. Can I use the xUnit family for this, and how? What should I be reading/studying? Any suggestions, links for tutorials, documentation, howtos, etc. would be very helpful. Thanks in advance.

    Read the article

  • Improve bad testing

    - by SetiSeeker
    We have a large team of developers and testers, with a ratio of one tester for every developer. We have full bug tracking and reporting systems in place, and test plans in place. For every change to the product, the testing team is involved in the design of the feature and included in the development process as much as possible. We build in small iterative blocks using Scrum methodology, and testers take part in every sprint, including the grooming sessions, etc. But with every release of the product, they miss even the most simple and obvious defects. How can we improve this?

    Read the article

  • Has anyone used Genetify for A/B testing?

    - by Joshak
    I'm working on a small project that will likely run on WordPress, and I always like to run some split testing to improve conversion rates for various goals. Typically, if it's a small site that I either don't have a budget for or want to keep as inexpensive as possible, I use Google Website Optimizer; if I do have a budget, I go with Visual Website Optimizer. Both are great and affordable, but for fun I was checking out alternatives and found Genetify, which is an open source project and has some neat features. In searching around I don't see many people talking about it, and wondered if anyone here has used it. If so, what do you think about it?

    Read the article

  • Testing complex compositions

    - by phlipsy
    I have a rather large collection of classes which check and mutate a given data structure. They can be composed via the composite pattern into arbitrarily complex tree-like structures, and the final product contains a lot of these composed structures. My question is: how can I test those? While it is easy to test every single unit of these compositions, testing the whole compositions is expensive in the following sense:

    - Testing the correct layout of the composition tree results in a huge number of test cases
    - Changes to the compositions result in a very laborious review of every single test case

    What is the general guideline here?
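
    One hedged idea for the layout problem specifically: have each composite describe its own structure, and pin the whole tree with a single "snapshot" assertion, so a layout change means updating one expected string instead of reviewing many cases. A sketch in Java (all names invented for illustration):

        @Test
        public void compositionLayoutIsStable() {
            // Hypothetical factory that assembles the production tree.
            Node root = Pipelines.buildDefaultPipeline();

            // describe() is assumed to render the tree deterministically.
            String layout = root.describe();

            assertEquals("Root(CheckA, Mutate(CheckB, CheckC))", layout);
        }

    This only covers the tree's shape; the behaviour of each node still relies on the per-unit tests you already have.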

    Read the article

  • What is the best way to test Grails applications using IDEA?

    - by egervari
    I am seriously having a very non-pleasant time testing with Grails. I will describe my experience, and I'd like to know if there's a better way.

    The first problem is that Grails doesn't give immediate feedback to the developer when .save() fails inside an integration test. So let's say you have a domain class with 12 fields, and 1 of them is violating a constraint you don't know about when you create the instance... it just doesn't save. Naturally, the test code afterward is going to fail. This is most troublesome because the thing under test is probably fine; the real risk and pain is the setup code for the test itself. I've tried to develop the habit of using .save(failOnError: true) to avoid this problem, but that's not something that can be easily enforced by everyone working on the project... and it's kind of bloaty. It'd be nice to turn this on automatically for code running as part of a unit test.

    Integration tests run slow. I cannot understand how 1 integration test that saves 1 object takes 15-20 seconds to run. With some careful test planning, I've been able to get 1000 tests talking to an actual database, doing dbunit dumps after every test, to run in about the same time! This is dumb.

    It is hard to run all the unit tests and not the integration tests in IDEA. Integration tests are a massive pain. IDEA actually shows a GREEN BAR when integration tests fail. The output given by Grails indicates that something failed, but it doesn't say what it was. It says to look in the test reports... which forces the developer to launch their file system to hunt the stupid HTML file down. Then once you've got the HTML file and clicked through to the failing test, it'll tell you a line number. Since these reports are not in the IDE, you can't just click the stack trace to go to that line of code... you've got to go back and find it yourself. ARGGH!

    Maybe people put up with this, but I refuse. Testing should be fast and painless, or people won't do it. Please help. What is the solution? Rails instead of Grails? Something else entirely? I love the Grails framework, but they never demo their testing for a reason. They have a snazzy framework, but the testing is painful. After having used Scala for the last 1.5 months, and being totally spoiled by ScalaTest... I can't go back to this.
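
    On the first complaint only, a hedged note: Grails 1.2 and later can make failOnError the default everywhere, so nobody has to remember to pass it:

        // grails-app/conf/Config.groovy
        grails.gorm.failOnError = true   // every save() now throws on validation failure

    This makes broken test fixtures fail loudly at the save() line instead of producing a confusing failure later in the test.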

    Read the article

  • Zend Framework: How to start PHPUnit testing Forms?

    - by Andrew
    I am having trouble getting my filters/validators to work correctly on my form, so I want to create a unit test to verify that the data I am submitting to my form is being filtered and validated correctly. I started by auto-generating a PHPUnit test in Zend Studio, which gives me this:

        <?php
        require_once 'PHPUnit/Framework/TestCase.php';

        /**
         * Form_Event test case.
         */
        class Form_EventTest extends PHPUnit_Framework_TestCase
        {
            /**
             * @var Form_Event
             */
            private $Form_Event;

            /**
             * Prepares the environment before running a test.
             */
            protected function setUp()
            {
                parent::setUp();
                // TODO Auto-generated Form_EventTest::setUp()
                $this->Form_Event = new Form_Event(/* parameters */);
            }

            /**
             * Cleans up the environment after running a test.
             */
            protected function tearDown()
            {
                // TODO Auto-generated Form_EventTest::tearDown()
                $this->Form_Event = null;
                parent::tearDown();
            }

            /**
             * Constructs the test case.
             */
            public function __construct()
            {
                // TODO Auto-generated constructor
            }

            /**
             * Tests Form_Event->init()
             */
            public function testInit()
            {
                // TODO Auto-generated Form_EventTest->testInit()
                $this->markTestIncomplete("init test not implemented");
                $this->Form_Event->init(/* parameters */);
            }

            /**
             * Tests Form_Event->getFormattedMessages()
             */
            public function testGetFormattedMessages()
            {
                // TODO Auto-generated Form_EventTest->testGetFormattedMessages()
                $this->markTestIncomplete("getFormattedMessages test not implemented");
                $this->Form_Event->getFormattedMessages(/* parameters */);
            }
        }

    So then I open up a terminal, navigate to the directory, and try to run the test:

        $ cd my_app/tests/unit/application/forms
        $ phpunit EventTest.php
        Fatal error: Class 'Form_Event' not found in .../tests/unit/application/forms/EventTest.php on line 19

    So then I add a require_once at the top to include my Form class and try it again. Now it says it can't find another class. I include that one and try it again. Then it says it can't find another class, and another class, and so on. I have all of these dependencies on all these other Zend_Form classes. What should I do? How should I go about testing my form to make sure my validators and filters are being attached correctly, and that it's doing what I expect it to do? Or am I thinking about this the wrong way?
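
    A hedged suggestion for the missing-class chase: rather than adding require_once lines one by one, register Zend Framework 1's autoloaders in a PHPUnit bootstrap file, so Form_Event and every Zend_Form dependency load on demand. A sketch (the paths are assumptions about your layout):

        <?php
        // tests/bootstrap.php
        set_include_path(implode(PATH_SEPARATOR, array(
            realpath(dirname(__FILE__) . '/../library'),   // adjust to where Zend/ lives
            get_include_path(),
        )));

        require_once 'Zend/Loader/Autoloader.php';
        Zend_Loader_Autoloader::getInstance();   // resolves Zend_* classes on demand

        // Map resource prefixes such as Form_ to application/forms, the same
        // way a ZF1 application bootstrap does.
        require_once 'Zend/Application/Module/Autoloader.php';
        new Zend_Application_Module_Autoloader(array(
            'namespace' => '',
            'basePath'  => realpath(dirname(__FILE__) . '/../application'),
        ));

    Then run PHPUnit with it (relative path per the layout above):

        $ phpunit --bootstrap ../../../bootstrap.php EventTest.php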

    Read the article

  • Problem with Unit testing of ASP.NET project (NullReferenceException when running the test)

    - by Alex
    Hi, I'm trying to create a bunch of MS Visual Studio unit tests for my n-tiered web app, but for some reason I can't run those tests and I get the following error: "Object reference not set to an instance of an object". What I'm trying to do is test my data access layer, where I use a LINQ data context class to execute a certain function and return a result. However, during debugging I found out that all the tests fail as soon as they get to the LINQ data context class; it has something to do with the connection string, but I can't figure out what the problem is. The debugging of tests fails here (the second line):

        public EICDataClassesDataContext() :
            base(global::System.Configuration.ConfigurationManager
                .ConnectionStrings["EICDatabaseConnectionString"].ConnectionString, mappingSource)
        {
            OnCreated();
        }

    And my test is as follows:

        [TestMethod()]
        public void OnGetCustomerIDTest()
        {
            FrontLineStaffDataAccess target = new FrontLineStaffDataAccess(); // TODO: Initialize to an appropriate value
            string regNo = "jonh";  // TODO: Initialize to an appropriate value
            int expected = 10;      // TODO: Initialize to an appropriate value
            int actual;
            actual = target.OnGetCustomerID(regNo);
            Assert.AreEqual(expected, actual);
        }

    The method which I call from the DAL is:

        public int OnGetCustomerID(string regNo)
        {
            using (LINQDataAccess.EICDataClassesDataContext dataContext = new LINQDataAccess.EICDataClassesDataContext())
            {
                IEnumerable<LINQDataAccess.GetCustomerIDResult> sProcCustomerIDResult = dataContext.GetCustomerID(regNo);
                int customerID = sProcCustomerIDResult.First().CustomerID;
                return customerID;
            }
        }

    So basically everything fails after it reaches the first line of the DAL method, when it tries to instantiate the LINQ data context class... I've spent around 10 hours trying to troubleshoot the problem but no result... I would really appreciate any help!

    UPDATE: Finally I've fixed this! I don't know why, but for some reason in the app.config file the connection to my database was as follows:

        AttachDbFilename=|DataDirectory|\EICDatabase.MDF

    So what I did is change the path: instead of |DataDirectory| I put the actual path where my MDF file sits, i.e.

        C:\Users\1\Documents\Visual Studio 2008\Projects\EICWebSystem\EICWebSystem\App_Data\EICDatabase.mdf

    After I had done that, it worked! But it's still not entirely clear what the problem was... probably an incorrect path to the database? The web.config of my ASP.NET project contains the |DataDirectory|\EICDatabase.MDF path, though.
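
    For the record, a hedged alternative to hard-coding the path: |DataDirectory| is just an AppDomain property, and test code can point it wherever the MDF actually lives, which is the usual fix for this symptom under MSTest:

        [TestInitialize]
        public void SetUp()
        {
            // Make |DataDirectory| resolve during test runs (path is illustrative).
            AppDomain.CurrentDomain.SetData("DataDirectory",
                @"C:\Users\1\Documents\Visual Studio 2008\Projects\EICWebSystem\EICWebSystem\App_Data");
        }

    That keeps app.config portable across machines instead of baking in one user's absolute path.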

    Read the article

  • Testing Finite State Machines

    - by Pondidum
    I have inherited a large and fairly complex state machine at work. It has 31 possible states to be in, and the following inputs:

    - Enum: Current State (0-30)
    - Enum: Source (currently only 2 entries)
    - Boolean: Request
    - Boolean: Type
    - Enum: Status (3 states)
    - Enum: Handling (3 states)
    - Boolean: Completed

    The 31 states are really needed (big business process), and breaking it into separate state machines doesn't seem feasible - each state is distinct. I have written tests for one set of inputs (the most common set), with one test per input (all inputs constant except the State input):

        [Subject("Application Process States")]
        public class When_state_is_meeting2Requested : AppProcessBase
        {
            Establish context = () =>
            {
                // Setup....
            };

            Because of = () => process.Load(jas, vac);

            It Current_node_should_be_meeting2Requested = () => process.CurrentNode.ShouldBeOfType<meetingRequestedNode>();
            It Can_move_to_clientDeclined = () => Check(process, process.clientDeclined);
            It Can_move_to_meeting1Arranged = () => Check(process, process.meeting1Arranged);
            It Can_move_to_meeting2Arranged = () => Check(process, process.meeting2Arranged);
            It Can_move_to_Reject = () => Check(process, process.Reject);
            It Cannot_move_to_any_other_state = () => AllOthersFalse(process);
        }

    As no one is entirely sure what the output should be for each state and set of inputs, I have been starting to write tests for it; however, by my calculation I would need to write 4,320 (30 * 2 * 2 * 2 * 3 * 3 * 2) tests. Does anyone have any suggestions on how I should go about testing this?
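
    A hedged sketch of one way to keep the 4,320 combinations manageable: enumerate the input tuples in code and check each against a single transition-table oracle, so that authoring expectations becomes editing a table rather than writing thousands of test classes (all type and member names below are invented):

        // Generate every input combination; the counts mirror the 30*2*2*2*3*3*2 calculation.
        var cases =
            from state     in Enum.GetValues(typeof(State)).Cast<State>()
            from source    in Enum.GetValues(typeof(Source)).Cast<Source>()
            from request   in new[] { false, true }
            from type      in new[] { false, true }
            from status    in Enum.GetValues(typeof(Status)).Cast<Status>()
            from handling  in Enum.GetValues(typeof(Handling)).Cast<Handling>()
            from completed in new[] { false, true }
            select new Inputs(state, source, request, type, status, handling, completed);

        foreach (var inputs in cases)
        {
            var expected = TransitionTable.Lookup(inputs);   // the oracle still has to be authored
            Assert.AreEqual(expected, Machine.Next(inputs), inputs.ToString());
        }

    The hard part remains deciding the expected outputs; pairwise-reduction tools can shrink the set further if exhaustive coverage proves impractical.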

    Read the article

  • Collaboration using github and testing the code

    - by wyred
    The procedure in my team is that we all commit our code to the same development branch. We have a test server that runs updated code from this branch so that we can test our code on the servers. The problem is that if we want to merge the development branch into the master branch in order to publish new features to our production servers, some features that may not be ready will be applied to the production servers. So we're considering having each developer work on a feature/topic branch, where each of them works on their own features, and when a feature is ready, merge it into the development branch for testing, and then into the master branch. However, because our test server only pulls changes from the development branch, the developers are unable to test their features there. While this is not a huge issue, as they can test on their local machines, the only problem I foresee is if we want to test callbacks from third-party services like SendGrid (where you specify a URL for SendGrid to update you on the status of emails sent out). How should we handle this problem? Note: we're not advanced Git users; we use the GitHub app for Mac OS X and Windows to commit our work.
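
    For reference, the topic-branch flow under consideration boils down to a few commands (branch names are illustrative), which the GitHub app wraps in its UI:

        git checkout -b feature/sendgrid-callbacks development   # branch off development
        # ...commit work on the feature...
        git checkout development
        git merge feature/sendgrid-callbacks                     # feature is ready: hand it to the test server
        git push origin development

    One hedged variation for the callback problem: let the test server's deployment script check out any named branch on demand, so a topic branch can be pointed at it temporarily when a third-party callback needs a public URL.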

    Read the article

  • User Acceptance Testing Defect Classification when developing for an outside client

    - by DannyC
    I am involved in a large development project in which we (a very small start-up) are developing for an outside client (a very large company). We recently received their first output from UAT testing of a fairly small iteration, which listed 12 'defects', triaged into three categories: low, medium and high. The issue we have is whether everything in this list should be recorded as a 'defect' - some of the issues they found would be better described as refinements, or even nice-to-haves, and some we think are not defects at all. The client's QA lead says that it is standard for them to label every issue they identify as a defect; however, we are a bit uncomfortable about this. While the relationship is good, we don't see a huge problem with it, but we are concerned that, if the relationship suffers in the future, these lists of 'defects' could prove costly for us. We don't want to come across as being difficult, or to take things too personally here, and we are happy to make all of the changes identified, but we are a bit concerned, especially as there is an uneven power balance at play in our relationship. Are we being paranoid? Or could we be setting ourselves up for problems down the line by agreeing to this classification?

    Read the article

  • Customizing the NUnit GUI for data-driven testing

    - by rwong
    My test project consists of a set of input data files which are fed into a piece of legacy third-party software. Since the input data files for this software are difficult to construct (not something that can be done intentionally), I am not going to add new input data files. Each input data file will be subject to a set of "test functions". Some of the test functions can be invoked independently; others represent the stages of a sequential operation - if an earlier stage fails, the subsequent stages do not need to be executed. I have experimented with NUnit's parametrized test cases (TestCaseAttribute and TestCaseSourceAttribute), passing in the list of data files as test cases. I am generally satisfied with the ability to select the input data for testing. However, I would like to see if it is possible to customize the GUI's tree structure, so that the "test functions" become the children of the "input data". For example:

        File #1
            CheckFileTypeTest
            GetFileTopLevelStructureTest
            CompleteProcessTest
            StageOneTest
            StageTwoTest
            StageThreeTest
        File #2
            CheckFileTypeTest
            GetFileTopLevelStructureTest
            CompleteProcessTest
            StageOneTest
            StageTwoTest
            StageThreeTest

    This would be useful for identifying the stage that failed during the processing of a particular input file. Are there any tips and tricks that would enable the new tree layout? Do I need to customize NUnit to get this layout?
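
    A hedged workaround that avoids patching NUnit itself, assuming NUnit 2.5+: generate one named test case per (file, function) pair with TestCaseData.SetName, so the names carry the grouping even though the tree's top level is still the fixture (file discovery below is invented for illustration):

        public static IEnumerable<TestCaseData> FileCases()
        {
            foreach (string file in Directory.GetFiles("TestData", "*.dat"))
            {
                yield return new TestCaseData(file)
                    .SetName(Path.GetFileName(file) + ".CheckFileTypeTest");
            }
        }

        [TestCaseSource("FileCases")]
        public void CheckFileTypeTest(string inputFile)
        {
            // ...run the check against inputFile...
        }

    Sequential stages could share state keyed by file name and call Assert.Ignore() when an earlier stage has already failed, though that is a workaround rather than real GUI customization.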

    Read the article

  • Testing of visualization projects

    - by paxRoman
    We develop small to large visualization projects for different tasks and industries, and sometimes, after rewriting one a couple of times, we hit walls because we discover that we need to add a lot of code to support new requirements. We have now established a design process that seems to work well (at least we have reduced the development time for each new project quite a bit), but we're still left scratching our heads over this question: what exactly should we test when testing visualizations?

    - Whether everything that we want to explore is on the screen (bounded visualizations)?
    - Whether the data is valid (that's one of the nice things about visualizations: you can spot errors in your datasets)?
    - Usability?
    - User interaction?
    - Code quality?

    I can tell you for sure that a simple check of code quality is certainly not enough! Is there a classic paper or book about how to test visualizations? Also, do you happen to know about classic design patterns for visualizations (except the obvious ones like Pub-Sub)?

    Read the article

  • Do unit tests sometimes break encapsulation?

    - by user1288851
    I very often hear the following: "If you want to test private methods, you'd better put them in another class and expose them." While sometimes that's the case and we have a hidden concept inside our class, other times you end up with classes that have the same attributes (or, worse, every attribute of one class becomes an argument on a method in the other class) and that expose functionality which is, in fact, implementation detail. Especially in TDD, when you refactor public methods out of a previously tested class, the new class is now part of your interface but has no tests of its own (since you refactored it, and it is an implementation detail). Now, I may be missing an obvious better answer, but if my answer is "correct", that means that sometimes writing unit tests can break encapsulation and divide the same responsibility into different classes. A simple example would be testing a setter method when a getter is not actually needed for anything in the real code. When answering, please don't provide simple answers to the specific cases I may have described; rather, try to explain the generic case and the theoretical approach. And this is not language specific. Thanks in advance.

    EDIT: The answer given by Matthew Flynn was really insightful, but it didn't quite answer the question. Although he made the fair point that you either don't test private methods or extract them because they really are a separate concern and responsibility (or at least that was what I understood from his answer), I think there are situations where unit testing private methods is useful. My primary example is when you have a class that has one responsibility but whose output (or input) is just too complex. For example, a hashing function. There's no good way to break a hashing function apart while maintaining cohesion and encapsulation, but testing it can be really tough, since you would need to calculate the expected hashes by hand (you can't use code calculation to test code calculation!) and test multiple cases where the hash changes. In that sense (and this may be a question worthy of its own topic) I think private method testing is the best way to handle it. Now, I'm not sure whether I should ask another question or ask it here, but is there a better way to test such complex output (input)? OBS: Please, if you think I should ask another question on that topic, leave a comment. :)
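
    On the hashing example specifically, a hedged aside: the usual escape hatch is known-answer testing, where the expected values come from published test vectors or from cases small enough to verify by hand, so no code-calculates-code circularity is involved. A sketch in Python using the toy djb2 hash (both expected values below can be checked with pencil and paper):

        def djb2(data: bytes) -> int:
            """Toy hash used only to illustrate known-answer testing."""
            h = 5381
            for byte in data:
                h = (h * 33 + byte) & 0xFFFFFFFF  # classic djb2 step, kept to 32 bits
            return h

        # Known-answer tests: hand-checkable expectations, no getters required.
        assert djb2(b"") == 5381
        assert djb2(b"a") == 5381 * 33 + 97   # == 177670

    Real hash functions are usually tested the same way, just with vectors taken from their specifications rather than computed by hand.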

    Read the article

  • Should adapters or wrappers be unit tested?

    - by m3th0dman
    Suppose that I have a class that implements some logic:

        public class MyLogicImpl implements MyLogic {
            public void myLogicMethod() {
                // my logic here
            }
        }

    and somewhere else a test class:

        public class MyLogicImplTest {
            @Test
            public void testMyLogicMethod() {
                // test my logic
            }
        }

    I also have:

        @WebService
        public class MyWebServices {
            @Inject
            private MyLogic myLogic;

            @WebMethod
            public void myLogicWebMethod() {
                myLogic.myLogicMethod();
            }
        }

    Should there be a unit test for myLogicWebMethod, or should the testing for it be handled in integration testing?
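
    If myLogicWebMethod stays this thin, one hedged option is a single delegation test with a mocking library (Mockito is assumed here, and so is a way to inject the mock, e.g. a constructor or setter alongside @Inject):

        import static org.mockito.Mockito.*;
        import org.junit.Test;

        public class MyWebServicesTest {
            @Test
            public void webMethodDelegatesToLogic() {
                MyLogic logic = mock(MyLogic.class);
                MyWebServices ws = new MyWebServices(logic); // assumes injectable construction

                ws.myLogicWebMethod();

                verify(logic).myLogicMethod();  // the adapter's only responsibility
            }
        }

    Anything beyond delegation (serialization, transport, annotations taking effect) is better left to an integration test, since a unit test cannot see it.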

    Read the article

  • Unit Testing with NUnit and Moles Redux

    - by João Angelo
    Almost two years ago, when Moles was still being packaged alongside Pex, I wrote a post on how to run NUnit tests supporting moled types. A lot has changed since then: Moles is now distributed independently of Pex, but it maintains support for integration with NUnit and other testing frameworks. For NUnit, the support is provided by an addin class library (Microsoft.Moles.NUnit.dll) that you need to reference in your test project so that you can decorate your tests with the MoledAttribute. The addin DLL must also be placed in the addins folder inside the NUnit installation directory.

    There is, however, a downside. Since Moles and NUnit follow different release cycles and the addin DLL must be built against a specific NUnit version, you may find that the release included with the latest version of Moles does not work with your version of NUnit. Fortunately, the code for building the NUnit addin is supplied in the archive (moles.samples.zip) found in the Documentation folder inside the Moles installation directory. By rebuilding the addin against your specific version of NUnit, you can support any version.

    Also note that in Moles 0.94.51023.0 the addin code did not support the use of TestCaseAttribute in your moled tests. If you need this support, you only have to make a couple of changes. Change the ITestDecorator.Decorate method in the MolesAddin class:

        Test ITestDecorator.Decorate(Test test, MemberInfo member)
        {
            SafeDebug.AssumeNotNull(test, "test");
            SafeDebug.AssumeNotNull(member, "member");

            bool isTestFixture = true;
            isTestFixture &= test.IsSuite;
            isTestFixture &= test.FixtureType != null;

            bool hasMoledAttribute = true;
            hasMoledAttribute &= !SafeArray.IsNullOrEmpty(
                member.GetCustomAttributes(typeof(MoledAttribute), false));

            if (!isTestFixture && hasMoledAttribute)
            {
                return new MoledTest(test);
            }

            return test;
        }

    Change the Tests property in the MoledTest class:

        public override System.Collections.IList Tests
        {
            get
            {
                if (this.test.Tests == null)
                {
                    return null;
                }

                var moled = new List<Test>(this.test.Tests.Count);
                foreach (var test in this.test.Tests)
                {
                    moled.Add(new MoledTest((Test)test));
                }

                return moled;
            }
        }

    Disclaimer: I only tested this implementation against the NUnit 2.5.10.11092 version. Finally, you just need to run the NUnit console runner through the Moles runner. A quick example follows:

        moles.runner.exe [Tests.dll] /r:nunit-console.exe /x86 /args:[NUnitArgument1] /args:[NUnitArgument2]

    Read the article

  • Is doing A/B tests using site redirection a bad practice?

    - by user40358
    I'm developing hotel websites here in Brazil. When a site is done, we run an A/B test against the old version to measure conversion and show the hotel owner how good our site is. Because I cannot put the old site inside the new one as a subresource (newone.com/old), I'm currently doing those A/B tests as follows:

    1) I create 2 Google Analytics accounts, one for each site (old and new);

    2) I put the GA tags in the old website's pages (changing its possibly existent GA ID to the just-created one);

    3) I add a JavaScript snippet that redirects the user to the old website (on a different URL and different domain) with 50% probability.

    I then compare all the metrics, events and goals between those two GA accounts. How bad is this? How might Google interpret the fact that visitors are sometimes redirected and sometimes not? The experiment usually runs for 2 weeks. Is there a better alternative for doing this?
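
    For reference, a hedged sketch of step 3 with one common refinement: persisting the coin flip in a cookie, so a returning visitor always sees the same variant and the two GA accounts don't both record the same person (names and domain are illustrative):

        // Sticky 50/50 split; runs on the new site before anything renders.
        var match = document.cookie.match(/ab_variant=(\w+)/);
        var variant = match ? match[1] : (Math.random() < 0.5 ? 'old' : 'new');
        document.cookie = 'ab_variant=' + variant + '; path=/; max-age=' + 60 * 60 * 24 * 14;

        if (variant === 'old') {
            location.replace('http://old-site.example.com' + location.pathname);
        }

    Using location.replace rather than setting location.href keeps the redirecting page out of the back-button history, which makes the redirect less visible to the visitor.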

    Read the article

  • What are some concise and comprehensive introductory guides to unit testing for a self-taught programmer? [closed]

    - by Superbest
    I don't have much formal training in programming and have learned most things by looking up solutions on the internet to practical problems. There are some areas which I think would be valuable to learn, but which end up being both difficult to learn and easy to avoid learning for a self-taught programmer. Unit testing is one of them. Specifically, I am interested in tests in and for C#/.NET applications using Microsoft.VisualStudio.TestTools in Visual Studio 2010 and/or 2012, but I really want a good introduction to the principles, so language and IDE shouldn't matter much. At this time I'm interested in relatively trivial tests for small or medium sized programs (development time of weeks or months, mostly just myself developing). I don't necessarily intend to do test-driven development (I am aware that some say unit testing alone is supposed to be for developing features in TDD, and not an assurance that there are no bugs in the software, but unit testing is often the only kind of testing for which I have resources). I have found this tutorial which I feel gave me a decent idea of what unit tests and TDD look like, but in trying to apply these ideas to my own projects, I often get confused by questions I can't answer, such as:

    - What parts of my application and what sorts of things aren't necessarily worth testing?
    - How fine-grained should my tests be? Should they test every method and property separately, or work with a larger scope?
    - What is a good naming convention for test methods? (Since apparently the name of the method is the only way to tell at a glance from the test results table what works in my program and what doesn't.)
    - Is it bad to have many asserts in one test method? Since apparently VS2012 reports only that "an Assert.IsTrue failed within method MyTestMethod", if MyTestMethod has 10 Assert.IsTrue statements, it is irritating to figure out why a test is failing.
    - If a lot of the functionality deals with writing and reading data to/from disk in a not-exactly-trivial fashion, how do I test that? If I provide a bunch of files as input by placing them in the program's directory, do I have to copy those files to the test project's bin/Debug folder now?
    - If my program works with a large body of data and execution takes minutes or more, should my tests use all of the real data, a subset of it, or simulated data? If the latter, how do I decide on the subset or how to simulate?
    - Closely related to the previous point: if a class's main operation happens in a state the program reaches only after some involved operations (say, a class makes calculations on data derived from a few thousand lines of code analyzing some raw data), how do I test just that class without inevitably testing all the other code that brings it to that state along with it?
    - In general, what kind of approach should I use for test initialization? (Hopefully that is the correct term; I mean preparing classes for testing by filling them in with appropriate data.)
    - How do I deal with private members? Do I just suck it up and assume that "not public = shouldn't be tested"? I have seen people suggest using private accessors and reflection, but these feel clumsy and unsuited for regular use. Are these even good ideas?
    - Is there anything like design patterns concerning testing specifically?

    I guess the main themes in what I'd like to learn more about are (1) the overarching principles that should be followed (or at least considered) in every testing effort and (2) popular rules of thumb for writing tests. For example, at one point I recall hearing from someone that if a method is longer than 200 lines, it should be refactored - not a universally correct rule, but it has been quite helpful, since I'd otherwise happily put hundreds of lines in single methods and then wonder why my code is so hard to read. Similarly, I've found ReSharper's suggestions on member naming style and other things quite helpful in keeping my codebases sane. I see many resources, both online and in print, that talk about testing in the context of large applications (years of work, 10s of people or more). However, because I've never worked on such large projects, this context is very unfamiliar to me and makes the material difficult to follow and relate to my real-world problems. Speaking of software development in general, advice given with the assumptions of large projects isn't always straightforward to apply to my own, smaller endeavors.

    Summary: So my question is: what are some resources to learn about unit testing, for a hobbyist, self-taught programmer without much formal training? Ideally, I'm looking for a short and simple "bible of unit testing" which I can commit to memory, and then apply systematically by repeatedly asking myself "is this test following the bible of testing closely enough?" and amending discrepancies if it doesn't.

    Read the article

  • Testing sample code in python modules

    - by Andrew Walker
    I'm in the process of writing a Python module that includes some samples. These samples aren't unit tests, and they are too long and complex to be doctests. I'm interested in best practices for automatically checking that these samples run. My current project layout is pretty standard, except that there is an extra top-level makefile that has build, install, unittest, coverage and profile targets, which delegate responsibility to setup.py and nose as required.

        projectname/
            Makefile
            README
            setup.py
            samples/
                foo-sample
                foobar-sample
            projectname/
                __init__.py
                foo.py
                bar.py
            tests/
                test-foo.py
                test-bar.py

    I've considered adding a sampletest module, or adding nose.tools.istest decorators to the entry-point functions of the samples, but for a small number of samples these solutions sound a bit ugly. This question is similar to http://stackoverflow.com/questions/301365/automatically-unit-test-example-code, but I assume Python best practices will differ from C#'s.
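
    One hedged pattern that fits this layout: a nose test generator that runs each sample as a subprocess and asserts a clean exit, assuming the samples are runnable scripts that exit non-zero on failure:

        # tests/test_samples.py -- a minimal sketch
        import glob
        import os.path
        import subprocess
        import sys

        def test_samples():
            # nose collects one generated test per sample script.
            for sample in glob.glob(os.path.join('samples', '*-sample')):
                yield check_sample, sample

        def check_sample(path):
            assert subprocess.call([sys.executable, path]) == 0

    This keeps the samples themselves untouched (no istest decorators), at the cost of only checking that they run, not what they print.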

    Read the article

  • What are some good books on software testing/quality?

    - by mjh2007
    I'm looking for a good book on software quality. It would be helpful if the book covered:

    - The software development process (requirements, design, coding, testing, maintenance)
    - Testing roles (who performs each step in the process)
    - Testing methods (white box and black box)
    - Testing levels (unit testing, integration testing, etc.)
    - Testing process (Agile, waterfall, spiral)
    - Testing tools (simulators, fixtures, and reporting software)
    - Testing of embedded systems

    The goal here is to find an easy-to-read book that summarizes the best practices for ensuring software quality in an embedded system. It seems most texts cover the testing of application software, where it is simpler to generate automated test cases or run a debugger. A book that provided solutions for improving quality in a system where the tests must be performed manually, and therefore minimized, would be ideal.

    Read the article

  • Learning about tests for junior programmers

    - by RHaguiuda
    I'm not sure if it's okay to ask this on Stack Overflow. I've been reading a lot about tests, unit tests, test frameworks, mocks and so on, but as a junior programmer I don't know anything about tests, not even where to start! Can anyone explain tests to young programmers: how they're run, where and what to test, and what unit testing, integration testing and automated tests are? How much should one test? And more importantly: how much testing is enough? I believe this would be very helpful. If possible, indicate a few books about these subjects too. Thanks

    Read the article

  • Testing complex entities

    - by Carlos
    I've got a C# form with various controls on it. The form controls an ongoing process, and there are many, many aspects that need to be right for the program to run correctly. Each part can be unit tested (for instance, loading some coefficients, drawing some diagnostics), but I often run into problems that are best described with an example: "If I click here, then here, then change this, then re-open the form, then click here, it crashes or produces an error." I've tried my best to use common code organisation ideas (inheritance, DRY, separation of concerns), but there never seems to be a way to test every single path, and inevitably a form with several controls will have a huge number of ways to execute. What can I read (preferably online) that addresses this kind of issue, and is there a (non-generic) term for it? This isn't a specific problem I'm having, but one that creeps up on me, especially with WinForms.

    Read the article

  • How to get started with testing (jMock)

    - by London
    Hello, I'm trying to learn how to write tests. I'm also learning Java, and I was told I should learn/use/practice jMock. I've found some articles online that help to a certain extent, like:

        http://www.theserverside.com/news/1365050/Using-JMock-in-Test-Driven-Development
        http://jeantessier.com/SoftwareEngineering/Mocking.html#jMock

    Most articles I found were about test-driven development: write the tests first, then write code to make the tests pass. I'm not looking for that at the moment; I'm trying to write tests for already existing code with jMock. The official documentation is vague, to say the least, and just too hard for me. Does anybody know a better way to learn this? Good books/links/tutorials would help me a lot. Thank you.

    EDIT - a more concrete question: following the second article above, I tried to mock this simple class:

        import java.util.Map;

        public class Cache {
            private Map<Integer, String> underlyingStorage;

            public Cache(Map<Integer, String> underlyingStorage) {
                this.underlyingStorage = underlyingStorage;
            }

            public String get(int key) {
                return underlyingStorage.get(key);
            }

            public void add(int key, String value) {
                underlyingStorage.put(key, value);
            }

            public void remove(int key) {
                underlyingStorage.remove(key);
            }

            public int size() {
                return underlyingStorage.size();
            }

            public void clear() {
                underlyingStorage.clear();
            }
        }

    Here is how I tried to create a test/mock:

        public class CacheTest extends TestCase {
            private Mockery context;
            private Map mockMap;
            private Cache cache;

            @Override
            @Before
            public void setUp() {
                context = new Mockery() {{
                    setImposteriser(ClassImposteriser.INSTANCE);
                }};
                mockMap = context.mock(Map.class);
                cache = new Cache(mockMap);
            }

            public void testCache() {
                context.checking(new Expectations() {{
                    atLeast(1).of(mockMap).size();
                    will(returnValue(int.class));
                }});
            }
        }

    It passes the test but basically does nothing. What I wanted was to create a map, check its size, and work through some variations to get a grip on this. I understand better through examples: what else could I test here? Any other exercises would help me a lot. Thanks.
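
    One hedged next step, building on the sample above: assert an actual interaction rather than a permissive atLeast(1) stub. The test below uses the jMock 2 API (recent releases spell the cardinality oneOf; older ones use one) to pin down that Cache.add delegates to the underlying map:

        public void testAddDelegatesToUnderlyingMap() {
            context.checking(new Expectations() {{
                oneOf(mockMap).put(42, "answer");  // expect exactly one put with these arguments
            }});

            cache.add(42, "answer");
            context.assertIsSatisfied();           // fails the test if put() was never called
        }

    The difference from testCache is that an unmet expectation here actually fails, so the test documents and enforces the delegation behaviour.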

    Read the article
