Search Results

Search found 13653 results on 547 pages for 'integration testing'.

  • How to RowTest with MSTest?

    - by dr. evil
    I know that MSTest doesn't support RowTest and similar tests. What do MSTest users do? How is it possible to live without RowTest support? I've seen the data-driven test features, but they sound like too much overhead. Is there any third-party patch or tool which allows me to write RowTest-style tests in MSTest?
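
    A minimal sketch of the closest built-in equivalent: newer MSTest versions (MSTest V2) add [DataTestMethod]/[DataRow] for row-style tests. The Calculator class here is a hypothetical example, not from the question.

        [DataTestMethod]
        [DataRow(1, 2, 3)]
        [DataRow(2, 3, 5)]
        public void Add_ReturnsSum(int a, int b, int expected)
        {
            // Runs once per DataRow, much like NUnit's RowTest.
            Assert.AreEqual(expected, new Calculator().Add(a, b));
        }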

  • Increasing coverage with try-except-finally and a context-manager

    - by Daan Timmer
    This is the flow that I have in my program:

        277: try:
        278:     with open(r"c:\afile.txt", "w") as aFile:
        ...:         pass # write data
        329: except IOError as ex:
        ...:     print ex
        332: finally:
        333:     if os.path.exists(r"c:\afile.txt"):
        334:         shutil.copy(r"c:\afile.txt", r"c:\dest.txt")

    I've got all paths covered except for the branch from line 278 to line 333; there I only have the normal happy flow. I stubbed __builtin__.open to raise IOError when open is called with said file name. But how do I get from 278 to 333? Is this even possible? Additional information: using coverage.py 3.4 (only 3.5 is listed, and we can't currently upgrade to 3.5).
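
    A sketch of one way to exercise that arc, assuming the flow above is wrapped in a callable such as the hypothetical write_and_copy(): an exception that is not an IOError bypasses the except clause, so control jumps from the with statement straight to finally.

        import unittest
        from mock import patch  # the standalone "mock" package on Python 2

        class CopyFlowTest(unittest.TestCase):
            def test_non_ioerror_goes_straight_to_finally(self):
                # open() raising something other than IOError skips the
                # except clause, covering the 278 -> 333 arc.
                with patch('__builtin__.open', side_effect=RuntimeError):
                    self.assertRaises(RuntimeError, write_and_copy)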

  • How do I unit test a finalizer?

    - by GraemeF
    I have the following class, which is a decorator for an IDisposable object (I have omitted the stuff it adds) and which itself implements IDisposable using a common pattern:

        public class DisposableDecorator : IDisposable
        {
            private readonly IDisposable _innerDisposable;

            public DisposableDecorator(IDisposable innerDisposable)
            {
                _innerDisposable = innerDisposable;
            }

            #region IDisposable Members

            public void Dispose()
            {
                Dispose(true);
                GC.SuppressFinalize(this);
            }

            #endregion

            ~DisposableDecorator()
            {
                Dispose(false);
            }

            protected virtual void Dispose(bool disposing)
            {
                if (disposing)
                    _innerDisposable.Dispose();
            }
        }

    I can easily test that innerDisposable is disposed when Dispose() is called:

        [Test]
        public void Dispose__DisposesInnerDisposable()
        {
            var mockInnerDisposable = new Mock<IDisposable>();
            new DisposableDecorator(mockInnerDisposable.Object).Dispose();
            mockInnerDisposable.Verify(x => x.Dispose());
        }

    But how do I write a test to make sure innerDisposable does not get disposed by the finalizer? I want to write something like this, but it fails, presumably because the finalizer hasn't been called by the GC thread:

        [Test]
        public void Finalizer__DoesNotDisposeInnerDisposable()
        {
            var mockInnerDisposable = new Mock<IDisposable>();
            new DisposableDecorator(mockInnerDisposable.Object);
            GC.Collect();
            mockInnerDisposable.Verify(x => x.Dispose(), Times.Never());
        }
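
    A sketch of a possible fix, not a guaranteed one: GC.WaitForPendingFinalizers() blocks until the finalizer thread has drained its queue, and moving the allocation into a separate non-inlined method helps ensure no live local keeps the decorator reachable (finalizer timing is still JIT/GC dependent).

        [Test]
        public void Finalizer__DoesNotDisposeInnerDisposable()
        {
            var mockInnerDisposable = new Mock<IDisposable>();
            CreateAndAbandon(mockInnerDisposable.Object);

            GC.Collect();
            GC.WaitForPendingFinalizers();   // block until ~DisposableDecorator has run

            mockInnerDisposable.Verify(x => x.Dispose(), Times.Never());
        }

        [MethodImpl(MethodImplOptions.NoInlining)]  // System.Runtime.CompilerServices
        private static void CreateAndAbandon(IDisposable inner)
        {
            new DisposableDecorator(inner);  // unreachable as soon as we return
        }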

  • Xcode 3.2 does not mark unit test assert failures in the editor

    - by Cliff
    I've been off in Java land for about a month, and now, upon returning to Xcode, I feel lost. I upgraded first to 3.1.2 and then recently to 3.2, and also got a new Mac with Snow Leopard, so I'm not exactly sure when the problem surfaced. I just know that I used to get little red bubbles in my unit tests next to the failing asserts, and that no longer seems to happen. Is there a way to restore this? I've been trying to use Apple's own SenTestingKit framework instead of Google Toolbox for Mac like I used to. Should I revert to Google Toolbox? Does anyone have an answer?

  • Embedded systems code with good unit tests?

    - by rmk
    I am looking at approaches to unit testing embedded systems code written in C. At the same time, I am looking for a good unit test framework to use; the framework should have a reasonably small number of dependencies. Are there any great open-source products that have good unit tests?
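
    For scale, the pattern that small C frameworks such as MinUnit, Unity, and Check build on fits in a few lines of plain C. A minimal, dependency-free sketch (add() is a placeholder for real code under test):

        #include <stdio.h>

        static int tests_run = 0, tests_failed = 0;

        #define ASSERT_EQ_INT(expected, actual)                                 \
            do {                                                                \
                tests_run++;                                                    \
                if ((expected) != (actual)) {                                   \
                    tests_failed++;                                             \
                    printf("FAIL %s:%d: expected %d, got %d\n",                 \
                           __FILE__, __LINE__, (int)(expected), (int)(actual)); \
                }                                                               \
            } while (0)

        static int add(int a, int b) { return a + b; }  /* code under test */

        int main(void)
        {
            ASSERT_EQ_INT(3, add(1, 2));
            ASSERT_EQ_INT(0, add(-1, 1));
            printf("%d run, %d failed\n", tests_run, tests_failed);
            return tests_failed != 0;  /* nonzero exit on failure */
        }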

  • How to not pass around the container when using IoC in Winforms

    - by L2Type
    I'm new to the world of IoC and am having a problem implementing it in a WinForms application. I have an extremely basic WinForms application that uses MVC: one controller that does all the work, plus a working dialog (with its own controller). I load all my classes into my IoC container in program.cs and create the main form controller using the container. But this is where I am having problems: I only want to create the working-dialog controller when it's used, and inside a using statement. At first I passed in the container, but I've read this is bad practice; moreover, the container is static and I want to unit test this class. So how do you create classes in a unit-test-friendly way without passing in the container? I was considering the abstract factory pattern, but that alone would solve my problem without using the IoC container at all. I'm not using any famous framework; I borrowed a basic one from this blog post: http://www.kenegozi.com/Blog/2008/01/17/its-my-turn-to-build-an-ioc-container-in-15-minutes-and-33-lines.aspx How do I do this with IoC? Is this the wrong use for IoC?
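
    A sketch of the usual answer: register a factory abstraction in the container at the composition root, so only program.cs ever touches the container and the controller stays mockable. All names here (IWorkingDialogController and so on) are illustrative, not from the question.

        public interface IWorkingDialogController : IDisposable
        {
            void ShowWorking();
        }

        public interface IWorkingDialogControllerFactory
        {
            IWorkingDialogController Create();
        }

        public class MainFormController
        {
            private readonly IWorkingDialogControllerFactory _dialogFactory;

            // The factory is injected (by the container, in program.cs);
            // tests hand in a fake factory instead.
            public MainFormController(IWorkingDialogControllerFactory dialogFactory)
            {
                _dialogFactory = dialogFactory;
            }

            public void DoLongRunningWork()
            {
                using (var dialog = _dialogFactory.Create())
                {
                    dialog.ShowWorking();
                    // ... do the actual work ...
                }
            }
        }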

  • How to test custom template tags in Django?

    - by Mark Lavin
    I'm adding a set of template tags to a Django application and I'm not sure how to test them. I've used them in my templates and they seem to be working, but I was looking for something more formal. The main logic is done in the models/model managers and has been tested. The tags simply retrieve data and store it in a context variable, such as:

        {% views_for_object widget as views %}
        """ Retrieves the number of views and stores them in a context variable. """
        # or
        {% most_viewed_for_model main.model_name as viewed_models %}
        """ Retrieves the ViewTrackers for the most viewed instances of the given model. """

    So my question is: do you typically test your template tags, and if you do, how do you do it?
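
    A sketch of one common approach: render a small template from a string inside a TestCase and assert on the output. The tag library name (view_tags), the WidgetFactory helper, and the expected "0" are assumptions for illustration.

        from django.template import Context, Template
        from django.test import TestCase

        class ViewsForObjectTagTest(TestCase):
            def test_sets_context_variable(self):
                widget = WidgetFactory()  # hypothetical fixture helper
                t = Template(
                    "{% load view_tags %}"
                    "{% views_for_object widget as views %}{{ views }}"
                )
                rendered = t.render(Context({"widget": widget}))
                self.assertEqual(rendered, "0")  # no views tracked yet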

  • abstract test case using python unittest

    - by gruszczy
    Is it possible to create an abstract TestCase that has some test_* methods, where the TestCase itself won't be run but those methods will be used only in subclasses? I think I am going to have one abstract TestCase in my test suite, and it will be subclassed for a few different implementations of a single interface. This is why all the test methods are the same; only one internal method changes. How can I do this in an elegant way?
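
    A sketch of the usual trick: keep the shared tests in a mixin that does not itself inherit from unittest.TestCase, so the runner never collects it; only the concrete subclasses mix TestCase in. Names are illustrative.

        import unittest

        class InterfaceTests(object):  # not a TestCase, so never run directly
            def make_implementation(self):
                raise NotImplementedError

            def test_roundtrip(self):
                impl = self.make_implementation()
                self.assertEqual(impl.decode(impl.encode("x")), "x")

        class JsonImplementationTest(InterfaceTests, unittest.TestCase):
            def make_implementation(self):
                return JsonImplementation()  # hypothetical implementation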

  • How to test a project with multiple python versions in a sequential way?

    - by ecolell
    I am developing a Python adapter to interact with a 3rd-party website that has no JSON or XML API (http://www.class.noaa.gov/). I have a problem when Travis CI runs multiple Python tests (the Travis CI build matrix) concurrently. The project is on GitHub at ecolell/noaaclass and the .travis.yml file is:

        language: python
        python:
          - "2.6"
          - "2.7"
          - "3.2"
          - "3.3"
        install:
          - "make deploy"
        script: "make test-coverage-travis-ci" #nosetests
        after_success:
          - "make test-coveralls"

    Specifically, I have a problem when at least two Python versions are running their unit tests at the same time, because they use the same account on the website. Is there any option to tell the build matrix to execute each Python version sequentially? Or maybe there is a better way to do this?
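
    As far as I know the build matrix has no sequential mode, so a sketch of one workaround: collapse the matrix to a single job and let tox run the interpreters one after another (the tox envlist is an assumption):

        language: python
        python:
          - "2.7"
        install:
          - pip install tox
        script:
          - tox  # with envlist = py26,py27,py32,py33 in tox.ini, envs run sequentially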

  • Loading SQL dump before running Django tests

    - by knutin
    I have a fairly complex Django project which makes it hard/impossible to use fixtures for loading data. What I would like to do is load a database dump from the production database server after all tables have been created by the test runner and before the actual tests start running. I've tried various "magic" in MyTestCase.setUp(), but with no luck. Any suggestions would be most welcome. Thanks.
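
    A sketch of one approach, assuming a reasonably recent Django (DiscoverRunner) and a Postgres dump; the database name, dump path, and psql invocation are illustrative. Point the TEST_RUNNER setting at a subclass that loads the dump right after the test database is created:

        import subprocess

        from django.test.runner import DiscoverRunner

        class DumpLoadingRunner(DiscoverRunner):
            def setup_databases(self, **kwargs):
                config = super(DumpLoadingRunner, self).setup_databases(**kwargs)
                # Tables now exist; load the production dump before tests run.
                subprocess.check_call(
                    "psql test_mydb < /path/to/production_dump.sql", shell=True)
                return config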

  • Passing custom Python objects to nosetests

    - by Rob
    I am attempting to re-organize our test libraries for automation, and nose seems really promising. My question is: what is the best strategy for passing Python objects into nose tests? Our tests are organized in a testlib with a bunch of modules that exercise different types of request operations. Something like this:

        testlib
         \- testmoda
         \- testmodb
         \- testmodc

    In some cases a test module (e.g. testmoda) is nothing but test_something(), test_something2() functions, while in other cases we have a TestModB class in testmodb with test_anotherthing1(), test_anotherthing2() functions. The cool thing is that nose easily finds both. Most of those test functions are request-factory stuff that can easily share a single connection to our server farm. Thus we do a lot of test_something1(cnn), TestModB.test_anotherthing2(cnn), etc. Currently we don't use nose; instead we have a hodge-podge of homegrown driver scripts with hard-coded lists of tests to execute. Each of those driver scripts creates its own connection object. Maintaining those scripts and the connection minutiae is painful. I'd like to take free advantage of nose's beautiful discovery functionality, passing in a connection object of my choosing. Thanks in advance! Rob P.S. The connection objects are not pickle-able. :(
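
    A sketch of one nose-friendly pattern: package-level fixtures. nose runs setup_package()/teardown_package() from a package's __init__.py once around all of its modules, so the whole testlib can share one connection through a module global instead of a parameter. Names here (connect_to_farm, ping) are illustrative.

        # testlib/__init__.py
        cnn = None

        def setup_package():
            global cnn
            cnn = connect_to_farm()  # runs once, before any test module

        def teardown_package():
            cnn.close()

        # testlib/testmoda.py -- import the module, not the name, so you
        # see the value assigned in setup_package():
        import testlib

        def test_something1():
            assert testlib.cnn.ping()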

  • When should I stub out a type by manually creating a "stub" version, rather than using a mocking framework?

    - by Ben Aston
    Are there any circumstances where it is favourable to manually create a stub type, as opposed to using a mocking framework (such as Rhino Mocks) at the point of test? We take both approaches in our projects. My gut feeling, when I look at the long list of stub versions of objects, is that it adds maintenance overhead and moves the implementation of the stub away from the point of test.
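
    The trade-off in miniature, as a sketch (ILogger is an illustrative interface; the Rhino Mocks calls are its 3.5-era AAA syntax):

        // Two lines at the point of test:
        var log = MockRepository.GenerateStub<ILogger>();
        log.Stub(x => x.IsEnabled).Return(true);

        // ...versus a hand-rolled stub that lives (and must be maintained)
        // in the test project, away from the tests that use it:
        public class StubLogger : ILogger
        {
            public bool IsEnabled { get { return true; } }
            public void Write(string message) { }
        }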

  • eclEmma - full code coverage on class header?

    - by Fork
    Hi, I have a class that starts with:

        public class GeneralID implements WritableComparable<GeneralID> {
            ...
        }

    And another that is:

        public class LineValuesMapper<KI, VI, KO, VO>
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            ...
        }

    All methods in these classes are covered, but not their headers. The header of both classes gets painted yellow by EclEmma. Is there anything I can do to fully cover the class headers?
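
    A sketch of one likely explanation and fix, offered as an assumption rather than a certainty: the compiler attributes the implicit default constructor to the class declaration line, so a marker there often just means no test ever instantiates the class directly. Constructing both classes in a test usually turns the line green:

        @Test
        public void instantiation_coversImplicitConstructors() {
            new GeneralID();  // exercises the compiler-generated constructor
            new LineValuesMapper<LongWritable, Text, Text, IntWritable>();
        }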

  • Using Moq callbacks correctly according to AAA

    - by Hadi Eskandari
    I've created a unit test that tests interactions on my ViewModel class in a Silverlight application. To be able to do this, I'm mocking the service interface injected into the ViewModel, using the Moq framework. To verify that the bound object in the ViewModel is converted properly, I've used a callback:

        [Test]
        public void SaveProposal_Will_Map_Proposal_To_WebService_Parameter()
        {
            var vm = CreateNewCampaignViewModel();
            var proposal = CreateNewProposal(1, "New Proposal");

            Services.Setup(x => x.SaveProposalAsync(It.IsAny<saveProposalParam>()))
                    .Callback((saveProposalParam p) =>
                    {
                        Assert.That(p.plainProposal, Is.Not.Null);
                        Assert.That(p.plainProposal.POrderItem.orderItemId, Is.EqualTo(1));
                        Assert.That(p.plainProposal.POrderItem.orderName, Is.EqualTo("New Proposal"));
                    });

            proposal.State = ObjectStates.Added;
            vm.CurrentProposal = proposal;
            vm.Save();
        }

    It is working fine but, as you may have noticed, with this mechanism the Assert and Act parts of the unit test have switched places (Assert comes before Act). Is there a better way to do this while preserving the correct AAA order?
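
    A sketch of the usual rearrangement, using only names from the question: let the callback merely capture the parameter, and move the assertions after the act.

        saveProposalParam captured = null;
        Services.Setup(x => x.SaveProposalAsync(It.IsAny<saveProposalParam>()))
                .Callback((saveProposalParam p) => captured = p);

        proposal.State = ObjectStates.Added;
        vm.CurrentProposal = proposal;
        vm.Save();                                          // Act

        Assert.That(captured, Is.Not.Null);                 // Assert
        Assert.That(captured.plainProposal.POrderItem.orderItemId, Is.EqualTo(1));
        Assert.That(captured.plainProposal.POrderItem.orderName, Is.EqualTo("New Proposal"));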

  • Getting HTTP 406 when trying to test facebooker application with cucumber

    - by Waseem
    I am trying to test Facebook API calls with cucumber. Here is the code:

        # app/controller/facebook_users_controller.rb
        class FacebookUsersController < ApplicationController
          def create
            fb_user = facebook_session.user
            user = User.new(:facebook_uid => fb_user.uid,
                            :facebook_session_key => facebook_session.session_key)

            respond_to do |format|
              if user.save
                format.json { render :json => { :status => 'ok' }.to_json }
              end
            end
          end
        end

        # features/steps/facebook_connect_step.rb
        Given /^I am a facebook connected user$/ do
          mock_session = Facebooker::MockSession.create
          post('/facebook_user.json')
          puts response.code
        end

    When I run the above step definition with cucumber, I get a response code of 406 instead of 200. This happens in the cucumber test environment only, and not in the browser (development/production).
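
    A sketch of one thing worth checking, not a confirmed fix: Rails answers 406 Not Acceptable when the request format doesn't match any respond_to block, and the test driver may not negotiate the format the way a browser does. Making the accepted format explicit in the step often helps:

        Given /^I am a facebook connected user$/ do
          mock_session = Facebooker::MockSession.create
          post('/facebook_user.json', nil, { 'HTTP_ACCEPT' => 'application/json' })
          puts response.code
        end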

  • Google Toolbox For Mac with Core Data on iPhone results in error

    - by JaanusSiim
    I have set up my project to use Google Toolbox for Mac as described on the official wiki, and everything is working as expected. For Core Data usage I have created a 'database' class that, in the final application, uses SQLite storage (based on Xcode template code). For unit tests I have created a separate init method for the 'database' that uses in-memory storage (the storage URL is [NSURL URLWithString:@"memory://store"] and the type is NSInMemoryStoreType). Without adding my model file (*.xcdatamodel) to the unit test target, the test fails in the expected place with the message:

        executeFetchRequest:error: A fetch request must have an entity.

    If I add the model file to the test target, then the test executes as expected (the Core Data part looks OK), but after the tests run I get:

        RunIPhoneUnitTest.sh: line 123: 9487 Segmentation fault "$TARGET_BUILD_DIR/$EXECUTABLE_PATH" -RegisterForSystemEvents
        Command /bin/sh failed with exit code 139

    This problem does not look directly related to Core Data, but it only happens if the model file is added to the target. Any pointers on resolving this issue would be appreciated!

  • Nielsen's usability scale

    - by Banderdash
    Just wondering if anyone out there knows of a standard survey (preferably based on Jakob Nielsen's work on usability) that web admins can administer to test groups for usability? I could just make up my own, but I feel there has got to be some solid research out there on the sort of judgments and tasks I should be asking about. For example, given the task "ask the user to find the profile page", do I (A) present them with a standard Likert scale after each question, or (B) present them with the Likert scale after all the questions? Then, what should that Likert scale be? I know Nielsen's usability judgments are based on learnability, efficiency of use, memorability, error rate, and satisfaction, but I can only imagine designing a Likert scale that would effectively measure satisfaction... how am I supposed to ask a user to rank the memorability of a site after one use on a 1-5 scale? Surely someone has devised a good way to pose the question?

  • How do I test an image alt value using capybara?

    - by stayce
    I'm trying to define a step to test the alt text of an image, using capybara and the CSS selectors. I wrote one for input values based on the README examples:

        Then /^I should see a value of "([^\"]*)" within the "([^\"]*)" input$/ do |input_value, input_id|
          element_value = locate("input##{input_id}").value
          element_value.should == input_value
        end

    But I can't figure this one out. Something like:

        Then /^I should see the alttext "([^\"]*)"$/ do |alt_text|
          element_value = locate("img[alt]").value
        end

    Anyone know how I can locate the alt text value?
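
    A sketch of one way to do it: treat alt as an attribute to match on rather than a value to read, e.g. with Capybara's XPath matcher (alternatively, locate the img and read its [:alt] attribute):

        Then /^I should see the alt text "([^\"]*)"$/ do |alt_text|
          page.should have_xpath("//img[@alt='#{alt_text}']")
        end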

  • Mocking inter-method dependencies

    - by Zecrates
    I've recently started using mock objects in my tests, but I'm still very inexperienced with them and unsure how to use them in some cases. At the moment I'm struggling with how to mock inter-method dependencies (calling method A has an effect on the results of method B), and whether such a thing should even be mocked (in the sense of using a mocking framework) at all. Take, for example, a Java Iterator. It is easy enough to mock the next() call to return the correct values, but how do I mock hasNext(), which depends on how many times next() has been called? Currently I'm using a List's Iterator, as I could find no way to properly mock one. Does Martin Fowler's distinction between mocks and stubs come into play here? Should I rather write my own IteratorMock? Also consider the following example: the method under test calls mockObject.setX() and later on mockObject.getX(). Is there any way I can create such a mock (without writing my own) which will allow the returned value of getX() to depend on what was passed to setX()?
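
    A sketch of how a framework can cover the Iterator case, using Mockito's consecutive stubbing (the question doesn't name a framework, so Mockito here is an assumption). For the setX()/getX() coupling, a small hand-rolled fake is often the cleaner choice, which is exactly Fowler's mock-versus-stub distinction.

        Iterator<String> it = mock(Iterator.class);
        when(it.hasNext()).thenReturn(true, true, false);  // consistent with two next() calls
        when(it.next()).thenReturn("a", "b");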

  • "missing value where TRUE/FALSE needed" error in loop but not in one-off test

    - by vincent hay
    I am new to R and I have a problem with a test in a loop that I want to code. With a data frame (tabetest) like the one hereafter:

            Date 25179M103
        1  14977   77.7309
        2  14978   77.2567
        3  14979   77.7507

    this works:

        > if (tabetest[3,"Date"] - tabetest[1,"Date"] > 1) { print("ok") }
        [1] "ok"

    But:

        > j = 1
        > position = 1
        > price = tabetest
        > for (i in 1:nrow(tabetest)-position) {
            if (tabetest[i+position,"Date"] - tabetest[position,"Date"] > 20) {
              price[i+position,j] = price[i+position,j]/price[position,j] - 1
            }
            position = position+1
          }

    returns an error. R says that there is a missing value where TRUE/FALSE is required in:

        if (tabetest[i + position, "Date"] - tabetest[position, "Date"] >

    I have spent quite some time on that error but still don't understand where it comes from. Thanks for your help, Vincent
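
    A sketch of the likely cause, offered as a diagnosis rather than a certainty: `:` binds tighter than `-`, and `position` is also incremented inside the loop, so `i + position` eventually walks past the last row and the comparison becomes NA:

        1:nrow(tabetest) - position    # parsed as (1:nrow(tabetest)) - position -> 0 1 2
        1:(nrow(tabetest) - position)  # probably what was meant             -> 1 2

        # With i and position both growing, i + position reaches beyond
        # nrow(tabetest); indexing there yields NA, and
        if (NA > 20) print("boom")     # Error: missing value where TRUE/FALSE needed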

  • Zend_Test_PHPUnit_ControllerTestCase: Test view parameters and not rendered output

    - by erenon
    Hi, I'm using Zend_Test_PHPUnit_ControllerTestCase to test my controllers. This class provides various ways to test the rendered output, but I don't want to get my view scripts involved: I'd like to test my view's variables. Is there a way to get access to the controller's view object? Here is an example of what I'm trying to do:

        <?php
        class Controller extends Zend_Controller_Action
        {
            public function indexAction()
            {
                $this->view->foo = 'bar';
            }
        }

        class ControllerTest extends Zend_Test_PHPUnit_ControllerTestCase
        {
            public function testShowCallsServiceFind()
            {
                $this->dispatch('/controller');

                // doesn't work, there is no such method:
                $this->assertViewVar('foo', 'bar');

                // doesn't work either:
                $this->assertEquals('bar', $this->getView()->foo);
            }
        }
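
    A sketch of one possible workaround: after dispatch, pull the view instance back out of the ViewRenderer action helper and assert on its properties directly.

        <?php
        // inside the test method, after $this->dispatch('/controller'):
        $viewRenderer = Zend_Controller_Action_HelperBroker::getExistingHelper('ViewRenderer');
        $this->assertEquals('bar', $viewRenderer->view->foo);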
