Search Results

Search found 13748 results on 550 pages for 'split testing'.

  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl5.8, with $Test::More::VERSION being '0.80'), which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. I am also trying to avoid using no_plan - it's generally a bad idea not to catch when your unit test dies prematurely. What is the most idiomatic way of running a configurable number of tests, assuming neither no_plan nor done_testing() is used? Details: My unit tests usually take the form of: use Test::More; my @test_set = ( [ "Test #1", $param1, $param2, ... ] ,[ "Test #2", $param1, $param2, ... ] # ,... ); foreach my $test (@test_set) { run_test($test); } sub run_test { # $expected_tests += count_tests($test); ok(test1($test)) || diag("Test1 failed"); # ... } The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are executed before @test_set is known. My current approach makes @test_set global and defines it in the BEGIN {} block, as follows: use Test::More; BEGIN { our @test_set = (); # Same set of tests as above my $expected_tests = 0; foreach my $test (@test_set) { $expected_tests += count_tests($test); } plan tests => $expected_tests; } our @test_set; # Must re-declare, since the first "our" was scoped to BEGIN :( foreach my $test (@test_set) { run_test($test); } # Same sub run_test {} # Same I feel this can be done more idiomatically, but I am not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - once in BEGIN {} and once after it.

    Read the article

  • How to test flash.message in a Grails webflow?

    - by callie16
    I'm using webflows in Grails and I'm currently writing tests for them. Inside one flow I have something that throws an error, so I set a message in the flash scope before redirecting: ... if (some_condition) { flash.message = "my error message" return error() } ... Now, I know that when I display this in the GSP page, I access the flash message as <g:if test="${message}">... instead of the usual <g:if test="${flash.message}">... So anyway, I'm writing my test and I'm wondering how to test the content of the message. Usually, for normal controller actions, I follow what's written here. However, since this is a webflow, I can't seem to find the message even if I check controller.flash.message / controller.params.message / controller.message. I've also tried looking at the flow scope... Any ideas on how to see the message? Thanks a bunch!

    Read the article

  • From a shell script open a new tab in a specific instance of Firefox.

    - by toc777
    Hi everyone, I have a shell script that creates Firefox profiles and then uses them to open multiple instances of Firefox simultaneously. The problem is: how can I open a URL in a particular instance of Firefox? I have tried firefox -CREATEPROFILE test firefox -P 'test' -no-remote firefox -P test -url www.google.ie But the last command, which tries to open the URL using the test profile, does not work; it always opens in the default profile. Is there any way to tell Firefox from the command line to open a URL using a particular profile? Thanks.

    Read the article

  • Running unittest with typical test directory structure.

    - by Major Major
    The very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory: new_project/ antigravity/ antigravity.py test/ test_antigravity.py setup.py etc. - for example, see this Python project howto. My question is simply: what's the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can't just run python test_antigravity.py from the test directory, as its import antigravity will fail because the module is not on the path. I know I could modify PYTHONPATH and use other search-path-related tricks, but I can't believe that's the simplest way - it's fine if you're the developer, but it's not realistic to expect your users to do it if they just want to check that the tests are passing. The other alternative is just to copy the test file into the other directory, but that seems a bit dumb and misses the point of having them in a separate directory to start with. So, if you had just downloaded the source to my new project, how would you run the unit tests? I'd prefer an answer that would let me say to my users: "To run the unit tests, do X." (A sketch of one common approach follows below.)

    Read the article
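
    One common answer, sketched minimally here (assuming Python 2.7 or later, where unittest ships with test discovery, and that test/ is importable, e.g. it contains an __init__.py): a small runner script at the project root, so users never have to touch PYTHONPATH. The file name run_tests.py is only an illustration.

        # run_tests.py -- placed at the project root, next to setup.py
        import sys
        import unittest

        if __name__ == "__main__":
            # Discover test_*.py files under ./test; top_level_dir="." puts the
            # project root on sys.path, so "import antigravity" works inside the tests.
            suite = unittest.defaultTestLoader.discover("test", top_level_dir=".")
            result = unittest.TextTestRunner(verbosity=2).run(suite)
            sys.exit(0 if result.wasSuccessful() else 1)

    Users then run python run_tests.py, or equivalently python -m unittest discover -s test -t . from the project root.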

  • How do I structure my tests with Python unittest module?

    - by persepolis
    I'm trying to build a test framework for automated web testing with Selenium and unittest, and I want to structure my tests into distinct scripts. So I've organised it as follows: base.py - This will contain, for now, the base Selenium test case class for setting up a session. import unittest from selenium import webdriver # Base Selenium Test class from which all test cases inherit. class BaseSeleniumTest(unittest.TestCase): def setUp(self): self.browser = webdriver.Firefox() def tearDown(self): self.browser.close() main.py - I want this to be the overall test suite from which all the individual tests are run. import unittest import test_example if __name__ == "__main__": SeTestSuite = test_example.TitleSpelling() unittest.TextTestRunner(verbosity=2).run(SeTestSuite) test_example.py - An example test case; it might be nice to make these run on their own too. from base import BaseSeleniumTest # Test the spelling of the title class TitleSpelling(BaseSeleniumTest): def test_a(self): self.assertTrue(False) def test_b(self): self.assertTrue(True) The problem is that when I run main.py I get the following error: Traceback (most recent call last): File "H:\Python\testframework\main.py", line 5, in <module> SeTestSuite = test_example.TitleSpelling() File "C:\Python27\lib\unittest\case.py", line 191, in __init__ (self.__class__, methodName)) ValueError: no such test method in <class 'test_example.TitleSpelling'>: runTest I suspect this is due to the very particular way in which unittest runs, and that I must have missed a trick in how the docs expect me to structure my tests. Any pointers? (A revised main.py is sketched below.)

    Read the article
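
    The traceback comes from constructing the TestCase directly: TestCase() defaults to a method named runTest, which TitleSpelling does not define. A minimal sketch of main.py that builds the suite through a TestLoader instead (module and class names taken from the question):

        # main.py -- build the suite with a TestLoader rather than
        # instantiating the TestCase class by hand.
        import unittest
        import test_example

        if __name__ == "__main__":
            loader = unittest.TestLoader()
            suite = unittest.TestSuite()
            suite.addTests(loader.loadTestsFromTestCase(test_example.TitleSpelling))
            # Add further modules with loader.loadTestsFromModule(some_module)
            # as the framework grows.
            unittest.TextTestRunner(verbosity=2).run(suite)

    Because test_example.py only contains test_* methods on a unittest.TestCase subclass, it can also be run on its own with python -m unittest test_example (Python 2.7+).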

  • How to unit test configs

    - by ForeverDebugging
    We're working with some very large config files which contain lots of Unity and WCF configuration. When we open some of these configs in the SVC config editor, or even try to open a web application using them, we receive errors pointing out typos and other mistakes - e.g. a WCF binding is invalid or does not exist, a configuration section does not exist, there are two ending tags, etc. Is there some way to "validate" a config through a unit test? Then there would be one less thing that could go wrong when the application is moved into a new environment. (A language-agnostic sketch of the idea follows below.)

    Read the article
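
    The question is about .NET/WCF configuration, but the underlying idea is language-agnostic, so here is a sketch of it in Python rather than C# (the file layout and required section names are invented for illustration): a test that parses every config file and fails fast on malformed XML or missing sections.

        # test_configs.py -- sketch of validating config files from a unit test.
        import glob
        import unittest
        import xml.etree.ElementTree as ET

        REQUIRED_SECTIONS = ["system.serviceModel", "unity"]   # hypothetical names

        class ConfigFileTests(unittest.TestCase):
            def test_configs_are_well_formed_and_complete(self):
                for path in glob.glob("configs/*.config"):      # hypothetical layout
                    tree = ET.parse(path)   # raises ParseError on malformed XML
                    top_level = set(child.tag for child in tree.getroot())
                    for section in REQUIRED_SECTIONS:
                        self.assertIn(section, top_level,
                                      "%s is missing a <%s> section" % (path, section))

        if __name__ == "__main__":
            unittest.main()

    A deeper check (e.g. that every binding referenced by a WCF endpoint actually exists) would need the platform's own configuration API, but even a parse-and-presence test catches the typo class of error described above.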

  • How to disable translations during unit tests in Django?

    - by Denilson Sá
    I'm using Django's internationalization tools to translate some strings in my application. The code looks like this: from django.utils.translation import ugettext as _ def my_view(request): output = _("Welcome to my site.") return HttpResponse(output) Then, I'm writing unit tests using the Django test client. These tests make a request to the view and compare the returned contents. How can I disable the translations while running the unit tests? I'm aiming to do this: class FoobarTestCase(unittest.TestCase): def setUp(self): # Do something here to disable the string translation. But what? # I've already tried this, but it didn't work: django.utils.translation.deactivate_all() def testFoobar(self): c = Client() response = c.get("/foobar") # I want to compare to the original string without translations. self.assertEquals(response.content.strip(), "Welcome to my site.") (One possible approach is sketched below.)

    Read the article
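
    One likely reason deactivate_all() in setUp() has no effect is that LocaleMiddleware re-activates a language for every request the test client makes. A sketch of forcing English per request instead, assuming the source strings are written in English and 'en' is an available language (on Python 3 / recent Django, compare against b"..." since response.content is bytes):

        from django.test import TestCase
        from django.test.client import Client

        class FoobarTestCase(TestCase):
            def test_foobar_untranslated(self):
                c = Client()
                # Extra keyword arguments become WSGI environ entries, so this
                # sends an Accept-Language header that LocaleMiddleware honours.
                response = c.get("/foobar", HTTP_ACCEPT_LANGUAGE="en")
                self.assertEqual(response.content.strip(), "Welcome to my site.")

    Overriding LANGUAGE_CODE (and, if defined, LANGUAGES) in the settings used for tests achieves the same thing project-wide.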

  • Where to put the unit tests for a library in Rails

    - by lidaobing
    Hello, I am a Ruby and Rails newbie, and I am working on a Rails application with RadRails. RadRails has a "Switch to Test" function for my controllers, models, etc., but not for my library. If I have a class Foo::Bar in /lib/foo/bar.rb, where should I put the unit test for it? Or should I move the foo library into a separate project? Thanks.

    Read the article

  • xUnit false positive when comparing null terminated strings

    - by mr.b
    I've come across odd behavior when comparing strings. The first assert passes, but I don't think it should. The second assert fails, as expected: [Fact] public void StringTest() { string testString_1 = "My name is Erl. I am a program\0"; string testString_2 = "My name is Erl. I am a program"; Assert.Equal<string>(testString_1, testString_2); Assert.True(testString_1.Equals(testString_2)); } Any ideas?

    Read the article

  • Unit Test Event Handler

    - by Thomas Tran
    I have this event handler - how can I write a unit test for it? public class MyLearningEvent { private event EventHandler _Closed; public event EventHandler Closed { add { _Closed -= value; _Closed += value; } remove { _Closed -= value; } } public void OnClosed() { if (_Closed != null) _Closed(this, EventArgs.Empty); } } I have just modified the code to make it clearer. Thanks

    Read the article

  • Mocking a static method call to a C# library class

    - by Joe
    This seems like an easy enough issue, but I can't seem to find the right keywords to make my searches effective. I'm trying to unit test by mocking out all objects within this method call. I am able to do so for all of my own creations, except for this one: public void MyFunc(MyVarClass myVar) { Image picture; ... picture = Image.FromStream(new MemoryStream(myVar.ImageStream)); ... } FromStream is a static call on the Image class (part of the .NET framework). So how can I refactor my code to mock this out? I really don't want to provide an image stream to the unit test. (The general refactoring is sketched below.)

    Read the article
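
    The usual refactoring is to route the static call through something the test can replace - in C# typically a small injected wrapper interface whose production implementation still calls Image.FromStream. The shape of the idea, sketched here in Python with made-up names:

        class ImageHandler(object):
            """Illustrative stand-in for the class that owns MyFunc."""

            def __init__(self, load_image=None):
                # Production code gets the real loader by default;
                # tests inject a cheap stub instead.
                self._load_image = load_image if load_image is not None else self._real_load

            @staticmethod
            def _real_load(data):
                # Placeholder for the real imaging call (Image.FromStream in .NET,
                # or PIL's Image.open over a BytesIO in Python).
                raise NotImplementedError("real image decoding goes here")

            def my_func(self, image_bytes):
                picture = self._load_image(image_bytes)
                return picture

        # In the unit test: no image data and no imaging library needed.
        handler = ImageHandler(load_image=lambda data: "stub image")
        assert handler.my_func(b"\x00\x01") == "stub image"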

  • How to unit test functions which return results asynchronously in Xcode?

    - by DevDevDev
    I have something like - (void)getData:(SomeParameter*)param { // Remotely call out for data returned asynchronously // returns data via a delegate method } - (void)handleDataDelegateMethod:(NSData*)data { // Handle returned data } I want to write a unit test for this. How can I do something better than NSData* returnedData = nil; - (void)handleDataDelegateMethod:(NSData*)data { returnedData = data; } - (void)test { [obj getData:param]; while (!returnedData) { [NSThread sleep:1]; } // Make tests on returnedData } (A language-neutral version of the usual pattern is sketched below.)

    Read the article
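
    The question is about Objective-C, but the pattern is language-neutral: wait on a signal with a timeout, so a broken callback fails the test instead of hanging it forever. The same shape sketched in Python, with FakeAsyncSource standing in for the real asynchronous object:

        import threading
        import unittest

        class FakeAsyncSource(object):
            """Stand-in for the real object, which returns data via a delegate."""
            def __init__(self, delegate):
                self._delegate = delegate

            def get_data(self, param):
                # Deliver the result on another thread, like the real async API.
                threading.Timer(0.1, self._delegate, args=["data for " + param]).start()

        class AsyncDataTest(unittest.TestCase):
            def setUp(self):
                self.data_arrived = threading.Event()
                self.returned_data = None

            def handle_data(self, data):
                # Plays the role of the delegate method in the question.
                self.returned_data = data
                self.data_arrived.set()

            def test_get_data(self):
                obj = FakeAsyncSource(delegate=self.handle_data)
                obj.get_data("some parameter")
                # Give up after 5 seconds instead of spinning in a sleep loop.
                self.data_arrived.wait(5.0)
                self.assertTrue(self.data_arrived.is_set(),
                                "timed out waiting for the async callback")
                self.assertEqual(self.returned_data, "data for some parameter")

        if __name__ == "__main__":
            unittest.main()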

  • How to access objects at run-time in QTP?

    - by Onnesh
    We have a function which accesses two types of controls - a button and a list box - in a standard Windows app. The function takes only the control name as an argument, so there is no way QTP can tell what type of control it is. How can this be resolved? Should we write two separate functions, one for the button and another for the list box?

    Read the article

  • What's the best way to avoid try...catch...finally... in my unit tests?

    - by Bruce Li
    I'm writing many unit tests in VS 2010 with Microsoft Test. In each test class I have many test methods similar to the one below: [TestMethod] public void This_is_a_Test() { try { // do some test here // assert } catch (Exception ex) { // test failed, log error message in my log file and make the test fail } finally { // do some cleanup with different parameters } } When every test method looks like this, I feel it's kind of ugly, but so far I haven't found a good way to make my test code cleaner - especially the cleanup code in the finally block. Could someone give me some advice on this? Thanks in advance. (One common pattern is sketched below.)

    Read the article
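
    The general answer in any xUnit-style framework is to let the runner catch and report exceptions itself (the catch-and-log block hides the stack trace MSTest would otherwise show) and to move cleanup out of the test body - in MSTest via a [TestCleanup] method, or a per-test cleanup hook where the framework has one. The shape of the idea, shown here in Python's unittest rather than MSTest:

        import unittest

        # Tiny stand-in so the sketch runs on its own; real code would create and
        # tear down whatever resource the finally block used to clean up.
        class Widget(object):
            def do_work(self):
                return True
            def destroy(self, force=False):
                pass

        class WidgetTests(unittest.TestCase):
            def test_widget_does_its_work(self):
                widget = Widget()
                # Registered cleanup runs whether the test passes, fails or raises,
                # so no try/finally is needed, and the per-test cleanup parameters
                # are captured right here (addCleanup exists from Python 2.7 on).
                self.addCleanup(widget.destroy, force=True)

                self.assertTrue(widget.do_work())
                # No try/catch either: an unexpected exception is reported by the
                # runner as an error, complete with its stack trace.

        if __name__ == "__main__":
            unittest.main()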

  • Is there a library available which easily can record and replay results of API calls?

    - by Billy ONeal
    I'm working on writing various things that call relatively complicated Win32 API functions. Here's an example: //Encapsulates calling NtQuerySystemInformation buffer management. WindowsApi::AutoArray NtDll::NtQuerySystemInformation( SystemInformationClass toGet ) const { AutoArray result; ULONG allocationSize = 1024; ULONG previousSize; NTSTATUS errorCheck; do { previousSize = allocationSize; result.Allocate(allocationSize); errorCheck = WinQuerySystemInformation(toGet, result.GetAs<void>(), allocationSize, &allocationSize); if (allocationSize <= previousSize) allocationSize = previousSize * 2; } while (errorCheck == 0xC0000004L); if (errorCheck != 0) { THROW_MANUAL_WINDOWS_ERROR(WinRtlNtStatusToDosError(errorCheck)); } return result; } //Client of the above. ProcessSnapshot::ProcessSnapshot() { using Dll::NtDll; NtDll ntdll; AutoArray systemInfoBuffer = ntdll.NtQuerySystemInformation( NtDll::SystemProcessInformation); BYTE * currentPtr = systemInfoBuffer.GetAs<BYTE>(); //Loop through the results, creating Process objects. SYSTEM_PROCESSES * asSysInfo; do { // Loop book keeping asSysInfo = reinterpret_cast<SYSTEM_PROCESSES *>(currentPtr); currentPtr += asSysInfo->NextEntryDelta; //Create the process for the current iteration and fill it with data. std::auto_ptr<ProcImpl> currentProc(ProcFactory( static_cast<unsigned __int32>(asSysInfo->ProcessId), this)); NormalProcess* nptr = dynamic_cast<NormalProcess*>(currentProc.get()); if (nptr) { nptr->SetProcessName(asSysInfo->ProcessName); } // Populate process threads for(ULONG idx = 0; idx < asSysInfo->ThreadCount; ++idx) { SYSTEM_THREADS& sysThread = asSysInfo->Threads[idx]; Thread thread( currentProc.get(), static_cast<unsigned __int32>(sysThread.ClientId.UniqueThread), sysThread.StartAddress); currentProc->AddThread(thread); } processes.push_back(currentProc); } while(asSysInfo->NextEntryDelta != 0); } My problem is in mocking out the NtDll::NtQuerySystemInformation method -- namely, that the data structure returned is complicated (Well, here it's actually relatively simple but it can be complicated), and writing a test which builds the data structure like the API call does can take 5-6 times as long as writing the code that uses the API. What I'd like to do is take a call to the API, and record it somehow, so that I can return that recorded value to the code under test without actually calling the API. The returned structures cannot simply be memcpy'd, because they often contain inner pointers (pointers to other locations in the same buffer). The library in question would need to check for these kinds of things, and be able to restore pointer values to a similar buffer upon replay. (i.e. check each pointer sized value if it could be interpreted as a pointer within the buffer, change that to an offset, and remember to change it back to a pointer on replay -- a false positive rate here is acceptable) Is there anything out there that does anything like this?

    Read the article

  • NHibernate - fast way to clear out database

    - by csetzkorn
    Hi, I intend to perform some automated integration tests. This requires the db to be put back into a 'clean state'. Is this the fastest/best way to do this: var cfg = new Configuration(); cfg.Configure(); cfg.AddAssembly("Bla"); new SchemaExport(cfg).Execute(false, true, false); Thanks. Christian

    Read the article

  • Automatic profiling in Visual Studio 2008

    - by phil
    Is there a way to do automatic profiling in Visual Studio 2008? I know how the profiling works, both from the command line and using the GUI in VS08. What I want to accomplish: after my nightly build I want to run some profiling (instrumentation) to see whether any functions (most likely always the same ones) have changed in some negative way (or positive, of course).

    Read the article

  • Unit Test Sessions Window Closes when debugging

    - by Daniel Dyson
    When I select an NUnit test in the Unit Test Sessions window and click debug, the window disappears. My breakpoints are hit, but if I hit F5, the Unit Test Sessions window does not return until the test returns a result or I stop the debugging session. This is preventing me from viewing any console output during tests. Any ideas?

    Read the article

  • What block is not being tested in my test method? (VS08 Test Framework)

    - by daft
    I have the following code: private void SetControlNumbers() { string controlString = ""; int numberLength = PersonNummer.Length; switch (numberLength) { case (10) : controlString = PersonNummer.Substring(6, 4); break; case (11) : controlString = PersonNummer.Substring(7, 4); break; case (12) : controlString = PersonNummer.Substring(8, 4); break; case (13) : controlString = PersonNummer.Substring(9, 4); break; } ControlNumbers = Convert.ToInt32(controlString); } Which is tested using the following test methods: [TestMethod()] public void SetControlNumbers_Length10() { string pNummer = "9999999999"; Personnummer target = new Personnummer(pNummer); Assert.AreEqual(9999, target.ControlNumbers); } [TestMethod()] public void SetControlNumbers_Length11() { string pNummer = "999999-9999"; Personnummer target = new Personnummer(pNummer); Assert.AreEqual(9999, target.ControlNumbers); } [TestMethod()] public void SetControlNumbers_Length12() { string pNummer = "199999999999"; Personnummer target = new Personnummer(pNummer); Assert.AreEqual(9999, target.ControlNumbers); } [TestMethod()] public void SetControlNumbers_Length13() { string pNummer = "1999999-9999"; Personnummer target = new Personnummer(pNummer); Assert.AreEqual(9999, target.ControlNumbers); } For some reason Visual Studio says that I have 1 block that is not tested, despite showing all the code in the method under test in blue (i.e. the code is covered by my unit tests). Is this because I don't have a default case defined in the switch? When the SetControlNumbers() method is called, the string on which it operates has already been validated and checked to see that it conforms to the specification, and the various Substring calls in the switch will generate a string containing 4 chars. I'm just curious as to why it says there is 1 untested block. I'm no unit test guru at all, so I'd love some feedback on this. Also, how can I improve the conversion after the switch to make it safer, other than adding a try-catch block and checking for FormatException and OverflowException?

    Read the article

  • What are the basic things you should keep in mind while writing functional tests?

    - by piemesons
    Hello, what kind of functionality should be covered by a functional test? How do you prioritize the functionality that must be covered? I understand it depends on the project and on what functionality is present in it. Let's take Stack Overflow as an example, and suppose we are developing a basic working model of it. What kind of functional tests should it cover? Just briefly outline the key points to consider while writing functional tests (taking any basic functionality from this site as a reference). Still, it's a platform-independent question. I am a Ruby on Rails developer, so answers that keep Ruby on Rails in mind would be preferable.

    Read the article
