Search Results

Search found 4783 results on 192 pages for 'tests'.

Page 18/192

  • How to Run NUnit Tests from C# Code

    - by Dror Helper
    I'm trying to write a simple method that receives a file and runs it using NUnit. The code I managed to build using NUnit's source does not work:

        if (openFileDialog1.ShowDialog() != DialogResult.OK)
        {
            return;
        }

        var builder = new TestSuiteBuilder();
        var testPackage = new TestPackage(openFileDialog1.FileName);
        var directoryName = Path.GetDirectoryName(openFileDialog1.FileName);
        testPackage.BasePath = directoryName;
        var suite = builder.Build(testPackage);
        TestResult result = suite.Run(new NullListener(), TestFilter.Empty);

    The problem is that I keep getting an exception thrown by builder.Build stating that the assembly was not found. What am I missing? Is there some other way to run the test from the code (without using Process.Start)?
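
    A hedged sketch of one fix, based on the NUnit 2.x core API the question's code already targets: TestSuiteBuilder relies on NUnit's internal services (assembly resolution among them), which the host process must initialize before calling Build. CoreExtensions.Host.InitializeService() and SimpleTestRunner live in NUnit.Core, but the exact Run overloads vary between 2.x releases, so treat this as a starting point rather than the definitive answer:

        using System;
        using System.IO;
        using NUnit.Core;

        public static class TestAssemblyRunner
        {
            public static TestResult RunAll(string assemblyPath)
            {
                // Without this, Build()/Load() cannot locate the assembly.
                CoreExtensions.Host.InitializeService();

                string fullPath = Path.GetFullPath(assemblyPath);
                var package = new TestPackage(fullPath);
                package.BasePath = Path.GetDirectoryName(fullPath);

                // SimpleTestRunner runs in-process, i.e. no Process.Start.
                var runner = new SimpleTestRunner();
                if (!runner.Load(package))
                    throw new ArgumentException("No tests found in: " + assemblyPath);

                return runner.Run(new NullListener(), TestFilter.Empty);
            }
        }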

    Read the article

  • Spring and JUnit annotated tests: creating fixtures in separate transactions

    - by Francois
    I am testing my Hibernate DAOs with Spring and JUnit. I would like each test method to start with a pre-populated DB, i.e. the Java objects have been saved in the DB in a Hibernate transaction that has already been committed. How can I do this? With @After and @Before, methods execute in the same Hibernate transaction as the methods decorated with @Test and @Transactional (the first-level cache may not have been flushed by the time the real test method starts). @BeforeTransaction and @AfterTransaction apparently cannot work with Hibernate because they don't create transactions, even if the method is annotated with @Transactional in addition to @Before/AfterTransaction. Any suggestions?

    Read the article

  • Custom annotations to configure tests

    - by ace
    First of all, let me start off by saying I think custom annotations can be used for this, but I'm not totally sure. I would like to have a set of annotations that I can decorate some test classes with. The annotations would allow me to configure the tests for different environments. Example:

        public class Atest extends BaseTest {
            private String env;

            @Login(environment = env)
            public void testLogin() {
                // do something
            }

            @SignUp(environment = env)
            public void testSignUp() {
                // do something
            }
        }

    The idea here is that the @Login annotation would be used to look up the username and password used by the testLogin method, for testing a login process against a particular environment. So my question: is this possible to do with annotations? If so, I have not been able to find a decent howto online for something like this. Everything out there seems to be your basic "here's how to write a custom annotation and a basic processor", but I haven't found anything for a situation like this. Ideas?

    Read the article

  • Constructing mocks in unit tests

    - by Flynn1179
    Is there any way to have a mock constructed instead of a real instance when testing code that calls a constructor? For example:

        public class ClassToTest
        {
            public void MethodToTest()
            {
                MyObject foo = new MyObject();
                Console.WriteLine(foo.ToString());
            }
        }

    In this example, I need to create a unit test that confirms that calling MethodToTest on an instance of ClassToTest will indeed output whatever a newly created MyObject's ToString() method returns. I can't see a way of realistically testing the ClassToTest class in isolation; testing this method would actually test the myObject.ToString() method as well as MethodToTest.
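
    A constructor call inside a method can't be intercepted by most .NET mocking frameworks, so the usual answer is to make the creation injectable. A minimal sketch, keeping the question's MyObject and adding an assumed factory delegate; the parameterless constructor preserves the original behaviour for production callers:

        using System;

        public class ClassToTest
        {
            private readonly Func<MyObject> _factory;

            // Production code keeps working: new ClassToTest() behaves as before.
            public ClassToTest() : this(() => new MyObject()) { }

            // Tests inject a factory that returns a stub or mock instead.
            public ClassToTest(Func<MyObject> factory)
            {
                _factory = factory;
            }

            public void MethodToTest()
            {
                MyObject foo = _factory();
                Console.WriteLine(foo.ToString());
            }
        }

    A test can then pass () => someFakeMyObject and assert on the captured console output without exercising the real MyObject.ToString(). (If MyObject can't be subclassed or faked, extracting an interface is the next step.)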

    Read the article

  • How to write automated tests for SQL queries?

    - by James
    The current system we are adopting at work is to write extremely complex queries which perform multiple calculations and have multiple joins/sub-queries. I don't think I am experienced enough to say whether this is correct or not, so I am going along with it, as it has clear benefits. The problem we are having at the moment is that the person writing the queries makes a lot of mistakes and assumes everything is correct. We have now assigned a tester to analyse all of the queries, but this still proves extremely time-consuming and stressful.

    I would like to know how we could create an automated procedure (ideally without writing it all by hand in code, as I can work out how to do that the long way) to run a set of 10+ different inputs, verify the output data, and say whether the calculations are correct. I know I could seed the database with specific data and write a script in C# (the DB is SQL Server) to verify all the values coming out, but I would like to know what the accepted "standard" is, as my experience is lacking in this area and I would like to improve. I am happy to add more information if required; add a comment if necessary. Thank you. Edit: I am using C#.
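
    There is no single official standard, but the common pattern is a data-driven integration test: seed a known dataset in a setup step, run the query under test, and assert against hand-computed results. A minimal sketch using NUnit and System.Data.SqlClient; the connection string, table, columns and expected sum are placeholders to adapt to the real schema:

        using System.Data.SqlClient;
        using NUnit.Framework;

        [TestFixture]
        public class ReportQueryTests
        {
            // Placeholder connection string; point it at a dedicated test DB.
            private const string ConnectionString =
                "Server=.;Database=QueryTestDb;Integrated Security=true";

            [SetUp]
            public void SeedKnownData()
            {
                // Start every test from the same known rows (placeholder schema).
                Execute("DELETE FROM Orders");
                Execute("INSERT INTO Orders (Id, Amount) VALUES (1, 10.0), (2, 32.5)");
            }

            [Test]
            public void TotalAmount_IsSumOfSeededRows()
            {
                using (var conn = new SqlConnection(ConnectionString))
                using (var cmd = new SqlCommand("SELECT SUM(Amount) FROM Orders", conn))
                {
                    conn.Open();
                    // 10.0 + 32.5, computed by hand, not by the query under test.
                    Assert.AreEqual(42.5m, (decimal)cmd.ExecuteScalar());
                }
            }

            private static void Execute(string sql)
            {
                using (var conn = new SqlConnection(ConnectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }

    Each of the 10+ inputs then becomes one seeded scenario with its own expected outputs; tools such as tSQLt or DbUnit-style fixtures industrialise the same idea.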

    Read the article

  • How to manage IoC containers in tests?

    - by frosty
    I'm very new to testing and IoC containers and have two projects:

        MySite.Website (MVC)
        MySite.WebsiteTest

    Currently I have an IoC container in my website. Should I create a second IoC container for my tests? Or is there a way to use the same one in both?
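
    You don't have to duplicate the registrations: a common arrangement is to put them in one shared module that both the website and the test project call, and let tests override individual registrations with fakes. A minimal sketch assuming Unity as the container (the repository types are placeholders; the same shape works with any container):

        using Microsoft.Practices.Unity;

        public static class ContainerConfig
        {
            // Single source of truth for registrations, called by both projects.
            public static IUnityContainer Build()
            {
                var container = new UnityContainer();
                // IUserRepository/SqlUserRepository are illustrative placeholders.
                container.RegisterType<IUserRepository, SqlUserRepository>();
                return container;
            }
        }

    MySite.Website calls ContainerConfig.Build() at startup; MySite.WebsiteTest calls the same method and then, per test, replaces what it needs, e.g. container.RegisterInstance<IUserRepository>(new FakeUserRepository()). True unit tests often skip the container entirely and construct the class under test by hand.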

    Read the article

  • Python unittest: Generate multiple tests programmatically?

    - by Rosarch
    I have a function to test, under_test, and a set of expected input/output pairs:

        [
            (2, 332),
            (234, 99213),
            (9, 3),
            # ...
        ]

    I would like each one of these input/output pairs to be tested in its own test_* method. Is that possible? This is sort of what I want, but forcing every single input/output pair into a single test:

        class TestPreReqs(unittest.TestCase):

            def setUp(self):
                self.expected_pairs = [(23, 55), (4, 32)]

            def test_expected(self):
                for exp in self.expected_pairs:
                    self.assertEqual(under_test(exp[0]), exp[1])

        if __name__ == '__main__':
            unittest.main()

    Read the article

  • MSTest Not Finding New Tests

    - by Blake Blackwell
    Using VS2010, I can't seem to add additional test methods. If I set up my project like this:

        [TestMethod]
        public void Test1()
        {
            Assert.AreNotEqual(0, 1);
        }

        [TestMethod]
        public void Test2()
        {
            Assert.AreNotEqual(0, 1);
        }

    the only test that shows up in my Test View is Test1. How do I make sure Test2 gets into that list?
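
    The usual checklist, sketched as a complete class under the assumption that the test project is otherwise wired up correctly: the class must be public and carry [TestClass], each test must be a public, parameterless void method carrying [TestMethod], and Test View only refreshes after a rebuild of the test project:

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class MyTests
        {
            [TestMethod]
            public void Test1()
            {
                Assert.AreNotEqual(0, 1);
            }

            [TestMethod]
            public void Test2()
            {
                Assert.AreNotEqual(0, 1);
            }
        }

    If Test2 still doesn't appear after a rebuild, the Test View filter (or a stale .vsmdi test list) is the next thing to check.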

    Read the article

  • JUnit test that creates other tests

    - by Benju
    Normally I would have one JUnit test that shows up in my integration server of choice as one test that passes or fails (in this case I use TeamCity). What I need for this specific test is the ability to loop through a directory structure testing that our data files can all be parsed without throwing an exception. Because we have 30,000+ files that take 1-5 seconds each to parse, this test will be run in its own suite. The problem is that I need a way to have one piece of code run as one JUnit test per file, so that if 12 files out of the 30,000 fail, I can see which 12 failed, not just that one test failed, threw a RuntimeException and stopped. I realize that this is not a true "unit" test way of doing things, but this simulation is very important to make sure that our content providers are kept in check and do not check in invalid files. Any suggestions?

    Read the article

  • Setup database for Unit tests with Spring, Hibernate and Spring Transaction Support

    - by Michael Bulla
    I want to test the integration of my DAO layer with my service layer in a unit test, so I need to set up some data in my database (HSQL). For this setup I need my own transaction at the beginning of the test case, to ensure that all of my setup is really committed to the database before the test itself starts. So here's what I want to achieve:

        // not transactional
        public void doTest() {
            // transaction begins
            // set up database
            // commit transaction
            service.doStuff(); // doStuff is annotated @Transactional(propagation=Propagation.REQUIRED)
        }

    Here is my not-working code:

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations={"/asynchUnit.xml"})
        @DirtiesContext(classMode=ClassMode.AFTER_EACH_TEST_METHOD)
        public class ReceiptServiceTest implements ApplicationContextAware {

            @Autowired(required=true)
            private UserHome userHome;

            private ApplicationContext context;

            @Before
            @Transactional(propagation=Propagation.REQUIRED)
            public void init() throws Exception {
                User user = InitialCreator.createUser();
                userHome.persist(user);
            }

            @Test
            public void testDoSomething() {
                ...
            }
        }

    It leads to this exception:

        org.hibernate.HibernateException: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here
            at org.springframework.orm.hibernate3.SpringSessionContext.currentSession(SpringSessionContext.java:63)
            at org.hibernate.impl.SessionFactoryImpl.getCurrentSession(SessionFactoryImpl.java:687)
            at de.diandan.asynch.modell.GenericHome.getSession(GenericHome.java:40)
            at de.diandan.asynch.modell.GenericHome.persist(GenericHome.java:53)
            [... reflection frames ...]
            at $Proxy28.persist(Unknown Source)
            at de.diandan.asynch.service.ReceiptServiceTest.init(ReceiptServiceTest.java:63)
            [... JUnit, Spring test-runner and Eclipse launcher frames ...]

    I don't know the right way to get the transaction around the database setup. Here's what I tried:

        @Before
        @Transactional(propagation=Propagation.REQUIRED)
        public void setup() {
            // set up database
        }

    Spring seems not to start a transaction in @Before-annotated methods. Beyond that, this isn't really what I want, because there are a lot of methods in my test class that need a slightly different setup, so I would need several of these init methods.

        @Transactional(propagation=Propagation.REQUIRED)
        public void setup() {
            // set up database
        }

        public void doTest() {
            init();
            service.doStuff(); // doStuff is annotated @Transactional(propagation=Propagation.REQUIRED)
        }

    Here init seems not to get started in a transaction. What I don't want to do:

        public void doTest() {
            // my own transaction handling
            // set up database
            // my own transaction handling
            service.doStuff(); // doStuff is annotated @Transactional(propagation=Propagation.REQUIRED)
        }

    Mixing Spring's transaction handling with my own seems like a pain. And not this either:

        @Transactional(propagation=Propagation.REQUIRED)
        public void doTest() {
            // set up database
            service.doStuff();
        }

    I want to test as realistic a situation as possible, so my service should start with a clean session and no transaction open. So what's the right way to set up the database for my test case?

    Read the article

  • Managing test data for JUnit tests

    - by nobody
    Hi, we are facing a problem managing test data (XML files used to create mock objects). The data we currently have has evolved over a long period of time. Each time we add new functionality or a new test case, we add new data to test that functionality. Now, the problem: when a business requirement changes the format (like the length or format of a variable), or makes any change the test data doesn't support, we need to change the entire test data set, which is hundreds of MBs in size. Could anyone suggest a better method or process to overcome this problem? Any suggestion would be appreciated.

    Read the article

  • How can a test script inform R CMD check that it should emit a custom message?

    - by mariotomo
    I'm writing an R package (delftfews) here at the office. We are using svUnit for unit testing. Our process for describing new functionality: we define new unit tests, initially marked as DEACTIVATED; one block of tests at a time, we activate them and implement the function described by the tests. Almost all the time we have a small number of DEACTIVATED tests, relating to functions that might be dropped or are yet to be implemented.

    My problem/question is: can I alter doSvUnit.R so that R CMD check pkg emits a NOTE (i.e. a custom message "NOTE" instead of "OK") in case there are DEACTIVATED tests? As of now, we only see that the active tests don't give errors:

        . .
        * checking for unstated dependencies in tests ... OK
        * checking tests ...
          Running ‘doSvUnit.R’
         OK
        * checking PDF version of manual ... OK

    which is all right if all tests succeed, but less all right if there are skipped tests, and definitely wrong if there are failing tests. In those cases, I'd actually like to see a NOTE or a WARNING like the following:

        . .
        * checking for unstated dependencies in tests ... OK
        * checking tests ...
          Running ‘doSvUnit.R’
         NOTE 6 test(s) were skipped.
         WARNING 1 test(s) are failing.
        * checking PDF version of manual ... OK

    As of now, we have to open doSvUnit.Rout to check the real test results. I contacted two of the maintainers at r-forge and CRAN and they pointed me to the sources of R, in particular the testing.R script. If I understand it correctly, to answer this question we would need to patch the tools package: scripts in the tests directory are called using a system call, output (stdout and stderr) goes to one single file, and there are only two possible outcomes: ok or not ok. So I opened a change request on R, proposing something like bit-coding the return status: bit 0 for ERROR (as it is now), bit 1 for WARNING, bit 2 for NOTE. With my modification, it would be easy to produce this output:

        . .
        * checking for unstated dependencies in tests ... OK
        * checking tests ...
          Running ‘doSvUnit.R’
         NOTE - please check doSvUnit.Rout.
         WARNING - please check doSvUnit.Rout.
        * checking PDF version of manual ... OK

    Brian Ripley replied "There are however several packages with properly written unit tests that do signal as required. Please do take this discussion elsewhere: R-bugs is not the place to ask questions." and closed the change request. Anybody have hints?

    Read the article

  • Where can I find the Visual Studio 2010 RTM release notes?

    - by Lernkurve
    Question: Where can I find a list of changes introduced in Visual Studio 2010 Ultimate RTM since Visual Studio 2010 Ultimate RC? In fact, I'm only interested in changes related to MS Test Manager 2010 and Coded UI Tests.

    Where I have looked so far: I have searched the Internet, looked for a readme.txt in the installation folder, looked into the Visual Studio help (F1) and browsed the "What's new in Visual Studio 2010" section on MSDN. No luck. I found Scott Guthrie's blog post "Visual Studio 2010 and .NET 4 Released", but that's not exactly what I am looking for; it's not a changelog since the VS2010 RC. I suppose there is no such file because they made too many changes to document and hand out to end users. But if there is one, I'd be glad if someone could point me to it. Thanks.

    Read the article

  • Selenium Grid not always using all of its registered RC's, why?

    - by BenA
    My Selenium Grid setup is as follows (all VMs):

        VM1 - Windows 7 x64 - Grid hub + 2 RCs registering the default *firefox environment
        VM2 - Windows XP x32 - 2 RCs registering the default *firefox environment
        VM3 - Windows XP x32 - 2 RCs registering the default *firefox environment

    I'm happily using MbUnit and Gallio to drive the Grid, but my problem is that sometimes the Grid hub will stop passing executions over to one or more of the RCs, despite their showing available on the hub console. They seem to be happily maintaining their heartbeat back to the hub, but they're never asked to do any more work, even after executing tests earlier in the same run. Does anybody have any ideas why this should happen? In every case I've observed this behaviour, the last test the RC executed before seemingly being ignored by the hub passed, and the session was successfully closed. Interestingly, whenever it happens to more than one of the RCs, it's always (so far) been the pair running on the same VM. Yet they're managing to maintain their heartbeat, so it isn't a network connectivity problem. Any help would be greatly appreciated!

    Read the article

  • Automated testing of a website for IE7 javascript errors?

    - by Andreas Bonini
    This week I decided to add a new element to a JavaScript array by copying a similar one from a previous line; unfortunately I forgot to remove the trailing comma, so the end result was something like var a = [1, 2, 3,]. The code went live late Friday afternoon, just before everyone left for the weekend, and it completely broke everything in Internet Explorer 7 (and lower, I assume), since it's such a great browser. Since there was no one reading emails over the weekend, it went unnoticed for quite a while, and I really don't want something like this to happen again (especially in my code). This is not the first of our weird IE7 problems; I was wondering if there is a way to automatically test key pages looking for JavaScript or CSS errors, or really anything that IE8 would output in the console of its new developer tools. If there isn't, what do you usually do? Do you test the website after every change with all the browsers you support? (Something I'll do from now on, at least for IE, if there is no way to run automated tests.)

    Read the article

  • Launching a Java test from the command line

    - by lamisse
    I created runner.bat to launch one Java test. It contains:

        <path to java> -classpath <classpath> org.junit.runner.JUnitCore package.class

    When I launch it:

        FAILURES
        Tests run: 1, Failures: 1
        Exception in thread "Thread-0" java.lang.IllegalStateException: Shutdown in progress
            at java.lang.ApplicationShutdownHooks.add(Unknown Source)
            at java.lang.Runtime.addShutdownHook(Unknown Source)
            at com.sun.imageio.stream.StreamCloser$2.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at com.sun.imageio.stream.StreamCloser.addToQueue(Unknown Source)
            at javax.imageio.stream.FileCacheImageInputStream.<init>(Unknown Source)
            at com.sun.imageio.spi.InputStreamImageInputStreamSpi.createInputStreamInstance(Unknown Source)
            at javax.imageio.ImageIO.createImageInputStream(Unknown Source)
            at javax.imageio.ImageIO.read(Unknown Source)
            at com.polyspace.util.guicomponent.CompositePanel.setBufferedImage(Unknown Source)
            at com.polyspace.util.guicomponent.CompositePanel.<init>(Unknown Source)

    Read the article

  • Seeking recommendations on automated test framework for C

    - by Hissohathair
    I'm writing some code (some of which uses W3C's libwww) in C. It's been a while since I've touched ANSI C; back in the day we rolled our own test framework. Does anybody here have any test frameworks they recommend for C programming? Googling around, I was inclined to go with Check. It has a page on other unit testing frameworks in C, a few of which I've taken a quick look at. GNU AutoUnit seemed like it might be a good choice since I'm using the GNU build tools (autoconf, automake), but it doesn't look that alive. Another option would be to use a C++ framework and just write my tests in C++. Anyway, any experienced opinions would be appreciated. Thanks.

    Read the article

  • "rake test" doesn't load fixtures?

    - by Pavel K.
    When I run rake test --trace, here's what happens:

        ** Invoke test (first_time)
        ** Execute test
        ** Invoke test:units (first_time)
        ** Invoke db:test:prepare (first_time)
        ** Invoke db:abort_if_pending_migrations (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        ** Execute db:abort_if_pending_migrations
        ** Execute db:test:prepare
        ** Invoke db:test:load (first_time)
        ** Invoke db:test:purge (first_time)
        ** Invoke environment
        ** Execute db:test:purge
        ** Execute db:test:load
        ** Invoke db:schema:load (first_time)
        ** Invoke environment
        ** Execute db:schema:load
        ** Execute test:units
        /usr/bin/ruby1.8 -I"lib:test"....

    and after that it fails because no fixtures are loaded. Why doesn't it load fixtures (I thought that would be the default behaviour), and how do I make it load fixtures before executing tests? P.S. My test/test_helper.rb content is:

        ENV["RAILS_ENV"] = "test"
        require File.expand_path(File.dirname(__FILE__) + "/../config/environment")
        require 'test_help'

        class ActiveSupport::TestCase
          self.use_transactional_fixtures = true
          self.use_instantiated_fixtures = false
          fixtures :all
        end

    (Rails 2.3.4)

    Read the article

  • Stop MSVC++ debug errors from blocking the current process?

    - by Mike Arthur
    Any failed ASSERT statement on Windows causes a debug message box to appear and freezes the application's execution. I realise this is expected behaviour, but the suite runs periodically on a headless machine, so instead of the unit tests failing, they wait on user input indefinitely. Is there a registry key or compiler flag I can use to prevent this message box from requesting user input while still allowing the test to fail on ASSERT? Basically, I want to do this without modifying any code, just by changing compiler or Windows options. Thanks!

    Read the article

  • How to programmatically start a WPF application from a unit test?

    - by Lernkurve
    Problem: VS2010 and TFS2010 support creating so-called Coded UI Tests. All the demos I have found either start with the WPF application already running in the background when the Coded UI Test begins, or start the EXE using an absolute path to it. I, however, would like to start my WPF application under test from the unit test code. That way it'll also work on the build server and on my peers' working copies. How do I accomplish that?

    My discoveries so far:

    a) This post shows how to start a XAML window. But that's not what I want. I want to start the App.xaml, because it contains XAML resources and there is application logic in the code-behind file.

    b) The second screenshot on this post shows a line starting with ApplicationUnderTest calculatorWindow = ApplicationUnderTest.Launch(...); which is conceptually pretty much what I am looking for, except that again this example uses an absolute path to the executable file.

    c) A Google search for "Programmatically start WPF" didn't help either.
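
    A sketch of the relative-path approach, assuming a conventional solution layout (the ..\..\.. hop count and the MyWpfApp names are placeholders for wherever the build drops the EXE): resolve the application's path from the test assembly's own location, then hand it to ApplicationUnderTest.Launch, which starts the EXE normally and therefore runs the full App.xaml startup, resources and code-behind included:

        using System.IO;
        using System.Reflection;
        using Microsoft.VisualStudio.TestTools.UITesting;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [CodedUITest]
        public class AppLaunchTests
        {
            [TestMethod]
            public void LaunchesApplicationUnderTest()
            {
                // Anchor on the test DLL, not on an absolute path, so the
                // same test resolves on the build server and peers' machines.
                string testDir = Path.GetDirectoryName(
                    Assembly.GetExecutingAssembly().Location);

                // Placeholder relative path; adjust to the solution's layout.
                string exePath = Path.GetFullPath(Path.Combine(
                    testDir, @"..\..\..\MyWpfApp\bin\Debug\MyWpfApp.exe"));

                ApplicationUnderTest app = ApplicationUnderTest.Launch(exePath);
                Assert.IsNotNull(app);
            }
        }

    Deployment items ([DeploymentItem] on the test, copying the EXE next to the test binaries) are the other common way to make the path machine-independent.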

    Read the article
