Search Results

Search found 13748 results on 550 pages for 'split testing'.


  • Can I stop NUnit GUI test run from code under test?

    - by David White
    I'm using nunit.exe (v2.5.3, as it happens) so that our testers can run WatiN UI tests against our web site. The regression test suite is up to around 100 tests. While the test suite is running, the test web site could go down for maintenance. It would be more efficient in those circumstances if the test suite stopped altogether, rather than attempting to run each remaining test and failing. Is there any way to accomplish this?
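
    There's no supported NUnit 2.5 API for aborting a GUI run from the code under test, but one common workaround (a sketch, with hypothetical names) is a shared base fixture whose SetUp short-circuits every remaining test once one of them flags the site as down:

        using NUnit.Framework;

        public abstract class SiteTestBase
        {
            // Shared across every fixture in the run; set once the site is detected down.
            private static bool _siteDown;

            [SetUp]
            public void SkipIfSiteIsDown()
            {
                if (_siteDown)
                    Assert.Ignore("Site is down for maintenance; skipping remaining tests.");
            }

            // Call this from a catch block when a test fails because the site is unreachable.
            protected static void MarkSiteDown()
            {
                _siteDown = true;
            }
        }

    The remaining tests then show up as ignored rather than failed, and each one returns immediately.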

    Read the article

  • Did test server port change in Rails 2.3?

    - by kareem
    I upgraded Rails from 2.1.1 to 2.3.2 yesterday and a bunch of my tests started failing. When I was running under 2.1.1, the test server was running on port 3000, so I had a HOST_DOMAIN variable that included the port - HOST_DOMAIN = "localhost.tst:3000" - so that my assert_redirected_to assertions would succeed. Now, however, it seems that the test server is running on port 80, so the port in HOST_DOMAIN is causing tests to fail. There's no specific reason I'm keeping the port in HOST_DOMAIN. What I really want to know is whether something in Rails 2.3 changed the port the test server runs on, and where I can read more about why. I've searched a ton and can't find anything, so I'm going to my go-to place to ask development questions :) Thanks in advance.
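
    A sketch of one port-agnostic arrangement for functional tests (the controller, route, and HOST_DOMAIN usage here are illustrative assumptions, not a confirmed fix for the 2.3 change):

        class SessionsControllerTest < ActionController::TestCase
          HOST_DOMAIN = "localhost.tst"   # no port; the functional-test request defaults to port 80

          def setup
            @request.host = HOST_DOMAIN
          end

          def test_redirect_after_login
            post :create, :login => "bob"
            assert_redirected_to "http://#{HOST_DOMAIN}/"
          end
        end

    Deriving the expected host from @request rather than a hard-coded constant keeps the assertion valid whatever port the test server picks.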

    Read the article

  • How to run White + SL4 UATs through TeamCity?

    - by Duncan Bayne
    After experiencing a series of unpleasant issues with TFS, including source code corruption and project management inflexibility, we (meaning the project team of which I'm a part) have decided to move from TFS 2010 to TeamCity + SVN + V1. I've managed to get our MSTest component and unit tests running as part of every build. However, our UATs are failing, and I was hoping for some advice from the TeamCity community on best practices for running web servers and interacting with the desktop. Each of our UAT fixtures starts a web server to host the site, like this:

        public static void StartWebServer()
        {
            var pathToSite = @"C:\projects\myproject\FrontEnd\MyProject.FrontEnd.Web";
            var webServer = new Process
            {
                StartInfo = new ProcessStartInfo
                {
                    Arguments = string.Format("/port:9150 /path:\"{0}\"", pathToSite),
                    FileName = @"C:\Program Files (x86)\Common Files\microsoft shared\DevServer\10.0\WebDev.WebServer40.EXE"
                }
            };
            webServer.Start();
        }

    Needless to say, this doesn't work when running through TeamCity, as the pathToSite value is different each time. Is there a way of determining the path into which the code is checked out prior to building? That would allow me to point the web server at the right place. The other issue is that our UATs use White to drive the Silverlight UI through an instance of Internet Explorer:

        _browserWindow = InternetExplorer.Launch("http://localhost:9150/index.html#/Home", "Home - Windows Internet Explorer");
        _document = _browserWindow.SilverlightDocument;

    I've ensured that the TeamCity service is granted the ability to interact with the desktop, and I've set the build agent machine up to log in automatically (an open session is a prerequisite for White to work properly). Is that all I need to do, or are there additional steps required?
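
    TeamCity does expose the checkout directory to build steps as a build property you can pass into the test run, but a build-server-agnostic option is to derive the path from the test assembly's own location. A sketch, assuming the usual bin\Debug output layout and the repository structure implied by the question:

        using System;
        using System.Diagnostics;
        using System.IO;

        public static class TestWebServer
        {
            public static Process Start()
            {
                // Walk up from e.g. <checkout>\UATs\bin\Debug to the site folder;
                // the relative path is an assumption about the repository layout.
                var baseDir = AppDomain.CurrentDomain.BaseDirectory;
                var pathToSite = Path.GetFullPath(
                    Path.Combine(baseDir, @"..\..\..\FrontEnd\MyProject.FrontEnd.Web"));

                var webServer = new Process
                {
                    StartInfo = new ProcessStartInfo
                    {
                        Arguments = string.Format("/port:9150 /path:\"{0}\"", pathToSite),
                        FileName = @"C:\Program Files (x86)\Common Files\microsoft shared\DevServer\10.0\WebDev.WebServer40.EXE"
                    }
                };
                webServer.Start();
                return webServer;
            }
        }

    Because the path is computed relative to wherever the assemblies land, the same fixture works on a developer machine and on any build agent.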

    Read the article

  • WAMP phpundercontrol installation guide / tutorial

    - by Shiro
    Our team is thinking of starting to use PHP unit testing. I haven't been able to find a complete, current tutorial on installing phpUnderControl in a WAMP environment, and I have no experience with PHP unit testing, but we know we need it. Our goal is to build the project every day, so we know when a bug appears. We would also like to improve collaboration within the team. Could someone explain how to install phpUnderControl in a WAMP environment, or point me to some links that might help? I did some research, but most of the pages I found are outdated and the commands they provide don't work for me. Thank you so much.

    Read the article

  • Have you been in cases where TDD increased development time?

    - by BillyONeal
    Hello everyone :) I was reading http://stackoverflow.com/questions/2512504/tdd-how-to-start-really-thinking-tdd and I noticed many of the answers indicate that writing the tests plus the application should take less time than just writing the application. In my experience, this is not true. My problem is that some 90% of the code I write makes a TON of operating system calls. The time spent mocking these out is much longer than writing the code in the first place; sometimes it takes 4 or 5 times as long to write the test as to write the actual code. I'm curious whether there are other developers in this kind of scenario.
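
    The usual mitigation for that mocking cost, for what it's worth, is to funnel the OS calls through one thin, hand-rolled seam instead of mocking each call site. A minimal sketch (interface and names are hypothetical):

        using System.Collections.Generic;
        using System.IO;

        // Thin seam over only the OS calls the code actually uses.
        public interface IFileSystem
        {
            bool FileExists(string path);
            string ReadAllText(string path);
        }

        // Production implementation: a trivial pass-through, not worth unit testing itself.
        public class RealFileSystem : IFileSystem
        {
            public bool FileExists(string path) { return File.Exists(path); }
            public string ReadAllText(string path) { return File.ReadAllText(path); }
        }

        // Reusable in-memory fake: written once, then shared by every test.
        public class FakeFileSystem : IFileSystem
        {
            public Dictionary<string, string> Files = new Dictionary<string, string>();
            public bool FileExists(string path) { return Files.ContainsKey(path); }
            public string ReadAllText(string path) { return Files[path]; }
        }

    The per-test setup then shrinks to populating a dictionary, which amortizes the mocking effort across the suite.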

    Read the article

  • Putting BigDecimal data into HSQLDB test database using DbUnit

    - by Denise
    Hi everyone. I'm using Hibernate JPA in my backend, and I'm writing a unit test using JUnit and DbUnit to insert a set of data into an in-memory HSQL database. My dataset contains:

        <order_line order_line_id="1" quantity="2" discount_price="0.3"/>

    which maps to an OrderLine Java object where the discount_price column is defined as:

        @Column(name = "discount_price", precision = 12, scale = 2)
        private BigDecimal discountPrice;

    However, when I run my test case and assert that the discount price returned equals 0.3, the assertion fails and says that the stored value is 0. If I change the discount_price in the dataset to 0.9, it rounds up to 1. I've checked to make sure HSQLDB isn't doing the rounding, and it definitely isn't, because I can insert an order line object with a value like 5.3 using Java code and it works fine. To me, it seems like DbUnit is for some reason rounding the number I've defined. Is there a way I can force this not to happen? Can anyone explain why it might be doing this? Thanks!
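
    One plausible cause worth ruling out (an assumption, not confirmed by the question): the rounding happens at the column, not inside DbUnit. If the HSQLDB test schema declares the column as plain DECIMAL, HSQLDB gives it scale 0 and rounds 0.3 down to 0 on insert. A sketch of DDL that would avoid it:

        CREATE TABLE order_line (
            order_line_id  INTEGER PRIMARY KEY,
            quantity       INTEGER,
            -- without an explicit (precision, scale), DECIMAL defaults to scale 0
            discount_price DECIMAL(12, 2)
        );

    Comparing the schema DbUnit inserts into against the one Hibernate uses for the direct-insert test would confirm or eliminate this.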

    Read the article

  • Automating an application

    - by dacman
    I've always wondered about the best way to automate use of a GUI in Windows. When I was about 15, I wrote a little application that used some simple Windows API functions to automatically click on certain locations on the screen based on a script. This could be used to automate GUI apps, but surely it's not the best way. So, my questions are: What's the best way to automate use of a GUI in Windows? Are there certain Windows API functions that would be beneficial? If the program were to crash, how could you detect that? Thanks!
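
    For context, a minimal sketch of the classic Win32 approach: target windows by handle and synthesize input, which is sturdier than clicking screen coordinates (UI Automation libraries build on the same ideas). The window title and helper names here are illustrative:

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;   // SendKeys

        public static class GuiDriver
        {
            [DllImport("user32.dll", SetLastError = true)]
            static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

            [DllImport("user32.dll")]
            static extern bool SetForegroundWindow(IntPtr hWnd);

            public static void TypeInto(string windowTitle, string keys)
            {
                // Find the window by title instead of by screen position.
                IntPtr hWnd = FindWindow(null, windowTitle);
                if (hWnd == IntPtr.Zero)
                    throw new InvalidOperationException("Window not found: " + windowTitle);
                SetForegroundWindow(hWnd);
                SendKeys.SendWait(keys);
            }

            // Crash detection: watch the process rather than the window.
            public static bool HasCrashed(Process p)
            {
                return p.HasExited && p.ExitCode != 0;
            }
        }

    Monitoring Process.HasExited (or subscribing to the Exited event) answers the crash-detection question without any screen scraping.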

    Read the article

  • ExpectedException on TestMethod Visual Studio 2010

    - by Joop
    Today I upgraded my solution, with all the underlying projects, from VS2008 to VS2010. Everything went well except for my unit tests. First of all, only the web projects had .NET 4 as their target framework; all the other projects still had .NET 3.5, so I changed them all to .NET 4. Now when I debug my unit tests, the debugger breaks on every exception. In 2008 it wouldn't break; the test would simply fail and report that an exception occurred. Even when I have the ExpectedException attribute defined, it stops debugging on every exception. An example of one of my tests:

        [TestMethod]
        [ExpectedException(typeof(EntityDoesNotExistException))]
        public void ConstructorTest()
        {
            AddressType type = new AddressType(int.MaxValue);
        }

    The EntityDoesNotExistException is a custom exception and inherits from Exception.
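
    Break-on-throw behaviour is typically controlled by the debugger's exception settings (Debug > Exceptions in Visual Studio) rather than by the attribute, so that's worth checking first. Separately, a sketch of an assertion helper that sidesteps ExpectedException altogether (the helper is mine, not an MSTest API):

        using System;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        public static class ExceptionAssert
        {
            // Passes only if action throws TException (or a subclass);
            // any other exception propagates and fails the test normally.
            public static void Throws<TException>(Action action) where TException : Exception
            {
                try
                {
                    action();
                }
                catch (TException)
                {
                    return;
                }
                Assert.Fail("Expected " + typeof(TException).Name + " was not thrown.");
            }
        }

        // Usage:
        // ExceptionAssert.Throws<EntityDoesNotExistException>(
        //     () => new AddressType(int.MaxValue));

    Catching the exception inside the test also keeps the expectation visible at the call site rather than in an attribute.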

    Read the article

  • Get name of currently executing test in JUnit 4

    - by Dave Ray
    In JUnit 3, I could get the name of the currently running test like this:

        public class MyTest extends TestCase {
            public void testSomething() {
                System.out.println("Current test is " + getName());
                ...
            }
        }

    which would print "Current test is testSomething". Is there any out-of-the-box or simple way to do this in JUnit 4? Background: obviously, I don't want to just print the name of the test. I want to load test-specific data that is stored in a resource with the same name as the test. You know, convention over configuration and all that. Thanks!
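
    JUnit 4.7 added a rule for exactly this: TestName. A sketch (the resource-loading line is illustrative):

        import org.junit.Rule;
        import org.junit.Test;
        import org.junit.rules.TestName;

        public class MyTest {
            @Rule
            public TestName name = new TestName();   // populated before each test runs

            @Test
            public void testSomething() {
                System.out.println("Current test is " + name.getMethodName());
                // e.g. load a resource named after the test:
                // getClass().getResource(name.getMethodName() + ".xml");
            }
        }

    On earlier JUnit 4 versions, there is no built-in equivalent, which is why this question kept coming up before 4.7.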

    Read the article

  • Why are my RSpec specs running twice?

    - by James A. Rosen
    I have the following RSpec (1.3.0) task defined in my Rakefile:

        require 'spec/rake/spectask'
        Spec::Rake::SpecTask.new(:spec) do |spec|
          spec.libs << 'lib' << 'spec'
          spec.spec_files = FileList['spec/**/*_spec.rb']
        end

    I have the following in spec/spec_helper.rb:

        require 'rubygems'
        require 'spec'
        require 'spec/autorun'
        require 'rack/test'
        require 'webmock/rspec'

        include Rack::Test::Methods
        include WebMock

        require 'omniauth/core'

    I have a single spec declared in spec/foo/foo_spec.rb:

        require File.dirname(__FILE__) + '/../spec_helper'

        describe Foo do
          describe '#bar' do
            it 'be bar-like' do
              Foo.new.bar.should == 'bar'
            end
          end
        end

    When I run rake spec, the single example runs twice. I can check it by making the example fail, giving me two red "F"s. One thing I thought was that adding spec to the SpecTask's libs was causing them to be double-defined, but removing that doesn't seem to have any effect.
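
    A plausible culprit (an assumption, easy to test by elimination): require 'spec/autorun' registers an at_exit runner on top of the run the rake task already performs, so every example executes twice. A sketch of the spec_helper without it:

        require 'rubygems'
        require 'spec'
        # 'spec/autorun' removed: the rake SpecTask already runs the examples,
        # and autorun would add a second at_exit runner.
        require 'rack/test'
        require 'webmock/rspec'

        include Rack::Test::Methods
        include WebMock

        require 'omniauth/core'

    If the failure still prints two "F"s after this change, the duplication is coming from somewhere else, but this is the cheapest thing to rule out first.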

    Read the article

  • Sensible unit test possible?

    - by nkr1pt
    Could a sensible unit test be written for this code, which extracts a rar archive by delegating to a capable tool on the host system if one exists? I can write a test case based on the fact that my machine runs Linux and the unrar tool is installed, but if another developer who runs Windows checked out the code, the test would fail, although there would be nothing wrong with the extractor code. I need to find a way to write a meaningful test which is not bound to the system or to the unrar tool being installed. How would you tackle this?

        public class Extractor {
            private EventBus eventBus;
            private ExtractCommand[] linuxExtractCommands = new ExtractCommand[]{new LinuxUnrarCommand()};
            private ExtractCommand[] windowsExtractCommands = new ExtractCommand[]{};
            private ExtractCommand[] macExtractCommands = new ExtractCommand[]{};

            @Inject
            public Extractor(EventBus eventBus) {
                this.eventBus = eventBus;
            }

            public boolean extract(DownloadCandidate downloadCandidate) {
                for (ExtractCommand command : getSystemSpecificExtractCommands()) {
                    if (command.extract(downloadCandidate)) {
                        eventBus.fireEvent(this, new ExtractCompletedEvent());
                        return true;
                    }
                }
                eventBus.fireEvent(this, new ExtractFailedEvent());
                return false;
            }

            private ExtractCommand[] getSystemSpecificExtractCommands() {
                String os = System.getProperty("os.name");
                if (Pattern.compile("linux", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
                    return linuxExtractCommands;
                } else if (Pattern.compile("windows", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
                    return windowsExtractCommands;
                } else if (Pattern.compile("mac os x", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
                    return macExtractCommands;
                }
                return null;
            }
        }
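
    One way to decouple the test from the host (a sketch; it assumes ExtractCommand is an interface and that an extra constructor taking the command table is added, which the code above doesn't yet have):

        // A hand-rolled fake command; no unrar binary needed on the build machine.
        public class FakeExtractCommand implements ExtractCommand {
            private final boolean succeed;

            public FakeExtractCommand(boolean succeed) {
                this.succeed = succeed;
            }

            public boolean extract(DownloadCandidate candidate) {
                return succeed;
            }
        }

        // In the test, inject the command table so the os.name lookup and the
        // real LinuxUnrarCommand are out of the picture:
        //
        //   Extractor extractor = new Extractor(eventBus,
        //           new ExtractCommand[] { new FakeExtractCommand(true) });
        //   assertTrue(extractor.extract(downloadCandidate));

    The OS-detection logic can then be tested separately by passing the os.name string in as a parameter instead of reading the system property inside the method.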

    Read the article

  • How to test a struts 2.1.x developer?

    - by Jason Pyeron
    We employ tests to filter out those who can't do the work. The tests are designed to take very little effort for those who can, and too much effort for those who can't. Here is an example for a Java web application developer on an Oracle project:

    We only work with contractors who can use the tools we use. To determine whether you can use the tools, we have devised some very simple tests.

    Instructions: if you are prepared and knowledgeable, this will take you about 2-5 minutes. Suggested knowledge and tools:

    * subversion 1.6, see http://subversion.tigris.org/ or http://cygwin.com/setup.exe
    * java 1.6, see http://java.sun.com/javase/downloads/index.jsp
    * oracle >= 10g, see http://www.oracle.com/technology/software/products/jdev/index.html
    * j2ee server, see http://tomcat.apache.org/download-55.cgi or http://www.jboss.org/jbossas/downloads/

    Steps:

    1. Check out svn://statics32.pdinc.us/home/subversion/guest
    2. Deploy the war file found at trunk/test.war
    3. Browse to the web application you installed from the war file and answer the one SQL question: how many rows are in the table 'testdata' where column 'value' ends with either an 'A' or an 'a'? The login credentials are in trunk/doc/oracle.txt
    4. Make a RESULTS HASH by submitting your answer to the form.
    5. Create a file in tmp/YourUserName.txt and put the RESULTS HASH in it, not the answer.
    6. Check in your file (don't forget to add the file first).
    7. Message me with the revision number of your check-in.

    I'm now looking for ideas on an equivalent test for a Struts 2.1 (with annotations) developer.

    Read the article

  • Is it a good idea to mock/stub in integration tests?

    - by ez
    Say there are multiple requests in an integration test, and some of them are Sphinx calls (a locator, for example). Should we just stub out the entire response of these Sphinx calls, or, since it is an integration test, should we exercise the whole path without stubbing? If we don't stub, how do we keep the test independent in situations where Sphinx fails, there is no internet connection, or a third-party server is unresponsive? Give reasons. Thanks

    Read the article

  • Returning a complex data type from arguments with Rhino Mocks

    - by Joseph
    I'm trying to set up a stub with Rhino Mocks which returns a value based on the argument that is passed in. Example:

        //Arrange
        var car = new Car();
        var provider = MockRepository.GenerateStub<IDataProvider>();
        provider.Stub(x => x.GetWheelsWithSize(Arg<int>.Is.Anything))
            .Return(new List<IWheel>
            {
                new Wheel { Size = ?, Make = Make.Michelin },
                new Wheel { Size = ?, Make = Make.Firestone }
            });
        car.Provider = provider;

        //Act
        car.ReplaceTires();

        //Assert that the right tire size was used when replacing the tires

    The problem is that I want Size to be whatever was passed into the method, because I'm actually asserting later that the wheels are the right size. This is not to prove that the data provider works (obviously, since I stubbed it), but rather to prove that the correct size was passed in. How can I do this?
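
    Rhino Mocks lets a stub compute its return value from the actual arguments via WhenCalled. A sketch using the types from the question (the Return value is a placeholder that WhenCalled overwrites):

        provider.Stub(x => x.GetWheelsWithSize(Arg<int>.Is.Anything))
            .Return(null)                      // placeholder; replaced below
            .WhenCalled(invocation =>
            {
                var size = (int)invocation.Arguments[0];
                invocation.ReturnValue = new List<IWheel>
                {
                    new Wheel { Size = size, Make = Make.Michelin },
                    new Wheel { Size = size, Make = Make.Firestone }
                };
            });

    The later assertion on Wheel.Size then directly verifies which size the code under test passed in.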

    Read the article

  • How can I test ActiveRecord::RecordNotFound in my rails app?

    - by fursie
    Hi, I have this code in my controller and want to test this line with a functional test:

        raise ActiveRecord::RecordNotFound if @post.nil?

    Which assert method should I use? I use the built-in Rails 2.3.5 test framework. I tried it with this code:

        test "should return 404 if page doesn't exist." do
          get :show, :url => ["nothing", "here"]
          assert_response :missing
        end

    but it doesn't work for me. I got this test output:

        test_should_return_404_if_page_doesn't_exist.(PageControllerTest):
        ActiveRecord::RecordNotFound: ActiveRecord::RecordNotFound
            app/controllers/page_controller.rb:7:in `show'
            /test/functional/page_controller_test.rb:21:in `test_should_return_404_if_page_doesn't_exist.'
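
    In Rails 2.3 functional tests, the exception propagates out of the controller rather than being rendered as a 404 page (that conversion happens in the rescue layer when the app runs outside test mode). So one option is to assert the raise itself:

        test "should raise RecordNotFound if page doesn't exist" do
          assert_raise(ActiveRecord::RecordNotFound) do
            get :show, :url => ["nothing", "here"]
          end
        end

    If you specifically want to assert the 404 response instead, the controller would need its own rescue_from handler that renders the 404, at which point assert_response :missing applies again.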

    Read the article

  • Difficulty thinking of properties for FsCheck

    - by Benjol
    I've managed to get xUnit working on my little sample assembly. Now I want to see if I can grok FsCheck too. My problem is that I'm stumped when it comes to defining test properties for my functions. Maybe I just don't have a good sample set of functions, but what would be good test properties for these, for example?

        // transforms [1;2;3;4] into [(1,2);(3,4)]
        pairs : 'a list -> ('a * 'a) list

        // splits a list into a list of lists when the predicate returns
        // true for adjacent elements
        splitOn : ('a -> 'a -> bool) -> 'a list -> 'a list list

        // returns true if snd is bigger
        sndBigger : ('a * 'a) -> bool (requires comparison)
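
    Some candidate round-trip and invariant properties, as a sketch (it fixes the element type to int and assumes standard FsCheck usage; pairs, splitOn, and sndBigger are the functions above):

        open FsCheck

        // pairs: flattening the pairs gives back the original list (even lengths only)
        let prop_pairsRoundTrip (xs: int list) =
            List.length xs % 2 = 0 ==>
                lazy ((xs |> pairs |> List.collect (fun (a, b) -> [a; b])) = xs)

        // splitOn: concatenating the pieces loses no elements and keeps their order
        let prop_splitOnPreservesElements (xs: int list) =
            (xs |> splitOn (fun a b -> a < b) |> List.concat) = xs

        // sndBigger: agrees with the obvious spelled-out definition
        let prop_sndBigger (a: int) (b: int) =
            sndBigger (a, b) = (b > a)

        Check.Quick prop_pairsRoundTrip
        Check.Quick prop_splitOnPreservesElements
        Check.Quick prop_sndBigger

    The general recipe: look for round trips (flatten after pairing), conservation laws (concat after splitting), and a slow-but-obvious reference implementation to compare against.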

    Read the article

  • 3 matchers expected, 4 recorded.

    - by user564159
    I get this exception at mock recording time. I searched for a solution in this forum and made sure that I did not mix up any other parameter. The mock expectation below is giving the error:

        EasyMock.expect(slotManager.addSlotPageletBinding(EasyMock.isA(String.class),
                EasyMock.isA(String.class), EasyMock.isA(helloWorld.class))).andReturn(true);

    Before this statement I have another mock expectation on the same method with TWO parameters (an overloaded method). Below is that mock:

        EasyMock.expect(slotManager.addSlotPageletBinding(EasyMock.isA(String.class),
                EasyMock.isA(String.class))).andReturn(true).anyTimes();

    Could anyone guide me on this? Thanks.

        java.lang.IllegalStateException: 3 matchers expected, 4 recorded.
            at org.easymock.internal.ExpectedInvocation.createMissingMatchers(ExpectedInvocation.java:56)
            at org.easymock.internal.ExpectedInvocation.<init>(ExpectedInvocation.java:48)
            at org.easymock.internal.ExpectedInvocation.<init>(ExpectedInvocation.java:40)
            at org.easymock.internal.RecordState.invoke(RecordState.java:76)
            at org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:38)
            at org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:72)
            at org.easymock.classextension.internal.ClassProxyFactory$1.intercept(ClassProxyFactory.java:79)
            at com.amazon.inca.application.SlotManager$$EnhancerByCGLIB$$3bf5ac02.addSlotPageletBinding()
            at com.amazon.iris3.apps.Iris3YourAccountApplicationTest.testBuildIncaViewConfiguration(Iris3YourAccountApplicationTest.java:107)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at junit.framework.TestCase.runTest(TestCase.java:168)
            at junit.framework.TestCase.runBare(TestCase.java:134)
            at junit.framework.TestResult$1.protect(TestResult.java:110)
            at junit.framework.TestResult.runProtected(TestResult.java:128)
            at junit.framework.TestResult.run(TestResult.java:113)
            at junit.framework.TestCase.run(TestCase.java:124)
            at junit.framework.TestSuite.runTest(TestSuite.java:232)
            at junit.framework.TestSuite.run(TestSuite.java:227)
            at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
            at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46)
            at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)

    Read the article

  • Unit Test json output in Zend Framework

    - by lyle
    The Zend tutorial lists many assertions to check the output generated by a request (http://framework.zend.com/manual/en/zend.test.phpunit.html), but they all seem to assume that the output is HTML. I need to test JSON output instead. Are there any assertions helpful for checking JSON, or is there at least a generic way to make assertions against the output? Anything that doesn't rely on the request outputting HTML?
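
    Nothing in Zend_Test_PHPUnit_ControllerTestCase forces HTML: you can decode the raw response body yourself and assert on the resulting array. A sketch (the URL and keys are made up):

        public function testReturnsExpectedJson()
        {
            $this->dispatch('/api/items');
            $this->assertResponseCode(200);

            // Decode the raw body and assert on plain PHP data instead of the DOM.
            $data = Zend_Json::decode($this->getResponse()->getBody());
            $this->assertEquals('widget', $data['items'][0]['name']);
        }

    From there, any ordinary PHPUnit assertion applies; the HTML-specific query assertions are simply not needed.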

    Read the article

  • What is the correct way to unit test areas around exceptions

    - by Codek
    Hi, looking at the code coverage of our unit tests, we're quite high. But the last few percent are tricky, because a lot of the uncovered code catches things like database exceptions, which in normal circumstances just don't happen. For example, the code prevents fields from being too long, so the only possible database exceptions are if the DB is broken/down, or if the schema is changed under our feet. So is the only way to cover these to mock the objects so that the exception can be thrown? That seems a little bit pointless. Perhaps it's better to just accept not getting 100% code coverage? Thanks, Dan
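
    Mocking is indeed the standard way to reach those catch blocks, and it needn't be elaborate; the test's real value is pinning down how the code degrades when the DB misbehaves. A sketch using Moq (the repository and service types are hypothetical):

        using System;
        using Moq;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class OrderServiceTests
        {
            [TestMethod]
            public void SavingWhenDatabaseIsDownReportsFailure()
            {
                var repo = new Mock<IOrderRepository>();
                repo.Setup(r => r.Save(It.IsAny<Order>()))
                    .Throws(new InvalidOperationException("connection lost"));

                var service = new OrderService(repo.Object);
                var result = service.PlaceOrder(new Order());

                // The catch block is now covered: the service should fail gracefully
                // rather than let the exception escape.
                Assert.IsFalse(result.Succeeded);
            }
        }

    If a catch block does nothing observable worth asserting, that is itself a signal; it may be dead weight, and accepting less than 100% coverage there is reasonable.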

    Read the article

  • Getting Bad file descriptor when running Tornado AsyncHTTPTestCase

    - by Will
    When running a test using the Tornado AsyncHTTPTestCase, I'm getting a stack trace that isn't related to the test. The test is passing, so this is probably happening during test cleanup? I'm using Python 2.7.2 and Tornado 2.2. The test code is:

        class AllServersHandlerTest(AsyncHTTPTestCase):
            endpoint = AllServersHandler.endpoint  # '/rest/test/'

            def test_server_status_with_advertiser(self):
                on_new_host(None, '127.0.0.1')
                response = self.fetch(self.endpoint, method='GET')
                result = json.loads(response.body, 'utf8').get('data')
                self.assertEquals(['127.0.0.1'], result)

    The test passes OK, but I get the following stack trace from the Tornado server:

        OSError: [Errno 9] Bad file descriptor
        INFO:root:200 POST /rest/serverStatuses (127.0.0.1) 0.00ms
        DEBUG:root:error closing fd 688
        Traceback (most recent call last):
          File "C:\Python27\Lib\site-packages\tornado-2.2-py2.7.egg\tornado\ioloop.py", line 173, in close
            os.close(fd)
        OSError: [Errno 9] Bad file descriptor

    Any ideas how to cleanly shut down the test case?
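
    For reference, AsyncHTTPTestCase needs a get_app override to know what application to serve; the snippet above omits it, so presumably it exists elsewhere in the real test class. A sketch of what it might look like here:

        from tornado.web import Application
        from tornado.testing import AsyncHTTPTestCase

        class AllServersHandlerTest(AsyncHTTPTestCase):
            def get_app(self):
                # Route the endpoint under test to its handler; AllServersHandler
                # is assumed to be importable from the application code.
                return Application([(AllServersHandler.endpoint, AllServersHandler)])

    The traceback itself comes from the IOLoop double-closing a file descriptor during teardown; since it fires after the assertions pass, it points at cleanup order rather than at the test body.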

    Read the article
