Search Results

Search found 14797 results on 592 pages for 'gui testing'.


  • Why are my RSpec specs running twice?

    - by James A. Rosen
    I have the following RSpec (1.3.0) task defined in my Rakefile:

        require 'spec/rake/spectask'

        Spec::Rake::SpecTask.new(:spec) do |spec|
          spec.libs << 'lib' << 'spec'
          spec.spec_files = FileList['spec/**/*_spec.rb']
        end

    I have the following in spec/spec_helper.rb:

        require 'rubygems'
        require 'spec'
        require 'spec/autorun'
        require 'rack/test'
        require 'webmock/rspec'

        include Rack::Test::Methods
        include WebMock

        require 'omniauth/core'

    I have a single spec declared in spec/foo/foo_spec.rb:

        require File.dirname(__FILE__) + '/../spec_helper'

        describe Foo do
          describe '#bar' do
            it 'be bar-like' do
              Foo.new.bar.should == 'bar'
            end
          end
        end

    When I run rake spec, the single example runs twice. I can check it by making the example fail, giving me two red "F"s. One thing I thought was that adding spec to the SpecTask's libs was causing them to be double-defined, but removing that doesn't seem to have any effect.
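
    A plausible culprit, worth checking first: spec/autorun installs an at_exit hook that launches the suite again after the runner started by Spec::Rake::SpecTask has already finished, which produces exactly one duplicate run. A sketch of spec_helper.rb with the hook removed (everything else unchanged):

        require 'rubygems'
        require 'spec'          # note: no require 'spec/autorun' here
        require 'rack/test'
        require 'webmock/rspec'

        include Rack::Test::Methods
        include WebMock

        require 'omniauth/core'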


  • JUnit and EasyMock: understanding clarifications

    - by harigm
    Until now I have been using JUnit. I recently came across EasyMock, and my understanding is that both serve the same purpose. Is that correct? What advantages does EasyMock have over JUnit? Which one is easier to configure? Does EasyMock have any limitations? Please help me understand.
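
    For what it's worth, the two are complementary rather than competing: JUnit runs tests and provides assertions, while EasyMock builds fake collaborators to use inside those JUnit tests. A minimal sketch (the RateSource interface is invented for illustration):

        import static org.easymock.EasyMock.*;
        import static org.junit.Assert.assertEquals;

        import org.junit.Test;

        public class PriceServiceTest {

            // Hypothetical collaborator, only here to have something to mock.
            interface RateSource {
                double rateFor(String currency);
            }

            @Test
            public void convertsUsingMockedRate() {
                RateSource rates = createMock(RateSource.class); // EasyMock builds the fake
                expect(rates.rateFor("EUR")).andReturn(1.25);    // record the expected call
                replay(rates);                                   // switch to replay mode

                assertEquals(125.0, 100 * rates.rateFor("EUR"), 0.001); // JUnit assertion

                verify(rates); // EasyMock checks the expected call really happened
            }
        }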


  • DSL to generate test data

    - by queen3
    There are several ways to generate data for tests (not only unit tests), for example Object Mother, builders, etc. Another useful approach is to write test data as plain text:

        product: Main; prices: 145, 255; Expire: 10-Apr-2011; qty: 2; includes: Sub
        product: Sub; prices: 145, 255; Expire: 10-Apr-2011; qty: 2

    and then parse it into C# objects. This is easy to use in unit tests (because deep inner collections can be written on a single line), and it is even more convenient in a FitNesse-like system (because this DSL naturally fits into a wiki), and so on. So I use this approach, but writing the parser each time is tedious. I'm not a big expert in DSLs/language parsers, but I think they could help here. What would be the right one to use? I have only heard about: DSLs (I mean, any DSL), Boo (which I think can host DSLs), and ANTLR, but I don't even know which one to pick or where to start. So the question: is it reasonable to use some kind of DSL to generate test data? What would you suggest? Are there any existing examples?
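
    For a flat "key: value; key: value" format like the one above, a full parser toolkit such as ANTLR may be overkill; a small reusable split-based parser can feed any builder or Object Mother. A minimal sketch (names are illustrative, not a proposed library):

        using System.Collections.Generic;
        using System.Linq;

        static class TestDataParser
        {
            // Turns "product: Main; qty: 2" into {"product" -> "Main", "qty" -> "2"}.
            public static IDictionary<string, string> ParseLine(string line)
            {
                return line.Split(';')
                           .Select(pair => pair.Split(new[] { ':' }, 2))
                           .ToDictionary(
                               parts => parts[0].Trim(),
                               parts => parts.Length > 1 ? parts[1].Trim() : "");
            }
        }

    A grammar tool or a Boo-hosted DSL starts to pay off once the format becomes nested or typed; until then, the mapping from such a dictionary onto your objects is the only per-type code you need.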


  • Comparing two objects that are the same in MbUnit

    - by Coppermill
    From MbUnit I am trying to check whether the values of two objects are the same, using:

        Assert.AreSame(RawDataRow, result);

    However, I am getting the following failure:

        Expected Value & Actual Value : {RawDataRow: CentreID = "CentreID1",
        CentreLearnerRef = "CentreLearnerRef1", ContactID = 1,
        DOB = 2010-05-05T00:00:00.0000000, Email = "Email1",
        ErrorCodes = "ErrorCodes1", ErrorDescription = "ErrorDescription1",
        FirstName = "FirstName1"}
        Remark : Both values look the same when formatted but they are distinct instances.

    I don't want to have to go through each property; can I do this from MbUnit?
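
    The remark in the failure is the clue: Assert.AreSame tests reference identity (literally the same instance), so two separately constructed objects will always fail it, and Assert.AreEqual falls back to reference equality too unless RawDataRow overrides Equals. If overriding Equals is unwanted, a small reflection helper avoids writing one assertion per property. A sketch (not an MbUnit built-in; assumes public readable properties):

        using MbUnit.Framework;

        static class AssertEx
        {
            // Compares every public property of two instances of T.
            public static void PropertiesAreEqual<T>(T expected, T actual)
            {
                foreach (var prop in typeof(T).GetProperties())
                {
                    Assert.AreEqual(prop.GetValue(expected, null),
                                    prop.GetValue(actual, null),
                                    "Property " + prop.Name + " differs.");
                }
            }
        }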


  • Action works, but test doesn't (Shoulda)

    - by trobrock
    I am trying to test my update action in Rails with this:

        context "on PUT to :update" do
          setup do
            @countdown = Factory(:countdown)
            @new_countdown = Factory.stub(:countdown)
            put :update, :id => @countdown.id, :name => @new_countdown.name, :end => @new_countdown.end
          end

          should_respond_with :redirect
          should_redirect_to("the countdowns view") { countdown_url(assigns(:countdown)) }
          should_assign_to :countdown
          should_set_the_flash_to /updated/i

          should "save :countdown with new attributes" do
            @countdown = Countdown.find(@countdown.id)
            assert_equal @new_countdown.name, @countdown.name
            assert_equal 0, (@new_countdown.end - @countdown.end).to_i
          end
        end

    When I actually go through the updating process using the scaffold that was built, it updates the record fine, but the tests give me this error:

        1) Failure:
        test: on PUT to :update should save :countdown with new attributes. (CountdownsControllerTest)
        [/test/functional/countdowns_controller_test.rb:86:in `__bind_1276353837_121269'
         /Library/Ruby/Gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:351:in `call'
         /Library/Ruby/Gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:351:in `test: on PUT to :update should save :countdown with new attributes. ']:
        <"Countdown 8"> expected but was <"Countdown 7">.
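
    One likely cause, assuming the scaffold controller calls @countdown.update_attributes(params[:countdown]): the test sends :name and :end as top-level parameters, while the controller expects them nested under :countdown, so the update silently changes nothing. A sketch of the corrected PUT:

        put :update, :id => @countdown.id,
            :countdown => { :name => @new_countdown.name, :end => @new_countdown.end }

    The browser form posts exactly this nested structure, which would explain why the action works interactively but not in the test.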


  • Setting up a web developer lab for learning purposes

    - by Saleh Al-Abbas
    I'm not a developer by profession, so I'm not exposed to the real-world technical problems that professional developers face. I have read and heard about web farms, integration between different systems, load balancing, etc. I was wondering whether an individual developer can create an environment that simulates real-world situations with a minimal number of machines, covering things like:

        - web farms & caching
        - simulating many users accessing your website (stress tests?)
        - performance
        - load balancing
        - anything else you think I should consider

    By the way, I have a server machine and one PC, and I don't mind investing in tools and software. PS: I'm using Microsoft technologies for development, but I hope this is not a limiting factor. Thanks


  • Website stress test in Python - Django

    - by RadiantHex
    Hi folks, I'm trying to build a small stress-test script to test how quickly a set of requests gets done. I need to measure the speed of 100 requests. The problem is that I don't know how to implement it, as it would require parallel URL requests. Any ideas?
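
    One simple approach is a thread per request, joined around a timer; the standard library is enough. A minimal sketch (Python 2 to match the era; the URL is a placeholder for your Django endpoint):

        import threading
        import time
        import urllib2

        URL = "http://localhost:8000/"   # hypothetical endpoint under test
        NUM_REQUESTS = 100

        def fetch():
            try:
                urllib2.urlopen(URL).read()
            except urllib2.URLError:
                pass  # a real test should count failures separately

        threads = [threading.Thread(target=fetch) for _ in range(NUM_REQUESTS)]
        start = time.time()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print "%d parallel requests took %.2f seconds" % (NUM_REQUESTS, time.time() - start)

    For heavier loads, dedicated tools (ab, siege) or a multiprocessing variant avoid the GIL limiting parallelism, though for I/O-bound requests threads are usually adequate.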


  • What is the correct way to unit test areas around exceptions

    - by Codek
    Hi, looking at the code coverage of our unit tests, we're quite high. But the last few percent are tricky, because a lot of them involve catching things like database exceptions - which in normal circumstances just don't happen. For example, the code prevents fields being too long, etc., so the only possible database exceptions occur if the DB is broken/down, or if the schema is changed under our feet. So is the only way to mock the objects such that the exception can be thrown? That seems a little bit pointless. Perhaps it's better to just accept not getting 100% code coverage? Thanks, Dan
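
    Mocking the failure is less pointless than it sounds: the catch blocks contain real behaviour (logging, rollback, user-facing errors) that otherwise ships untested. A sketch using Moq for illustration (the interface and names are hypothetical):

        using System;
        using Moq;

        public interface IOrderRepository
        {
            void Save(string order);
        }

        public class OrderService
        {
            private readonly IOrderRepository _repo;
            public OrderService(IOrderRepository repo) { _repo = repo; }

            public bool TrySave(string order)
            {
                try { _repo.Save(order); return true; }
                catch (InvalidOperationException) { return false; } // the rarely-taken path
            }
        }

        // In the test: force the catch path to execute.
        // var repo = new Mock<IOrderRepository>();
        // repo.Setup(r => r.Save(It.IsAny<string>()))
        //     .Throws(new InvalidOperationException("simulated DB failure"));
        // Assert.IsFalse(new OrderService(repo.Object).TrySave("order-1"));

    If a catch block genuinely does nothing observable, that is a reasonable place to accept less than 100% coverage.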


  • Django formset unit test

    - by Py
    I can't run a unit test with a formset. Here is my test:

        class NewClientTestCase(TestCase):
            def setUp(self):
                self.c = Client()

            def test_0_create_individual_with_same_adress(self):
                post_data = {
                    'ctype': User.CONTACT_INDIVIDUAL,
                    'username': 'dupond.f',
                    'email': '[email protected]',
                    'password': 'pwd',
                    'password2': 'pwd',
                    'civility': User.CIVILITY_MISTER,
                    'first_name': 'François',
                    'last_name': 'DUPOND',
                    'phone': '+33 1 34 12 52 30',
                    'gsm': '+33 6 34 12 52 30',
                    'fax': '+33 1 34 12 52 30',
                    'form-0-address1': '33 avenue Gambetta',
                    'form-0-address2': 'apt 50',
                    'form-0-zip_code': '75020',
                    'form-0-city': 'Paris',
                    'form-0-country': 'FRA',
                    'same_for_billing': True,
                }
                response = self.c.post(reverse('client:full_account'), post_data, follow=True)
                self.assertRedirects(response, '%s?created=1' % reverse('client:dashboard'))

    and I have this error:

        ValidationError: [u'ManagementForm data is missing or has been tampered with']

    My view:

        def full_account(request, url_redirect=''):
            from forms import NewUserFullForm, AddressForm, BaseArticleFormSet
            fields_required = []
            fields_notrequired = []
            AddressFormSet = formset_factory(AddressForm, extra=2, formset=BaseArticleFormSet)
            if request.method == 'POST':
                form = NewUserFullForm(request.POST)
                objforms = AddressFormSet(request.POST)
                if objforms.is_valid() and form.is_valid():
                    user = form.save()
                    address = objforms.forms[0].save()
                    if url_redirect == '':
                        url_redirect = '%s?created=1' % reverse('client:dashboard')
                    logon(request, form.instance)
                    return HttpResponseRedirect(url_redirect)
            else:
                form = NewUserFullForm()
                objforms = AddressFormSet()
            return direct_to_template(request, 'clients/full_account.html', {
                'form': form,
                'formset': objforms,
                'tld_fr': False,
            })

    and my forms file:

        class BaseArticleFormSet(BaseFormSet):
            def clean(self):
                msg_err = _('Ce champ est obligatoire.')
                non_errors = True
                if 'same_for_billing' in self.data and self.data['same_for_billing'] == 'on':
                    same_for_billing = True
                else:
                    same_for_billing = False
                for i in [0, 1]:
                    form = self.forms[i]
                    for field in form.fields:
                        name_field = 'form-%d-%s' % (i, field)
                        value_field = self.data[name_field].strip()
                        if i == 0 and self.forms[0].fields[field].required and value_field == '':
                            form.errors[field] = msg_err
                            non_errors = False
                        elif i == 1 and not same_for_billing and self.forms[1].fields[field].required and value_field == '':
                            form.errors[field] = msg_err
                            non_errors = False
                return non_errors

        class AddressForm(forms.ModelForm):
            class Meta:
                model = Address

            address1 = forms.CharField()
            address2 = forms.CharField(required=False)
            zip_code = forms.CharField()
            city = forms.CharField()
            country = forms.ChoiceField(choices=CountryField.COUNTRIES, initial='FRA')
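
    The error message points at the likely cause: every formset POST must include the ManagementForm fields, and the test's post_data has none. A browser gets them from {{ formset.management_form }} in the template, but a hand-built dictionary must add them itself. A sketch of the missing keys, with values matching the extra=2 formset above (form-MAX_NUM_FORMS is required on newer Django versions; check yours):

        post_data.update({
            'form-TOTAL_FORMS': u'2',    # number of forms in the POST
            'form-INITIAL_FORMS': u'0',  # none of them pre-existing
            'form-MAX_NUM_FORMS': u'',   # no limit
        })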


  • Intelligent serial port mocks with Moq

    - by Padu Merloti
    I have to write a lot of code that deals with serial ports. Usually there will be a device connected at the other end of the wire, and I usually create my own mocks to simulate their behavior. I'm starting to look at Moq to help with my unit tests. It's pretty simple to use when you just need a stub, but I want to know whether it is possible - and if so, how - to create a mock for a hardware device that responds differently according to what I want to test. A simple example: one of the devices I interface with receives a command (move to position x), gives back an ACK message, and goes to a "moving" state until it reaches the ordered position. I want to create a test where I send the move command and then keep querying the state until it reaches the final position. I want to create two versions of the mock for two different tests: one where I expect the device to reach the final position successfully, and one where it will fail. Too much to ask?
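
    Not too much to ask: Moq evaluates a Returns(() => ...) lambda on every call, so a mock can carry state between invocations. A sketch (the IPositioner interface is invented; adapt it to your serial abstraction):

        using Moq;

        public interface IPositioner
        {
            string Send(string command);  // device response, e.g. "ACK"
            string QueryState();
        }

        public static class DeviceMocks
        {
            // One factory serves both tests: pass reachesTarget = false
            // to simulate the failure case.
            public static Mock<IPositioner> MovingDevice(bool reachesTarget)
            {
                var device = new Mock<IPositioner>();
                int polls = 0;

                device.Setup(d => d.Send(It.IsAny<string>())).Returns("ACK");
                device.Setup(d => d.QueryState())
                      .Returns(() => ++polls < 3
                          ? "MOVING"
                          : (reachesTarget ? "AT_POSITION" : "FAULT"));
                return device;
            }
        }

    Newer Moq versions also offer SetupSequence for scripting a fixed series of return values, which reads even closer to a device transcript.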


  • Rhino Mocks Partial Mock

    - by dotnet crazy kid
    I am trying to test the logic in some existing classes. It is not possible to re-factor the classes at present, as they are very complex and in production. What I want to do is create a mock object and test a method that internally calls another method that is very hard to mock. So I want to just set a behaviour for the secondary method call. But when I set up the behaviour for the method, the code of the method is invoked and fails. Am I missing something, or is this just not possible to test without re-factoring the class? I have tried all the different mock types (Strict, Stub, Dynamic, Partial, etc.) but they all end up calling the method when I try to set up the behaviour.

        using System;
        using MbUnit.Framework;
        using Rhino.Mocks;

        namespace MMBusinessObjects.Tests
        {
            [TestFixture]
            public class PartialMockExampleFixture
            {
                [Test]
                public void Simple_Partial_Mock_Test()
                {
                    const string param = "anything";

                    // set up mocks
                    MockRepository mocks = new MockRepository();
                    var mockTestClass = mocks.StrictMock<TestClass>();

                    // record behaviour *** actually calls into the real method stub ***
                    Expect.Call(mockTestClass.MethodToMock(param)).Return(true);

                    // never get to here
                    mocks.ReplayAll();

                    // this is what I want to test
                    Assert.IsTrue(mockTestClass.MethodIWantToTest(param));
                }

                public class TestClass
                {
                    public bool MethodToMock(string param)
                    {
                        // some logic that is very hard to mock
                        throw new NotImplementedException();
                    }

                    public bool MethodIWantToTest(string param)
                    {
                        // this method calls MethodToMock
                        if (MethodToMock(param))
                        {
                            // some logic I want to test
                        }
                        return true;
                    }
                }
            }
        }
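
    A likely explanation, worth checking: Rhino Mocks intercepts class members by generating a subclass proxy, so it can only override virtual methods. A non-virtual MethodToMock is executed for real at the moment Expect.Call records it, which matches the behaviour you see with every mock type. A sketch of the minimal change (no other refactoring required):

        public class TestClass
        {
            // virtual lets the generated proxy override and intercept it
            public virtual bool MethodToMock(string param)
            {
                throw new NotImplementedException();
            }

            public virtual bool MethodIWantToTest(string param)
            {
                if (MethodToMock(param))
                {
                    // some logic I want to test
                }
                return true;
            }
        }

    With that in place, mocks.PartialMock<TestClass>() is the natural fit: it runs MethodIWantToTest for real while honouring the expectation you set on MethodToMock.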


  • Perl unit test - start a tcp server & continue

    - by John
    I am trying to write a unit test for a client-server application. To test the client, in my unit test I want to first start my TCP server (which itself is another Perl file). I tried to start the TCP server by forking:

        if (! fork()) {
            system("$^X server.pl") == 0
                or die "couldn't start server";
        }

    So when I call "make test" after "perl Makefile.PL", this test starts and I can see the server starting, but after that the unit test just hangs there. So I guess I need to start this server in the background, and I tried the "&" at the end to force it to start in the background, but I still couldn't succeed. What am I doing wrong? Thanks.
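
    A sketch of one common pattern (hedged: it assumes server.pl runs until killed and needs a moment to bind its port): exec in the child instead of system, so no extra shell sits in between, and have the parent remember the pid so the server is shut down when the test exits - otherwise the harness sits waiting on the still-running child.

        my $pid = fork();
        die "fork failed: $!" unless defined $pid;

        if ($pid == 0) {
            # child: replace ourselves with the server process
            exec($^X, 'server.pl') or die "couldn't start server: $!";
        }

        sleep 1;    # crude; a loop that retries connecting is more robust

        # ... run the client tests against the server here ...

        END { kill 'TERM', $pid if $pid; }   # parent cleans up the server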


  • How to test onLowMemory conditions?

    - by Samuh
    I have put some instructions in the onLowMemory() callback and want to test them. Is there a "direct" way to test the onLowMemory function of the Application subclass? Or will I have to overload the phone by starting many apps and doing memory-intensive tasks? Thanks.
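
    Since onLowMemory() is an ordinary callback, one pragmatic option is to invoke it directly from an instrumentation test and assert on its effects, rather than trying to force the system into a genuine low-memory state. A sketch (MyApplication stands in for your Application subclass):

        import android.test.ApplicationTestCase;

        public class LowMemoryTest extends ApplicationTestCase<MyApplication> {

            public LowMemoryTest() {
                super(MyApplication.class);
            }

            public void testOnLowMemoryReleasesCaches() {
                createApplication();              // instantiates MyApplication
                getApplication().onLowMemory();   // trigger the callback directly
                // assert here that caches were dropped, listeners notified, etc.
            }
        }

    This tests your handler's logic, not the OS's decision to call it; the start-lots-of-apps approach remains the only way to see the real trigger fire.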


  • Creating a context in a custom shoulda macro does not work

    - by Honza
    I have a custom should macro in my test_helper.rb which looks like this:

        def self.should_require_login(actions = [:index])
          if (actions.is_a? Symbol)
            actions = [actions]
          end

          context "without user" do
            actions.each do |action|
              should "redirect #{action.to_s} away" do
                get action
                assert_redirected_to login_path
              end
            end
          end

          if block_given?
            context "active user logged in" do
              setup do
                @user = Factory.create(:user)
                @user.register!
                @user.activate!
                login_as(@user)
              end
              yield
            end
          end
        end

    I would like to use it like this:

        should_require_login(:protected_action) do
          should "do something" do
            ...
          end
        end

    And I am expecting the "do something" test to run in the "active user logged in" context, but the test executes in the top context, like the "active user logged in" context never existed, and I fail to see the reason why.
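
    A hedged explanation of what may be happening: the block handed to should_require_login is a closure over the test class, so when yield runs it, its should calls still define tests at the class's top level; shoulda only treats code as part of a context when it evaluates the block itself (it re-binds context blocks via merge_block in shoulda 2.10). A sketch of a fix that relies on those internals, so verify it against your shoulda version:

        def self.should_require_login(actions = [:index], &block)
          actions = [actions] if actions.is_a?(Symbol)

          context "without user" do
            actions.each do |action|
              should "redirect #{action} away" do
                get action
                assert_redirected_to login_path
              end
            end
          end

          if block
            context "active user logged in" do
              setup do
                @user = Factory.create(:user)
                @user.register!
                @user.activate!
                login_as(@user)
              end
              merge_block(&block)  # evaluate the caller's block inside this context
            end
          end
        end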


  • What's your release process for your commercial application?

    - by dr. evil
    If you are developing a commercial desktop application, what's your release process? A sample process:

        1. Develop: patch bugs, add features, etc.
        2. Feature freeze (do not fix or add anything unless it's absolutely required)
        3. Test
        4. If everything is OK, release; if not, fix, test again, and then release

    I think the most crucial question is: what's your approach to the "feature freeze -> test -> release" cycle? Or do you test so frequently that you don't need such a cycle and your software is always ready for public release?


  • How to run White + SL4 UATs through TeamCity?

    - by Duncan Bayne
    After experiencing a series of unpleasant issues with TFS, including source code corruption and project management inflexibility, we (meaning the project team of which I'm a part) have decided to move from TFS 2010 to TeamCity + SVN + V1. I've managed to get our MSTest component and unit tests running as part of every build. However, our UATs are failing, and I was hoping for some advice from the TeamCity community as to best practices w.r.t. running web servers and interacting with the desktop. Each of our UAT fixtures starts a web server to host the site, like this:

        public static void StartWebServer()
        {
            var pathToSite = @"C:\projects\myproject\FrontEnd\MyProject.FrontEnd.Web";
            var webServer = new Process
            {
                StartInfo = new ProcessStartInfo
                {
                    Arguments = string.Format("/port:9150 /path:\"{0}\"", pathToSite),
                    FileName = @"C:\Program Files (x86)\Common Files\microsoft shared\DevServer\10.0\WebDev.WebServer40.EXE"
                }
            };
            webServer.Start();
        }

    Needless to say, this doesn't work when running through TeamCity, as the pathToSite value is different each time. I'm hoping there is a way of determining the path into which the code is checked out prior to building? That would allow me to point the web server at the right place. The other issue is that our UATs use White to drive the Silverlight UI through an instance of Internet Explorer:

        _browserWindow = InternetExplorer.Launch("http://localhost:9150/index.html#/Home", "Home - Windows Internet Explorer");
        _document = _browserWindow.SilverlightDocument;

    I've ensured that the TeamCity service is granted the ability to interact with the desktop, and I've set the build agent machine up to log in automatically (an open session is a pre-requisite for White to work properly). Is that all I need to do, or are there additional steps required?
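
    On the path question, two approaches that avoid hard-coding (both hedged, since the right one depends on your setup): TeamCity exposes the checkout directory as the build parameter teamcity.build.checkoutDir, which the build configuration can hand to the tests as an environment variable; alternatively, derive the site folder from the test assembly's own location, which works locally and on the agent without TeamCity-specific wiring. A sketch of the latter (the "..\..\.." depth and folder names are assumptions about your tree):

        using System;
        using System.IO;

        public static class SitePath
        {
            // e.g. from <checkout>\Tests\bin\Debug up to <checkout>,
            // then down into the web project folder.
            public static string Get()
            {
                var baseDir = AppDomain.CurrentDomain.BaseDirectory;
                return Path.GetFullPath(Path.Combine(
                    baseDir, @"..\..\..\FrontEnd\MyProject.FrontEnd.Web"));
            }
        }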


  • Practical refactoring using unit tests

    - by awhite
    Having just read the first four chapters of Refactoring: Improving the Design of Existing Code, I embarked on my first refactoring and almost immediately hit a roadblock. It stems from the requirement that before you begin refactoring, you should put unit tests around the legacy code. That allows you to be sure your refactoring didn't change what the original code did (only how it did it). So my first question is this: how do I unit-test a method in legacy code? How can I put a unit test around a 500-line (if I'm lucky) method that doesn't do just one task? It seems to me that I would have to refactor my legacy code just to make it unit-testable. Does anyone have any experience refactoring using unit tests? And, if so, do you have any practical examples you can share with me? My second question is somewhat hard to explain. Here's an example: I want to refactor a legacy method that populates an object from a database record. Wouldn't I have to write a unit test that compares an object retrieved using the old method with an object retrieved using my refactored method? Otherwise, how would I know that my refactored method produces the same results as the old method? If that is true, then how long do I leave the old deprecated method in the source code? Do I just whack it after I test a few different records? Or do I need to keep it around for a while in case I encounter a bug in my refactored code? Lastly, since a couple of people have asked: the legacy code was originally written in VB6 and then ported to VB.NET with minimal architecture changes.
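
    On the second question, the comparison you describe is usually called a characterization (or "golden master") test: run the old and new paths over the same representative records and assert the results match. A sketch in VB.NET with NUnit-style attributes (LegacyLoader, RefactoredLoader and Customer are hypothetical names):

        ' Compare the legacy and refactored loaders over a sample of records.
        <Test()> _
        Public Sub RefactoredLoaderMatchesLegacyLoader()
            Dim recordIds() As Integer = New Integer() {1, 42, 1001}

            For Each id As Integer In recordIds
                Dim expected As Customer = LegacyLoader.Load(id)    ' old path
                Dim actual As Customer = RefactoredLoader.Load(id)  ' new path

                Assert.AreEqual(expected.Name, actual.Name)
                Assert.AreEqual(expected.Balance, actual.Balance)
            Next
        End Sub

    Once this passes over a sample you trust, the deprecated method has served its purpose and can go; version control keeps it recoverable if a bug surfaces later.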


  • documenting black-box test cases

    - by Blux
    Hi everyone, I want to write initial (black-box) test cases for one of my university projects. I haven't started coding yet; I'm still completing the SRS document, and I should specify the test cases I'm going to implement after coding. The project is web-based, and I should follow this template for each test case:

        Test case ID:
        Author:
        Initial state:
        Preconditions:
        Use Case:
        Test input:
        Expected output:

    The thing is, I don't know the difference between "initial state" and "preconditions". In some of the test cases it's hard to differentiate between them. For example, in "Edit Page", what should be the initial state and what should be the preconditions? Any help will be appreciated. =)
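
    One common reading (conventions differ between textbooks, so check your course's definitions): preconditions are facts that must already hold for the test to be meaningful (the user is authenticated, the page exists), while the initial state describes where the system actually is when the test starts (which screen is open, what the data looks like). A hypothetical filled-in case for "Edit Page":

        Test case ID:    TC-EDIT-01
        Author:          <your name>
        Initial state:   User is viewing page "P1", which shows the text "Hello"
        Preconditions:   User is logged in with edit rights; page "P1" exists
        Use Case:        Edit Page
        Test input:      Click "Edit", change the text to "Hello world", click "Save"
        Expected output: Page "P1" is displayed with the text "Hello world"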


  • Managing logs/warnings in Python extensions

    - by Dimitri Tcaciuc
    TL;DR version: what do you use for configurable (and preferably captured) logging inside your C++ bits in a Python project? Details follow.

    Say you have a few compiled .so modules that may need to do some error checking and warn the user of (partially) incorrect data. Currently I have a pretty simplistic setup where I'm using the logging framework from Python code and the log4cxx library from C/C++. The log4cxx log level is defined in a file (log4cxx.properties) and is currently fixed, and I'm thinking about how to make it more flexible. A couple of choices that I see:

    One way to control it would be to have a module-wide configuration call:

        # foo/__init__.py
        import sys
        from _foo import bar, baz, configure_log
        configure_log(sys.stdout, WARNING)

        # tests/test_foo.py
        def test_foo():
            # Maybe a custom context to change the logfile for
            # the module and restore it at the end.
            with CaptureLog(foo) as log:
                assert foo.bar() == 5
                assert log.read() == "124.24 - foo - INFO - Bar returning 5"

    Or have every compiled function that does logging accept optional log parameters:

        // foo.c
        int bar(PyObject* x, PyObject* logfile, PyObject* loglevel)
        {
            LoggerPtr logger = default_logger("foo");
            if (logfile != Py_None)
                logger = file_logger(logfile, loglevel);
            ...
        }

        # tests/test_foo.py
        def test_foo():
            with TemporaryFile() as logfile:
                assert foo.bar(logfile=logfile, loglevel=DEBUG) == 5
                assert logfile.read() == "124.24 - foo - INFO - Bar returning 5"

    Some other way? The second one seems somewhat cleaner, but it requires function signature alteration (or using kwargs and parsing them). The first one is... probably somewhat awkward, but it sets up the entire module in one go and removes logic from each individual function. What are your thoughts on this? I'm all ears to alternative solutions as well. Thanks,


  • "autotest/rails [...] doesn't [...] exist. Aborting"

    - by Ethan
    I'm finding that autotest has stopped working...

        $ autotest
        loading autotest/rails
        Autotest style autotest/rails doesn't seem to exist. Aborting.

    According to this blog post, the common reason for this error is that people don't have the autotest-rails gem installed. However, I definitely have that installed:

        autotest-rails (4.1.0)
        ZenTest (4.1.4, 4.1.3, 4.1.1, 4.0.0, 3.11.1, 3.11.0, 3.10.0, 3.9.3, 3.9.2)

    I haven't installed any new gems today or yesterday, though I might have done a gem update yesterday. Another issue I saw mentioned was incompatibility with Ruby 1.9, but I'm using MRI Ruby 1.8.6.
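
    Given the nine ZenTest versions listed, one hedged guess: after the gem update, a ZenTest version other than the one autotest-rails 4.1.0 expects is being picked up, and style discovery fails. Cleaning out stale versions is a cheap first check:

        gem cleanup ZenTest
        gem list ZenTest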


  • Rails "rake test" crashing

    - by Homer J. Simpson
    Hi, this might be rather unspecific, but I'm trying to run 'rake test' on a new Rails app, and end up with:

        (in /Users/myname/dev/railstest/RailsApplication1)
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -I"lib:test" "/Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb"
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -I"lib:test" "/Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb"
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -I"lib:test" "/Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb"

    No other output. System is Leopard 10.5, Rails 2.3.5, Ruby 1.8.6. Any ideas?
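
    The three identical lines are rake launching the unit, functional and integration suites in turn; on a freshly generated app each suite is empty, so a silent exit may just mean "no tests ran". A standard first diagnostic is rake's own trace flag, which shows whether the loader is dying or simply finding nothing:

        rake --trace test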


  • C#: why don't these DateTime values compare equal?

    - by 5YrsLaterDBA
    My C# unit test has the following statement:

        Assert.AreEqual(logoutTime, log.First().Timestamp);

    Why does it fail with the following message?

        Assert.AreEqual failed. Expected:<4/28/2010 2:30:37 PM>. Actual:<4/28/2010 2:30:37 PM>.

    Are they not the same?
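
    Almost certainly a precision issue: DateTime stores ticks (100 ns), while the default formatting shows only whole seconds, so two values can differ by milliseconds yet print identically - a classic symptom when one side has round-tripped through a seconds-precision store such as a database column. A sketch of a tolerance-based assertion (names taken from the question):

        TimeSpan difference = (logoutTime - log.First().Timestamp).Duration();
        Assert.IsTrue(difference < TimeSpan.FromSeconds(1),
                      "Timestamps differ by " + difference);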

