Search Results

Search found 13748 results on 550 pages for 'split testing'.

  • Using Ruby tests and Selenium Grid, how can I keep the same browser window for multiple tests?

    - by George Horlacher
    Each of my tests starts a new Selenium client browser and tears it down, so they can run stand-alone, with this code:

        def setup
          if $selenium
            @selenium = $selenium
          else
            @selenium = Selenium::SeleniumDriver.new("#$sell_server", 4444, "#$browser", "http://#$network.#$host:2086", 10000)
            @selenium.start
          end
          @selenium.set_context("test_login")
        end

        def teardown
          @selenium.stop unless $selenium
          assert_equal [], @verification_errors
        end

    What I'd like is to run a suite of tests that all use the same browser and don't keep opening and closing new browsers for every test. I've tried using $selenium as a global object / browser, but each test still opens up a new browser and closes it. How should this be done?
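
    One way to share a single browser across the whole suite is to create the driver lazily in the global and only stop it when the Ruby process exits, so setup never launches a second browser. A minimal sketch, reusing the same Selenium RC client calls and globals as above:

        def setup
          $selenium ||= begin
            driver = Selenium::SeleniumDriver.new("#$sell_server", 4444, "#$browser", "http://#$network.#$host:2086", 10000)
            driver.start
            at_exit { driver.stop }   # stop the browser once, after the whole suite
            driver
          end
          @selenium = $selenium
          @selenium.set_context("test_login")
        end

        def teardown
          assert_equal [], @verification_errors   # no @selenium.stop here
        end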

    Read the article

  • Managing logs/warnings in Python extensions

    - by Dimitri Tcaciuc
    TL;DR version: what do you use for configurable (and preferably captured) logging inside the C++ bits of a Python project? Details follow.

    Say you have a few compiled .so modules that may need to do some error checking and warn the user of (partially) incorrect data. Currently I have a fairly simplistic setup where I use the logging framework from Python code and the log4cxx library from C/C++. The log4cxx log level is defined in a file (log4cxx.properties) and is currently fixed, and I'm thinking about how to make it more flexible. A couple of choices that I see:

    One way to control it would be to have a module-wide configuration call:

        # foo/__init__.py
        import sys
        from _foo import bar, baz, configure_log
        configure_log(sys.stdout, WARNING)

        # tests/test_foo.py
        def test_foo():
            # Maybe a custom context to change the logfile for
            # the module and restore it at the end.
            with CaptureLog(foo) as log:
                assert foo.bar() == 5
                assert log.read() == "124.24 - foo - INFO - Bar returning 5"

    Or have every compiled function that does logging accept optional log parameters:

        // foo.c
        int bar(PyObject* x, PyObject* logfile, PyObject* loglevel)
        {
            LoggerPtr logger = default_logger("foo");
            if (logfile != Py_None)
                logger = file_logger(logfile, loglevel);
            ...
        }

        # tests/test_foo.py
        def test_foo():
            with TemporaryFile() as logfile:
                assert foo.bar(logfile=logfile, loglevel=DEBUG) == 5
                assert logfile.read() == "124.24 - foo - INFO - Bar returning 5"

    Or some other way? The second option seems somewhat cleaner, but it requires altering the function signatures (or using kwargs and parsing them). The first is probably somewhat awkward, but it sets up the entire module in one go and removes the logic from each individual function. What are your thoughts on this? I'm all ears for alternative solutions as well. Thanks.
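
    If the module-wide call is the route taken, the CaptureLog helper from the first option could be a small context manager. A minimal sketch, assuming the compiled module really does expose the configure_log(stream, level) hook proposed above; _foo and configure_log are the question's hypothetical interface, not an existing API:

        import sys
        import logging
        from contextlib import contextmanager
        from io import StringIO

        import _foo  # the hypothetical compiled extension from the question

        @contextmanager
        def capture_log(level=logging.INFO):
            """Temporarily route the extension's log output into an in-memory buffer."""
            buf = StringIO()
            _foo.configure_log(buf, level)        # redirect the C++ side's logging
            try:
                yield buf
            finally:
                _foo.configure_log(sys.stdout, logging.WARNING)  # restore the default sink

    A test could then call the function under test inside the with block and assert on buf.getvalue() afterwards.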

    Read the article

  • Selenium RC cannot test on compressed html

    - by JH
    In order to keep the website fast, the web server compresses (gzips) the HTML files before sending them to our clients. When running Selenium tests, a pop-up appears saying:

        You have chosen to open ... which is a: Bin file from: http://...
        Would you like to save this file?  "Cancel"  "Save File"

    It seems that the compressed HTML file isn't being decompressed, and the browser treats it as a binary file.

    Read the article

  • How to test onLowMemory conditions?

    - by Samuh
    I have put some instructions in the onLowMemory() callback and want to test them. Is there a "direct" way to test the onLowMemory function of the Application subclass? Or will I have to overload the phone by starting many apps and doing memory-intensive tasks? Thanks.
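
    One "direct" option is to drive the callback yourself from an instrumentation test instead of starving the device for memory. A minimal sketch, assuming the old android.test.ApplicationTestCase base class and a custom Application subclass; MyApplication and cachesCleared() are illustrative names, not real APIs:

        public class LowMemoryTest extends android.test.ApplicationTestCase<MyApplication> {

            public LowMemoryTest() {
                super(MyApplication.class);
            }

            public void testOnLowMemoryReleasesCaches() {
                createApplication();              // instantiate the Application under test
                getApplication().onLowMemory();   // invoke the callback directly
                assertTrue(getApplication().cachesCleared());  // hypothetical assertion on the cleanup
            }
        }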

    Read the article

  • Getting Bad file descriptor when running Tornado AsyncHTTPTestCase

    - by Will
    When running a test using the Tornado AsyncHTTPTestCase I'm getting a stack trace that isn't related to the test. The test is passing, so this is probably happening during test clean-up? I'm using Python 2.7.2 and Tornado 2.2. The test code is:

        class AllServersHandlerTest(AsyncHTTPTestCase):
            endpoint = AllServersHandler.endpoint  # '/rest/test/'

            def test_server_status_with_advertiser(self):
                on_new_host(None, '127.0.0.1')
                response = self.fetch(self.endpoint, method='GET')
                result = json.loads(response.body, 'utf8').get('data')
                self.assertEquals(['127.0.0.1'], result)

    The test passes, but I get the following stack trace from the Tornado server:

        OSError: [Errno 9] Bad file descriptor
        INFO:root:200 POST /rest/serverStatuses (127.0.0.1) 0.00ms
        DEBUG:root:error closing fd 688
        Traceback (most recent call last):
          File "C:\Python27\Lib\site-packages\tornado-2.2-py2.7.egg\tornado\ioloop.py", line 173, in close
            os.close(fd)
        OSError: [Errno 9] Bad file descriptor

    Any ideas how to cleanly shut down the test case?

    Read the article

  • How do I write a spec to verify the rendering of partials?

    - by TheDeeno
    I'm using rr and rspec. Also, I'm using the collection shorthand for partial rendering. My question: how do I correctly fill in the following spec?

        before(:each) do
          assigns[:models] = Array.new(10, stub(Model))
        end

        it "should render the 'listing' partial for each model" do
          # help me write something that actually verifies this
        end

    I've tried a few examples from the RSpec book, the rspec docs, and the rr docs. Everything I try seems to leave me with runtime errors in the test - not failed assertions. Rather than show all the variations I've tried, I figured all I'd need is for someone to show me one that actually works; I'd be good to go from there.
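
    One approach that avoids mocking altogether is to exercise the action with views rendered and assert on the partial by name and count, via the render_template matcher that delegates to Rails' assert_template. A minimal sketch, assuming an rspec-rails controller spec and a partial file named _listing; the controller, action, partial path and count are illustrative and may need adjusting to the real names and rspec version:

        describe ModelsController, "GET index" do
          integrate_views   # actually render the views so the partial is tracked

          it "renders the 'listing' partial once per model" do
            get :index
            response.should render_template(:partial => '_listing', :count => 10)
          end
        end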

    Read the article

  • Rails - How do you test that ActionMailer sent a specific email?

    - by adam
    Currently in my tests I do something like this to check that an email is queued to be sent:

        assert_difference('ActionMailer::Base.deliveries.size', 1) do
          get :create_from_spreedly, {:user_id => @logged_in_user.id}
        end

    But if a controller action can send two different emails - i.e. one to the user if sign-up goes fine, or a notification to an admin if something went wrong - how can I test which one actually got sent? The code above would pass either way.
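
    One option is to keep the count assertion and additionally inspect the delivered message itself, since ActionMailer::Base.deliveries holds the actual mail objects. A minimal sketch; the recipient and subject checks are illustrative and depend on what the two mailers actually send:

        assert_difference('ActionMailer::Base.deliveries.size', 1) do
          get :create_from_spreedly, {:user_id => @logged_in_user.id}
        end

        sent = ActionMailer::Base.deliveries.last
        assert_equal [@logged_in_user.email], sent.to
        assert_match /welcome/i, sent.subject   # distinguishes the user email from the admin notification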

    Read the article

  • documenting black-box test cases

    - by Blux
    Hi everyone, I want to write initial (black-box) test cases for one of my university projects. I haven't started coding yet; I'm still completing the SRS document, and I should specify the test cases I'm going to implement after the coding. The project is web based, and I should follow this template for each test case:

        Test case ID:
        Author:
        Initial state:
        Preconditions:
        Use Case:
        Test input:
        Expected output:

    The thing is, I don't know the difference between "initial state" and "preconditions"; in some of the test cases it's hard to differentiate between them. For example, in "Edit Page", what should be the initial state and what should be the preconditions? Any help will be appreciated. =)

    Read the article

  • How to make sure web services are kept stable from one release to the next?

    - by Tor Hovland
    The company where I work is a software vendor with a suite of applications. There are also a number of web services, and of course they have to be kept stable even if the applications change. We haven't always succeeded with this, and sometimes a customer finds that a service is not behaving as before after upgrading. We now want to handle this better. In general, web services shouldn't change, and if they have to, at least we will know about it and document the change. But how do we ensure this? One idea is to compare the WSDL files with the previous versions at every release. That will make sure the interfaces don't change, but it won't detect that the behavior changes, for example if a bug is introduced in some common library. Another idea is to build up a suite of service tests, for example using soapUI. But then we'll never know if we have covered enough cases. What are some best practices regarding this?
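
    The WSDL comparison idea mentioned above can at least be automated as a release gate. A minimal sketch in Python, assuming the previous release's WSDL files are kept in a baseline/ directory next to the freshly generated ones in current/; both directory names are illustrative:

        import filecmp
        import glob
        import sys

        # Fail the build if any WSDL differs from the previous release's baseline.
        changed = [path for path in glob.glob("baseline/*.wsdl")
                   if not filecmp.cmp(path, path.replace("baseline/", "current/", 1), shallow=False)]

        if changed:
            sys.exit("WSDL changed since last release: %s" % ", ".join(changed))

    As noted, this only guards the interface; behavioural regressions still need a test suite against the running services.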

    Read the article

  • "autotest/rails [...] doesn't [...] exist. Aborting"

    - by Ethan
    I'm finding that autotest has stopped working...

        $ autotest
        loading autotest/rails
        Autotest style autotest/rails doesn't seem to exist.  Aborting.

    According to this blog post, the common reason for this error is that people don't have the autotest-rails gem installed. However, I definitely have that installed:

        autotest-rails (4.1.0)
        ZenTest (4.1.4, 4.1.3, 4.1.1, 4.0.0, 3.11.1, 3.11.0, 3.10.0, 3.9.3, 3.9.2)

    I haven't installed any new gems today or yesterday, though I might have done a gem update yesterday. Another issue I saw mentioned was incompatibility with Ruby 1.9, but I'm using MRI Ruby 1.8.6.

    Read the article

  • How do I write test cases for drawing text/strings in a box?

    - by Madhup
    Hi, I am drawing strings in a rectangular frame, and the strings draw perfectly. Now I need to write test cases using SenTestingKit, and I have no idea where to start. For help I have also looked at the iPhone sample calculator application, but I'm still at a loss. Anybody having ideas, please help. Thanks, Madhup

    Read the article

  • How to keep your unit test Arrange step simple and still guarantee DDD invariants?

    - by ian31
    DDD recommends that domain objects should be in a valid state at any time. Aggregate roots are responsible for guaranteeing the invariants, and Factories for assembling objects with all the required parts so that they are initialized in a valid state. However, this seems to complicate the task of creating simple, isolated unit tests a lot.

    Let's assume we have a BookRepository that contains Books. A Book has an Author, a Category, and a list of Bookstores you can find the book in. These are required attributes: a book has to have an author, a category, and at least one book store you can buy the book from. There's likely to be a BookFactory, since it is quite a complex object, and the Factory will initialize the Book with at least all the mentioned attributes.

    Now we want to unit test a method of the BookRepository that returns all the Books. To test that the method returns the books, we have to set up a test context (the Arrange step in AAA terms) where some Books are already in the Repository. If the only tool at our disposal to create Book objects is the Factory, the unit test now also uses and depends on the Factory, and indirectly on Category, Author and Store, since we need those objects to build up a Book and then place it in the test context.

    Would you consider this a dependency in the same way that, in a Service unit test, we would be dependent on, say, a Repository that the Service would call? How would you solve the problem of having to re-create a whole cluster of objects in order to be able to test a simple thing? How would you break that dependency and get rid of all the attributes we don't need in our test? By using mocks or stubs? If you mock up the things a Repository contains, what kind of mocks/stubs would you use, as opposed to when you mock up something the object under test talks to or consumes?
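
    One common way to keep the Arrange step small without bypassing the invariants is a test-data builder that supplies valid defaults for everything a given test does not care about. A minimal sketch in Java; Book, Author, Category, Store and BookFactory stand in for the real domain types and are purely illustrative:

        import java.util.Arrays;
        import java.util.List;

        class BookBuilder {
            private Author author = new Author("Any Author");
            private Category category = new Category("Any Category");
            private List<Store> stores = Arrays.asList(new Store("Any Store"));

            BookBuilder writtenBy(Author author) { this.author = author; return this; }

            Book build() {
                // Delegate to the real Factory so the invariants still hold.
                return BookFactory.create(author, category, stores);
            }
        }

        // Arrange: one readable line per book, defaults hide the irrelevant collaborators
        Book book = new BookBuilder().writtenBy(new Author("Knuth")).build();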

    Read the article

  • Why is my Extension Method not showing up in my test class?

    - by Robert Harvey
    I created an extension method called HasContentPermission on the System.Security.Principal.IIdentity interface:

        namespace System.Security.Principal
        {
            public static bool HasContentPermission(this IIdentity identity, int contentID)
            {
                // I do stuff here
                return result;
            }
        }

    And I call it like this:

        bool hasPermission = User.Identity.HasContentPermission(contentID);

    It works like a charm. Now I want to unit test it. To do that, all I really need to do is call the extension method directly:

        using System.Security.Principal;

        namespace MyUnitTests
        {
            [TestMethod]
            public void HasContentPermission_PermissionRecordExists_ReturnsTrue()
            {
                IIdentity identity;
                bool result = identity.HasContentPermission(...

    But HasContentPermission doesn't show up in IntelliSense. I tried creating a stub class that inherits from IIdentity, but that didn't work either. Why? Or am I going about this the wrong way?
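
    For the call to resolve, the extension method has to live inside a non-generic static class, the test project needs a reference to that assembly, and the namespace must be imported; note also that an unassigned local can't be used as the receiver. A minimal sketch of what that could look like; IdentityExtensions is an illustrative class name and GenericIdentity simply stands in for a concrete IIdentity:

        namespace System.Security.Principal
        {
            public static class IdentityExtensions
            {
                public static bool HasContentPermission(this IIdentity identity, int contentID)
                {
                    return true;   // real permission check goes here
                }
            }
        }

        // in the test project (which references the assembly above):
        using System.Security.Principal;

        IIdentity identity = new GenericIdentity("alice");
        bool result = identity.HasContentPermission(42);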

    Read the article

  • Are TestContext.Properties read-only?

    - by DBJDBJ
    Using Visual Studio, generate a unit test class, then un-comment the class-initialization method. Inside it, add your own property using the testContext argument:

        // Use ClassInitialize to run code before running the first test in the class
        [ClassInitialize()]
        public static void MyClassInitialize(TestContext testContext)
        {
            /*
             * Any user-defined testContext.Properties
             * added here will be erased upon this method's exit
             */
            testContext.Properties.Add("key", 1);   // this works, but the entry is lost
        }

    After leaving MyClassInitialize, the properties defined by the user are lost; only the 10 "official" ones are left. This effectively means TestContext.Properties is read-only for users, which is not clearly documented in MSDN. Please discuss. --DBJ

    Read the article

  • Is JPA's persistence.xml located via the classpath?

    - by Vinnie
    Here's what I'm trying to do. I'm using JPA persistence in a web application, but I have a set of unit tests that I want to run outside of a container. I have my primary persistence.xml in the META-INF folder of my main app, and it works great in the container (Glassfish). I placed a second persistence.xml in the META-INF folder of my test-classes directory; this contains a separate persistence unit that I want to use for tests only. In Eclipse, I placed this folder higher in the classpath than the default folder, and it seems to work.

    Now when I run the Maven build directly from the command line and it attempts to run the unit tests, the persistence.xml override is ignored. I can see the override in the META-INF folder of the Maven-generated test-classes directory, and I expected the Maven tests to use this file, but they don't. My Spring test configuration overrides, achieved in a similar fashion, are working. I'm confused as to whether persistence.xml is located through the classpath. If it were, my override should work like the Spring override, since the Maven Surefire plugin states that "[the test class directory] will be included at the beginning of the test classpath". Did I wrongly anticipate how the persistence.xml file is located? I could (and have) create a second persistence unit in the production persistence.xml file, but it feels dirty to place test configuration in this production file. Any other ideas on how to achieve my goal are welcome.

    Read the article

  • How to unit test business rules?

    - by Robert Lamb
    I need a unit test to make sure I am accumulating vacation hours properly. But vacation hours accumulate according to a business rule; if the rule changes, the unit test breaks. Is this acceptable? Should I expose the rule through a method and then call that method from both my code and my test, to ensure that the unit test isn't so fragile? My question is: what is the right way to unit test business rules that may change?
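
    One way to avoid the test simply re-deriving the rule is to pin it with concrete, hand-computed examples, so a rule change breaks the test deliberately and visibly. A minimal sketch, assuming NUnit-style parameterized tests; the class name and the accrual figures are purely illustrative, not the real policy:

        // Each case states an input and the expected accrual, worked out by hand
        // from the current business rule.
        [TestCase(1, 1.25)]     // 1 month of service accrues 1.25 hours
        [TestCase(12, 15.0)]    // a full year accrues 15 hours
        public void AccruesVacationHours(int monthsOfService, double expectedHours)
        {
            var calculator = new VacationAccrualCalculator();
            Assert.AreEqual(expectedHours, calculator.HoursFor(monthsOfService), 0.001);
        }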

    Read the article

  • Is it possible to access a running instance of an app using JNA/JNI?

    - by Carlos Blanco
    I'm writing a test engine for a Java application that has some of its code written in C. The application uses JNI to access its native part. In the engine I'm writing, I use FEST to control the UI and perform the tests. However, I'm blind when dealing with the part that is written in C. I wonder if I can use JNA or JNI to access the native part of the app. I believe the fact that the application is already running is a huge issue here.

    Read the article

  • C#: why do these DateTime values not compare as equal?

    - by 5YrsLaterDBA
    My C# unit test has the following statement:

        Assert.AreEqual(logoutTime, log.First().Timestamp);

    Why does it fail with the following message?

        Assert.AreEqual failed. Expected:<4/28/2010 2:30:37 PM>. Actual:<4/28/2010 2:30:37 PM>.

    Are they not the same?
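
    One common cause is sub-second precision: the default ToString() used in the failure message hides milliseconds and ticks, but Assert.AreEqual compares the full DateTime values. A minimal sketch of the effect and one way around it; the variable names mirror the ones above:

        var expected = new DateTime(2010, 4, 28, 14, 30, 37, 0);
        var actual   = new DateTime(2010, 4, 28, 14, 30, 37, 123);   // 123 ms later

        Console.WriteLine(expected.ToString() == actual.ToString()); // True  - same displayed text
        Console.WriteLine(expected == actual);                       // False - the ticks differ

        // Option: truncate both values to whole seconds before comparing.
        Assert.AreEqual(
            logoutTime.AddTicks(-(logoutTime.Ticks % TimeSpan.TicksPerSecond)),
            log.First().Timestamp.AddTicks(-(log.First().Timestamp.Ticks % TimeSpan.TicksPerSecond)));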

    Read the article

  • Python unittest (using SQLAlchemy) does not write/update database?

    - by Jerry
    Hi, I am puzzled as to why my Python unittest runs perfectly fine without actually updating the database. I can even see the SQL statements from SQLAlchemy and step through the newly created user object's email:

        ...INFO sqlalchemy.engine.base.Engine.0x...954c INSERT INTO users (user_id, user_name, email, ...) VALUES (%(user_id)s, %(user_name)s, %(email)s, ...)
        ...INFO sqlalchemy.engine.base.Engine.0x...954c {'user_id': u'4cfdafe3f46544e1b4ad0c7fccdbe24a', 'email': u'[email protected]', ...}
        > .../tests/unit_tests/test_signup.py(127)test_signup_success()
        -> user = user_q.filter_by(user_name='test').first()
        (Pdb) n
        ...INFO sqlalchemy.engine.base.Engine.0x...954c SELECT users.user_id AS users_user_id, ... FROM users WHERE users.user_name = %(user_name_1)s LIMIT 1 OFFSET 0
        ...INFO sqlalchemy.engine.base.Engine.0x...954c {'user_name_1': 'test'}
        > .../tests/unit_tests/test_signup.py(128)test_signup_success()
        -> self.assertTrue(isinstance(user, model.User))
        (Pdb) user
        <pweb.models.User object at 0x9c95b0c>
        (Pdb) user.email
        u'[email protected]'

    Yet at the same time, when I log in to the test database, I do not see the new record there. Is this some feature of Python/unittest/SQLAlchemy/Pyramid/PostgreSQL that I'm totally unaware of? Thanks. Jerry
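
    A common explanation is that the test's Session does all its work inside one open transaction that is never committed (or is rolled back in tearDown), so the INSERT shows up in the SQL echo and is visible to queries on that same session, while other connections, such as a psql shell looking at the test database, never see the row. A minimal, self-contained sketch of that behaviour; all names and the in-memory URL are illustrative:

        from sqlalchemy import Column, Integer, String, create_engine
        from sqlalchemy.orm import declarative_base, sessionmaker

        Base = declarative_base()

        class User(Base):
            __tablename__ = "users"
            user_id = Column(Integer, primary_key=True)
            user_name = Column(String)

        engine = create_engine("sqlite:///:memory:", echo=True)
        Base.metadata.create_all(engine)
        session = sessionmaker(bind=engine)()

        session.add(User(user_name="test"))
        session.flush()      # the INSERT is emitted here, inside the open transaction
        print(session.query(User).filter_by(user_name="test").first())  # visible to this session
        session.rollback()   # without an explicit commit(), nothing is ever persisted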

    Read the article

  • Are there cross-platform tools to write XSS attacks directly to the database?

    - by Joachim Sauer
    I've recently found this blog entry on a tool that writes XSS attacks directly to the database. It looks like a terribly good way to scan my applications for weaknesses. I've tried to run it on Mono, since my development platform is Linux. Unfortunately it crashes with a System.ArgumentNullException deep inside Microsoft.Practices.EnterpriseLibrary, and I seem to be unable to find sufficient information about the software (it seems to be a single-shot project, with no homepage and no further development). Is anyone aware of a similar tool? Preferably it should be:

        cross-platform (Java, Python, .NET/Mono, even cross-platform C is OK)
        open source (I really like being able to audit my security tools)
        able to talk to a wide range of DB products (the big ones are most important: MySQL, Oracle, SQL Server, ...)

    Read the article
