Search Results

Search found 13748 results on 550 pages for 'split testing'.

Page 131/550 | < Previous Page | 127 128 129 130 131 132 133 134 135 136 137 138  | Next Page >

  • Getting notification / listener when action is performed (chimpChat / monkeyrunner tool)

    - by Dr. AtZe
    I want to get a notification when someone performs an action in an Android app, from outside of the app, without making any (Android) code changes. To drive the actions I use chimpchat.jar, the .jar that the monkeyrunner tool uses. To be clear: can I get a notification, or register listeners on components, from outside of the app? For example: I run my Android app; my Java application connects to the device with ChimpChat via adb; the user touches a button; my Java application gets a notification of what was performed. Am I able to get that information? If not, am I able to find out at which position the tap was? Hopefully it's clear what I want to do. Thanks, soeren
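
    As far as I know, ChimpChat/monkeyrunner only drives the device and exposes no listeners for actions the user performs. What can be observed from outside the app, with no code changes, are the kernel input events via adb. A rough sketch in Java (assumptions: adb is on the PATH, the touchscreen driver reports ABS_MT_POSITION_X/Y, and the values are raw panel units rather than screen pixels):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class TouchWatcher {
            public static void main(String[] args) throws Exception {
                // -l prints symbolic labels such as ABS_MT_POSITION_X/Y
                Process p = new ProcessBuilder("adb", "shell", "getevent", "-l").start();
                BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.contains("ABS_MT_POSITION_")) {
                        String[] parts = line.trim().split("\\s+");
                        // the last column is the coordinate, reported in hex
                        int value = Integer.parseInt(parts[parts.length - 1], 16);
                        System.out.println(parts[parts.length - 2] + " = " + value);
                    }
                }
            }
        }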

  • How do I test how customers use my Cocoa application?

    - by John Gallagher
    I'm interested in finding out how customers use the features in my Cocoa application. I want to build up statistics on which features people use and how they use them, so that I can measure the value of the features I'm implementing. This feedback will, of course, be off by default and anonymous. Does anyone know of existing frameworks that can achieve this, so I don't have to write it from scratch?

  • JUnit terminates child threads

    - by Marco
    Hi to all. When I test the execution of a method that creates a child thread, the JUnit test ends before the child thread does and kills it. How do I force JUnit to wait for the child thread to complete its execution? Thanks
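
    JUnit has no idea about threads the tested method spawns, so the test thread itself has to block until the child finishes. If you can't call join() on the thread directly, one option is to have the child signal completion, e.g. via a CountDownLatch -- a sketch (JUnit 4; the latch wiring is an assumption about code you can touch):

        import static org.junit.Assert.assertTrue;

        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.TimeUnit;

        import org.junit.Test;

        public class ChildThreadTest {
            @Test
            public void waitsForChildThread() throws InterruptedException {
                final CountDownLatch done = new CountDownLatch(1);
                Thread child = new Thread(new Runnable() {
                    public void run() {
                        // ... the work under test ...
                        done.countDown();   // signal completion
                    }
                });
                child.start();
                // block the test thread until the child signals, or time out
                assertTrue("child thread did not finish", done.await(5, TimeUnit.SECONDS));
            }
        }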

  • Integrating Hudson with MS Test?

    - by hangy
    Is it possible to integrate Hudson with MS Test? I am setting up a smaller CI server on my development machine with Hudson right now, just so that I can have some statistics (e.g. FxCop and compiler warnings). Of course, it would also be nice if it could just run my unit tests and present their output. Up to now, I have added the following batch task to Hudson, which makes it run the tests properly: "%PROGRAMFILES%\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe" /runconfig:LocalTestRun.testrunconfig /testcontainer:Tests\bin\Debug\Tests.dll However, as far as I know, Hudson does not yet support analysis of MS Test results. Does anyone know whether the TRX files generated by MSTest.exe can be transformed to the JUnit or NUnit result format (because those are supported by Hudson), or whether there is any other way to integrate MS Test unit tests with Hudson?
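
    The TRX file is plain XML, so it can be transformed into the JUnit format Hudson already understands; there is also an MSTest plugin for Hudson/Jenkins that performs the conversion as a post-build step. As a command-line alternative (hedged: the trx2junit tool postdates this question and assumes a modern .NET SDK):

        rem one-time install of the converter
        dotnet tool install -g trx2junit
        rem emit JUnit-style XML next to the .trx for Hudson to pick up
        trx2junit TestResults\results.trx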

  • Unit Test Sessions Window Closes when debugging

    - by Daniel Dyson
    When I select an NUnit test in the Unit Test Sessions window and click debug, the window disappears. My breakpoints are hit, but if I hit F5, the Unit Test Sessions window does not return until the test returns a result or I stop the debugging session. This is preventing me from viewing any console output during tests. Any ideas?

  • Multiple asserts in single test?

    - by Gern Blandston
    Let's say I want to write a function that validates an email address with a regex. I write a little test to check my function, then write the actual function and make it pass. However, I can come up with a bunch of different inputs to test the same function with (user@example.com; first.last@example.com; test.test.com, etc.). Do I put all the incantations that I need to check in the same, single test with several asserts, or do I write a new test for every single thing I can think of? Thanks!
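
    A common middle ground is a parameterized test: each input/expected pair is reported as its own case, so one failing address does not hide the rest, yet the body is written once. A sketch using NUnit's [TestCase] (EmailValidator.IsValid is a hypothetical stand-in for the function under test):

        using NUnit.Framework;

        [TestFixture]
        public class EmailValidatorTests
        {
            [TestCase("user@example.com", true)]
            [TestCase("first.last@example.com", true)]
            [TestCase("test.test.com", false)]   // no @ sign, must be rejected
            public void Validates_email_addresses(string address, bool expected)
            {
                Assert.AreEqual(expected, EmailValidator.IsValid(address));
            }
        }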

  • From a shell script open a new tab in a specific instance of Firefox.

    - by toc777
    Hi everyone, I have a shell script that creates Firefox profiles and then uses them to open multiple instances of Firefox simultaneously. The problem is: how can I open a URL in a particular instance of Firefox? I have tried: firefox -CreateProfile test; firefox -P test -no-remote; firefox -P test -url www.google.ie. But the last part, which tries to open the URL using the test profile, does not work; it always opens in the default profile. Is there any way to tell Firefox from the command line to open a URL using a particular profile? Thanks.
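
    The catch is that -no-remote not only starts a separate instance, it also makes that instance ignore remote commands, which is why later invocations fall through to the default profile. On builds that support -new-instance, the following sketch may work (behaviour has varied across Firefox versions, so treat this as an assumption to test):

        # create the profile once
        firefox -CreateProfile test
        # -new-instance separates the instance without disabling remoting
        firefox -new-instance -P test &
        # later calls naming the same profile open a tab in that instance
        firefox -P test -new-tab http://www.google.ie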

  • Convert C# unit test names to English (testdox style)

    - by Igor Zevaka
    I have a whole bunch of unit tests written in MbUnit and I would like to generate plain English sentences from test names. The concept is introduced here: http://dannorth.net/introducing-bdd This is from the article: public class CustomerLookupTest extends TestCase { testFindsCustomerById() { ... } testFailsForDuplicateCustomers() { ... } ... } renders something like this: CustomerLookup - finds customer by id - fails for duplicate customers - ... Unfortunately the tool quoted in the above article (testdox) is Java based. Is there one for .NET? Sounds like this would be something pretty simple to write, but I simply don't have the bandwidth and want to use something already written.
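
    For what it's worth, the name-to-sentence half really is tiny; a minimal C# sketch that un-camel-cases a test method name (the reporting around it is the larger job):

        using System.Text.RegularExpressions;

        static string ToSentence(string testName)
        {
            // "testFailsForDuplicateCustomers" -> "fails for duplicate customers"
            string name = Regex.Replace(testName, "^test", "", RegexOptions.IgnoreCase);
            string spaced = Regex.Replace(name, "([a-z0-9])([A-Z])", "$1 $2");
            return spaced.Replace("_", " ").ToLowerInvariant().Trim();
        }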

  • python mock patch: asserting that a method of an instance is called?

    - by JuanPablo
    In Python 2.7, I have this function: from slacker import Slacker def post_message(token, channel, message): channel = '#{}'.format(channel) slack = Slacker(token) slack.chat.post_message(channel, message) With mock and patch, I can check that the token is used in the Slacker class: import unittest from mock import patch from slacker_cli import post_message class TestMessage(unittest.TestCase): @patch('slacker_cli.Slacker') def test_post_message_use_token(self, mock_slacker): token = 'aaa' channel = 'channel_name' message = 'message string' post_message(token, channel, message) mock_slacker.assert_called_with(token) How can I check the string used in post_message? I tried mock_slacker.chat.post_message.assert_called_with('#channel') but I get: AssertionError: Expected call: post_message('#channel') Not called
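
    @patch replaces the Slacker class, but post_message calls methods on an instance, i.e. on mock_slacker.return_value; asserting on the class itself misses those calls. Go through return_value instead:

        instance = mock_slacker.return_value
        instance.chat.post_message.assert_called_with('#channel_name', 'message string')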

  • What are the basic things you should keep in mind while writing functional tests?

    - by piemesons
    Hello, what kind of functionality should be covered by functional tests? How do you prioritize which functionality must be covered? I can understand that it depends on the project and on what functionality is present in it. Let's take an example: stackoverflow. Suppose we are developing a basic working model of it. What kind of functional tests must be covered there? Just brief the key points for writing functional tests (taking any basic functionality from this site just to serve as the reference). The question is platform-independent, but I am a Ruby on Rails developer, so keeping Ruby on Rails in mind would be preferable.
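
    As a concrete starting point for the stackoverflow example, a functional test usually pins down one controller action at a time: response code, rendered template, and the data handed to the view. A minimal Rails sketch (QuestionsController and login_path are assumptions about the app, not part of the question):

        class QuestionsControllerTest < ActionController::TestCase
          test "index lists questions" do
            get :index
            assert_response :success
            assert_template 'index'
            assert_not_nil assigns(:questions)
          end

          test "guests cannot post a question" do
            post :create, :question => { :title => "Help" }
            assert_redirected_to login_path
          end
        end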

  • How to flush coverage data when my test causes an app crash - for an iOS app

    - by Ypy
    I want to get the code coverage of my tests, so I configured the build settings, built an app with .gcno files and ran it on the simulator. It gets the coverage data successfully if there is no crash, but if the app crashes, I get nothing. So how can I get the code-coverage data when the app crashes? My understanding is that this happens because __gcov_flush() is not called when the app crashes. I have only added 'Application does not run in background' to my plist file, so __gcov_flush() is called only when I press the Home button. Is there any way to call __gcov_flush() before the app crashes?
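
    Right -- __gcov_flush() only runs on orderly exits, so a crash loses the data. One workaround is to install signal handlers that flush coverage before the process dies. A sketch in C (assumptions: a gcc/older-clang gcov runtime that exports __gcov_flush; newer clang renamed it __gcov_dump):

        #include <signal.h>

        extern void __gcov_flush(void);   /* provided by the gcov runtime */

        static void crash_handler(int sig)
        {
            __gcov_flush();               /* write out the .gcda files */
            signal(sig, SIG_DFL);         /* restore default handling... */
            raise(sig);                   /* ...and let the crash proceed */
        }

        void install_gcov_crash_handlers(void)  /* call early at startup */
        {
            signal(SIGSEGV, crash_handler);
            signal(SIGBUS,  crash_handler);
            signal(SIGABRT, crash_handler);
            signal(SIGILL,  crash_handler);
        }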

  • Git: Run through a filter before committing/pushing?

    - by martiert
    Hi. Is there a way to run the changed files through a filter before doing the commit? I wish to make sure the files follow the coding standards for the project. I would also like to compile and run some tests before the commit/push actually takes place, so I know everything in the repo actually works.
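
    Yes -- git hooks do exactly this: a pre-commit hook that exits non-zero aborts the commit (and newer git offers a pre-push hook for the push side). A sketch for .git/hooks/pre-commit, marked executable; the style-check script name is hypothetical:

        #!/bin/sh
        # abort the commit unless the style check and the tests pass
        ./tools/check_style.sh || { echo "style check failed"; exit 1; }
        make test || { echo "tests failed"; exit 1; }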

  • g-wan - reproducing the performance claims

    - by user2603628
    Using gwan_linux64-bit.tar.bz2 under Ubuntu 12.04 LTS, unpacking and running gwan, then pointing wrk at it (using a null file null.html): wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1:8080/null.html Running 20s test @ http://127.0.0.1:8080/null.html 2 threads and 100 connections Thread Stats Avg Stdev Max +/- Stdev Latency 11.65s 5.10s 13.89s 83.91% Req/Sec 3.33k 3.65k 12.33k 75.19% 125067 requests in 20.01s, 32.08MB read Socket errors: connect 0, read 37, write 0, timeout 49 Requests/sec: 6251.46 Transfer/sec: 1.60MB Very poor performance; in fact there seems to be some kind of huge latency issue. During the test gwan is 200% busy and wrk is 67% busy. Pointing at nginx, wrk is 200% busy and nginx is 45% busy: wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1/null.html Thread Stats Avg Stdev Max +/- Stdev Latency 371.81us 134.05us 24.04ms 91.26% Req/Sec 72.75k 7.38k 109.22k 68.21% 2740883 requests in 20.00s, 540.95MB read Requests/sec: 137046.70 Transfer/sec: 27.05MB Pointing weighttp at nginx gives even faster results: /usr/local/bin/weighttp -k -n 2000000 -c 500 -t 3 http://127.0.0.1/null.html weighttp - a lightweight and simple webserver benchmarking tool starting benchmark... spawning thread #1: 167 concurrent requests, 666667 total requests spawning thread #2: 167 concurrent requests, 666667 total requests spawning thread #3: 166 concurrent requests, 666666 total requests progress: 9% done progress: 19% done progress: 29% done progress: 39% done progress: 49% done progress: 59% done progress: 69% done progress: 79% done progress: 89% done progress: 99% done finished in 7 sec, 13 millisec and 293 microsec, 285172 req/s, 57633 kbyte/s requests: 2000000 total, 2000000 started, 2000000 done, 2000000 succeeded, 0 failed, 0 errored status codes: 2000000 2xx, 0 3xx, 0 4xx, 0 5xx traffic: 413901205 bytes total, 413901205 bytes http, 0 bytes data The server is a virtual 8-core dedicated server (bare metal) under KVM. Where do I start looking to identify the problem gwan is having on this platform? I have tested lighttpd, nginx and node.js on this same OS, and the results are all as one would expect. The server has been tuned in the usual way: expanded ephemeral ports, increased ulimits, adjusted time-wait recycling, etc.

  • Does anyone know what causes this error? VC++ with VisualAssert

    - by TerryJohnson
    Hi, does anyone know what causes this error in Visual Studio 2008 with Visual Assert? Thanks. 1>------ Build started: Project: ChessRound1, Configuration: Debug Win32 ------ 1>Compiling... 1>stdafx.cpp 1>C:\Program Files\Microsoft Visual Studio 9.0\VC\include\xlocnum(135) : error C2857: '#include' statement specified with the /Ycstdafx.h command-line option was not found in the source file 1>Build log was saved at "file://c:\Users\Admin1\Documents\Visual Studio 2008\Projects\ChessRound1\ChessRound1\Debug\BuildLog.htm" 1>ChessRound1 - 1 error(s), 0 warning(s) ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
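
    C2857 while creating the precompiled header usually means the compiler never saw #include "stdafx.h" where /Yc expects it -- e.g. the include was removed from stdafx.cpp or pushed below other includes (files added later, such as Visual Assert test files, can trip the same error under /Yu). A sketch of the usual fix; alternatively, set the offending file to "Not Using Precompiled Headers" under C/C++ > Precompiled Headers:

        // stdafx.cpp -- compiled with /Yc"stdafx.h" to create the PCH;
        // this include must be the first non-comment line of the file
        #include "stdafx.h"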

  • Default Values Specflow Step Definitions

    - by Gavin Osborn
    I'm starting out in the world of SpecFlow and I have come across my first problem. In terms of keeping my code DRY I'd like to do the following: Have two scenarios: Given I am on a product page And myfield equals todays date Then... Given I am on a product page And myfield equals todays date plus 4 days Then... I was hoping to use the following step definition to cover both variants of my And clause: [Given(@"myfield equals todays date(?: (plus|minus) (\d+) days)?")] public void MyfieldEqualsTodaysDate(string direction, int? days) { //do stuff } However, I keep getting exceptions when SpecFlow tries to parse the int? param. I've checked the regular expression and it definitely parses the scenario as expected. I'm aware that I could do something as crude as method overloading etc.; I was just wondering if SpecFlow supported the idea of default parameter values, or indeed another way to achieve the same effect. Many Thanks
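
    I don't believe SpecFlow binds an optional capture group to an int? like that (hedged: this is version-dependent). A workaround that stays DRY without the regex gymnastics is two thin step definitions delegating to one helper (SetMyField is a hypothetical stand-in for your actual logic):

        [Given(@"myfield equals todays date")]
        public void MyfieldEqualsTodaysDate()
        {
            SetMyField(DateTime.Today);
        }

        [Given(@"myfield equals todays date (plus|minus) (\d+) days")]
        public void MyfieldEqualsTodaysDateOffset(string direction, int days)
        {
            int signed = direction == "minus" ? -days : days;
            SetMyField(DateTime.Today.AddDays(signed));
        }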

  • Element not found blocks execution in Selenium

    - by Mariano
    In my test, I try to verify whether certain text exists (after an action) using find_element_by_xpath. If I use the right expression and my test passes, the routine ends correctly in no time. However, if I try a wrong text (meaning that the test will fail), it hangs forever and I have to kill the script, otherwise it does not end. Here is my test (the expression Thx user, client or password you entered is incorrect does not exist in the system, no matter what the user does): # -*- coding: utf-8 -*- import gettext import unittest from selenium import webdriver class TestWrongLogin(unittest.TestCase): def setUp(self): self.driver = webdriver.Firefox() self.driver.get("http://10.23.1.104:8888/") # let's check the language try: self.lang = self.driver.execute_script("return navigator.language;") self.lang = self.lang.split("-")[0] except: self.lang = "en" language = gettext.translation('app', '/app/locale', [self.lang], fallback=True) language.install() self._ = gettext.gettext def tearDown(self): self.driver.quit() def test_wrong_client(self): # test wrong client inputElement = self.driver.find_element_by_name("login") inputElement.send_keys("root") inputElement = self.driver.find_element_by_name("client") inputElement.send_keys("Unleash") inputElement = self.driver.find_element_by_name("password") inputElement.send_keys("qwerty") self.driver.find_element_by_name("form.submitted").click() # wait for the db answer self.driver.implicitly_wait(10) ret = self.driver.find_element_by_xpath( "//*[contains(.,'{0}')]".\ format(self._(u"Thx user, client or password you entered is incorrect"))) self.assertTrue(isinstance(ret, webdriver.remote.webelement.WebElement)) if __name__ == '__main__': unittest.main() Why does it do that and how can I prevent it?
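
    implicitly_wait applies to every find_element_* call for the lifetime of the driver, so a lookup that is expected to fail keeps polling instead of returning. An explicit wait scopes the timeout to this one lookup and turns "not found" into a clean test failure -- a sketch:

        from selenium.common.exceptions import TimeoutException
        from selenium.webdriver.common.by import By
        from selenium.webdriver.support import expected_conditions as EC
        from selenium.webdriver.support.ui import WebDriverWait

        message = self._(u"Thx user, client or password you entered is incorrect")
        try:
            ret = WebDriverWait(self.driver, 10).until(
                EC.presence_of_element_located(
                    (By.XPATH, u"//*[contains(.,'{0}')]".format(message))))
        except TimeoutException:
            self.fail("expected error message never appeared")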

  • Python: How to run unittest.main() for all source files in a subdirectory?

    - by Pete
    I am developing a Python module with several source files, each with its own test class derived from unittest right in the source. Consider the directory structure: dirFoo\ test.py dirBar\ __init__.py Foo.py Bar.py To test either Foo.py or Bar.py, I would add this at the end of the Foo.py and Bar.py source files: if __name__ == "__main__": unittest.main() And run Python on either source, i.e. $ python Foo.py ........... ---------------------------------------------------------------------- Ran 11 tests in 2.314s OK Ideally, I would have "test.py" automagically search dirBar for any unittest-derived classes and make one call to "unittest.main()". What's the best way to do this in practice? I tried using Python to call execfile for every *.py file in dirBar, but that runs only for the first .py file found and then exits the calling test.py; plus I then have to duplicate my code by adding unittest.main() in every source file, which violates DRY principles.
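
    unittest can do this itself from Python 2.7 onwards (the unittest2 backport covers older versions): test discovery walks a package and collects every TestCase it finds. A sketch for test.py -- note the pattern must be widened, because discovery defaults to test*.py:

        # test.py
        import unittest

        if __name__ == "__main__":
            suite = unittest.defaultTestLoader.discover("dirBar", pattern="*.py")
            unittest.TextTestRunner(verbosity=2).run(suite)

    The command-line equivalent is python -m unittest discover -s dirBar -p "*.py", and the if __name__ blocks can then be deleted from Foo.py and Bar.py.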

  • How to test a site rigorously?

    - by Sarfraz
    Hello, I recently created a big portal site, and it's time to put it to the test. How do you test a site rigorously? What are the ways and tools for that? Can we somehow mimic hundreds of virtual users visiting the site to see how it handles the load? The tests should cover both security and speed. Thanks in advance.
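
    For the load half, yes -- load generators simulate exactly that. ab (Apache Bench) and siege are the quick command-line options and JMeter the heavier GUI-driven one; for the security half, scanners such as OWASP ZAP or skipfish probe rather than load. A sketch (the URL is a placeholder):

        # 10,000 requests, 200 concurrent, against the front page
        ab -n 10000 -c 200 http://www.example.com/
        # 200 simulated users hammering the site for two minutes
        siege -c 200 -t 2M http://www.example.com/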

  • Where to put the unit tests for a library in Rails

    - by lidaobing
    Hello, I am a Ruby and Rails newbie working on a Rails application with RadRails. RadRails has a "Switch to Test" function for my controllers, models, etc., but not for my library. If I have the class Foo::Bar in /lib/foo/bar.rb, where should I put the unit test for it? Or should I split the foo library out into a separate project? Thanks.
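
    A common convention (not something Rails enforces) is to mirror lib/ under test/unit/, keeping the library inside the project. A sketch:

        # test/unit/foo/bar_test.rb -- mirrors lib/foo/bar.rb
        require 'test_helper'

        class Foo::BarTest < ActiveSupport::TestCase
          test "does something useful" do
            assert_not_nil Foo::Bar.new   # placeholder assertion
          end
        end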

  • is there a less bloated way to test constraints in grails?

    - by egervari
    Is there a less bloated way to test constraints? It seems to me that this is too much code to test constraints. class BlogPostTests extends GrailsUnitTestCase { protected void setUp() { super.setUp() mockDomain BlogPost } void testConstraints() { BlogPost blogPost = new BlogPost(title: "", text: "") assertFalse blogPost.validate() assertEquals 2, blogPost.errors.getErrorCount() assertEquals "blank", blogPost.errors.getFieldError("title").getCode() assertEquals "blank", blogPost.errors.getFieldError("text").getCode() blogPost = new BlogPost(title: "title", text: ObjectMother.bigText(2001)) assertFalse blogPost.validate() assertEquals 1, blogPost.errors.getErrorCount() assertEquals "maxSize.exceeded", blogPost.errors.getFieldError("text").getCode() } }
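
    Grails' own testing support trims most of this: after mockForConstraintsTests, validate() records just the failed constraint's name per field, so the assertions collapse to map lookups. A sketch against the Grails 1.x-era API:

        class BlogPostTests extends GrailsUnitTestCase {
            void testConstraints() {
                mockForConstraintsTests BlogPost

                def blogPost = new BlogPost(title: "", text: "")
                assertFalse blogPost.validate()
                assertEquals "blank", blogPost.errors["title"]
                assertEquals "blank", blogPost.errors["text"]

                blogPost = new BlogPost(title: "title", text: ObjectMother.bigText(2001))
                assertFalse blogPost.validate()
                assertEquals "maxSize", blogPost.errors["text"]
            }
        }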

  • mocking static method call to c# library class

    - by Joe
    This seems like an easy enough issue, but I can't seem to find the keywords to make my searches effective. I'm trying to unit test by mocking out all objects within this method call. I am able to do so for all of my own creations except for this one: public void MyFunc(MyVarClass myVar) { Image picture; ... picture = Image.FromStream(new MemoryStream(myVar.ImageStream)); ... } FromStream is a static call on the Image class (part of the .NET Framework). So how can I refactor my code to mock this out? I really don't want to provide an image stream to the unit test.
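
    Since Image.FromStream is static, the usual refactoring is to make the conversion injectable and default it to the real call: production behaviour is unchanged, and the test hands in a stub such as bytes => new Bitmap(1, 1). A sketch (the constructor-injection shape is an assumption about your class):

        using System;
        using System.Drawing;
        using System.IO;

        public class PictureProcessor
        {
            private readonly Func<byte[], Image> _loadImage;

            // production: default to the real static call
            public PictureProcessor()
                : this(bytes => Image.FromStream(new MemoryStream(bytes))) { }

            // tests: inject a stub instead of a real image stream
            public PictureProcessor(Func<byte[], Image> loadImage)
            {
                _loadImage = loadImage;
            }

            public void MyFunc(MyVarClass myVar)
            {
                Image picture = _loadImage(myVar.ImageStream);
                // ...
            }
        }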

  • Is there a library available which can easily record and replay the results of API calls?

    - by Billy ONeal
    I'm working on writing various things that call relatively complicated Win32 API functions. Here's an example: //Encapsulates calling NtQuerySystemInformation buffer management. WindowsApi::AutoArray NtDll::NtQuerySystemInformation( SystemInformationClass toGet ) const { AutoArray result; ULONG allocationSize = 1024; ULONG previousSize; NTSTATUS errorCheck; do { previousSize = allocationSize; result.Allocate(allocationSize); errorCheck = WinQuerySystemInformation(toGet, result.GetAs<void>(), allocationSize, &allocationSize); if (allocationSize <= previousSize) allocationSize = previousSize * 2; } while (errorCheck == 0xC0000004L); if (errorCheck != 0) { THROW_MANUAL_WINDOWS_ERROR(WinRtlNtStatusToDosError(errorCheck)); } return result; } //Client of the above. ProcessSnapshot::ProcessSnapshot() { using Dll::NtDll; NtDll ntdll; AutoArray systemInfoBuffer = ntdll.NtQuerySystemInformation( NtDll::SystemProcessInformation); BYTE * currentPtr = systemInfoBuffer.GetAs<BYTE>(); //Loop through the results, creating Process objects. SYSTEM_PROCESSES * asSysInfo; do { // Loop book keeping asSysInfo = reinterpret_cast<SYSTEM_PROCESSES *>(currentPtr); currentPtr += asSysInfo->NextEntryDelta; //Create the process for the current iteration and fill it with data. std::auto_ptr<ProcImpl> currentProc(ProcFactory( static_cast<unsigned __int32>(asSysInfo->ProcessId), this)); NormalProcess* nptr = dynamic_cast<NormalProcess*>(currentProc.get()); if (nptr) { nptr->SetProcessName(asSysInfo->ProcessName); } // Populate process threads for(ULONG idx = 0; idx < asSysInfo->ThreadCount; ++idx) { SYSTEM_THREADS& sysThread = asSysInfo->Threads[idx]; Thread thread( currentProc.get(), static_cast<unsigned __int32>(sysThread.ClientId.UniqueThread), sysThread.StartAddress); currentProc->AddThread(thread); } processes.push_back(currentProc); } while(asSysInfo->NextEntryDelta != 0); } My problem is in mocking out the NtDll::NtQuerySystemInformation method -- namely, that the data structure returned is complicated (Well, here it's actually relatively simple but it can be complicated), and writing a test which builds the data structure like the API call does can take 5-6 times as long as writing the code that uses the API. What I'd like to do is take a call to the API, and record it somehow, so that I can return that recorded value to the code under test without actually calling the API. The returned structures cannot simply be memcpy'd, because they often contain inner pointers (pointers to other locations in the same buffer). The library in question would need to check for these kinds of things, and be able to restore pointer values to a similar buffer upon replay. (i.e. check each pointer sized value if it could be interpreted as a pointer within the buffer, change that to an offset, and remember to change it back to a pointer on replay -- a false positive rate here is acceptable) Is there anything out there that does anything like this?

  • What's the best way to avoid try...catch...finally... in my unit tests?

    - by Bruce Li
    I'm writing many unit tests in VS 2010 with Microsoft Test. In each test class I have many test methods similar to the one below: [TestMethod] public void This_is_a_Test() { try { // do some test here // assert } catch (Exception ex) { // test failed, log error message in my log file and make the test fail } finally { // do some cleanup with different parameters } } When each test method looks like this, I feel it's kind of ugly. But so far I haven't found a good solution to make my test code cleaner, especially the cleanup code in the finally block. Could someone give me some advice on this? Thanks in advance.
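
    MSTest already records the exception and message when a test throws, so the catch-and-log block is redundant, and per-test cleanup has a dedicated hook that runs whether the test passes or fails. A sketch of the slimmed-down shape; if the cleanup needs per-test parameters, stash them in a field during the test and read them in the cleanup method:

        [TestClass]
        public class MyTests
        {
            [TestCleanup]
            public void CleanUp()
            {
                // runs after every test, pass or fail -- replaces 'finally'
            }

            [TestMethod]
            public void This_is_a_Test()
            {
                // do some test here
                // assert -- let failures propagate; MSTest logs them
            }
        }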
