Search Results

Search found 7802 results on 313 pages for 'unit tests'.

Page 94/313 | < Previous Page | 90 91 92 93 94 95 96 97 98 99 100 101  | Next Page >

  • Getting black images with selenium.captureScreenshot

    - by Lidia
    I'm executing Selenium tests with TestNG, started on a remote system with Selenium RC via Hudson (over an SSH connection). The remote system is Windows XP with MKS Toolkit installed, hence SSH. The tests are NOT executed as a Windows service. I've tried using both the captureScreenshot and captureEntirePageScreenshot methods. The first always produces a black image. The second creates the correct screenshot, but it only works on Firefox, and our tests usually pass on Firefox and fail in other browsers, so it is crucial to capture screenshots for the other browsers (mainly IE and Safari). The tests are run in parallel, with many browser windows open at the same time; I'm not certain whether this is what's causing the problem. Any thoughts will be appreciated.

    Read the article

  • How to test UI interaction of Silverlight dialogs?

    - by Bernard Vander Beken
    I am using the Silverlight 3.0 unit testing framework from the November 2009 Silverlight Toolkit. Apart from unit tests, it lets you write UI interaction tests, typically using AutomationPeer subclasses (e.g. ButtonAutomationPeer to interact with a Button). Are there AutomationPeer classes to test interaction with the following: OpenFileDialog, SaveFileDialog, MessageBox? In unit tests it would be possible to stub these, but for integration and browser testing it would be great to have this testable.

    Read the article

  • How to make automake less ugly?

    - by Brendan Long
    I recently learned how to use automake, and I'm somewhat annoyed that my compile commands went from a bunch of:

        g++ -O2 -Wall -c fileName.cpp

    to a bunch of:

        depbase=`echo src/Unit.o | sed 's|[^/]*$|.deps/&|;s|\.o$||'`;\
        g++ -DHAVE_CONFIG_H -I. -I./src -g -O2 -MT src/Unit.o -MD -MP -MF $depbase.Tpo -c -o src/Unit.o src/Unit.cpp &&\
        mv -f $depbase.Tpo $depbase.Po

    Is there any way to clean this up? I can usually pick out warning messages easily, but now the wall of text to read through is 3x bigger and much weirder. I know what my flags are, so having it just say "Compiling xxx.cpp" for each file would be perfect.

    Read the article

  • Too Many Public Methods Forced by Test Driven Development

    - by RoryG
    A very specific question from a novice to TDD: I separate my tests and my app into different packages. Thus, most of my app methods have to be public for the tests to access them. As I progress, it becomes obvious that some methods could become private, but if I make that change, the tests that access them won't work. Am I missing a step, or doing something wrong, or is this just one drawback of TDD?
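
    The question reads as Java, where a common answer is to keep tests in the same package but in a separate source tree, so package-private members stay testable. A minimal sketch of the .NET equivalent, assuming a hypothetical test assembly named MyApp.Tests: mark members internal and expose them to the test assembly with InternalsVisibleTo.

        using System.Runtime.CompilerServices;

        // Grants the test assembly access to internals. "MyApp.Tests" is a
        // hypothetical name; substitute your test project's assembly name.
        // (This line usually lives in Properties/AssemblyInfo.cs.)
        [assembly: InternalsVisibleTo("MyApp.Tests")]

        // internal instead of public: invisible to normal callers outside
        // this assembly, yet still reachable from MyApp.Tests.
        internal class AccountValidator
        {
            internal bool IsValid(string accountName)
            {
                return !string.IsNullOrEmpty(accountName);
            }
        }

    The member stays hidden from the application's public API while the tests keep compiling, which is usually the compromise people settle on.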

    Read the article

  • How do I get my ActivityUnitTestCases to sync with the MessageQueue thread and call my Handler?

    - by Ricardo Gladwell
    I'm writing unit tests for a ListActivity in Android that uses a handler to update a ListAdapter. While my activity works in the Android emulator, running the same code in a unit test doesn't update my adapter: calls to sendEmptyMessage do not call handleMessage in my activity's Handler. How do I get my ActivityUnitTestCase to sync with the MessageQueue thread and call my Handler? The code for the Activity is somewhat like this:

        public class SampleActivity extends ListActivity implements SampleListener {

            List samples = new ArrayList();

            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.sample_list);
                listView.setEmptyView(findViewById(R.id.empty));
            }

            private final Handler handler = new Handler() {
                @Override
                public void handleMessage(Message msg) {
                    // unit test never reaches here
                    sharesAdapter = new SampleAdapter(SampleActivity.this, samples);
                    setListAdapter(sharesAdapter);
                }
            };

            public void handleSampleUpdate(SampleEvent event) {
                samples.add(event.getSample());
                handler.sendEmptyMessage(0);
            }
        }

    The code for my unit test is somewhat like this:

        public class SampleActivityTest extends ActivityUnitTestCase<SampleActivity> {

            public SampleActivityTest() {
                super(SampleActivity.class);
            }

            @MediumTest
            public void test() throws Exception {
                final SampleActivity activity =
                        startActivity(new Intent(Intent.ACTION_MAIN), null, null);
                final ListView listView =
                        (ListView) activity.findViewById(android.R.id.list);
                activity.handleSampleUpdate(new SampleEvent(this));
                // unit test assert fails on this line:
                assertTrue(listView.getCount() == 1);
            }
        }

    Read the article

  • How to distinguish a NY "queens-style" street address from a ranged address, and an address with a unit#

    - by feroze
    I need to distinguish a Queens-style address from a valid ranged address and from an address with a unit#. For example:

        Queens style:       123-125 Some Street, NY
        Ranged address:     6414-6418 37th Ln SE, Olympia, WA 98503
        Address with unit#: 1990-A Gildersleeve Ave, Bronx, NY

    In the third case, A is a unit# at street address 1990. The unit# might be a number as well, e.g. 1990-12. A ranged address identifies a range of addresses on a street, not a unique deliverable address. So the question is: is there an easy way to identify the Queens-style address from the other cases?
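
    A rough heuristic sketch, not a definitive parser: split the hyphenated house number and classify by its suffix. An alphabetic suffix signals a unit#; two full numbers signal either a range or a Queens-style number, and since hyphenated house numbers are a New York City convention, locality has to break that tie. All names and the locality list below are placeholders:

        using System;
        using System.Text.RegularExpressions;

        enum AddressKind { Unit, QueensStyle, Range, Unknown }

        static class AddressClassifier
        {
            static readonly Regex HouseNumber =
                new Regex(@"^(\d+)-(\w+)\s", RegexOptions.Compiled);

            // Placeholder locality list -- real code would need proper
            // NYC-borough detection (e.g. from ZIP code ranges).
            static readonly string[] NycLocalities =
                { "Queens", "Flushing", "Jamaica", "Bronx", "Brooklyn" };

            public static AddressKind Classify(string address, string city)
            {
                Match m = HouseNumber.Match(address);
                if (!m.Success)
                    return AddressKind.Unknown;

                int suffix;
                if (!int.TryParse(m.Groups[2].Value, out suffix))
                    return AddressKind.Unit;   // "1990-A ..." -> unit letter

                // Both parts numeric: hyphenated house numbers are an NYC
                // convention, so locality decides Queens-style vs. range.
                // Caveat: a numeric unit# like "1990-12" cannot be told
                // apart from a range by the number alone.
                foreach (string b in NycLocalities)
                    if (string.Equals(b, city, StringComparison.OrdinalIgnoreCase))
                        return AddressKind.QueensStyle;
                return AddressKind.Range;
            }
        }

    The numeric-unit# case shows why the number alone can never be sufficient; some locality or parity heuristic (range endpoints usually share parity) has to carry the rest.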

    Read the article

  • Measurement conversion on the fly

    - by ikadewi
    Hi all, I'd like to ask about measurement conversion on the fly. Here are the details.

    Requirement: display unit measurements according to a user setting.

    Concerns: only the basic (neutral) unit of measurement is stored in the database, and it is decided once. The grid control is bound directly to our business objects, which makes converting values complicated.

    Problem: how do I display a different unit of measurement (following the setting) when the controls are bound to the business objects?

    Your kind assistance will be appreciated. Thank you, ikadewi
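
    A minimal sketch of the usual answer, with illustrative names and factors: keep the neutral value on the business object and convert only at the presentation edge, so the binding itself never changes.

        using System;
        using System.Collections.Generic;

        // Converts between the neutral stored unit and the display unit
        // chosen in settings. The factor table contents are illustrative.
        class UnitConverter
        {
            readonly Dictionary<string, double> fromNeutral =
                new Dictionary<string, double>
                {
                    { "m",  1.0 },        // the neutral unit itself
                    { "ft", 3.2808399 },
                    { "in", 39.3700787 },
                };

            public double ToDisplay(double neutralValue, string displayUnit)
            {
                return neutralValue * fromNeutral[displayUnit];
            }

            public double ToNeutral(double displayValue, string displayUnit)
            {
                return displayValue / fromNeutral[displayUnit];
            }
        }

        class Demo
        {
            static void Main()
            {
                var converter = new UnitConverter();
                // The business object keeps 2.5 (neutral metres); only
                // the rendered value follows the user's setting.
                Console.WriteLine(converter.ToDisplay(2.5, "ft")); // 8.2021
            }
        }

    The grid's cell-formatting/parsing events (or a value converter, depending on the UI stack) would call ToDisplay and ToNeutral, leaving the bound business object in the neutral unit throughout.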

    Read the article

  • Opposite of Bloom filter?

    - by abc
    Hi, I'm trying to optimize a piece of software which basically runs millions of tests. These tests are generated in such a way that there can be some repetitions, and of course I don't want to spend time running tests which I've already run if I can avoid it efficiently. So I'm thinking about using a Bloom filter to store the tests which have already been run. However, the Bloom filter errs on the unsafe side for me: it gives false positives, i.e. it may report that I've run a test which I actually haven't. Although this could be acceptable in the scenario I'm working on, I was wondering if there's an equivalent of a Bloom filter that errs on the opposite side, that is, one giving only false negatives. I've skimmed through the literature without any luck.
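
    One structure with exactly that error profile is a fixed-size fingerprint cache: each slot stores the hash of the last test mapped to it, and a colliding insert simply overwrites. Lookups then answer "definitely ran" or "maybe not" -- false negatives on eviction, but no false positives short of a full 64-bit hash collision. A minimal sketch:

        // Fixed-size cache of test fingerprints. Unlike a Bloom filter it
        // can forget (a colliding insert overwrites), so it yields false
        // negatives but -- modulo 64-bit hash collisions -- no false positives.
        class SeenCache
        {
            readonly ulong[] slots;

            public SeenCache(int capacity)
            {
                slots = new ulong[capacity];
            }

            // FNV-1a, 64-bit; any decent hash works. Zero is reserved to
            // mean "empty slot", so a zero hash is remapped to 1.
            static ulong Fingerprint(string key)
            {
                ulong h = 14695981039346656037UL;
                foreach (char c in key)
                {
                    h ^= c;
                    h *= 1099511628211UL;
                }
                return h == 0UL ? 1UL : h;
            }

            // true  -> this test was run (and its record not yet evicted)
            // false -> it may or may not have been run; re-running is safe
            public bool HasRun(string key)
            {
                ulong f = Fingerprint(key);
                return slots[(int)(f % (ulong)slots.Length)] == f;
            }

            public void Add(string key)
            {
                ulong f = Fingerprint(key);
                slots[(int)(f % (ulong)slots.Length)] = f; // may evict another
            }
        }

    An eviction only costs a repeated test run; a test is never wrongly skipped, which is the asymmetry being asked for.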

    Read the article

  • Why is RSpec so slow under Rails?

    - by Adrian Dunston
    Whenever I run RSpec tests for my Rails application, it takes forever and a day of overhead before it actually starts running tests. Why is RSpec so slow? Is there a way to speed up Rails' initial load, or to single out the part of my Rails app I need (e.g. ActiveRecord stuff only) so it doesn't load absolutely everything just to run a few tests?

    Read the article

  • Easymock vs Mockito: Design vs Maintainability?

    - by RAbraham
    One way of thinking about this:

    - If we care about the design of the code, then EasyMock is the better choice, as it gives you feedback through its concept of expectations.

    - If we care about the maintainability of tests (easier to read and write, and less brittle tests that are not affected much by change), then Mockito seems the better choice.

    My questions are: if you have used EasyMock in large-scale projects, do you find that your tests are harder to maintain? And what are the limitations of Mockito (other than endo-testing)?

    Read the article

  • Programmatically Gathering NUnit Results

    - by skb
    Hi. I am running some NUnit tests automatically when my nightly build completes. I have a console application which detects the new build, copies the built MSIs to a local folder, and deploys all of my components to a test server. After that, I have a bunch of tests in NUnit DLLs that I run by executing "nunit-console.exe" using Process/ProcessStartInfo. My question is: how can I programmatically get the numbers for total/successful/failed tests?
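
    Assuming NUnit 2.x-era tooling: one route is to point the runner at an XML output file (e.g. nunit-console.exe MyTests.dll /xml:TestResult.xml) and read the counts off the root <test-results> element. A minimal sketch -- check the attribute names against your NUnit version:

        using System;
        using System.Xml;

        class NUnitResultReader
        {
            static void Main()
            {
                // Written by: nunit-console.exe MyTests.dll /xml:TestResult.xml
                var doc = new XmlDocument();
                doc.Load("TestResult.xml");

                XmlElement root = doc.DocumentElement; // <test-results ...>
                int total    = int.Parse(root.GetAttribute("total"));
                int failures = int.Parse(root.GetAttribute("failures"));
                int errors   = int.Parse(root.GetAttribute("errors"));

                Console.WriteLine("Passed {0}, failed {1}",
                    total - failures - errors, failures + errors);
            }
        }

    Note also that nunit-console's process exit code is conventionally the number of failed tests (negative values signal runner errors), so Process.ExitCode already gives a cruder pass/fail signal -- worth verifying on your version.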

    Read the article

  • How do I construct a more complex single LINQ to XML query?

    - by Cyberherbalist
    I'm a LINQ newbie, so the following might turn out to be very simple and obvious once it's answered, but I have to admit that the question is kicking my arse. Given this XML:

        <measuresystems>
          <measuresystem name="SI" attitude="proud">
            <dimension name="mass" dim="M" degree="1">
              <unit name="kilogram" symbol="kg">
                <factor name="hundredweight" foreignsystem="US" value="45.359237" />
                <factor name="hundredweight" foreignsystem="Imperial" value="50.80234544" />
              </unit>
            </dimension>
          </measuresystem>
        </measuresystems>

    I can query for the value of the conversion factor between kilogram and US hundredweight using the following four successive LINQ to XML queries, but surely there is a way to condense them into a single complex query?

        XElement mss = XElement.Load(fileName);

        IEnumerable<XElement> ms = from el in mss.Elements("measuresystem")
                                   where (string)el.Attribute("name") == "SI"
                                   select el;

        IEnumerable<XElement> dim = from e2 in ms.Elements("dimension")
                                    where (string)e2.Attribute("name") == "mass"
                                    select e2;

        IEnumerable<XElement> unit = from e3 in dim.Elements("unit")
                                     where (string)e3.Attribute("name") == "kilogram"
                                     select e3;

        IEnumerable<XElement> factor = from e4 in unit.Elements("factor")
                                       where (string)e4.Attribute("name") == "hundredweight"
                                             && (string)e4.Attribute("foreignsystem") == "US"
                                       select e4;

        foreach (XElement ex in factor)
        {
            Console.WriteLine((string)ex.Attribute("value"));
        }
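
    For what it's worth, the four stages collapse into one query expression by chaining from clauses (each compiles to a SelectMany) with a where after each step; a sketch against the XML above:

        // Same usings as the original: System, System.Collections.Generic,
        // System.Linq, System.Xml.Linq.
        XElement mss = XElement.Load(fileName);

        IEnumerable<string> values =
            from ms in mss.Elements("measuresystem")
            where (string)ms.Attribute("name") == "SI"
            from dim in ms.Elements("dimension")
            where (string)dim.Attribute("name") == "mass"
            from u in dim.Elements("unit")
            where (string)u.Attribute("name") == "kilogram"
            from f in u.Elements("factor")
            where (string)f.Attribute("name") == "hundredweight"
                  && (string)f.Attribute("foreignsystem") == "US"
            select (string)f.Attribute("value");

        foreach (string v in values)
            Console.WriteLine(v); // 45.359237

    Each from/where pair plays the role of one of the intermediate IEnumerable variables, so the shape of the original code maps onto the single query one-to-one.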

    Read the article

  • Grails - Link checking as part of continuous integration

    - by Reverend Gonzo
    So, we have a Grails app set up with a Hudson CI build process. We're running unit tests and integration tests, and are about to set up Selenium for some functional tests as well. However, are there any good ways of fully testing a site's links to make sure nothing has broken in a release? I know there are link checkers in general, but I'd like it to be part of the build process, so a build outright fails if something isn't right.

    Read the article

  • PHP specifying a fixed include source for scripts in different directories

    - by Extrakun
    I am currently doing unit testing, and I use folders to organize my test cases. All cases pertaining to the management of user accounts, for example, go under \tests\accounts. Over time, there are more test cases, and I have begun to separate the cases by type, such as \tests\accounts\create, \tests\accounts\update, etc. However, one annoying problem is that I have to specify the path to a set of common includes:

        include_once ("../../../../autoload.php");
        include_once ("../../../../init.php");

    A test case directly in \tests\accounts\ would require a change to the includes (one less directory level down). Is there any way to have them locate my two common includes automatically? I understand I could set include paths in my PHP configuration, or use server environment variables, but I would like to avoid such solutions, as they make the application less portable and couple it to a layer the programmer can't control (some web hosts don't allow changes to PHP configuration settings, for example).

    Read the article

  • Jenkins plugin for different types of slaves

    - by user1195996
    We have some tests that need to be run on multiple types of specific hardware. It's possible that these tests might pass on some pieces of hardware but fail on others, and we want to know where they work and where they fail. So, for certain tests, we would like to provide a list of hardware they need to be tested on. We'd like to put all the needed hardware in a pool that Jenkins has access to, and then have Jenkins run the right tests on the right hardware, depending on the hardware list that comes with the test. And of course we'd like to keep track of which test worked where. Is there a Jenkins plugin that can handle this sort of thing? Has anyone else solved this sort of problem?

    Read the article

  • Smoke testing a .NET web application

    - by pdr
    I cannot believe I'm the first person to go through this thought process, so I'm wondering if anyone can help me out with it. Current situation: developers write a web site, operations deploy it. Once deployed, a developer smoke tests it to make sure the deployment went smoothly. To me this feels wrong; it essentially means it takes two people to deploy an application, and in our case those two people are on opposite sides of the planet, so timezones come into play and cause havoc.

    But the fact remains that developers know what the minimum set of tests is, and that may change over time (particularly for the web service portion of our app). Operations, with all due respect to them (and they would say this themselves), are button-pushers who need a set of instructions to follow.

    The manual solution is to document the test cases and have operations follow that document each time they deploy. That sounds painful; plus they may be deploying different versions to different environments (specifically UAT and production) and may need a different set of instructions for each. On top of this, one of our near-future plans is an automated daily-deploy environment, so then we'll have to instruct a computer how to deploy a given version of our app, and I would dearly like to add instructions for how to smoke test it.

    Now, developers are better at documenting instructions for computers than they are for people, so the obvious solution seems to be a combination of NUnit (I know these aren't unit tests per se, but it is a built-for-purpose test runner) and either the WatiN or Selenium APIs to run through the obvious browser steps and call the web service, then explain to the operations guys how to run those tests. I can do that; I have mostly done it already.

    But wouldn't it be nice if I could make that process simpler still? At this point, the operations guys and the computer are going to have to know which set of tests relates to which version of the app, and tell the NUnit runner which base URL it should point to (say, www.example.com = v3.2 or test.example.com = v3.3). Wouldn't it be nicer if the test runner itself could be given a base URL and a way to download, say, a zip file, unpack it and edit a configuration file automatically before running any test fixtures it found in there? Is there an open source app that would do that? Is there a need for one? Is there a solution using something other than NUnit, maybe FitNesse?

    For the record, I'm looking at .NET-based tools first because most of the developers are primarily .NET developers, but we're not married to it. If such a tool exists using other languages to write the tests, we'll happily adapt, as long as there is a test runner that works on Windows.
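
    As a sketch of the "hand the runner a base URL" piece, assuming NUnit 2.5+ and a hypothetical SMOKE_BASE_URL environment variable set by the deploy job: a fixture that asserts the key pages respond, runnable unchanged by a developer, an operations engineer, or a scheduled build.

        using System;
        using System.Net;
        using NUnit.Framework;

        [TestFixture]
        public class SmokeTests
        {
            // Set by the deploy job, e.g. SMOKE_BASE_URL=http://test.example.com
            static readonly string BaseUrl =
                Environment.GetEnvironmentVariable("SMOKE_BASE_URL")
                ?? "http://localhost";

            [TestCase("/")]
            [TestCase("/service.asmx")]   // hypothetical endpoints
            public void PageResponds(string path)
            {
                var request = (HttpWebRequest)WebRequest.Create(BaseUrl + path);
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    // GetResponse throws WebException on 4xx/5xx, which
                    // also fails the test, so this assert is a formality.
                    Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
                }
            }
        }

    Pointing the same binary at UAT or production is then a one-variable change, which is most of what the versioned-instructions problem reduces to.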

    Read the article

  • JUnit test failing - complaining of missing data that was just inserted

    - by Collin Peters
    I have an extremely odd problem in my JUnit tests that I just can't seem to nail down. I have a multi-module Java webapp project with a fairly standard structure (DAOs, service classes, etc.). Within this project I have a 'core' project which contains some abstracted setup code that inserts a test user along with the necessary items for a user (in this case an 'enterprise': a user must belong to an enterprise, and this is enforced at the database level).

    Fairly simple so far... but here is where the strangeness begins. Some tests fail to run and throw a database exception complaining that a user cannot be inserted because an enterprise does not exist. But the enterprise was just created in the preceding line of code, and there were no errors on its insertion! Stranger yet, if the test class is run by itself, everything works fine; it is only when the test is run as part of the project that it fails. And the exact same abstracted code was run by 10+ tests before the one that fails!

    I have been banging my head against a wall with this for days and haven't really made any progress. I'm not even sure what information to offer up to help diagnose it. I'm using JUnit 4.4, Spring 2.5.6, iBatis 2.3.0 and PostgreSQL 8.3. Switching from org.apache.commons.dbcp.BasicDataSource to org.springframework.jdbc.datasource.DriverManagerDataSource changed the problem: with DriverManagerDataSource the tests work for the first time, but now a lot of data suddenly isn't rolled back in the database; it leaves everything behind, all with no errors. The tests fail when run via both Eclipse and Maven. Please ask for any info which may help me solve my problem!

    Read the article

  • correct approach to storing test completion status in a database

    - by John
    I'm developing a website (using Django and MySQL). I have a Tests table and a User table. There are 50 tests in the table, and each user completes them at their own pace. How do I store the status of the tests in my DB?

    One idea that came to mind is to create an additional column in the User table containing the completed test ids, separated by a comma or some other delimiter:

        userid | username | testscompleted
        1      | john     | 1, 5, 34
        2      | tom      | 1, 10, 23, 25

    Another idea was to create a separate table to store userid and testid pairs. Then I'll have only 2 columns, but thousands of rows (number of tests * number of users), and they will keep growing:

        userid | testid
        1      | 1
        1      | 5
        2      | 1
        1      | 34
        2      | 10

    Read the article

  • MSBuild CreateItem condition include based on config file

    - by Mac
    I'm trying to select a list of test DLLs that have corresponding config files:

        MyTest.Tests.dll
        MyTest.Tests.config

    I have to use CreateItem, as the DLLs are not available at the time the script loads:

        <CreateItem Include="$(AssemblyFolder)\*.Tests.dll" Condition="???">
          <Output TaskParameter="Include" ItemName="TestBinariesWithConfig"/>
        </CreateItem>

    Is there a condition I can use, or is this the wrong approach? Thanks, Mac
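
    One approach -- a sketch worth verifying against your build, since it relies on MSBuild task batching -- is to create the full item list first, then filter it with a second CreateItem whose Condition batches per item and tests Exists on the transformed metadata:

        <CreateItem Include="$(AssemblyFolder)\*.Tests.dll">
          <Output TaskParameter="Include" ItemName="AllTestBinaries"/>
        </CreateItem>

        <!-- The %() metadata reference makes the task run once per item;
             only dlls with a sibling .config file are passed through. -->
        <CreateItem Include="@(AllTestBinaries)"
                    Condition="Exists('%(AllTestBinaries.RootDir)%(AllTestBinaries.Directory)%(AllTestBinaries.Filename).config')">
          <Output TaskParameter="Include" ItemName="TestBinariesWithConfig"/>
        </CreateItem>

    %(Filename) of MyTest.Tests.dll is "MyTest.Tests", so the probe resolves to MyTest.Tests.config next to the assembly.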

    Read the article

  • Recommendations for Continuous integration for Mercurial/Kiln + MSBuild + MSTest

    - by TDD
    We have our source code stored in Kiln/Mercurial repositories; we use MSBuild to build our product, and we have unit tests that use MSTest (Visual Studio unit tests). What solutions exist for implementing a continuous integration machine (i.e. a build machine)? The requirements are:

    - A build should be kicked off when necessary (i.e. code has changed in the repositories we care about).
    - Before the actual build, the latest version of the source code must be acquired from the repository we are building from.
    - The build must build the entire product.
    - The build must build all unit tests.
    - The build must execute all unit tests.
    - A summary of success/failure must be sent out after the build has finished; this must include information about the build itself, but also about which unit tests failed and which succeeded.
    - The summary must contain which changesets were in this build that were not yet in the previous successful (!) build.
    - The system must be configurable so that it can build from multiple branches/repositories.

    Ideally, this system would run on a single box (our product isn't that big) without any server components. What solutions are currently available? What are their pros and cons? From the list above, what can be done and what cannot? Thanks

    Read the article

  • How to do concurrent modification testing for a Grails application

    - by werner5471
    I'd like to run tests that simulate users modifying certain data at the same time in a Grails application. Are there any plug-ins, tools or mechanisms I can use to do this efficiently? They don't have to be Grails-specific, but it should be possible to fire multiple actions in parallel. I'd prefer to run the tests at the functional level (so far I'm using Selenium for other tests) to see the results from the user's perspective. Of course this can be done in addition to integration testing, if you'd recommend running concurrent modification tests at the integration level as well.

    Read the article

  • Fortran I/O error

    - by jpcgandre
    I get this error when running:

        forrtl: severe (256): unformatted I/O to unit open for formatted transfers, unit 27, file C:\Abaqus_JOBS\w.txt

    The error occurs at the beginning of the analysis. At the start, the file w.txt is created but is empty, so the error may be related to the fact that I want to read from an empty file. My code is:

        OPEN(27, FILE = "C:/Abaqus_JOBS/w.txt", status = "UNKNOWN")
        READ(27, *, iostat=stat) w
        IF (stat .NE. 0) CALL del_file(27, stat)

        SUBROUTINE del_file(uFile, stat)
        IMPLICIT NONE
        INTEGER uFile, stat
        C If the unit is not open, stat will be non-zero
        CLOSE(unit=uFile, status='delete', iostat=stat)
        END SUBROUTINE

    Ref: Close multiple files. If you agree with my opinion about the cause of the error, is there a way to solve it? Thanks

    Read the article

  • Is there a way to undo Mocha stubbing of any_instance?

    - by Steve Weet
    Within my controller specs I am stubbing out valid? for some routing tests (based on Ryan Bates' nifty_scaffold), as follows:

        it "create action should render new template when model is invalid" do
          Company.any_instance.stubs(:valid?).returns(false)
          post :create
          response.should render_template(:new)
        end

    This is fine when I test the controllers in isolation. I also have the following in my model spec:

        it "is valid with valid attributes" do
          @company.should be_valid
        end

    Again, this works fine when tested in isolation. The problem comes if I run specs for both models and controllers: the model test always fails, because the valid? method has been stubbed out. Is there a way for me to remove the stubbing of any_instance when the controller test is torn down? I have got around the problem by running the tests in reverse alphabetical order, so that the model tests run before the controllers, but I really don't like my tests being sequence-dependent.

    Read the article

  • convert an int to a list of individual digits faster?

    - by user478514
    All, I want to define a converter from an int (987654321) to the list of its digits ([9, 8, 7, 6, 5, 4, 3, 2, 1]). If the number has fewer than 9 digits, for example 10, the list should be zero-padded to [0, 0, 0, 0, 0, 0, 0, 1, 0]; if it has more than 9 digits, for example 9987654321, the list should be [9, 9, 8, 7, 6, 5, 4, 3, 2, 1].

        >>> i
        987654321
        >>> l
        [9, 8, 7, 6, 5, 4, 3, 2, 1]
        >>> z = [0]*(len(unit) - len(l))
        >>> z.extend(l)
        >>> l = z
        >>> unit
        [100000000, 10000000, 1000000, 100000, 10000, 1000, 100, 10, 1]
        >>> sum([x*y for x, y in zip(l, unit)])
        987654321
        >>> int("".join([str(x) for x in l]))
        987654321
        >>> l1 = [int(x) for x in str(i)]
        >>> z = [0]*(len(unit) - len(l1))
        >>> z.extend(l1)
        >>> l1 = z
        >>> l1
        [9, 8, 7, 6, 5, 4, 3, 2, 1]
        >>> a = [i//x for x in unit]
        >>> b = [a[x] - a[x-1]*10 for x in range(9)]
        >>> if len(b) == len(a): b[0] = a[0]  # fix the a[-1] issue
        ...
        >>> b
        [9, 8, 7, 6, 5, 4, 3, 2, 1]

    I tested the solutions above, but found they may not be as fast or simple as I'd like, and they may have a length-related bug. Can anyone share a better solution for this kind of conversion? Thanks!

    Read the article

< Previous Page | 90 91 92 93 94 95 96 97 98 99 100 101  | Next Page >