Search Results

Search found 12476 results on 500 pages for 'unit testing'.


  • Testing code that uses SoftReference<T>

    - by bmargulies
    To get any code that uses SoftReference<T> fully tested, one must come up with some way to test the 'yup, it's been nulled' case. One might more or less mock this by using a 'for-test' code path to force the reference to be null, but that won't manage the queue exactly as the GC does. I wonder if anyone out there can share experience in setting up a repeatable, controlled environment in which the GC is, in fact, provoked into collecting and nulling?
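
    For the record, there is one repeatable way to provoke real clearing: the JVM guarantees that all softly reachable objects are cleared before an OutOfMemoryError is thrown, so allocating until that point forces the 'nulled' case. A minimal sketch (run with a small heap, e.g. -Xmx32m, so it terminates quickly; note it exercises clearing, while queue-processing timing is still up to the GC):

        import java.lang.ref.SoftReference;
        import java.util.ArrayList;
        import java.util.List;

        // Sketch: provoke the GC into clearing a SoftReference by creating
        // memory pressure. Softly reachable objects are guaranteed to be
        // cleared before an OutOfMemoryError is thrown.
        public class SoftReferenceClearing {
            public static void main(String[] args) {
                SoftReference<byte[]> ref = new SoftReference<byte[]>(new byte[1024]);
                List<byte[]> pressure = new ArrayList<byte[]>();
                try {
                    while (ref.get() != null) {
                        pressure.add(new byte[1024 * 1024]); // pile on memory pressure
                    }
                } catch (OutOfMemoryError e) {
                    pressure.clear(); // release so the test itself can continue
                }
                // Either the loop observed the cleared reference, or the OOME
                // path guarantees it was cleared before the error was thrown.
                System.out.println("cleared: " + (ref.get() == null));
            }
        }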

    Read the article

  • Test assertions for tuples with floats

    - by Space_C0wb0y
    I have a function that returns a tuple that, among other things, contains a float value. Usually I use assertAlmostEquals to compare floats, but that does not work on tuples, and the tuple contains other data types as well. Currently I am asserting every element of the tuple individually, but that becomes unmanageable for a list of such tuples. Is there any good way to write assertions for such cases?

    Read the article

  • Website redirect during maintenance but still with testing access

    - by jme
    I have an online store that I have recently rewritten most of, and I would like to upload it to my server. While the maintenance is taking place, I would like to redirect all visitors to an "under construction" page (easily done with PHP or Apache .htaccess, etc.). The issue is that I would like to test everything when I upload it, so I still need access while blocking everyone else. I was thinking of some PHP page that is open to all, with a cookie flag I could set for just myself? What is the best way to do this? Thanks, jme

    Read the article

  • Is Assert.Fail() considered bad practice?

    - by Mendelt
    I use Assert.Fail a lot when doing TDD. I'm usually working on one test at a time, but when I get ideas for things I want to implement later, I quickly write an empty test where the name of the test method indicates what I want to implement, as a sort of todo-list. To make sure I don't forget, I put an Assert.Fail() in the body.

    When trying out xUnit.Net I found they hadn't implemented Assert.Fail. Of course you can always Assert.IsTrue(false), but this doesn't communicate my intention as well. I got the impression Assert.Fail wasn't implemented on purpose. Is this considered bad practice? If so, why?

    @Martin Meredith: That's not exactly what I do. I do write a test first and then implement code to make it work. Usually I think of several tests at once, or I think about a test to write when I'm working on something else. That's when I write an empty failing test to remember. By the time I get to writing the test, I neatly work test-first.

    @Jimmeh: That looks like a good idea. Ignored tests don't fail, but they still show up in a separate list. Have to try that out.

    @Matt Howells: Great idea. NotImplementedException communicates intention better than Assert.Fail() in this case.

    @Mitch Wheat: That's what I was looking for. It seems it was left out to prevent it being abused in the way I abuse it.

    Read the article

  • Log information inside a JUnit Suite

    - by Alex Marinescu
    I'm currently trying to write the total number of failed tests from a JUnit suite into a log file. My test suite is defined as follows:

        @RunWith(Suite.class)
        @SuiteClasses({Class1.class, Class2.class etc.})
        public class SimpleTestSuite {}

    I tried to define a rule which would increase the total number of errors when a test fails, but apparently my rule is never called:

        @Rule
        public MethodRule logWatchRule = new TestWatchman() {
            public void failed(Throwable e, FrameworkMethod method) {
                errors += 1;
            }
            public void succeeded(FrameworkMethod method) {
            }
        };

    Any ideas on what I should do to achieve this behaviour?
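
    A side note on why the rule never fires: @Rule is only honoured inside individual test classes, not on the suite class itself. A sketch of one alternative, assuming the suite can be launched programmatically: run it through JUnitCore and attach a RunListener, which sees every failure in the suite.

        import org.junit.runner.JUnitCore;
        import org.junit.runner.Result;
        import org.junit.runner.notification.Failure;
        import org.junit.runner.notification.RunListener;

        // Sketch: count (and log) failures across a whole suite by running
        // it through JUnitCore with a listener, instead of a per-class @Rule.
        public class SuiteRunner {
            public static void main(String[] args) {
                JUnitCore core = new JUnitCore();
                core.addListener(new RunListener() {
                    @Override
                    public void testFailure(Failure failure) {
                        System.err.println("FAILED: " + failure.getDescription());
                    }
                });
                Result result = core.run(SimpleTestSuite.class);
                // Result already aggregates the total, so a counter is optional.
                System.out.println("Total failures: " + result.getFailureCount());
            }
        }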

    Read the article

  • Multiple asserts in single test?

    - by Gern Blandston
    Let's say I want to write a function that validates an email address with a regex. I write a little test to check my function, write the actual function, and make it pass. However, I can come up with a bunch of different ways to test the same function (e.g. name@example.com; first.last@example.co.uk; test.test.com, etc.). Do I put all the variations I need to check into the same, single test with several asserts, or do I write a new test for every single case I can think of? Thanks!
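
    As an illustration of a middle ground - a Java/JUnit 4 sketch, where EmailValidator.isValidEmail is a hypothetical stand-in for the function under test - a parameterized test gives each input its own named case, so one failing address does not hide the rest:

        import static org.junit.Assert.assertEquals;

        import java.util.Arrays;
        import java.util.Collection;

        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.junit.runners.Parameterized;
        import org.junit.runners.Parameterized.Parameters;

        // Sketch: each input/expectation pair runs as a separate test case.
        @RunWith(Parameterized.class)
        public class EmailValidationTest {
            @Parameters
            public static Collection<Object[]> cases() {
                return Arrays.asList(new Object[][] {
                    { "name@example.com", true },
                    { "first.last@example.co.uk", true },
                    { "test.test.com", false }, // no @ at all
                });
            }

            private final String input;
            private final boolean expected;

            public EmailValidationTest(String input, boolean expected) {
                this.input = input;
                this.expected = expected;
            }

            @Test
            public void validatesEmail() {
                // EmailValidator.isValidEmail is the hypothetical function under test.
                assertEquals(expected, EmailValidator.isValidEmail(input));
            }
        }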

    Read the article

  • Testing file uploading and downloading speed using FTP

    - by Toman
    Hi all, I am working on a desktop application in Java. In my application I have to perform a speed test which shows the file upload and download speed. For the upload test I am uploading a small test file to an FTP server and, based on the time taken, calculating the upload speed. Similarly, I am downloading a test file from the server and calculating the download speed. But the result I am getting doesn't match the actual FTP upload and download speed. It seems that establishing the connection to the FTP server is increasing the measured time, so the resulting speed I calculate is too low. Could you suggest any link or some way to get closer to the real upload and download speed? Thanks for all your valuable suggestions.
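
    A sketch of the timing fix, assuming Apache Commons Net's FTPClient (host, credentials and file name are placeholders): connect and log in outside the measured section, start the clock only around the transfer itself, and use a payload large enough that fixed per-transfer overhead becomes negligible.

        import java.io.ByteArrayInputStream;
        import java.io.IOException;
        import org.apache.commons.net.ftp.FTPClient;

        // Sketch: measure only the data transfer, not connection setup.
        public class FtpSpeedTest {
            public static double uploadKBps(String host, String user, String pass)
                    throws IOException {
                FTPClient ftp = new FTPClient();
                ftp.connect(host);     // connection/login excluded from timing
                ftp.login(user, pass);
                byte[] payload = new byte[512 * 1024]; // larger payloads average out overhead
                try {
                    long start = System.nanoTime();
                    ftp.storeFile("speedtest.bin", new ByteArrayInputStream(payload));
                    long elapsed = System.nanoTime() - start;
                    return (payload.length / 1024.0) / (elapsed / 1e9);
                } finally {
                    ftp.logout();
                    ftp.disconnect();
                }
            }
        }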

    Read the article

  • How to test a site rigorously?

    - by Sarfraz
    Hello, I recently created a big portal site, and it's time to put it to the test. How do you test a site rigorously? What are the ways and tools for that? Can we somehow mimic hundreds of virtual users visiting the site to see how it handles load? The test should cover both security and speed. Thanks in advance.
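
    For load, dedicated tools are the usual route - Apache JMeter or ApacheBench (ab) can simulate hundreds of concurrent users - and security testing is its own discipline (the OWASP testing guide is a common starting point). As a rough illustration of the load side only, here is a minimal Java sketch using just the standard library; the URL, thread count and request count are placeholders:

        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicLong;

        // Sketch: fire N concurrent requests and report the average latency.
        // A real load test should use a dedicated tool such as JMeter.
        public class TinyLoadTest {
            public static void main(String[] args) throws Exception {
                final URL url = new URL("http://localhost/"); // site under test
                final AtomicLong totalMillis = new AtomicLong();
                ExecutorService pool = Executors.newFixedThreadPool(50);
                final int requests = 500;
                for (int i = 0; i < requests; i++) {
                    pool.execute(new Runnable() {
                        public void run() {
                            try {
                                long start = System.currentTimeMillis();
                                HttpURLConnection conn =
                                    (HttpURLConnection) url.openConnection();
                                conn.getResponseCode(); // forces the request
                                conn.disconnect();
                                totalMillis.addAndGet(System.currentTimeMillis() - start);
                            } catch (Exception e) {
                                e.printStackTrace();
                            }
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(5, TimeUnit.MINUTES);
                System.out.println("avg ms: " + totalMillis.get() / (double) requests);
            }
        }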

    Read the article

  • Using Assert to compare two objects

    - by baron
    Hi everyone, Writing test cases for my project, one test I need is for deletion. This may not exactly be the right way to go about it, but I've stumbled upon something which isn't making sense to me. The code is like this:

        [Test]
        private void DeleteFruit() {
            BuildTestData();
            var f1 = new Fruit("Banana", 1, 1.5);
            var f2 = new Fruit("Apple", 1, 1.5);
            fm.DeleteFruit(f1, listOfFruit);
            Assert.That(listOfFruit[1] == f2);
        }

    Now the fruit object I create on line 5 is the object that I know should be in that position (with this specific dataset) after f1 is deleted. Also, if I sit and debug and manually compare the objects listOfFruit[1] and f2, they are the same. But that Assert line fails. What gives?

    Read the article

  • testing helpers with 'haml_tag'

    - by crankharder
        module FooHelper
          def foo
            haml_tag(:div) do
              haml_content("bar")
            end
          end
        end

    When I test this I get:

        NoMethodError: undefined method `haml_tag'

    This code is perfectly valid and works in the development/production environments. It seems to be something to do with the Haml helpers not being properly loaded in the test environment. Thanks!

    Read the article

  • JUnit terminates child threads

    - by Marco
    Hi to all, when I test the execution of a method that creates a child thread, the JUnit test ends before the child thread does and kills it. How do I force JUnit to wait for the child thread to complete its execution? Thanks
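
    JUnit itself has no built-in wait; the test method simply returns and the runner tears the fixture down. The usual fix is to make the test block until the child signals completion - a minimal sketch with a CountDownLatch (the Runnable body stands in for whatever the method under test spawns):

        import static org.junit.Assert.assertTrue;

        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.TimeUnit;

        import org.junit.Test;

        public class ChildThreadTest {
            @Test
            public void waitsForChildThread() throws InterruptedException {
                final CountDownLatch done = new CountDownLatch(1);
                Thread child = new Thread(new Runnable() {
                    public void run() {
                        try {
                            // ... the work the method under test spawns ...
                        } finally {
                            done.countDown(); // signal completion
                        }
                    }
                });
                child.start();
                // Block the test until the child finishes (with a safety timeout).
                assertTrue("child thread did not finish in time",
                        done.await(10, TimeUnit.SECONDS));
            }
        }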

    Read the article

  • Get/save parameters to an expected JMock method call?

    - by Tayeb
    Hi, I want to test an "Adapter" object: when it receives an XML message, it digests it into a Message object, stamps it with a message ID and a correlation ID (both with timestamps), and forwards it to a Client object. A message can be correlated to a previous one (e.g. m2.correlationID = m1.ID). I mock the Client, and check that the Adapter successfully calls client.forwardMessage(m) twice: first with a message whose correlationID is null, then with one whose correlationID is not null. However, I would like to test precisely that the correlation IDs are set correctly, by grabbing the IDs (e.g. m1.ID), but I couldn't find any way to do so. There is a JIRA issue about adding the feature, but no one has commented and it is unassigned. Is this really unimplemented? I read about the alternative of redesigning the Adapter to use an IdGenerator object, which I can stub, but I think there will be too many objects. Don't you think it adds unnecessary complexity to split objects at so fine a granularity? Thanks, and I appreciate any comments :-) Tayeb
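
    One workaround that needs neither the JIRA feature nor an IdGenerator, sketched below under the assumption of jMock 2 with Hamcrest matchers (Message, getId and getCorrelationId are stand-ins for the real types): pass a custom matcher to with(...) that accepts everything and records each argument it sees.

        import java.util.ArrayList;
        import java.util.List;

        import org.hamcrest.Description;
        import org.hamcrest.TypeSafeMatcher;

        // Sketch: a matcher that matches any argument but remembers each one,
        // so the test can inspect the forwarded messages afterwards.
        public class Capturing<T> extends TypeSafeMatcher<T> {
            private final List<T> values = new ArrayList<T>();

            public boolean matchesSafely(T item) {
                values.add(item);
                return true;
            }

            public void describeTo(Description description) {
                description.appendText("captures the argument");
            }

            public T get(int index) {
                return values.get(index);
            }
        }

    Used inside the expectations (adapter.receive and the Message getters are hypothetical):

        final Capturing<Message> forwarded = new Capturing<Message>();
        context.checking(new Expectations() {{
            exactly(2).of(client).forwardMessage(with(forwarded));
        }});
        adapter.receive(firstXml);
        adapter.receive(secondXml);
        assertEquals(forwarded.get(0).getId(),
                     forwarded.get(1).getCorrelationId());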

    Read the article

  • Java serialization testing

    - by Jeff Storey
    Does anyone know of a library to help test whether an object graph is fully serializable? It would probably be as simple as writing it out and reading it back in, but I figured someone must have abstracted this already - I just can't find it.
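
    For what it's worth, Apache Commons Lang's SerializationUtils.clone performs exactly such a write-and-read-back round trip, and a hand-rolled version is only a few lines. A minimal sketch:

        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.ObjectInputStream;
        import java.io.ObjectOutputStream;
        import java.io.Serializable;

        // Sketch: serialize an object graph and read it back. If anything in
        // the graph is not serializable, writeObject throws
        // NotSerializableException, which is all a test needs to detect.
        public final class SerializationTester {
            @SuppressWarnings("unchecked")
            public static <T extends Serializable> T roundTrip(T original)
                    throws IOException, ClassNotFoundException {
                ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                ObjectOutputStream out = new ObjectOutputStream(bytes);
                out.writeObject(original);
                out.close();
                ObjectInputStream in = new ObjectInputStream(
                        new ByteArrayInputStream(bytes.toByteArray()));
                return (T) in.readObject();
            }
        }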

    Read the article

  • xUnit false positive when comparing null-terminated strings

    - by mr.b
    I've come across odd behavior when comparing strings. The first assert passes, but I don't think it should. The second assert fails, as expected.

        [Fact]
        public void StringTest() {
            string testString_1 = "My name is Erl. I am a program\0";
            string testString_2 = "My name is Erl. I am a program";
            Assert.Equal<string>(testString_1, testString_2);
            Assert.True(testString_1.Equals(testString_2));
        }

    Any ideas?

    Read the article

  • Where to put the unit test for a library in Rails

    - by lidaobing
    Hello, I am a Ruby and Rails newbie, and I am working on a Rails application with RadRails. RadRails has a "Switch to Test" function for my controllers, models, etc., but not for my library. If I have a class Foo::Bar in /lib/foo/bar.rb, where should I put the unit test for it? Or should I separate the foo library into a separate project? Thanks.

    Read the article

  • How do I structure my tests with Python unittest module?

    - by persepolis
    I'm trying to build a test framework for automated web testing in Selenium and unittest, and I want to structure my tests into distinct scripts. So I've organised it as follows:

    base.py - This will contain, for now, the base Selenium test case class for setting up a session.

        import unittest
        from selenium import webdriver

        # Base Selenium Test class from which all test cases inherit.
        class BaseSeleniumTest(unittest.TestCase):
            def setUp(self):
                self.browser = webdriver.Firefox()

            def tearDown(self):
                self.browser.close()

    main.py - I want this to be the overall test suite from which all the individual tests are run.

        import unittest
        import test_example

        if __name__ == "__main__":
            SeTestSuite = test_example.TitleSpelling()
            unittest.TextTestRunner(verbosity=2).run(SeTestSuite)

    test_example.py - An example test case; it might be nice to make these run on their own too.

        from base import BaseSeleniumTest

        # Test the spelling of the title
        class TitleSpelling(BaseSeleniumTest):
            def test_a(self):
                self.assertTrue(False)

            def test_b(self):
                self.assertTrue(True)

    The problem is that when I run main.py I get the following error:

        Traceback (most recent call last):
          File "H:\Python\testframework\main.py", line 5, in <module>
            SeTestSuite = test_example.TitleSpelling()
          File "C:\Python27\lib\unittest\case.py", line 191, in __init__
            (self.__class__, methodName))
        ValueError: no such test method in <class 'test_example.TitleSpelling'>: runTest

    I suspect this is due to the very special way in which unittest runs, and I must have missed a trick in how the docs expect me to structure my tests. Any pointers?

    Read the article

  • bash testing a group of directories for existence

    - by Jim Jones
    We have documents stored in a file system which includes "daily" directories, e.g. 20050610. In a bash script I want to list the files in a month's worth of these directories, so I'm running a find command:

        find <path>/200506* -type f >> jun2005.lst

    I would like to check that this set of directories is not a null set before executing the find command. However, if I use

        if [ -d 200506* ]

    I get a "too many arguments" error. How can I get around this?

    Read the article

  • Testing whether an event has happened after a period of time in jQuery

    - by chrism
    I'm writing a script for a FormFaces XForms product that is keyed off an event built into FormFaces, called 'xforms-ready'. I have defined 'startTime' as happening as soon as the document is 'ready'. What I want the script to do is warn the user that it is taking too long before 'xforms-ready' happens, say if it's been 6 seconds since 'startTime'. I can easily do things when the 'xforms-ready' event happens using the code below:

        new EventListener(document.documentElement, "xforms-ready", "default",
            function() {
                var endTime = (new Date()).getTime();
            }
        );

    However, the warning will need to happen before 'endTime' is defined. So I guess I want something that works like this:

        If 6 seconds have passed since startTime and endTime is not yet defined, do X

    or, possibly more efficiently:

        If 6 seconds have passed since startTime and 'xforms-ready' has not yet happened, do X

    Can anyone suggest a way of doing this?

    Read the article

  • Is there a library available which can easily record and replay the results of API calls?

    - by Billy ONeal
    I'm working on writing various things that call relatively complicated Win32 API functions. Here's an example:

        //Encapsulates calling NtQuerySystemInformation buffer management.
        WindowsApi::AutoArray NtDll::NtQuerySystemInformation(
            SystemInformationClass toGet) const
        {
            AutoArray result;
            ULONG allocationSize = 1024;
            ULONG previousSize;
            NTSTATUS errorCheck;
            do
            {
                previousSize = allocationSize;
                result.Allocate(allocationSize);
                errorCheck = WinQuerySystemInformation(
                    toGet, result.GetAs<void>(), allocationSize, &allocationSize);
                if (allocationSize <= previousSize)
                    allocationSize = previousSize * 2;
            } while (errorCheck == 0xC0000004L);
            if (errorCheck != 0)
            {
                THROW_MANUAL_WINDOWS_ERROR(WinRtlNtStatusToDosError(errorCheck));
            }
            return result;
        }

        //Client of the above.
        ProcessSnapshot::ProcessSnapshot()
        {
            using Dll::NtDll;
            NtDll ntdll;
            AutoArray systemInfoBuffer = ntdll.NtQuerySystemInformation(
                NtDll::SystemProcessInformation);
            BYTE * currentPtr = systemInfoBuffer.GetAs<BYTE>();
            //Loop through the results, creating Process objects.
            SYSTEM_PROCESSES * asSysInfo;
            do
            {
                // Loop book keeping
                asSysInfo = reinterpret_cast<SYSTEM_PROCESSES *>(currentPtr);
                currentPtr += asSysInfo->NextEntryDelta;

                //Create the process for the current iteration and fill it with data.
                std::auto_ptr<ProcImpl> currentProc(ProcFactory(
                    static_cast<unsigned __int32>(asSysInfo->ProcessId), this));
                NormalProcess* nptr = dynamic_cast<NormalProcess*>(currentProc.get());
                if (nptr)
                {
                    nptr->SetProcessName(asSysInfo->ProcessName);
                }

                // Populate process threads
                for (ULONG idx = 0; idx < asSysInfo->ThreadCount; ++idx)
                {
                    SYSTEM_THREADS& sysThread = asSysInfo->Threads[idx];
                    Thread thread(
                        currentProc.get(),
                        static_cast<unsigned __int32>(sysThread.ClientId.UniqueThread),
                        sysThread.StartAddress);
                    currentProc->AddThread(thread);
                }
                processes.push_back(currentProc);
            } while (asSysInfo->NextEntryDelta != 0);
        }

    My problem is in mocking out the NtDll::NtQuerySystemInformation method -- namely, that the data structure returned is complicated (well, here it's actually relatively simple, but it can be complicated), and writing a test which builds the data structure like the API call does can take 5-6 times as long as writing the code that uses the API.

    What I'd like to do is take a call to the API and record it somehow, so that I can return that recorded value to the code under test without actually calling the API. The returned structures cannot simply be memcpy'd, because they often contain inner pointers (pointers to other locations in the same buffer). The library in question would need to check for these kinds of things and be able to restore pointer values to a similar buffer upon replay. (I.e. check each pointer-sized value to see if it could be interpreted as a pointer within the buffer, change that to an offset, and remember to change it back to a pointer on replay -- a false-positive rate here is acceptable.)

    Is there anything out there that does anything like this?

    Read the article

  • Using Selenium-IDE with a rich Javascript application?

    - by Darien
    Problem

    At my workplace, we're trying to find the best way to create automated tests for an almost wholly javascript-driven intranet application. Right now we're stuck trying to find a good tradeoff between:

    - Application code in reusable and nest-able GUI components
    - Tests which are easily created by the testing team
    - Tests which can be recorded once and then automated
    - Tests which do not break after small cosmetic changes to the site

    XPath expressions (or other possible expressions, like jQuery selectors) naively generated from Selenium-IDE are often non-repeatable and very fragile. Conversely, having the JS code generate special unique ID values for every important DOM element on the page... well, that is its own headache, complicated by re-usable GUI components and by IDs needing to be consistent when the test is re-run.

    What successes have other people had with this kind of thing? How do you do automated application-level testing of a rich JS interface?

    Limitations

    - We are using JavascriptMVC 2.0, hopefully 3.0 soon so that we can upgrade to jQuery 1.4.x.
    - The test-making folks are mostly trained to use Selenium IDE to directly record things.
    - The test leads would prefer a page-unique HTML ID on each clickable element on the page.
    - Training the testers to write or alter special expressions (such as telling them which HTML class-names are important branching points) is a no-go.
    - We try to make re-usable javascript components, but this means very few GUI components can treat themselves (or what they contain) as unique.
    - Some of our components already use HTML ID values in their operation. I'd like to avoid doing this anyway, but it complicates the idea of ID-based testing.
    - It may be possible to add custom facilities (like a locator-builder or new locator method) to the Selenium-IDE installation testers use.
    - Almost everything that goes on occurs within a single "page load" from a conventional browser perspective, even when items are saved.

    Current thoughts

    I'm considering a system where a custom locator-builder (javascript code) for Selenium-IDE will talk with our application code as the tester is recording. In this way, our application becomes partially responsible for generating a mostly-flexible expression (XPath or jQuery) for any given DOM element. While this can avoid requiring more training for testers, I worry it may be over-thinking things.

    Read the article
