Search Results

Search found 26146 results on 1046 pages for 'white box testing'.

Page 43 of 1046

  • Box 2D Collision Question

    - by Farooq Arshed
    I am very new to the Box2D physics world. I want to know how to collide two bodies when one is dynamic and the other is kinematic. The whole scenario is explained below: I have 3 balls in total. I want two balls to remain in their places and the third ball to be able to move. When the third ball hits the other two balls, they should move according to the speed and direction with which they were hit. The gravity of my world is 0 because I only want z-axis gravity. I would also like someone to point me towards some good, language-independent tutorials on Box2D basics. I hope I have explained my scenario well. Thanks for the help in advance.
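
    A minimal sketch of this kind of top-down (zero-gravity) setup, using the Python binding pybox2d purely for illustration; the positions, radii, and damping values are assumptions. Note that in Box2D a kinematic body is moved only by its own velocity and is not pushed back by contacts, so balls that should react to being hit are usually created as dynamic bodies:

        from Box2D import b2World

        # Top-down world: no gravity; "table friction" approximated with linear damping.
        world = b2World(gravity=(0, 0), doSleep=True)

        def make_ball(position):
            # Dynamic bodies respond to impacts; kinematic bodies would not be pushed.
            body = world.CreateDynamicBody(position=position, linearDamping=0.5)
            body.CreateCircleFixture(radius=0.5, density=1.0, friction=0.2, restitution=0.9)
            return body

        resting_a = make_ball((2, 0))
        resting_b = make_ball((2, 1))
        moving = make_ball((0, 0.5))
        moving.linearVelocity = (5, 0)          # the "cue" ball

        for _ in range(120):                    # ~2 seconds at 60 Hz
            world.Step(1.0 / 60, 8, 3)          # time step, velocity/position iterations

        print(resting_a.position, resting_b.position)   # both should have been knocked away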

    Read the article

  • How should we set up complex situations for tests?

    - by ShaneC
    I'm currently working on what I would call integration tests. I want to verify that if a WCF service is called it will do what I expect. Let's take a very simple scenario. Assume we have a contract object that we can put on hold or take off hold. Writing the put-on-hold test is quite simple: you create a contract instance and execute the code that puts it on hold. The question comes when we want to test the take-off-hold service call. The problem is that putting a contract on hold can actually be quite complicated, leading to various objects all being modified. So usually I would use the Builder pattern and do something like this:

        var onHoldContract = new ContractBuilder().PutOnHold().Build();

    The problem I have with this is that now I have to replicate a large part of my put-on-hold service, so when I change what putting something on hold means, I have two places to modify. The other option that immediately jumps out at me is to just use the put-on-hold service as part of my test setup, but now I'm coupling my test to the success of another piece of code, which is something I don't like to do since it can lead to failures in one spot breaking unrelated tests elsewhere (if put on hold failed, for example). Any other options I'm missing here? Or opinions on which method is preferable and why?
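
    Purely to make the trade-off concrete, here is a hedged, self-contained pytest sketch in Python; Contract, ContractBuilder, put_on_hold and take_off_hold are toy stand-ins invented for illustration, not the question's WCF code:

        import pytest

        class Contract:
            """Minimal stand-in for the real domain object (illustration only)."""
            def __init__(self):
                self.on_hold = False

        def put_on_hold(contract):        # stands in for the real service operation
            contract.on_hold = True

        def take_off_hold(contract):
            contract.on_hold = False

        # Option 1: a builder that reproduces the "on hold" state by hand,
        # independently of the service (the duplication risk the question describes).
        class ContractBuilder:
            def __init__(self):
                self._contract = Contract()
            def put_on_hold(self):
                self._contract.on_hold = True
                return self
            def build(self):
                return self._contract

        # Option 2: reuse the production operation in the fixture,
        # accepting that a put-on-hold bug would also fail these tests.
        @pytest.fixture
        def on_hold_contract():
            contract = Contract()
            put_on_hold(contract)
            return contract

        def test_take_off_hold_with_builder_setup():
            contract = ContractBuilder().put_on_hold().build()
            take_off_hold(contract)
            assert not contract.on_hold

        def test_take_off_hold_with_service_setup(on_hold_contract):
            take_off_hold(on_hold_contract)
            assert not on_hold_contract.on_hold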

    Read the article

  • Table Disobeys W3C Box Model, IE8 Ignores Fixed Table Width!

    - by Axel Myers
    Hi, I'm having a hard time with tables and column widths. Update: I'm using XHTML 1.0 Strict. The page is: http://www.pro-turk.net/try The first problem is that I have a column with a fixed width of 100px and 4px padding, but it disobeys my padding depending on the value. The column width (the distance between the two borders, according to the W3C box model) is 156px whether the padding is 0 or 4; only the position of the text changes. According to the W3C box model (available at www.pro-turk.net/box_model.png), borders and padding aren't included in the WIDTH attribute, so why does it render incorrectly? The second problem is that when you look at the page with IE8, the first cell in the second row has a fixed width of 150px, but IE shows it at about 50% of the total table width regardless of what I specify.

    Read the article

  • Why are all response bodies after the first blank in Cucumber?

    - by James A. Rosen
    I'm using Cucumber (0.6.3), Cucumber-Rails (0.3.0), Webrat (0.7.0), and Rails (2.3.5) for some tests. The following scenario passes just fine:

        Scenario: load one page
          Given I am on the home page
          Then I should see "Welcome"

    The following, however, fails:

        Scenario: load two pages
          Given I am on the FAQ page
          When I go to the home page
          Then I should see "Welcome"

    The problem is that the second @response.body is blank. I added a Rack middleware to get a little more information:

        class LogEachRequest
          def initialize(app); @app = app; @count = 0; end

          def call(env)
            puts "Processing request # #{@count += 1}"
            @app.call(env)
          end
        end

    It shows me only one request processed. That is, it only ever prints out "Processing request # 1".

    Read the article

  • Why is django.test.client.Client not keeping me logged in?

    - by Mystic
    I'm using django.test.client.Client to test whether some text shows up when a user is logged in. However, the Client object doesn't seem to be keeping me logged in. This test passes if done manually with Firefox, but not when done with the Client object.

        class Test(TestCase):
            def test_view(self):
                user.set_password(password)
                user.save()
                client = self.client
                # I thought a more manual way would work, but no luck
                # client.post('/login', {'username': user.username, 'password': password})
                login_successful = client.login(username=user.username, password=password)
                # this assert passes
                self.assertTrue(login_successful)
                # whether follow=True or not doesn't seem to make a difference
                response = client.get("/path", follow=True)
                self.assertContains(response, "needle")

    When I print the response it returns the login form that is hidden by:

        {% if not request.user.is_authenticated %} ... form ... {% endif %}

    This is confirmed when I run ipython manage.py shell. The problem seems to be that the Client object is not keeping the session authenticated.
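
    For reference, a minimal sketch of a login-dependent test written with django.test.TestCase; the URL "/path" and the "needle" text come from the question, while the user credentials are invented. Inspecting response.context can help confirm whether the test client's session reached the view as authenticated:

        from django.contrib.auth.models import User
        from django.test import TestCase

        class LoginFlowTest(TestCase):
            def setUp(self):
                # create_user() hashes the password, which client.login() requires
                self.user = User.objects.create_user(username="alice", password="s3cret")

            def test_sees_needle_when_logged_in(self):
                self.assertTrue(self.client.login(username="alice", password="s3cret"))
                response = self.client.get("/path", follow=True)
                # on modern Django, is_authenticated is a property on the context user
                self.assertTrue(response.context["user"].is_authenticated)
                self.assertContains(response, "needle")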

    Read the article

  • Role change from Software Testing to Business Analyst [closed]

    - by Ankit
    After working for 4 years in software testing, I have finally got a chance to switch my career to a BA profile. It has been my dream to get a BA profile. But as I prepare myself to switch to a new profile and a new city, I ask myself whether it is really worth taking the risk. I am fairly senior in my testing role and make a good amount of money, but the charm of the BA profile is too good to miss. Any comments? Any suggestions?

    Read the article

  • Problems using User model in django unit tests

    - by theycallmemorty
    I have the following Django test case that is giving me errors:

        class MyTesting(unittest.TestCase):
            def setUp(self):
                self.u1 = User.objects.create(username='user1')
                self.up1 = UserProfile.objects.create(user=self.u1)

            def testA(self):
                ...

            def testB(self):
                ...

    When I run my tests, testA will pass successfully, but before testB starts I get the following error:

        IntegrityError: column username is not unique

    It's clear that it is trying to create self.u1 before each test case and finding that it already exists in the database. How do I get it to properly clean up after each test case so that subsequent cases run correctly?
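
    One common fix, shown here as a hedged sketch: subclass django.test.TestCase instead of unittest.TestCase, so each test runs inside a transaction that is rolled back afterwards and setUp can recreate the rows cleanly (the alternative is deleting the objects in tearDown). The UserProfile import path is an assumption:

        from django.contrib.auth.models import User
        from django.test import TestCase        # rolls back DB changes after every test

        from myapp.models import UserProfile    # assumed location of the question's model

        class MyTesting(TestCase):
            def setUp(self):
                # recreated fresh for every test; the rollback removes them afterwards
                self.u1 = User.objects.create(username='user1')
                self.up1 = UserProfile.objects.create(user=self.u1)

            def test_profile_links_to_user(self):
                self.assertEqual(self.up1.user, self.u1)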

    Read the article

  • How to fully unit test functions and their internal validation

    - by Patrick
    I am just now getting into formal unit testing and have come across an issue in testing separate internal parts of functions. I have created a base class for data manipulation (i.e. moving files, chmodding files, etc.) and in moveFile() I have multiple levels of validation to pinpoint why a moveFile() fails (e.g. source file not readable, destination not writeable). I can't seem to figure out how to force a couple of particular validations to fail without tripping the previous validations. Example: I want the copying of a file to fail, but by the time I've gotten to the actual copying, I've already checked for everything that can go wrong before copying. Code snippet (the bad code is the hard-coded test-file check in the inner if):

        // if the change permissions is set, change the file permissions
        if($chmod !== null) {
            $mod_result = chmod($destination_directory.DIRECTORY_SEPARATOR.$new_filename, $chmod);
            if($mod_result === false || $source_directory.DIRECTORY_SEPARATOR.$source_filename == '/home/k...../file_chmod_failed.qif') {
                DataMan::logRawMessage('File permissions update failed on moveFile [ERR0009] - ['.$destination_directory.DIRECTORY_SEPARATOR.$new_filename.' - '.$chmod.']', sfLogger::ALERT);
                return array('success' => false, 'type' => 'Internal Server Error [ERR0009]');
            }
        }

    So how do I simulate the copy failing? My stop-gap measure was to perform a validation on the filename being copied and, if its absolute path matched my testing file, force the failure. I know it is very bad to put testing code into the actual code that will run on the production server, but I'm not sure how else to do it. Note: I am on PHP 5.2, symfony, using lime_test(). EDIT: I am testing the chmodding and ensuring that array('success' => false, 'type' => ...) is returned.
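
    The usual way around this, sketched below in Python rather than PHP since the idea is language-agnostic: route the risky filesystem call through something the test can replace (an injected callable, a mock, or a monkeypatch), so the failure is forced from the test instead of from a hard-coded path inside production code. The move_file function here is a made-up stand-in for the question's moveFile(), not its real code:

        import shutil

        def move_file(src, dst, copy=shutil.copy):
            # the copy step is injectable, so a test can force it to fail
            try:
                copy(src, dst)
            except OSError as exc:
                return {"success": False, "type": f"copy failed: {exc}"}
            return {"success": True}

        def test_copy_failure_is_reported(tmp_path):
            src = tmp_path / "in.qif"
            src.write_text("data")

            def failing_copy(src, dst):
                raise OSError("simulated copy failure")

            result = move_file(str(src), str(tmp_path / "out.qif"), copy=failing_copy)
            assert result == {"success": False, "type": "copy failed: simulated copy failure"}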

    Read the article

  • Integration tests - "no exceptions are thrown" approach. Does it make sense?

    - by Andrew Florko
    Sometimes integration tests are rather complex to write, or developers don't have enough time to check the output. Does it make sense to write tests that only make sure "no exceptions are thrown"? Such tests provide one or more sets of input parameters and don't check the result; they only make sure the code didn't fail with an exception. Maybe such tests are not very useful, but are they appropriate in situations when you have no time?
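
    For what it's worth, this style is often called a smoke test; a minimal pytest-flavoured sketch in Python (process_order and its inputs are invented for illustration):

        import pytest

        def process_order(quantity, unit_price):
            # placeholder for the real code under test
            return quantity * unit_price

        @pytest.mark.parametrize("quantity, unit_price", [(1, 9.99), (0, 0.0), (250, 1.5)])
        def test_process_order_does_not_raise(quantity, unit_price):
            # smoke test: only asserts that the call completes without an exception;
            # the return value is deliberately not inspected
            process_order(quantity, unit_price)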

    Read the article

  • The Properties dialog box not working for any Start Menu program in Lubuntu 14.04

    - by user236378
    When Lubuntu 14.04 first came out, the Properties dialog was present for all Start Menu programs but did not work: nothing happened when it was clicked, and no box popped up with the Start Menu program information, such as the executable info, etc. Then the issue was fixed. Now the issue has returned since I upgraded to Lubuntu 14.04.1. Is there any way to retrieve the Properties info pop-up box via a configuration file? Does the issue have to do with a faulty lxshortcut? I noticed that when I uninstalled lxshortcut the Properties entry disappeared, and when I re-installed it the Properties entry returned; however, once again, clicking it does nothing. Any assistance would be greatly appreciated. Thank you.

    Read the article

  • Is there an effective way to test XSL transforms/BizTalk maps?

    - by nlawalker
    Creating repeatable tests for BizTalk maps is frustrating. I can't find a way to test them the way I'd do unit testing, because I can't find ways to break them into logical chunks. They tend to be one big monolithic unit, and any change has the potential to ripple through the map and break a lot of unit tests. Even if I could break them up, creating XML test inputs is painful and error-prone. Is there any effective way of testing these? I'd settle for recommendations for testing XSL transforms in general, but I specifically mention BizTalk maps because, when using the mapper, there really isn't any way to break your XSLT into templates (which I'd imagine you could use to break your logic into testable chunks, but I've honestly never gotten that far with XSLT).
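
    One language-agnostic approach is golden-file testing: run the transform from a small harness against known inputs and compare the output to a stored expected document. A hedged sketch using Python's lxml; the file names are placeholders, and for a BizTalk map you would first need the XSLT that the map generates:

        from lxml import etree

        def run_transform(xslt_path, input_path):
            transform = etree.XSLT(etree.parse(xslt_path))
            return transform(etree.parse(input_path))

        def test_map_produces_expected_output():
            actual = run_transform("order_map.xslt", "sample_order_in.xml")
            expected = etree.parse("sample_order_out.xml")
            # canonicalised comparison ignores irrelevant formatting differences
            assert etree.tostring(actual, method="c14n") == etree.tostring(expected, method="c14n")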

    Read the article

  • NUnit - Multiple properties of the same name? Linking to requirements

    - by Ryan Ternier
    I'm linking all of our system tests to test cases and to our requirements. Every requirement has an ID. Every test case / system test covers a variety of requirements. Every module of code links to multiple requirements. I'm trying to find the best way to link every system test to its driving requirements. I was hoping to do something like:

        [NUnit.Framework.Property("Release", "6.0.0")]
        [NUnit.Framework.Property("Requirement", "FR50082")]
        [NUnit.Framework.Property("Requirement", "FR50084")]
        [NUnit.Framework.Property("Requirement", "FR50085")]
        [TestCase(....)]
        public void TestSomething(string a, string b...)

    However, that will break because Property is a key-value pair: the system will not allow me to have multiple properties with the same key. The reason I want this is to be able to test specific requirements in our system when a module that touches those requirements changes. Rather than run over 1,000 system tests on every build, this would allow us to target what to test based on changes made to our code. Some system tests run upwards of 5 minutes (enterprise healthcare system), so "just run all of them" isn't a viable solution. We do that, but only before promoting through our environments. Thoughts?
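
    The underlying idea (tag each test with one or more requirement IDs, then select tests by ID) is framework-agnostic; here is a hedged sketch of the same scheme using pytest markers, since that is easier to show self-contained than the NUnit attribute model. The requirement IDs are the ones from the question; everything else is invented:

        # conftest.py -- run only tests linked to a requirement:  pytest --requirement FR50084
        import pytest

        def pytest_addoption(parser):
            parser.addoption("--requirement", default=None,
                             help="only run tests tagged with this requirement ID")

        def pytest_configure(config):
            config.addinivalue_line(
                "markers", "requirements(*ids): link a test to one or more requirement IDs")

        def pytest_collection_modifyitems(config, items):
            wanted = config.getoption("--requirement")
            if wanted is None:
                return
            skip = pytest.mark.skip(reason="not linked to requirement " + wanted)
            for item in items:
                marker = item.get_closest_marker("requirements")
                if marker is None or wanted not in marker.args:
                    item.add_marker(skip)

        # test_contracts.py
        import pytest

        @pytest.mark.requirements("FR50082", "FR50084", "FR50085")
        def test_something():
            assert True   # the real system test would go here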

    Read the article

  • jQuery dialog box - doesn't fade out before closing

    - by Broken Link
    I have a div (#box) on my page and I'm using the script below to display that div as a dialog box. Inside that div I have a hyperlink; on click of the hyperlink I want to fade out the dialog box and then close it. The content of the dialog fades out, but the border of the dialog box remains the same. If I add $("#box").dialog('close') to the click function after the fadeTo there is no fade effect; it just closes the dialog box immediately. Any help? Using jquery-ui-1.7.2.

        <script type="text/javascript">
        $(document).ready(function () {
            $("a#later").click(function () {
                $("#box").fadeTo('slow', 0);
            });
        });
        $(function () {
            $("#box").dialog({
                autoOpen: true,
                width: 500,
                modal: true
            });
        });
        </script>

    Read the article

  • Specify test method name prefix for test suite in JUnit 3

    - by Marko Kocic
    Is it possible to tell JUnit 3 to use an additional method-name prefix when looking up test methods? The goal is to have additional tests that run locally but should not be run on the continuous integration server. The CI server doesn't use test suites; it looks up all classes whose names end with "Test" and executes all methods that begin with "test". The goal is to be able to run locally not only the tests run by the integration server, but also tests whose method names start with, for example, "nocitest" or something like that. I don't mind having to organize tests into test suites locally, since CI just ignores them.
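
    As a point of comparison only (the question is about JUnit 3, not Python), the same idea of selecting test methods by an alternative name prefix is built into Python's unittest loader; a minimal sketch:

        import unittest

        class LocalOnlyTests(unittest.TestCase):
            def test_runs_on_ci(self):
                self.assertTrue(True)

            def nocitest_runs_only_locally(self):
                self.assertTrue(True)

        if __name__ == "__main__":
            loader = unittest.TestLoader()
            loader.testMethodPrefix = "nocitest"   # collect nocitest_* instead of test_*
            suite = loader.loadTestsFromTestCase(LocalOnlyTests)
            unittest.TextTestRunner(verbosity=2).run(suite)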

    Read the article

  • Best way to do TDD in express versions of Visual Studio (e.g. VB Express)

    - by Nathan W
    I have been looking into doing some test-driven development for one of the applications that I'm currently writing (an OLE wrapper for an OLE object). The only problem is that I am using the Express versions of Visual Studio (for now); at the moment I am using VB Express, but sometimes I use C# Express. Is it possible to do TDD in the Express versions? If so, what are the best ways to go about it? Cheers. EDIT: By the looks of things I will have to buy the full Visual Studio so that I can do integrated TDD; hopefully there is money in the budget to buy a copy :). For now I think I will use NUnit like everyone is saying.

    Read the article

  • Android Test testPreconditions

    - by user1184113
    In the Android developer documentation I've seen that the testPreconditions() method is supposed to run before all other tests. But in my app's tests it acts like a normal test: it does not run before the others. Is there something wrong? Here is the description of testPreconditions() from the Android developer documentation: "A preconditions test checks the initial application conditions prior to executing other tests. It's similar to setUp(), but with less overhead, since it only runs once."

    Read the article
