Search Results

Search found 13653 results on 547 pages for 'integration testing'.

  • Introducing unit testing when a codebase already exists

    - by McMannus
    I've been working on a project in Flex for three years now without unit testing. The simple reason is that I just didn't realize the importance of unit testing when I was at the beginning of my studies at university. My attitude towards testing has since changed completely, and I therefore want to introduce it to the existing project (about 25,000 LOC). There are two approaches to choose from: 1) discard the existing codebase and start from scratch with TDD, or 2) write the tests and try to make them pass by changing the existing code. I would appreciate not having to write everything from scratch, but I suspect starting over would yield a much better design. What would you advise me to do? Thanks in advance for any replies! Jan
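
    For approach 2, a common first step is a characterization test: pin down what the existing code actually does before changing it. Below is a minimal sketch, written in TypeScript with Jasmine-style assertions for brevity (the idea carries over to FlexUnit); the PriceCalculator class and its behaviour are hypothetical stand-ins for any legacy class.

```typescript
// Characterization test: assert the *current* observed behaviour of the
// legacy code, not the behaviour we wish it had. PriceCalculator is a
// hypothetical legacy class used purely for illustration.
import { PriceCalculator } from "./priceCalculator";

describe("PriceCalculator (legacy behaviour)", () => {
  it("keeps whatever rounding the old code performs", () => {
    const calc = new PriceCalculator();
    // Run the legacy code once, record its output, and freeze it here,
    // so any later refactoring that changes it fails loudly.
    expect(calc.totalWithTax(9.99, 0.07)).toBeCloseTo(10.69, 2);
  });
});
```

    With a net of such tests in place, the existing code can be refactored toward a better design without a full rewrite.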

    Read the article

  • "Testing Plan Lite" for web project

    - by Emmmmm
    How do you draft a quick & easy "Testing Plan Lite" for a medium-sized web project (70k lines, 2 developers)? I've seen many tutorials/articles on methods of testing, but all seem cumbersome. For us, the goal is to be able to divide up and delegate testing instructions to our friends for different project segments, browsers, etc. What's the quick & easy way to write test plans for web apps? (the 20% effort of the 20/80 rule) Thanks!

    Read the article

  • Test Data in a Distributed System

    - by Davin Tryon
    A question that has been vexing me lately is how to effectively test (end-to-end) features in a distributed system; particularly, how to effectively manage test data for feature testing over time. The system in question is a typical SOA setup: composition is done in JavaScript with calls to several REST APIs, each service is built as an independent block, and each service has some kind of persistent storage (SQL Server in most cases). The main issue at the moment is how to approach test data when testing end-to-end features. Functional end-to-end testing occurs through the UI, so test data has to be set up before the test run (whether the testing is manual or automated). As is typical in a distributed system, identifiers from one service are used as links in another service, so some level of synchronization needs to be present in the data to test effectively. What is the best way to manage and set up this data after a successful deployment to a test environment? For example, is it better to manage this test data inside each service, or package it together with the testing suite? Does that testing suite exist as a separate project? I'm interested in design guidance on how to store and manage this test data as the application features evolve.
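
    One approach worth sketching is a seeding helper that lives with the test suite: it creates linked records through each service's public API and records the identifiers it gets back, so cross-service links stay consistent. The sketch below is TypeScript and assumes a runtime with global fetch; the service URLs and payload shapes are entirely hypothetical.

```typescript
// Hypothetical seeding helper: creates linked records across services
// via their public REST APIs and remembers the generated identifiers.
async function postJson(url: string, body: unknown): Promise<{ id: string }> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`Seeding failed: ${url} -> ${res.status}`);
  return res.json();
}

export async function seedCustomerWithOrder() {
  // Seed the upstream service first...
  const customer = await postJson("http://customers.test/api/customers", {
    name: "Test Customer",
  });
  // ...then use its identifier as the link in the downstream service,
  // so the seeded data is synchronized the way production data would be.
  const order = await postJson("http://orders.test/api/orders", {
    customerId: customer.id,
    lines: [{ sku: "TEST-SKU", quantity: 1 }],
  });
  return { customerId: customer.id, orderId: order.id };
}
```

    Because the helper goes through the public APIs rather than each database, it keeps working as the services' internal schemas evolve.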

    Read the article

  • web application load / stress testing services

    - by Booji Boy
    Can you recommend reputable companies that offer help (consulting services, etc.) with load testing (ASP.NET) web applications? We have a client looking to load test an ASP.NET application, and we don't have any expertise in load testing web applications. The client is located in central Massachusetts. My employer http://www.goADNET.com was looking for an option besides "I can figure out how to do it".

    Read the article

  • What should come first: testing or code review?

    - by Silver Light
    Hello! I'm quite new to programming design patterns and life cycles, and I was wondering: which should come first, code review or testing, given that they are done by separate people? On the one hand, why bother reviewing code if nobody has checked that it even works? On the other, some errors can be found earlier if you do the review before testing. Which approach is recommended, and why? Thank you!

    Read the article

  • Testing Workflows – Test-First

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-first.aspx

    This is the second of two posts on some common strategies for approaching the job of writing tests. The previous post covered test-after workflows, whereas this one focuses on test-first. Each workflow presented is a method of attack for adding tests to a project. The more tools in your tool belt the better. So here is a partial list of some test-first methodologies.

    Ping Pong

    Ping Pong is a methodology commonly used in pair programming. One developer writes a new failing test. Then they hand the keyboard to their partner. The partner writes the production code to get the test passing. The partner then writes the next test before passing the keyboard back to the original developer.

    The reasoning behind this testing methodology is to facilitate pair programming. That is to say, it shares all the benefits of pair programming, including ensuring that multiple team members are familiar with the code base (i.e. a low bus number).

    Test Blazer

    Test Blazing, in some respects, is also a pairing strategy. The developers don't work side by side on the same task at the same time. Instead, one developer is dedicated to writing tests at their own desk. They write failing test after failing test, never touching the production code. With these tests they are defining the specification for the system. The developer most familiar with the specifications would be assigned this task.

    The next day, or later the same day, another developer fetches the latest test suite. Their job is to write the production code to get those tests passing. Once all the tests pass, they fetch the latest version of the test project from source control to get the newer tests.

    This methodology has some of the benefits of pair programming, namely lowering the bus number. It can be a good way of adding an extra developer to a project without slowing it down too much. The production coder isn't slowed down writing tests. The tests are in a separate project from the production code, so there shouldn't be any merge conflicts despite two developers working on the same solution.

    This methodology is also a good test for the tests. Can another developer figure out what the system should do just by reading the tests? That question is answered as the production coder works their way through the test blazer's tests.

    Test Driven Development (TDD)

    TDD is a highly disciplined practice that calls for a new test and new production code to be written every few minutes. There are strict rules for when you should be writing test or production code. You start by writing a failing test (red), then write the simplest production code possible to get the test passing (green), then you clean up the code (refactor). This is known as the red-green-refactor cycle.

    The goal of TDD isn't the creation of a suite of tests; that is an advantageous side effect. The real goal of TDD is to follow a practice that yields a better design. The practice is meant to push the design toward small, decoupled, modularized components. This is generally considered a better design than a large, highly coupled ball of mud.

    TDD accomplishes this through the refactoring cycle. Refactoring is only possible to do safely when tests are in place. In order to use TDD, developers must be trained in how to look for and repair code smells in the system. Through repairing these sections of smelly code (i.e. refactoring), the design of the system emerges.

    For further information on TDD, I highly recommend the series "Is TDD Dead?". It discusses its pros and cons and when it is best used.

    Acceptance Test Driven Development (ATDD)

    Whereas TDD focuses on small unit tests that concentrate on a small piece of the system, acceptance tests focus on the larger integrated environment. Acceptance tests usually correspond to user stories, which come directly from the customer. The unit tests focus on the inputs and outputs of smaller parts of the system, which are too low level to be of interest to the customer.

    ATDD generally uses the same tools as TDD. However, ATDD uses fewer mocks and test doubles than TDD. ATDD often complements TDD; they aren't competing methods. A full test suite will usually consist of a large number of unit tests (created via TDD) and a smaller number of acceptance tests.

    Behaviour Driven Development (BDD)

    BDD is more about audience than workflow. BDD pushes the testing realm out towards the client. Developers, managers and the client all work together to define the tests.

    Typically different tooling is used for BDD than for acceptance and unit testing, because the audience is not just developers. Tools using the Gherkin family of languages allow test scenarios to be described in an English-like format. Other tools, such as MSpec or FitNesse, also strive for highly readable behaviour driven test suites.

    Because these tests are public facing (viewable by people outside the development team), the terminology usually changes. You can't get away with the same technobabble you can with unit tests written in a programming language that only developers understand. For starters, they usually aren't called tests. Usually they're called "examples", "behaviours", "scenarios", or "specifications".

    This may seem like a very subtle difference, but I've seen this small terminology change have a huge impact on the acceptance of the process. Many people have a bias that testing is something that comes at the end of a project. When you say you need to define the tests at the start of the project, many people will immediately give that a lower priority on the project schedule. But if you say you need to define the specification or behaviour of the system before you can start, you'll get more cooperation.

    Keep these test-first and test-after workflows in your tool belt. With them you'll be able to find new opportunities to apply them.
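
    To make the red-green-refactor cycle concrete, here is one micro-iteration sketched in TypeScript with Jasmine-style assertions; the Stack class is a hypothetical example, not something from the original post.

```typescript
// Red: write a failing test first. (Before Stack exists at all, the
// compile error itself counts as the failing "red" step.)
describe("Stack", () => {
  it("pops the most recently pushed value", () => {
    const s = new Stack<number>();
    s.push(1);
    s.push(2);
    expect(s.pop()).toBe(2);
  });
});

// Green: the simplest production code that makes the test pass.
class Stack<T> {
  private items: T[] = [];
  push(item: T): void {
    this.items.push(item);
  }
  pop(): T | undefined {
    return this.items.pop();
  }
}

// Refactor: with the test green, rename, extract, and simplify freely,
// rerunning the test after every small change.
```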

    Read the article

  • Jasmine BDD vs Integration Tests

    - by lfender6445
    Let's say I need to write a test for the front end. A user visits buysomething.com, saves something to their wishlist, and a saved-item count is updated; the DOM gets manipulated. In my heart I feel this is better suited to an integration test, but my team is currently using Jasmine to load fixtures and test such interactions. This leads to extremely brittle tests, as they rely on a static fixture instead of the actual markup. Are we misusing Jasmine here?
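
    One middle ground, short of full integration tests, is to have the Jasmine spec build its fixture from the real template code instead of a hand-copied HTML snapshot, so the fixture can't drift out of date. A hedged sketch in TypeScript; renderWishlist is a hypothetical function standing in for whatever actually produces the markup.

```typescript
// renderWishlist is a hypothetical app function that returns the same
// markup (with handlers wired) that production uses.
import { renderWishlist } from "./wishlist";

describe("wishlist saved-item count", () => {
  let root: HTMLElement;

  beforeEach(() => {
    root = document.createElement("div");
    root.appendChild(renderWishlist({ savedCount: 0 })); // real markup
    document.body.appendChild(root);
  });

  afterEach(() => root.remove());

  it("increments the count when an item is saved", () => {
    root.querySelector<HTMLButtonElement>(".save")!.click();
    expect(root.querySelector(".count")!.textContent).toBe("1");
  });
});
```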

    Read the article

  • Model Based Testing Strategies

    - by Doubt
    What strategies have you used with Model Based Testing? Do you use it exclusively for integration testing, or do you branch out into other areas (unit/functional/system/spec verification)? Do you build focused "sealed" models, or do you evolve complex omnibus models over time? When in the product cycle do you invest in creating MBTs? What sort of base test libraries do you create exclusively for MBTs? What differences do you make in your functional base test libraries to better support MBTs?
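
    For readers new to the technique, here is a minimal sketch in TypeScript of what "model based" means in practice: a trivially simple model predicts what the system under test should do, and a generated random walk over actions checks that the two agree. The Counter SUT is purely illustrative.

```typescript
// A tiny model-based test: the model predicts what the system under
// test (SUT) should do; a random action walk checks they agree.
class Counter {
  private n = 0;
  increment(): void { this.n++; }
  reset(): void { this.n = 0; }
  value(): number { return this.n; }
}

type Action = "increment" | "reset";

function runModelBasedTest(steps: number): void {
  const sut = new Counter();
  let model = 0; // the model: a plain number we trust completely

  for (let i = 0; i < steps; i++) {
    const action: Action = Math.random() < 0.8 ? "increment" : "reset";
    if (action === "increment") { sut.increment(); model++; }
    else { sut.reset(); model = 0; }

    if (sut.value() !== model) {
      throw new Error(`Model and SUT diverged after ${i + 1} steps on "${action}"`);
    }
  }
}

runModelBasedTest(1000);
```

    Real MBT tools scale the same idea up: richer models, smarter walks, and automatic shrinking of failing sequences.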

    Read the article

  • How does the workflow between testers doing the testing and coders doing the coding work?

    - by dotnetdev
    In a large company that does software development, there are often dedicated teams for build management, testing, development, and so forth. Agile or not, how does the workflow among these teams work? I mean, would the test team write unit tests and then the dev team write code to adhere to those tests (basically TDD)? And then the test team might write tests for a completely different project, or have a slight quiet period until the dev team has finished its coding. What possible workflows are there? This is something that interests me greatly. I know that in my current company we are doing it incorrectly (we have 1 tester to about 5 devs, which is small scale), but I am not sure how exactly to draw out the ideal workflow. Many (OK, an ex-Project Manager) have tried, but all failed.

    Read the article

  • Web Performance testing using VS2010 "Testing a file download"

    - by cheedep
    Hi all, I am trying out the VS 2010 testing tools for the first time. I recorded a web performance test whose actions include a file download implemented as in the KB article at http://support.microsoft.com/kb/812406, i.e. by streaming chunks of 10000 bytes. However, my test fails at the download, saying "The response stream has been closed". Please help me understand why this is happening, and any suggestions on how you would test such a file download would be appreciated. My main aim is to see how the download performs under a load test over an intercontinental 350kbps connection, on files of about 30-50 MB. Thanks.

    Read the article

  • What is the best way to test with Grails using IDEA?

    - by egervari
    I am seriously having a very non-pleasant time testing with Grails. I will describe my experience, and I'd like to know if there's a better way.

    The first problem is that Grails doesn't give immediate feedback to the developer when .save() fails inside an integration test. So let's say you have a domain class with 12 fields, and 1 of them violates a constraint you don't know about when you create the instance... it just doesn't save. Naturally, the test code afterwards fails. This is most troublesome because the thing under test is probably fine... the real risk and pain is the setup code for the test itself. I've tried to develop the habit of using .save(failOnError: true) to avoid this problem, but that's not something that can be easily enforced by everyone working on the project... and it's kind of bloaty. It'd be nice to turn this on automatically for code running as part of a unit test.

    Integration tests run slowly. I cannot understand how 1 integration test that saves 1 object takes 15-20 seconds to run. With some careful test planning, I've been able to get 1000 tests talking to an actual database, doing DbUnit dumps after every test, to run in about the same time! This is dumb.

    It is hard to run all the unit tests and not the integration tests in IDEA. Integration tests are a massive pain. IDEA actually shows a GREEN BAR when integration tests fail. The output given by Grails indicates that something failed, but it doesn't say what. It says to look in the test reports... which forces the developer to hunt the stupid HTML file down in the file system. What a pain. Once you've got the HTML file and clicked through to the failing test, it tells you a line number. Since these reports are not in the IDE, you can't just click the stack trace to go to that line of code... you have to go back and find it yourself. ARGGH!@!@!

    Maybe people put up with this, but I refuse. Testing should not be this painful. It should be fast and painless, or people won't do it. Please help. What is the solution? Rails instead of Grails? Something else entirely? I love the Grails framework, but they never demo their testing, for a reason. They have a snazzy framework, but the testing is painful. After having used Scala for the last 1.5 months, and being totally spoiled by ScalaTest... I can't go back to this.

    Read the article

  • Dog-1 testing as a synonym for integration / system test

    - by SkeetJon
    In the company I'm working for, the phrase Dog-1 is used for the software testing scenario where the first piece of data passes through the complete system. Has anyone heard this phrase in a similar context? There's nothing on Google about it in a software development context, so I'm guessing it's an idiosyncrasy of my current environment. Is there a recognized phrase in the industry for this kind of test? I don't think it's simply a 'system test' / 'integration test', as 'Dog-1' refers specifically to the first item/unit through the system.

    Read the article

  • When is type testing OK?

    - by svidgen
    Assuming a language with some inherent type safety (e.g., not JavaScript): given a method that accepts a SuperType, we know that in most cases wherein we might be tempted to perform type testing to pick an action:

        public void DoSomethingTo(SuperType o) {
            if (o isa SubTypeA) {
                o.doSomethingA();
            } else {
                o.doSomethingB();
            }
        }

    we should usually, if not always, create a single, overridable method on the SuperType and do this instead:

        public void DoSomethingTo(SuperType o) {
            o.doSomething();
        }

    ... wherein each subtype is given its own doSomething() implementation. The rest of our application can then be appropriately ignorant of whether any given SuperType is really a SubTypeA or a SubTypeB. Wonderful. But we're still given is-a operators in most, if not all, type-safe languages, and that seems to suggest a potential need for explicit type testing. So, in what situations, if any, should we or must we perform explicit type testing? Forgive my absent-mindedness or lack of creativity. I know I've done it before; but it was honestly so long ago I can't remember if what I did was good! And in recent memory, I don't think I've encountered a need to test types outside my cowboy JavaScript.
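
    For one commonly cited answer, consider a boundary where the static type genuinely is the supertype and the behaviour in question belongs to only one subtype. A sketch in TypeScript (all class and function names are hypothetical):

```typescript
// Polymorphism handles behaviour the hierarchy owns; no type test needed.
abstract class Shape {
  abstract area(): number;
}

class Circle extends Shape {
  constructor(readonly r: number) { super(); }
  area(): number { return Math.PI * this.r * this.r; }
}

class Square extends Shape {
  constructor(readonly side: number) { super(); }
  area(): number { return this.side * this.side; }
}

function totalArea(shapes: Shape[]): number {
  return shapes.reduce((sum, s) => sum + s.area(), 0);
}

// A defensible explicit type test: a capability only one subtype has,
// which we don't want to force into the base class.
function label(s: Shape): string {
  if (s instanceof Circle) {
    return `circle of radius ${s.r}`; // narrowed: s.r is safe here
  }
  return `shape with area ${s.area()}`;
}

console.log(label(new Circle(2)), totalArea([new Circle(1), new Square(2)]));
```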

    Read the article

  • White box testing with Google Test

    - by Daemin
    I've been trying out GoogleTest for my C++ hobby project, and I need to test the internals of a component (hence white box testing). At my previous work we just made the test classes friends of the class being tested. But with Google Test that doesn't work, as each test is given its own unique class, derived from the fixture class if specified, and friendship doesn't transfer to derived classes. Initially I created a test proxy class that is friends with the tested class. It contains a pointer to an instance of the tested class and provides methods for the required, but hidden, members. This worked for a simple class, but now I'm up to testing a tree class with an internal private node class, which I need to access and mess with. I'm just wondering if anyone using the GoogleTest library has done any white box testing, and if they have any hints or helpful constructs that would make this easier. OK, I've found the FRIEND_TEST macro defined in the documentation, as well as some hints on how to test private code in the advanced guide. But apart from having a huge number of friend declarations (i.e. one FRIEND_TEST for each test), is there an easier idiom to use, or should I abandon GoogleTest and move to a different test framework?

    Read the article

  • Node.JS testing with Jasmine, databases, and pre-existing code

    - by Jim Rubenstein
    I've recently built the start of a core system which is likely going to turn into a monster product. I'm building the system with node.js, and after I got a small base built, I decided it'd be a great idea to start using some sort of automated test suite. I decided to use Jasmine, as it seems pretty solid and has a lot of features for stubbing, spying, and mocking methods and classes. The application has a lot of external data stores and API access (Kestrel, MySQL, MongoDB, Facebook, and more). My issue is, I've got a good amount of code written that I want to start testing, as it represents the underpinnings of the application. What are the best practices for testing methods/classes that access external APIs that I may or may not have control over? As an example, I have a data structure that fetches a bunch of data from a MySQL database. I want to test the method that retrieves the data, and I'm not sure how to go about it. I could test the fetch method, which is supposed to return an array of objects, but to isolate the method from the database I need to define my own fixture data. So what I end up doing is stubbing the MySQL execution and returning a static dataset; in effect, I write a function that returns the dataset that makes my test pass. That doesn't seem to actually test the code, other than verifying that a method is being called. I know this is kind of abstract and vague; the idea of testing seems very abstract itself, so hopefully someone has some experience and can guide me in the right direction. Any advice or reading I can do is more than welcome. Thanks in advance.
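
    For what it's worth, the stub described usually still earns its keep: it pins down the query-and-mapping logic around the database call, while a separate integration test covers the real MySQL round trip. A hedged sketch using Jasmine spies; the UserStore class and query shape are hypothetical.

```typescript
// UserStore is a hypothetical wrapper around a MySQL connection; the
// spec verifies the row-to-object mapping, not MySQL itself.
class UserStore {
  constructor(private db: { query(sql: string): Promise<any[]> }) {}

  async activeUsers(): Promise<{ id: number; name: string }[]> {
    const rows = await this.db.query(
      "SELECT id, name FROM users WHERE active = 1"
    );
    return rows.map((r) => ({ id: r.id, name: r.name }));
  }
}

describe("UserStore.activeUsers", () => {
  it("maps rows into plain user objects", async () => {
    // A Jasmine spy stands in for the real connection object.
    const query = jasmine.createSpy("query").and.returnValue(
      Promise.resolve([{ id: 1, name: "Ada", active: 1 }])
    );

    const users = await new UserStore({ query }).activeUsers();

    expect(query).toHaveBeenCalledWith(jasmine.stringMatching(/FROM users/));
    expect(users).toEqual([{ id: 1, name: "Ada" }]);
  });
});
```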

    Read the article

  • Unit testing to prove balanced tree

    - by Darrel Hoffman
    I've just built a self-balancing tree (red-black) in Java (the language should be irrelevant for this question, though), and I'm trying to come up with a good means of testing that it's properly balanced. I've tested all the basic tree operations, but I can't think of a way to test that it is indeed well and truly balanced. I've tried inserting a large dictionary of words, both pre-sorted and unsorted. With a balanced tree, those should take roughly the same amount of time, but an unbalanced tree would take significantly longer on the already-sorted list. But I don't know how to test for that in any reasonable, reproducible way. (I've tried doing millisecond timings on these, but there's no noticeable difference - probably because my source data is too small.) Is there a better way to be sure that the tree is really balanced? Say, by looking at the tree after it's created and seeing how deep it goes? (That is, without modifying the tree itself by adding a depth field to each node, which is just wasteful if you don't need it for anything other than testing.)
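
    Since the tree is red-black, one reproducible option is a test helper that walks the finished tree from the outside and asserts the red-black invariants directly; a valid red-black tree of n nodes then has height at most 2*log2(n+1) by construction, so no depth field is needed on the nodes. Sketched below in TypeScript (the node shape with color/left/right fields is an assumption about the implementation; the original is Java, but the walk is the same).

```typescript
// Hypothetical node shape; adapt to the real tree's accessors.
interface RBNode {
  color: "red" | "black";
  left: RBNode | null;
  right: RBNode | null;
}

// Returns the black-height of the subtree and throws if any
// red-black invariant is violated.
function checkRedBlack(node: RBNode | null): number {
  if (node === null) return 1; // null leaves count as black

  // Invariant: a red node must not have a red child.
  if (node.color === "red") {
    if (node.left?.color === "red" || node.right?.color === "red") {
      throw new Error("red node has a red child");
    }
  }

  // Invariant: every root-to-leaf path has the same number of black nodes.
  const leftBH = checkRedBlack(node.left);
  const rightBH = checkRedBlack(node.right);
  if (leftBH !== rightBH) {
    throw new Error("black-heights differ between subtrees");
  }

  return leftBH + (node.color === "black" ? 1 : 0);
}
```

    A test would insert the dictionary (sorted and unsorted), call checkRedBlack on the root, and additionally assert that the root is black and that an in-order walk yields sorted keys.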

    Read the article

  • Implementing unit testing at a company that doesn't do it

    - by Pete
    My company's head of software development just "resigned" (i.e. was fired) and we are now looking into improving the development practices at our company. We want to implement unit testing in all software created from here on out.

    Feedback from the developers: we know testing is valuable; but you are always changing the specs, so it'd be a waste of time; and your deadlines are so tight we don't have enough time to test anyway.

    Feedback from the CEO: I would like our company to have automated testing, but I don't know how to make it happen, and we don't have time to write large specification documents.

    How do developers get the specs now? Word of mouth or a PowerPoint slide. Obviously, that's a big problem. My suggestion is this: let's also give the developers a set of test data and unit tests, and that's the spec. It's up to management to be clear and quantitative about what it wants. The developers can put in whatever other functionality they feel is needed, and it need not be covered by tests.

    Well, if you've ever been in a company that was in this situation, how did you solve the problem? Does this approach seem reasonable?

    Read the article

  • Oracle Business Intelligence integration with Oracle Open Office

    - by Harald Behnke
    A highlight of the latest Oracle Office product launches is the first set of Oracle application connectors, introduced with Oracle Open Office 3.3. The Oracle Open Office Connector for Oracle Business Intelligence perfectly demonstrates the advantages of enterprise and office productivity software engineered to work together. The connector enables you to access and run Oracle Business Intelligence Enterprise Edition requests directly within Oracle Open Office. The refreshable requests leverage not only native Open Office functionality but also the scalability and performance of the Oracle Business Intelligence server (R10.x). The requests reference a single source of information as defined in the Oracle Business Intelligence server data, thus ensuring consistent information across the enterprise. See how it works in the demo video. Beyond the dramatic license cost savings for Oracle Business Intelligence customers using Oracle Open Office, the joint engineering efforts result in usability and efficiency benefits not available with Microsoft Office:

    - Import styles and conditional formats defined in Business Intelligence Answers
    - Apply customized styles and direct or conditional formats to Oracle Business Intelligence data - all changes are preserved during refresh
    - Change chart properties for Oracle Open Office charts - all changes are preserved during refresh

    Read more about the Oracle Open Office enterprise features.

    Read the article

  • mocha testing for the lazies, single key-press for all possible tests

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like:

        Test. [U]nit, [I]ntegration : i (user input)
        Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u 2
        User Interaction. Running "mocha integration\2userint.js" ...

    So essentially I have configured a batch "option" for each test file, which I can choose to run individually or all together. But adding and removing tests is a pain. Is there something that does this, or anything like it, automatically - something that reads all the files and asks me which file(s) I want to test? A GUI with checkboxes would be ultimate! But I'll take anything. I'm working in node.js.
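
    Since the environment is node.js anyway, a small script can discover the test files itself and prompt for a selection before handing off to mocha. A rough sketch in TypeScript; it assumes the test files live under test/ and that mocha is on the PATH.

```typescript
// Hypothetical test picker: lists every .js file under ./test, asks
// which ones to run, and spawns mocha on the chosen files.
import { readdirSync } from "fs";
import { join } from "path";
import { createInterface } from "readline";
import { spawn } from "child_process";

const testDir = "test"; // assumption: all test files live here
const files = readdirSync(testDir).filter((f) => f.endsWith(".js"));

files.forEach((f, i) => console.log(`[${i}] ${f}`));
console.log("[a] all");

const rl = createInterface({ input: process.stdin, output: process.stdout });
rl.question("Which test(s)? (e.g. 0 2, or a): ", (answer) => {
  rl.close();
  const chosen =
    answer.trim() === "a"
      ? files
      : answer.split(/\s+/).map((n) => files[Number(n)]).filter(Boolean);

  // Hand the selected files to mocha, inheriting stdio so its
  // reporter output shows up in this terminal.
  spawn("mocha", chosen.map((f) => join(testDir, f)), { stdio: "inherit" });
});
```

    New test files are picked up automatically the next time the script runs, which removes the maintenance burden of the hand-written batch menu.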

    Read the article

  • Continuous integration never results in build errors

    - by Jon
    Hi, I'm working with a variety of Java EE websites which use internal libraries we've developed. For each website, we only upgrade to new versions of our internal libraries as needed, and before committing we make sure that the site compiles fine. What this means is that when TeamCity does a build of one of our sites, the site compiles fine, but later, when the site is updated to the latest version of the internal libraries, there might be a compile error. Is there a good way to handle this? We're not using Maven yet; would using Maven mean that our websites could automatically use the latest version of internal libraries? Thanks.

    Clarification: what we sometimes run into is this:

    - Project A depends on a library, and is currently using library version 1.0.
    - Project B also depends on that library. I make changes to the library so that it is now version 1.5. Project B now uses 1.5.
    - Project A and project B have both been built just fine by the CI server (TeamCity).
    - Working on project A again, I update to 1.5 and discover that 1.5 has breaking changes in it.

    Is there a way for the CI server to discover these kinds of breaking changes?

    Read the article

  • What's wrong with performing unit tests against a concrete implementation if your frameworks are not going to change?

    - by palm snow
    First, a bit of background: we are re-architecting our product suite, which was written 10 years ago and has served its purpose. One thing we cannot change is the database schema, as we have a 500+ client base using this system. Our schema has over 150 tables. We have decided on Entity Framework 4.1 as the DAL and are still evaluating various frameworks for storing our business logic. I am investigating bringing unit testing into the mix, but I am also confused as to how far I need to go in setting up a full-blown TDD environment. One aspect of setting up unit testing is implementing repositories, units of work, mocking frameworks, etc. This means there will be cost and investment in the code bloat associated with all these frameworks. I understand some of this could be auto-generated, but when it comes to things like behaviors, it will mostly be hand written. Just to be clear, I am not questioning the importance of unit testing your code. I am just not sure we need all its components (like repositories, mocking, etc.) when we are fairly certain of the storage mechanism/framework (SQL Server/Entity Framework). All that code bloat with generic repositories makes sense when you need a generic layer with the ability to change it whenever you like; however, that is very likely a YAGNI in our case. What we need is more like integration testing, where we can test our code with concrete repository objects and test data in the database. In this scenario, just running integration tests seems more beneficial in our case. Any thoughts on whether I am missing anything here?

    Read the article

  • How to do concurrent modification testing for a Grails application

    - by werner5471
    I'd like to run tests that simulate users modifying certain data at the same time in a Grails application. Are there any plug-ins/tools/mechanisms I can use to do this efficiently? They don't have to be Grails specific, but it should be possible to fire multiple actions in parallel. I'd prefer to run the tests at the functional level (so far I'm using Selenium for other tests) to see the results from the user's perspective. Of course, this could be done in addition to integration testing, if you'd recommend running concurrent modification tests at the integration level as well.
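
    Whatever tool ends up driving the UI, the core of such a test is firing the conflicting actions concurrently and asserting on the outcome. A bare-bones sketch in TypeScript (assumes a runtime with global fetch; the endpoint and payloads are hypothetical):

```typescript
// Fire two conflicting edits at the same record concurrently, then
// check which write won (or that the app reported a conflict).
async function editItem(name: string): Promise<number> {
  const res = await fetch("http://localhost:8080/app/item/1", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name }),
  });
  return res.status;
}

async function concurrentEditTest(): Promise<void> {
  // Promise.all starts both requests before awaiting either, so the
  // two edits overlap in flight like two real users' edits would.
  const [statusA, statusB] = await Promise.all([
    editItem("edit from user A"),
    editItem("edit from user B"),
  ]);

  // With optimistic locking, expect one success and one conflict (409);
  // two unconditional successes suggest a lost-update hazard.
  const successes = [statusA, statusB].filter((s) => s === 200).length;
  if (successes === 2) {
    console.warn("both writes succeeded - possible lost update");
  }
}

concurrentEditTest();
```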

    Read the article
