Does software testing methodology rely on flawed data?

Posted by Konrad Rudolph on Programmers
Published on 2012-09-04T21:25:53Z

It’s a well-known fact in software engineering that the cost of fixing a bug increases exponentially the later in the development process it is discovered. This is supported by data published in Code Complete and adapted in numerous other publications.

However, it turns out that this data never existed. The data cited by Code Complete apparently does not show such a cost/development-time correlation, and similar published tables showed the correlation only in some special cases, with a flat curve (i.e. no increase in cost) in others.
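
To make the two competing pictures concrete, here is a minimal Python sketch contrasting an exponential cost model with a flat one. The phase names, base cost, and growth factor are hypothetical placeholders for illustration, not figures taken from Code Complete or any of the publications in question.

```python
# Illustration only: the multipliers below are hypothetical placeholders,
# not data from Code Complete or any other source.
PHASES = ["requirements", "design", "construction", "system test", "post-release"]

def exponential_cost(base: float = 1.0, growth: float = 3.0) -> dict[str, float]:
    """The conventional claim: fix cost grows geometrically with each later phase."""
    return {phase: base * growth**i for i, phase in enumerate(PHASES)}

def flat_cost(base: float = 1.0) -> dict[str, float]:
    """The alternative reading: fix cost is roughly constant across phases."""
    return {phase: base for phase in PHASES}

if __name__ == "__main__":
    print("exponential:", exponential_cost())
    print("flat:       ", flat_cost())
```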

Is there any independent data to corroborate or refute this?

And if it is true (i.e. if there simply is no data to support this exponentially higher cost for late-discovered bugs), how does this impact software development methodology?

