5 x 8 x 3 = 140?
Before talking about complex stuff like software validation, maybe you should start with your multiplication tables?
The common assumption about software testing is that "more is better", and testing all the possible states and variable combinations guarantees you will find all the bugs. In the real world, however, there is not enough time or enough testers to test every combination of every variable. Not all bugs will be found, making …
"Great Idea!", I thought, "I could script all that and have my test plan produced in minutes"
Then I looked again at line 2 of figure 24.
Access | *Linux* | Web | IE
How can I test that combination?
(Yes, I appreciate your example was a simplified illustration, and this serves as a warning to check for more subtle issues when doing it for real.)
... they leave it up to their customers to "test" and debug Apple's buggy software and OS X. Here's just a sample from today:
QuickTime 7.2 breaking Rosetta CFM apps
iTunes 7.3.1 (#3): Problems syncing with iPhone
iPhone batteries don't appear to fully charge (Oops, that's shoddy Apple hardware)
Special Report: Troubleshooting Mac OS X 10.4.10
And these A-holes in Stupertino, California have the balls to run those Microsoft-bashing ads! Bwah ha ha ha ha hah!
The article assumes that (for example), Internet Explorer is a single product (with one set of expected behaviours) across any platform it is deployed on.
Sadly, that is not the case.
IE for Apple differs in many ways from that under Windows. IE5, IE5.5, IE6, IE7 are quite different products, as are their point variants. Also their behaviour alters according to the presence (and value of) a DOCTYPE header.
All that adds numerous columns for what the article presents as a single entity.
The article only uses IE as part of the example; if it were to use all versions of IE, it would clog the diagrams. The constant of note is that the number of tests is reduced. If you followed the same route with different versions of IE, you would reach the same conclusion; the path is irrelevant and only there to illustrate the point.
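For anyone wanting to try the idea behind the article, all-pairs generation can be approximated with a short greedy script. This is a minimal sketch, not the generator the article used (real tools use smarter heuristics), and the parameter values below are invented to mirror the 5 x 8 x 3 example:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy all-pairs test generation.

    params: dict mapping parameter name -> list of possible values.
    Returns a list of test cases (dicts) that together cover every
    pair of values across every pair of parameters.
    """
    names = list(params)
    # Every value pair that must appear in at least one test case.
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add(((a, va), (b, vb)))
    tests = []
    while uncovered:
        best, best_cov = None, -1
        # Greedily pick the full combination covering the most new pairs.
        for combo in product(*(params[n] for n in names)):
            row = dict(zip(names, combo))
            cov = sum(1 for (a, va), (b, vb) in uncovered
                      if row[a] == va and row[b] == vb)
            if cov > best_cov:
                best, best_cov = row, cov
        tests.append(best)
        uncovered -= {((a, va), (b, vb)) for (a, va), (b, vb) in uncovered
                      if best[a] == va and best[b] == vb}
    return tests

# Invented values in the article's 5 x 8 x 3 shape.
params = {
    "connection": ["dialup", "isdn", "dsl", "cable", "lan"],
    "os": [f"os{i}" for i in range(8)],
    "browser": ["ie", "firefox", "opera"],
}
suite = pairwise_suite(params)
print(len(suite))  # far fewer than the 120 exhaustive combinations
```

The suite can never be smaller than 40 cases here (the largest two parameters have 5 x 8 = 40 pairs, and each test covers exactly one of them), but it is still a large cut from the 120-row full factorial.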
140 = 5 * 8 * 3 + (5 + 8 + 3) + (1 + 1 + 1) + (1)
Brackets for clarity: 5 + 8 + 3 for when the other settings are either outside bounds or unused, 1 + 1 + 1 for the settings outside bounds or unused (1 for each out-of-bounds setting), and + 1 for when all settings are outside bounds or unused. Unlikely in this situation, but I think that's how 140 was found.
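That guessed decomposition can be checked in a couple of lines (this mirrors the commenter's guess, not a confirmed derivation of the article's figure):

```python
# Guessed decomposition of the article's 140-test figure.
total = (
    5 * 8 * 3        # all three settings within bounds: 120 combinations
    + (5 + 8 + 3)    # one setting in range, the other settings out of bounds/unused
    + (1 + 1 + 1)    # one case per individually out-of-bounds/unused setting
    + 1              # all settings out of bounds/unused at once
)
print(total)  # 140
```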
Interesting: the pairwise thing is certainly something to keep in mind, though 24 tables to show the concept is maybe a bit excessive.
However, combinations are one thing. What about an effective approach to determining "dangerous" values in larger ranges, where exhaustive tests are not realistic even for a single variable, like floats and large integer ranges? (Not counting the evaluation of border conditions, like expressions evaluating to zero, as effective.)
It is not news that the testing of software requires a large number of combinations of inputs (for black-box testing) or of combinations of paths (for white-box testing). "Exhaustive" testing (in the sense that all possible occurrences have been tried) is obviously impossible for anything other than the simplest programs.
If you want a real example of a fault that only caused a failure under unusual conditions, I discovered that a failure occurred with my department's web-based diary system whenever I used it between 12:00 and 01:00 GMT to look at a week that lay within BST.
Generally, I would recommend random testing (i.e., choosing test cases according to a realistic operational profile) in combination with a reliability growth model to recognise the stopping point, determined in advance as an acceptably low rate of finding new faults.
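The recommendation above can be sketched in a few lines: draw inputs from a weighted operational profile, record distinct faults, and stop once the rate of newly discovered faults over a window drops to an acceptable level. This is a toy sketch, not a real reliability growth model (which would fit a curve to the fault-discovery history); the system under test, the fault ids, and the profile generators are all hypothetical:

```python
import random

def run_random_tests(sut, profile, window=1000, max_new_faults=0,
                     max_tests=100_000):
    """Random testing with a crude stopping rule.

    sut: callable taking an input and returning a fault id, or None on pass.
    profile: list of (generator_fn, weight) pairs approximating real usage.
    Stops when a window of `window` tests finds at most `max_new_faults`
    previously unseen faults, or after `max_tests` tests.
    """
    generators, weights = zip(*profile)
    seen_faults = set()
    new_in_window = 0
    for i in range(1, max_tests + 1):
        gen = random.choices(generators, weights=weights)[0]
        fault = sut(gen())
        if fault is not None and fault not in seen_faults:
            seen_faults.add(fault)
            new_in_window += 1
        if i % window == 0:
            if new_in_window <= max_new_faults:  # acceptably low discovery rate
                break
            new_in_window = 0
    return seen_faults, i

# Toy system under test: fails whenever the input is divisible by 7.
faults, tests_run = run_random_tests(
    lambda x: "div7" if x % 7 == 0 else None,
    [(lambda: random.randint(0, 1000), 1.0)],
)
print(faults, tests_run)
```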