Black-Box Testing Techniques in Software Engineering — csmates.com

Sonu Tewatia
4 min read · Feb 12, 2021


Equivalence class testing, combined with boundary value analysis, is a black-box technique for selecting test cases so that each new case is chosen to detect previously undetected faults.

Cause-effect graphing overcomes a weakness of these two methods: neither considers potential combinations of input/output conditions.

Equivalence Class Partitioning

This method divides the input domain of a program into classes of data. Test case design is based on defining an equivalence class for a particular input, where an equivalence class represents a set of valid or invalid input values.

An equivalence class is a set of test cases such that any one member of the class is representative of any other member of the class.

Suppose the specifications for a database product state that the product must be able to handle any number of records from 1 through 16,383. If the product can handle 34 records and 14,870 records, then the chances are good that it will also work fine for, say, 8,534 records. If the product works correctly for any one test case in the range of 1 to 16,383, then it will probably work for any other test case in the range. The range from 1 to 16,383 constitutes an equivalence class.

For the product, 3 equivalence classes are:

1. Less than one record.

2. From 1 to 16,383 records.

3. More than 16,383 records.

Testing the database product then requires that one test case from each equivalence class be selected.
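
To make the example concrete, here is a minimal sketch in Python, assuming a hypothetical handle_records function standing in for the database product, that exercises one representative value from each of the three equivalence classes:

# Hypothetical stand-in for the database product's record-handling routine:
# it accepts any count from 1 through 16,383 and rejects anything else.
def handle_records(count: int) -> bool:
    return 1 <= count <= 16_383

# One representative test case per equivalence class.
equivalence_cases = [
    (0, False),       # class 1: fewer than one record (invalid)
    (8_534, True),    # class 2: 1 to 16,383 records (valid)
    (20_000, False),  # class 3: more than 16,383 records (invalid)
]

for count, expected in equivalence_cases:
    assert handle_records(count) == expected, f"failed for {count} records"
print("All equivalence-class test cases passed.")

Any other representative from the same class (say, 34 or 14,870 instead of 8,534) should, by the equivalence argument, behave the same way.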

Guidelines for Equivalence Partitioning

Equivalence classes may be defined according to the following guidelines; a short sketch after the list shows how each guideline maps to concrete classes.

  • Range: If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
  • Specific Value: If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
  • Member of Set: If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
  • Boolean: If an input condition is Boolean, one valid and one invalid class are defined.
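
As an illustration of these guidelines, the sketch below uses made-up input conditions (age range, passenger count, payment method, newsletter opt-in) that are not part of the original example; it simply enumerates the classes each guideline suggests:

# Made-up input conditions and the equivalence classes the guidelines suggest.
equivalence_classes = {
    # Range: age must be 18-65 -> one valid and two invalid classes.
    "age (18-65)": {"valid": [30], "invalid": [10, 70]},
    # Specific value: exactly 4 passengers -> one valid and two invalid classes.
    "passengers (exactly 4)": {"valid": [4], "invalid": [3, 5]},
    # Member of a set: payment in {card, cash} -> one valid and one invalid class.
    "payment method": {"valid": ["card"], "invalid": ["cheque"]},
    # Boolean: newsletter opt-in -> one valid and one invalid class.
    "newsletter opt-in": {"valid": [True], "invalid": [None]},
}

for condition, classes in equivalence_classes.items():
    print(condition, "->", classes)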

Boundary Value Analysis

Boundary value analysis (BVA) is complementary to equivalence partitioning. Rather than selecting arbitrary values from within an equivalence class, the test case designer chooses values at the extremes of the class.

BVA also encourages test case designers to look at output conditions and design test cases for the extreme conditions in output.

A successful test case is one that detects a previously undetected fault. Boundary value analysis is a high-payoff technique for maximizing the chance of finding such a fault.

Experience has shown that selecting test cases on, or just to one side of, the boundaries of an equivalence class increases the probability of detecting a fault. Thus, when testing the database product, boundary values such as 0, 1, 16,383, and 16,384 should be included in the test data.

The use of equivalence classes, together with boundary value analysis, is a valuable technique for generating a relatively small set of test data with a high probability of uncovering most faults.

Guidelines for Performing BVA

Maximum and Minimum Numbers:

  • If an input condition specifies the number of values, test cases should be developed that exercise the minimum and maximum numbers.
  • Values above and below the minimum and maximum are also tested, as shown in the sketch after this list.
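
Applied to the earlier database example (again using the hypothetical handle_records stand-in), a minimal BVA sketch exercises values on and just to either side of each boundary of the 1-16,383 range:

# Boundary value analysis for the 1..16,383 record range.
def handle_records(count: int) -> bool:
    return 1 <= count <= 16_383

boundary_cases = [
    (0, False),      # just below the lower boundary
    (1, True),       # on the lower boundary
    (2, True),       # just above the lower boundary
    (16_382, True),  # just below the upper boundary
    (16_383, True),  # on the upper boundary
    (16_384, False), # just above the upper boundary
]

for count, expected in boundary_cases:
    assert handle_records(count) == expected, f"failed for {count} records"
print("All boundary-value test cases passed.")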

Cause-Effect Graphs

A drawback of the previous two methods is that they don't consider potential combinations of input/output conditions. Cause-effect graphs connect input classes (causes) to output classes (effects), yielding a directed graph.

Cause-effect graphing is a test case design approach that offers a concise depiction of logical conditions and associated actions.

A simplified version of cause-effect graph symbology is shown in the figure below. The left-hand column of the figure gives the various logical associations among causes and effects. The dashed notation in the right-hand column indicates potential constraining associations that might apply to either causes or effects.

[Figure: sample symbols used for drawing cause-effect graphs]

Important guidelines for cause-effect graphs.

  • Causes and effects are listed for a module, and an identifier is assigned to each.
  • A cause-effect graph is developed.
  • The graph is converted to a decision table.
  • Decision table rules are converted to test cases.

Cause-effect graphs translate equivalence partitions into decision tables via Boolean operator descriptions of the output conditions in terms of the input variables. Test data can be generated from the decision table form, reducing the number of test cases required. Some authors state that partition testing is more effective than testing with randomly generated data, although random testing is more cost-effective in terms of time and manpower.
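
As a rough sketch of the last two steps, the example below uses a made-up login rule (two causes, two effects) that is not from the article; each rule (column) of the resulting decision table becomes one test case:

from itertools import product

# Made-up causes: c1 = username is registered, c2 = password matches.
# Made-up effects: grant access, or show an error message.
def expected_effects(c1: bool, c2: bool) -> dict:
    return {"grant_access": c1 and c2, "show_error": not (c1 and c2)}

# Walk every combination of causes (the rules/columns of the decision table)
# and emit one test case per rule.
for rule, (c1, c2) in enumerate(product([True, False], repeat=2), start=1):
    print(f"Rule {rule}: causes=({c1}, {c2}) -> expected effects={expected_effects(c1, c2)}")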

Comparison Testing

For critical applications requiring fault tolerance, a number of independent versions of the software are developed from the same specification. Test cases designed using black-box methods are then applied to each version.

If the output from each version is the same, then it is assumed that all implementations are correct. If the outputs differ, each version is examined to see if it is responsible for the discrepancy.

Comparison testing is not foolproof. If the specification from which all versions were developed is incorrect, every version will likely reflect the error and may produce the same (incorrect) output.
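
A minimal sketch of the idea, using two hypothetical independent implementations of the earlier record-handling specification, runs the same black-box inputs against every version and flags any disagreement:

# Two hypothetical independent implementations built from the same specification.
def version_a(count: int) -> bool:
    return 1 <= count <= 16_383

def version_b(count: int) -> bool:
    return 0 < count < 16_384

versions = {"A": version_a, "B": version_b}
inputs = [0, 1, 8_534, 16_383, 16_384]

# Apply the same black-box test cases to each version and compare outputs.
for value in inputs:
    outputs = {name: fn(value) for name, fn in versions.items()}
    if len(set(outputs.values())) > 1:
        print(f"Disagreement for input {value}: {outputs}")
    else:
        print(f"All versions agree for input {value}: {outputs}")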

Originally published at https://www.csmates.com.
