Test Design Techniques: CTFL Tutorial
4.1 Test Design Techniques
Hello and welcome to the Certified Tester Foundation Level (CTFL®) course offered by Simplilearn. This is the fourth lesson of the course, where we will discuss Test Design Techniques. Let us look at the course map in the next screen.
4.2 Course Map
Lesson four is divided into six topics. They are: Common Testing Terms, Test Development Process, Categories of Test Design Techniques, Behavior-Based Techniques, Structure-Based Techniques, and Experience-Based Techniques. Let us begin with the objectives in the next screen.
4.3 Objectives
After completing this lesson, you will be able to: describe common testing terms, test cases, and test procedures; explain the process of test development; identify the categories of dynamic test design techniques; explain behavior-based or black box techniques; describe structure-based or white box techniques; and list the types of experience-based techniques. In the next screen, we will begin with the first topic, ‘Common Testing Terms.’
4.4 Common Testing Terms
In this topic we will define the common testing terms, and understand the concepts of test case and procedure with an example. Let us understand the common testing terms, in the next screen.
4.5 Common Testing Terms
An item or an event verifiable by one or more test cases is called a test condition; it describes the component or behavior to be tested. An example is testing whether a new user is able to register successfully. Test cases are defined to test certain test objectives or test conditions. A test case consists of a set of input values, execution pre-conditions, expected results, and execution post-conditions. A collection of test cases arranged sequentially to execute a test is called a test procedure; the sequence of test cases is defined based on priority, technical dependencies, and logical dependencies. Test data refers to the inputs required for executing a test case. Expected results are produced as a consequence of test execution and include outputs and changes to data states. Test scripts usually refer to an automated test procedure specification. Let us look at an example of a test case, in the next screen.
4.6 Test Case—Example
An example of the Modify Description test case is given on the screen. The test condition is Edit Description of Item, which simulates one of the actions a user performs every day. To execute the test, a user ID with privileges to update the record is required; this is the pre-condition. The test data are: abc as Username, xyz as Password, and TN0765 as Item ID. In step 1, the user invokes the application from the desktop icon; the expected result is that the application starts and the login screen is displayed. In step 2, the user logs in with the username and password; the expected result is a successful login with the main menu displayed. In step 3, the user selects Search from the menu; the expected result is that the Search screen is displayed. In step 4, the user enters the item ID in the Find field and clicks the Search button; the expected result is that the Item Properties screen is displayed. In step 5, the user clicks the Edit button; the expected result is that the Item Properties screen is displayed in edit mode. In step 6, the user modifies the text in the Description field and clicks the Save button; the expected result is that the Item Properties screen is displayed. In step 7, the user clicks the Main Menu button; the expected result is that the main menu screen is displayed. If all the expected results of the test case are met, the test case is considered a ‘pass’; if any expected result is not met, it is considered a ‘fail.’ The post-condition of the test case is that the item description is updated in the database. Let us discuss an example of a test procedure, in the next screen.
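To make the structure concrete, here is a minimal Python sketch of such a test case expressed as data, together with a simple pass/fail evaluation. The `evaluate` helper and the shortened step descriptions are our own illustration, not part of the course material:

```python
# The Modify Description test case, expressed as data.
test_case = {
    "name": "Modify Description",
    "precondition": "User ID with privileges to update the record",
    "test_data": {"username": "abc", "password": "xyz", "item_id": "TN0765"},
    "steps": [  # (action, expected result)
        ("Invoke application from desktop icon", "Login screen displayed"),
        ("Log in with username and password", "Main menu displayed"),
        ("Select Search from the menu", "Search screen displayed"),
        ("Enter item ID in Find and click Search", "Item Properties screen displayed"),
        ("Click Edit", "Item Properties screen in edit mode"),
        ("Modify Description and click Save", "Item Properties screen displayed"),
        ("Click Main Menu", "Main menu screen displayed"),
    ],
    "postcondition": "Item description updated in the database",
}

def evaluate(actual_results, steps):
    """Pass only if every expected result is met; otherwise fail."""
    if len(actual_results) != len(steps):
        return "fail"
    return "pass" if all(
        actual == expected
        for actual, (_action, expected) in zip(actual_results, steps)
    ) else "fail"
```

Running `evaluate` against results that match every expected step yields "pass"; a single deviation, such as an error screen at step 4, yields "fail", mirroring the pass/fail rule stated above.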
4.7 Test Procedure—Example
A test procedure usually has a name, a pre-condition, test data, a post-condition, and, for each entry, a serial number (S.No), test case ID, test case description, and expected result. In the example on the screen, the name of the test procedure is Issue loan to a new account holder; the pre-condition is that the user must have privileges to create an account for a new account holder and approve a loan application; and the test data are abc as Username, xyz as Password, and TN0765 as Item ID. The first test case to be executed is Create new user, with test case ID 1; the expected result is New user created in the system. The second is Submit loan application for a personal loan, with test case ID 3; the expected result is Loan application submitted. The third is Approve loan application, with test case ID 5; the expected result is Loan application approved. The fourth is Inform user of approval, with test case ID 2; the expected result is User intimated. The last is Issue loan, with test case ID 10; the expected result is Loan issued to user. When executed in this sequence, these test cases together test the Issue loan to a new account holder test condition. Note that the test procedure does not include details of the test execution process. In the next screen, we will begin with the second topic, ‘Test Development Process.’
4.8 Test Development Process
In this topic we will discuss the process of test development and understand the concept of testing quality. Let us understand the process of test development, in the next screen.
4.9 Test Development Process
The test development process includes test analysis, test design, and test implementation. Test analysis involves defining the approach, identifying the right techniques, and identifying the associated risks. Once the test conditions are defined, it is possible to link them to their sources in the test basis; this is called traceability. Test design involves the creation of test cases and test data. Note that one test case can be used to execute multiple test conditions. Test cases can be documented as described in the IEEE 829 standard, which is useful for learning how to document test cases. It contains the test case specification identifier, test items, input specifications, output specifications, environmental needs, special procedural requirements, and inter-case dependencies. You need a source of information about the correct behavior of the system; this source is often called an oracle or a test oracle. Test implementation is about defining test procedures that group the test cases in an appropriate way, so as to execute them and specify the sequential steps to be performed to run the test. Some test cases may need to be run in a specific sequence, and writing the test procedure is another opportunity to prioritize the tests. Test implementation also involves preparing the test execution schedule. In the next screen, we will begin with the third topic, ‘Categories of Dynamic Test Design Techniques.’
4.10 Categories of Dynamic Test Design Techniques
In this topic, we will look at the types of dynamic testing techniques. Let us understand them in the next screen.
4.11 Dynamic Testing Techniques—Types
Dynamic testing is divided into experience-based, structure-based, and behavior-based techniques. Behavior-based testing is further classified into functional and non-functional. Functional testing is subdivided into equivalence partitioning, boundary value analysis, cause and effect, and random testing. Non-functional testing is subdivided into usability and performance. Structure-based testing is divided into data flow and control flow. Data flow is further subdivided into definition-use and symbolic execution. Control flow is subdivided into statement, branch or decision, and branch condition. Experience-based testing is further classified into exploratory testing, random testing, and error guessing. Let us look at the characteristics of the three test design techniques, in the next screen.
4.12 Test Design Techniques—Characteristics
Behavior-based techniques are also referred to as black box techniques. They can be either formal or informal, and they can be used to test the problem specifications and the software or its components. Test cases can be derived systematically from these models. Structure-based techniques are also known as white box or glass box techniques. Information on the construction of the software is used to derive the test cases, and the extent of software coverage can be measured for the existing test cases. In experience-based techniques, the tester’s knowledge and experience are used to derive the test cases. Existing knowledge about possible defects and their distribution is another source of information used to determine test cases. In the next screen, we will move on to the fourth topic, ‘Behavior-Based Techniques.’
4.13 Behavior-Based Techniques
In this topic we will discuss behavior-based techniques and its different types in the following screens. Let us understand the concept of behavior-based techniques, in the next screen.
4.14 Behavior-Based Testing Techniques
Behavior-based testing is also known as specification-based, functional, data-driven, input- or output-driven, or requirement-based testing. It focuses mainly on system requirements or specifications. This testing is done using different combinations of input data; after entering the input, the output received from the system is verified against the expected results. Specification-based testing is also called black box testing, as it focuses on the outputs generated in response to selected inputs and execution conditions and ignores the internal mechanism of a system or component. Based on the requirements, the tester knows what output to expect from the black box. Let us look at the types of behavior-based techniques, in the next screen.
4.15 Behavior-Based Techniques—Types
The five types of behavior-based techniques are: Equivalence Partitioning, Boundary Value Analysis, Decision Table Testing, State Transition Testing, and Use Case Testing. Let us discuss Equivalence Partitioning, in the following screen.
4.16 Equivalence Partitioning
Equivalence partitioning is a method of deriving test cases when there is a large number of input data ranges. It helps to drastically reduce the number of test cases required to test a system; it is an attempt to get good test coverage and find more errors with fewer test cases. In this method, classes of input conditions called equivalence classes are identified, in which each member of the class causes the same kind of processing and therefore leads to the same output. The implementation of equivalence partitioning involves examining the inputs and outputs and dividing them into equivalence classes based on their behavior. Inputs can be valid (positive) or invalid (negative). In the next screen, we will discuss the guidelines for identifying equivalence classes.
4.17 Guidelines for Identifying Equivalence Classes
Following are some general guidelines for identifying equivalence classes. If the requirements state that a numeric value input to the system must be within a range of values, identify one valid equivalence class of inputs within the valid range, and two invalid equivalence classes of inputs, one lower and one higher than the valid range. For example, if a system accepts ‘date of birth’ as input and gives ‘age’ as output, the classes can be identified as follows. The valid input class is a date of birth that yields an age greater than zero; any value within this class generates a valid output, an age greater than zero. The invalid input class is a date of birth that yields an age less than zero; for any input in this class, the output is an error message. In the next screen, we will discuss an example of equivalence partitioning.
4.18 Equivalence Partitioning—Example
Let us look at an example where the requirement is that the Employee ID should be 6 digits. First, the system inputs have to be partitioned into equivalence classes by examining the possible inputs. The input is a 6-digit integer, which can be from 100,000 to 999,999. The valid equivalence partition is therefore 100000 to 999999, and the invalid partitions are less than 100000 and greater than 999999. To have a robust test and reasonable coverage, one or two values from each equivalence class should be selected. Then, the result or output for each type of input should be determined, and test cases should be defined accordingly. In the next screen, we will focus on Boundary Value Analysis.
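The partitioning can be sketched in code as follows; the function name and the representative values chosen from each class are our own illustration:

```python
def is_valid_employee_id(value: int) -> bool:
    """Requirement: Employee ID must be a 6-digit number."""
    return 100000 <= value <= 999999

# One or two representative values from each equivalence class:
valid_partition = [100000, 543210]      # valid class: 100000..999999
invalid_below   = [0, 99999]            # invalid class: < 100000
invalid_above   = [1000000, 12345678]   # invalid class: > 999999
```

The technique assumes that testing one representative value per class is as effective as testing every value in the class, which is what makes the reduction in test cases possible.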
4.19 Boundary Value Analysis
Boundary Value Analysis or BVA is a black-box test design technique in which test cases are designed based on testing the boundaries between partitions for both valid and invalid boundaries. Boundary value is an input or output value, which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge. In the next screen, we will discuss an example of Boundary Value Analysis.
4.20 Boundary Value Analysis—Example 1
For example, if a program accepts a number in the range of -100 to +100, there would be three valid equivalence partitions: the first class is -100 to -1, the negative range; the second class consists of just zero; and the third class is 1 to 100, the positive range. Each range has a minimum and a maximum value at its boundaries. For the negative range, the lower boundary is -100 and the upper boundary is -1; the values to be checked for the lower boundary are -101, -100, and -99, and for the upper boundary, -2, -1, and 0. Similarly, for the positive range, the lower boundary is 1 and the upper boundary is 100, so the values to be checked are 0, 1, 2, 99, 100, and 101. Note that there is some duplication among these test data values. When the duplicates are removed, the final input conditions are: -101, -100, -99, -2, -1, 0, 1, 2, 99, 100, and 101. In the next screen, we will discuss another example of Boundary Value Analysis.
4.21 Boundary Value Analysis—Example 2
Using the equivalence partitions identified earlier, test cases can be chosen at the boundary of each class. The valid partition is 100000 to 999999, and the invalid partitions are < 100000 and > 999999. Boundary values for the valid partition are 100000, 100001, 999998, and 999999. Boundary values for the first invalid partition (< 100000) are 000000 and 99999; for the second invalid partition (> 999999), the boundary value is 1000000, the adjacent values 999998 and 999999 having already been covered by the valid partition. In the following screen, we will focus on Decision Table Testing.
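A small helper can generate the three-value boundary set for any partition; this generic formulation is a sketch of ours, not part of the syllabus:

```python
def boundary_values(low: int, high: int) -> dict:
    """Three-value BVA: each boundary plus its nearest neighbours."""
    return {
        "lower": [low - 1, low, low + 1],
        "upper": [high - 1, high, high + 1],
    }

bv = boundary_values(100000, 999999)
# bv["lower"] -> [99999, 100000, 100001]
# bv["upper"] -> [999998, 999999, 1000000]
```

Applying the helper to the valid partition reproduces the values listed above: the neighbours below the lower boundary and above the upper boundary fall into the invalid partitions.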
4.22 Decision Table Testing
A decision table displays combinations of inputs and/or stimuli, termed ‘causes’, with their associated outputs and/or actions, termed ‘effects.’ The permutations and combinations of these inputs used to design test cases form the decision table. This technique is also referred to as the ‘cause-effect’ table, as there is an associated logical diagramming technique called ‘cause-effect graphing’, which is sometimes used to help derive the decision table. Decision tables aid the systematic selection of effective test cases and can have the beneficial effect of finding problems and ambiguities in the specification. The technique works well with equivalence partitioning: the combinations of conditions explored may be combinations of equivalence partitions. Decision table testing is a black-box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. Let us discuss an example of Decision Table Testing, in the next screen.
4.23 Decision Table Testing—Example
Let us look at an example of bank software that is responsible for debiting a certain amount from an account. In the table given on the screen, C indicates a condition, A indicates an action, Y denotes true, N denotes false, X denotes an action to be taken, and a blank cell indicates that the condition has no effect on the action to be taken and no action is required. The decision table has conditions C1, C2, and C3 and actions A1, A2, and A3 represented by rows, and five permutations represented by columns 1 to 5. Analyzing the decision table shows that action A1 should be taken when conditions C1, C2, and C3 are all true; action A2 should be performed when both C1 and C2 are true and C3 is false; and action A3 is performed when C1 is true and C2 is false, in which case C3 has no effect on the action to be taken. This information helps to define test cases that satisfy these conditions. Test cases are also designed to check for unwanted actions in the system; these are called negative test cases. Let us look at a few key terms in state transition testing, in the next screen.
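The rules read off the decision table can be encoded directly. The dictionary-based representation below is a sketch of ours, with `None` standing for a blank (“don’t care”) cell:

```python
# Each rule maps a combination of conditions to the action to take.
RULES = [
    ({"C1": True,  "C2": True,  "C3": True},  "A1"),
    ({"C1": True,  "C2": True,  "C3": False}, "A2"),
    ({"C1": True,  "C2": False, "C3": None},  "A3"),  # C3 is "don't care"
]

def decide(c1: bool, c2: bool, c3: bool):
    """Return the action for a condition combination, or None if no action."""
    actual = {"C1": c1, "C2": c2, "C3": c3}
    for conditions, action in RULES:
        if all(v is None or actual[key] == v for key, v in conditions.items()):
            return action
    return None  # blank column: the conditions require no action
```

Each column of the decision table then becomes one test case, for example checking that the combination Y, Y, N produces A2, and a negative test case checks that no action results when C1 is false.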
4.24 State Transition Testing—Key Terms
A few key terms of state transition testing are as follows. An input or action that may cause a transition is called an event. A state is a condition in which a system waits for events. A change from one state to another as a result of an event is called a transition. An operation initiated by a transition is called an action. A state diagram depicts the states that a component or system can assume and shows the events or circumstances that cause the system to change from one state to another. Behavior is the sequence of messages or events that an object accepts. An object is an entity that is in a particular state and exhibits a behavior. In the next screen, we will understand what state transition testing is.
4.25 State Transition Testing
State transition testing is a black box test design technique in which test cases are designed to execute valid and invalid state transitions. In the first image on the screen, the switch is in the ‘off’ state. An event occurs when the switch is turned on, and the action of this event is that the light is in the ‘on’ state, as in the second image. Moving the switch from the off state to the on state is a transition. Here is the state transition table for this example. Consider the current state and provide an input to see the output and finish state for three different steps. In step 1, the current state is OFF and the input is Switch ON; the output is Light ON and the finish state is ON. In step 2, the current state is ON and the input is Switch OFF; the output is Light OFF and the finish state is OFF. In step 3, the current state is OFF and the input is Switch ON; the output is Light ON and the finish state is ON. Let us discuss an example of State Transition Testing, in the next screen.
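The switch example’s transition table maps directly to code; this dictionary sketch is our own illustration:

```python
# (current state, event) -> (action, finish state)
TRANSITIONS = {
    ("OFF", "switch_on"):  ("light on",  "ON"),
    ("ON",  "switch_off"): ("light off", "OFF"),
}

def step(state: str, event: str):
    """Execute one transition; invalid transitions raise an error."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event!r} in state {state!r}")
```

Valid-transition test cases follow the table row by row, while invalid-transition test cases, such as attempting switch_on while the light is already ON, check that the system rejects events the table does not define.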
4.26 State Transition Testing—Example
4.27 Use Case Testing—Key Terms
Interactions arranged sequentially are called scenarios, and each scenario instance is a test case. A collection of possible scenarios performed by the user on the system is called a use case. Conditions to be met for the successful completion of a use case are called use case pre-conditions. The final state of the system after the completion of the use case is called the use case post-condition. In the next screen, we will understand what use case testing is.
4.28 Use Case Testing
A use case is a description of a particular use of the system by a user. Each use case describes the user interactions with the system, to achieve a specific task. Use case testing is a technique that helps to identify test cases that exercise the system on a transaction by transaction basis from start to finish. It helps in designing acceptance testing, and identifying integration defects and defects from common real life scenarios. Let us discuss an example of use case testing, in the next screen.
4.29 Use Case Testing—Example
For example, consider an online bill payment application that has three main use cases: Login, Online Payment, and Logout. These use cases define all the conditions, in terms of requirements, for the logical part of the system or application, and they are inter-related: before making an online payment or logging out of the application, the user needs to log into the application. Test cases are derived from use cases to represent end-to-end transactions in the system. We will focus on behavior-based techniques and their test levels, in the next screen.
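An end-to-end test derived from these use cases might look like the sketch below; the `BillPaymentSession` class and its methods are hypothetical stand-ins for the real application:

```python
class BillPaymentSession:
    """Hypothetical model of the online bill payment application."""
    def __init__(self):
        self.logged_in = False
        self.payments = []

    def login(self, user: str, password: str) -> bool:
        self.logged_in = (user == "abc" and password == "xyz")
        return self.logged_in

    def pay(self, amount: float) -> None:
        if not self.logged_in:  # pre-condition of the Online Payment use case
            raise PermissionError("login required before payment")
        self.payments.append(amount)

    def logout(self) -> None:
        self.logged_in = False

def test_pay_bill_end_to_end():
    """One transaction from start to finish: Login -> Online Payment -> Logout."""
    session = BillPaymentSession()
    assert session.login("abc", "xyz")
    session.pay(50)
    session.logout()
    assert session.payments == [50] and not session.logged_in
```

The test exercises the whole transaction from start to finish, and the pre-condition check in `pay` illustrates how use case testing can expose integration defects between inter-related use cases.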
4.30 Behavior-Based Techniques and Test Levels
Behavior-based techniques and their test levels are as follows. Equivalence partitioning and BVA can be applied at all levels of testing. Decision table testing is used where requirements contain logical decisions. State transition testing is generally used for embedded software and technical automation. Use case testing is used when requirements are defined in the form of use cases, and is commonly used in user acceptance testing to define real-world scenarios. Usually, a combination of techniques is used throughout the test phases to achieve maximum coverage. In the next screen, we will discuss the fifth topic, ‘Structure-Based or White Box Techniques.’
4.31 Structure-Based Techniques
In this topic, we will understand the concept of structure-based or white box techniques, their coverage types, and examples to illustrate the concept. Let us start with the introduction to structure-based techniques, in the next screen.
4.32 Structure-Based Testing Techniques
Structure-based techniques are also called white box techniques, glass box testing, or logic-driven testing. They are used to select test cases by analyzing the internal structure of a component or a system. Structure-based techniques are often used to assess the amount of testing performed by tests derived from specification-based techniques; they are then used to design additional tests with the aim of increasing test coverage. Coverage is defined as the number of items covered in testing divided by the total number of items. Code coverage, therefore, is the extent of code tested out of the total code in the system; similarly, requirements coverage means the extent of requirements tested out of the total requirements in the system. Coverage items can be a statement, branch, condition, multiple conditions, or a component. The objective of testing is to achieve maximum code coverage, as even a small amount of untested code can result in defects when the system goes live. Coverage techniques measure a single dimension of a multi-dimensional concept: two different test cases may achieve the same coverage, yet the input data of one test case may find an error that the input data of the other does not. In the next screen, we will discuss the types of coverage in structure-based testing techniques.
4.33 Structure-Based Testing Techniques and Coverage Types
Statement coverage is the number of statements executed divided by the total number of statements. Branch coverage is the number of branches covered divided by the total number of branches. Decision coverage is the number of decision outcomes achieved divided by the total number of decision outcomes. The goal of testing should be to ensure that all branches and statements have been tested at least once. Though these techniques are mostly used at the component level, they can also be applied at the integration and system testing levels. Let us discuss an example of structure-based testing techniques, in the next screen.
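All three formulas share the same shape and can be captured in one small helper; the function is our own convenience, not standard terminology:

```python
def coverage(covered: int, total: int) -> float:
    """Coverage (%) = items exercised in testing / total items, times 100.

    The same formula applies whether the items are statements,
    branches, or decision outcomes.
    """
    return 100.0 * covered / total

statement_coverage = coverage(2, 6)  # 2 of 6 statements executed, ~33%
decision_coverage = coverage(1, 2)   # 1 of 2 decision outcomes achieved, 50%
```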
4.34 Structure-Based Testing Techniques—Example 1
Look at the piece of code on your screen to understand the percentage of test coverage achieved using the three techniques we have just discussed. Consider an input value for ave_bal of 4500, and see how many statements, decisions, and branches this covers. Let us continue with this example in the next screen.
4.35 Structure-Based Testing Techniques—Example 1 (contd.)
The input value of ave_bal equal to 4500 covers 33% of statements, 50% of decisions, and 33% of branches. Now consider another test case with an input value of ave_bal equal to 55000, which covers 88% of statements, 50% of decisions, and 66% of branches. Let us discuss another example of structure-based testing techniques, in the next screen.
4.36 Structure-Based Testing Techniques—Example 2
Consider the example on screen of a small application which decides whether an applicant is eligible to vote or not. Different inputs to the program, execute different decisions and branches. Invalid input of 0, fails the condition that age is greater than or equal to 1. Hence, an error message is displayed and the user is prompted to enter a valid age. The second input of 12, passes the first condition that age is greater than or equal to 1, but fails the second condition that age is greater than or equal to 18. Hence, a message is displayed to the user as ‘You are not eligible to vote.’ The third input of 21, passes the first and the second condition. Hence, the user is directed to the Voters site. At this stage, the system has a condition to check whether the user has completed the input correctly or not. It is the input of 21, along with the completed input to the system in the last stage that accomplishes the goal of the system. Different inputs execute different pieces of the code. Hence, Testers should build test cases using different input classes to ensure maximum statement, decision, and branch coverage. In the following screen, we will discuss ‘Other Structural Techniques.’
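The decision logic just described can be sketched as below; the exact message strings are assumptions for illustration:

```python
def check_voting_age(age: int) -> str:
    """Two nested decisions, giving three distinct branches."""
    if age >= 1:
        if age >= 18:
            return "Eligible: directed to the Voters site"
        return "You are not eligible to vote."
    return "Error: please enter a valid age."
```

The three inputs 0, 12, and 21 together execute every statement and every branch of this function, which is exactly why test cases should be built from different input classes rather than a single one.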
4.37 Other Structural techniques
Besides control flow, there is another category called data flow, with the symbolic execution and definition-use techniques under it. Data flow is a testing technique based on the definition of, and values assigned to, variables in the program. In the next screen, we will discuss the sixth topic, ‘Experience-Based Techniques.’
4.38 Experience-Based Techniques
In this topic, we will understand the types of experience-based technique. We will compare this technique with others and find out how to choose other techniques. Let us start with the types of experience-based techniques, in the next screen.
4.39 Experience-Based Testing Techniques—Types
The different types of experience-based testing techniques and their uses are as follows. In exploratory testing, tests are derived from the tester’s experience and intuition. It is a hands-on approach involving minimum planning and maximum test execution: the test design and test execution activities are performed simultaneously, often with limited or no formal documentation of test conditions, cases, or scripts. For example, the tester may decide to use BVA and test the most important boundary values without writing them down; some notes are taken during the exploratory testing session for the later production of a report. Error guessing is a test design technique in which the experience of the tester is used to anticipate defects in the component or system. It is a widely practiced technique: the tester is encouraged to think of situations in which the software may not be able to cope, for example, division by zero, blank input, empty files, and incorrect data. In random testing, the tester chooses components at random and tests their functionality, executing already defined test cases in no particular sequence. Random testing is also known as monkey testing, as the application is not tested in sequence, and it can also be categorized as a black box testing technique. Let us discuss the differences between experience-based and other techniques, in the next screen.
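Error guessing can be as simple as a checklist of inputs the tester suspects will break the software. The `register_username` validator below is a hypothetical example of ours; the guessed inputs follow the failure classes mentioned above, such as blank input:

```python
def register_username(name: str) -> str:
    """Hypothetical validator: rejects blank and over-long names."""
    if not name.strip():
        raise ValueError("blank input")
    if len(name) > 20:
        raise ValueError("input too long")
    return name.strip()

# Inputs a tester might guess at without any formal analysis:
error_guesses = ["", "   ", "x" * 21]
```

Each guessed input probes a situation in which the software might not cope; a guess that slips past the validation would reveal a defect that formal techniques might miss.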
4.40 Experience Based Techniques vs. Other techniques
Other techniques use a formal process of test design, test execution, and defect logging, whereas experience-based techniques use a more informal process that depends largely on the skill and experience of the tester. Other techniques need more effort during the test design and test execution phases; experience-based techniques need less. Other techniques are very effective when requirements are clearly documented and understood, and the level of coverage they provide also improves effectiveness. Experience-based techniques are more effective when requirements are not clearly documented or understood; however, coverage is not measured in this case, so the completeness of testing cannot be measured accurately. We will focus on Choosing Test Techniques, in the next screen.
4.41 Choosing Test Techniques
Choosing the right technique is crucial for developing good quality tests. For example, one of the benefits of structure-based techniques is that they can find undesired components in the code, such as 'Trojan horses' or other malicious code. However, if parts of the specification are missing from the code, only specification-based techniques can find them; and if factors are missing from both the specification and the code, only experience-based techniques can find them. Each individual technique is aimed at particular types of defect, and some techniques are applicable to specific situations and test levels. As each testing technique is tailored to finding one specific class of defect, using just one technique ensures that many defects of that particular class are found, whereas using a variety of techniques ensures that a variety of defects are found, resulting in more effective testing. Choosing the appropriate testing techniques is based on a number of factors, such as the development life cycle, use case models, type of system, level and type of risk, test objectives, time and budget, and the tester’s experience of the types of defects found in similar systems. With this, we have reached the end of the lesson. Let us now check your understanding of the topics covered.
A few questions will be presented in the following screens. Select the correct option and click Submit to see the feedback.
Here is a quick recap of what we have learned in this lesson. The test development process includes test analysis, test design, and test implementation. Dynamic testing is divided into experience-based, structure-based, and behavior-based techniques. Behavior-based techniques focus mainly on system requirements or specifications, and are called black box techniques. Structure-based techniques are used to select test cases by analyzing the internal structure of a component or a system, and are called white box techniques. The different types of experience-based techniques are exploratory testing, error guessing, and random testing.
This concludes ‘Test Design Techniques.’ The next lesson is ‘Test Management.’