Tools Support for Testing: CTFL Tutorial
6.1 Tools Support for Testing
Hello and welcome to the Certified Tester Foundation Level (CTFL®) course offered by Simplilearn. This is the sixth and last lesson of the course, where we will discuss Tool Support for Testing. Let us look at the course map, in the next screen.
6.2 Course Map
Lesson six is divided into three topics. They are: Types of Testing Tools, Effective Use of Tools–Potential Benefits and Risks, and Introducing a Tool into an Organization. Let us begin with the objectives, in the next screen.
6.3 Objectives
After completing this lesson, you will be able to: Explain the purposes of tool support; List the types of testing tools; Identify the potential benefits and risks of testing tools; and Describe the steps involved in introducing a tool into an organization. In the next screen, we will begin with the first topic, ‘Types of Testing Tools.’
6.4 Types of Testing Tools
In this topic, we will discuss the purposes of tool support, the different test activities that are supported by tools, and the types of testing tools. Let us discuss the purposes of tool support, in the following screen.
6.5 Purposes of Tool Support
A few purposes of tool support are as follows: improve the efficiency of test activities, as automated functions are less error-prone than manual execution; automate activities that would otherwise require huge manual effort, by increasing processing speed; automate tests that cannot be executed manually; and increase the reliability of testing, as automation makes greater test coverage achievable. It is very important to consider the possible opportunities for automation within test projects. Automation needs to be carefully planned and executed using the right tools. We will look at tool support for testing, in the next screen.
6.6 Tool Support for Testing
Tools used to support test activities are referred to as test tools. Generally, tools are designed to address one or more test activities such as test case generation, execution, monitoring, and analysis. Two common terms in tool support for testing are CAST and CASE. Tools that support test activities across the software life cycle are called Computer Aided Software Testing, or CAST, tools. An example of a CAST tool is Quick Test Professional, used for automated functional testing. Tools that support the software development process are called Computer Aided Software Engineering, or CASE, tools. An example of a CASE tool is CA ERwin Data Modeler (ERwin), which provides a simple visual interface to design and manage complex data environments. Let us discuss testing tools, in the next screen.
6.7 Testing Tools
Tools are classified based on the testing activities or areas they support; for example, tools that support management activities or static testing. A tool that performs a specific, limited function is called a 'point solution.' Many commercial tools provide support for several functions. For example, a test management tool may provide support for testing that includes progress monitoring, configuration management of testware, incident management, requirements management, and traceability. A few tools can provide both coverage measurement and test design support. Tools may be used for testing itself, for example, Quick Test Professional to carry out functional testing; to support or help in testing, for example, the Data Maker tool, which creates high-quality test data for test execution; to support the testing process, for example, Quality Center; or to explore an application, for example, Text Explorer. Let us discuss the classification of testing tools, in the next screen.
6.8 Testing Tools—Classification
Testing is supported by multiple tools. Some of the common tool categories are: requirements testing, static analysis, test design, test data preparation, test running (character-based and Graphical User Interface or GUI), comparison, test harnesses and drivers, performance testing, dynamic analysis, debugging, test management, and coverage measurement. Let us discuss how these tools fit into the V-model, in the next screen.
6.9 Testing Tools in the V-Model
As seen in the image on the screen, requirements testing tools are used during the requirements analysis, functional design, technical design, and coding activities. These tools often help capture requirements using intuitive models. Once requirements are recorded, the tool can generate a high-level architecture design and code, and the in-built logic of the tool helps identify gaps in the requirements. Test design and test data preparation tools can be used once the requirements are finalized; these tools can generate test cases automatically from the requirements and support the automatic creation of test data. Static analysis tools become useful during the design and coding phases by identifying gaps in the code. Coverage measurement tools and debugging tools can be used during component testing. Test harnesses and drivers, and dynamic analysis tools, are used during integration testing. Test running and comparison tools are used at all levels of testing to compare results. Performance measurement tools are used in system and acceptance testing to measure system performance.
6.10 Tools Support for Management of Testing
The tools that support test management activities are as follows: Features provided by test management tools include managing tests, scheduling, logging test results, traceability support, interfacing with test execution, incident management, requirements management, configuration management, status report generation, and version control. Examples include HP Quality Center and IBM Rational Quality Manager. Features provided by requirements management tools include storing requirement statements, generating unique identifiers, checking the consistency of requirements, prioritizing requirements for testing purposes, and managing traceability through levels of requirements. Examples include Analyst Pro, CaliberRM, and WinA&D. Features of configuration management tools include storing information about versions and builds of the software and testware, traceability between software and testware, release management, baselining, and access control. Examples include Visual SourceSafe and ClearCase. An incident management tool is also known as a defect-tracking, defect-management, bug-tracking, or bug-management tool. Incident management tools make it much easier to keep track of incidents over time. Characteristics of incident management tools include incident logging support, change requests, and the production of incident or anomaly reports and other incident management reports. Examples include StarTeam and Bugzilla.
6.11 Tools Support for Static Testing
Tools that support static testing are as follows: Review tools support the review process. Typical features include planning and managing reviews, storing review comments and communicating with relevant people, tracking review comments, collecting metrics, and reporting key factors. Static analysis tools are generally used by developers as part of the development process and for component testing. The key aspect of these tools is that the code is not executed or run. Features of static analysis tools include calculating metrics such as cyclomatic complexity or nesting levels, enforcing coding standards, analyzing structures and dependencies, aiding code understanding, and identifying anomalies or defects in the code. Examples include CodeSonar and the Klocwork suite for Java. Modeling tools support the validation of software or system models and are typically used by developers. An advantage of both modeling and static analysis tools is that they can be used before dynamic tests. Features include identifying inconsistencies and defects within the model, helping to identify and prioritize testing areas of the model, predicting system response and behavior under various situations, and helping to understand system functions and identify test conditions using a modeling language such as the Unified Modeling Language or UML. An example is Rational Suite for Extensible Markup Language or XML. Let us focus on tool support for test specification, in the next screen.
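To make the idea of static analysis concrete, here is a minimal sketch of a check that measures nesting depth without ever running the code under analysis, in the spirit of the metrics described above. It is a toy illustration, not a real static analysis product; the threshold and sample source are invented.

```python
import ast

# Invented sample source: one shallow and one deeply nested function.
SOURCE = '''
def shallow(x):
    return x + 1

def deep(x):
    if x:
        for i in range(3):
            while i:
                i -= 1
'''

MAX_NESTING = 2  # illustrative threshold, like a coding-standard rule


def nesting_depth(node, depth=0):
    """Return the deepest control-flow nesting found under this AST node."""
    deepest = depth
    for child in ast.iter_child_nodes(node):
        extra = 1 if isinstance(child, (ast.If, ast.For, ast.While,
                                        ast.Try, ast.With)) else 0
        deepest = max(deepest, nesting_depth(child, depth + extra))
    return deepest


def find_violations(source, limit=MAX_NESTING):
    """List functions whose nesting exceeds the limit -- no code is executed."""
    tree = ast.parse(source)
    return [f.name for f in ast.walk(tree)
            if isinstance(f, ast.FunctionDef) and nesting_depth(f) > limit]


print(find_violations(SOURCE))  # flags only the deeply nested function
```

The key property of static analysis is visible here: the checked code is only parsed, never executed, so defects and standard violations surface before any dynamic test runs.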
6.12 Tool Support for Test Specification
The tools that support test specification include test design and test data preparation tools. A test design tool supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository. For example, if the requirements are kept in a requirements management or test management tool, or in a CASE tool used by developers, it is possible to identify the input fields, including the range of valid values. If the valid range is stored, the tool can distinguish between values that are accepted and values that generate an error message. If the error messages are stored, the expected result can be checked. If the expected result is known, it can be included in the test case. Another type of test design tool helps select combinations of possible factors to be used in testing. Some of these tools use orthogonal arrays and can easily identify the tests that exercise all the elements, such as input fields, buttons, and branches. Test data preparation tools enable data to be selected from an existing database, or created, generated, manipulated, and edited for use in tests. The most sophisticated tools can deal with a range of file and database formats.
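The derivation of test inputs from a stored valid range can be sketched as follows. This is a hypothetical illustration of boundary-value analysis; the field name and its range are invented, not taken from any real requirements repository.

```python
def boundary_values(lo, hi):
    """Two-value boundary analysis: values just inside and just outside
    the valid range [lo, hi]."""
    return sorted({lo - 1, lo, hi, hi + 1})


# Suppose a requirements repository records that an 'age' field
# accepts values from 18 to 65 (invented example).
candidates = boundary_values(18, 65)
print(candidates)
# 17 and 66 should trigger the stored error message; 18 and 65 should pass.
```

This mirrors what the section describes: once the valid range is stored, a tool can mechanically derive both the accepted inputs and the inputs expected to produce an error message.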
6.13 Tool Support for Test Specification—Characteristics
Following are the characteristics of test design and test data preparation tools: A test design tool generates test inputs or tests from requirements, a GUI, or a design model; it might generate expected outcomes; and it might generate test frames or test stubs to accelerate the testing process. Examples include Caliber-RBT, Smart Testing, and Blueprint RC. A test data preparation tool generates test data using scripts and by manipulating existing data, and creates test data according to the rules set. Examples include File-AID and GridTools. In the following screen, we will discuss tool support for test execution and logging.
6.14 Tool Support for Test Execution and Logging
Test execution tools use a scripting language, which is a programming language used to drive the tool. Therefore, any tester who wishes to use a test execution tool needs programming skills to create and modify the scripts. The advantage of programmable scripting is that tests can repeat actions for different data values, take different routes depending on the outcome of a test, and be called from other scripts, giving some structure to the set of tests. A test harness is a test environment comprised of the stubs and drivers needed to execute a test. A unit test framework tool provides an environment for unit or component testing in which a component can be tested in isolation or with suitable stubs and drivers. It also provides other support for the developer, such as debugging capabilities. The framework, or the stubs and drivers, supplies information needed by the software being tested, for example, an input from a user. They also receive information sent by the software, for example, a value to be displayed on a screen. Stubs are sometimes referred to as 'mock objects.' In the following screen, we will look at the characteristics of tools that support test execution and logging.
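A small sketch of a unit test framework with a stub may help here, using Python's standard unittest and mock modules. The component under test and its dependency are invented for illustration: the stub stands in for a currency service so the component can be tested in isolation, exactly as the stubs and drivers above are described.

```python
import unittest
from unittest import mock


def price_in_eur(amount_usd, fetch_rate):
    """Component under test (invented): converts a price using a rate
    supplied by an external service passed in as 'fetch_rate'."""
    return round(amount_usd * fetch_rate("USD", "EUR"), 2)


class PriceTest(unittest.TestCase):
    def test_conversion_uses_stubbed_rate(self):
        # The mock object plays the role of a stub: it returns a fixed
        # rate, so no real service is needed to test the component.
        stub = mock.Mock(return_value=0.9)
        self.assertEqual(price_in_eur(100, stub), 90.0)
        stub.assert_called_once_with("USD", "EUR")


# Run the test case programmatically, as a framework driver would.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(PriceTest))
print("all tests passed:", result.wasSuccessful())
```

The framework here acts as the driver, and the mock acts as the stub: together they form the minimal harness that lets the component run, and be checked, in isolation.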
6.15 Tool Support for Test Execution and Logging—Characteristics
Following are the characteristics of test execution tools: they enable tests to be run automatically or semi-automatically; they use static or stored inputs and expected outcomes for comparing the results; a few tools support a record and replay facility; and they can log defects automatically. Examples include Quick Test Professional, Rational Robot, SilkTest, and LoadRunner. Characteristics of test harnesses and unit test framework tools include support for testing the application as a whole or at the component level, and testing of components through stubs even when the entire program is not available for testing. In the next screen, we will understand the concept of a test comparator.
6.16 Test Comparator
A test comparator is a test tool used to perform automated test comparison. Test comparison is the process of identifying differences between the actual results produced by the component or system under test and the expected test results. Actual test results can be compared with the expected results in two ways. The comparison can be performed during test execution, which is called dynamic comparison. The other way is post-execution comparison, where the comparison is performed after the test has finished executing and the software under test is no longer running. Features of a test comparator include comparing files, data, and test results. Test comparators are built into most test tools; however, a separate tool may be required for some results comparisons. In the next screen, we will focus on coverage measurement tools.
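Post-execution comparison can be sketched in a few lines: after the software under test has stopped running, the captured actual outcomes are diffed against stored expected outcomes. The test names and results below are invented for illustration.

```python
# Invented stored results: expected outcomes versus outcomes captured
# in a log after the test run finished (post-execution comparison).
expected = {"login": "PASS", "search": "PASS", "checkout": "PASS"}
actual = {"login": "PASS", "search": "FAIL", "checkout": "PASS"}


def compare(expected, actual):
    """Return the names of tests whose actual outcome differs from the
    expected outcome -- the core job of a test comparator."""
    return sorted(name for name in expected
                  if actual.get(name) != expected[name])


mismatches = compare(expected, actual)
print(mismatches)  # only the differing test is reported
```

In dynamic comparison the same kind of check would run while the test executes; here it runs afterwards against stored results, which is why the software under test no longer needs to be running.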
6.17 Coverage Measurement Tools
A coverage tool provides objective measures of the structural elements, for example, statements, decisions, or branches, exercised by a test suite. Characteristics of coverage measurement tools include support for measuring code coverage while the test cases execute; for example, Adatest measures the code coverage of Ada code. These tools can be intrusive or non-intrusive. In the next screen, we will discuss the uses and features of security testing.
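The idea behind an intrusive coverage tool can be sketched with Python's trace hook: a callback records which lines of the function under test actually run. This is a toy statement-coverage measurement under invented code, not a real coverage product.

```python
import sys


def classify(n):
    """Invented function under test, with two branches."""
    if n < 0:
        return "negative"
    return "non-negative"


executed = set()  # line offsets of 'classify' seen during execution


def tracer(frame, event, arg):
    # Record each executed line of the function under test, relative
    # to its first line -- an intrusive measurement, as noted above.
    if event == "line" and frame.f_code.co_name == "classify":
        executed.add(frame.f_lineno - classify.__code__.co_firstlineno)
    return tracer


sys.settrace(tracer)
classify(5)          # exercises only the non-negative branch
sys.settrace(None)

total = 3            # executable lines in classify (if, return, return)
print(f"{len(executed)}/{total} lines executed")
```

Running only `classify(5)` leaves the negative branch uncovered, which is exactly the gap a coverage tool exists to reveal; adding `classify(-1)` to the suite would bring the count to 3 of 3.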
6.18 Security Testing
Security testing includes a set of techniques used to check for security breaches, such as: identifying computer viruses; detecting intrusions such as denial of service attacks; simulating various types of external attacks; probing for open ports or other externally visible points of attack; identifying weaknesses in password files and passwords; and performing security checks during operation. An example of a security testing tool is IBM AppScan. The image on the screen depicts the way security testing can be planned and carried out across typical project test phases. During the requirements phase, security requirements should take into account scenarios of past security breaches. Security test planning should be done during the design phase; if applicable, a security automation plan should be made part of the test plan. Test environment setup for security testing starts at the test planning stage and completes when coding starts. Security testing can begin during the coding phase and can continue until the system is moved to production, and sometimes after. In the next screen, we will focus on the tools that support dynamic analysis.
6.19 Tool Support for Dynamic Analysis
A dynamic analysis tool provides runtime information on the state of the software code. The information provided includes the allocation, use, and de-allocation of resources, and the flagging of unassigned pointers or pointer arithmetic faults. A dynamic analysis tool identifies defects only while the software is running or being executed. This type of tool is also used for component testing and component integration testing; an example is BoundsChecker, which looks for memory leaks. In the following screen, we will focus on the tools that support performance and monitoring.
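A flavor of runtime resource analysis can be shown with Python's standard tracemalloc module, which reports memory allocated while the code under test runs. This is only a sketch in the spirit of a leak detector; the function being observed is invented, and real dynamic analysis tools do far more.

```python
import tracemalloc


def build_cache(n):
    """Invented function under observation: allocates a list of strings."""
    return [str(i) * 10 for i in range(n)]


# Measure allocation while the code runs -- the defining trait of
# dynamic analysis is that the software must actually execute.
tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
cache = build_cache(10_000)
after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

grew = after - before
print(f"allocated roughly {grew} bytes while build_cache ran")
```

A leak detector would extend this by checking whether memory attributed to the code under test fails to be released after its references are dropped; the point here is simply that such information exists only at run time.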
6.20 Tool Support for Performance and Monitoring
Features or characteristics of performance testing tools include generating load on the system being tested; measuring the timing of specific transactions as the load on the system varies; measuring average response times; and producing graphs or charts of responses over time. Examples include LoadRunner and Silk Performer. Monitoring tools are used to verify, analyze, and report on the behavior of system resources. Features or characteristics of monitoring tools include identifying problems and sending an alert message to the administrator, such as a network administrator; logging real-time and historical information; finding optimal settings; monitoring the number of users on a network; and monitoring network traffic, either in real time or over a given duration of operation with the analysis performed afterwards.
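The core measurement loop of a performance testing tool can be sketched as follows: time a transaction repeatedly and report the average response time. The 'transaction' here is simulated with a short sleep; a real tool would issue requests to the system under test and also vary the load.

```python
import statistics
import time


def transaction():
    """Stand-in for one request to the system under test (simulated)."""
    time.sleep(0.01)


# Time each transaction individually, as a load tool would per virtual user.
samples = []
for _ in range(5):
    start = time.perf_counter()
    transaction()
    samples.append(time.perf_counter() - start)

avg = statistics.mean(samples)
print(f"average response time: {avg * 1000:.1f} ms")
```

From samples like these a tool derives the averages, percentiles, and response-time-over-load charts the section mentions; the essential ingredient is simply a timestamp before and after each transaction.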
6.21 Tools for Usability Issues and Data Quality Assessment
Following are the tools used for usability issues and data quality assessment: Usability Testing Tools help in assessing the ease of use of applications from the point of view of end users. For example, xSort for web usability testing. Data quality assessment tools help in assessing data quality, reviewing and verifying data conversion process, and verifying migration rules. In the next screen, we will discuss the second topic ‘Effective Use of Tools—Potential Benefits and Risks.’
6.22 Effective Use of Tools—Potential Benefits and Risks
In this topic, we will discuss the potential benefits and risks of the tools. We will also focus on how the tools can be used effectively. Let us begin with the potential benefits, in the next screen.
6.23 Potential Benefits
Tools, when carefully analyzed and applied in the right context, can improve test productivity dramatically. There are many benefits of using tools to support testing. A few benefits common across tools include reducing repetitive tasks, achieving high consistency and repeatability, providing objective assessment, and giving easy access to test information. Let us look at each of these benefits in detail, in the following screens. We will begin with reducing repetitive work, in the next screen.
6.24 Benefits—Reduce repetitive work
Repetitive work, when performed manually, can be tedious, and tools can handle it more efficiently. Following are a few examples of repetitive tasks and the tools that can handle them more efficiently: static analysis tools for verifying coding standards; test data preparation tools for creating large volumes of test data; and test execution tools for regression testing and keying in the same test data. Let us focus on high consistency and repeatability, in the next screen.
6.25 Benefits—High Consistency and Repeatability
Manual testing depends on the style and nature of the individual performing the test; hence, it differs from person to person. Tools remove this variation, as they perform only the task they are programmed for. Following are a few examples where repetitive work can be performed by testing tools with high consistency: debugging and test execution tools for retesting, test execution tools for entering test data, and test design tools for creating test cases. In the following screen, we will look at objective assessment.
6.26 Benefits—Objective Assessment
Subjective prejudices of people can often lead to defects being ignored. Such prejudices can be eliminated, as test tools assess the software on objective criteria alone. Following are a few examples where tools can be used effectively for objective assessment: traceability tools for test coverage; monitoring tools for system behavior; and test management tools, such as Quality Center, to capture incident information. In the next screen, we will discuss access to information.
6.27 Benefits—Access to information
A large amount of data does not guarantee the communication of information. The human brain registers and interprets visual information more easily. For example, a chart or a graph demonstrates information better than a long list of numbers, which is the main reason why charts and graphs in spreadsheets are useful. Special-purpose tools give visual output for the information they process. Following are a few examples where tools can present data in an easily comprehensible manner: defect and test management tools for defect information; test execution and test management tools for test progress; and performance testing tools for application performance. In the next screen, we will understand the potential risks of using testing tools.
6.28 Potential Risks
Although significant benefits can be achieved by using tools to support testing activities, many organizations have not achieved the benefits they expected. A few potential risks of tools are as follows: unrealistic expectations from the tool; underestimating, for example, the time, cost, and effort required to introduce a tool, the time and effort needed to achieve significant and continuing benefits, and the resources and effort required to maintain the test assets the tool generates. Other risks are over-reliance on the tool; risks from the tool vendor, such as the vendor going out of business, selling the tool to a different vendor, retiring the tool, or providing poor service; and compatibility issues with other tools, including requirements management, version control, defect management, and test management tools. We will focus on special considerations for some types of tools, in the next screen.
6.29 Special Consideration for Some Tools
For each type of tool, a few aspects need to be considered to ensure successful implementation. While implementing test execution tools, consider the test script creation effort; the recording of unstable screens; testers with no scripting experience working in data-driven and keyword-driven approaches; and the expected results held in scripts. When using performance testing tools, ensure coding standards are followed, and follow a step-by-step approach to rectify existing code per the standards. Before selecting a test management tool, check that it is compatible with other tools, so that it delivers all the benefits it promises, and that it designs and generates the test reports for which it will be used. Let us understand the effective use of tools with an example, in the next screen.
6.30 Effective Use of Tools—Example
ABC Corp invested $1 million in purchasing a new tool for its test execution process. After this substantial investment, the management team decided to dismiss 50 percent of its testing team, assuming the tool would compensate for the lost effort. The remaining team had limited experience with the tool and struggled to use it on their projects. Because of the learning curve, it took them longer than usual to conduct their tests. At the same time, they were burdened with additional work due to the dismissal of half the team. The team failed to implement the tool and handle the existing test load in the organization. The management team blamed the tool for the failure, and it became shelfware in the organization. They had to rehire resources to ensure the backlog was cleared. In the next screen, we will begin with the third topic, ‘Introducing a Tool into an Organization.’
6.31 Introducing a Tool into an Organization
In this topic, we will discuss tool selection process, factors in selecting a tool, tool implementation process, and success factors for deploying a tool. Let us begin with tool selection process, in the next screen.
6.32 Tool Selection Process
Introducing any new tool into an organization involves two processes: selection and implementation. The selection process involves building a business case for the tool by defining the problem faced without the tool, or the need for the tool, establishing tool support as a solution, and identifying the constraints on a tool; evaluation, in terms of shortlisting tools, selecting a vendor, arranging demonstrations, evaluating the selected tool, and reviewing the selection; and, finally, the decision on the tool. In the following screen, we will look at the factors in selecting a tool.
6.33 Factors in Selecting a Tool
A few common factors to be considered while selecting a tool include the following: assessment of the organization's maturity, for example, its readiness for change; evaluation of tools against clear requirements and objective criteria; prioritization of requirements; a proof of concept to check whether the product works as desired and meets the requirements and objectives; evaluation of the vendor in terms of reliability, support, and other commercial aspects, or of the open-source support network; identification and planning of the internal implementation, including training and mentoring for new users; ease of use and installation of the tool; compatibility with other tools already used in the organization; and the cost of leasing or purchasing the tool. In the next screen, we will understand the process of tool implementation.
6.34 Tool Implementation Process
The tool implementation process includes the following steps: obtain management commitment for the required support for the chosen tool; introduce the tool to the team, explaining the need for it and how it addresses those needs; pilot the tool and evaluate it based on the pilot findings; move on to phase-wise implementation; and review the implementation regularly. The pilot project should experiment with different ways of using the tool, for example, different settings for a static analysis tool, different reports from a test management tool, different scripting and comparison techniques for a test execution tool, or different load profiles for a performance testing tool. Before any tool is implemented on a large scale, it should be put through a pilot implementation. The objectives of conducting a pilot while introducing a new tool into the organization are: to understand the tool's features and limitations in more detail; to identify the updates required to the existing process to implement the tool; to define a new process to maintain the tool, if required; and, most importantly, to look at the return on investment. Finally, evaluate the pilot project against its objectives. In the next screen, we will discuss the success factors for deploying a tool.
6.35 Success Factors for Deploying a Tool
A few factors to be considered while deploying a tool are: adopt a phased implementation; ensure the process is adapted to fit the use of the tool; define usage guidelines and train new users as required; monitor the tool's benefits; and capture lessons learned and improve continuously. Let us understand the introduction of a tool with the help of an example, in the next screen.
6.36 Introducing the Tool—Example
ABC Corp invested $1 million in purchasing a new tool for its test management process. After this substantial investment, it did not want to waste time before releasing the tool to all teams within the organization. The teams were also excited about learning something new and hence quickly accepted the implementation. However, as the teams started to use the tool, each team realized that the tool did not match its project process and requested workflow customizations. With multiple requests coming in from the teams, the tool support team was unable to define a consistent workflow. Over time, the main workflow was not followed by any team, and each team followed its own exception flow. Maintenance of the tool became difficult, and eventually the tool had to be withdrawn from the organization. This example shows the importance of conducting a pilot for any new implementation, to ensure a tool is scalable to the needs of the larger organization. With this, we have reached the end of the lesson. Let us now check your understanding of the topics covered in this lesson.
A few questions will be presented in the following screens. Select the correct option and click submit to see the feedback.
Here is a quick recap of what we have learned in this lesson: The purposes of tool support include improving the efficiency of test activities, automating activities that require huge manual resources, and increasing the reliability of testing. The tools that aid testing include requirements testing, static analysis, test design, and test data preparation tools. The benefits of tools include achieving high consistency and repeatability, providing objective assessment, and giving easy access to test information. The two steps involved in introducing a tool into an organization are the selection process and the implementation process.
This concludes ‘Tool Support for Testing.’ With this, we have come to the end of this course. Thank you, and happy learning!