
5.1 Test Management

Hello and welcome to the Certified Tester Foundation Level (CTFL®) course offered by Simplilearn. Test management will be discussed in this fifth lesson, which includes the importance of independent testing, its advantages and disadvantages within an organization, and different roles and responsibilities within a test team. We will also examine test plans, estimates, monitoring and control, configuration management, and risks. Let us look at the course map in the next screen.

5.2 Course Map

Lesson five is divided into six topics. They are: Test Organization, Test Planning and Estimation, Test Progress Monitoring and Control, Configuration Management, Risk and Testing, and Incident Management. Let us discuss the objectives of this lesson in the next screen.

5.3 Objectives

After completing this lesson, you will be able to: Explain the concept of Test Organization, Identify the factors to be considered for a Test Plan, Describe the process of Test Progress Monitoring, Explain Configuration Management, List the different aspects of Risk and Testing, and Explain Incident Management. In the next screen we will begin with the first topic, ‘Test Organization.’

5.4 Test Organization

In the next few screens we will discuss the advantages and disadvantages of independent testing, understand the concept of independent test organization, and different roles and their responsibilities in testing. Let us find out the advantages and disadvantages of independent testing in the next screen.

5.5 Advantages and Disadvantages of Independent Testing

Some of the main advantages are as follows: Test reports are more reliable and credible due to the absence of commercial bias. The testing turnaround time is shorter because independent test organizations are well trained and specialized in testing activities. They also have specialized facilities with the latest calibration, which increases the productivity of test cycles. The disadvantages are that contract testing is costly, and independent testing organizations face difficulties in understanding the product unless they have prior experience with it. This lack of understanding stems from their absence during product development. We will continue our discussion on the advantages and disadvantages of independent testing in the following screen.

5.6 Advantages and Disadvantages of Independent Testing (contd.)

Other advantages of independent testing are as follows: An independent tester tests the application with a clear understanding of each requirement and without assumptions. Independent testers often acquire excellent experience, credentials, certifications, and accreditations through this process. Testing by a third party or another organization ensures that the software will work as intended even outside the development environment. The disadvantages of independent testing are that, since independent organizations might not have direct communication with the development team, they are unable to discuss and clarify product details. This can lead to communication gaps between the teams and ultimately impact software quality. Because another organization is responsible for quality testing, developers may lose their sense of responsibility towards quality. Another common issue is that independent testers may be seen as a bottleneck or blamed for delays in release. Let us now look at an independent test organization structure in the next screen.

5.7 Independent Test Organization

An independent test organization can be a person or another organization that handles the testing activities for a product, material, or software on terms agreed with the producer or the owner. An organization is independent if it is not affiliated with the producer or the user of the tested item. An independent test team looks for problems that are difficult for the development team to find. The test report is generated without favoritism and is based solely on the quality of the software. The levels of testing have already been covered in Lesson 2.

5.8 Roles in Testing

Let us now look at different roles in testing. There are many roles in a testing organization, such as Primary Tester, Secondary Tester, Subject Matter Expert, Functional Test Analyst, Test Lead, and Test Manager. However, the common roles in a test team are Test Lead and Tester. The responsibilities of the Test Lead and Tester vary from organization to organization and depend on the nature of the test project. The role of the Test Lead is to effectively lead a testing team. A separate Test Lead is assigned if the project is complex and large; otherwise, this role can also be performed by the Project, Development, Quality Assurance, or Test Group Manager. A Tester analyzes, designs, and executes manual or automated tests based on the risk of the project and product. The roles and responsibilities are decided and agreed upon before testing starts for the project. Let us look at the responsibilities of a Test Lead in the following screen.

5.9 Responsibilities of Test Lead

The main responsibility of a Test Lead is to lead the team efficiently to achieve the agreed quality of the project. In many organizations, Test Leads are also called Test Managers or Test Coordinators. A Test Lead has a number of tasks spread across different test phases: test planning, team management, test infrastructure, test execution, risk management, and client management. These responsibilities are discussed individually. In test planning, the Test Lead should: understand the testing effort by analyzing the project requirements; estimate and obtain management support for the testing time, resources, and budget; organize the testing kick-off meeting; define the test strategy and develop the test plan for the tasks; monitor dependencies and identify areas to mitigate risks to system quality; and obtain stakeholder support for the plan.

5.10 Responsibilities of Tester

As part of team management, the Test Lead should: build a testing team of professionals with appropriate skills, attitudes, and motivation; identify both technical and soft-skills training requirements and forward them to the Project Manager; assign tasks to all testing team members and ensure they have sufficient work in the project; and act as the single point of contact between the development team and the Testers. The Test Lead should also arrange the hardware and software required for the test setup as part of building the test infrastructure. Test execution is the main phase where actual defects are identified. Here, a Test Lead should: ensure the content and structure of all testing documents or artifacts are documented and maintained; document, implement, monitor, and enforce all testing processes as per organization-defined standards; review reports prepared by test engineers; ensure the timely delivery of the different testing milestones; check or review the test case documents; and keep track of new or changed project requirements. Another important activity is risk management, where the Test Lead needs to: escalate project requirement issues, such as software, hardware, and resources, to the Project Manager or Senior Test Manager as required; prepare and update the metrics dashboard at the completion of the project and share it with stakeholders; and track and report on testing activities such as results, test case coverage, required resources, defects discovered, and performance baselines. Another important activity is client management, especially when testing is carried out by an independent organization. Here, the Test Lead needs to: organize status meetings and send daily status reports to the client; attend client meetings regularly and discuss the weekly status with the client; and maintain ongoing communication with the clients. Let us discuss the responsibilities of a Tester in the next screen.
While the Test Lead designs the test strategy and test plans, a Tester has the responsibility to implement those test plans and design low-level test plans, scenarios, and cases. The main responsibilities of a Tester span the Test Planning, Test Execution, and Test Reporting phases. In the test planning phase, the Tester has to analyze client requirements, understand the software application under test, and give inputs to the test plan and test strategy documents. Based on the requirements, the Tester prepares test cases for module, integration, and system testing, and prepares test data for each test case developed. The Tester is also involved in preparing the test environment and reviewing tests and test cases prepared by other Testers. The Tester also needs to write the necessary test scripts. After the test planning activity, the Tester has to execute all the test cases once the code is migrated to the test environment. If any mismatches between actual and expected results are found while executing test cases, defects are logged and tracked until they are resolved. Once a defect is fixed, the Tester has to retest the functionality and close the defect if the issue is resolved. In the test reporting phase, a Tester needs to provide data for test reporting, such as defect information, report summaries, and lessons-learned documents. A Tester also has to conduct review meetings within the team. In the next screen we will discuss the second topic, ‘Test Planning and Estimation.’

5.11 Test Planning and Estimation

In this topic we will find out how to prepare a test plan; examine issues related to project planning, a test level or phase, and a specific test type; and identify the factors that influence the test planning effort. Let us look at test planning in the next screen.

5.12 Test Planning

The test plan document is one of the main testing documents. It describes all the activities to be carried out during testing, along with their milestone dates, and serves as the project plan for testing. Test planning includes planning for scope, the testing approach, resources, and schedule. It also includes planning for risks, including contingency planning, and for the roles and responsibilities in the intended testing activities. Test plan creation starts at the initial or requirements-gathering stage of the project and continues throughout the SDLC process. The test plan is a live document, which has to be updated and maintained as the project evolves, and it provides a road map for the testing project. As testing progresses through its various phases, feedback from different stakeholders and various project risks need to be considered to make final changes to the test plan. In the next screen, we will identify the factors to be considered while preparing a good test plan.

5.13 Test Plan—Factors

A good test plan is the keystone of a successful testing implementation. The factors to be considered while building a test plan are as follows: Test policy of the organization—the approach for testing a project differs from one organization to another, and each organization follows standard processes for common tasks. The organization's test policy is the first factor to keep in mind while preparing a test plan. Scope of testing—this section defines the boundaries of the project and helps teams focus on the direction of testing. The testing team describes the specific requirements to be tested. This sets the basis for project estimations and results in the schedule and resource plan for the project. Objectives of testing—objectives describe the requirements and goals of testing and vary from project to project. The objectives may be to validate a specific set of requirements, the performance of the system, or other goals. Let us continue this discussion in the next screen.

5.14 Test Plan—Factors (contd.)

Other factors to be considered while building a test plan are as follows: Project risks—project risks play a vital role in determining the approach for testing, and strategic decisions are risk-dependent. These risks need to be constantly evaluated and re-planned for, to ensure that no risk becomes an issue. Testability of requirements—if any requirement is not testable or only partially testable, testing needs to be carefully planned to ensure the risk from such requirements is minimized. In this situation, test results provide information on the state of the software and its risk levels. These requirements should be identified and planned for well before testing starts. Availability of resources—the availability of the right set of resources plays a significant role in determining how the project is executed. The resources required for testing need to be identified, and any gap should be bridged, depending on requirement and availability, by acquiring new resources, training existing resources, or changing the plan. Project resources are not only people; they can also be hardware, infrastructure, and even software resources.

5.15 Test Planning Activities

The different activities performed to complete test planning are as follows: Scope is one of the important factors in any plan. The scope, objectives, and risks associated with testing need to be defined to build a robust test plan. The purpose of risk analysis during software testing is to identify high-risk application components and error-prone components within specific applications; the result of the analysis can be used to determine the testing objectives. The test plan should contain a detailed approach for all tasks, and it should also specify the levels, types, and methods of testing, that is, whether the testing is manual or automated, white-box or black-box. Depending on the project size, the roles and responsibilities should be clearly defined; each role will have different responsibilities based on its assigned tasks. The schedules for phases such as test analysis and design should be planned early, and test implementation, execution, and evaluation of test results should be defined clearly. All templates needed for test documentation should be identified and made available to the resources. Also, the metrics for tracking the project status should be explained clearly. This will help in monitoring test preparation, execution, and defect resolution, and in controlling risks and issues.

5.16 Contents of Test Plan

Let us discuss the contents of a test plan as defined by IEEE Std 829-1998. This standard is widely adopted in the testing industry and covers the critical sections to be included in test plans. The test plan identifier is a unique number generated by the testing organization to identify the test plan. A test plan is dynamic in nature and variable in format, so versions must always be maintained while creating it. All reference sources used in the test plan should be cited in the References section. The Introduction gives a brief summary of the software under test and its high-level functionality. Test items are the areas to be tested within the scope of the test plan; this includes details about the test and a list of features to be tested and not to be tested. Software risk issues list all risk areas in the software, such as complex functional areas, government rules and regulations, lack of the right resources, and changing requirements. Features to be tested includes the list of features to be tested as part of this test phase; it is important for the test effort to focus only on specific test areas in each test cycle. Similarly, Features not to be tested includes the list of features that should not be tested in a given test cycle. These features are considered out of scope for that test phase and might be planned for testing in other test phases. The Approach or Strategy section describes the overall test strategy for the plan. The Item pass or fail criteria section describes the criteria for passing a test condition; this is a critical aspect of any test plan and should be appropriate to the level of the plan. Suspension criteria and resumption requirements clearly document when to stop testing and when it can be resumed. Let us continue the discussion on the contents of a test plan in the following screen.

5.17 Contents of Test Plan (contd.)

The other contents of a test plan as defined by IEEE Std 829-1998 are as follows: The Test deliverables section describes the deliverables, such as the test plan, test schedule, test cases, error logs, and execution logs, produced as part of the testing activity. Remaining test tasks include the list of leftover tasks after testing has been completed; these are usually passed on to the next phase of testing. Environmental needs lists all environmental needs such as software, hardware, and any other tools for testing, along with their versions if required. The Staffing and training needs section helps to identify resources and their skill levels; if a resource needs training on the application or on any other tool, that training should be documented in this section. The Responsibilities section covers the roles and their accountabilities. For example, if a resource is a Test Manager, the responsibilities of that role are listed in this section. The Schedule section plans all testing tasks based on realistic and validated estimates; if the estimates for the development of the application are inaccurate, the entire project plan, which includes testing, will suffer. Planning risks and contingencies is a critical part of any planning effort; it helps the Test Manager identify project risks and plan contingencies for them. The Approvals section lists all the approvers of the test plan along with their roles in the project. The Glossary contains terms and acronyms used in the document and in testing in general; this section helps avoid confusion and promotes consistency in all communications. In the next screen, we will discuss the test execution schedule.

5.18 Test Execution Schedule

The following factors should be considered while building a test execution schedule: Technical dependencies, such as the availability of hardware, software, and environments for executing the tests. Logical dependencies, such as specific test cases that should pass before other relevant tests are executed. The priority of test cases is an important factor that determines the order of test execution. Project risks also play an important role in determining the execution schedule; for example, if there is a resource risk for executing specific types of tests after a particular date, the Test Manager may schedule these tests earlier in the test cycle. A simplified sample execution schedule is given on the screen. In the next screen we will look at other sections.
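As a rough sketch, the logical dependencies and priorities described above can be combined into a simple ordering routine. The test names, priorities, and dependencies below are hypothetical and purely illustrative.

```python
def schedule(tests, deps):
    """Order tests by priority (1 = highest) while honouring logical
    dependencies: a test is scheduled only after every test it depends
    on has already been scheduled."""
    ordered, remaining = [], dict(tests)
    while remaining:
        # candidates whose dependencies have all been scheduled
        ready = [t for t in remaining
                 if all(d in ordered for d in deps.get(t, []))]
        # among the ready tests, pick the highest priority first
        nxt = min(ready, key=lambda t: remaining[t])
        ordered.append(nxt)
        del remaining[nxt]
    return ordered

tests = {"login": 1, "checkout": 1, "reports": 3, "search": 2}
deps = {"checkout": ["login"], "reports": ["search"]}
print(schedule(tests, deps))
# → ['login', 'checkout', 'search', 'reports']
```

Note that `checkout` runs second despite `search` having work pending: its dependency (`login`) is already satisfied and it carries a higher priority.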

5.19 Entry Criteria

Entry criteria define the prerequisites to be achieved before the testing activity starts. The main focus is to check whether a Tester can perform the testing tasks on the software without major obstacles. The main areas to look at while defining entry criteria are: test environment setup and availability, availability of all testing tools, and accessibility of the testable code. Availability of the test data is also one of the major criteria; it should be prepared, or acquired if there are dependencies on other teams for test data. From a testing perspective, all the test cases have to be completed, reviewed, and signed off. Let us now discuss the exit criteria in the next screen.

5.20 Exit Criteria

Exit criteria define the conditions to be met before testing can be considered complete. Meeting the exit criteria indicates that the software is of the required quality and can be deployed into production. Focus points for exit criteria are as follows: thoroughness measures, such as coverage of code, functionality, or risk; estimates of defect density or reliability measures; cost or budget; residual risks, such as defects not fixed or lack of test coverage in certain areas; and schedules, such as time to market. We will discuss test estimation in the next screen.
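The focus points above can be expressed as a simple checklist that a Test Lead might evaluate at the end of a cycle. The specific thresholds below are assumptions chosen for the example, not values mandated by the syllabus.

```python
def exit_criteria_met(metrics):
    """Evaluate each exit criterion and return True only if all pass.
    Thresholds are illustrative assumptions."""
    checks = {
        "requirement coverage >= 85%": metrics["coverage"] >= 0.85,
        "no open critical defects":    metrics["critical_open"] == 0,
        "defect density <= 0.5/FP":    metrics["defect_density"] <= 0.5,
        "budget not exceeded":         metrics["cost"] <= metrics["budget"],
    }
    for name, ok in checks.items():
        print(f"{'PASS' if ok else 'FAIL'}: {name}")
    return all(checks.values())

metrics = {"coverage": 0.91, "critical_open": 0,
           "defect_density": 0.4, "cost": 48000, "budget": 50000}
print(exit_criteria_met(metrics))  # → True
```

In practice the criteria and their thresholds come from the test plan and are agreed with stakeholders before testing begins.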

5.21 Test Estimation

Test effort is the effort required to perform a testing task, expressed in person-days or person-hours. For the success of any project, test estimation and proper execution are equally important. There are two ways to calculate estimates: a technique based on metrics collected from previous similar projects is called a metrics-based approach, and a technique based on the expertise of people in a given area, or of the owner of the task, is called an expert-based approach. Starting at the highest level, a testing project can be broken down into phases using the fundamental test process identified in the ISTQB Syllabus: planning and control; analysis and design; implementation and execution; evaluating exit criteria and reporting; and test closure. Within each phase, activities can be identified, and within each activity, tasks and occasionally subtasks. To identify the activities and tasks, you can work both forward and backward. Working forward means starting with the planning activities and moving forward in time step by step. Working backward means starting from the identified risks and deriving the activities and tasks needed to address those risks through testing. Let us now identify the factors impacting test efforts.
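A minimal sketch of the metrics-based approach: derive an effort-per-test-case figure from past similar projects and apply it to the new project. All figures below are hypothetical.

```python
# Historical data from previous similar projects (illustrative numbers).
past_projects = [
    {"test_cases": 400, "effort_hours": 1200},
    {"test_cases": 250, "effort_hours": 700},
    {"test_cases": 600, "effort_hours": 1900},
]

# Average effort per test case across the historical projects.
hours_per_case = (sum(p["effort_hours"] for p in past_projects)
                  / sum(p["test_cases"] for p in past_projects))

# Apply the historical rate to the new project's planned test cases.
new_project_cases = 500
estimate = new_project_cases * hours_per_case
print(f"Estimated effort: {estimate:.0f} person-hours")  # → 1520
```

An expert-based approach would instead ask the task owners to size each activity directly; in practice the two estimates are often compared to sanity-check each other.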

5.22 Factors Impacting Test Efforts

Testing is a complex process, and a variety of factors can influence it. While creating test plans and estimating the testing effort and schedule, these factors must be kept in mind. They can be broadly classified into product characteristics and characteristics of the development process. Product characteristics such as the complexity of the software impact the testing effort; highly complex software requires more test effort. The importance of non-functional quality characteristics, such as usability, reliability, security, and performance, also influences the testing effort. If the number of platforms to be supported is high, the test effort increases, as the application needs to be tested across all of these platforms. Among development process characteristics, clearly documented requirements help in defining tests more efficiently, thus reducing rework effort. Unskilled resources add effort to the test cycle and hence affect the test estimates. The higher the number of defects, the higher the test effort is likely to be. The stability of the processes, tools, and techniques used in the test process is another factor that impacts the test effort; when these are unstable, the test effort increases. Let us now look at test strategy and test approach in the next screen.

5.23 Test Strategy and Test Approach

A test strategy is a high-level document describing the test levels to be performed for an organization or program. A test approach is the implementation of the test strategy for a specific project. It is developed by the Project Manager and defines the testing approach for achieving the testing objectives. It is derived from the Business Requirement Specification document and created to inform Project Managers, Testers, and Developers about the key testing objectives. It also includes the methods for testing new functions, the total time and resources required for the project, and the testing environment. The test approach includes decisions made based on the test project's goal, risk assessment outcomes, test process starting points, test design techniques, exit criteria, and test types. Though test strategy and test approach are seen as sequential activities, the test approach is identified while defining the test strategy, and sometimes the test approach is included within the test strategy document. We will discuss the components of a test strategy document in the following screen.

5.24 Components of Test Strategy Document

A test strategy document typically has the following components: the scope and objectives of testing, which clearly define all testable and non-testable items; business issues to be addressed during testing; the responsibilities of the different roles in testing; the communication protocol, including the frequency of status reporting with benchmark figures, and the test deliverables list with all artifacts to be delivered to the client; industry standards to be followed, such as metrics; test automation and tools; testing measurements and metrics used to measure testing progress; any foreseen risks and mitigation plans; defect reporting and tracking, which defines the defect management process and defect management tools; change and configuration management, which lists all the configurable items; and the training plan, which plays a vital role when third-party testing is involved. In the next screen, we will discuss an example of a high-level test strategy.

5.25 High level Test Strategy—Example

For example, in an upcoming Maintenance Test Release, due to the nature of fixes, it has been decided to focus on regression testing. Considering the expanse of regression test scenarios, testing should use automated test scenarios to the maximum. Due to the migration from internal server to the cloud, performance testing scenarios for ‘Component X’ should be thoroughly executed. Due to the dependency on vendor PQR for issues related to Module B, the module needs to be tested first in the order of priority so that any issues related to this can be passed on to the vendor for closure.

5.26 Typical Test Approaches

Let us discuss the different test approaches that can be used for test planning. Analytical approaches—all analytical test strategies use some formal or informal analytical technique during the requirements and design stages of the project. Risk-based testing, where testing is directed at the areas of greatest risk, is an example of an analytical approach. Model-based approaches—testing takes place based on models, for example mathematical models of loading and response for e-commerce servers. Model-based approaches such as stochastic testing use statistical information about failure rates (such as reliability growth models) or usage (such as operational profiles). Methodical approaches—methodical test strategies have in common the adherence to a pre-planned, systematized approach that has been developed in-house, assembled from various concepts, and adapted significantly from outside ideas. Methodical approaches include failure-based approaches (including error guessing and fault attacks), as well as experience-based, checklist-based, and quality-characteristic-based approaches. Process- or standard-compliant approaches—these strategies have in common a reliance on an externally developed approach to testing, specified by industry-specific standards or the various Agile methodologies. In the next screen we will continue discussing typical test approaches.

5.27 Typical Test Approaches (contd.)

Dynamic approaches—a lightweight set of testing guidelines that focus on rapid adaptation or on known weaknesses in software. Dynamic strategies, such as exploratory testing, concentrate on finding the maximum number of defects possible during test execution and on adapting to the realities of the delivered test system; they emphasize the later stages of testing. Consultative or directed approaches—here, test coverage is driven primarily by the advice and guidance of external technology or business-domain experts. Consultative or directed strategies commonly rely on a group of non-testers to guide or perform the testing, and they emphasize the later stages of testing due to a lack of recognition of the value of early testing. Regression-averse approaches—these include the reuse of existing test material, extensive automation of functional regression tests, and standard test suites. Regression-averse strategies commonly have a set of procedures, usually automated, that allows them to detect regression defects. These strategies may involve automating functional tests prior to the release of the function, in which case early testing is required; sometimes, however, testing is done entirely after release, focused on the released functions. Let us discuss the selection of the right test approach in the following screen.

5.28 Selecting a Test Approach

The choice of test approaches or strategies is an important factor in the success of the test effort and the accuracy of the test plans and estimates. Now let us look at the factors to consider before selecting the right test approach. Risk—testing is about risk management, hence risks and their levels have to be considered. For a well-established application, regression is an important risk; for a new application, a risk-based analytical strategy may reveal different risks. Skill—strategies must be chosen and executed with the skills and experience of the Testers in mind. A standard-compliant strategy is best when time is constrained and the skill to create a customized approach is lacking. Objective—testing must fulfil the needs of stakeholders. For example, in an independent test lab, if the objective is to find the maximum number of defects with a minimal amount of time and effort, then the right approach is a dynamic strategy. Regulation—sometimes, along with stakeholders' needs, a regulator's needs also have to be fulfilled, covering both internal and external regulations for the development process. In this case, a methodical test strategy needs to be devised. Product—the nature of the product or project plays an important role in deciding the approach. Some products, such as weapons systems and contract-development software, tend to have well-specified requirements, which align well with a requirements-based analytical strategy. Business—business considerations and business continuity are important. If a legacy system is used as a model for a new system, a model-based strategy can be used. Let us move on to the third topic, ‘Test Progress Monitoring and Control’, in the following screen.

5.29 Test Progress Monitoring and Control

In the next few screens we will understand the concept of test progress monitoring, define its related terms, identify the common test metrics, and understand test reporting and control. Let us understand the concept of test progress monitoring in the next screen.

5.30 Test Progress Monitoring

Planning the tasks is important; however, it is not the only factor in a successful project. The testing work also has to be tracked. Test progress monitoring is a test management task that periodically checks the status of a test project. Metrics, which measure actual progress against the planned milestones, are used to monitor test progress. Test progress monitoring also gives visibility and feedback on test activities. A few factors that indicate the status of test activities are: the level of test plan and test case completion; the pass, fail, and blocked data for the tested object; the quantity of testing yet to be completed; the number of open defects; and the amount of retesting and regression testing required.

5.31 Test Monitoring—Definitions

Let us now look at a few definitions that are important for understanding test monitoring. Failure rate can be defined as the ratio of the number of failures of a given category to a given unit of measure, for example, failures per unit of time, per number of transactions, or per number of computer runs. The test case failure rate is calculated as the number of failed test cases divided by the number of test cases executed. Defect density can be defined as the number of defects identified in a component or system divided by the size of the component or system, expressed in standard measurement terms such as lines of code, number of classes, or function points. The formula for defect density is the number of defects divided by the number of function points tested. Test monitoring is a test management task that periodically checks the status of a test project; reports are prepared that compare actual progress to planned progress. In the following screen, let us look at the different test metrics commonly used to monitor test activities.
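The two formulas above can be written out directly; the input figures below are hypothetical.

```python
def failure_rate(failed_cases, executed_cases):
    """Test case failure rate = failed test cases / executed test cases."""
    return failed_cases / executed_cases

def defect_density(defects, function_points):
    """Defect density = defects found / function points tested."""
    return defects / function_points

# Example: 12 failures out of 200 executed cases; 30 defects over 120 FP.
print(failure_rate(12, 200))    # → 0.06
print(defect_density(30, 120))  # → 0.25
```

The same `defect_density` function works for any size measure (lines of code, classes, or function points) as long as the denominator is used consistently across reports.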

5.32 Common Test Metrics

The commonly used test metrics are as follows: Test coverage is a popular metric used for test monitoring. It covers the extent of coverage achieved against requirements, risks, or code. The formula for this metric is the number of requirements tested divided by the total number of requirements. The higher the coverage, the better the quality of testing. Percentage of test case preparation helps the Manager identify the extent of preparedness for testing. This is calculated as the number of test cases prepared, divided by the total number of test cases planned for preparation. Percentage of test environment preparation is calculated as the amount of test environment preparation complete, divided by the total amount of preparation required. It is also a helpful indicator to gauge the preparedness of the testing effort. Percentage of test case execution is calculated as the number of test cases executed, divided by the total number of test cases planned for execution. This is an indicator of the amount of test execution progress achieved. We will continue our discussion on the common test metrics in the next screen.
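All of these progress metrics share the same "completed divided by planned" shape, so one helper suffices. The figures below are invented for illustration:

```python
def progress_pct(done, planned):
    """Completed items as a percentage of planned items."""
    return 100.0 * done / planned

# Hypothetical snapshot of one test cycle:
print(f"Test coverage:         {progress_pct(45, 50):.1f}%")   # requirements tested / total
print(f"Test case preparation: {progress_pct(180, 200):.1f}%") # cases prepared / planned
print(f"Test case execution:   {progress_pct(110, 130):.1f}%") # cases executed / planned
```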

5.33 Common Test Metrics (contd.)

Defect information, such as defect density, defects found, and open and fixed defects, provides useful metrics for evaluating software stability and production readiness. Defect metrics are also used as an indicator to gauge the overall health of the development process. Defect density is calculated as the total number of defects divided by the total number of modules. The confidence level of Testers in the application or product can be captured through surveys or voting. Test milestone dates are set as part of the test plan. These need to be monitored to measure any schedule slippage. Testing costs include the cost compared to the benefit of finding the next defect or running the next test. It is usually calculated as the amount spent on testing, including the cost of manpower, environments, and associated costs. These should be constantly monitored to ensure that testing does not exceed the budget and to evaluate when to stop testing. Let us discuss an example of test metrics in the next screen.

5.34 Test Metrics—Example

For example, the dashboard on the screen represents the status of test execution on a specific date. It summarizes the percentage of test completion and of test cases that passed and failed. The percentage completion of tests in the current cycle is 84.6%. This could be a cause of concern for the management if the product release date is imminent. The biggest concern lies in the area of test conditions that are on hold or deferred. Until these conditions are released for testing, test coverage will not be complete. In the next screen we will understand the concept of test reporting.

5.35 Test Reporting

Test summary reports are submitted for a testing period, generally at a logical conclusion of testing such as the end of each phase or test project. Test Leaders need to generate multiple reports during the test planning, design, and execution phases to report the progress of test activities. These reports keep all stakeholders informed and help the Test Leader get attention or resources to resolve project risks. Based on the requirements and complexity of the project, the test team maintains different reports such as daily, weekly, monthly, and even quarterly status reports. A test summary report should contain recommendations and decisions for future actions, based on the metrics collected. These reports include lessons learned, which help prevent repeating mistakes in future phases and/or projects. We have seen the different types of metrics and reports a test team needs to manage. In the following screen, let us now understand the purpose of managing them.

5.36 Requirement of Test Metrics

Metrics should be collected at the end of a test level to assess the adequacy of the test objectives, the adequacy of the test approaches taken, and the effectiveness of the testing with respect to the objectives. If one or more of these parameters are judged inadequate, tests need to be re-planned. This cycle works iteratively till all the parameters are adequately met. In the next screen we will discuss test control.
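This iterative adequacy check can be sketched as a simple comparison of collected metrics against exit thresholds. The metric names and targets below are hypothetical:

```python
def inadequate_parameters(metrics, thresholds):
    """Return the names of parameters whose metrics fall short of their targets."""
    return [name for name, target in thresholds.items()
            if metrics.get(name, 0.0) < target]

# Hypothetical exit criteria for a test level:
gaps = inadequate_parameters(
    metrics={"requirement_coverage": 88.0, "critical_defects_closed": 100.0},
    thresholds={"requirement_coverage": 95.0, "critical_defects_closed": 100.0},
)
if gaps:
    print("Re-plan tests; inadequate parameters:", gaps)
```

An empty result would mean all parameters are adequately met and the cycle can stop.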

5.37 Test Control

Test control is a test management task dealing with the development and application of a set of corrective actions to get a test project on track when monitoring shows a deviation from the plan. Actions may cover any test activity and may affect other software life cycle activities or tasks. For example, an organization usually conducts performance testing on weekday evenings, during off-hours, in the production environment. Due to unanticipated high demand for its products, the company has temporarily adopted an evening shift that keeps the production environment in use 18 hours a day, five days a week. This increase in production time reduces the time available for conducting performance testing, which is a risk for the performance testing team. To mitigate this risk, the team takes corrective action as part of test control. This may involve rescheduling the performance tests to the weekend to ensure zero impact on the testing schedule. Regular monitoring of risks and test metrics therefore helps the project remain on track to meet the test objectives. Let us move on to the next topic, ‘Configuration Management’, in the following screen.

5.38 Configuration Management

In the next few screens we will look at the concept of configuration management, its objectives, and its role in testing. Let us take an overview of configuration management in the next screen.

5.39 Overview of Configuration Management

Configuration management is a disciplined approach to managing software and the associated design, development, testing, operation, and maintenance activities. It involves the following steps: planning and identification, control, status accounting, and verification and audit. The planning and identification activity involves planning the entire configuration management effort and identifying configurable items. Control is about controlling releases and changes to configurable items. Status accounting involves recording and reporting the status of configurable items. Auditing verifies the completeness and correctness of configurable items. Depending on the roles defined and the associated access rights set during planning, users may have read, edit, and delete options, which may also involve an approval process. Access control is integral to all the steps of configuration management. The purpose of configuration management is to establish and maintain the integrity of the products, including components, data, and documentation of the software or system, through the project and product life cycle. In the next screen, we will discuss the objectives of configuration management.

5.40 Objectives of Configuration Management

The objectives of configuration management are to: provide accurate information on time, to the right person, at the right place; support processes such as incident and change management; eliminate duplication of data and effort; and achieve project management in a cost-effective way with improved quality. Let us now discuss how configuration management supports testing.

5.41 Configuration Management in Testing

Configuration management has a number of important implications for testing. It allows testers to manage their testware and test results using the same configuration management mechanisms. Configuration management supports the build process, which is essential for delivery of a test release into the test environment. Sending Zip archives by e-mail is not sufficient, as there is a chance of polluting the archives with undesirable contents such as previous versions of items. It is vital to have a solid, reliable way of delivering test items that are of the proper version and work well, especially in later phases of testing such as system testing or user acceptance testing. As seen in the image on the screen, configuration management also allows us to map what is being tested to the underlying files and components that make it up, which is absolutely critical. For example, defects need to be reported against a test case or a requirement that is version controlled. If all the required details are not mentioned clearly, developers will have a tough time fixing the defects. The reports discussed earlier must be traceable to what was tested. Ideally, when testers receive an organized, version-controlled test release from a change-managed source code repository, it is accompanied by release notes. Release notes may not always be formal and do not always contain all the information. Please refer to IEEE 829 for guidelines on what a release report should contain. During the test planning stage, ensure that configuration management procedures and tools are selected. As the project proceeds, the configuration process and mechanisms are implemented, and the key interfaces to the rest of the development process are documented.
During test execution, this allows the project team to avoid unwanted surprises such as testing the wrong software, receiving un-installable builds, and reporting irreproducible defects against versions of code that don't exist anywhere but in the test environment. Let us move on to the next topic, ‘Risk and Testing’, in the following screen.

5.42 Risk and Testing

In the next few screens we'll discuss how to determine the level of risk using likelihood and impact along with the various ways to conduct risk analysis and management. Let us discuss the concept of risk and testing in the next screen.

5.43 Risk and Testing

Commonly used terms in risk management are as follows: Risk is an event or situation that could result in undesirable consequences or a potential problem. Exposure is the amount of loss incurred if an undesirable event occurs. For example, a car accident can cause both loss of life and property. Threat is a specific event that may cause an undesirable event to occur. For example, driving under the influence of alcohol poses the threat that an accident might occur. Control is an action that reduces risk impact. For example, to mitigate the threat of an accident, a control measure is not to drive in an intoxicated state. The likelihood of a risk occurrence lies between 0 and 100 percent; a risk cannot be a certainty. At a high level, risk management involves assessing the possible risks, prioritizing them, deciding which risks to address, and implementing controls to address them. In the next screen we will look at project risks.
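These terms combine into a common (though not the only) quantification: exposure as likelihood times impact. A minimal sketch, with the dollar figures invented for illustration:

```python
def risk_exposure(likelihood, impact):
    """Expected loss: probability of the risk occurring (0-1) times the loss if it does."""
    if not 0.0 <= likelihood < 1.0:
        raise ValueError("likelihood must be in [0, 1); a certainty is not a risk")
    return likelihood * impact

# A 10% chance of an event causing a $50,000 loss gives an expected exposure of $5,000.
print(risk_exposure(0.10, 50_000))
```

The guard clause reflects the point above: a 100% likelihood is a certainty, not a risk.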

5.44 Project Risks

Risks related to the management and control of a project that impact the project's capability to deliver its objectives are known as project risks. Project risks can be divided into three main categories: technical issues, organizational factors, and supplier issues. Risks under technical issues include requirements that are not clearly defined, requirements that are not technically feasible, low quality of design and code, improper or inefficient technical planning, and unavailability of the test environment. Risks under organizational factors include resource issues or skill shortages even though resources are available, training issues, communication problems, and improper attitudes or wrong expectations. Risks under supplier issues include contractual issues and third-party failure risks. All these risks are possible in a project, and hence they need to be identified and mitigated effectively. Let us now look at product-based risks in the next screen.

5.45 Product-Based Risks

Product-based risk can be defined as the possibility of a system or software failing to satisfy reasonable customer, user, or stakeholder expectations, which would challenge the quality of the product. Common product-based risks include delivery of failure-prone software; software or hardware potentially harmful to an individual or company; poor software characteristics such as functionality, reliability, usability, and performance; poor data integrity and quality, such as data migration issues, data conversion problems, data transport problems, and violation of data standards; and software not performing its intended functions. Let us continue this discussion in the next screen.

5.46 Product-Based Risks (contd.)

There are four options for addressing product or project risks. Mitigate the risk by taking advance steps to reduce its possibility and impact. Plan for contingency, which means there should be a plan in place to reduce the risk impact if it occurs. Transfer the risk by convincing another team member or project stakeholder to reduce the probability of occurrence or to accept the risk impact. Ignore the risk by not taking any action; this is a plausible option if effective action cannot be taken or the probability and impact of the risk are low. There is another risk-management option, buying insurance, which is not usually pursued for project or product risks on software projects, though it is not unheard of. In the next screen, we will find out how testing can act as a risk controller.

5.47 Testing as Risk Controller

In general, apart from functionality, software might have problems related to other quality characteristics, such as security, reliability, usability, maintainability, or performance. Risks are used to decide the starting point and focus areas of testing; testing helps reduce the risk of an adverse effect occurring, or its impact. Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans. We will understand risk-based testing in the next screen.

5.48 Risk-Based Testing

Risk-based testing is the idea of organizing testing efforts in a way that reduces the residual level of product or project risk when the system is shipped to production. Risk-based testing starts early in the project and uses risk to prioritize and emphasize the appropriate tests during execution. It identifies system quality risks and uses that knowledge to guide test planning, specification, preparation, and execution. Risk-based testing involves both mitigation and contingency. It also involves measuring, finding, and removing defects in critical areas; risk analysis to identify proactive opportunities to remove or prevent defects through non-testing activities; and help in selecting test activities. Let us continue this discussion on risk-based testing in the next screen.

5.49 Risk-Based Testing (contd.1)

Risk-based testing starts with product risk analysis. One technique is reading the requirements specification, design specifications, user documentation, and other items thoroughly. Another technique is brainstorming with many project stakeholders. A sequence of one-on-one or small-group sessions with the business and technology experts in the company can be used as yet another technique. A team-based approach that involves the key stakeholders and experts is preferable to a purely document-based approach, as it relies on the knowledge, wisdom, and insight of the entire team. Let us continue this discussion on risk-based testing in the next screen.

5.50 Risk-Based Testing (contd.2)

A risk-based approach is proactive: it explores and provides information about product or project risks at the initial stage of a project. It involves the identification of product risks and their use in guiding test planning and control, specification, preparation, and execution of tests. The identified risks guide test planning, specification, design, and execution. During test planning, risk helps in selecting test techniques, understanding testing needs, prioritizing testing in an attempt to find the critical defects as early as possible, and determining whether any non-testing activities could be employed to reduce risk. An example of a non-testing activity is providing training to inexperienced designers.
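One common way to turn a risk analysis into a test execution order is to score each risk item for likelihood and impact and sort by the product. The items and the 1-5 scale below are hypothetical:

```python
# Hypothetical product risk items, each scored 1-5 for likelihood and impact.
risk_items = [
    {"area": "statement formatting",    "likelihood": 2, "impact": 1},
    {"area": "funds transfer security", "likelihood": 4, "impact": 5},
    {"area": "login session handling",  "likelihood": 3, "impact": 4},
]

# Sort so that tests for the riskiest areas are designed and executed first.
prioritized = sorted(risk_items,
                     key=lambda r: r["likelihood"] * r["impact"],
                     reverse=True)
for item in prioritized:
    print(f"{item['area']}: score {item['likelihood'] * item['impact']}")
```

In practice the scores would come from the stakeholder sessions described earlier, not from the test team alone.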

5.51 Risk Based Testing—Example

Let us look at an example that illustrates the concept of risk-based testing. Risk-based testing starts with the identification of the risks in the system and the planning of testing activities to mitigate those risks. The biggest threat to online banking software is the security of the application. Any unauthorized access can compromise the integrity of personal data. It can lead to unlawful fund transactions, causing huge financial losses to customers. While planning testing for such applications, there is a huge emphasis on security testing. Security tests need to be conducted on each layer of the application, and even on the physical servers. In addition, key elements of security in the software design need to be tested. If there are examples of security breaches of similar applications in other banks, the test team should plan to test those scenarios specifically. Let us move on to the next topic, ‘Incident Management’, in the following screen.

5.52 Incident Management

In the next few screens we will look at the process of documenting and managing test execution incidents. We will also understand the procedure for reporting incidents and defects. Let us discuss an overview of incident management in the next screen.

5.53 Overview of Incident Management

Incident management is the process of ensuring that incidents are tracked through all the steps of the incident lifecycle. Effective incident management requires a well-defined process and classification rules. Any deviation of actual from expected results is termed an incident. The name of this deviation varies from one organization to another; for example, incidents, bugs, defects, problems, or issues. Any incident, once accepted as a valid defect, is termed a bug. The steps of its lifecycle include incident logging, classification, correction, and confirmation of the solution. Recording the details of an incident that has occurred is known as incident logging. Incidents can be reported during the development, review, or use of a software product. They can be raised against issues in code or against any type of deviation in project documentation. In the next screen we will look at the objectives of an incident report.

5.54 Incident Report Objective

An incident report, or defect report, is a formal record of each incident. The objectives of an incident report are to provide developers with feedback on the problem to enable identification, assist in the isolation and correction of incidents as necessary, track the quality of the system and the progress of testing, and provide ideas for test process improvement. One common objective of such reports is to provide programmers, managers, and others with detailed information about the behavior observed and the defect. Another is to support the analysis of trends in aggregate defect data, either for a better understanding of a specific set of problems or tests, or for understanding and reporting the overall system quality level. Let us now discuss the contents of an incident report in the next screen.

5.55 Incident Report Contents

Any incident raised is classified based on business or system impact, also known as severity. Another classification is based on the urgency of a solution, also known as priority. Apart from these, an incident report includes details such as a unique identifier, date, and author, which are usually auto-generated by the test or defect management tool, and a summary of the incident, detailing where the expected and actual results differ. It also includes the steps to reproduce the defect, actual results, expected results, the environment in which the defect was detected, the impact on progress, anomalies if any, and any additional notes or comments from the Tester. An incident report also contains a description and classification of the observed misbehavior. Let us continue this discussion in the next screen.
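The fields listed above can be captured in a simple record type. This is an illustrative sketch, not the schema of any particular defect management tool; all field names and values are invented:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IncidentReport:
    """Minimal incident/defect report with the fields described above."""
    identifier: str          # unique identifier, usually tool-generated
    author: str
    raised_on: date
    summary: str             # where expected and actual results differ
    steps_to_reproduce: list
    expected_result: str
    actual_result: str
    severity: str            # business/system impact, e.g. "major"
    priority: str            # urgency of a solution, e.g. "high"
    environment: str         # where the defect was detected
    notes: str = ""          # additional comments from the Tester

report = IncidentReport(
    identifier="INC-0042",
    author="tester_a",
    raised_on=date(2024, 1, 15),
    summary="Login fails for passwords containing '&'",
    steps_to_reproduce=["Open login page", "Enter a password containing '&'", "Submit"],
    expected_result="User is logged in",
    actual_result="HTTP 500 error page",
    severity="major",
    priority="high",
    environment="System test, build 1.4.2",
)
print(report.identifier, report.severity, report.priority)
```

Note how severity (impact) and priority (urgency) are kept as separate fields, mirroring the two classifications described above.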

5.56 Incident Report Contents (contd.)

Finally, defect reports, when analyzed over and across projects, give information that can lead to development and test process improvements. Programmers need the report information to find and fix the defects. Before this step, managers should review and prioritize the defects. Since some defects may be deferred, workarounds and other helpful information should be included for help desk or technical support teams. Testers often need to know their colleagues’ test results so that they can watch for similar behavior elsewhere and avoid trying to run tests that will be blocked. A good incident report is a technical document, and any good report results from a careful approach to researching and writing. Some rules of thumb that can help in writing a better incident report are as follows: Descriptions of incidents should be as factual as possible and should include enough detail to help the developer replicate the defect. Incidents should not be raised against individuals but against the system under test. The goal of the report should be to ensure the developer is able to identify and fix the defect without causing other defects in the process. In the following screen, we will look at the lifecycle of an incident.

5.57 Incident Lifecycle

As seen in the image on the screen, any incident starts with a Reported status, which is when a deviation is first noticed in any of the testable items and is documented in the incident management tool. Reported incidents are then verified to check that they are valid defects. Invalid defects are moved to Rejected status. Once an incident is identified as valid, it is moved to Opened status. An incident might also go to Deferred status if it cannot, or need not, be fixed right away. An open defect may need immediate resolution or can be deferred to a future release. Defects to be resolved are assigned to an appropriate resource to be fixed; once an incident is assigned to a specific person or team, its status changes to Assigned. The expected resolution time depends on the severity and priority of the defect. Fixed incidents are retested by the incident owner and are Closed after a successful fix. If an incident has not been successfully fixed, it is Reopened. Like an open incident, a reopened defect can be assigned for resolution or deferred. With this, we have reached the end of the lesson. Let us now check your understanding of the topics covered in this lesson.
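The lifecycle described above is a small state machine, and can be sketched as a transition table. The exact transitions vary between tools; the table below follows the narration on this screen:

```python
# Allowed status transitions, derived from the incident lifecycle above.
TRANSITIONS = {
    "Reported": {"Rejected", "Opened"},
    "Opened":   {"Assigned", "Deferred"},
    "Deferred": {"Assigned"},
    "Assigned": {"Fixed"},
    "Fixed":    {"Closed", "Reopened"},   # closed on successful retest, else reopened
    "Reopened": {"Assigned", "Deferred"},
    "Rejected": set(),                    # terminal states
    "Closed":   set(),
}

def move(status, new_status):
    """Validate a status change against the incident lifecycle."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"Illegal transition: {status} -> {new_status}")
    return new_status

# Walk a valid incident through a full happy path.
status = "Reported"
for step in ["Opened", "Assigned", "Fixed", "Closed"]:
    status = move(status, step)
print(status)
```

Encoding the lifecycle this way makes illegal moves (such as reopening a Rejected incident) fail loudly instead of silently corrupting the defect data.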

5.58 Quiz

A few questions will be presented in the following screens. Select the correct option and click Submit to see the feedback.

5.59 Summary

Here is a quick recap of what we have learned in this lesson: The quality and effectiveness of testing increase with an increase in the degree of independence. The factors to be considered while building a test plan are the test policy of the organization, the scope of testing, and the objectives of testing. Test progress monitoring is a test management task that periodically monitors the status of a test project. Configuration management is a disciplined approach to managing software and the associated design, development, testing, operation, and maintenance activities. Risks related to the management and control of a project that impact the project's capability to deliver its objectives are known as project risks. Incident management is the process of ensuring that incidents are tracked through all the steps of the incident lifecycle.

5.60 Conclusion

This concludes the fifth lesson of the course, ‘Test Management’. The next lesson is, ‘Tools Support for Testing’.
