Welcome to lesson 7 of the ITIL Intermediate RCV tutorial, which is a part of ITIL Intermediate RCV Foundation Certification course. This lesson covers the Change Evaluation process of service transition and how it contributes to RCV.
Let us look at the objectives of this lesson.
By the end of this ‘Change Evaluation’ lesson, you will be able to:
Describe the purpose, objectives, scope, and importance of change evaluation as a process.
Explain the Change evaluation policies, principles, concepts, activities, methods, and techniques in relation to RCV practices.
Discuss the evaluation of predicted and actual service performance, and their relation to risk management.
In the next section, we will look at the purpose and objective of the change evaluation process.
The purpose of the change evaluation process is to provide a consistent and standardized means of determining the performance of a service change in the context of existing and proposed services and IT infrastructure. The actual performance of a change is assessed against its predicted performance, and any deviations between the two are understood and managed.
The objective of the evaluation is to:
Set stakeholder expectations correctly and provide effective and accurate information to change management to make sure that changes which adversely affect service capability and introduce risk are not transitioned unchecked
Evaluate the intended effects of a service change, and as much of the unintended effects as is reasonably practical given capacity, resource, and organizational constraints
Provide good quality outputs from the evaluation process so that Change Management can expedite an effective decision about whether a service change is to be approved or not.
Let’s look at the scope of change evaluation in the next section.
Specifically, in this section, we consider the evaluation of new or changed services defined by Service Design, during deployment, and before final transition to service operations.
Evaluating the actual performance of any service change against its anticipated performance is an important source of information for service providers: it helps ensure that expectations set are realistic, and it identifies whether there are any reasons why production performance does not meet what was expected.
Now that we understand the purpose, objective, and scope of change evaluation, let us discuss the value of change evaluation to the business.
Change Evaluation is, by its very nature, concerned with value. Specifically, the effective evaluation will establish the use made of resources in terms of delivered benefit, and this information will allow a more accurate focus on value in future service development and Change Management.
There is a great deal of intelligence that Continual Service Improvement can take from evaluation to analyze future improvements to the process of change and the predictions and measurement of service change performance.
In the next section, we will learn about the policies of the evaluation process.
The following policies apply to the evaluation process:
Service Designs or service changes will be evaluated before being transitioned.
Any deviation between predicted and actual performance will be managed by the customer or customer representative by
- accepting the change even though actual performance is different from what was predicted
- rejecting the change, or
- requiring a new change to be implemented with revised predicted performance agreed in advance. No other outcomes of evaluation are allowed.
An evaluation shall not be performed without a customer engagement package.
Now, let’s move on to discuss the principles of this process in the next section.
As far as it is reasonably practical, the unintended, as well as the intended effects of a change, need to be identified, and their consequences understood and considered. This includes effects on other services or shared infrastructure as well as the effects on the service being changed.
A service change will be fairly, consistently, openly, and, wherever possible, objectively evaluated. An evaluation report, or interim evaluation report, will be provided to change management to facilitate decision-making at each point at which authorization is required.
In the next section, let us learn about the Deming cycle of PDCA followed in the change evaluation process.
PDCA is a four-stage cycle for process management, attributed to W. Edwards Deming. Plan-Do-Check-Act is also called the Deming Cycle.
Plan means to design or to revise processes that support the IT services.
Do means to implement the plan and manage the processes.
Check means to measure the processes and IT services, compare with objectives, and produce reports.
Act means to plan and implement changes to improve the processes.
The change evaluation process uses the Plan-Do-Check-Act (PDCA) model to ensure consistency across all evaluations. Each evaluation is planned and then carried out in multiple stages; the results of the evaluation are checked, and actions are taken to resolve any issues found.
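To make the cycle concrete, here is a minimal, purely illustrative Python sketch of one evaluation pass through the four stages. The function names, measures, and values are hypothetical assumptions, not part of ITIL.

```python
# Illustrative only: hypothetical stage functions showing how PDCA could
# structure a single change evaluation cycle. Names and values are not ITIL's.

def plan(change_id):
    # Plan: decide what to measure for this change and set target objectives
    return {"change": change_id,
            "objectives": {"cost": 100, "response_time": 2.0}}

def do(evaluation_plan):
    # Do: carry out the evaluation stages; the results here are stand-ins
    return {"cost": 110, "response_time": 1.8}

def check(results, objectives):
    # Check: measure against objectives and report the deviations
    return {m: results[m] - target for m, target in objectives.items()}

def act(deviations):
    # Act: flag any measure that overran its objective for corrective action
    return [m for m, d in deviations.items() if d > 0]

p = plan("CHG-1234")
results = do(p)
deviations = check(results, p["objectives"])
print(act(deviations))  # ['cost'] - cost overran its objective, so act on it
```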
Let us learn a few of the terms used in change evaluation in the next section.
As discussed earlier, every process comes with its own set of terminology. Here we will cover the change evaluation terms that will be used frequently from this point on.
The table below lists the terms and their meanings:
| Term | Meaning |
| --- | --- |
| Actual performance | The performance achieved following a service change. |
| Countermeasure | The mitigation that is implemented to reduce risk. |
| Evaluation report | A report generated by the change evaluation process and passed to change management, comprising a risk profile, a deviations report, a qualification statement and a validation statement (where appropriate), and a recommendation. |
| Performance | The utilities and warranties of a service. |
| Performance model | A representation of a service that is used to help predict performance. |
| Predicted performance | The expected performance of a service following a service change. |
| Residual risk | The risk that remains after countermeasures have been deployed. |
| Service capability | The ability of a service to perform as required. |
| Service change | A change to an existing service or the introduction of a new service. |
| Test plan and results | The test plan is a response to an impact assessment of the proposed service change. Typically the plan will specify how the change will be tested; what records will result from testing and where they will be stored; who will authorize the change; and how it will be ensured that the change and the service(s) it affects will remain stable over time. The test plan may include a qualification plan and a validation plan if the change affects a regulated environment. The results represent the actual performance following implementation of the change. |
Now, in the next section, let us look at the process flow of change evaluation.
The diagram given below shows the change evaluation process flow. It depicts the key inputs and outputs of the process.
Moving on to the next section, let us look at the evaluation plan.
Evaluation of a change should be carried out from a number of different perspectives to ensure that the unintended effects of a change are understood as well as the intended effects. Generally speaking, we would expect the intended effects of a change to be beneficial, and they should match the acceptance criteria.
Unintended effects are harder to predict: they are often not seen until the pilot stage, or even once the change is in production; they are difficult to measure; and they are frequently not beneficial to the business, for example in terms of impact on other services, impact on customers and users of the service, or network overloading.
To understand this concept, let us discuss it in detail in the next sections.
Let us understand the intended effect of a change. The details of the service change, customer requirements, and Service Design package should be carefully analyzed to understand the purpose of the change and the expected benefit from implementing it fully.
Examples might include:
reduce the cost of running the service
increase service performance
reduce resources required to operate the service
improve service capability
The change documentation should make clear what the intended effect of the change will be, along with the specific measures that will be used to determine its effectiveness. If these are in any way unclear or ambiguous, the evaluation should cease, and a recommendation not to proceed should be forwarded to Change Management.
Even some deliberately designed changes may be detrimental to some elements of the service. For example, the introduction of SOX-compliant procedures, while delivering the benefit of legal compliance, introduces extra work steps and costs.
Let us now move on to understand the unintended effect of a change.
In addition to the expected effects on the service and the broader organization, there are likely to be additional effects that were not expected or planned for. These effects must also be surfaced and considered if the full impact of a service change is to be understood.
One of the most effective ways of identifying such effects is through discussion with all stakeholders: not just customers, but also users of the service, those who maintain it, those who fund it, and so on.
Care should be taken in presenting the details of the change to ensure stakeholders fully understand the implications and can, therefore, provide accurate feedback.
Let us move to the next section and learn about the factors for considering the effect of a service change.
The table shown below depicts the factors to be included when considering the effect of a service change.
| Factor | Evaluation of service design |
| --- | --- |
| S - Service provider capability | The ability of a service provider or service unit to perform as required. |
| T - Tolerance | The ability or capacity of a service to absorb the service change or release. |
| O - Organizational setting | The ability of an organization to accept the proposed change. For example, is appropriate access available for the implementation team? Have all existing services that would be affected by the change been updated to ensure a smooth transition? |
| R - Resources | The availability of appropriately skilled and knowledgeable people, sufficient finances, infrastructure, applications, and other resources necessary to run the service following the transition. |
| M - Modelling and measurement | The extent to which the predictions of behavior generated from the model match the actual behavior of the new or changed service. |
| P - People | The people within a system and the effect of the change on them. |
| U - Use | Will the service be fit for use? Will it be able to deliver the warranties? Is it continuously available? Is there enough capacity? Will it be secure enough? |
| P - Purpose | Will the new or changed service be fit for purpose? Can the required performance be supported? Will the constraints be removed as planned? |
Using customer requirements (including acceptance criteria), the predicted performance, and the performance model, a risk assessment is carried out. If the risk assessment suggests that the predicted performance may create unacceptable risks or may not meet the acceptance criteria, an interim evaluation report is sent to alert change management.
The interim evaluation report includes the outcome of the risk assessment and the outcome of the predicted performance versus acceptance criteria, together with a recommendation to reject the service change in its current form. Evaluation activities cease at this point pending a decision from change management.
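As a rough illustration of this decision point, the sketch below checks predicted performance against acceptance criteria and an assumed risk threshold; all names, fields, and thresholds are hypothetical, not ITIL definitions.

```python
# Hypothetical sketch of the predicted-performance decision described above.
# Field names and the risk threshold are illustrative assumptions.

ACCEPTABLE_RISK = 0.2  # assumed organizational risk-tolerance threshold

def evaluate_predicted_performance(predicted, acceptance_criteria, risk_level):
    # Does the predicted performance meet every acceptance criterion?
    meets_sac = all(predicted.get(k, 0) >= v
                    for k, v in acceptance_criteria.items())
    if risk_level > ACCEPTABLE_RISK or not meets_sac:
        # Interim evaluation report: recommend rejection; evaluation pauses
        # pending a decision from change management
        return {"report": "interim", "recommendation": "reject",
                "risk_level": risk_level, "meets_criteria": meets_sac}
    # Otherwise the evaluation proceeds to the actual-performance stage
    return {"report": "interim", "recommendation": "proceed"}

outcome = evaluate_predicted_performance(
    predicted={"throughput": 950, "availability": 99.9},
    acceptance_criteria={"throughput": 1000, "availability": 99.5},
    risk_level=0.1,
)
print(outcome["recommendation"])  # 'reject' - predicted throughput misses SAC
```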
Moving on, let us discuss the evaluation of actual service performance in the next two sections.
Before change management makes a decision on whether to authorize each step in a change, change evaluation will evaluate the actual performance. The extent to which actual performance can be evaluated depends on how far through the change lifecycle the evaluation is performed.
The results of this evaluation are sent to change management in the form of an interim evaluation report. This interim evaluation report includes the outcome of the risk assessment and the outcome of the actual performance versus acceptance criteria, together with a recommendation on whether to authorize the next step.
Once the service change has been implemented, a report on actual performance is received from operations. Using customer requirements (including acceptance criteria), the actual performance, and the performance model, a risk assessment is carried out.
Let’s continue this in the next section.
If the risk assessment suggests that the actual performance is creating unacceptable risks, an interim evaluation report is sent to change management. The interim evaluation report includes the outcome of the risk assessment and the outcome of the actual performance versus acceptance criteria, together with a recommendation to remediate the service change.
Evaluation activities cease at this point pending a decision from change management. If, however, the risk assessment shows an acceptable level of risk, then an evaluation report is sent to change management.
As we have begun to discuss risk here, let us understand risk management in the next section.
There are two steps in risk management: risk assessment and mitigation.
Risk assessment is concerned with analyzing the threats and weaknesses that have been or would be introduced as a result of a service change. Risk occurs when a threat can exploit a weakness. The likelihood of threats exploiting a weakness, and the impact if they do, are the fundamental factors in determining risk.
The risk management formula is simple but very powerful:
Risk = Likelihood x Impact
Obviously, the introduction of new threats and weaknesses increases the likelihood of a threat exploiting a weakness. Placing greater dependence on a service or component increases the impact if an existing threat exploits an existing weakness within the service.
These are just a couple of examples of how risk may increase as a result of a service change. It is a clear requirement that the evaluation of a proposed service change must assess the existing risks within a service and the predicted risks following implementation of the change.
If the risk level has increased, then the second stage of risk management is used to mitigate the risk. In the examples given above, mitigation may include steps to eliminate a threat or weakness, and the use of disaster recovery and backup techniques to increase the resilience of a service on which the organization has become more dependent.
Following mitigation, the risk level is re-assessed and compared with the original. This second assessment and any subsequent assessments are in effect determining residual risk – the risk that remains after mitigation. Assessment of residual risk and associated mitigation continues to cycle until the risk is managed down to an acceptable level.
The guiding principle here is that either the initially assessed risk level or the residual risk level after mitigation must be equal to or less than the original risk level prior to the service change. If this is not the case, then the evaluation will recommend rejection of the proposed service change, or backing out of an implemented service change.
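The cycle can be sketched in a few lines of Python. This is a purely illustrative model: the 0-1 scoring scales and the assumed effect of each countermeasure are not from ITIL.

```python
# Illustrative sketch of the risk management cycle described above.
# Scales and the mitigation effect are assumptions for demonstration.

def risk(likelihood, impact):
    # Risk = Likelihood x Impact (both scored here on a 0-1 scale)
    return likelihood * impact

original_risk = risk(likelihood=0.2, impact=0.5)  # risk before the change

# Risk introduced by the service change, before any countermeasures
likelihood, impact = 0.5, 0.6
residual = risk(likelihood, impact)

countermeasures = []
while residual > original_risk:
    # Mitigate: assume each countermeasure halves the likelihood, e.g. by
    # eliminating a weakness or adding backup/disaster recovery resilience
    countermeasures.append("eliminate weakness / add resilience")
    likelihood /= 2
    residual = risk(likelihood, impact)  # re-assess the residual risk

print(residual <= original_risk, len(countermeasures))
# True 2 - two mitigation cycles brought residual risk below the original
```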
In risk management, deviations between predicted performance and actual performance are considered. Let us look into the details.
Predicted vs. Actual Performance
Once the service change passes the evaluation of predicted performance and actual performance, essentially as standalone evaluations, a comparison of the two is carried out.
To have reached this point, it will have been determined that predicted performance and actual performance are acceptable and that there are no unacceptable risks.
The output of this activity is a deviations report, which states the extent of any deviation between predicted and actual performance.
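A minimal sketch of how such a deviations report might be tabulated, assuming per-metric numeric measurements (the metrics and values below are invented for illustration):

```python
# Hypothetical deviations-report calculation; metric names and values
# are illustrative, not taken from ITIL.

predicted = {"response_time_ms": 200, "throughput_tps": 1000, "availability": 99.9}
actual    = {"response_time_ms": 230, "throughput_tps": 1020, "availability": 99.9}

deviations = {metric: actual[metric] - value for metric, value in predicted.items()}

for metric, deviation in deviations.items():
    # The report states the extent of each deviation, including zero ones
    print(f"{metric}: predicted={predicted[metric]} actual={actual[metric]} "
          f"deviation={deviation:+}")
```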
Moving on, in the next section, let us learn about the test plan and results.
The testing function provides the means for determining the actual performance of the service following implementation of a service change. Testing provides the service evaluation function with the test plan and a report on the results of any testing.
The actual results are also made available to service evaluation. In some circumstances, it is necessary to provide a statement of qualification and validation status following a change; this takes place in regulated environments such as pharmaceuticals and defense.
The context for these activities is shown in the figure given below:
The inputs to these activities are the qualification plan and results, and validation plan and results. The evaluation process ensures that the results meet the requirements of the plans. A qualification and validation statement is provided as output.
Let us now proceed to understand the evaluation report and its components in the next section.
The evaluation report contains the following sections.
Risk profile: a representation of the residual risk left after a change has been implemented and after countermeasures have been applied.
Deviations report: the difference between predicted and actual performance following the implementation of a change.
A qualification statement (if appropriate): following the review of qualification test results and the qualification plan, a statement of whether or not the change has left the service in a state whereby it could not be qualified.
A validation statement (if appropriate): following the review of validation test results and the validation plan, a statement of whether or not the change has left the service in a state whereby it could not be validated.
A recommendation: based on the other factors within the evaluation report, a recommendation to change management to accept or reject the change.
As with any other process, let us look at the triggers, inputs, and outputs of change evaluation.
Triggers for the process would be:
Request for Evaluation from Service Transition manager or Change Management
Activity on Project Plan.
Inputs would be:
Service package
Service design package (SDP) and service acceptance criteria (SAC)
Test results and report.
Accordingly, the Output would be:
Interim evaluation report for change management
Evaluation report for Change Management
In the next section, we will look at the change evaluation process interfaces with other lifecycle stages.
Change evaluation is part of the overall process for managing significant service transitions and should work with Transition Planning and Support to ensure that appropriate resources are available when needed and that each service transition is well managed.
The change evaluation process must be tightly integrated with Change Management. There should be clear agreement on which types of change will be subject to formal evaluation, and the time required for this evaluation must be included in the overall planning for the change.
Change management provides the trigger for change evaluation, and the evaluation report must be delivered to change management in time for the change advisory board (or other change authority) to use it to assist in their decision-making.
Let’s check out the other interfaces in the next section.
Change evaluation requires information about the service, which is supplied by the design coordination process in the form of a service design package.
Change evaluation may need to work with Service Level Management or Business Relationship Management to ensure a full understanding of the impact of any issues identified and to obtain use of user or customer resources if these are needed to help perform the evaluation.
Change evaluation requires information from the Service Validation and Testing process, and must coordinate activities with this process to ensure that required inputs are available in sufficient time. The next section will talk about information management in the change evaluation process.
Much of the information required for change evaluation should be available from the SKMS. All evaluation reports should be checked in to the CMS, and softcopy versions of the reports should be stored in the SKMS.
So far, we have discussed the change evaluation triggers, inputs, outputs, interfaces, and information management. Let us now discuss the critical success factors and Key Performance Indicators of this process.
Each organization should identify appropriate CSFs based on its objectives for the process. Each sample CSF is followed by a small number of typical KPIs that support the CSF. These KPIs should not be adopted without careful consideration.
Each organization should develop KPIs that are appropriate for its level of maturity, its CSFs, and its particular circumstances. Achievement against KPIs should be monitored and used to identify opportunities for improvement, which should be logged in the continual service improvement (CSI) register for evaluation and possible implementation.
The following list includes some sample CSFs for Change Evaluation and their corresponding KPIs:
| CSF | KPI |
| --- | --- |
| Stakeholders have a good understanding of the expected performance of new and changed services | |
| Change management has good quality evaluations to help them make correct decisions | |
Challenges to change evaluation include:
Developing standard performance measures and measurement methods across projects and suppliers
Understanding the different stakeholder perspectives that underpin effective risk management for the change evaluation activities
Understanding, and being able to assess, the balance between managing risk and taking risks, as this affects the overall strategy of the organization and service delivery
Measuring and demonstrating less variation in predictions during and after the transition
Taking a pragmatic and measured approach to risk
Communicating the organization’s attitude to risk and approach to risk management effectively during risk evaluation
Building a thorough understanding of the risks that have impacted or may impact successful service transition of services and releases
Encouraging a risk management culture where people share information.
Let’s look at the risks to this process in the next section.
Risks to change evaluation include:
Lack of clear criteria for when change evaluation should be used
Unrealistic expectations of the time required for change evaluation
Change evaluation personnel with insufficient experience or organizational authority to be able to influence change authorities
Projects and suppliers estimating delivery dates inaccurately and causing delays in scheduling change evaluation activities.
Let us quickly summarize in the next section.
Here is the recap of the Change Evaluation module:
Evaluating the actual performance of any service change against its anticipated performance is an important source of information for service providers
It is important to fully understand the purpose of the change and the expected benefit from implementing it—unintended effects must also be surfaced and considered
The factors for considering the effects of service change are service provider capability, tolerance, organizational setting, resources, modeling and measurement, people, use, and purpose
There are two steps in risk management: risk assessment and mitigation
The deviations report states the extent of any deviation between predicted and actual performance
The next lesson focuses on Knowledge Management.