CISSP - Security Assessment and Testing Tutorial

Welcome to the sixth domain of the CISSP tutorial (part of the CISSP Certification Training). This domain covers Security Assessment and Testing.

Let us explore the objectives of this domain in the next section.


After completing this domain, you will be able to:

  • Discuss security assessment and test strategies

  • Describe log management

  • Describe different testing techniques

  • Discuss security testing in the Software Development Life Cycle

  • Describe internal and third-party audits

Let us begin with the importance of Security Assessment and Testing in the next section.

Security Assessment and Testing—Introduction

The goal of security assessment and testing is early identification of technical, operational, and system deficiencies so that appropriate and timely corrective actions can be applied before using the system in the production environment. Creating a test and evaluation strategy involves:

  • Planning for technology development and risk

  • Evaluating the system design against project requirements

  • Identifying where competitive prototyping and other evaluation techniques fit in the process

Let us discuss assessment and test strategies in the next section.

Assessment and Test Strategies

A well-planned and well-executed assessment and test strategy can provide valuable information about risk and risk mitigation.

A security practitioner:

  • Must develop assessment and test strategies supporting development and acquisition programs

  • Can recommend test and evaluation techniques to help in evaluating and managing risks

  • Can formulate plans and procedures to be used by the testing team

The assessment and test strategy is generally executed by a working group known as the Integrated Product Team or IPT consisting of subject matter experts, customer user representatives, and other stakeholders. The strategy should be updated as and when required.

Let us discuss Vulnerability Assessment in the next section.

Vulnerability Assessment

A vulnerability is a flaw, weakness, or loophole in the system security procedures, design, implementation, or internal controls that may result in a security breach or a violation of the system's security policy.

For a system, a vulnerability assessment helps identify, quantify, and prioritize vulnerabilities using various analysis methodologies and tools.

Vulnerability assessments usually consist of the following steps:

  1. Identify the asset or resource.

  2. Assign a quantifiable level of importance to the resources identified.

  3. Identify vulnerabilities in or potential threats to each resource.

  4. Develop a strategy to mitigate or eliminate the most serious vulnerabilities for the most valuable resources.

  5. Define and implement ways to minimize the consequences if an attack does occur.
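The quantification and prioritization in steps 2 through 4 can be sketched in a few lines of Python. The asset names, the 1-to-5 scales, and the importance-times-severity formula below are illustrative assumptions, not a prescribed methodology:

```python
# Rank assets by a simple risk score: asset importance x vulnerability severity.
# The 1-5 scales and the asset data are illustrative, not a standard model.
assets = [
    {"name": "customer-db", "importance": 5, "severity": 4},
    {"name": "intranet-wiki", "importance": 2, "severity": 5},
    {"name": "build-server", "importance": 3, "severity": 2},
]

def risk_score(asset):
    return asset["importance"] * asset["severity"]

# Mitigate the most serious vulnerabilities on the most valuable resources first.
prioritized = sorted(assets, key=risk_score, reverse=True)
for asset in prioritized:
    print(asset["name"], risk_score(asset))
```

The sort order directly encodes step 4: the highest-scoring combination of asset value and vulnerability severity is remediated first.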

Let us discuss Penetration Testing in the next section.

Penetration Testing

Penetration testing is the process of determining the true nature and impact of a given vulnerability by actually exploiting it. It goes a step beyond vulnerability assessment and simulates an actual attack. Other names for this process are ethical hacking, red teaming, tiger teaming, and vulnerability testing.

The penetration testing methodology involves the following steps:

  1. Reconnaissance or Discovery, which involves identifying and documenting information about the target.

  2. Enumeration, which involves gaining more information about the target using intrusive methods.

  3. Vulnerability Analysis, or mapping the environment profile to known vulnerabilities.

  4. Execution, which involves attempting to gain user and privileged access to the target.

  5. Documentation, which involves recording the results of the test as the final step in penetration testing.

The security practitioner must be aware of various security tools used for penetration testing. Some of the most widely used tools include Metasploit, Nessus, and OpenVAS.
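As a minimal illustration of the Reconnaissance and Enumeration steps, the following standard-library sketch performs a TCP connect check on a handful of ports. The host and port list are placeholders, and such probes must only be run against systems you are authorized to test:

```python
import socket

# Minimal reconnaissance sketch: a TCP connect check of a few common ports.
# Run only against hosts you are authorized to test; host and ports are examples.
def scan_ports(host, ports, timeout=1.0):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

# Example: check the local host for a few common services.
print(scan_ports("127.0.0.1", [22, 80, 443]))
```

Dedicated tools such as Nessus or OpenVAS go much further, mapping discovered services to known vulnerabilities; this sketch only shows the discovery principle.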

Let us discuss Log Management in the next section.

Log Management

Events occurring within an organization’s systems and networks are recorded in what is known as a log.

Apart from records related to computer security, logs are generated by many other sources such as firewalls, security software, intrusion detection and prevention systems, routers, anti-virus software, operating systems, applications, and other networking devices. The sheer number, volume, and variety of logs make log management necessary.

The security practitioner must understand the log management process, which involves managing the log lifecycle.

Log management covers the following phases:

  1. Log generation, where the logs produced by hosts’ operating systems, security software, and other applications are managed.

  2. Log transmission, where log information is transmitted securely to its destination.

  3. Log storage, where the large volumes of logging data are managed and stored.

  4. Log analysis, where log entries are studied to identify events of interest.

  5. Log disposal, where log records are securely disposed of at the end of the retention period.
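Phase 4, log analysis, can be illustrated with a short sketch that scans raw log lines for events of interest. The log format and the "Failed password" pattern are illustrative assumptions:

```python
import re

# Log analysis sketch: flag events of interest in raw log lines.
# The sample log format and the search pattern are illustrative.
sample_logs = [
    "2024-05-01T10:00:01 sshd[101]: Accepted password for alice",
    "2024-05-01T10:00:07 sshd[102]: Failed password for root",
    "2024-05-01T10:00:09 sshd[103]: Failed password for root",
]

def events_of_interest(lines, pattern=r"Failed password"):
    return [line for line in lines if re.search(pattern, line)]

alerts = events_of_interest(sample_logs)
print(len(alerts))  # → 2
```

Production log analysis relies on SIEM platforms rather than ad hoc scripts, but the core operation is the same: match entries against patterns that indicate security-relevant events.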

Let us discuss the advantages and challenges of Log Management in the next section.

Preparing for a career in Information Security? Check out our Course Preview on CISSP here!

Log Management—Advantages and Challenges

Log management helps in ensuring confidentiality, integrity, and availability of logs. Logs are very useful in forensic investigations and auditing. Log analysis is essential for identifying security incidents, frauds, and operational issues. Logs are also helpful in establishing baselines and supporting internal investigations.

Challenges in log management are as follows: Due to limited storage space, managing a large volume of logs from numerous sources is difficult at times. Log generation and storage can be further complicated by discrepancies in log contents, timestamps, and formats.
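The timestamp-discrepancy challenge is commonly addressed by normalizing all source timestamps to a single representation. A minimal sketch, assuming two hypothetical source formats:

```python
from datetime import datetime, timezone

# Normalization sketch for the timestamp-discrepancy challenge: convert
# differently formatted source timestamps to one UTC representation.
# The two input formats are examples of what different sources might emit.
FORMATS = ["%Y-%m-%d %H:%M:%S", "%d/%b/%Y:%H:%M:%S"]

def normalize(ts):
    for fmt in FORMATS:
        try:
            return datetime.strptime(ts, fmt).replace(tzinfo=timezone.utc).isoformat()
        except ValueError:
            continue
    raise ValueError("unrecognized timestamp format: " + ts)

# Both lines print 2024-05-01T10:00:07+00:00
print(normalize("2024-05-01 10:00:07"))
print(normalize("01/May/2024:10:00:07"))
```

With all sources reduced to one format, entries from different devices can be correlated on a common timeline.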

Let us discuss the best practices for log management in the next section.

Log Management—Best Practices

For successful log management, it is essential to:

  • Establish policies and procedures for this purpose.

  • Prioritize requirements for the log management process based on the expected risk reduction, and time and resource requirements.

  • Define appropriate roles and responsibilities for supporting the different functions of the log management process.

  • Create and maintain log management infrastructures such as hardware, software, networks, and media used to generate, transmit, store, analyze, and dispose of log data.

  • Support the staff responsible for log management and ensure that log management for individual systems is performed effectively throughout the organization.

Let us discuss the log management operational process in the next section.

Log Management—Operational Process

A standard log management process involves the following activities:

  • Configure log sources,

  • Perform log analysis,

  • Initiate responses to identified events,

  • Manage long-term storage,

  • Monitor status of log sources,

  • Manage log archiving,

  • Update, patch, and test logging software,

  • Synchronize clocks of logging sources,

  • Manage logging reconfiguration,

  • Document and report, and

  • Consolidate log repositories, such as Security Information and Event Management or SIEM (Read as S-I-E-M) systems.

Let us discuss logged events in the next section.

Logged Events

A large quantity of information is logged from various devices.

Commonly logged information includes requests from the client, response from the server, usage information, account information, successful and unsuccessful authentication attempts, account changes, application startup and shutdown, and failures and critical changes in an application.

Let us take a look at the concept of synthetic transactions in the next section.

Synthetic Transactions

Real User Monitoring or RUM (Read as R-U-M) records all user interaction with a website or client interaction with a cloud-based application or server. RUM is a passive monitoring technology that determines whether users are being served correctly and quickly.

Synthetic transactions are scripted actions run against monitored objects to simulate user activity. Synthetic performance monitoring is a proactive form of monitoring in which external agents run these scripted transactions against a Web application. Unlike RUM, synthetic monitoring does not track real user sessions.

Some examples of synthetic transaction monitoring tools are Microsoft’s System Center Operations Manager and Foglight Transaction Recorder. These tools allow the creation of synthetic transactions that provide functionality such as monitoring websites, databases, and TCP ports.
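A synthetic transaction can be as simple as a scripted probe that records availability and response time. The following standard-library sketch is illustrative; the URL and the SLA threshold would come from the monitoring plan:

```python
import time
import urllib.request

# Synthetic-transaction sketch: a scripted probe that measures availability
# and response time for a URL. The URL and threshold are placeholders.
def probe(url, timeout=5.0):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 400
    except OSError:
        ok = False  # unreachable, refused, or HTTP error
    return ok, time.monotonic() - start

# A scheduler (cron job or monitoring agent) would run this around the clock
# and alert when ok is False or the elapsed time exceeds the SLA threshold.
```

Commercial tools layer scripting, geographic distribution, and reporting on top of exactly this kind of probe.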

Let us discuss synthetic transactions further in the next section.

Reasons to Use Synthetic Transactions

Synthetic transactions monitoring or active monitoring consists of synthetic probes and web robots to create reports on system availability and predefined business transactions.

The most common uses of synthetic transactions monitoring are to:

  • Monitor application availability round the clock, even during off hours

  • Check if a remote site is reachable,

  • Check impact on third-party services,

  • Know if an application is down,

  • Measure service-level agreements or SLAs objectively,

  • Monitor cloud services performance and availability,

  • Test Web services,

  • Monitor critical database queries,

  • Baseline and analyze performance trends across geographies, and

  • Monitor availability during low traffic periods.

Let us discuss Code Review and Testing in the next section.

Code Review and Testing

A coding error can make a system vulnerable and compromise its security entirely.

Code review or peer review is the systematic examination of computer source code. It is intended to identify and fix mistakes that were overlooked in the initial development phase. This improves both the overall software quality and developer skills.

As a best practice, security must be included in all the phases of the Software Development Life Cycle or SDLC. Code reviews often find and remove common vulnerabilities to improve software security.

Software vulnerabilities are mainly caused by insufficient checking of parameters, bad programming, misconfiguration, functional bugs, and logical flaws.

Let us discuss Testing Techniques in the next section.

Testing Techniques

Manual testing and automated testing: In manual testing, the test scenario is guided by a human; therefore, it is slow and tedious. In automated testing, the test scenario is executed at a significantly higher speed by a specialized application.

Black box testing and White box testing

Black box testing is used to test software without knowledge of the internal structure of the code or program, whereas in white box testing the internal structure is known to the tester, as when testing the source code of a program.

Static testing and Dynamic testing

In static testing, the software is not executed; an example is a review of the software code. Dynamic testing, in contrast, is performed while the software is executing, as in integration tests.
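Static testing can be illustrated with a tiny static-analysis sketch: the code below inspects Python source without executing it and flags calls to eval(), a common insecure-coding finding. The single rule chosen here is an illustrative example, not a complete analyzer:

```python
import ast

# Static-testing sketch: examine source code without executing it.
# Here we flag calls to eval(), one example of an insecure-code finding.
def find_eval_calls(source):
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == "eval"
    ]

code = "x = input()\nresult = eval(x)\n"
print(find_eval_calls(code))  # → [2]
```

A dynamic test of the same code would instead run it with crafted inputs and observe its behavior, which is the distinction drawn above.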

To conduct these tests, the security practitioner must have an understanding of the

  • Type of application

  • Attack surface

  • Technologies supported

  • Quality of results from using different techniques and tools

  • Usability of results

  • Performance and resource utilization, depending on the type of testing technique and tool used.

Let us discuss security testing in the following section.

Security Testing in the SDLC

Security testing is an important consideration in software development and is incorporated in the SDLC as follows:

Planning and Design phase

In the Planning and Design phase, the security practitioner conducts an architecture security review to detect architectural flaws against security standards. The security practitioner also carries out threat modeling to identify threats, their impact, and possible countermeasures.

Application Development phase

The Application Development phase involves testing that includes a manual code review and Static Application Security Testing or SAST, which help to identify insecure code, misconfigurations, and errors. In other tests, such as a manual binary review and a static binary review, compiled software is analyzed as a form of static testing.

Testing phase

In the Testing phase of the SDLC, the software or application is ready and can be tested dynamically. Some tests are:

  • Vulnerability assessment scanning for applications

  • Manual and automated penetration testing to find vulnerabilities and their corresponding impact

  • Fuzzing to detect software crashes
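Fuzzing, listed above, can be sketched as follows: randomized inputs are fed to a function under test and any crashes are recorded. parse_record is a deliberately fragile example target, not a real component:

```python
import random
import string

# A deliberately buggy example target for the fuzzer below.
def parse_record(text):
    name, age = text.split(",")    # crashes when there is not exactly one comma
    return name.strip(), int(age)  # crashes when age is not numeric

# Fuzzing sketch: feed randomized printable strings to the target and
# record every input that raises an exception (a "crash").
def fuzz(target, runs=200, seed=1):
    rng = random.Random(seed)  # fixed seed keeps crashes reproducible
    crashes = []
    for _ in range(runs):
        data = "".join(rng.choice(string.printable) for _ in range(rng.randint(0, 12)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, type(exc).__name__))
    return crashes

print(len(fuzz(parse_record)))  # almost every random input crashes this parser
```

Real fuzzers (for example, coverage-guided tools) mutate inputs far more cleverly, but the detect-the-crash loop is the same.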

System Operations and Maintenance phase

For the final phase of the SDLC, System Operations and Maintenance, all the tests performed in the previous phases can be conducted. Important tests are security testing of patches and application updates, and white box or code-based testing, which identifies test cases from available information such as source code, development documents, and design specifications.

Another critical task in this phase is black box testing, or functional testing, which is definition or specification based. This is performed to test various software functionalities without the knowledge of source code or design specifications.

Let us discuss the software product testing levels in the next section.

Software Product Testing Levels

Testing levels are meant to identify missing areas and prevent overlap and repetition between the life cycle phases. The Software Engineering Body of Knowledge or SWEBOK defines three testing levels during the development process—unit, integration, and system level—such that different tests are performed at each of these levels without any specific process model.

Unit testing or component testing

Unit testing or component testing verifies the functionality of a specific section of code, usually at the function level. Individual units or components of a software or system are tested. It helps validate that each unit of the software performs as intended.

Integration testing

In integration testing, individual units are combined and tested as a group for behavior and functionality. It helps expose faults in the interaction between integrated units.

System testing

System testing, or end-to-end testing, tests a completely integrated system or software to verify if it meets requirements.
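The unit and integration levels can be illustrated with assert-style checks; discount() and checkout() below are hypothetical units of an example application:

```python
# Unit- and integration-level testing sketch.
# discount() and checkout() are hypothetical units of an example application.
def discount(price, percent):
    return round(price * (1 - percent / 100), 2)

def checkout(prices, percent):
    return sum(discount(p, percent) for p in prices)

# Unit test: one component verified in isolation.
assert discount(100.0, 10) == 90.0

# Integration test: units combined and exercised as a group.
assert checkout([100.0, 50.0], 10) == 135.0

print("all tests passed")
```

System testing would exercise the full application end to end, for instance driving the real checkout flow through its user interface or API.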

Let us discuss Misuse Case Testing in the next section.

Misuse Case Testing

The two broad categories of software testing strategies are positive testing and negative testing.

Positive testing

In positive testing, the system is verified using valid forms of input data. This is done to check if, for a valid set of input data, the application behaves as expected. An error encountered during testing means the test has failed.

Negative testing

In negative testing, the system is verified against invalid input data. This is done to check system behavior if wrong or invalid input data is used.
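Both strategies can be sketched against a hypothetical input validator; the 0-to-130 bounds are illustrative:

```python
# Positive vs. negative testing sketch. validate_age is a hypothetical
# input validator; the 0-130 bounds are illustrative assumptions.
def validate_age(value):
    age = int(value)              # raises ValueError for non-numeric input
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

# Positive test: valid input data behaves as expected.
assert validate_age("42") == 42

# Negative tests: invalid input must be rejected, not silently accepted.
for bad in ["-1", "999", "forty"]:
    try:
        validate_age(bad)
    except ValueError:
        pass                      # expected rejection: the negative test passes
    else:
        raise AssertionError("invalid input accepted: " + bad)
```

Note the inverted pass criterion: in negative testing, an exception on invalid input is the success condition.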

Let us list Misuse Case Testing scenarios in the next section.

Misuse Case Testing—Scenarios

The main purpose of negative or misuse case testing is to check the stability of the software application against the influence of a variety of incorrect validation data. Some misuse case testing scenarios are:

  • Allowed data limits and bounds: This checks the behavior of the application when a value smaller than the lower bound or greater than the upper bound of the specified field is entered.

  • Populating the required fields: This test checks the response of the application when the required fields are not filled.

  • Allowed number of characters: This test checks the behavior of the application when more characters than allowed are entered into a field.

  • Reasonable data: This test checks the response of the application when data entered into a particular field exceeds a reasonable limit.

  • Web session testing: This test checks the behavior of the application when a user attempts to open a page that requires a login without first logging in.

  • Correspondence between data and field types: This test checks the behavior of the application when invalid data is entered into the specified field type.

Let us discuss Test Coverage Analysis in the next section.

Test Coverage Analysis

Test coverage involves a set of test cases written against the requirements specification. It is a type of “black-box” testing where it is not necessary to see the code to write the test cases.

Once a document is written describing all the test cases, the test groups may refer to percentages of test cases that were run, that passed or failed, and so on. These are referred to as test coverage metrics. Overall test coverage is often used by QA groups to indicate test metrics and coverage according to the test plan.
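A test coverage metric of this kind reduces to simple arithmetic over the test-case list; the cases below are illustrative:

```python
# Test-coverage-metric sketch: percentage of planned cases run, and the
# pass rate among those run. The case list is illustrative.
cases = [
    {"id": "TC-01", "run": True,  "passed": True},
    {"id": "TC-02", "run": True,  "passed": False},
    {"id": "TC-03", "run": False, "passed": False},
    {"id": "TC-04", "run": True,  "passed": True},
]

def coverage_metrics(cases):
    run = [c for c in cases if c["run"]]
    passed = [c for c in run if c["passed"]]
    return {
        "run_pct": 100.0 * len(run) / len(cases),    # cases executed
        "pass_pct": 100.0 * len(passed) / len(run),  # of those, cases passing
    }

print(coverage_metrics(cases))
```

A QA group would report these percentages against the test plan, here 75 percent of planned cases run.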

Let us focus on Interface Testing in the next section.

Interface Testing

Interface testing checks if the different components of an application or system that is under development pass data and control correctly to one another, and verifies if the interactions between the components work correctly.

It checks if errors are handled appropriately, thus assuring the quality of software products. Interface testing helps to validate that security requirements are met and that communication between systems is encrypted where required. Interface testing is performed by both the testing and development teams.

Let us discuss interface testing further in the next section.

Interface Testing (contd.)

The security practitioner must be aware of some common tests performed on APIs, which include:

  • Check whether the API returns no value.

  • Check whether the API triggers another event or calls another API; if so, track and verify the event output.

  • Check whether the API updates any data structure.

Let us discuss API testing in the next section.

API Testing

An Application Programming Interface or API specifies how one component should interact with another and consists of a set of protocols, routines, and tools for building software applications. API testing is performed on a system that exposes a collection of APIs.

For API testing of the system, the following are considered:

  • Verify the boundary conditions and ensure the test harness varies API call parameters in ways that verify functionality and expose failures.

  • Verify calls with two or more parameters by generating combinations of parameter values.

  • Verify API behavior due to the external environmental conditions such as files and peripheral devices.

  • Verify the order of API calls and check if the APIs produce useful results from consecutive calls.
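The second consideration, generating combinations of parameter values, can be sketched with itertools.product; the parameter domains below are illustrative:

```python
from itertools import product

# API-testing sketch: generate every combination of parameter values to
# drive calls against the API under test. The domains are illustrative.
param_domains = {
    "user": ["alice", "bob"],
    "role": ["admin", "viewer"],
    "active": [True, False],
}

def combinations(domains):
    keys = list(domains)
    return [dict(zip(keys, values)) for values in product(*(domains[k] for k in keys))]

calls = combinations(param_domains)
print(len(calls))  # → 8 parameter sets to replay against the API under test
```

Each generated dictionary becomes one API call in the test harness, so functionality is verified across the whole parameter space rather than a single happy path.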

Let us continue to discuss GUI testing in the next section.

GUI Testing

User interface testing is used to identify defects in a product or software through the Graphical User Interface or GUI (Read as G-U-I). In this technique, the application's user interface is tested to verify that it behaves as specified. The GUI is a hierarchical, graphical front-end to the application, which contains graphical objects with a set of properties.

The security practitioner must be aware of the following characteristics of GUI Testing:

  • During execution, the values of the properties of each object define the GUI state.

  • GUI testing has capabilities to exercise GUI events such as key press or mouse click and can provide inputs to the GUI objects.

  • It depends strongly on the technology used.

Let us discuss common software vulnerabilities in the next section.

Common Software Vulnerabilities

There are many resources available to understand common software vulnerabilities found globally. The security practitioner must refer to these resources to understand the best practices for security.

Let us discuss some of the resources:

The 2011 CWE/SANS (Read as Sans) Top 25 Most Dangerous Software Errors, published by the Common Weakness Enumeration or CWE project together with the SANS Institute. This is a list of the most widespread and critical errors that can lead to serious vulnerabilities in software. These errors are often easy to find and exploit. They are dangerous because they frequently allow attackers to completely take over the software, steal data, or prevent the software from working at all.

SANS Critical Security Controls focus on prioritizing security functions that are effective against the latest Advanced Targeted Threats. These controls are transforming security in government agencies and other large enterprises by focusing their spending on the key controls that block known attacks and find the ones that get through.

Open Web Application Security Project or OWASP (Read as O-wasp) is an organization providing unbiased, practical, and cost-effective information about computer and Internet applications.

Project members include a variety of security experts from around the world sharing their knowledge of vulnerabilities, threats, attacks, and countermeasures.

Let us look at a business scenario to understand the purpose of Testing and Evaluation in the next section.

Business Scenario

As modern systems installed at Nutri World Inc. are becoming exponentially more complex and interconnected, especially in software-intensive systems, the traditional "platform-centric" test methodologies have not fared well. However, the systems must still be tested for both performance and regulatory reasons.

Kevin was assigned the responsibility to create a new testing and evaluation strategy to identify, manage, and mitigate risks presented by the new complex and interconnected systems.

Question: What is the fundamental purpose of Test and Evaluation or T and E?

Answer: The fundamental purpose of Test and Evaluation or T and E is providing knowledge that helps in managing the risks involved in developing, producing, operating, and sustaining systems and capabilities.

Let us discuss Information Security Continuous Monitoring in the next section.

Information Security Continuous Monitoring

Information Security Continuous Monitoring or ISCM is defined as maintaining ongoing awareness of information security, vulnerabilities, and threats to support organizational risk management decisions.

Any effort or process intended to support ongoing monitoring of information security across an organization begins with the organizational leadership defining a comprehensive ISCM strategy covering technology, processes, procedures, operating environments, and people.

According to the NIST Special Publication 800-137 the ISCM strategy: “is grounded in a clear understanding of organizational risk tolerance and helps officials set priorities and manage risk consistently throughout the organization; includes metrics that provide meaningful indications of security status at all organizational tiers; ensures continued effectiveness of all security controls; verifies compliance with information security requirements derived from organizational missions/business functions, federal legislation, directives, regulations, policies, and standards/guidelines; is informed by all organizational IT assets and helps to maintain visibility into the security of the assets; ensures knowledge and control of changes to organizational systems and environments of operation; and maintains awareness of threats and vulnerabilities.”

Let us discuss the ISCM strategy and process in the next section.

Information Security Continuous Monitoring—Strategy and Process

To effectively address ever-increasing security challenges, a well-designed ISCM strategy addresses monitoring and assessment of security controls for effectiveness, as well as security status monitoring. It also incorporates processes to ensure that response actions are taken in accordance with findings and organizational risk tolerances, and that these responses have the intended effects.

Organizations must take the following steps for developing, implementing and maintaining an ISCM strategy:

  • Define an ISCM strategy based on risk tolerance that maintains clear visibility into assets, awareness of vulnerabilities, up-to-date threat information, and mission or business impacts.

  • Establish an ISCM program determining metrics, status monitoring frequencies, control assessment frequencies, and ISCM technical architecture.

  • Implement an ISCM program and collect the security-related information required for metrics, assessments, and reporting. Automate the collection, analysis, and reporting of data where possible.

  • Analyze data collected and report findings, determining the appropriate response. It may be necessary to collect additional information to clarify or supplement existing monitoring data.

  • Respond to findings with technical, management, and operational mitigating activities or acceptance, transference/sharing, or avoidance/rejection.

  • Review and update the monitoring program, adjusting the ISCM strategy and maturing measurement capabilities to increase visibility into assets and awareness of vulnerabilities, to further enable data-driven control of the security of an organization’s information infrastructure, and increase organizational resilience.

Let us discuss the metrics of risk evaluation and control in the next section.

Risk Evaluation and Control—Metrics

Security architects, security professionals, and security practitioners have to work together to determine the metrics to be used to evaluate and control ongoing risk to the organization.

Metrics include all security-related information from the assessment and monitoring of automated tools and manual procedures. Metrics are organized into meaningful information to support decision making and meet reporting requirements.

Some examples of metrics are:

  • Number and severity of vulnerabilities revealed and remediated

  • Number of unauthorized access attempts

  • Configuration baseline information

  • Contingency plan testing dates and results

  • Number of employees who are current on awareness training requirements

  • Risk tolerance thresholds for organizations

  • Risk score associated with a given system configuration
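How such metrics might be combined into a single risk score can be sketched as follows; the weights and the scaling of failed logins are illustrative assumptions, not a standard formula:

```python
# Metric-aggregation sketch: derive one risk score for a system from
# monitoring data. Weights and inputs are illustrative, not a standard.
def risk_score(open_vulns_by_severity, failed_logins, weights=None):
    weights = weights or {"critical": 10, "high": 5, "medium": 2, "low": 1}
    vuln_score = sum(weights[sev] * n for sev, n in open_vulns_by_severity.items())
    return vuln_score + failed_logins // 100  # hypothetical scaling of login noise

score = risk_score({"critical": 1, "high": 2, "medium": 4}, failed_logins=350)
print(score)  # compared against the organization's risk tolerance threshold
```

In practice the weighting scheme and thresholds are set by the organization as part of its ISCM program, and the resulting score feeds the reporting requirements described above.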

Let us discuss Security Controls Monitoring Frequencies in the next section.

Security Controls Monitoring Frequencies

Determining frequencies for security status monitoring and for security control assessments are critical functions of the organization’s ISCM program. Organizations take the following criteria into consideration when establishing monitoring frequencies for metrics or assessment frequencies for security controls:

  • Security control volatility: Volatile security controls are assessed more frequently. Information system configurations typically experience high rates of change.

  • System categorizations/impact levels: security controls implemented on systems that are categorized as high-impact are monitored more frequently than controls implemented on moderate-impact and low-impact systems.

  • Security controls or specific assessment objects providing critical functions: Security controls or assessment objects that provide critical security functions such as log management server, and firewalls are candidates for more frequent monitoring.

  • Security controls with identified weaknesses: controls having weaknesses are monitored more frequently until remediation of the weakness is complete.

  • Organizational risk tolerance: Organizations with a low tolerance for risk monitor more frequently than organizations with a higher tolerance for risk.

  • Threat information: Organizations consider current, credible threat information, including known exploits and attack patterns, when establishing monitoring frequencies.

  • Vulnerability information: Organizations consider current vulnerability information concerning information technology products when establishing monitoring frequencies.

  • Risk assessment results: Results from organizational and/or (Read as and-or) system-specific assessments of risk (either formal or informal) are examined and taken into consideration when establishing monitoring frequencies.

  • The output of monitoring strategy reviews: Security control assessments, security status metrics, and monitoring and assessment frequencies change according to the needs of the organization.

  • Reporting requirements: Reporting requirements do not drive the ISCM strategy, but may play a role in the frequency of monitoring depending on the organization’s policies.

Let us discuss the benefits of ISCM in the next section.


Benefits of ISCM

ISCM supports organizational risk management decisions by maintaining ongoing awareness of information security, vulnerabilities, and threats. ISCM is an important step in an organization’s Risk Management Framework or RMF. It provides security-related information on demand, enabling timely risk management and authorization decisions.

ISCM helps organizations to:

  • Move from compliance-driven risk management to data-driven risk management,

  • Take risk response decisions,

  • Obtain security status information,

  • Obtain insight into security control effectiveness,

  • Prioritize security response actions based on risk, and

  • Achieve timely reporting.

Let us discuss Key Performance and Risk Indicators in the next section.

Key Performance and Risk Indicators

Key Performance Indicator or KPI is a type of performance measurement. KPIs evaluate the success of an organization or of a particular activity in which it engages. Often success is simply the repeated, periodic achievement of some level of an operational goal such as zero defects or 10/10 customer satisfaction. Sometimes success is defined in terms of making progress toward strategic goals. Some parameters used in KPIs are cost adherence, schedule adherence, and project effort adherence.

A Key Risk Indicator (KRI) is a measure used in management to indicate how risky an activity is or the possibility of a future adverse impact. KRIs use mathematical formulas or models to give early warning of an event that can potentially harm the continuity of an activity or project. Identifying KRIs requires an understanding of the organization's goals. Each KRI must be measurable and must accurately reflect the potential negative impact on the organization's KPIs.
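A KRI check can be sketched as a measurable indicator compared against a warning threshold. The moving-average model and the threshold value below are illustrative assumptions:

```python
# KRI sketch: a measurable indicator checked against a warning threshold
# to give early warning. The indicator, model, and values are illustrative.
def kri_breach(values, threshold):
    """Return True when the 3-period moving average crosses the threshold."""
    window = values[-3:]
    return sum(window) / len(window) > threshold

# Example indicator: weekly count of overdue critical patches.
history = [2, 3, 4, 8, 12, 15]
print(kri_breach(history, threshold=6))  # → True: escalate before impact occurs
```

The threshold would be derived from the organization's risk tolerance, tying the KRI back to the KPIs it protects.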

Let us discuss internal and third-party audits in the next section.

Internal and Third Party Audits

Auditing is the on-site verification activity, such as inspection or examination, of a process or quality system, to ensure compliance with requirements. An audit can apply to an entire organization or might be specific to a function, process, or production step.

An audit is an evidence gathering process mandated by most regulations.

An audit may also be classified as internal or external, depending on the interrelationships among the participants. Internal audits are performed by employees of the organization.

External audits are performed by an outside agent. Internal audits are often referred to as first-party audits, while external audits can be either second-party or third-party.

  • First-party audits: Organizations use first-party audits to audit themselves. First party audits are used to confirm or improve the effectiveness of management systems. They are also used to declare that an organization complies with an ISO standard, in a process called self-declaration.

  • Second-party audits are external audits usually done by customers or by others on the organization’s behalf. However, they can also be done by regulators or any external party with a formal interest in an organization.

  • Third-party audits are external audits as well. However, they are performed by independent organizations such as registrars or certification bodies, or regulators.

Let us discuss Audit Frequency and Scope in the next section.

Audit Frequency and Scope

Most standards and regulations require an audit. The Federal Information Security Management Act or FISMA requires agencies to self-audit and have an independent auditor review their information security implementation at least annually.

ISO 27001 requires an organization to conduct internal audits at planned intervals; in practice, this is commonly done at least every 12 months.

The information security professional must understand that while the requirements outlined in laws and standards provide protection, they are rarely sufficient to ensure full protection or risk management of an information system. He or she must ensure proper scoping and tailoring to arrive at the appropriate number of controls at the correct level for the target system.

Let us discuss Statement on Auditing Standards No. 70 in the next section.

Statement on Auditing Standards No. 70

Organizations are increasingly outsourcing systems, business processes, and data processing to service providers to focus on core competencies, reduce costs, and deploy new application functionality more quickly. In today's global economy, service organizations or service providers must demonstrate that they have adequate controls and safeguards when they host or process data belonging to their customers.

Statement on Auditing Standards or SAS (Read as Sass) No. 70 for service organizations was a widely recognized auditing standard developed by the American Institute of Certified Public Accountants or AICPA.

A service auditor's examination performed according to SAS No. 70, commonly referred to as a SAS 70 Audit, demonstrates that a service organization has been through an in-depth examination of its control objectives and control activities, which often include controls over information technology and related processes.

SAS 70 guided external auditors in applying Generally Accepted Auditing Standards or GAAS to audit a non-public service organization and issue a report.

Service organizations, like hosted data centers, credit processing organizations, and insurance claims processors, provide outsourcing services that affect the operation of the contracting enterprise and hence require an extensive audit.

The SAS 70 report was retired in 2011, and Service Organization Control reports were defined to replace it. The assurance needs of the users of outsourced services are more clearly addressed in SOC (Read as S-O-C) reports.

Let us discuss Service Organization Controls in the next section.

Service Organization Controls

Service Organization Control Reports® are reports on the internal controls over the services provided by an organization. They provide valuable information that users need to assess and address the risks associated with an outsourced service.

SOC Reports are designed to help service organizations, that is, organizations that operate information systems and provide information system services to other entities, build customer trust and confidence in their service delivery processes and controls through a report by an independent Certified Public Accountant or CPA. SOC is a series of accounting standards that measure the control of financial information for a service organization. Each type of SOC report is designed to help service organizations meet specific user needs.

Types of SOC reports are SOC 1 Report, SOC 2 Report, and SOC 3 Report.

Let us discuss SOC 1 Reports in the next section.

SOC 1 Report

This is also called the Report on Controls at a Service Organization Relevant to User Entities’ Internal Control over Financial Reporting. This category of reports, prepared under the Statement on Standards for Attestation Engagements or SSAE No. 16, is an enhancement to the previous standard for Reporting on Controls at a Service Organization, SAS 70.

These reports are specifically intended to meet the needs of the entities that use service organizations (user entities) and the CPAs that audit the user entities’ financial statements (user auditors).

They help in evaluating the effect of the controls at the service organization on the user entities’ financial statements. User auditors use these reports to plan and perform audits of the user entities’ financial statements.

There are two types of reports for these engagements:

  • Type 1 report evaluates and reports on the design of controls put into operation as of a certain date.

  • Type 2 report includes the design and testing of controls to report on their operational effectiveness over a period of time, typically six months.

Use of these reports is restricted to the management of the service organization, user entities, and user auditors.

Let us discuss SOC 2 reports in the next section.

SOC 2 Report

This is a Report on Controls at a Service Organization Relevant to Security, Availability, Processing Integrity, Confidentiality, or Privacy.

A SOC 2 report has the same options as the SSAE 16 report, in that a service organization can choose to undergo a Type 1 or Type 2 audit.

However, unlike the SSAE 16 audit, which is based on internal controls over financial reporting, the purpose of a SOC 2 report is to evaluate an organization’s information systems relevant to security, availability, processing integrity, confidentiality or privacy.

The criteria for these engagements are contained in the Trust Services Principles, Criteria, and Illustrations. Organizations that are asked to provide an SSAE 16 report, but do not have an impact on their clients’ financial reporting, should select this option.

These reports can play an important role in:

  • Oversight of the organization

  • Vendor management programs

  • Internal corporate governance and risk management processes

  • Regulatory oversight

Similar to the SOC 1 report, there are two types of reports:

  • Type 1 is a report on management’s description of a service organization’s system and the suitability of the design of controls.

  • Type 2 is a report on management’s description of a service organization’s system and the suitability of the design and operating effectiveness of controls.

Use of these reports is generally restricted and is at the discretion of the auditor using the guidance outlined in the standard.

Let us further discuss SOC 2 reports in the next section.

SOC 2 Reports (contd.)

SOC 2 is based on Trust Principles, which are modeled around four broad areas: Policies, Communications, Procedures, and Monitoring. The Principles and Criteria are jointly set by the AICPA and the Canadian Institute of Chartered Accountants (CICA).

The Trust Services Principles are:

  • Security: The system is protected against unauthorized access, use or modification, both physical and logical.

  • Availability: The system is available for operation and use as committed or agreed.

  • Processing Integrity: System processing is complete, valid, accurate, timely, and authorized.

  • Confidentiality: Information designated as confidential is protected as committed or agreed. Particularly applies to sensitive business information.

  • Privacy: The system’s collection, use, retention, disclosure, and disposal of personal information meet commitments in any privacy notice, and the Generally Accepted Privacy Principles or GAPP (Read as Gap).
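The five principles above can be represented as a simple scoping checklist for a SOC 2 engagement. The structure below is an illustrative sketch for discussion, not an AICPA-defined tool:

```python
# Illustrative SOC 2 scoping checklist: which Trust Services Principles
# are in scope for an engagement, and which remain uncovered.
# This is a hypothetical sketch, not an AICPA-defined structure.

TRUST_PRINCIPLES = (
    "Security", "Availability", "Processing Integrity",
    "Confidentiality", "Privacy",
)

def scope_gaps(in_scope: set[str]) -> list[str]:
    """Return the principles not covered by the engagement scope."""
    unknown = in_scope - set(TRUST_PRINCIPLES)
    if unknown:
        raise ValueError(f"unknown principles: {sorted(unknown)}")
    return [p for p in TRUST_PRINCIPLES if p not in in_scope]

# A SOC 2 engagement scoped only to Security and Availability:
print(scope_gaps({"Security", "Availability"}))
# ['Processing Integrity', 'Confidentiality', 'Privacy']
```

Security is the one principle commonly treated as the baseline of a SOC 2 engagement; the remaining principles are included as the service and its user commitments require.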

Let us discuss SOC 3 report in the next section

SOC 3 Report

The third type of report is SOC 3 Report that is, the Trust Services Report for Service Organizations.

These reports are designed to meet the needs of users who need assurance about the controls at a service organization that affect the security, availability, and processing integrity of the systems used by a service organization to process users’ information, and the confidentiality, or privacy of that information, but do not have the need for or the knowledge necessary to make effective use of an SOC 2 Report.

Unlike SOC 1 and SOC 2 reports, SOC 3 reports can be freely distributed.

Let us compare the SOC reports in the next section.

SOC 1, SOC 2, and SOC 3 Comparison

The table below shows a comparison of the three SOC reports.

SOC 1 Reports are based on the audit of financial statements and are used by the management of service organizations, user entities, and user auditors. They cover controls relevant to user entity financial reporting.

SOC 2 Reports cover governance, risk, and compliance programs, oversight, and due diligence. These reports are used by the management of service organizations and user entities, regulators, and others. They address concerns about system security, availability, processing integrity, and confidentiality or privacy.

SOC 3 Reports are created for marketing purposes, where detail is not required. They are used by users who need confidence in the security, availability, processing integrity, confidentiality, or privacy of a service organization’s system.

Let us discuss the audit process in the next section.

Audit Process—Audit Preparation Phase

For SOC 2 and SOC 3 examinations, the audit process has two phases: the Audit Preparation Phase and the Audit Phase. In the Audit Preparation Phase, security practitioners collaborate with the service providers to define the audit scope and overall project timeline.

Other activities in this phase include:

  • Identifying existing or required controls through discussions with management, and review of available documentation

  • Performing readiness review to identify gaps requiring management attention

  • Communicating prioritized recommendations to address any identified gaps

  • Holding working sessions to discuss alternatives and remediation plans

  • Verifying that gaps have been closed before beginning the formal audit phase

  • Determining the most effective audit and reporting approach to address the service provider’s external requirements

The Audit Preparation Phase sets the stage for the next phase, the Audit Phase, which we will discuss in the next section.

Audit Process—Audit Phase

The Audit Phase is where the auditor builds on the understanding of the service provider’s architecture and controls established in the Audit Preparation Phase.

In the Audit Phase, the auditor provides an overall project plan, completes advance data collection before on-site work to accelerate the audit process, and conducts on-site meetings and testing.

Other activities include:

  • Completing off-site analysis of collected information

  • Conducting weekly reporting of project status and identified issues

  • Preparing a draft report for management review, and electronic and hard copies of the final report

  • Presenting an internal report for management containing overall observations and recommendations for consideration

Business Scenario

Nutri World Inc. is outsourcing systems, business processes, and data processing to service providers to focus on core competencies and reduce costs.

Hilda Jacobs, the IT Head, was updating the process to manage the risks associated with outsourcing and to monitor the company's outsourced vendor relationships.

Question: Due to the confusion and misuse of SAS 70, the AICPA replaced it with which framework?

Answer: AICPA replaced SAS 70 with Service Organization Controls (SOC) Reports.



Here is a quick recap of what we have learned in this domain:

  • Security assessment and testing maintains an information system’s ability to deliver its intended functionality securely by evaluating the information assets and associated infrastructure.

  • Various tools and techniques are used to identify and mitigate risk due to design flaws, architectural issues, hardware and software vulnerabilities, configuration errors, coding errors, and any other weaknesses.

  • Security policies and procedures are uniformly and continuously applied.

  • The security professional should be capable of validating assessment and test strategies and carry out those strategies using various techniques.

  • In the absence of careful analysis and reporting of assessment results, security assessment and testing have little value.


This concludes the lesson Security Assessment and Testing. The next lesson is ‘Security Operations.’
