Cloud Network Security Tutorial

5.1 Security

Hello and welcome to the Security module of the CompTIA Cloud Plus course offered by Simplilearn. This module introduces the concept of network security. In this module, we will cover the best practices for implementing security in a cloud production environment. Let us look into the objectives of this module in the next slide.

5.2 Objectives

By the end of this module, you will be able to: discuss network security concepts; list different encryption technologies and methods; discuss various access control methods; and discuss guest and host hardening techniques. Let us start our discussion with the terminologies commonly used while dealing with network security.

5.3 Network Security Terminologies

Network security deals with protecting the computer network from any type of unauthorized access. The important terminologies in network security are: Access Control List, De-Militarized Zone, Virtual Private Network, Intruders, Intrusion Detection System, Distributed Denial-of-Service, Ping of Death, and Ping Flood Attack. In the next slide, let us understand the first terminology, which is Access Control List.

5.4 Access Control List

Network security is one of the most significant steps to be initiated in a production environment. It takes care of authorization, authentication, and accounting. Access Control Lists (ACLs) are the network filters used by switches and routers to allow and limit the flow of data in and out of network interfaces. They protect high-speed interfaces where the line-rate speed is significant and a firewall would be a bottleneck. ACLs limit the routing updates received from network peers, and they define flow control for network traffic. To filter traffic against known vulnerable protocols and less desirable networks, ACLs should be placed on external routers. Router ACLs offer a noteworthy amount of firewall capability. When an ACL is configured on an interface, the network device examines the data passing through the interface, compares it with the criteria described in the ACL, and permits or prohibits the data flow. Setting up a DMZ, or de-militarized zone, is one of the most common methods in network security. To implement a DMZ architecture, two separate network devices are used. We will discuss this in the next slide.
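
The examine-compare-decide cycle described above can be sketched in a few lines of Python. This is a minimal, vendor-neutral illustration of first-match ACL evaluation; the `Rule` structure, field names, and the example rules are invented for this sketch, not any router's actual syntax.

```python
# First-match ACL evaluation, as a router interface performs it:
# walk the list in order, act on the first matching rule.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Rule:
    action: str      # "permit" or "deny"
    network: str     # source network the rule matches, e.g. "10.0.0.0/8"
    protocol: str    # "tcp", "udp", "icmp", or "any"

def acl_permits(rules, src_ip, protocol):
    """Compare a packet against the rules in order; first match wins.
    Like most router ACLs, an implicit 'deny all' ends the list."""
    for rule in rules:
        proto_match = rule.protocol in ("any", protocol)
        net_match = ip_address(src_ip) in ip_network(rule.network)
        if proto_match and net_match:
            return rule.action == "permit"
    return False  # implicit deny

acl = [
    Rule("deny", "192.168.13.0/24", "any"),   # block a known-bad subnet
    Rule("permit", "0.0.0.0/0", "tcp"),       # allow TCP from anywhere else
]
print(acl_permits(acl, "192.168.13.7", "tcp"))  # False: denied subnet
print(acl_permits(acl, "203.0.113.9", "tcp"))   # True
print(acl_permits(acl, "203.0.113.9", "icmp"))  # False: implicit deny
```

Note how rule order matters: placing the permit rule first would let the bad subnet through, which is why restrictive entries are listed before broad ones.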

5.5 De Militarized Zone

In networking, the De-Militarized Zone (DMZ) is named after the de-militarized buffer zone between two countries, an area prohibited to the military of either side. A DMZ, a separate screened subnetwork, provides an additional layer of security to the network: it works as a buffer between the local area network (LAN) and a less secure network, such as the Internet. It provides secure services to LAN users who require the Internet for web applications, FTP, email, and other applications. We will continue our discussion on DMZ in the next slide.

5.6 De Militarized Zone (contd.)

As it provides a buffer between the LAN and the Internet, the DMZ network is considered less secure than the internal network. Therefore, a few additional security measures are taken for a DMZ host, which include disabling unnecessary services, running the necessary services with reduced privileges, eliminating unnecessary user accounts, and ensuring the DMZ has the latest security patches and updates. Since only a few computers reside in the DMZ, a security breach there does not compromise the entire network; it would be otherwise if the entire network could be accessed directly through the Internet. Most IT professionals place the systems that must be accessible from outside in the DMZ, for example, web servers, DNS servers, and VPN or remote access systems. Next, we will discuss DMZ configuration.

5.7 De Militarized Zone Configuration

The steps to configure a DMZ are given on the slide. The external router provides access to connections outside the network. The internal DMZ router contains restrictive ACLs that protect the internal network from well-defined threats. ACLs are often configured with explicit permit and deny statements for specific addresses and protocol services. The external ACLs are less restrictive; however, they provide large protection blocks for areas of the global routing table that are to be restricted, and they protect against well-known protocols that provide access into or out of the network. In addition, ACLs should be configured to limit network peer access and can be used in combination with the routing protocol. This should be done to limit the routes that the network peers receive or send and to confine the number of updates. In the following slide, we will learn about another network security concept, VPN.

5.8 Virtual Private Network

A Virtual Private Network (VPN) uses a public connection, the Internet, to provide remote offices and users with access to their organization's network. To use the public network as a private network, it employs the mechanisms of authentication, encryption, and integrity protection. A VPN can connect distant networks of an organization; for instance, it can allow an employee travelling abroad to access the organization's intranet, i.e., its private network, remotely. It can also create a private network over a public network such as the Internet. This depends on the use of virtual connections: temporary connections that have no physical presence and are made up of packets. Let us understand the mechanism of VPN through an example illustrated in the next slide.

5.9 Virtual Private Network (contd.)

The image illustrates an organization with two networks: Network 1 and Network 2. These networks are physically separate from each other, and the user needs to connect them using a VPN. In such a case, the user sets up two firewalls, Firewall 1 and Firewall 2, for encryption and decryption purposes. Network 1 connects to the Internet via Firewall 1, and Network 2 connects to the Internet via Firewall 2. Note that the two firewalls are virtually connected to each other through the Internet with the help of a VPN tunnel. Let us understand how the VPN protects the traffic that passes between two hosts on different networks. Assume that host X of Network 1 sends data to host Y of Network 2. Host X creates a packet, inserting its own IP address as the source address and the IP address of host Y as the destination address. The packet reaches Firewall 1, which adds a new header to it. In the new header, Firewall 1 changes the source address to its own address, say, FA1, and the destination address to that of Firewall 2, say, FA2. It encrypts and authenticates the packet as per the settings and forwards the modified packet over the Internet. The packet reaches Firewall 2 via one or more routers. Firewall 2 then discards the outer header and performs decryption and other cryptographic functions as required. This yields the original packet created by host X. On examining the packet contents, Firewall 2 sees that the packet is addressed to host Y and therefore delivers it to host Y. Next, we will discuss the objectives and classes of intruders.
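
The encapsulation walk-through above can be made concrete with a toy sketch: Firewall 1 wraps host X's packet in a new outer header addressed firewall-to-firewall and scrambles the inner packet, and Firewall 2 reverses this. The XOR "encryption", the dictionary packet format, and all addresses are illustrative stand-ins, not a real VPN implementation.

```python
# Toy model of the tunnel encapsulation performed by the two firewalls.
import json

KEY = 0x5A  # shared toy key; real VPNs negotiate keys via a protocol such as IKE

def xor_bytes(data: bytes) -> bytes:
    """Stand-in 'cipher': XOR each byte with the shared key."""
    return bytes(b ^ KEY for b in data)

def encapsulate(inner_packet: dict, fw_src: str, fw_dst: str) -> dict:
    """Firewall 1: encrypt the original packet, add an outer header."""
    payload = xor_bytes(json.dumps(inner_packet).encode())
    return {"src": fw_src, "dst": fw_dst, "payload": payload.hex()}

def decapsulate(outer_packet: dict) -> dict:
    """Firewall 2: strip the outer header, decrypt the original packet."""
    return json.loads(xor_bytes(bytes.fromhex(outer_packet["payload"])))

original = {"src": "10.1.0.5", "dst": "10.2.0.9", "data": "hello Y"}
tunneled = encapsulate(original, "FA1", "FA2")   # what the Internet sees
print(tunneled["src"], "->", tunneled["dst"])    # FA1 -> FA2
print(decapsulate(tunneled) == original)         # True
```

Routers on the path see only FA1 and FA2 in the outer header; the real host addresses and data travel hidden inside the payload, which is the point of tunnel-mode protection.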

5.10 Intruders

The general meaning of an intruder is a person who enters territory that does not belong to him or her. An intruder's aim is to access a system or to increase the range of privileges accessible on a system. Intrusion is in fact the most publicized of all threats to security. An intruder attack may or may not harm the infrastructure. However, it is the responsibility of a network administrator to configure and operate a monitoring system that can detect any type of intrusion. This is where an Intrusion Detection System (IDS), covered later in this module, comes into the picture. In the next slide, we will discuss the types of intruders.

5.11 Types of Intruders

There are three classes of intruders, which are explained as follows. The first is the masquerader: a masquerader is one who seeks unauthorized access to the system in order to use a legitimate user's account. The second is the misfeasor: a misfeasor is a legitimate user who accesses areas for which he or she is not authorized, misusing his or her privileges. The third type of intruder is the clandestine, or secret, user: a clandestine user is one who seizes supervisory control of the system and uses it to evade access and auditing controls or to suppress audit collection. Usually, the masquerader is an outsider and the misfeasor is an insider; the clandestine user can be either an insider or an outsider. We will learn about the Intrusion Detection System in the next slide.

5.12 Intrusion Detection System

An Intrusion Detection System (IDS) is responsible for inspecting all outbound and inbound activities. It identifies any doubtful pattern when a system or a network is attacked by someone trying to break in. It performs a variety of functions, which include monitoring user and system activities, auditing system configurations for vulnerabilities and misconfigurations, assessing the integrity of critical system and data files, recognizing known attack patterns in system activity, and identifying abnormal activities through statistical analysis. In the following slide, we will discuss the types of IDS.

5.13 Types of Intrusion Detection System

There are two types of IDS: Host-Based IDS (HIDS) and Network-Based IDS (NIDS), which will be covered in the subsequent slides. Let us begin with the first type, HIDS, in the next slide.

5.14 Host Based Intrusion Detection System

A Host-Based Intrusion Detection System (HIDS) checks the log files, audit trails, and network traffic that enter or exit the host. HIDS can operate both in real time, as the activity arises, and in batch mode, by checking on a periodic basis. Generally, host-based systems are self-contained; that is, every essential element that needs to be secured is available locally. Many new commercial products, however, are designed both to report to and be managed by a central system. These systems use local system resources to operate. In the following slide, we will compare old and new HIDS.

5.15 Old versus New Host Based Intrusion Detection System

The table on the slide distinguishes between old and new host-based intrusion detection systems. The older versions of HIDS operated in batch mode, looking for suspicious activity in particular events in the system's log files on an hourly or daily basis. In the newer versions of HIDS, processor speeds have increased and the IDS performs a real-time check on the log files. They also have the ability to examine the data traffic generated and received by the host. Many HIDSs focus on the log files or audit trails produced by the local operating system. On Windows systems, the examined logs are the application, system, and security event logs; on UNIX systems, they are usually the message, kernel, and error logs. Some HIDSs can consistently monitor either a single application (such as a payroll application) or a dedicated protocol (FTP, HTTP, etc.). HIDSs search the log files for activities such as use of certain programs; login authentication failures; addition of new user accounts; modification or access of critical system files; logins at odd hours; starting or stopping of processes; privilege escalation; and modification or removal of binary files. Next, we will look into the components of HIDS.

5.16 Components of Host Based Intrusion Detection System

As shown in the image, HIDS contains the following components. The critical files component contains reference information about critical files, for example, OS files. Log files are generated as reports by the host operating system. The network traffic component accepts the network traffic. The traffic collector collects the traffic generated from any of the three components, that is, log files, critical files, and network traffic. The traffic analyzer analyzes the traffic and extracts a signature, which is then matched against the signature database. The signature database contains all known malicious signatures; it is the core component of the IDS. If any signature matches the signature database, the traffic analyzer triggers the alarm, flashes a warning box in the user interface, and maintains a log of the threat. The threat is usually stored in a report. Next, we will look into NIDS.
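
The collector-analyzer-signature-database flow can be sketched as a short Python loop. The signature strings and log lines below are invented examples of the activities listed on the previous slide (failed logins, new accounts), not real HIDS signatures.

```python
# Minimal HIDS pipeline: collected log lines are matched against a
# signature database, and matches become alerts (the "alarm" log).
SIGNATURE_DB = {
    "Failed password",   # login authentication failure
    "useradd",           # new user account added
}

def analyze(log_lines):
    """Traffic analyzer: match each collected line against the
    signature database and return the resulting alerts."""
    alerts = []
    for line in log_lines:
        for sig in SIGNATURE_DB:
            if sig in line:
                alerts.append((sig, line))
    return alerts

collected = [  # what the traffic collector gathered from the log files
    "Jan 10 02:13:44 host sshd[411]: Failed password for root",
    "Jan 10 02:14:02 host CRON[502]: session opened for user backup",
    "Jan 10 02:15:19 host useradd[633]: new user: name=eve",
]
for sig, line in analyze(collected):
    print(f"ALERT [{sig}]: {line}")
```

Only the first and third lines trigger alerts; the routine cron entry passes through silently, which mirrors how signature matching separates suspicious events from normal activity.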

5.17 Network Based Intrusion Detection System

A Network-Based Intrusion Detection System (NIDS) monitors the network traffic that interconnects the systems, such as the bits and bytes travelling along the cables and wires. It analyzes the traffic by protocol, destination, type, source, content, amount, traffic already seen, and so on, and this analysis occurs quickly. To be effective, the IDS must handle traffic at the speed at which the network operates. Network-based IDSs are generally deployed so that they can monitor traffic in and out of an organization's major links, such as connections to the Internet, remote offices, and partners. NIDS searches for activities such as malicious content in the data payload of packets; vulnerability scanning; Trojans, viruses, or worms; denial-of-service attacks; tunneling; port scans; and brute-force attacks. We will discuss the NIDS layout in the next slide.

5.18 Network Based Intrusion Detection System Layout

The image on the slide illustrates the logical layout of NIDS. The working of NIDS is similar to that of HIDS; however, it does not scan critical files or log files. Let us discuss Distributed Denial-of-Service (DDoS) in the next slide.

5.19 Distributed Denial Of Service

A Distributed Denial-of-Service (DDoS) attack affects the availability of services to legitimate users. As shown in the image, the attacker prevents the efficient functioning of an Internet site or service, either temporarily or for an indefinite period. The motive is to damage the reputation of the service provider. The attack is mounted by gathering the zombies present in the public network. Zombies are vulnerable computers in the public network that have been breached by the attacker and are under his or her complete control. The attacker then commands the zombies to bombard the victim's computer with an unlimited stream of packets. Since the attack is performed simultaneously by multiple zombies, it is called a distributed denial-of-service attack. Next, we will discuss the ping of death attack.

5.20 Ping of Death

Ping of death is a type of denial-of-service attack in which the attacker sends a packet larger than 65,535 bytes, the maximum size allowed for an IP packet. Earlier operating systems could not handle such oversized packets, so the attacker would deliberately create one and send it to the target system. Consequently, the operating system would either hang or restart, resulting in system downtime. Nowadays, however, all the latest operating systems and NIC drivers are patched against this type of attack. In the following slide, we will discuss the ping flood attack.

5.21 Ping Flood Attack

In a ping flood attack, the attacker sends a huge number of ICMP Echo Requests to the victim. Here, ICMP stands for Internet Control Message Protocol. Ping flooding is supported by many ping utilities, so the attacker does not have to be highly knowledgeable. Since it overloads the network links, it is damaging for both the attacker and the victim, unless the attacker has a faster link than the victim. Filtering the incoming packets can help if an Ethernet-speed link is being flooded. If the link is slow, there are fewer options; you can either hang up or reconnect with a different IP address. To filter the incoming ICMP Echo Request packets, the victim can use a firewall, which allows the computer to refuse to send ICMP Echo Reply packets. We will explore some of the best security practices in the next slide.
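
One simple way a firewall can blunt a ping flood is to rate-limit Echo Requests rather than drop them all, preserving legitimate pings. The sketch below shows a sliding-window limiter; the class name, threshold, and window are arbitrary illustration values, not recommended firewall settings.

```python
# Sliding-window rate limiter for inbound ICMP Echo Requests.
from collections import deque

class IcmpRateLimiter:
    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.arrivals = deque()   # timestamps of recent Echo Requests

    def allow(self, now: float) -> bool:
        """Return True if an Echo Request arriving at `now` should be
        answered with an Echo Reply, False if it should be dropped."""
        while self.arrivals and now - self.arrivals[0] > self.window:
            self.arrivals.popleft()           # forget arrivals outside the window
        if len(self.arrivals) < self.max_requests:
            self.arrivals.append(now)
            return True
        return False                          # over the limit: drop silently

limiter = IcmpRateLimiter(max_requests=3, window_seconds=1.0)
print([limiter.allow(t) for t in (0.0, 0.1, 0.2, 0.3, 1.5)])
# [True, True, True, False, True]: the fourth ping inside one second is
# dropped, while the one arriving later at 1.5 s passes again.
```

A real firewall applies the same idea per source address so that one flooding host cannot exhaust the allowance for everyone else.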

5.22 Best Security Practices

The best security practices are as follows. Review and audit the logs generated on the server to hunt for malicious or suspicious activity. As an admin, you need to ensure the encryption of data both in storage and in transit; this can be implemented using methods such as VPNs. Obfuscation is the practice of using a defined pattern to mask sensitive data; it protects the data stored on the physical storage device. Zoning controls access by isolating a single server to a group of storage devices or a single storage device, and one node from another; it can also associate one or more storage devices with a set of multiple servers. LUN masking provides more granular security than zoning, because LUNs allow sharing of storage at the port level. We have learned about the various network security concepts and best practices so far. In the subsequent slides, we will move on to understand the encryption technologies and methods.

5.23 Encryption Technologies and Methods

Encryption is a process wherein information is encoded to ensure that only authorized individuals are able to read the data. The various technologies and methods used for encryption are: cryptography, ciphers, Public Key Infrastructure, IPSec, the SSL protocol, and the TLS protocol. We will begin with cryptography in the next slide.

5.24 Cryptography

Cryptography is the science of converting human-readable data into scrambled data using a pre-defined algorithm, and vice versa. Cryptography provides confidentiality, integrity, and non-repudiation of data. Confidentiality means the data remains confidential and is read only by the intended recipient. Integrity means the data is not modified or fabricated during the exchange between the sender and the intended receiver. Non-repudiation provides proof that the message was genuinely sent by the claimed sender. Next, we will discuss the types of cryptography.

5.25 Types of Cryptography

Cryptography can be classified into three types: secret-key cryptography, public-key cryptography, and hashing. Secret-key cryptography is also referred to as symmetric-key cryptography; in this, only one key is used for both the encryption and decryption processes. Public-key cryptography is also called asymmetric-key cryptography; in this, two keys are used, one for encryption and the other for decryption. The two keys are the recipient's public key, which the sender uses for encryption, and the recipient's private key, which the recipient uses for decryption. Hashing is also called one-way encryption; it is used to check the integrity of data by comparing hash values. In the next slide, we will learn about the differences between plain text and cipher text.
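
The hashing case is easy to demonstrate with the standard-library hashlib: the sender publishes a digest alongside the data, and the receiver recomputes the digest to verify integrity. The messages below are invented for illustration.

```python
# One-way hashing for integrity checking.
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 digest of the data as a hex string."""
    return hashlib.sha256(data).hexdigest()

message = b"transfer 100 to account 42"
sent_digest = digest(message)        # travels alongside the message

# Receiver side: recompute and compare.
print(digest(message) == sent_digest)                        # True: intact
print(digest(b"transfer 900 to account 42") == sent_digest)  # False: tampered
```

Because the function is one-way, the digest reveals nothing useful about the message, yet even a single changed byte produces a completely different digest.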

5.26 Cipher Text versus Plain Text

A cipher is an algorithm in cryptography that performs encryption and decryption. In non-technical usage, a 'cipher' is similar to a 'code'; however, the concepts differ in cryptography. Before we discuss encryption and decryption, let us understand the meaning of plain text and cipher text. Plain text is a message or file that is human-readable, whereas cipher text is a message or file that is encrypted. Technically, the process of encoding a plain text message into cipher text is known as encryption, and the reverse is known as decryption. In communication, the sender's computer forwards the encrypted message, which the receiver obtains over a network, for instance, the Internet. The receiver's computer in turn decrypts the message to recover the original plain text message. To encrypt the message, the sender applies an encryption algorithm, and to decrypt it, the recipient applies a decryption algorithm. We will look into some examples of ciphers in the following slide.
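
A classic toy cipher makes the plain-text/cipher-text vocabulary concrete: the Caesar cipher shifts each letter by a fixed key. It is trivially breakable and shown only as an illustration of the encrypt/decrypt round trip, not as a usable cipher.

```python
# Caesar cipher: shift each letter by `shift` positions in the alphabet.
def caesar(text: str, shift: int) -> str:
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)   # leave spaces and punctuation alone
    return "".join(out)

plain = "Attack at dawn"
cipher = caesar(plain, 3)            # encryption with key = 3
print(cipher)                        # Dwwdfn dw gdzq
print(caesar(cipher, -3) == plain)   # decryption recovers the plain text
```

Decryption is simply encryption with the negated key, which shows why keeping the key secret is everything: the algorithm itself is public.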

5.27 Examples of Cipher

Data Encryption Standard (DES) is also called the Data Encryption Algorithm (DEA). It was adopted in 1977 by the National Bureau of Standards (NBS), a US government agency, for the secure transmission of data within systems. DES is a block cipher that uses a private, or secret, key for encryption; there are 72 quadrillion or more possible keys. As with other private-key encryption schemes, both the sender and the receiver must know and use the same private key. Triple DES is an extended version of DES in which the cipher text generated by the first DES pass is fed back as input for two more passes, making the process far more secure. The Digital Signature Algorithm (DSA) is a standard for digital signatures proposed by the National Institute of Standards and Technology and adopted as a Federal Information Processing Standard (FIPS). Advanced Encryption Standard (AES) is another type of secret-key cryptography; it has three variants, AES-128, AES-192, and AES-256. AES was created to remove flaws found in the DES algorithm, and it was later accepted as a standard for defense communications. The Rivest-Shamir-Adleman (RSA) algorithm is public-key cryptography, where the keys for encryption and decryption are different: the sender uses the receiver's public key to encrypt the message, and the receiver uses his or her private key to decrypt it. RC4 is a cipher that Rivest created for securing data; it is based on a pseudo-random permutation of the data to be protected and is used in file-encryption products such as password managers. RC5 is a cipher with a block size of 32, 64, or 128 bits, unlike RC4; it uses data-dependent rotations to perform its cryptographic operations. We will discuss Public Key Infrastructure in the next slide.
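
The public/private key asymmetry of RSA can be walked through with tiny numbers. The sketch below uses the well-known textbook example with primes 61 and 53; real RSA uses primes hundreds of digits long, and these values are for illustration only.

```python
# Toy RSA key generation, encryption, and decryption with tiny primes.
p, q = 61, 53
n = p * q                   # 3233: the modulus, part of both keys
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, coprime with phi
d = pow(e, -1, phi)         # private exponent: modular inverse of e mod phi

message = 65                          # a message encoded as a number < n
ciphertext = pow(message, e, n)       # sender encrypts with public key (e, n)
recovered = pow(ciphertext, d, n)     # receiver decrypts with private key (d, n)
print(d, ciphertext, recovered)       # 2753 2790 65
```

Anyone may know (e, n) and encrypt, but only the holder of d can decrypt; recovering d from (e, n) requires factoring n, which is infeasible at real key sizes.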

5.28 Public Key Infrastructure

A Public Key Infrastructure (PKI) is made up of different components such as hardware, applications, policies, services, programming interfaces, cryptographic algorithms, protocols, users, and utilities. These components work together to allow communication using public-key cryptography and symmetric keys for digital signatures, data encryption, and integrity. There is no need to construct and implement a new PKI application or protocol for each use, because the same type of functionality is provided to many different applications and protocols. We will continue our discussion on PKI in the next slide.

5.29 Public Key Infrastructure (contd.)

The image illustrates the public key infrastructure. It is similar to the issuing of a Social Security Number, which is issued by a formal authority, the Social Security Administration. When User A applies for a certificate, the registration authority asks for proof of identity and validates it. Once the proof has been validated, it requests the certification authority to issue the certificate. The certificate authority uses its private key to sign the certificate digitally. When User B receives User A's certificate and verifies that it was digitally signed by a certificate authority that he or she trusts, User B believes the certificate to be valid, not because he or she trusts User A, but because of the trusted third party. This is referred to as the third-party trust model. The process allows User A to authenticate himself to User B and communicate with User B through encryption, without prior communication or a pre-existing relationship. Once User B is convinced of the legitimacy of User A's public key, he or she can use it to encrypt messages between himself or herself and User A. The role of the validation authority is to validate the user in the public key infrastructure and provide the result to whoever needs the verification. Let us next discuss IPSec.

5.30 IPSec

IPSec (Internet Protocol Security) is a set of protocols developed by the Internet Engineering Task Force (IETF) for the secure exchange of packets at the network layer of the OSI model. This protocol works only in combination with IP networks. Once an IPSec connection is established, it is possible to tunnel across other networks at lower levels of the OSI model. The set of security services provided by IPSec operates at the network level of the OSI model; because of this, higher-level protocols, such as TCP, UDP, and BGP, are not affected by the implementation of IPSec services. The IPSec protocol is designed to provide a comprehensive array of services, including, but not limited to, connectionless integrity, access control, rejection of replayed packets, traffic-flow confidentiality, data origin authentication, and data security. In the following slide, we will learn about the two modes of IPSec.

5.31 IPSec Modes

There are two modes of IPSec: transport mode and tunnel mode. Transport mode encrypts only the data portion of the packet, which enables an outsider to see the source and destination IP addresses; protecting the data portion of the packet is referred to as content protection. Tunnel mode protects the source and destination IP addresses as well as the data. Though this provides the greatest security, it is possible only between IPSec servers, since the final destination needs to be known for delivery; protecting the header information is known as context protection. These methods provide different security levels, and it is possible to use both at the same time, for example, using transport mode within one's own network to reach an IPSec server, tunnel mode between the two IPSec servers, and transport mode again from the target network's IPSec server to the target host. We will discuss the SSL protocol in the next slide.

5.32 SSL Protocol

SSL was first developed by Netscape. Later, it gained popularity with other companies such as Microsoft and became a de facto standard until TLS (Transport Layer Security) evolved. SSL uses public key infrastructure concepts. It uses a program layer located between the Internet's Transport Control Protocol (TCP) and Hypertext Transfer Protocol (HTTP) layers. Netscape's SSLRef program library can be used to enable SSL on any web server; it is available for download for non-commercial use or can be licensed for commercial use. The SSL protocol is used to provide a secure communication interface between the user's web browser and the server. It provides two levels of security services, namely, authentication and confidentiality: its secure communication channel helps maintain the confidentiality of the data, and it verifies the user by performing the authentication process. Next, we will discuss two important concepts of SSL.

5.33 Concepts of SSL

Following are the two important concepts of SSL. First is the SSL connection: it is a transport that provides a suitable type of service, and connections are peer-to-peer relationships. SSL connections are transitory, and each connection is associated with a single session. The second concept is the SSL session: a session is an association between a client and a server, created by the Handshake Protocol. An SSL session defines a set of cryptographic security parameters, which can be shared among multiple connections. Sessions are used to avoid repeated authentication while maintaining connectivity, since repeated authentication requires high network bandwidth, which may result in slower service. We will learn more about the TLS protocol in the next slide.

5.34 TLS Protocol

The Transport Layer Security (TLS) protocol, the successor to the Secure Sockets Layer (SSL), ensures privacy between collaborating applications and their users on the web. When a client and a server communicate, TLS ensures that no third party can eavesdrop on or tamper with any message. TLS comprises two layers: the TLS Handshake Protocol and the TLS Record Protocol. The TLS Handshake Protocol allows authentication between the client and the server; it also allows them to negotiate cryptographic keys and an encryption algorithm prior to data exchange. The TLS Record Protocol secures the connection with an encryption method, such as the Data Encryption Standard (DES); it can also be used without encryption. The TLS protocol is based on Netscape's SSL 3.0 protocol; however, SSL and TLS are not interoperable, although TLS does contain a mechanism that allows a TLS implementation to back down to SSL 3.0. TLS is supported by recent browser versions. TLS is an Internet Engineering Task Force (IETF) standardization initiative whose goal is to produce an Internet standard version of SSL. In the following slide, we will discuss discretionary access control methods.
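
From an application programmer's point of view, TLS is usually a few lines of configuration. The sketch below uses Python's standard-library ssl module to build a client context that negotiates the protocol version and verifies the server certificate during the handshake; no network I/O is performed here.

```python
# Build a TLS client context with secure defaults.
import ssl

context = ssl.create_default_context()            # sensible secure defaults
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSL 3.0 / old TLS

print(context.verify_mode == ssl.CERT_REQUIRED)   # True: certificates are checked
print(context.check_hostname)                     # True: names are matched
# To use it on a real connection:
#   tls_sock = context.wrap_socket(sock, server_hostname="example.org")
```

Setting a minimum version is the programmatic counterpart of refusing the downgrade to older protocols mentioned above; the default context already rejects SSL 3.0.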

5.35 Discretionary Access Control Method

The Discretionary Access Control (DAC) method was formerly used by the military to describe two approaches to controlling system access. One approach is user-based and the other is machine-based. The user-based approach maps access rights to the individual's assigned username. The machine-based approach maps access rights to either the IP address or the MAC address of the machine; thus, access is granted with respect to the machine identification, not the user identification. The preferred practice in DAC is to follow the user-based approach. DAC is meant to limit access to objects based on the identity of subjects or of the groups to which they belong, and the controls are discretionary: if a system has discretionary access control, the owner of an object decides which subjects will have access, and what specific access they will be given. The permission scheme used in UNIX-based systems is a common way to accomplish this: the owner of a file can specify what permissions (read/write/execute) members of the same group may have and what permissions all others may have. We will learn about the second method of access control, MAC, in the next slide.
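
The UNIX permission scheme just described can be demonstrated with the standard library: the owner sets read/write/execute bits for owner, group, and others at his or her own discretion. The 0o640 value below is an example choice, not a recommendation.

```python
# Owner-controlled UNIX permissions: owner rw, group r, others nothing.
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o640)        # owner: read+write, group: read, others: none

mode = os.stat(path).st_mode
print(stat.filemode(mode))   # -rw-r----- : the familiar ls -l string
print(bool(mode & stat.S_IRGRP), bool(mode & stat.S_IWOTH))  # group can read,
os.remove(path)                                              # others cannot write
```

The key DAC property is visible here: nothing but the owner's choice (and the superuser) constrains these bits, in contrast to the mandatory controls discussed next.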

5.36 Mandatory Access Control

Like DAC, Mandatory Access Control (MAC) was originally a military term describing an approach used to control the access an individual had to a system. A MAC system is used in environments where different levels of security classification exist, and it is more restrictive of the user's freedom to act. MAC restricts access to objects based on the sensitivity of the information contained in the objects and the formal authorization of subjects. In MAC, the operating system, not the owner or subject, decides whether to grant access. In this system, the security mechanism controls how objects are accessed, and this access cannot be changed by any individual subject. The key here is the label attached to each subject and object, which identifies the classification level to which that subject or object is entitled. We will discuss the third method of access control, RBAC, in the next slide.
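
The label comparison at the heart of MAC can be sketched in a few lines: the system compares a subject's clearance label with an object's classification label, and no owner can override the decision. The label names and the simple dominance rule below are illustrative, not a full MAC model.

```python
# Label-based mandatory access check: clearance must dominate classification.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top-secret": 3}

def can_read(subject_label: str, object_label: str) -> bool:
    """The system grants read access only if the subject's clearance is
    at least as high as the object's classification. Neither the subject
    nor the object's owner can change this rule."""
    return LEVELS[subject_label] >= LEVELS[object_label]

print(can_read("secret", "confidential"))   # True: clearance dominates
print(can_read("confidential", "secret"))   # False: the labels forbid it
```

Contrast this with the DAC example: here the decision comes from the system-wide labels, not from any per-file permission bits an owner chose.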

5.37 Role Based Access Control

In Role-Based Access Control (RBAC), instead of being assigned permissions for specific actions on the objects associated with the computer system or network, the user is assigned a set of roles that he or she is expected to perform. Each role is in turn assigned the access authorizations required to perform its tasks; thus, users are granted permissions on objects in terms of their specific duties. The advantage of RBAC is that it encapsulates all the access needed by an entity in one set, called a 'role', which makes it easier to establish or remove access for an entity, as the specific access needs are easily identified. However, RBAC does not provide much flexibility: an entity is bound to the access provided by the role it is in, while in practice the access needs of an entity almost always involve exceptions, and it rarely occurs that large groups of entities need exactly the same access. We will look into other access control methods in the next slide.
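
The role indirection is easy to show in code: permissions attach to roles, and users acquire access only through the roles they hold. The role and permission names below are invented for illustration.

```python
# Minimal RBAC: user -> roles -> permissions.
ROLE_PERMISSIONS = {
    "auditor":  {"read_logs"},
    "operator": {"read_logs", "restart_service"},
    "admin":    {"read_logs", "restart_service", "manage_users"},
}

USER_ROLES = {"alice": {"operator"}, "bob": {"auditor"}}

def is_allowed(user: str, permission: str) -> bool:
    """A user holds a permission only if one of his or her roles grants it."""
    return any(permission in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, set()))

print(is_allowed("alice", "restart_service"))  # True: via the operator role
print(is_allowed("bob", "manage_users"))       # False: auditors cannot
```

Revoking all of alice's access is a single change to USER_ROLES, which is exactly the administrative convenience, and the coarseness, described above.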

5.38 Other Access Control Methods

The other access control methods are multifactor authentication, single sign-on, and federated server, which will be covered in the subsequent slides. Let us begin with multifactor authentication.

5.39 Multifactor Authentication

Multi-factor authentication is a security mechanism that verifies the validity of a transaction by requiring more than one form of authentication. It is an important function within the organization, since it protects users' identities, secures access to the corporate network, and ensures that users are who they claim to be. The aim of multifactor authentication is to increase the level of security, since more than one mechanism would have to be spoofed for an unauthorized individual to gain access to a computer system or network. Let us discuss the single sign-on method in the next slide.
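As one hedged illustration of requiring two factors, the sketch below combines a password check (something you know) with an RFC 4226 one-time code (something you have), using only the Python standard library; the `verify_two_factors` helper and its parameters are hypothetical names for this example:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226 HMAC-based one-time password: truncate an HMAC-SHA1 digest.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % 10 ** digits).zfill(digits)

def verify_two_factors(password_ok: bool, secret: bytes, submitted_otp: str) -> bool:
    # Both factors must pass: the password AND the current time-based code.
    counter = int(time.time()) // 30          # 30-second TOTP time step
    return password_ok and hmac.compare_digest(submitted_otp, hotp(secret, counter))
```

An attacker who steals the password alone still fails the second check, which is exactly the "more than one mechanism would have to be spoofed" property described above.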

5.40 Single Sign On Method and Federated Server

Single sign-on is a feature, common in the cloud, in which a user can access multiple service providers' websites with the credentials provided by a federated server, without formally registering on each website. Nowadays, on almost every modern website, users can authenticate themselves with their Google or Facebook accounts. As an example, consider a website that gives the user the option to log in with the credentials of a Google, Facebook, or Twitter account. The intermediate server, which provides the token that lets the user access the service provider's website directly, is called the federated server. One practical example is OpenID. OpenID is free for web developers to use, and it provides the APIs that perform the single sign-on process for the developer's website. You can find more details on the OpenID website. In the next slide, we will see how single sign-on and the federated server work.

5.41 Working of the Single Sign On and Federated Server

The working of single sign-on and the federated server is shown in the image on the slide. First, the user sends a request to the service provider to authenticate using another service provider's credentials. Next, the service provider redirects the user to the associated federated server to get a token. The federated server presents a login page where the user enters his or her credentials. The federated server then validates those credentials and issues a token for that service provider. The user then tries to access the service provider with the token he or she has received. The service provider, in turn, verifies the token with the federated server. Once the federated server responds positively, the user gets access to the service provider's server. In the next slide, we will move on to the guest and host hardening techniques.
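The token exchange in those steps can be sketched as a small simulation; everything here, including the class names, the stand-in credential check, and the "photo-site" service, is hypothetical and only mirrors the flow on the slide, not any real federation protocol:

```python
import secrets

class FederatedServer:
    def __init__(self):
        self._tokens = {}                       # token -> (user, service)

    def login(self, user: str, password: str, service: str) -> str:
        # Steps 3-4: validate credentials, then issue a token for the service.
        assert password == "correct-password"   # stand-in credential check
        token = secrets.token_hex(16)
        self._tokens[token] = (user, service)
        return token

    def verify(self, token: str, service: str) -> bool:
        # Step 6: the service provider asks the federated server to confirm.
        entry = self._tokens.get(token)
        return entry is not None and entry[1] == service

class ServiceProvider:
    def __init__(self, name: str, fed: FederatedServer):
        self.name, self.fed = name, fed

    def access(self, token: str) -> str:
        # Steps 5-7: grant access only after the token verifies.
        return "access granted" if self.fed.verify(token, self.name) else "denied"

fed = FederatedServer()
sp = ServiceProvider("photo-site", fed)
token = fed.login("alice", "correct-password", "photo-site")
print(sp.access(token))            # access granted
print(sp.access("bogus-token"))    # denied
```

Notice that the service provider never sees the user's password; it only trusts the federated server's answer about the token, which is the core idea of federation.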

5.42 Guest and Host Hardening Techniques

Hardening refers to securing systems, and it is the best way to protect servers against malicious activity from attackers. Though it is impossible to make any system 100% secure, hardening reduces the likelihood of a serious security threat to a system. The first technique is to disable all unwanted ports. Ports are the entry points to the system, so only the ports that are essential should be kept open, while the others should be closed. However, there are situations where a system administrator cannot close an open port; in such situations, use the second technique. In the second technique, firewalls are used to monitor traffic. A firewall can decide what should be denied and what should be accessible, and it can monitor the system based on ports, services, protocols, signature patterns, and so on. The next method is changing the default passwords. When an operating system or application is developed, the developer may create dummy or default accounts for beta testing. A user who installs the software is often unaware of these default accounts, and an attacker will typically try the default passwords first to bypass the credentials. Therefore, it is ideal to change all the default passwords. The next method is using antivirus software. Antivirus software helps the system run smoothly and detects whether any malicious program is running. The next method is the patching process. It is essential that the operating system and software be patched and updated regularly. This keeps a system protected against threats discovered after the software or operating system was released. The next method is disabling default user accounts. Almost every piece of software and every operating system ships with default user accounts for the customer's convenience.
However, these default accounts may be used by an attacker to compromise the system. Thus, it is recommended to deactivate all the default accounts for better security. The last method is to enable user and host authentication for storage and compute. This ensures that data or information is passed to a client only when the client is authenticated with user-specific and device-specific credentials. Let us move on to the quiz questions to check your understanding of the concepts covered in this module.
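To make the first technique, closing unwanted ports, more concrete, here is a small auditing sketch that flags listening ports outside an allow-list. The allow-list, host, and port numbers are examples chosen for illustration, not a recommended policy:

```python
import socket

ALLOWED_PORTS = {22, 443}          # hypothetical policy: SSH and HTTPS only

def is_listening(host: str, port: int, timeout: float = 0.2) -> bool:
    # Try a TCP connect; a return code of 0 means something is listening.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def audit(host: str, ports) -> list:
    # Report listening ports that the policy does not allow.
    return [p for p in ports if is_listening(host, p) and p not in ALLOWED_PORTS]

print(audit("127.0.0.1", [22, 80, 443, 8080]))
```

The output depends on what is actually listening on the machine; in practice an administrator would close or firewall whatever the audit reports.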

5.44 Summary

Here is a quick recap of what was covered in this module: The access control methods are DAC (Discretionary Access Control), MAC (Mandatory Access Control), and RBAC (Role Based Access Control). A VPN is used to create a secure private channel over the public internet. An Intrusion Detection System (IDS) is used to detect any intrusion activity within the perimeter. There are two types of IDS: host-based IDS and network-based IDS. The single sign-on feature enables users to authenticate themselves without registering on the service provider's website.

5.45 Thank You

In the next module, we will discuss client-level virtualization and creating virtual machines in detail.
