Identity and Access Management Tutorial
1 Domain 05—Identity and Access Management
Hello and welcome to Domain 5 of the CISSP certification course offered by Simplilearn. This domain provides an introduction to Identity and Access Management. Let us explore the objectives of this domain in the next screen.
After completing this domain, you will be able to: ● Explain how to control physical and logical access to assets. ● Discuss how to manage identification and authentication of people and devices. ● Explain how to implement and manage authorization mechanisms. ● Discuss how to prevent or mitigate access control attacks. Let us begin with a scenario highlighting the importance of Identity and Access Management in Information Security in the next screen.
3 Importance of Identity and Access Management in Information Security
Kevin received an email from Sergei Stankevich, the project manager of the Firewall division. The mail stated that as a part of the strong focus on security that financial year, Nutri Worldwide Inc. would perform two cycles of security audits instead of one. The following processes would be audited with rigor during the year: • Access Controls • Access Control Implementation • Access Control Monitoring Mission Statement: Protecting networks, applications, and data from attack is of utmost importance. This will be achieved by: • Auditing current security practices, policies, and processes to suggest improvements that can be implemented. • Examining and authenticating security through penetration testing and vulnerability assessments. Let us discuss the concepts of Controlling Physical and Logical Access to Assets in the following screen.
4 Controlling Physical and Logical Access to Assets
A security practitioner must understand the concepts of controlling physical and logical access to assets. Access controls help protect against threats and mitigate vulnerabilities by reducing exposure to unauthorized activities, and providing access to information and systems to only authorized people, processes, or systems. Access control covers all aspects of an organization. A few benefits of access control are as follows: • Information Systems: Multiple layers of access controls are used to protect against compromise and damage to the systems, along with the information they contain. • Facilities: Various access controls protect and prevent entry and movement around the organization’s physical locations to protect personnel, information, equipment, and other assets of the organization. • Personnel: Access controls ensure that only legitimate people with certain privileges and associated with the organization can interact with others in the organization. The personnel can include management, end users, customers, business partners, and almost anyone associated with the organization. Let us continue discussing the concepts of Controlling Physical and Logical Access to Assets in the following screen.
5 Controlling Physical and Logical Access to Assets (contd.)
A few other benefits of access control are as follows: Support Systems: Access control prevents the compromise of support systems such as power, fire suppression controls, water, and Heating, Ventilation and Air Conditioning or HVAC systems by any malicious entity, which may hamper the ability to support critical systems and can cause harm to the organization’s personnel. Logical Access Controls are protection mechanisms that limit users’ access to information and restrict their forms of access on the system to only what is appropriate for them. They are generally built into the operating system. Some of the common access control modes include the following: • Read Only: This provides users with the capability to view, copy, and print information. However, alterations such as deleting, adding, or modifying content are not allowed. Read Only is probably the most widely granted access mode for data files on IT systems. • Read and Write: Users are allowed to view, add, delete, modify, and print information. Logical Access Control can further refine the read or write relationship so that a user has read-only ability for one field of information and the ability to write to a related field. • Execute: The most common activity users perform on application programs is to execute them. Users execute a program each time they use a word processor, spreadsheet, database, and so on. Logical Access Controls are challenging and complex to administer. In the next screen, we will discuss Access, Subject, Object, and Access controls.
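The access control modes described above can be illustrated with a small permission table. This is a hypothetical Python sketch (the user names and table are invented for illustration), not part of the course material:

```python
# Hypothetical permission table mapping each user to the access modes
# they are granted on a resource. Anything not listed defaults to no access.
READ, WRITE, EXECUTE = "read", "write", "execute"

permissions = {
    "analyst": {READ},            # Read Only: view, copy, and print
    "editor": {READ, WRITE},      # Read and Write: view, add, delete, modify
    "operator": {READ, EXECUTE},  # Execute: may run the program
}

def is_allowed(user: str, mode: str) -> bool:
    """Return True only if the mode is explicitly granted to the user."""
    return mode in permissions.get(user, set())
```

A check such as `is_allowed("analyst", "write")` returns `False`, since Read Only excludes alterations.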
6 Access Subject Object and Access controls
In this screen, we will define the terms Access, Subject, Object, and Access controls. Access is the transfer of data between subjects and objects. Let us look at an example: when a program accesses a file, the program is the subject and the file is the object. As the subject is always the entity that receives data from the object and can also alter that data, it must be identified, authenticated, authorized, and held accountable for its actions. A subject is an active component that needs access to an object or the data within an object. The subject can be a user, program, or process that accesses an object to accomplish a task. An object is a passive entity that contains data or information. It can be a computer, database, file, program, directory, or database table field. Access Controls are the security features that control how users and systems communicate and interact with other systems and resources. An example of an access control is a firewall. The next screen will discuss the concepts of identification, authentication, and authorization.
7 Identity and Access Management Policy
A security practitioner should understand the importance of identity and access management policy. The first element of an effective access control program in an organization is to establish identity and access management policy, and related standards and procedures. The identity and access management policy specifies the way users and programs are granted access through proper identification and authentication. It specifies the guidelines of granting privileges to various resources. It also improves the governance process and prevents inconsistencies in provisioning, administration, and access control management. Let us discuss the concepts of Identification, Authentication, and Authorization in the next screen.
8 Identification Authentication and Authorization
To be able to access a set of data or a resource, a subject has to be identified, authenticated, and authorized. The process is shown here. Identification describes a method of ensuring that a subject, such as a user, program, or process, is the real entity it claims to be. Identification can be provided by a username or account number. Authentication is the testing or reconciliation of evidence of a user’s identity. It establishes the user’s identity and ensures that the users are genuine. To be properly authenticated, the subject is usually required to provide a second piece of the credential set. This piece could be a password, passphrase, cryptographic key, Personal Identification Number or PIN, anatomical attribute, or token. Authorization is granting access to an object after the subject has been properly identified and authenticated. It is the rights and permissions granted to an individual or a process, which enable their access to a computer resource. Once a user’s identity and authentication are established, authorization levels determine the extent of system rights that an operator can hold. For example, a user may be authorized to perform net banking transactions. The following screen will focus on privacy, accountability, and identity management.
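The identification, authentication, authorization sequence can be sketched as follows. This is a minimal Python illustration with an invented user store and a plain hashing scheme (a real system would use salted, slow password hashing):

```python
import hashlib

# Hypothetical user store: identifier -> (password hash, granted rights)
USERS = {
    "alice": (hashlib.sha256(b"s3cret").hexdigest(), {"view_balance"}),
}

def login(user_id: str, password: str) -> set:
    # Identification: does the claimed identity exist?
    record = USERS.get(user_id)
    if record is None:
        return set()
    stored_hash, rights = record
    # Authentication: verify the second piece of the credential set.
    if hashlib.sha256(password.encode()).hexdigest() != stored_hash:
        return set()
    # Authorization: return the rights granted to this subject.
    return rights
```

A failed step at any stage yields no rights at all, so authorization never happens without identification and authentication first.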
9 Identity Management
Identity Management is the use of different products to identify, authenticate, and authorize the users through automated means. It describes the management of individual identities, their authentication, authorization, and privileges or permissions within or across the system and enterprise boundaries. The goal is to increase the security and productivity while decreasing the cost, downtime, and repetitive tasks. Let us discuss the Identity and Access Provisioning Lifecycle in the next screen.
10 Identity and Access Provisioning Lifecycle
In this screen we will focus on the identity and access provisioning lifecycle. After an appropriate access control model has been selected and deployed, the identity and access provisioning lifecycle must be maintained and secured. We will learn each access control model later in this domain. Several organizations follow best practices for issuing access; however, many of them lack formal processes. The identity and access provisioning lifecycle refers to the provisioning, review, and revocation of all accounts. Provisioning includes creating new accounts and provisioning them with appropriate rights and privileges. Review can also be called auditing. It includes checking all the accounts periodically. It also includes disabling inactive accounts and checking for excessive privileges. Revocation includes disabling an employee’s account as soon as they leave the organization. It also includes setting an account expiry date for temporary accounts. An appropriate organization policy should be followed for deleting an expired account. As a best practice, always include account revocation as a required step in the access provisioning lifecycle. This process should be tightly coordinated with the human resources department and track not only terminations but also horizontal and vertical moves or promotions within the organization. The next topic focuses on Identification, Authentication, and Authorization.
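The three lifecycle stages can be sketched as simple operations on an account store. The function and field names below are invented for illustration:

```python
from datetime import date

accounts = {}

def provision(user, rights, expires=None):
    """Create a new account with appropriate rights and privileges."""
    accounts[user] = {"rights": set(rights), "active": True, "expires": expires}

def review(today):
    """Periodic audit: disable any temporary account past its expiry date."""
    for acct in accounts.values():
        if acct["expires"] is not None and acct["expires"] < today:
            acct["active"] = False

def revoke(user):
    """Disable the account as soon as the employee leaves the organization."""
    accounts[user]["active"] = False
```

Note that `revoke` disables rather than deletes the account, leaving deletion to whatever organization policy governs expired accounts.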
11 Identity and Access Provisioning Lifecycle (contd.)
To ensure an application is authorized to make requests to potentially sensitive resources, the system can use digital identification, such as a certificate or one-time session. There are several common methods of identification used by organizations, and the type used may vary depending on the process or the situation. Some of the most common types of identification methods include: Username, User ID, Account number, Personal Identification Number (PIN), Identification Badges, MAC Address, IP Address, Email Address, and Radio Frequency Identification (RFID). Let us discuss the guidelines for user identification in the next screen.
12 Guidelines for User Identification
The three important security characteristics of identity are uniqueness, non-descriptiveness, and secure issuance. User identification must be unique so that each entity on a system can be explicitly identified. Each individual user requires a unique user identifier in a particular access control environment. User identification should be non-descriptive and should not disclose any information about the user. From the security perspective, the identity or ID should not expose the associated role or job function of the user. The process of issuing identifiers must be well documented and secure. The entire security system can be compromised if an identity is inappropriately issued. Let us discuss the methods of verifying identification information in the following screen.
13 Verifying Identification Information
In this screen we will talk about verifying identification information. The function of identification is to map a known quantity to an unknown entity to make it known. The known quantity is called the identifier or ID (read as I-D), and the unknown entity is what needs identification. A basic requirement for identification is that the ID be unique. IDs may be scoped, that is, they are unique only within a particular scope. Once a user has been identified, through the user ID or a similar value, the next step is authentication. There are three general factors that can be used for authentication: something a person knows, something a person has, and something a person is. Something a person knows can be a password, PIN, mother’s maiden name, or combination to a lock. Authenticating a person by something that the individual knows is usually the least expensive to implement. The downside to this method is that another person may acquire this knowledge and gain unauthorized access to a system or facility. Something a person has can be a key, swipe card, access card, or badge. This method is common for accessing facilities; however, it can also be used to access sensitive areas or to authenticate systems. A downside to this method is that the item can be lost or stolen, which could result in unauthorized access. Something a person is refers to a physical attribute. Authenticating a person’s identity based on a unique physical attribute is referred to as biometrics. The next screen deals with strong authentication methods.
14 Strong Authentication
Authentication that relies only on a user ID and password is too weak for many environments that store or manage sensitive information, as these credentials can be easily compromised. Organizations often employ some method of strong authentication that relies on more than just what users know. The two general types of strong authentication are two-factor authentication and three-factor authentication. Two-factor authentication involves the use of information that the user knows, such as a user ID and password, and also something the user has, such as a smart card or token. It is considerably more difficult for an intruder to break into an environment’s authentication when two-factor authentication is used. To achieve the highest level of security, systems require a user to provide all three types of authentication: password, smart card, and biometric. Organizations that are not satisfied with the additional security afforded by two-factor authentication may consider using a biometric as the third factor. An example would be the use of a smart card plus PIN plus fingerprint. The single greatest advantage of biometrics is that while an intruder can obtain an individual’s user ID and password, and perhaps even a two-factor authentication device, it is exceedingly difficult for an intruder to obtain or impersonate a physical or physiological characteristic of another person.
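A two-factor check can be sketched as requiring both factors to succeed independently. The stored hash and code values below are invented for illustration:

```python
import hashlib

# Knowledge factor: hash of the user's password (hypothetical value).
STORED_HASH = hashlib.sha256(b"s3cret").hexdigest()

def verify_two_factor(password: str, token_code: str, expected_code: str) -> bool:
    """Something the user knows AND something the user has must both check out."""
    knows = hashlib.sha256(password.encode()).hexdigest() == STORED_HASH
    has = token_code == expected_code
    return knows and has
```

Either factor failing alone is enough to deny access, which is what makes the scheme stronger than a password check by itself.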
15 Biometrics
In this screen, we will discuss the characteristics of biometrics in detail. Biometrics verifies an individual’s identity by analyzing a unique personal attribute or behavior, and is one of the most effective and accurate methods of verifying identification. It is also sophisticated, expensive, and complex. Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals. The two categories of biometric identifiers are physiological and behavioral characteristics. Physiological characteristics can include voice, DNA, or hand print. Behavioral characteristics are related to the behavior of a person, including, but not limited to, typing rhythm, gait, and voice. Biometrics is a sophisticated technology; thus, it is much more expensive and complex than the other types of identity verification processes. Apart from the accuracy of the biometric system, the other factors important for the selection of biometrics are Acceptance, Throughput Rate, and Enrollment Time. User acceptance of a biometric system is an important factor. It depends on privacy, intrusiveness, and psychological or physical discomfort. For example, in a retina scan, the potential exchange of body fluids is a disadvantage. Throughput rate is also called biometric system response time. It is the time taken to process an authentication request in a biometric system. Throughput rate should be around 6 to 10 seconds. Enrollment time is the time taken by the biometric system to register and create an account for the first time. It describes the process of registering with a biometric system. Users provide a username (identity), a password or a PIN, and then biometric information, for example by taking a photograph of their irises or by swiping their fingers on a fingerprint reader. Enrollment is a one-time process that should take less than 2 minutes. In the next screen, we will look at a list of biometrics used for identification today.
16 Types of Biometrics
Let us look at each type of biometrics. A number of biometric controls are used today. The following subsections describe the major implementations and their specific details pertaining to access control security. Fingerprints are made up of ridge endings and bifurcations exhibited by the friction ridges and other detailed characteristics that are called minutiae. It is the distinctiveness of these minutiae that gives each individual a unique fingerprint. (Pronounce mi-NOO-shee-uh) The shape of a person’s hand (the length and width of the hand and fingers) defines hand geometry. This trait differs significantly between people and is used in some biometric systems to verify identity. The iris is the colored portion of the eye that surrounds the pupil. The iris has unique patterns, rifts, colors, rings, coronas, and furrows. A system that reads a person’s retina scans the blood vessel pattern of the retina on the rear of the eyeball. This pattern has shown to be unique in different people. Voice Print is a biometric system that is programmed to capture a voice print and compare it to the information captured in a reference file. This process can differentiate one individual from another. Keyboard dynamics captures electrical signals when a person types a certain phrase. Signature dynamics is a method that captures the electrical signals when a person signs a name. Facial Scan is a system that scans a person’s face, takes many attributes and characteristics into account like bone structures, nose ridges, eye widths, forehead sizes, chin shapes, etc. In the next screen, we will look at how biometrics can be evaluated for accuracy.
17 FRR FAR CER
The accuracy of biometric systems should be considered before implementing a biometric control program. Three metrics are used to judge biometric accuracy: False Reject Rate or FRR (read as F-R-R), False Accept Rate or FAR (read as F-A-R), and Crossover Error Rate or CER (read as C-E-R). As the sensitivity of a biometric system increases, false acceptance rates drop and false rejection rates rise. Conversely, as the sensitivity decreases, false acceptance rates rise and false rejection rates drop. The figure shows a graph depicting the FAR versus the FRR. The Crossover Error Rate or CER is the intersection of both lines of the graph. For example, a system with a CER of 3 has greater accuracy than a system with a CER of 4. Customers can use these ratings when comparing biometric systems for accuracy. A false rejection occurs when an authorized subject is rejected by the biometric system as unauthorized. A false rejection is also called a Type I error. False rejections cause frustration in authorized users, reduction in work due to poor access conditions, and expenditure of resources to revalidate authorized users. A false acceptance occurs when an unauthorized subject is accepted as valid. This type of error is also called a Type II error. If an organization’s biometric control is producing several false rejections, the overall control might have to lower the sensitivity of the system by lessening the amount of data it collects when authenticating subjects. When the data points are lowered, the organization risks an increase in false acceptance rates, thus risking an unauthorized user gaining access. The crossover error rate describes the point where the False Reject Rate and False Accept Rate are equal. The CER is also known as the Equal Error Rate (EER). It describes the overall accuracy of a biometric system.
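The crossover point can be found numerically from measured FAR and FRR values. The curves below are toy data chosen for illustration; they simply encode the tradeoff described above:

```python
# Toy FAR/FRR measurements across five sensitivity settings: as sensitivity
# rises, false accepts fall while false rejects rise.
far = [0.20, 0.12, 0.06, 0.03, 0.01]
frr = [0.01, 0.03, 0.06, 0.12, 0.20]

def crossover_error_rate(far, frr):
    """Return the error rate at the setting where FAR and FRR are closest."""
    i = min(range(len(far)), key=lambda k: abs(far[k] - frr[k]))
    return (far[i] + frr[i]) / 2
```

For these toy curves the crossover falls at the middle setting; when comparing two systems, the one with the lower CER is the more accurate.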
18 Passwords
In this screen we will focus on passwords. User identification coupled with a reusable password is the most common form of system identification and authorization mechanisms. A password is a protected string of characters used to authenticate an individual. As stated previously, authentication factors are based on what a person knows, has, or is. A password is what the user knows. It is important that passwords are strong and properly managed. The main problems with passwords are that they are insecure, can be easily broken, are inconvenient for users to remember, and are repudiable. Some of the common password attacks are: • Dictionary attack, which can be carried out using tools such as Crack, John the Ripper, etc. • Brute force attack, using tools such as l0phtcrack (pronounce as Lophtcrack). • Hybrid attack, which combines both dictionary and brute force attacks. Other forms of attacks include the Trojan horse login program, which uses password-sending Trojans, and social engineering attacks. An example of a social engineering attack is extracting a password by tricking the user. The next screen describes how passwords can be protected.
19 Password Types
A passphrase is a sequence of characters that is longer than a password. A passphrase is more secure than a password as it is longer, and thus harder for an attacker to obtain. In many cases the user is more likely to remember a passphrase than a password. Examples of passphrases are: I will pass CISSP exam; Manchester United is my favorite team; A quick brown fox jumps over a lazy dog. Cognitive passwords are opinion- or fact-based information used to verify an individual’s identity. A user is enrolled by answering several questions based on life experiences. Passwords can be hard for people to remember; however, the same person will not forget simple personal information. The user can answer the questions to be authenticated, instead of remembering a password. A few examples of cognitive passwords include: What is the name of the high school you attended? How many family members do you have? What is your mother’s maiden name? A onetime password or OTP (read as O-T-P) is also called a dynamic password, which is used for authentication purposes. After the password is used, it is no longer valid. Thus, if a hacker obtains this password, it cannot be reused. This type of authentication mechanism is used in environments that require a higher level of security than static passwords can provide. The token device generates the onetime password for the user to submit to an authentication server. For example, an OTP sent by a bank via SMS. In the next screen, we will look at token devices and how they are used for authentication.
20 Token Device
Tokens are used to prove the user’s identity and to authenticate the user to a system or an application. They can be software-based or hardware-based. An attacker can compromise security by gaining control of the token and impersonating the token owner, and may also compromise the authentication protocol. Tokens must be secured as they may be cloned, damaged, lost, or stolen from the owner. Let us discuss Synchronous Token Device in the next screen.
21 Token Device—Synchronous
A synchronous token device synchronizes with the authentication server by using time or a counter as the core piece of the authentication process. If the synchronization is time-based, the token device and the authentication server must hold the same time within their internal clocks. The time value on the token device and a secret key are used to create the onetime password, which is displayed to the user. The RSA token is an example of a time-based synchronous token. If the synchronization is counter-based, the user will need to initiate the logon sequence on the computer and push a button on the token device. This causes the token device and the authentication server to advance to the next authentication value. The Kerberos token is an example of a counter-based synchronous method.
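Counter-based one-time passwords of this kind are standardized as HOTP in RFC 4226. A minimal implementation using only the Python standard library:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a one-time password from a shared secret and a counter (RFC 4226)."""
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Both the token and the authentication server hold the same secret and counter, compute the same value, and advance the counter together; with the RFC 4226 test secret `b"12345678901234567890"`, counter 0 yields `755224`.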
22 Token Device—Asynchronous
A token device using an asynchronous token generating method uses a challenge/response scheme to authenticate the user. In this situation, the authentication server sends the user a challenge, a random value also called a nonce. The user enters this random value into the token device, which encrypts it and returns a value that the user uses as a onetime password. The user sends this value, along with a username, to the authentication server. If the authentication server can decrypt the value and it is the same challenge value that was sent earlier, the user is authenticated. Grid cards are an example of a challenge/response asynchronous access device.
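The challenge/response exchange can be sketched as below. An HMAC over the nonce stands in for the token's encryption step, and the shared key is invented for illustration:

```python
import hashlib
import hmac
import os

SHARED_KEY = b"token-device-secret"  # hypothetical key shared with the token

def make_challenge() -> bytes:
    """Server side: generate a random nonce to send to the user."""
    return os.urandom(16)

def token_response(challenge: bytes) -> str:
    """Token side: transform the nonce into a onetime response value."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).hexdigest()

def server_verify(challenge: bytes, response: str) -> bool:
    """Server side: recompute the expected value and compare."""
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```

Because each challenge is a fresh random value, a captured response is useless for any later login attempt.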
23 Memory Cards and Smart Cards
This screen will focus on memory cards and smart cards, which are used widely in identity verification. A memory card holds information but cannot process information. A memory card can hold a user’s authentication information, so the user only needs to type in a user ID or PIN and present the memory card, and if the data entered by the user matches the data on the memory card, the user is successfully authenticated. A smart card holds information and has the necessary hardware and software to process the information. It has a microprocessor and integrated circuits incorporated into the card, which enable it to process the information. Smart cards are of two types: contact and contactless. The contact smart card has a gold seal on the card. When this card is inserted into a card reader, electrical fingers wipe against the card, in the exact position where the chip contacts are located. This supplies power and data I/O (read as input output) to the chip for authentication. The contactless smart card has an antenna wire that surrounds the perimeter of the card. When this card comes within the electromagnetic field of the reader, the antenna within the card generates enough energy to power the internal chip. There are two types of contactless smart cards: Hybrid and Combi. The hybrid card has two chips, with the capability of utilizing both the contact and contactless formats. The combi card has one microprocessor chip that can communicate with contact or contactless readers. In the next screen we will discuss some common attacks on smart cards.
24 Attacks on Smart Cards—Fault Generation and Micro-Probing
Some common attacks on smart cards are listed here. In fault generation, an individual introduces computational errors into smart cards with the goal of uncovering the encryption keys used and stored on the cards. These “errors” are introduced by manipulating some environmental component of the card (changing input voltage, clock rate, temperature fluctuations, etc.). The attacker reviews the result of an encryption function after introducing an error to the card, and also reviews the correct result, which the card produces when no errors are introduced. Analysis of these results allows an attacker to reverse engineer the encryption process, with the expectation of uncovering the encryption key. Microprobing uses needles and ultrasonic vibration to remove the outer protective material on the card’s circuits. Once this is completed, data can be accessed and manipulated by directly tapping into the card’s ROM chips. Side-channel attacks are non-intrusive and are used to uncover sensitive information about how a component works without trying to compromise any flaw or weakness. In a non-invasive attack the attacker watches how a component works and how it reacts in different situations instead of trying to “invade” it with more intrusive measures. Some examples of side-channel attacks that have been carried out on smart cards are differential power analysis, which examines the power emissions released during processing; electromagnetic analysis, which examines the frequencies emitted; and timing, which checks how long a process takes to complete. Software attacks are also considered non-invasive attacks. A smart card has software just like any other device that does data processing, and where there is software there is the possibility of software flaws that can be exploited. The main goal of this attack is to input instructions into the card that will allow the attacker to extract account information, which can be used for fraudulent purchases.
Many of these attacks can be disguised by using equipment that looks like a legitimate reader.
25 Access Criteria
This screen will focus on access criteria, which are the crux of authentication. Granting access rights to subjects should be based on the level of trust a company has and the subject’s need to know. How much a user is to be trusted, and the extent of information entrusted to a user, are issues that must be identified and integrated into the access criteria. The different access criteria can be enforced by roles, groups, location, time, and transaction types. Using roles is an efficient way to assign rights to a type of user who performs a certain task. This role is based on a job assignment or function. Using groups is another effective way of assigning access control rights. If several users require the same type of access to information and resources, putting them into a group and then assigning rights and permissions to that group is easier to manage than assigning rights and permissions to each individual separately. Physical or logical location can also be used to restrict access to resources. Some files may be available only to users who can log on interactively to a computer. This means the user must be physically present in front of the computer, enter the credentials locally, and cannot log on remotely from another computer. Logical location restrictions are usually done through network address restrictions. Time of day, or temporal isolation, is another access control mechanism that can be used. If a security professional wants to ensure no one is accessing payroll files between the hours of 9:00 P.M. and 5:00 A.M., that configuration can be implemented. Transaction-type restrictions can be used to control the data accessed during certain functions and the commands that can be carried out on the data. An online banking program may allow a customer to view his account balance, but may not allow the customer to transfer money until he has a certain security level or access right. The next few screens will look at authorization concepts.
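The time-of-day restriction in the payroll example can be sketched directly. The blocked window is taken from the text; the function name is invented:

```python
from datetime import time

# Payroll files may not be accessed between 9:00 P.M. and 5:00 A.M.
BLOCK_START = time(21, 0)
BLOCK_END = time(5, 0)

def payroll_access_allowed(now: time) -> bool:
    """Temporal isolation: deny access inside the blocked window.

    The window wraps past midnight, so both sides are checked.
    """
    return not (now >= BLOCK_START or now < BLOCK_END)
```

The wrap-around check matters: a naive `BLOCK_START <= now <= BLOCK_END` test would never match, because the window crosses midnight.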
26 Authorization Concepts
Authorization is based mainly on the following four concepts: • Need-to-know Principle • Authorization Creep • Access Control List (ACL) • Default to Zero. The need-to-know principle is based on the concept that individuals should be given access only to the information they require to perform their job duties. Management will decide what a user needs to know, or what access rights are necessary, and the administrator will configure the access control mechanisms to allow this user to have only that level of access, and thus the least privilege. For example, a system administrator has full access to the system whereas a user will have limited access. Authorization creep occurs when employees move from one department to another within a company and are assigned more and more access rights and permissions. Users’ access needs and rights should be periodically reviewed to ensure the principle of least privilege is being properly enforced. An example is a user shifting from the finance to the marketing department and being able to access both systems. An Access Control List or ACL (read as A-C-L) is a list of subjects that are authorized to access a particular object. An example is an ACL on a router. Default to Zero means all access control mechanisms should default to no access to provide the necessary level of security. It should also ensure no security holes go unnoticed. All access controls should be based on the concept of starting with zero access, and building on it. Instead of giving access to everything and then taking away privileges based on need-to-know, the better approach is to start with no access and add privileges based on need-to-know. A wide range of access levels are available to assign to individuals and groups, depending on the application and/or operating system. A user can have read, change, delete, full control, or no access permissions.
The statement that security mechanisms should default to no access means that if nothing has been specifically configured for an individual, or for the group the individual belongs to, that user should not be able to access the resource. If access is not explicitly allowed, it should be implicitly denied. A firewall is an example of this.
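The default-to-zero idea can be sketched in a few lines of Python (the subjects, objects, and permissions are hypothetical): any subject-object pair with no explicit entry is implicitly denied.

```python
# Default-to-zero ACL sketch: access starts at nothing and is added
# explicitly; anything not granted is implicitly denied.
acl = {
    "salary_report.xlsx": {"alice": {"read"}, "bob": {"read", "write"}},
}

def check_access(subject: str, obj: str, action: str) -> bool:
    # A missing object or subject entry simply falls through to "no access".
    return action in acl.get(obj, {}).get(subject, set())
```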
27 Identity Management Implementation
Identity Management technologies simplify management and administration of user identities in the organization, binding the users to established policies, processes, and privileges throughout the IT infrastructure. Some of the technologies utilized in Identity Management solutions include Password Management, Directory Management, Accounts Management, Profile Management, Web Access Management, and Single Sign-on. Let us discuss Password Management in the following screen.
28 Password Management
The use of passwords is a common practice for validating a user’s identity during the authentication process. In most traditional authentication solutions, the password is the only undisclosed entity in a transaction. Hence, care should be taken in the process of creating passwords and in their management by users and systems. It is necessary to define policies, procedures, and controls regarding passwords. A process governing user passwords should consider the following: • When users choose their passwords, the operating system should enforce certain password requirements, such as a minimum number of characters and the inclusion of special characters and upper and lower case letters. Many systems enable administrators to set expiration dates for passwords, forcing users to change them at regular intervals. • Create policies for password resets and changes. The system may also keep a list of the last five to ten passwords (a password history) and not let users revert to previously used passwords. • Use of last login dates in banners is also recommended. • A threshold can be set to allow a certain number of unsuccessful logon attempts. After the threshold is met, the user’s account can be locked for a period or indefinitely, which requires an administrator to unlock the account manually. • The system can be configured to limit concurrent connections from users. • An audit trail can be used to track password usage, and successful and unsuccessful logon attempts. This audit information should include the date, time, user ID, and the workstation the user logged on from. • Common password management approaches include self-service password reset, assisted password reset, and password synchronization. Let us discuss Directory Management in the next screen.
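A few of these password controls (complexity requirements, password history, and a lockout threshold) can be sketched as follows; the exact thresholds are illustrative, not mandated by any standard.

```python
import re

HISTORY_DEPTH = 5          # remember the last five passwords
LOCKOUT_THRESHOLD = 3      # lock the account after three failed attempts

def meets_complexity(pw: str) -> bool:
    """Require minimum length, mixed case, a digit, and a special character."""
    return bool(len(pw) >= 8
                and re.search(r"[a-z]", pw)
                and re.search(r"[A-Z]", pw)
                and re.search(r"\d", pw)
                and re.search(r"[^A-Za-z0-9]", pw))

def may_change_to(new_pw: str, history: list) -> bool:
    """Reject reuse of any of the last HISTORY_DEPTH passwords.
    (A real system would compare salted hashes, never plaintext.)"""
    return meets_complexity(new_pw) and new_pw not in history[-HISTORY_DEPTH:]

def should_lock(failed_attempts: int) -> bool:
    """Lock the account once the unsuccessful-logon threshold is met."""
    return failed_attempts >= LOCKOUT_THRESHOLD
```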
29 Directory Management
A corporate directory is a comprehensive database that contains information pertaining to the company’s network resources and users. Most directories follow a hierarchical database format. A standard directory protocol is used by applications to access data stored in a directory. A directory service manages the objects within the directory. The administrator manages and configures identification, authentication, authorization, and access control on the network and on individual systems using the directory service. The directory uses labels and namespaces to identify objects. Using directories, it is possible to configure several applications to share data about users instead of each system maintaining its own list of users, authentication data, and so on. This allows better data management, enhances data consistency as the data is shared between systems, and supports uniform security control in the environment. We will focus on Directory Technologies in the following screen.
30 Directory Technologies
A centralized directory service for the enterprise can be built on many directory technologies. These technologies are supported by international standards. The most common directory standards are as follows: X.500 is a series of computer networking standards covering electronic directory services. These directory services were developed to support the requirements of X.400 electronic mail exchange and name lookup. The directory is organized under a common "root" in a "tree" hierarchy of country, organization, organizational unit, and person. The Lightweight Directory Access Protocol or LDAP (read as: L-DAP) is an open, vendor-neutral, industry-standard application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network. Active Directory or AD is a directory service that Microsoft developed for Windows domain networks; it is included in most Windows Server operating systems as a set of processes and services. An AD domain controller authenticates and authorizes all users and computers in a Windows domain type network, assigning and enforcing security policies for all computers and installing or updating software. AD makes use of LDAP. X.400 defines standards for Data Communication Networks for Message Handling Systems (MHS), commonly known as email. Let us discuss Account Management in the following screen.
31 Accounts Management
Account Management involves creating user accounts on every system, modifying account privileges when required, and decommissioning accounts when they are no longer required. Account Management streamlines the administration of user identities across multiple systems. It uses the following features to facilitate a centralized, cross-platform security administration capability: a central facility for managing user access to multiple systems simultaneously; a workflow system in which users submit their requests for new, changed, or terminated system access, and these requests are automatically sent to the appropriate people for approval; automatic replication of user data over multiple systems and directories; the ability to load batch changes to user directories; and, depending on the policies and the changes to information, automatic creation, change, or removal of access to system resources. Some major issues associated with Account Management include the time and cost of full-scale deployment, and interfacing with systems, applications, and directories. We will focus on Profile Management in the next screen.
32 Profile Management
A profile is defined as a collection of information associated with a particular user identity or group. A user profile, in addition to the user ID and password, may include personal information such as name, home address, telephone number, date of birth, and e-mail address. Sometimes, the profile also includes information related to rights and privileges on specific systems. It is important to maintain and update this information for the Identity Management process. Self-service or administrative methods can be applied to manage user profiles. A good self-service system helps to reduce the cost and time to implement changes and also increases accuracy. Let us discuss Web Access Management in the following screen.
33 Web Access Management
Web Access Management or WAM uses software controls to govern what users can access from web-based enterprise assets through their web browser. Passwords, digital certificates, tokens, and other mechanisms can be used to authenticate users. WAM acts as a gateway between users and corporate web-based resources. It also provides Single Sign-On capability. Let us discuss Single Sign-On or SSO in the following screen.
34 Single Sign-On (SSO)
This screen will focus on single sign-on. Single Sign-On, or SSO (read as S-S-O), is an access control method where a user can authenticate once and access different information systems without individual re-authentication. In other words, it allows a user to enter credentials one time and be able to access all corporate resources in primary and secondary network domains. In SSO, applications and systems are logically connected to a centralized authentication server that controls user authentication. When a user first logs in to an application, they are required to provide a user ID and password (or a two-factor or biometric credential). The application, together with the centralized service, recognizes the user as logged in. Later, when the user wishes to access a different application or system, the user’s logged-in state is recognized and the user is admitted directly to the application. As seen in the table, the advantage of SSO is the convenience of eliminating many redundant logins for busy end users, i.e. (pronounce as “that is”), the user has one password for all enterprise systems and applications, so only one strong password needs to be remembered and used. A user account can be quickly created on hire, and deleted on dismissal. Another advantage is centralized management of access for many applications and systems. A distinct disadvantage of SSO is that it is hard to implement and get working. It can also be a centralized point of failure. Another disadvantage of SSO is that if a user’s login credentials are compromised, an intruder will have access to all the applications and systems that the user has. In the next screen, we will look at a few SSO technologies.
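The logical arrangement, with applications deferring to one central authentication service that tracks the logged-in state, can be sketched as follows. The class names, the credential store, and the token scheme are all illustrative assumptions, not a real SSO product.

```python
import secrets

class CentralAuthService:
    """Toy centralized authentication server for SSO (illustrative only)."""
    def __init__(self, users):
        self._users = users          # username -> password (hypothetical store)
        self._sessions = {}          # opaque token -> username

    def login(self, user, password):
        if self._users.get(user) == password:
            token = secrets.token_hex(16)   # one login yields one token
            self._sessions[token] = user
            return token
        return None

    def whoami(self, token):
        return self._sessions.get(token)

class App:
    """An application that trusts the central service instead of
    keeping its own credential database."""
    def __init__(self, name, sso):
        self.name, self.sso = name, sso

    def access(self, token):
        user = self.sso.whoami(token)
        return f"{self.name}: welcome {user}" if user else f"{self.name}: denied"
```

One login yields a token that every connected application honors, which is also why a compromised credential exposes everything at once.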
35 SSO Technologies
Some examples of SSO technologies are listed below. The Kerberos authentication protocol uses a Key Distribution Center or KDC (read as K-D-C) and tickets, and is based on symmetric key cryptography. The Secure European System for Applications in a Multivendor Environment or SESAME (read as one word SESAME) authentication protocol uses a PAS (read as P-A-S) and PACs (read as P-A-Cs), and is based on symmetric and asymmetric cryptography. In a Security Domain, all the resources working under the same security policy are managed by the same group. Directory Services is a network service that identifies resources, such as printers and file servers, on a network and makes them available to users and programs. Thin Clients or Dumb Terminals rely on a central server for access control, processing, and storage. An organization can also implement its own SSO solution by developing a script. In the next screen, we will focus on Kerberos.
36 Kerberos
Kerberos is the name of the three-headed dog that guards the entrance to Hades (the underworld) in Greek mythology. The Kerberos security system guards a network with three elements: authentication, authorization, and auditing. Kerberos is an authentication protocol and was designed in the mid-1980s as part of MIT’s Project Athena. It works in a client/server model and is based on symmetric key cryptography. The protocol has been used for years in UNIX and Windows operating systems. Kerberos is an example of a single sign-on system for distributed environments, and is a de facto standard for heterogeneous networks. It uses symmetric key cryptography and provides end-to-end security. Most Kerberos implementations work with shared secret keys. The major roles in Kerberos are described here. The Key Distribution Center or KDC (read as K-D-C) holds all users’ and services’ secret keys. It provides an authentication server, as well as key distribution functionality. The clients and services trust the integrity of the KDC, and this trust is the foundation of Kerberos security. The KDC is divided into the Authentication Server or AS (read as A-S) and the Ticket Granting Server or TGS (read as T-G-S). The Authentication Server authenticates the identities of entities on the network, and the TGS generates unique session keys between two parties. The parties then use these session keys for message encryption. The KDC provides security services to ‘principals’, which can be users, applications, or network services. The KDC must have an account for, and share a secret key with, each principal. For users, a password is transformed into a secret key value. The secret key is used to send sensitive data back and forth between the principal and the KDC, and is used for user authentication purposes. Tickets are generated by the KDC and given to a principal; a ticket enables one principal to authenticate another principal.
For example, a user may need to authenticate to another principal, say a print server. A KDC provides security services for a set of principals. This set is called a realm in Kerberos. The KDC is the trusted authentication server for all users, applications, and services within a realm. One KDC can be responsible for one or several realms. Realms allow an administrator to logically group resources and users. In the next screen, we will look at the Kerberos steps.
37 Kerberos Steps
The components that participate in Kerberos authentication are shown in the figure. When a user wishes to log on to the network and access a print server, the following steps are performed:
1. The client contacts the KDC, which acts as an authentication server, to request authentication. The client authenticates on the Authentication Server or AS. This creates a user session that will expire, typically in 8 hours.
2. The KDC sends the client a session key, encrypted with the client’s secret key. The KDC also sends a Ticket Granting Ticket or TGT, encrypted with the TGS’s secret key, back to the client system.
3. The client decrypts the session key and uses it to request permission to print from the TGS, sending the TGT to the TGS to be authenticated.
4. After checking the validity of the client’s session key (and thus confirming the identity claim), the TGS sends the client a client/server (C/S) session key (a second session key) to use for printing. The TGS also sends a Service Ticket or ST, encrypted with the print server’s secret key and carrying an expiration time.
5. The client sends the Service Ticket to the print server. The print server confirms that the ST is still valid by checking the expiration time. Seeing a valid C/S session key, the server recognizes the permission to print and also knows that the client is authentic.
6. Communication is established between the client and the print server.
In the next screen, we will look at some of Kerberos’ drawbacks.
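The exchange can be simulated in miniature. The sketch below stands in for symmetric encryption with a trivial key-matching wrapper (real Kerberos uses actual ciphers such as AES), and all key values, names, and lifetimes are illustrative assumptions.

```python
import time

def seal(key, payload):
    """Toy stand-in for symmetric encryption: only the matching key opens it."""
    return {"key": key, "payload": payload}

def unseal(key, blob):
    if blob["key"] != key:
        raise ValueError("wrong secret key")
    return blob["payload"]

# Secret keys each principal shares with the KDC (illustrative values).
CLIENT_KEY, TGS_KEY, PRINT_KEY = "client-k", "tgs-k", "print-k"

def as_exchange(user):
    """Steps 1-2: the AS returns a session key (sealed for the client)
    and a TGT (sealed for the TGS), valid for about 8 hours."""
    session_key = "sess-1"
    tgt = seal(TGS_KEY, {"user": user, "session_key": session_key,
                         "expires": time.time() + 8 * 3600})
    return seal(CLIENT_KEY, session_key), tgt

def tgs_exchange(tgt):
    """Steps 3-4: the TGS validates the TGT and issues a C/S session key
    plus a service ticket sealed with the print server's key."""
    data = unseal(TGS_KEY, tgt)
    if data["expires"] < time.time():
        raise ValueError("TGT expired")
    st = seal(PRINT_KEY, {"user": data["user"], "cs_key": "sess-2",
                          "expires": time.time() + 3600})
    return "sess-2", st

def print_server(st):
    """Steps 5-6: the print server checks the ticket before serving."""
    data = unseal(PRINT_KEY, st)
    if data["expires"] < time.time():
        raise ValueError("service ticket expired")
    return f"printing for {data['user']}"
```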
38 Problems with Kerberos
Some of the problems with Kerberos are listed here. A single KDC is a single point of failure and a performance bottleneck; if the KDC goes down, no one can access needed resources, so redundancy is necessary for the KDC. Computers must have clocks synchronized to within 5 minutes of each other. The KDC must be able to handle the number of requests it receives in a timely manner; it must be scalable. Secret keys are temporarily stored on the users’ workstations, which means it is possible for an intruder to obtain these cryptographic keys; if a workstation is compromised, identities can be forged. If the KDC is hacked, security is lost. Kerberos is vulnerable to password guessing; the KDC does not recognize a dictionary attack. Network traffic is not protected by Kerberos if encryption is not enabled.
39 Business Scenario
Hilda Jacob, General Manager—IT Security, Nutri Worldwide Inc. (read as ink), needed an advanced security system that could seamlessly integrate with the existing web-based application. This system should give an option for a one-time password or dynamic password security token as the second factor. The security team that Kevin is a part of opted for a third-party online security and identity management tool. This raised confidence among all the employees in using the web applications and doing all their online transactions. The new multi-factor authentication system integrated fully with the existing application and also fulfilled all the organization’s needs. Which two factors does Kevin need to use for two-factor authentication? Kevin should use any two factors out of: something you know (password, PIN); something you have (smart card, ATM card); something you are (biometrics—fingerprint, retina).
40 Access Control Types—Security Layer
Access control types or methods fall into one of three categories: administrative, technical, or physical. Administrative (also called directive) controls represent a broad set of actions, policies, procedures, and standards put in place in an organization to govern the actions of people and information systems. They are implemented by creating and following organizational policy, procedure, or regulation. User training and awareness fall into this category. Technical controls (also called logical controls) are the programs and mechanisms on information systems that control system behavior and user access. They are implemented using software, hardware, or firmware that restricts logical access in an information technology system. Examples include protocols, encryption, and system access mechanisms. Physical controls are used to manage physical access to information systems such as application servers and network devices. They are implemented with physical devices, such as locks, fences, gates, and security guards. To understand and appropriately implement access controls, it is vital to understand the benefits that each control can add to security.
41 Access Control Types—Functionality
There are six access control types. Preventive controls prevent actions; they apply restrictions to what a potential user, authorized or unauthorized, can do. An example of an administrative preventive control is pre-employment drug screening, which is designed to prevent an organization from hiring an employee who is using illegal drugs. Detective controls send alerts during or after an attack. Intrusion detection systems alerting after an attack, Closed-Circuit Television (CCTV) cameras alerting guards to an intruder, and a building alarm system triggered by an intruder are all examples of detective controls. Corrective controls correct a damaged system or process; they work hand in hand with detective controls. Antivirus software has both components: first, it runs a scan and uses its definition file to detect whether any software matches its virus list; if it detects a virus, the corrective component takes over, placing the suspicious software in quarantine or deleting it from the system. Deterrent controls reduce the likelihood of a vulnerability being exploited without actually reducing the exposure. After a security incident has occurred, recovery controls may be needed to restore functionality to the system and the organization; recovery means the system must be reinstalled from OS media or images, data restored from backups, and so on. A compensating control is an additional or alternative security control put in place to compensate for weaknesses in others.
42 Business Scenario
In the current financial year, Nutri Worldwide Inc. has decided to focus on information security. As a part of this initiative, security training on strengthening the password management process was arranged. Kevin was a part of this training. Would this training fall under the Administrative controls or the Technical controls category? This training falls under the Administrative controls category.
43 Access Control Models—DAC
We will now focus on access control models. An access control model is a framework that dictates how subjects access objects. Each model type uses different methods to control access, and each has its own merits and limitations. In Discretionary Access Control (DAC), the owner of the resource specifies which subjects can access specific resources. This model is called discretionary because the control of access is based on the discretion of the owner. In a DAC model, access is restricted based on the authorization granted to the users; users are allowed to specify the type of access for the objects they own. If an organization is using a DAC model, the network administrator can allow resource owners to control who has access to their files. The most common implementation of DAC is through Access Control Lists or ACLs, which are dictated and set by the owners and enforced by the operating system.
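A minimal DAC sketch: the owner, and only the owner, edits the object's ACL, and the operating system's job is simply to enforce what the ACL says. The names and permissions are illustrative.

```python
class DacResource:
    """Discretionary access control sketch: access is at the owner's discretion."""
    def __init__(self, owner):
        self.owner = owner
        self.acl = {owner: {"read", "write"}}   # the owner starts with full rights

    def grant(self, requester, subject, permissions):
        # Only the owner may change who can access the resource.
        if requester != self.owner:
            raise PermissionError("only the owner can modify the ACL")
        self.acl.setdefault(subject, set()).update(permissions)

    def allowed(self, subject, permission):
        # Enforcement side: the system just consults the owner-set ACL.
        return permission in self.acl.get(subject, set())
```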
44 Access Control Models—MAC
Mandatory Access Control (MAC) is based on a security labeling system. Users have security clearances, and resources have security labels that contain data classifications. In this model, users and data owners do not have as much freedom to determine who can access files. The operating system makes the final decision and can override the users’ wishes. This model is used in environments where information classification and confidentiality are important. It is structured and strict, and is based on a security label system. Users are given a security clearance (secret, top secret, confidential, and so on), and data is classified in the same way. The clearance and classification data are stored in the security labels, which are bound to the specific subjects and objects. When the system makes a decision about fulfilling a request to access an object, it is based on the clearance of the subject, the classification of the object, and the security policy of the system.
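The clearance-versus-classification decision can be sketched as a simple dominance check. The label ordering below follows the clearances mentioned above; everything else is illustrative.

```python
# Ordered security labels, lowest to highest (illustrative ordering).
LEVELS = {"confidential": 1, "secret": 2, "top secret": 3}

def mac_read_allowed(subject_clearance: str, object_label: str) -> bool:
    """The system, not the owner, decides: a subject may read an object
    only if its clearance is at least the object's classification."""
    return LEVELS[subject_clearance] >= LEVELS[object_label]
```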
45 Access Control Models—RBAC
A Role-Based Access Control (RBAC) model, also called Non-discretionary Access Control, uses a centrally administered set of controls to determine how subjects and objects interact. It allows access to resources based on the role of the user within the company. In an organization with frequent personnel changes, non-discretionary access control is useful because the access controls are based on the individual’s role or title within the organization. These access controls do not need to be changed whenever a new person takes over that role. There are four commonly used RBAC architectures: Non-RBAC, Limited RBAC, Hybrid RBAC, and Full RBAC. Let us discuss a business scenario in the next screen.
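In RBAC, permissions attach to roles and users are assigned to roles, so a personnel change means only a role reassignment. A minimal sketch with hypothetical roles and permissions:

```python
# Permissions are bound to roles, never directly to users (illustrative data).
ROLE_PERMISSIONS = {
    "hr_manager": {"view_salaries", "edit_profiles"},
    "accountant": {"view_invoices"},
}
user_roles = {"kevin": {"accountant"}}

def rbac_allowed(user: str, permission: str) -> bool:
    """A user holds a permission only through one of their roles."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles.get(user, set()))
```

When a new person takes over a role, only `user_roles` changes; the permission definitions stay untouched.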
46 Business Scenario
Kevin had worked on a project for the human resources department last year. The HR department wanted to strengthen the security model deployed for protection of highly confidential data—the salaries of executive employees. Access to this data is usually given to the senior HR managers only. What security model is deployed in such cases? When information classification and confidentiality are very important, the MAC model is used.
47 Access Control Concepts
The following factors support different access control models.

Specific rules indicate what can and cannot happen between a subject and an object. This is considered a “compulsory control” because the rules are strictly enforced and not modifiable by users. A series of defined rules, restrictions, and filters are used for accessing objects within a system. The rules are in the form of “if/then” statements. An example is a proxy firewall that only allows users to web-surf to pre-defined approved content: “If the user is authorized to surf the Web, and the site is on the approved list, then allow access.” Routers and firewalls use rule-based access control.

Another factor to consider for controlling access is restricting users to specific functions based on their role in the system. Constrained user interfaces restrict users’ access abilities by not allowing them to request certain functions or information, or to have access to specific system resources. The major types of restricted interfaces are menus, restricted shells, views, and physically constrained interfaces.

An Access Control Matrix (ACM) is a table of subjects and objects indicating what actions individual subjects can take on individual objects. Matrices are data structures that programmers implement as table lookups to be used and enforced by the operating system. The access rights can be assigned directly to the subjects (capabilities) or to the objects (ACLs). An access control matrix can be used to summarize the permissions a subject has for various system objects, as shown in the figure. This is a simple example; in large environments an ACM can become complex. However, it can be helpful during system or application design to ensure security is applied appropriately to all subjects and objects throughout the application.

Some access control decisions are affected by the actual content of the data rather than the overall organizational policy.
Access to objects is determined by the content within the object. This is often used in databases, where the content of the database fields dictates which users can see specific information within the database tables. Let us look at a few examples. Content-dependent filtering is used when corporations employ email filters that look for specific strings, such as “confidential”, “social security number”, “top secret”, and any other words or images that the company deems unacceptable. Web proxy servers may also be content based. Context differs from content because access decisions are based on the context of a collection of information rather than the sensitivity of the data. A system using context-dependent access control “reviews the situation” and then makes a decision. For example, firewalls make context-based access decisions when they collect state information on a packet before allowing it into the network.
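The access control matrix described earlier is exactly a table lookup. The sketch below also shows how a column of the matrix corresponds to an object's ACL and a row to a subject's capability list; the subjects, objects, and actions are illustrative.

```python
# Access control matrix as a table lookup (illustrative entries).
ACM = {
    ("alice", "file1"): {"read", "write"},
    ("alice", "file2"): {"read"},
    ("bob",   "file1"): {"read"},
}

def acm_allows(subject, obj, action):
    # Missing cells mean no access.
    return action in ACM.get((subject, obj), set())

def acl_for(obj):
    """A column of the matrix: the object's ACL."""
    return {s: acts for (s, o), acts in ACM.items() if o == obj}

def capabilities_of(subject):
    """A row of the matrix: the subject's capability list."""
    return {o: acts for (s, o), acts in ACM.items() if s == subject}
```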
48 Types of Access Control Administration
The two types of access control administration are centralized and decentralized. Centralized access control is concentrated at one logical point for a system or organization. Instead of using local access control databases, systems authenticate via third-party authentication servers. Centralized access control can be used to provide Single Sign-On (SSO), where a subject may authenticate once and access multiple systems. One entity (a department or an individual) is responsible for overseeing access to all corporate resources. This type of administration provides a consistent and uniform method of controlling users’ access rights. The advantage of centralized access control is strict control and uniformity of access; the disadvantage is that the central administration can become overloaded. Let us look at an example. The security administrator (the entity) configures the mechanisms that enforce access control, processes any changes needed to a user’s access control profile, disables access when necessary, and completely removes these rights when a user is terminated, leaves the company, or moves to a different position. In Decentralized Access Control, resource owners are responsible for access control. This method gives control of access to people closer to the resources, who may better understand who should and should not have access to certain files, data, and resources. Decentralized access control allows IT administration to be closer to the mission and operations of the organization. With it, an organization can span multiple locations, with the local sites supporting and maintaining independent systems, access control databases, and data. Decentralized access control is also called distributed access control. The advantage of decentralized access control is that it is more flexible than centralized access control. However, controls may not be uniform throughout the organization, which can be its major disadvantage.
A trusted computer system is a system that has hardware and software controls that ensure data integrity. In the next screen, we will look at the authentication system RADIUS.
49 Remote Authentication Dial-In User Service (RADIUS)
Remote Authentication Dial-In User Service (RADIUS) is a third-party authentication system. It is a network protocol that provides client/server authentication and authorization, and audits remote users. A network may have access servers and a modem pool, DSL, ISDN, or T1 line dedicated for remote users to communicate. The access server requests the remote user’s logon credentials and passes them to a RADIUS server, which houses the usernames and password values. The remote user is a client of the access server, and the access server is a client of the RADIUS server. RADIUS encrypts only the passwords. It is a client/server protocol that runs in the application layer, using UDP as transport, and it uses 8 bits for the Attribute Value Pair (AVP) field. It is described in RFCs 2865 and 2866, and uses User Datagram Protocol (UDP) ports 1812 (authentication) and 1813 (accounting). RADIUS is considered an “AAA” system, comprising three components: Authentication, Authorization, and Accounting. It authenticates a subject’s credentials against an authentication database. It authorizes users by allowing access to specific data objects. It accounts for each data session by creating a log entry for each RADIUS connection made. The three functions RADIUS serves are: to authenticate users or devices before granting them access to a network, to authorize those users or devices for certain network services, and to account for the usage of those services. Next, we will look at another type of authentication system, TACACS (pronounce as “tack-axe”) and TACACS+ (pronounce as “tack-axe-plus”).
50 TACACS and TACACS+
Terminal Access Controller Access Control System or TACACS (pronounce as “tack-axe”) is a remote authentication protocol used to communicate with an authentication server, commonly used in UNIX networks. It is a centralized access control system that requires users to send an ID and a static (reusable) password for authentication. TACACS uses UDP port 49 (and may also use TCP). Reusable passwords are not secure; hence, the improved TACACS+ (pronounce as “tack-axe-plus”) provides better password protection by allowing two-factor strong authentication. TACACS+ is not backward compatible with TACACS. It uses TCP port 49 for communication with the TACACS+ server. It allows users to employ dynamic (one-time) passwords, which provides more protection. It is more secure than RADIUS and encrypts all data. Let us discuss Diameter in the next screen.
51 Diameter
Diameter is RADIUS’s successor, designed to provide an improved Authentication, Authorization, and Accounting (AAA) framework. RADIUS provides limited accountability and has problems with flexibility, scalability, reliability, and security. Diameter also employs encryption to protect sensitive information. Diameter supports all forms of remote connectivity and uses 32 bits for the Attribute Value Pair (AVP) field. It uses TCP port 3868. Diameter security uses existing encryption standards, including Internet Protocol Security (IPSec) or Transport Layer Security (TLS). It is a peer-based protocol that allows either the client or the server to initiate communication. It also has better error detection, correction, and failover functionality than RADIUS. The subsequent screen will cover the next topic, i.e. (pronounce as “that is”), accountability.
52 Accountability
Accountability holds users accountable for their actions. This is done by logging and analyzing audit data. Enforcing accountability helps keep “honest people honest.” For some users, knowing that data is logged is not enough to provide accountability: they must know that the data is logged and audited, and that sanctions may result from violations of policy. Auditing capabilities ensure users are accountable for their actions, verify that the security policies are enforced, and can be used as investigation tools. Accountability is tracked by recording user, system, and application activities. This recording is done through auditing functions and mechanisms within an operating system or application. Audit trails contain information about operating system activities, application events, and user actions. The items and actions to be audited can become an endless list. A security professional should be able to assess an environment and its security goals, know which actions should be audited, and know what to do with the captured information, without wasting extra disk space, CPU power, and staff time. The following gives a broad overview of the items and actions that can be audited and logged: system-level events, such as system performance, logon attempts (successful and unsuccessful), and the date and time of each logon attempt; application-level events, such as error messages and modifications of files; and user-level events, such as identification and authentication attempts and commands initiated. The next topic, access control monitoring, is discussed in the following screens.
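An audit trail entry capturing the fields listed above (date, time, user ID, workstation, event, and outcome) might be recorded like this; the field names and events are illustrative.

```python
import datetime

audit_trail = []   # in practice this would be an append-only, protected log

def audit(user, workstation, event, success):
    """Record the date/time, user ID, workstation, event, and outcome."""
    audit_trail.append({
        "timestamp": datetime.datetime.now().isoformat(),
        "user": user,
        "workstation": workstation,
        "event": event,
        "success": success,
    })

def failed_logons(user):
    """A simple analysis pass over the trail, e.g. for an investigation."""
    return [e for e in audit_trail
            if e["user"] == user and e["event"] == "logon" and not e["success"]]
```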
53 Accountability (contd.)
Non-repudiation plays an important role in accountability by ensuring that users and processes can be held responsible for the impact of their actions. The following are the vital requirements for ensuring accountability of actions: • Strong identification; • Strong authentication; • Policies to enforce accountability; • Consistent and accurate audit logs; • User awareness and training; • Comprehensive, thorough, and timely monitoring; • Organizational behavior that supports accountability; and • Independent audits. Let us discuss Session Management in the following screen.
54 Session Management
55 Registration and Proof of Identity
Identity proofing is the process of establishing a reliable relationship between an individual and a credential, one that can be trusted electronically for authentication purposes. This is done by collecting and verifying information to prove that the person who has requested a credential, an account, or another special privilege is indeed who he or she claims to be. It involves in-person evaluation of a driver’s license, birth certificate, passport, or other government-issued identity document. Certification and accreditation should be carried out for the identity proofing and registration process. Let us take a look at Credential Management Systems in the next screen.
56 Credential Management Systems
Credential Management plays an important role in an organization’s overall security. All access controls rely on the use of credentials to validate the identities of users, applications, and devices. A security practitioner can build a good Credential Management System by incorporating the following: password history, strong passwords, fast password retrieval, effortless password generation, well-defined access control, credential lifecycle control, failover and redundancy, secure password storage, disaster preparedness, and access tracking and auditing. In the next screen, we will discuss the risks and benefits associated with Credential Management Systems.
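One of these features, password history, can be sketched as follows. The `CredentialStore` class and the five-password depth are illustrative assumptions, not a prescribed design: previous passwords are kept only as salted PBKDF2 hashes, and a new password is rejected if it matches any recent entry.

```python
import hashlib
import hmac
import os

HISTORY_DEPTH = 5  # assumed policy: the last five passwords cannot be reused

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 is a deliberately slow hash, suitable for password storage.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class CredentialStore:
    """Toy store enforcing password history (hypothetical API)."""

    def __init__(self) -> None:
        self.history: list[tuple[bytes, bytes]] = []  # (salt, hash) pairs

    def set_password(self, password: str) -> bool:
        # Reject the change if the candidate matches a recent password.
        for salt, digest in self.history[-HISTORY_DEPTH:]:
            if hmac.compare_digest(hash_password(password, salt), digest):
                return False
        salt = os.urandom(16)
        self.history.append((salt, hash_password(password, salt)))
        return True

store = CredentialStore()
print(store.set_password("Tr0ub4dor&3"))  # True: first use is accepted
print(store.set_password("Tr0ub4dor&3"))  # False: reuse is rejected
```

Storing only salted hashes means the history itself cannot leak the old passwords even if the store is compromised.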
57 Credential Management Systems—Risks and Benefits
The following are the risks and benefits associated with a Credential Management System. Some of the major risks are: attackers can compromise the credential management system and gain access to vital credentials, such as those of administrators; once credentials are compromised, reissuing them can be time-consuming and expensive; and the compromise of credentials may lead to compliance issues. Some of the benefits of using a credential management system include providing a high level of assurance and meeting required security standards. It also simplifies compliance, administration, and auditing. In the next screen, we will focus on Federated Identity Management.
58 Federated Identity Management
Federated Identity Management addresses the identity management issues that arise when multiple organizations need to share the same applications and users. SSO implementations involve managing users within a single organization, for access to multiple applications, under a single security infrastructure. In a federated environment, however, each organization in the federation subscribes to a common set of policies, standards, and procedures for the provisioning and management of user identification, authentication, and authorization information, and a trust relationship is established among the participating organizations. In the next screen, we will focus on Federated Identity Management models.
59 Federated Identity Management Models
In the cross-certification model, every organization must individually certify every other participating organization. Managing the trust relationships becomes difficult as the number of participating organizations increases. In the trusted third party, or bridge, model, every organization subscribes to the standards and practices of a trusted third party, which manages the verification and due diligence process for all the participating organizations. After verification by the third party, a participating organization is considered trustworthy by all the other participants. For the participating organizations’ identity verification purposes, the third party acts as a trusted bridge between them. Let us continue focusing on Federated Identity Management models in the next screen.
60 Federated Identity Management Models (contd.)
Security Assertion Markup Language (SAML) 2.0 is a standard for exchanging authentication and authorization data between different security domains. SAML 2.0 is an XML-based protocol that enables Web-based authentication and authorization scenarios, including Single Sign-On (SSO). Security tokens containing assertions are used to pass information about a principal, usually an end user, between an identity provider, such as a SAML authority, and a service provider, such as a web service. The SAML specification defines three roles: the principal, which is typically a user; the identity provider, or IdP; and the service provider, or SP. Any identity attributes can be shared between the two federation partners; they can choose to share anything in a SAML assertion or message payload, provided it is supported by XML. Let us continue focusing on Federated Identity Management models in the next screen.
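To illustrate the assertion flow, the sketch below parses a minimal, hypothetical SAML 2.0 assertion and extracts the principal’s NameID, which is the identity the service provider ultimately trusts. The issuer and subject values are invented for the example, and a real service provider would also verify the assertion’s XML signature and validity window.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical SAML 2.0 assertion; issuer and subject are invented.
ASSERTION = """
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                ID="_abc123" Version="2.0" IssueInstant="2015-01-01T00:00:00Z">
  <saml:Issuer>https://idp.example.com</saml:Issuer>
  <saml:Subject>
    <saml:NameID>alice@example.com</saml:NameID>
  </saml:Subject>
</saml:Assertion>
"""

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

def extract_principal(assertion_xml: str) -> str:
    """Return the NameID, i.e., the principal asserted by the IdP."""
    root = ET.fromstring(assertion_xml)
    return root.find("saml:Subject/saml:NameID", NS).text

print(extract_principal(ASSERTION))
```

The point of the federation model is visible here: the service provider never sees the user’s credentials, only the IdP’s signed statement about who the principal is.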
61 Federated Identity Management Models (contd.)
The Once In-Unlimited Access model is used where organizations do not need to restrict resources in a very granular manner or manage user access. This differs from an SSO model: SSO typically manages authentication and access control behind the scenes, hidden from the user. An organization may employ the Once In-Unlimited Access (OIUA) model by having a separate area of its intranet that is available to all employees without requiring identification or authentication for each individual application. In some cases, the applications may not require authentication at all. The security practitioner must ensure that user identification and authentication were properly handled before the user accessed the system. Let us discuss Identity as a Service in the next screen.
62 Identity as a Service
In Identity as a Service, or IDaaS, a third-party service provider builds, hosts, and manages an authentication infrastructure. IDaaS can be considered Single Sign-On, or SSO, for the cloud. The service is provided as third-party management of identity and access control functions, including user life cycle management and Single Sign-On. IDaaS is provided as a subscription-based managed service. A cloud service provider may offer subscribers, through a secure portal, role-based access to specific applications or even entire virtualized desktops. Let us discuss the functionalities of Identity as a Service in the next screen.
63 Identity as a Service—Functionality
According to Gartner, the American information technology research and advisory firm, the functionalities of IDaaS include the following: Identity Governance and Administration, or IGA, which includes the ability to provision identities held by the service to target applications; Access, which includes user authentication, Single Sign-On, and authorization enforcement; and Intelligence, which includes logging events and reports. Some of the features and benefits of IDaaS systems are as follows: • Federation: IDaaS provides Federated Identity Management, which enables different systems to define user capabilities and access. • Single Sign-On Authentication: IDaaS provides SSO capability, which allows authenticated users to access multiple services without having to repeatedly supply credentials to each service. • Granular Authorization Controls: Each user is allowed to access only his or her authorized services and data in the cloud. • Ease of Administration: Administration is simplified with a single management window for administering users and managing identity across multiple services. • Integration with Internal Directory and External Services: Cloud Identity and Access Management, or IAM (Read as: I-A-M), systems can integrate with in-house LDAP, Active Directory, and other services to replicate existing employee identities, roles, and groups into cloud services. Integration with new services is faster and easier because IAM providers offer connectors to common cloud services, which eliminates the need to write custom integration code. Let us discuss the possible issues with Identity as a Service in the next screen.
64 Identity as a Service—Possible Issues
The IAM vendors may not be able to provide an Application Program Interface, or API (Read as: A-P-I), for all services; security practitioners must create their own integration code wherever required. The existing authorization and access rules may have to be updated for cloud service providers. The privacy of users’ information needs to be ensured by the security practitioner, as this information is pushed into the cloud and the organization may lose some control over it. Compared to in-house systems, getting audit logs from cloud service providers may be difficult. The security practitioner may also have to address the security issues arising from Bring Your Own Cloud (BYOC), which is a hybrid of mobile and cloud. The identity of an application needs to be verified along with the user’s identity to understand the source of an incoming request. Finally, delays in rule propagation from the internal IAM to the cloud IAM can cause security issues. Let us discuss integrating third-party identity services in the next screen.
65 Integrate Third-Party Identity Services
More and more companies are now adopting cloud computing services instead of in-house services. Third-party cloud services can also manage the identity and access management of the organization. Extending traditional corporate identity services outside the corporate environment requires integrating existing IAM systems with the cloud service provider or providers. Managing user accounts within a cloud-based application and directory solution requires the following: • Cloud Identity: Users are created and managed in the cloud, which eliminates the requirement to integrate with any other directory. Example: Microsoft Office 365. • Federated Identity: Federated Identity Management helps in implementing Single Sign-On, or SSO. • Directory Synchronization: An existing on-premises directory is synchronized to the cloud provider’s directory, such as Windows Azure AD. Let us take a look at some of the third-party identity service providers.
66 Integrate Third-Party Identity Services (contd.)
Some of the third-party identity service providers are: Optimal IDM Virtual Identity Server Federation Services, PingFederate, Centrify, IBM Tivoli Federated Identity Manager 6.2.2, SecureAuth IdP 7.2.0, CA SiteMinder 12.52, Okta, OneLogin, NetIQ Access Manager 4.0.1, VMware Workspace Portal version 2.1, CA Secure Cloud, Dell One Identity Cloud Access Manager v7.1, and others. Let us discuss Unauthorized Disclosure of Information in the next screen.
67 Unauthorized Disclosure of Information
In this screen, we will discuss the unauthorized disclosure of information. Several technologies can make information available to unauthorized individuals, with unfavorable results. Disclosure can be intentional or unintentional. Information can be disclosed unintentionally when one falls prey to attacks that specialize in causing this disclosure, including social engineering, covert channels, malicious code, and electrical airwave sniffing. Information can also be disclosed accidentally through object reuse, which is explained next. Object reuse means that before someone uses a hard drive, floppy disk, or tape, it should be cleared of any residual information. Sensitive information left behind by a process should be securely cleared before another process is given the opportunity to access the object. This ensures that information is not disclosed to an individual or subject for whom it was not intended. For example, an old system may be allocated to a new employee without the previous user’s data being erased. Methods for clearing information from media include destruction, degaussing, and overwriting. Emanation security addresses the fact that all electronic devices emit electrical signals. These signals can hold important information, and if attackers buy the right equipment and position it in the right place, they can capture this information from the airwaves and access data transmissions much as if they were tapping directly into the network wire. The equipment can reproduce the data streams and display the data on the intruder’s monitor, enabling intruders to learn, uncover, and exploit confidential information. Countermeasures for this type of intrusion are TEMPEST, white noise, and control zones. Let us describe each of them briefly. TEMPEST equipment is implemented to prevent intruders from picking up information through the airwaves with listening devices.
This type of equipment must meet specific standards for TEMPEST shielding protection and must be rated accordingly. TEMPEST refers to standardized technology that suppresses signal emanations with shielding material. The devices (monitors, computers, printers, etc.) have an outer metal coating, referred to as a Faraday cage, made of metal of sufficient depth to ensure that only a limited amount of radiation is released. TEMPEST technology is complex, cumbersome, and expensive, and is therefore used only in highly sensitive areas that need this high level of protection. Two alternatives to TEMPEST exist: white noise and the control zone concept. White noise is a countermeasure used to keep intruders from extracting information from electrical transmissions. It is a uniform spectrum of random electrical signals, distributed over the full spectrum so that the bandwidth is constant and an intruder cannot distinguish real information from the random noise. The other alternative to TEMPEST equipment is the control zone concept. Some facilities use material in their walls to contain electrical signals, preventing intruders from accessing information emitted via the electrical signals of network devices. A control zone creates a type of security perimeter and is constructed to protect against unauthorized access to data or the compromise of sensitive information. For example, control zones can be created using Faraday cages and jammers. The next screen discusses threats to access control.
68 Threats to Access Control
In this screen, we will discuss some of the common threats to information security and access control. Denial of Service (DoS) is an attack that disables a service or makes it unreachable to its users. A Distributed Denial of Service (DDoS) attack is a DoS attack launched from many places at once; its objective is to incapacitate a system or service in a way that is difficult to block. Back door attacks happen when, during the development of an application, the creator or programmer includes special hidden access capabilities within the application, known as back doors or trap doors. Spoofing, or masquerading, is the act of appearing to a system as if a communication from an attacker is actually coming from a known and trusted source. Man-in-the-Middle is a form of active eavesdropping in which the attacker makes independent connections with the victims and relays messages between them, making them believe they are talking directly to each other over a private connection, when in fact the entire conversation is controlled by the attacker. A replay attack is a form of network attack in which a valid data transmission is maliciously or fraudulently repeated or delayed. TCP hijacking is an attack used to gain unauthorized access to the information or services of a computer system. Social engineering is the art of manipulating people into performing actions or divulging confidential information. Dumpster diving is a tactic used by information thieves to obtain corporate proprietary data, credit card numbers, and other personal information gleaned from what people and companies throw away. Password guessing is a common form of attack against an information system: an attempt to guess someone’s legitimate logon credentials. The two common methods are the brute force attack and the dictionary attack. In a brute force attack, an intruder tries numerous passwords in the hope that one of them will work.
A brute force attack consists of sequential guesses at a password until the correct value is found. This type of attack can take a long time, since there can be millions of possible passwords for a given user account. An intruder may instead use a dictionary attack, in which common passwords are tried to see if the intruder can get lucky and gain entry into the target system. If the intruder is attempting to gain entry using a specific person’s user ID, the next step is to find personal information about that person, such as a birthdate, pet’s name, or partner’s name, and try combinations of these to gain entry to the system. A Trojan horse is a general term referring to a program that appears desirable but actually contains undesirable content; it purports to perform an action the user wants while secretly performing other, potentially malicious actions. In phishing, the attacker sends forged e-mails that appear to have originated from a financial institution or other high-value organization. The forged e-mail contains instructions that direct the recipient to click on a link and provide information on a form. The victim is led to believe that verifying these sensitive credentials will help the institution, when in reality the credentials are being handed over to a criminal. In a pharming attack, an attacker directs traffic destined for a specific web site to an imposter site, usually for the purpose of harvesting logon credentials from unsuspecting users. Software exploitation occurs when an attacker uses a program that presents the user with a fake logon screen, tricking the user into attempting to log on. The user is asked for a username and password, which are stored for the attacker to access later. The user does not know this is not the usual logon screen, as it looks exactly like the real one. A fake error message may then appear, indicating that the user mistyped his credentials.
At this point, the fake logon program exits and hands control over to the operating system, which prompts the user for a username and password. The user assumes he mistyped his information and does not give it a second thought, but the attacker now knows the user’s credentials. The next screen discusses protection against access control attacks.
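The difference in cost between the two guessing methods can be seen in a small sketch: a dictionary attack against an unsalted hash only has to test a short wordlist, which is why salted, slow password hashing and account lockout matter. The wordlist and target password here are illustrative.

```python
import hashlib

def sha256_hex(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

# A leaked, unsalted hash -- weak storage is what makes this attack cheap.
leaked_hash = sha256_hex("sunshine")

WORDLIST = ["password", "123456", "letmein", "sunshine", "qwerty"]

def dictionary_attack(target_hash, wordlist):
    """Try each candidate word in turn; return the first match, else None."""
    for candidate in wordlist:
        if sha256_hex(candidate) == target_hash:
            return candidate
    return None

print(dictionary_attack(leaked_hash, WORDLIST))  # sunshine
```

A brute force attack would instead enumerate every possible string, which succeeds eventually but can take orders of magnitude longer than trying a curated wordlist.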
69 Protection against Access Control Attacks
The following are some of the common protection methods against access control attacks that a security practitioner must take into consideration: physical security of systems, controlling electronic access to password files, a strong password policy, multifactor authentication, last-login notification, password file encryption, password masking, account lockout after unsuccessful login attempts, creating user awareness about security, and others. Let us discuss access control best practices in the next screen.
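One of the listed countermeasures, account lockout, can be sketched as follows. The five-attempt threshold and the `LockoutTracker` helper are assumptions for illustration; a real implementation would also time-limit the lockout and write each event to the audit trail.

```python
MAX_ATTEMPTS = 5  # assumed lockout threshold

class LockoutTracker:
    """Hypothetical helper that locks an account after repeated failures."""

    def __init__(self) -> None:
        self.failures: dict[str, int] = {}

    def record_failure(self, user: str) -> None:
        self.failures[user] = self.failures.get(user, 0) + 1

    def record_success(self, user: str) -> None:
        self.failures.pop(user, None)  # a good logon resets the counter

    def is_locked(self, user: str) -> bool:
        return self.failures.get(user, 0) >= MAX_ATTEMPTS

tracker = LockoutTracker()
for _ in range(MAX_ATTEMPTS):
    tracker.record_failure("mallory")
print(tracker.is_locked("mallory"))  # True: further guessing is cut short
```

Lockout directly blunts the brute force and dictionary attacks described earlier by capping how many guesses an attacker gets.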
70 Access Control Best Practices
The following is a list of tasks that must be performed on a regular basis to ensure security stays at a satisfactory level: • Deny access to systems by undefined users or anonymous accounts. • Limit and monitor the usage of administrator and other powerful accounts. • Suspend or delay access capability after a specific number of unsuccessful logon attempts. • Remove obsolete user accounts as soon as the user leaves the company. • Suspend inactive accounts after 30 to 60 days. • Enforce strict access criteria. • Enforce the need-to-know and least-privilege practices. • Disable unnecessary system features, services, and ports. • Replace default password settings on accounts. • Limit and monitor global access rules.
71 Access Control Best Practices (contd.)
• Ensure logon IDs are non-descriptive of job function. • Remove redundant resource rules from accounts and group memberships. • Remove redundant IDs, accounts, and role-based accounts from resource access lists. • Enforce password rotation. • Enforce strong password requirements. • Audit system and user events and actions, and review reports periodically. • Protect audit logs.
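Tasks such as suspending inactive accounts lend themselves to automation. The sketch below flags accounts whose last logon exceeds an assumed 60-day limit; the directory data and the `accounts_to_suspend` helper are hypothetical.

```python
from datetime import date, timedelta

INACTIVITY_LIMIT = timedelta(days=60)  # upper end of the 30-to-60-day guidance

# Hypothetical directory export mapping each account to its last logon date.
last_logon = {
    "alice": date(2015, 6, 1),
    "bob": date(2015, 1, 15),
    "carol": date(2015, 5, 20),
}

def accounts_to_suspend(today: date) -> list[str]:
    """Return accounts whose last logon is older than the inactivity limit."""
    return sorted(user for user, seen in last_logon.items()
                  if today - seen > INACTIVITY_LIMIT)

print(accounts_to_suspend(date(2015, 6, 10)))  # ['bob']
```

Running a sweep like this on a schedule turns the best practice into a repeatable control rather than a manual chore.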
Here is a quick recap of what we have learned in this domain: ● Access controls protect systems and resources from unauthorized access. ● Identity Management is the use of different products to identify, authenticate, and authorize users through automated means. ● Memory cards and smart cards are widely used in identity verification. ● Controls are implemented to mitigate risk and reduce the potential for loss. ● The two types of access control administration are centralized and decentralized.
This concludes ‘Identity and Access Management.’ The next domain is ‘Security Assessment and Testing.’