Network Management Tutorial

4.1 Welcome

Hello and welcome to the Network Management module of the CompTIA Cloud Plus course offered by Simplilearn. In this module, we will cover network management protocols and discuss the best practices for implementing virtualization in a production environment. We will discuss the objectives of this module in the next slide.

4.2 Objectives

By the end of this module, you will be able to: explain various network resource monitoring techniques; discuss best practices for allocating physical resources, device virtualization for guest OS, storage devices, over-committing processor and memory resources, and networking; and explain the different tools used for remote access. In the following slide, we will begin by categorizing the different network resource monitoring techniques used.

4.3 Network Resource Monitoring Techniques

Various techniques are employed to monitor network resources. These techniques can be grouped into three broad categories: protocols, specifically Simple Network Management Protocol, Syslog, and Simple Mail Transfer Protocol; technologies, specifically WMI technology and the Intelligent Platform Management Interface; and finally, the different organizational measures used. We will begin with Simple Network Management Protocol in the next slide.

4.4 Simple Network Management Protocol

Network administrators use Simple Network Management Protocol or SNMP as a standard TCP/IP network management protocol to monitor performance, error rates, and network availability. One reason SNMP is known for its simplicity is that it uses a small number of commands. The other is its reliance on an unsupervised or connectionless communication link, which has contributed directly to its widespread use, specifically in the Internet Network Management Framework. The SNMP manager is independent of the agents: if an agent fails, the manager continues to function, and vice versa. In the next slide, we will learn about the basic messages SNMP uses.

4.5 SNMP Messages

SNMP uses five basic messages to communicate between the SNMP manager and the SNMP agent. The GET and GET-NEXT messages allow the manager to request information for a specific variable. On receiving a GET or GET-NEXT message, the agent issues a GET-RESPONSE message to the SNMP manager with either the requested information or an error indication showing that the request cannot be processed. A SET message allows the SNMP manager to change the value of a variable, typically called a flag, which controls how the agent behaves, for example when an alarm is generated. The SNMP agent then responds with a GET-RESPONSE message indicating the executed change or an error indication. The SNMP TRAP message allows the agent to immediately inform the SNMP manager of a significant event. GET, GET-NEXT, and SET messages are issued by the SNMP manager, while TRAP messages are initiated by the SNMP agent. Rather than waiting to be polled by the SNMP manager, devices such as DPS Remote Telemetry Units or RTUs use TRAP messages to report alarms as soon as they occur. In the following slide, we will focus on the SNMP Management Information Base.
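
To make the GET and GET-RESPONSE exchange concrete, here is a minimal sketch in Python, assuming the third-party pysnmp library is installed (its import layout varies slightly between releases); the device name, community string, and OID are placeholders.

```python
# Minimal SNMP GET sketch using the third-party pysnmp library (pip install pysnmp).
# The host name, community string, and OID below are placeholders.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

# Query sysDescr.0 (1.3.6.1.2.1.1.1.0) from an SNMP agent listening on UDP 161.
error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),            # SNMPv2c community string
        UdpTransportTarget(("demo.example.com", 161)),
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),
    )
)

if error_indication:                                    # transport or timeout problem
    print(error_indication)
elif error_status:                                      # agent returned an SNMP error
    print(f"{error_status.prettyPrint()} at index {error_index}")
else:
    for var_bind in var_binds:                          # GET-RESPONSE variable bindings
        print(" = ".join(x.prettyPrint() for x in var_bind))
```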

4.6 SNMP Management Information Base

The SNMP Management Information Base or MIB and a set of commands are used by the SNMP manager and the agent to exchange information. The SNMP MIB is organized in a tree structure, with individual variables, such as point status or description, represented as leaves on the branches. An object identifier or OID uniquely identifies each variable in the MIB and in SNMP messages. The following slide will discuss an SNMP-based environment in detail.

4.7 SNMP Environment

An SNMP-based environment contains an NMS or Network Management Station, an AAA server or Authentication, Authorization, and Accounting server, and the SNMP agent for efficient functioning. The SNMP agent contains an agent daemon (service), the MIB, and subagents. The agent daemon is a service running on the SNMP agent and listens on UDP port 161. The MIB contains the variables used to report the status of the AAA server. Subagents are a collection of services, each responsible for an individual task. In this scenario, the NMS checks the network availability, performance, and error occurrence of the AAA server in the network. The SNMP agent tests the AAA server by exchanging SNMP messages with it and, based on the responses it receives, reports back to the NMS, which listens for trap notifications on UDP port 162. In the next slide, we will learn about Syslog, which is another protocol used to monitor network resources.

4.8 Syslog

Syslog is a protocol that allows a client system to send event messages across IP networks to event message collectors, also known as syslog servers or syslog daemons. Syslog uses the User Datagram Protocol or UDP, port 514, to communicate. Being a connectionless protocol, UDP does not provide acknowledgments. Moreover, at the application layer, syslog servers do not send acknowledgments to the sender on receiving syslog messages; in fact, sending devices transmit messages even if the syslog server does not exist. Each syslog message carries a severity level indication, as shown in the slide. Level 0 represents emergency, which means the system is unusable. Level 1 represents alert, which means action needs to be taken immediately. Level 2 represents critical, which denotes critical conditions for a specific process. Level 3 represents error conditions. Level 4 represents a warning notice about a specific condition. Level 5 represents notice, a normal but significant condition. Level 6 represents informational messages, and level 7 represents debug-level messages. In the next slide, we will focus on Simple Mail Transfer Protocol.
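
As a concrete illustration, the following minimal sketch sends messages to a syslog collector using Python's standard logging module; the collector address is a placeholder, and the logging levels shown map onto the syslog severities described above.

```python
# Minimal sketch of sending syslog messages with the standard library's
# SysLogHandler. The collector address below is a placeholder.
import logging
import logging.handlers

logger = logging.getLogger("app")
logger.setLevel(logging.DEBUG)

# Syslog servers conventionally listen on UDP port 514.
handler = logging.handlers.SysLogHandler(address=("192.0.2.10", 514))
logger.addHandler(handler)

# Python logging levels map onto syslog severities:
# ERROR -> error (3), WARNING -> warning (4), INFO -> informational (6), DEBUG -> debug (7).
logger.error("disk array degraded on host vh01")
logger.warning("CPU utilization above 80 percent")
logger.info("nightly backup completed")
```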

4.9 Simple Mail Transfer Protocol

Simple Mail Transfer Protocol or SMTP runs on port 25 and is used to send email from the system. Once configured, SMTP sends an e-mail when a monitored event occurs. Short Message Service or SMS is another option for receiving alerts. SMS is a text messaging service that allows an alert to be sent to a mobile device, which makes it a great way to notify an on-call technician when an alert has been generated after hours. SMS uses either GSM or CDMA technology. Web services are components of a web server that accept requests from clients and perform a logical action. A web service can provide a user interface that gives the administrator a quick and easy view of the entire environment. We will discuss the next category, technologies used, beginning with WMI technology in the next slide.
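
The sketch below shows how such an alert e-mail could be sent over SMTP with Python's standard library; the mail relay, addresses, and message text are placeholders.

```python
# Minimal sketch of sending an alert e-mail over SMTP using the standard library.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "ALERT: datastore latency threshold exceeded"
msg["From"] = "monitor@example.com"
msg["To"] = "oncall@example.com"
msg.set_content("Datastore DS01 exceeded the configured latency threshold at 02:14 UTC.")

# SMTP listens on TCP port 25 by default.
with smtplib.SMTP("mail.example.com", 25) as smtp:
    smtp.send_message(msg)
```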

4.10 WMI Technology

The Microsoft Windows Management Instrumentation or WMI technology is the Microsoft implementation of Web-Based Enterprise Management or WBEM, an initiative of the Distributed Management Task Force or DMTF. WMI extends the Common Information Model or CIM to represent management objects in Windows-based management environments. The Common Information Model, a DMTF standard, organizes management objects in a unified, consistent, and logical manner. The WMI technology supports the CIM syntax, the Managed Object Format or MOF, and a common programming interface. The MOF syntax defines the structure and contents of the CIM schema in human- and machine-readable form. Query-based information retrieval and event notification are some of the services offered by WMI. A Component Object Model or COM programming interface provides access to these services and the management data, and the WMI scripting interface also provides scripting support. The next slide will discuss the WMI technology architecture in detail.

4.11 WMI Technology Architecture

A management infrastructure has two components: the CIM Object Manager and the CIM Repository. Through the CIM Object Manager, applications have constant access to management data, which is held in a central storage area called the CIM Repository. In the Windows operating system, the winmgmt command is used to control the WMI service on Windows Server operating systems such as Windows Server 2003 or Windows Server 2008. WMI providers act as intermediaries between the CIM Object Manager and managed systems, using the Windows Management Instrumentation Application Programming Interface or WMI API. They supply the CIM Object Manager with data from managed objects, handle requests on behalf of management applications such as Windows 2000 services, and generate event notifications. WMI providers are standard COM and Distributed COM or DCOM servers. In the next slide, we will list the types of WMI providers.

4.12 Types of WMI Providers

WMI is supplied with built-in providers that supply data from sources such as the system registry. The built-in providers include the following. The Active Directory Provider is a gateway to information stored in the Active Directory service; using a single API, it permits information from both WMI and Active Directory to be accessed. The Windows Installer Provider allows complete control of Windows Installer and installation of software through WMI, and also provides information about any application installed with Windows Installer. The Performance Counter Provider exposes raw performance counter information, which is used to compute the performance values shown in the system monitor tool, and automatically surfaces any performance counters installed on a system; this provider is supported by Windows 2000. The Registry Provider allows Registry keys to be created, read, and written, and WMI events can be generated when specified Registry keys are modified. The SNMP Provider is a gateway to the systems and devices that use SNMP for management; SNMP MIB object variables can be read and written, and SNMP traps are automatically mapped to WMI events. The Event Log Provider gives access to data and event notifications from the Windows 2000 Event Log. The Win32 Provider supplies information about the operating system, computer system, peripheral devices, file systems, and security. The WDM Provider gives low-level Windows Driver Model driver information for user input devices, storage devices, network interfaces, and communications ports. The View Provider allows new aggregated classes to be built from existing classes: information of interest can be filtered from source classes, information from multiple classes can be combined into a single class, and data from multiple machines can be accumulated into a single view. The WMI technology also provides support for third-party custom providers, which are used to service requests related to managed objects that are environment-specific. Providers usually use the MOF language to define and create classes, and use the WMI API to access the CIM Object Manager repository and to respond to CIM Object Manager requests originally made by applications. Continuing this discussion, let us look at the concept of the Object Manager in the following slide.
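
As an illustration of consuming data from the Win32 provider, here is a minimal sketch assuming the third-party wmi package is installed on a Windows host; the classes queried, Win32_OperatingSystem and Win32_LogicalDisk, are standard Win32 provider classes.

```python
# Minimal sketch of querying WMI from Python, assuming the third-party "wmi"
# package (pip install wmi) on a Windows host.
import wmi

conn = wmi.WMI()  # connect to the local CIM Object Manager

# Ask the Win32 provider for basic operating system information.
for os_info in conn.Win32_OperatingSystem():
    print(os_info.Caption, os_info.Version)

# WQL, the WMI query language, can also be used directly.
for disk in conn.query("SELECT DeviceID, FreeSpace, Size FROM Win32_LogicalDisk"):
    print(disk.DeviceID, disk.FreeSpace, disk.Size)
```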

4.13 Object Manager

The Object Manager handles the interface between management applications and data providers. WMI facilitates these communications by providing a common programming interface to Windows management services using COM. Event notification and query processing services are supplied through the COM API, which can also be used from several programming language environments, such as C and C++. The CIM Object Manager repository holds the CIM and extension schemas, along with data information or data source details. The CIM Object Manager uses the schema data in this repository while servicing requests from management applications for managed objects. Managed objects are either physical hardware, such as a cable, or logical entities, such as database application software. The next slide will focus on the CIM Object Manager.

4.14 CIM Object Manager

If the CIM Object Manager receives a request from a management application for data that is unavailable in the repository, or for event notifications it does not support, it forwards the request to a WMI provider. Providers supply data and event notifications for managed objects that are specific to their particular domain. The three-layer model of WMI consists of the providers, the CIM Object Manager, and the consumers of WMI information. Local or remote Microsoft Windows 2000 services, standard executables or .exe files, and in-process dynamic-link libraries (DLLs) are some of the supported server types used to implement a provider; local or remote Windows 2000 services and standard executables are the most recommended. In the next slide, we will discuss the next technology, the Intelligent Platform Management Interface.

4.15 Intelligent Platform Management Interface

Intelligent Platform Management Interface or IPMI is an open, industry-standard interface that enables an administrator to monitor, control, and retrieve information about the server infrastructure. It has manageability features such as logging of system events, alerting, and system recovery. As seen in the table on the slide, administrator and operator are the two IPMI roles. The administrator role grants read and write privileges to the management and monitoring features. The privileges in this role are: Admin, represented as 'a'; User management, represented as 'u'; Console, represented as 'c'; Reset and Host Console, represented as 'r'; and Read-Only, represented as 'o'. The operator role, on the other hand, grants some privileges but not the complete set available to an administrator. Its privileges are Console, represented as 'c'; Reset and Host Console, represented as 'r'; and Read-Only, represented as 'o'. To implement IPMI, every system in the infrastructure must be assigned either the administrator or the operator role. The logging system monitors and maintains system status logs; hence, the logging system is given the administrator role, whereas the other production servers are assigned the operator role. These servers assist the troubleshooting team in finding and fixing flaws in the infrastructure. Let us discuss the SMS alert service in the next slide.
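
In practice, IPMI data is often retrieved from a baseboard management controller; the following minimal sketch wraps the ipmitool command-line utility from Python, assuming ipmitool is installed, with the BMC address and credentials shown as placeholders.

```python
# Minimal sketch of reading platform health through IPMI by wrapping the
# ipmitool CLI. BMC address and credentials are placeholders.
import subprocess

BMC = ["-I", "lanplus", "-H", "10.0.0.50", "-U", "monitor", "-P", "secret"]

def ipmi(*args):
    """Run an ipmitool subcommand against the remote BMC and return its output."""
    result = subprocess.run(["ipmitool", *BMC, *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

print(ipmi("chassis", "status"))   # power state and fault indicators
print(ipmi("sel", "list"))         # system event log entries
print(ipmi("sensor"))              # current sensor readings and thresholds
```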

4.16 SMS Alert Service

Short Message Service or SMS alert service is a cost-effective service that can be used by an administrator. An alert is sent to the user in case of a failure in the functionality of any service, notifying the administrator by SMS with the error code found in the infrastructure. An administrator can use either an online SMS gateway or any device that supports Global System for Mobile communication (GSM) or Code Division Multiple Access (CDMA) technology. Most monitoring tools and hypervisors support SMS service integration. In the following slide, we will focus on measures established by organizations to assess resource utilization.

4.17 Organizational Measures

A baseline helps in creating a sample of the compute resources consumed by a server over a period of time, providing the organization with a point-in-time performance chart of that server. Besides establishing a baseline while monitoring a cloud environment, measuring thresholds is important too. A threshold is an upper limit beyond which the system sends an alert, via SMTP or SMS, to the appropriate party. It is also essential to monitor the usage of the processor and other resources. Whenever such an event occurs in the infrastructure, an automated response is generated and sent to the administrator using alert protocols such as SMTP or SMS. An administrator monitors the usage of the processor or any other resource by activating the alert mechanism provided in the hypervisor or the orchestrator. The following slide will present a scenario on network resource monitoring techniques.
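
The following minimal sketch illustrates this kind of threshold alerting, assuming the third-party psutil package for CPU sampling and reusing SMTP for notification; the threshold value, relay, and addresses are placeholders.

```python
# Minimal threshold-alerting sketch: sample CPU utilization and send an
# SMTP alert if it crosses the configured threshold.
import smtplib
from email.message import EmailMessage

import psutil  # third-party package for resource sampling

CPU_THRESHOLD = 80.0  # percent

def check_and_alert():
    usage = psutil.cpu_percent(interval=5)   # sample CPU utilization for 5 seconds
    if usage > CPU_THRESHOLD:
        msg = EmailMessage()
        msg["Subject"] = f"ALERT: CPU at {usage:.1f}% (threshold {CPU_THRESHOLD}%)"
        msg["From"] = "monitor@example.com"
        msg["To"] = "oncall@example.com"
        msg.set_content("CPU utilization crossed the configured threshold.")
        with smtplib.SMTP("mail.example.com", 25) as smtp:
            smtp.send_message(msg)

check_and_alert()
```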

4.18 Scenario on Network Resource Monitoring Techniques

An administrator wants to communicate infrastructure status information. Which one of the following protocols would you recommend: SMTP, SNMP, or HTTP? The solution is given in the next slide. Let us see if you got the right answer.

4.19 Scenario on Network Resource Monitoring Techniques(contd)

The right answer is SNMP. Simple Network Management Protocol can be used to communicate infrastructure status information. In the next slide, we will discuss the best practices to allocate physical resources.

4.20 Best Practices to Allocate Physical Resources

While configuring a host computer for virtualization, it is necessary to ensure that compute resources are monitored and maintained to avoid any availability issues. Compute resources are best defined as the resources required for the delivery of virtual machines, such as processor, RAM, and network. While allocating CPU to a guest machine, about 20 percent of the physical hardware should be reserved for host performance to avoid saturation. While allocating memory, critical processes should retain access to main memory to avoid availability issues. Storage and network allocations for a guest OS should be kept flexible so that only the required resources are allotted. Since physical resources are limited, the cloud service provider should ensure that only a certain amount of resources is allotted; this is achieved using quotas and limits. In the following slide, we will discuss limits and quotas in detail.
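
As a simple illustration of the 20 percent guideline above, the short calculation below shows how much CPU remains allocatable to guests; the hardware figures are placeholders.

```python
# Illustrative calculation: reserve roughly 20 percent of physical CPU capacity
# for the host and allocate the remainder to guests. Figures are placeholders.
physical_cores = 32
host_reserve = 0.20

allocatable_cores = physical_cores * (1 - host_reserve)
print(f"Cores available for guest allocation: {allocatable_cores:.0f} of {physical_cores}")
```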

4.21 Limits and Quotas

A limit is the maximum threshold that can be used by a guest OS, while a quota is the total amount of resources that can be utilized by a system. Limits are of two types. A hard limit is the maximum amount of resources that can be utilized, after which a user cannot store anything on the disk. A soft limit allows a user to continue storing data even after the maximum limit is reached. In both cases, the user is notified through alerts. Reservations enforce a lower limit on the amount of resources guaranteed to cloud consumers for their virtual machines. Before deploying a virtualization host and choosing a vendor, an organization must read the license agreements and determine which features are needed and how those features are licensed; a virtual machine requires a software license as well. A resource pool is a partition of the total resources contained in a physical machine. Resource pooling provides a flexible mechanism to organize compute resources in a virtual environment and link them to their underlying physical resources. The next slide will look at a scenario on allocating virtual resources.
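
The following self-contained sketch, not tied to any particular hypervisor, illustrates how hard limits, soft limits, and reservations interact when storage is allocated to a tenant; all figures are placeholders.

```python
# Self-contained sketch of hard limits, soft limits, and reservations.
from dataclasses import dataclass

@dataclass
class StorageQuota:
    reservation_gb: int   # guaranteed lower bound for the tenant
    soft_limit_gb: int    # crossing this only raises an alert
    hard_limit_gb: int    # crossing this rejects the request
    used_gb: int = 0

    def allocate(self, request_gb: int) -> str:
        projected = self.used_gb + request_gb
        if projected > self.hard_limit_gb:
            return "REJECTED: hard limit exceeded"
        self.used_gb = projected
        if projected > self.soft_limit_gb:
            return "ALLOWED: soft limit exceeded, alert sent to user"
        return "ALLOWED"

tenant = StorageQuota(reservation_gb=50, soft_limit_gb=180, hard_limit_gb=200)
print(tenant.allocate(150))   # ALLOWED
print(tenant.allocate(40))    # ALLOWED: soft limit exceeded, alert sent to user
print(tenant.allocate(30))    # REJECTED: hard limit exceeded
```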

4.22 Scenario on Allocating Virtual Resources

An administrator is creating a new VM on the server. While deploying the new VM, he receives reports from other users that they are facing performance issues in terms of speed. What do you think could be the reason: misconfiguration while pooling resources, hard disk failure, or hypervisor failure? Let us look at the answer given on the next slide.

4.23 Scenario on Allocating Virtual Resources(contd)

The reason is misconfiguration while pooling resources. The administrator may not have correctly configured resource pooling parameters while creating the new VM. Next, we will discuss the best practices for device virtualization for guest OS.

4.24 Best Practices for Usage of Emulated Devices

A hypervisor can provide emulated or paravirtualized devices to guest operating systems. One of the many benefits of emulated devices is that they do not require special drivers or modifications to the operating system; paravirtualization, in contrast, requires modification of the guest operating system to communicate with the hypervisor. Emulated solutions provide a broader level of compatibility because their devices adhere to the same requirements as the real hardware they emulate. Emulated devices that support I/O, networking, graphics, mouse input, serial ports, and sound cards are meant to enable guest operating systems; these devices are not currently targeted for paravirtualized performance improvements. We will discuss the best practices for usage of paravirtualized devices in the next slide.

4.25 Best Practices for Usage of Paravirtualized Devices

Compared to emulated devices, paravirtualized devices require modification of the guest operating system to communicate with the hypervisor. They provide lower latency and higher throughput for the input-output operations of guest operating systems. Paravirtualized devices require fewer processor resources than emulated devices, leading to more optimal resource utilization. Also, paravirtualized solutions outperform emulated solutions because they use a standard API for the I/O operations of guest operating systems. We will discuss the best practices for storage devices in the next slide.

4.26 Best Practices for Storage Devices

A guest OS that uses block devices for local mass storage usually performs better than one that uses disk image files; disk image files refer to the default templates of an operating system. A guest OS that uses block devices achieves lower latency and higher throughput. When an I/O request is targeted at the local storage of a guest operating system, the request passes through the file system and the I/O subsystem of the guest operating system. The hypervisor then completes the I/O request in much the same way it completes an I/O request for any other process running within the Linux operating system. Requirements to consider while using block devices are: all block devices should be available and accessible to the hypervisor, because the guest OS cannot access devices that are not available from the hypervisor; and block devices should be activated before use, for example, Logical Volume Manager (LVM) logical volumes and Multiple Device (MD) arrays must be running in order to use the exported devices. In the next slide, we will look into best practices for over-committing processor and memory resources.
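
The minimal sketch below, using only the Python standard library, shows one way a script on the hypervisor could verify that a device node exists, is a block device, and is accessible before it is exported to a guest; the device path is a placeholder.

```python
# Minimal sketch: verify that a device node is an accessible block device on
# the hypervisor before exporting it to a guest. The path is a placeholder.
import os
import stat

def block_device_ready(path: str) -> bool:
    """Return True if path is an accessible block device on this host."""
    try:
        mode = os.stat(path).st_mode
    except FileNotFoundError:
        return False                      # e.g. LVM volume or MD array not activated
    return stat.S_ISBLK(mode) and os.access(path, os.R_OK | os.W_OK)

print(block_device_ready("/dev/vg0/guest01"))
```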

4.27 Best Practices for Over Committing Processor and Memory Resources

In virtualization, many guest operating systems can run simultaneously on one system. When one system carries the workloads of many, resources are used more optimally; from an acquisition and operation perspective, idle computing resources are uneconomical. To keep overall system use at 80 percent or lower, guest operating systems can run with some inactive virtual processors, making it possible to over-commit processor resources. To maximize performance, allocate the minimum number of virtual processors each guest OS needs, avoiding scaling issues; the effect of these scaling issues can be reduced by tuning the Kernel Virtual Machine or KVM. Page sharing and ballooning outperform swapping, and the primary goal is to over-commit memory with the minimum negative effect on performance. Managing the complexities of workload consolidation proves to be a challenge: some workloads strain the storage subsystem while others strain the network, and workloads may be active during the day as well as at night. Over-committing the processor and memory resources, and controlling the ability of the guest OS to access those resources, helps in managing this diversity. In the next slide, we will look at a scenario on physical resource redirection and mapping.
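
As a simple illustration, the calculation below estimates a vCPU over-commit ratio and checks it against the 80 percent utilization target mentioned above; the figures are placeholders and not tied to any particular hypervisor.

```python
# Illustrative over-commit calculation. Core counts and utilization are placeholders.
physical_cores = 32
vcpus_assigned = 96                      # total vCPUs across all guests
avg_vcpu_utilization = 0.22              # observed average utilization per vCPU

overcommit_ratio = vcpus_assigned / physical_cores
expected_host_use = vcpus_assigned * avg_vcpu_utilization / physical_cores

print(f"Over-commit ratio: {overcommit_ratio:.1f}:1")
print(f"Expected host CPU use: {expected_host_use:.0%}")
if expected_host_use > 0.80:
    print("Warning: projected utilization exceeds the 80% target; reduce vCPUs or add hosts.")
```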

4.28 Scenario on Physical Resource Redirection and Mapping

An administrator is configuring access to hypervisors to allow them to communicate in the event the management network is down. What is this type of access called: NIC teaming, serial port mapping, or HTTPS? Let us look at the answer given on the next slide.

4.29 Scenario on Physical Resource Redirection and Mapping(contd.)

The answer is serial port mapping. Serial port mapping refers to configuring access to hypervisors to allow them to communicate even when the management network is down. In the next slide, we will analyze a scenario on connecting external devices.

4.30 Scenario on Connecting External Devices

An administrator is configuring access to a printer for hypervisors, to allow them to perform printing. What is this type of access called: NIC teaming, serial port mapping, or parallel port mapping? Let us look at the answer given on the next slide.

4.31 Scenario on Connecting External Devices(contd.)

The answer is parallel port mapping. Parallel port mapping enables an administrator to connect physical peripherals, such as printers, to the hypervisor. In the following slide, we will focus on best practices for networking.

4.32 Best Practices for Networking

Networking in virtualization is considered one of the major challenges in the IT world, as it essentially requires monitoring of network performance. Any misconfiguration in the network setup can lead to downtime in the production environment. It is a preferred practice to use the virtual switches present in the hypervisor with a single physical port, using the VLAN feature of the physical switch. It is also preferred to use a virtual firewall and apply default security policies; for example, use vShield in a VMware vSphere setup. In the next slide, we will look into the tools for remote access.

4.33 Remote Access Tools

Remote access is the ability to access a network or a computer from a remote location. The tools required for remote access are Remote Desktop Protocol, Secure Shell, and Hypertext Transfer Protocol. We will discuss the first tool, Remote Desktop Protocol, in the next slide.

4.34 Remote Desktop Protocol

Remote Desktop Protocol or RDP is a secure network communications protocol for Windows-based applications running on a server; it is secure because the data is encrypted. RDP allows the network administrator to view the desktop of the client system to perform troubleshooting. Some of the features of RDP include smart-card authentication, resource sharing, data sharing, support for multiple displays, and redirection functions such as printing. RDP can support up to 64,000 independent channels for data transmission; however, it lacks a centralized administration feature. The default port number for RDP is 3389. We will learn about Secure Shell in the next slide.
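
As a small practical aid, the standard-library sketch below checks that a host is reachable on the default RDP port before a remote desktop session is attempted; the host name is a placeholder.

```python
# Minimal sketch: check that a host answers on the default RDP port (TCP 3389).
import socket

def rdp_port_open(host: str, port: int = 3389, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to the RDP port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(rdp_port_open("vdi-host01.example.com"))
```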

4.35 Secure Shell

SSH or Secure Shell tunnels can carry otherwise unencrypted traffic over a network through an encrypted channel. For instance, one can use an SSH tunnel to securely transfer files between an FTP server and a client, even though the FTP protocol itself is not encrypted. SSH tunnels also provide a means to bypass firewalls that prohibit or filter certain internet services. For example, an organization may block and monitor certain sites using a proxy filter. In such a case, users can connect to an external SSH server and create an SSH tunnel that forwards a given port on their local machine to port 80 on a remote web server, via the external SSH server. To set up an SSH tunnel, a port on one machine is forwarded to a port on the other machine at the far end of the tunnel. Once the SSH tunnel has been established, the user connects to the specified port on the first machine to access the network service. SSH works on port number 22. A console port allows an administrator to use a cable to connect directly to the hypervisor of a host computer or to a virtual machine. We will focus on the popular Hypertext Transfer Protocol in the next slide.
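
The minimal sketch below establishes the kind of local port forward just described by launching the OpenSSH client from Python; the host names, user name, and ports are placeholders.

```python
# Minimal sketch: open an SSH local port forward with the OpenSSH client.
# -N opens the tunnel without running a remote command; -L forwards a local
# port through the SSH server. Hosts, user, and ports are placeholders.
import subprocess

tunnel = subprocess.Popen([
    "ssh", "-N",
    "-L", "8080:internal-web.example.com:80",   # local 8080 -> remote web server port 80
    "user@ssh-gateway.example.com",             # external SSH server (port 22)
])

# While the tunnel is up, http://localhost:8080 reaches the remote web server.
# tunnel.terminate() closes the tunnel when it is no longer needed.
```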

4.36 Hypertext Transfer Protocol

"The Hypertext Transfer Protocol or HTTP is an application-level protocol for distributed, collaborative, hypermedia information systems. It is a generic, stateless protocol which can be used for many tasks beyond its use for hypertext, such as: name servers and distributed object management systems, through extension of its request methods, error codes, and headers." HTTP is an asymmetric request-response client-server protocol. An HTTP client sends a request message to an HTTP server. The server returns the response message. In other words, HTTP is a pull protocol wherein the client pulls information from the server, instead of server pushing for information to the client. As current requests do not know about the previous ones, HTTP is considered a stateless protocol. As illustrated in the slide, the request response cycle normally occurs in the HTTP environment. The HTTP Client system initially provides request to the web server. The request is passed in the form of URL or Uniform Resource Locator to the HTTP server. It is further processed and generated as an output HTML page, and provides response to the HTML page. HTML is Hypertext markup language. HTTP permits negotiating of data type and representation, so as to allow systems to be built independently of the data being transferred. In the next slide, we will discuss a scenario on remote access.

4.37 Scenario on Remote Access

An administrator wants to deliver VM access to users. Which of the following protocols would you suggest to provide remote desktop access: RDP, SMTP, or WMI? Let us look at the next slide for the answer.

4.38 Scenario on Remote Access(contd.)

The answer is RDP. RDP or Remote Desktop Protocol can be used to provide remote desktop access to the VM. This brings us to the end of this module. The following slides are dedicated to the quiz section; let us move on to the quiz questions to check your understanding of the concepts covered in this module.

4.40 Summary

Here is a quick recap of what was covered in this module: SNMP, WMI, IPMI, and Syslog are tools that can be used for network resource monitoring. Syslog is a standard for maintaining log information. Block devices are preferred over disk image files for VM storage. The remote access tools that can be used are RDP, SSH, and HTTP.

4.41 Thank You

In the next module, we will focus on implementing security in cloud production environment in detail.
