VMware vSphere best practices are usually categorized by the facet of the software you want to improve. There are therefore several sets of guidelines, such as Performance Best Practices, High Availability (HA) Performance Best Practices, and Auto Deploy Best Practices, among others.

Performance Best Practice

For the performance best practice guidelines, start by ensuring that:

  • All the hardware in the system is on the hardware compatibility list for the specific version of VMware software you are running.
  • The chosen hardware meets the minimum configuration requirements supported by the VMware software.
  • System memory has been tested for 72 hours to check for hardware errors.
  • The CPU is compatible with the VMware vMotion requirements, which have a direct bearing on DRS (Distributed Resource Scheduler).
  • The CPU is also compatible with VMware Fault Tolerance.
In terms of processors, most recent processors from both AMD and Intel include hardware features that assist virtualization. The first generation of these features introduced CPU virtualization: Intel VT-x and AMD-V. Things have since moved on, and for best performance you are well advised to use second-generation processors that add Memory Management Unit (MMU) virtualization: AMD RVI (Rapid Virtualization Indexing) and Intel EPT (Extended Page Tables).
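
If you manage hosts through the vSphere API, these capability flags can be checked programmatically. Below is a minimal sketch using pyVmomi (VMware's Python SDK); the vCenter address and credentials are placeholders, and the script simply reads each host's advertised vMotion and Fault Tolerance support.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder endpoint and credentials; replace with your own. The
    # unverified SSL context is for lab use only.
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # Walk every ESXi host in the inventory and print its capability flags.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        cap = host.capability
        print(f"{host.name}: CPU={host.summary.hardware.cpuModel!r} "
              f"vMotion={cap.vmotionSupported} FT={cap.ftSupported}")
    view.Destroy()
    Disconnect(si)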

There is an even newer I/O memory management feature in current processors that allows virtual machines direct access to input/output devices such as storage controllers and network cards. On Intel processors this feature is known as VT-d (Virtualization Technology for Directed I/O); on AMD processors it is called IOMMU or AMD-Vi (AMD I/O Virtualization).
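
As a rough check of whether passthrough is available on your hosts, you can list the PCI devices that ESXi reports as passthrough-capable. A pyVmomi sketch with the same placeholder connection details as in the earlier example:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # Map PCI ids to human-readable device names for the report.
        devices = {d.id: d.deviceName for d in host.hardware.pciDevice}
        for info in host.config.pciPassthruInfo:
            if info.passthruCapable:
                print(f"{host.name}: {info.id} ({devices.get(info.id, 'unknown')}) "
                      f"enabled={info.passthruEnabled}")
    view.Destroy()
    Disconnect(si)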

Back-end storage configuration can affect performance, often significantly; lower-than-expected storage performance is usually the result of configuration issues. Storage performance depends on a variety of factors, such as the workload, cache size, hardware, vendor, stripe size, and RAID level. Since many workloads are highly sensitive to the latency of I/O operations, the importance of configuring storage devices correctly cannot be overemphasized. When choosing hardware for this purpose, pick storage that supports VAAI (VMware vStorage APIs for Array Integration) so that some operations can be offloaded to the storage hardware rather than performed in ESXi, improving storage scalability.
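
Before committing to an array, you can verify whether the storage devices a host sees advertise VAAI hardware acceleration. The sketch below reads each SCSI LUN's vStorageSupport status via pyVmomi; connection details are placeholders, and getattr is used defensively in case the property is absent on older devices.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for lun in host.config.storageDevice.scsiLun:
            # "vStorageSupported" means the device advertises VAAI acceleration.
            support = getattr(lun, "vStorageSupport", "unknown")
            print(f"{host.name}: {lun.displayName}: {support}")
    view.Destroy()
    Disconnect(si)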

High Availability (HA) Performance Best Practice 

vSphere HA makes it simpler and less expensive to provide high levels of availability for important, critical applications, and it allows organizations to cost-effectively raise the baseline level of availability for all their applications. One of the key best practices is to eliminate single points of failure by building redundancy at vulnerable points, which reduces or eliminates the downtime caused by hardware failures. This redundancy should exist in four layers: server components such as host bus adapters and network adapters; networking components; storage networking and storage arrays; and servers, including rack power supplies, blade chassis, and blades.

When deploying or building a vSphere High Availability (HA) cluster, it is best practice to build the cluster out of identical server hardware. This greatly simplifies the configuration and management of the servers using host profiles, reduces resource fragmentation, and increases the cluster's ability to handle server failures. Using drastically different hardware in a cluster leads to an unbalanced, less productive cluster.

It is also important to consider the overall size of the cluster. Smaller clusters require a larger relative percentage of the available cluster resources to be set aside as reserve capacity to handle failures adequately: a cluster of only three nodes needs at least 33% of its resources held in reserve for failover, whereas a ten-node cluster needs only 10%. Keep in mind, though, that the complexity of the cluster increases considerably as it grows.

For users implementing earlier versions of vSphere, the best practice was a primary/secondary host arrangement with a limit of five primary hosts. vSphere 5.0 does away with this requirement and introduces a master-slave relationship among the nodes of a cluster: one node is assigned the role of master, and if it fails, another node is selected in an election process.
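
The reserve percentages above follow directly from the number of host failures the cluster must tolerate. A short Python illustration of the arithmetic:

    def failover_reserve_pct(nodes: int, failures_tolerated: int = 1) -> float:
        """Percentage of total cluster capacity to hold in reserve so the
        surviving hosts can absorb the load of the failed ones."""
        return 100.0 * failures_tolerated / nodes

    for n in (3, 4, 10, 16):
        print(f"{n:2d}-node cluster: reserve {failover_reserve_pct(n):5.1f}% "
              f"for one host failure")
    # 3 nodes -> 33.3%, 4 -> 25.0%, 10 -> 10.0%, 16 -> 6.2%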

When considering network design, the two main areas where best practice comes into sharp focus are increasing the resiliency of client-side networking and increasing the resiliency of the communication channels used by HA itself. If the physical switches that connect the servers support PortFast or an equivalent setting, enable it; this allows a host to quickly regain connectivity after booting. It is also recommended to disable host monitoring during any network maintenance that could disable the heartbeat paths between hosts in the cluster, since such maintenance can otherwise trigger an isolation response.
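
Toggling host monitoring before and after maintenance can be scripted. A minimal pyVmomi sketch, where the cluster name and connection details are placeholders; the same call with hostMonitoring set to "enabled" restores monitoring afterwards.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    # Placeholder cluster name; replace with your own.
    cluster = next(c for c in view.view if c.name == "Prod-Cluster")

    # Turn HA host monitoring off for the duration of the maintenance window.
    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(hostMonitoring="disabled"))
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))
    view.Destroy()
    Disconnect(si)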

It is also important to ensure that TCP/UDP port 8182 is always open on all firewalls and network switches used by the hosts for inter-host communication. Although vSphere HA automatically opens these ports when enabled and closes them when disabled, user action is normally required if firewalls exist between the hosts in a cluster, such as in a stretched cluster configuration.
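
On the ESXi side, you can confirm that each host's own firewall permits the HA agent traffic on port 8182. A pyVmomi sketch that scans every host's firewall rulesets for that port (connection details are placeholders; the matching ruleset is typically the one for the HA Fault Domain Manager agent):

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        fw = host.configManager.firewallSystem
        for ruleset in fw.firewallInfo.ruleset:
            # Report any ruleset covering port 8182 and whether it is enabled.
            if any(rule.port == 8182 for rule in ruleset.rule):
                print(f"{host.name}: ruleset {ruleset.key!r} "
                      f"enabled={ruleset.enabled}")
    view.Destroy()
    Disconnect(si)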

In environments where both IPv4 and IPv6 protocols are in use, it is a VMware vSphere best practice to configure the distributed switches on all hosts so that both networks are reachable when required. This prevents network partition issues caused by a host failure or the loss of a single IP networking stack.

To enhance overall network availability, configure redundant management networking from the ESXi hosts to the network switching hardware, along with heartbeat datastores. The use of network adapter teaming is also preferable.
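
A quick redundancy audit can flag standard vSwitches that are backed by only one physical uplink and therefore have no NIC teaming. A minimal pyVmomi sketch with placeholder connection details:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for vswitch in host.config.network.vswitch:
            uplinks = len(vswitch.pnic or [])  # physical NICs backing this vSwitch
            warn = "" if uplinks >= 2 else "  <-- single uplink, no teaming"
            print(f"{host.name}/{vswitch.name}: {uplinks} uplink(s){warn}")
    view.Destroy()
    Disconnect(si)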
