K-Means Clustering Algorithm: Applications, Types, Demos and Use Cases

Every Machine Learning engineer wants to achieve accurate predictions with their algorithms. Learning algorithms are generally broken down into two types: supervised and unsupervised. K-Means clustering is one of the unsupervised algorithms, in which the available input data does not have a labeled response.

Before diving further into the concepts of clustering, let us check out the topics to be covered in this article:

  • Types of clustering
  • What is K-Means clustering?
  • Applications of K-Means clustering
  • Distance measures
  • How does K-Means clustering work?
  • K-Means clustering algorithm
  • Demo: K-Means clustering

Types of Clustering

Clustering is a type of unsupervised learning wherein data points are grouped into different sets based on their degree of similarity. There are primarily two categories of clustering:

  • Hierarchical clustering
  • Partitioning clustering 

Hierarchical clustering is further subdivided into:

  • Agglomerative clustering
  • Divisive clustering

Partitioning clustering is further subdivided into:

  • K-Means clustering 
  • Fuzzy C-Means clustering 

Hierarchical Clustering

Hierarchical clustering arranges the data in a tree-like structure known as a dendrogram.

Agglomerative clustering follows a bottom-up approach: we begin with each element as a separate cluster and merge them into successively larger clusters.

Divisive clustering follows a top-down approach: we begin with the whole set and proceed to divide it into successively smaller clusters.

Partitioning Clustering 

Partitioning clustering is split into two subtypes - K-Means clustering and Fuzzy C-Means.

In K-Means clustering, the objects are divided into a number of clusters specified by the number ‘K.’ So if we say K = 2, the objects are divided into two clusters, c1 and c2.

Here, the features or characteristics are compared, and all objects having similar characteristics are clustered together. 

Fuzzy C-Means is very similar to K-Means in the sense that it clusters objects that have similar characteristics together. The difference is that in K-Means clustering, a single object can belong to only one cluster, whereas in Fuzzy C-Means, an object can belong to more than one cluster, with a degree of membership in each.

What is K-Means Clustering?

K-Means clustering is an unsupervised learning algorithm; there is no labeled data, unlike in supervised learning. K-Means divides objects into clusters such that objects within a cluster are similar to one another and dissimilar to the objects belonging to other clusters.

The term ‘K’ is a number: you need to tell the system how many clusters you want to create. For example, K = 2 refers to two clusters. There is also a way of finding the best or optimum value of K for given data, covered in the elbow method later in this article.

For a better understanding of K-Means, let's take an example from cricket. Imagine you received data on many cricket players from all over the world, giving information on the runs scored by each player and the wickets taken by them in the last ten matches. Based on this information, we need to group the data into two clusters, namely batsmen and bowlers.

Let's take a look at the steps to create these clusters.

Solution:

Assign Data Points

Here, we have our data set plotted on ‘x’ and ‘y’ coordinates: the y-axis shows the runs scored, and the x-axis shows the wickets taken by the players. Plotting the data gives a scatter of points, one per player.

Perform Clustering

We need to group these points into two clusters: players with high runs and few wickets (the batsmen) and players with many wickets and low runs (the bowlers).

Considering the same data set, let us solve the problem using K-Means clustering (taking K = 2).

The first step in k-means clustering is the allocation of two centroids randomly (as K=2). Two points are assigned as centroids. Note that the points can be anywhere, as they are random points. They are called centroids, but initially, they are not the central point of a given data set.


The next step is to determine the distance of each data point from the randomly assigned centroids. For every point, the distance is measured from both centroids, and the point is assigned to whichever centroid is closer. This produces two provisional groups of points, one attached to each centroid.

The next step is to determine the actual centroid of each of these two groups: the original randomly allocated centroids are repositioned to the true mean positions of their clusters.

This process of calculating the distances and repositioning the centroids continues until we obtain our final clusters; then the centroid repositioning stops.

Once the centroids no longer need repositioning, the algorithm has converged, and we have the two clusters, each with its own centroid.
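
To make this concrete, here is a minimal sketch of the same two-cluster idea using scikit-learn's KMeans on a small, made-up runs/wickets table (the numbers are illustrative, not real player statistics):

import numpy as np
from sklearn.cluster import KMeans

# Each row is one player: [wickets taken, runs scored] in the last ten matches
players = np.array([
    [1, 410], [0, 380], [2, 350], [1, 450],   # batsman-like profiles
    [18, 40], [22, 25], [15, 60], [20, 35],   # bowler-like profiles
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(players)
print(kmeans.labels_)           # which cluster each player was assigned to
print(kmeans.cluster_centers_)  # the final centroid of each cluster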

Applications of K-Means Clustering

K-Means clustering is used in a variety of real-life examples and business cases, such as:

  • Academic performance 
  • Diagnostic systems 
  • Search engines 
  • Wireless sensor networks

Academic Performance

Based on the scores, students are categorized into grades like A, B, or C. 

Diagnostic Systems

The medical profession uses k-means in creating smarter medical decision support systems, especially in the treatment of liver ailments.

Search Engines

Clustering forms a backbone of search engines. When a search is performed, the search results need to be grouped, and the search engines very often use clustering to do this. 

Wireless Sensor Networks

The clustering algorithm plays the role of finding the cluster heads, each of which collects all the data in its respective cluster.

Distance Measure 

The distance measure determines the similarity between two elements and influences the shape of the clusters.

K-Means clustering supports various kinds of distance measures, such as: 

  • Euclidean distance measure
  • Squared Euclidean distance measure
  • Manhattan distance measure
  • Cosine distance measure

Euclidean Distance Measure 

The most common case is determining the distance between two points. If we have a point P and a point Q, the Euclidean distance is the length of the ordinary straight line between them; it is the distance between the two points in Euclidean space.

Writing the coordinates of P and Q as (p1, ..., pn) and (q1, ..., qn), the formula for the distance between the two points is:

d(P, Q) = \sqrt{\sum_{i=1}^{n} (q_i - p_i)^2}

Squared Euclidean Distance Measure

This is identical to the Euclidean distance measure but does not take the square root at the end:

d^2(P, Q) = \sum_{i=1}^{n} (q_i - p_i)^2

Manhattan Distance Measure

The Manhattan distance is the simple sum of the horizontal and vertical components or the distance between two points measured along axes at right angles.

Note that we are taking the absolute value so that the negative values don't come into play. 

The formula is:

d(P, Q) = \sum_{i=1}^{n} |q_i - p_i|

Cosine Distance Measure

In this case, we take the angle between the two vectors formed by joining the points to the origin. The cosine of that angle is:

\cos\theta = \frac{\sum_{i=1}^{n} p_i q_i}{\sqrt{\sum_{i=1}^{n} p_i^2} \sqrt{\sum_{i=1}^{n} q_i^2}}

The cosine distance is then 1 - \cos\theta, so two points lying in the same direction from the origin have a distance of 0.
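
As a quick sketch, the four measures above translate directly into a few lines of NumPy (p and q are equal-length vectors; the function names are our own):

import numpy as np

def euclidean(p, q):
    # Straight-line distance between p and q
    return np.sqrt(np.sum((np.asarray(p) - np.asarray(q)) ** 2))

def squared_euclidean(p, q):
    # Same as euclidean, without the final square root
    return np.sum((np.asarray(p) - np.asarray(q)) ** 2)

def manhattan(p, q):
    # Sum of absolute component-wise differences
    return np.sum(np.abs(np.asarray(p) - np.asarray(q)))

def cosine_distance(p, q):
    # 1 - cos(theta), where theta is the angle between the two vectors
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 1.0 - np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))

print(euclidean([0, 0], [3, 4]))  # 5.0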

How Does K-Means Clustering Work?

At a high level, k-means clustering follows a simple loop: choose K, place the centroids, assign each point to its nearest centroid, reposition the centroids, and repeat until nothing moves.

The goal of the K-Means algorithm is to find clusters in the given input data. There are a couple of ways to accomplish this. We can use the trial-and-error method, specifying a value of K (e.g., 3, 4, or 5) and changing it as we progress until we get the best clusters.

Another method is to use the elbow technique to determine the value of K. Once we have the value of K, the system assigns that many centroids randomly and measures the distance of each data point from these centroids. Each point is assigned to the centroid from which its distance is minimum, i.e., the centroid closest to it. This gives us K initial clusters.

For the newly formed clusters, the algorithm calculates the new centroid position of each; the centroids move compared to the randomly allocated ones.

Once again, the distance of each point is measured from these new centroids. If required, the data points are relocated to the new centroids, and the mean position, or new centroid, is calculated once again.

If the centroids move, the iteration continues, indicating that the algorithm has not yet converged. Once the centroids stop moving (which means that the clustering process has converged), the final clusters are the result.

Let's use a visualization example to understand this better. 

We have a data set for a grocery shop, and we want to find out how many clusters it should be spread across. To find the optimum number of clusters, we break the process down into the following steps:

Step 1:

The elbow method is the best way to find the number of clusters. It consists of running K-Means clustering on the dataset for a range of values of K.

Next, we use the within-cluster sum of squares (WSS) as a measure to find the optimum number of clusters for a given data set. WSS is defined as the sum of the squared distances between each member of a cluster and its centroid.

The WSS is measured for each value of K. Because WSS always decreases as K grows, the optimum value is not simply the K with the least WSS; instead, we look for the "elbow", the value of K beyond which WSS stops dropping sharply.

Now, we draw a curve of WSS against the number of clusters, with WSS on the y-axis and the number of clusters on the x-axis.

The curve falls steeply at first and then changes very gradually as the K value increases beyond the elbow.

So, you can take the elbow-point value as the optimal value of K; here it should be two, three, or at most four. Beyond that, increasing the number of clusters does not dramatically change the WSS; it stabilizes.
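
Here is a minimal elbow-method sketch using scikit-learn, whose inertia_ attribute is exactly the WSS of a fitted model (the data is randomly generated around three centers purely for illustration):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Illustrative 2-D data scattered around three loose centers
data = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2))
                  for c in [(0, 0), (4, 4), (0, 5)]])

wss = []
for k in range(1, 9):
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(data)
    wss.append(model.inertia_)  # within-cluster sum of squares for this K

for k, w in zip(range(1, 9), wss):
    print(k, round(w, 1))
# Plotting WSS against K and picking the elbow suggests K = 3 for this data.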

Step 2:

Let's assume these are our delivery points, plotted on a map.

We randomly initialize two points, C1 and C2, called the cluster centroids.

Step 3:

Now the distance of each location from each centroid is measured, and every data point is assigned to the centroid closest to it. This completes the initial grouping.

Step 4: 

Compute the actual centroid of data points for the first group.

Step 5:

Reposition the random centroid to the actual centroid.


Step 6: 

Compute the actual centroid of data points for the second group.

Step 7:

Reposition the random centroid to the actual centroid.


Step 8: 

Once the clusters become static, the k-means algorithm is said to have converged.

The final clusters are then fixed, with the centroids C1 and C2 sitting at the true centers of their respective groups.

K-Means Clustering Algorithm 

Let's say we have x1, x2, x3, ..., xn as our inputs, and we want to split them into K clusters.

The steps to form clusters are:

Step 1: Choose K random points as cluster centers called centroids. 

Step 2: Assign each x(i) to the closest cluster by calculating its Euclidean distance to each centroid.

Step 3: Identify new centroids by taking the average of the points assigned to each cluster.

Step 4: Keep repeating step 2 and step 3 until convergence is achieved.

Let's take a detailed look at each of these steps.

Step 1:

We randomly pick K cluster centers (centroids). We name them c1, c2, ..., ck, and we can say that

C = \{c_1, c_2, \ldots, c_k\}

where C is the set of all centroids.

Step 2:

We assign each data point to its nearest centroid, which is accomplished by calculating the Euclidean distance: each point x is assigned to the centroid given by

\arg\min_{c_i \in C} \, \mathrm{dist}(c_i, x)^2

where dist() is the Euclidean distance.

Here, we calculate the distance of each x value from each c value, i.e., the distance between x1 and c1, between x1 and c2, between x1 and c3, and so on. We then find the lowest of these values and assign x1 to that particular centroid.

Similarly, we find the minimum distance for x2, x3, etc. 

Step 3: 

We identify the actual centroid by taking the average of all the points assigned to that cluster. 

c_i = \frac{1}{|S_i|} \sum_{x_j \in S_i} x_j

where S_i is the set of all points assigned to the i-th cluster.

This means the original point, which we thought was the centroid, shifts to a new position: the actual centroid of its group.

Step 4:

Keep repeating step 2 and step 3 until convergence is achieved.
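
Putting the four steps together, the sketch below is a bare-bones NumPy implementation of the algorithm described above. It is for illustration only; it uses simple random initialization and a fixed iteration cap, whereas production libraries use more careful seeding such as k-means++:

import numpy as np

def kmeans(X, k, max_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: choose K random data points as the initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iters):
        # Step 2: assign each point to its nearest centroid (Euclidean distance)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        # Step 3: recompute each centroid as the mean of its assigned points
        new_centroids = np.array([
            X[labels == i].mean(axis=0) if np.any(labels == i) else centroids[i]
            for i in range(k)
        ])
        # Step 4: stop once the centroids no longer move (convergence)
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0],
              [8.0, 8.0], [1.0, 0.6], [9.0, 11.0]])
labels, centroids = kmeans(X, k=2)
print(labels)
print(centroids)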

Demo: K-Means Clustering

Problem Statement - Walmart wants to open a chain of stores across the state of Florida, and it wants to find the optimal store locations to maximize revenue.

The issue here is that if they open too many stores close to one another, they will not make a profit; but if the stores are too far apart, they will not have enough sales coverage.

Solution - Walmart is a retail giant that already has the addresses of its customers in its database, so it can use this information to perform K-Means clustering and find the optimal store locations.

You can see how it is performed in a Python notebook in the video accompanying this article.
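
The notebook itself is not reproduced here, but a sketch of the idea might look like the following, under stated assumptions: customer locations are represented as made-up (latitude, longitude) pairs, K = 5 is assumed to have come from the elbow method, and each cluster centroid is read as a candidate store location. A real analysis would use proper geographic distances rather than raw coordinates:

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customer coordinates across Florida (latitude, longitude)
rng = np.random.default_rng(42)
customers = np.column_stack([
    rng.uniform(25.0, 31.0, size=500),    # approximate latitude range of Florida
    rng.uniform(-87.6, -80.0, size=500),  # approximate longitude range of Florida
])

# Suppose the elbow method suggested K = 5 stores
model = KMeans(n_clusters=5, n_init=10, random_state=0).fit(customers)
print(model.cluster_centers_)  # candidate store locations (cluster centroids)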

Explore the concepts of Machine Learning and understand how it’s transforming the digital world with the Machine Learning Certification Course. Enroll now!

Conclusion 

Considered the job of the future, Machine Learning engineers are in demand as well as highly paid. A report by Indeed reveals a remarkable increase in the demand for Machine Learning engineers worldwide. So why restrict your learning to clustering alone? Enroll in a Machine Learning Certification Course and expand your knowledge of the broader concepts of Machine Learning. Get certified and become part of the Artificial Intelligence talent that companies constantly look for.

About the Author

Simplilearn

Simplilearn is one of the world’s leading providers of online training for Digital Marketing, Cloud Computing, Project Management, Data Science, IT, Software Development, and many other emerging technologies.
