Harold Seward developed counting sort in 1954. Counting sort is a linear-time sorting algorithm with asymptotic complexity O(n + k), where n is the number of elements and k is the range of key values. It is a fast and reliable sorting algorithm. Unlike bubble sort and merge sort, counting sort is not a comparison-based algorithm: it avoids comparisons entirely and instead takes advantage of O(1) array indexing to count and place keys directly.

The counting sort algorithm sorts keys that are small integers within a known range. It works by counting the number of elements that have each distinct key value, then using arithmetic on those counts (prefix sums) to compute each key's position in the sorted output.

## What Is a Counting Sort Algorithm?

- Counting sort is an integer sorting algorithm used in computer science to sort collections of objects according to keys that are small non-negative integers.
- It works by counting the number of objects that have each distinct key value, then applying a prefix sum over those counts to determine the position of each key value in the output sequence.
- Because its running time is proportional to the number of items plus the difference between the maximum and minimum key values, it is only suited for direct use when the range of keys is not significantly larger than the number of items.
- It is frequently used as a subroutine in radix sort, a sorting method that handles larger keys more efficiently.

Now that you know what the counting sort algorithm is, the next section walks through how it works.

## Working of the Counting Sort Algorithm

Counting sort works by iterating through the input, counting the number of times each value appears, and using those counts to compute each element's index in the final, sorted array.

- Consider a given array that needs to be sorted, for example the nine-element array [1, 4, 1, 2, 7, 5, 2, 4, 3]. First, find the largest element in the array and call it max (here, max = 7).

- To hold the tallies, initialize a new count array of length max + 1 with all elements set to 0.

- Next, walk through the given array and, for each element, increment the entry at the corresponding index of the count array. For the example, the counts for the values 0 through 7 become [0, 2, 2, 1, 2, 1, 0, 1].

- Now change the count array into a cumulative sum by adding each previous count: [0, 2, 4, 5, 7, 8, 8, 9]. Each entry now marks where that value ends in the sorted output.

- Because the original array has nine inputs, create another empty array with nine places to store the sorted data. Iterating over the input from back to front, place each element at the index given by its cumulative count minus one, and then reduce that count by one.

- As a result, the sorted array is [1, 1, 2, 2, 3, 4, 4, 5, 7].

## Pseudo Code and Algorithm of Counting Sort Algorithm

### The Pseudocode of Counting Sort

- Iterate through the input array to find the highest value, max.
- Declare a new auxiliary (count) array of size max + 1, with every value set to 0.
- For each element in the input array, increment the value at the corresponding index of the auxiliary array.
- Add each current frequency to the previous one so the auxiliary array holds cumulative sums.
- Each cumulative value now represents the element's position (one past it) in the sorted output array.
- Iterate over the input array from the last element back to the first.
- Place each element at the index given by its cumulative count minus one, then reduce that count by 1, so that a repeated element, if it exists, lands in the position just before it.
- Finally, copy the sorted array you obtained in the previous step back into the actual input array.

### The Algorithm of Counting Sort

```
Counting_Sort(array, ele)                  // ele is the number of elements in the array
    max <- largest element of array
    create a count array of size max + 1, filled with 0s
    for i = 0 to ele - 1
        count[array[i]] <- count[array[i]] + 1    // occurrences of each element
    for i = 1 to max
        count[i] <- count[i] + count[i - 1]       // cumulative sum of counts
    for j = ele - 1 down to 0
        output[count[array[j]] - 1] <- array[j]   // copy element to its position
        count[array[j]] <- count[array[j]] - 1    // then decrement its count
    copy output back into array
```

Having seen the pseudocode and the algorithm, you will now look at the complexity of counting sort.

## The Complexity of the Counting Sort Algorithm

### Counting Sort Time Complexity

- Finding the maximum element, say k, takes O(n) time.
- Initializing the count array takes O(k) time.
- Computing the cumulative sums over the count array takes another O(k) time.
- The actual sorting is then done by iterating over the input array linearly, in O(n) time.
- Because all of these stages run regardless of how the input array is arranged, the best, average, and worst case time complexities are all O(n + k).

#### Best Case

The best case occurs when all items lie in a very small range, for instance when k is 1. In this scenario, counting the occurrences of each element in the input range takes constant time, and finding the correct index of each element in the sorted output array takes O(n) time, resulting in a total time complexity of O(1 + n), i.e. O(n), which is linear.

#### Worst Case

The worst case for time complexity arises with skewed data, where the largest element is much larger than the other elements. This widens the range k.

Because the algorithm's time complexity is O(n + k), when k is of the order O(n^2) the time complexity becomes O(n + n^2), which reduces to O(n^2). When k is of the order O(n^3), the time complexity becomes O(n + n^3), which reduces to O(n^3). The time complexity is therefore effectively O(k) for such large values of k, and for even larger values of k it gets worse still.

As a result, the worst-case time complexity occurs when the range k of the keys is large.

#### Average Case

To estimate the average case time complexity, fix n and let k take various values from 1 upward; the average contribution of k works out to about (k + 1)/2, giving an average cost of n + (k + 1)/2. As k grows toward infinity, the k term dominates. Similarly, varying n shows that n and k are equally dominant terms, so the average case is O(n + k).

### Counting Sort Space Complexity

The procedure above employed an auxiliary count array of size k + 1, where k is the maximum element in the given array. As a result, the counting sort algorithm has a space complexity of O(k).

#### Space Complexity: O(k)

The greater the range of elements in a particular array, the greater the space complexity; counting sort's space complexity is therefore poor when the range of integers is very large, since an auxiliary array of that size must be created.

You will now compare counting sort with various other sorting methods.

## Comparison With Other Sorting Algorithms

- When the largest key value k is not substantially larger than the length of the input list, counting sort runs in O(n) time. In contrast, any comparison-based sorting algorithm requires O(n log n) comparisons.
- Counting sort is particularly efficient when the range of input values is comparable to the number of input elements, since it then sorts in linear time. This can be an advantage over algorithms such as quicksort: where quicksort takes O(n^2) time in the worst case, counting sort takes only O(n) time, provided the range of elements is not too vast.
- Many simple sorting algorithms run in quadratic O(n^2) time, while comparison sorts such as heap sort and merge sort run in O(n log n) time; counting sort is one of the few methods that runs in linear time.
- Because no items are compared, it can outperform comparison-based sorting approaches.
- Because counting sort is good for sorting well-defined, finite, and small keys, it is used as a subroutine in other sorting algorithms such as radix sort, which is suitable for sorting numbers with a wide range.

Moving ahead, this article now looks at several uses of the counting sort algorithm.

## Applications of Counting Sort Algorithm

- Counting sort is efficient when the range of the input data is not much bigger than the number of objects to be sorted, for example when the input values all fall within a range such as 1 to 10K.
- It isn't a comparison-based sorting algorithm. It has an O(n + k) running time, with space proportional to the range of the data.
- It's frequently used as a subroutine in other sorting algorithms, such as radix sort.
- Counting sort counts the occurrences of each data object in O(1) using partial hashing: the key itself serves as the index into the count array.
- Counting sort can also be used with negative inputs.

Finally, in this tutorial, you will implement code for counting sort.

## Code Implementation of Counting Sort Algorithm

```c
#include <stdio.h>

#define MAX_N 100   /* maximum number of elements supported  */
#define MAX_K 1000  /* maximum key value supported           */

void counting_sort(int array[], int k, int n) {
    int i, j;
    int output[MAX_N];
    int count[MAX_K + 1];

    for (i = 0; i <= k; i++)          /* initialize counts to 0           */
        count[i] = 0;
    for (j = 0; j < n; j++)           /* count occurrences of each value  */
        count[array[j]]++;
    for (i = 1; i <= k; i++)          /* prefix sums give final positions */
        count[i] += count[i - 1];
    for (j = n - 1; j >= 0; j--) {    /* place elements stably, back to front */
        output[count[array[j]] - 1] = array[j];
        count[array[j]]--;
    }
    printf("Sorted array is: ");
    for (i = 0; i < n; i++)
        printf("%d ", output[i]);
    printf("\n");
}

int main(void) {
    int n, k = 0, array[MAX_N], i;
    printf("Enter the number of elements: ");
    scanf("%d", &n);
    printf("Enter the elements to be sorted:\n");
    for (i = 0; i < n; i++) {
        scanf("%d", &array[i]);
        if (array[i] > k)
            k = array[i];
    }
    counting_sort(array, k, n);
    return 0;
}
```

With this, you have come to the end of this counting sort algorithm article.

Advance your career as a MEAN stack developer with the Full Stack Web Developer - MEAN Stack Master's Program. Enroll now!

## Next Steps

In this tutorial, you learned what a counting sort algorithm is and some applications and code implementations of the counting sort algorithm.

The Post Graduate Program in Full Stack Web Development from Simplilearn offered in collaboration with Caltech CTME will be ideal for you if you want a more comprehensive study that goes beyond Data Structure and algorithms and covers the most in-demand programming languages and skills today. It's time to get out there and explore.

Do you have any questions about this Counting Sort Algorithm tutorial? Please leave them in the comments section at the bottom of this page if you have any. Our experts will respond as quickly as possible!