An algorithm's space and time complexity measure how effective it is. There is usually more than one way to solve a programming problem, and understanding how efficiently each candidate algorithm runs adds real value to your programming. Evaluating a program or algorithm using space and time complexity tells you whether it will perform well under the conditions you care about, and makes you a more efficient programmer.

## What Is Complexity?

Complexity measures how an algorithm's resource use (in this case, time) changes as the problem grows in size. An algorithm may run quickly and show no noticeable time difference on small inputs, but as the input size rises the program may take much longer to execute, become sluggish, and perform poorly; this is exactly what complexity analysis captures.
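To make this concrete, here is a small sketch (not part of the original tutorial; the class and method names are illustrative) that times the same linear scan for growing input sizes. On typical hardware, each tenfold increase in n produces roughly a tenfold increase in elapsed time:

```csharp
using System;
using System.Diagnostics;

class GrowthDemo
{
    // Sums an array with one step per element, so its work grows with the input size.
    public static long SumArray(int[] arr)
    {
        long sum = 0;
        foreach (int v in arr)
            sum += v;
        return sum;
    }

    static void Main()
    {
        // Time the same operation for increasingly large inputs.
        foreach (int n in new[] { 1_000_000, 10_000_000, 100_000_000 })
        {
            int[] data = new int[n];
            var sw = Stopwatch.StartNew();
            SumArray(data);
            sw.Stop();
            Console.WriteLine($"n = {n,11}: {sw.ElapsedMilliseconds} ms");
        }
    }
}
```

The absolute timings depend on your machine; what matters is the trend as n grows.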

With those technical definitions in place, take a closer look at Big-O notation.

## What Are Big-O Notations?

- Big-O notation is used to analyze the performance/complexity of an algorithm in theoretical terms.
- Big-O notation describes an algorithm's upper bound on performance, i.e., its worst-case behavior.
- Big-O notation also describes asymptotic behavior: how the algorithm performs as the input size grows extremely large.
- The computational-complexity asymptote O(f) measures the order of resources used (CPU time, RAM, etc.) as a function of the size of the input data.
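As an illustration of what "order of growth" means, compare two ways of summing the integers 1 through n (this sketch is not from the original text): a loop that does one addition per element, and Gauss's closed-form formula, which takes the same few steps no matter how large n is:

```csharp
using System;

class SumComparison
{
    // O(n): one addition per element, so the work grows linearly with n.
    public static long SumLoop(int n)
    {
        long total = 0;
        for (int i = 1; i <= n; i++)
            total += i;
        return total;
    }

    // O(1): Gauss's formula n * (n + 1) / 2 takes a fixed number of steps for any n.
    public static long SumFormula(int n)
    {
        return (long)n * (n + 1) / 2;
    }

    static void Main()
    {
        Console.WriteLine(SumLoop(100));    // 5050
        Console.WriteLine(SumFormula(100)); // 5050
    }
}
```

Both return the same answer, but Big-O distinguishes them: double n and the loop does twice the work, while the formula's cost does not change.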

Now, you will explore the various time complexities.

## Various Time Complexities

There are five common time complexity cases:

- Constant Time Complexity - O(1)
- Logarithmic Time Complexity - O(log n)
- Linear Time Complexity - O(n)
- O(n log n) Time Complexity
- Quadratic Time Complexity - O(n²)

Now, see these in detail.

## Constant Time Complexity - O(1)

If an algorithm's running time does not vary and remains constant as the input size increases, the algorithm is said to have O(1) complexity. The size of the input does not affect it: the algorithm takes a fixed number of steps to complete the operation, independent of the quantity of input data.

Code:

```csharp
using System;

namespace DSAComplexity
{
    class Program
    {
        // Adding two fixed numbers takes the same number of steps
        // no matter what, so this is O(1).
        static void ConstantTimeComplexity()
        {
            int a = 100;
            int b = 50;
            int sum = a + b;
            Console.WriteLine(sum);
        }

        static void Main(string[] args)
        {
            ConstantTimeComplexity();
        }
    }
}
```

This was the implementation of Constant Time Complexity. Next, you will see O(log n) Time Complexity.

## Logarithmic Time Complexity - O(log n)

An algorithm has logarithmic complexity when it divides the problem into smaller pieces on each iteration. It takes about log(N) steps to execute a particular operation on N items, where the logarithm base is usually 2. Because changing the base only changes the step count by a constant factor, the base does not affect the order of growth and is usually omitted.

Code:

```csharp
// C# implementation of iterative binary search
using System;

namespace DSAComplexity
{
    class Program
    {
        // Returns the index of x if it is present in arr[],
        // otherwise returns -1.
        static int binarySearch(int[] arr, int x)
        {
            int l = 0, r = arr.Length - 1;
            while (l <= r)
            {
                int m = l + (r - l) / 2;

                // Check if x is present at mid
                if (arr[m] == x)
                    return m;

                // If x is greater, ignore the left half
                if (arr[m] < x)
                    l = m + 1;
                // If x is smaller, ignore the right half
                else
                    r = m - 1;
            }

            // If we reach here, the element was not present
            return -1;
        }

        // Driver method to test the search above
        public static void Main()
        {
            int[] arr = { 2, 3, 4, 10, 40 };
            int x = 10;

            int result = binarySearch(arr, x);
            if (result == -1)
                Console.WriteLine("Element not present");
            else
                Console.WriteLine("Element found at index " + result);
        }
    }
}
```

That was the implementation of O(log n) Time Complexity, and now move on to look at Linear Time Complexity.

## Linear Time Complexity - O(n)

When an algorithm executes n steps for an input of size n, and the time it consumes grows linearly as the input size rises, the algorithm is said to have O(n) complexity. Executing an operation on N items takes roughly as many steps as there are elements: linear complexity denotes a direct proportionality between the number of elements and the number of steps.

Code:

```csharp
// C# implementation of linear time complexity
using System;

namespace DSAComplexity
{
    class Program
    {
        public static void Main()
        {
            // Printing each element visits every item exactly once, so this is O(n).
            string[] cloth = { "Tie", "Shirt", "Pants", "Coat", "Shorts" };
            foreach (string i in cloth)
            {
                Console.WriteLine(i);
            }
        }
    }
}
```

So that was the implementation of Linear Time Complexity. Next, you will explore O(n log n) Time Complexity.

## O(n log n) Time Complexity

An algorithm with O(n log n) complexity repeatedly divides the problem into smaller chunks, then processes each of the smallest pieces and stitches them back together. Executing such an operation on N items takes on the order of N * log(N) steps.

Code:

```csharp
using System;

namespace DSAComplexity
{
    class Program
    {
        // Note: this simple partition scheme assumes distinct elements;
        // equal values can cause the swap loop below to repeat forever.
        static public int Partition(int[] arr, int lt, int rt)
        {
            int pivot = arr[lt];
            while (true)
            {
                // Skip past elements already on the correct side of the pivot.
                while (arr[lt] < pivot)
                    lt++;
                while (arr[rt] > pivot)
                    rt--;

                if (lt < rt)
                {
                    int temp = arr[rt];
                    arr[rt] = arr[lt];
                    arr[lt] = temp;
                }
                else
                {
                    return rt;
                }
            }
        }

        static public void quickSort(int[] arr, int lt, int rt)
        {
            if (lt < rt)
            {
                int pivot = Partition(arr, lt, rt);
                if (pivot > 1)
                    quickSort(arr, lt, pivot - 1);
                if (pivot + 1 < rt)
                    quickSort(arr, pivot + 1, rt);
            }
        }

        static void Main(string[] args)
        {
            int[] arr = { 57, 2, 85, 46, 75, 1, 90, 13, 50, 9 };
            int n = arr.Length;

            Console.WriteLine("Quick Sort");
            Console.Write("Initial array is: ");
            for (int i = 0; i < n; i++)
                Console.Write(arr[i] + " ");

            quickSort(arr, 0, n - 1);

            Console.Write("\nSorted Array is: ");
            for (int i = 0; i < n; i++)
                Console.Write(arr[i] + " ");
        }
    }
}
```

And that was the implementation of O(n log n) Time Complexity. Next, you will look at the Quadratic Time Complexity.

## Quadratic Time Complexity - O(n²)

An algorithm is said to have quadratic complexity when, for input size n (where n is very large), it executes on the order of n² steps, so its running time grows quadratically as the input size increases.

For a particular operation, it takes on the order of N² steps, where N is the size of the input data.

Quadratic complexity arises when the number of steps is proportional to the square of the size of the input data, typically when one loop is nested inside another.

Code:

```csharp
using System;

namespace DSAComplexity
{
    class Program
    {
        public static void Main()
        {
            // Nested loops over the input size: the inner loop runs up to i times,
            // giving on the order of n² iterations overall.
            for (int i = 0; i < 4; i++)
                for (int j = 1; j < i; j++)
                    Console.WriteLine("Subscribe to Simplilearn!!!");
        }
    }
}
```
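The nested-loop pattern above is exactly what makes classic algorithms such as bubble sort quadratic. As an additional sketch (bubble sort is not covered in the original tutorial, and the names here are illustrative), each of roughly n passes performs up to n − 1 neighbour comparisons:

```csharp
using System;

class BubbleSortDemo
{
    // Bubble sort: repeatedly swap adjacent out-of-order pairs.
    // About n passes of up to n - 1 comparisons each, so O(n²) overall.
    public static void BubbleSort(int[] arr)
    {
        for (int i = 0; i < arr.Length - 1; i++)
            for (int j = 0; j < arr.Length - 1 - i; j++)
                if (arr[j] > arr[j + 1])
                {
                    int temp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = temp;
                }
    }

    static void Main()
    {
        int[] arr = { 57, 2, 85, 46, 75 };
        BubbleSort(arr);
        Console.WriteLine(string.Join(" ", arr)); // 2 46 57 75 85
    }
}
```

Compare this with the O(n log n) quicksort earlier: for 10 elements the difference is negligible, but for a million elements the quadratic algorithm does thousands of times more work.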

With this, you have a good grip on the technical aspects of Data structures and Algorithm Complexity.

## Next Steps

The next lesson in your C# training can be "Conditionals in C#." Conditional statements in C# are used when you wish to perform a certain action based on a given condition. Decision-making statements evaluate one or more conditions and run one set of statements if a condition evaluates to true, and optionally another set if it evaluates to false.

Simplilearn is the world's most popular online Bootcamp for developing skills in the digital economy, and it's here to assist you in doing just that. Digital marketing and data science are just two of the topics we cover extensively in our online courses.

Simplilearn teamed up with the Caltech CTME and the Indian Institute of Technology, Kanpur to deliver its Software Development courses. These courses cover the fundamentals of data structures and algorithms in addition to more advanced subjects such as competitive programming. As a software engineer, you will learn about data structures such as trees, graphs, and queues.

The comments section below is open for questions and comments about this 'Data Structure and Algorithm Complexity' tutorial. Happy learning!