TL;DR: Big O notation is a mathematical framework for describing how an algorithm's time or memory requirements grow as input size increases, typically focusing on the worst case.

What is Big O Notation?

Big O notation is a mathematical notation, widely used in computer science, that characterizes the worst-case complexity of an algorithm, that is, how its running time or memory usage grows, as the input size n approaches infinity.

Instead of recording precise runtimes, it captures the rate of growth, which gives you a way to reason about efficiency at scale.

That said, it's important to understand what Big O is not telling you. It does not represent the run time of an algorithm on your machine. Algorithms with the same Big O classification can still behave very differently in practice. Many factors affect the real-world performance, such as:

  • Hardware
  • Compiler optimizations
  • Cache behavior
  • Even the underlying values of your data

Big O abstracts all that away and reveals the big picture, which becomes much clearer when you map the most common Big O complexities against each other.

Big O Complexity Chart

As datasets grow, algorithms behave very differently. Some remain efficient even with huge inputs, while others quickly become slow and impractical. Big O time complexity provides a way to predict this by describing how an algorithm scales as input sizes increase.

The chart below maps out the most common Big O complexities, ranked from most to least efficient:

| Big O Complexity | Growth Rate | What Happens as n Grows | Typical Example |
|---|---|---|---|
| O(1) | Constant | Runtime stays the same regardless of input size | Accessing an array element by index |
| O(log n) | Logarithmic | Work grows very slowly as input increases | Binary search |
| O(n) | Linear | Runtime increases directly with input size | Iterating through a list |
| O(n log n) | Linearithmic | Slightly faster growth than linear, but still efficient | Merge sort, heap sort |
| O(n²) | Quadratic | Work grows rapidly as input increases | Bubble sort, nested loops |
| O(2ⁿ) | Exponential | Runtime doubles with each additional input element | Recursive Fibonacci |
| O(n!) | Factorial | Growth explodes extremely fast | Brute-force permutation generation |

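The growth rates in the chart can be made concrete with a few minimal Python sketches (the function names here are my own, chosen for illustration):

```python
def constant_access(items):
    # O(1): one step no matter how long the list is
    return items[0]

def linear_sum(items):
    # O(n): touches every element exactly once
    total = 0
    for x in items:
        total += x
    return total

def quadratic_pairs(items):
    # O(n^2): a loop nested inside a loop; n * n iterations
    pairs = []
    for a in items:
        for b in items:
            pairs.append((a, b))
    return pairs

def exponential_fib(n):
    # O(2^n): each call branches into two more recursive calls
    if n < 2:
        return n
    return exponential_fib(n - 1) + exponential_fib(n - 2)
```

For example, `quadratic_pairs` on a 3-element list produces 9 pairs, while on a 1,000-element list it would produce 1,000,000, which is exactly the quadratic blow-up the chart describes.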
The gap between these complexity classes isn't just academic. At n = 1,000,000, an O(n²) algorithm performs on the order of 10¹² operations, while an O(n log n) solution needs only around 2 × 10⁷, a difference of nearly five orders of magnitude.

No amount of code-level optimization later can make up for choosing the wrong complexity class at the start.

Big O Notation Examples

Let's look at some examples to see how Big O Notation simplifies functions as input size grows:

  • Example 1: Simplifying a Polynomial Function

Imagine you have a function f(x) = 6x⁴ − 2x³ + 5

To simplify this using Big O Notation, you look at the terms and pick the one that grows the fastest as x gets bigger. In this case, 6x⁴ is the term that grows the quickest, while −2x³ and 5 become less important. The constant 6 in 6x⁴ doesn't change the growth rate, so we drop it. Thus, f(x) can be simplified to x⁴, and we say f(x) is O(x⁴).
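
This dominance can be checked numerically: a quick sketch showing that f(x) / x⁴ approaches the constant 6 as x grows, which is exactly why the lower-order terms get dropped.

```python
# f(x) = 6x^4 - 2x^3 + 5: the ratio f(x) / x^4 approaches 6 as x grows,
# confirming that 6x^4 dominates and the other terms become negligible.
def f(x):
    return 6 * x**4 - 2 * x**3 + 5

for x in (10, 100, 1000):
    print(x, f(x) / x**4)  # ratios climb toward 6
```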

  • Example 2: Simplifying a Product of Factors

Consider the function f(x) = 3x²(2x + 1)

First, expand this function to get 6x³ + 3x². Here, the term 6x³ grows faster than 3x² as x increases. When using Big O Notation, you ignore constants like 6 and focus on the term with the highest growth rate. So, f(x) simplifies to x³, and we say f(x) is O(x³).

  • Example 3: Simplifying a Sum of Functions

Take the function f(x) = 4x³ + 7x² + 12

In this case, the Big O Notation is asking a single question: what part of this function is going to count when x is really large? The answer is 4x³; the other two terms drop off as x increases. The constant 4 doesn't change the shape of that growth, so we drop it too. Result: f(x) is O(x³). It's not that 7x² and 12 are mathematically wrong; they just stop being relevant when the dominant term takes over.

  • Example 4: Simplifying a Nested Loop Expression

Say an algorithm produces a runtime expression of f(n) = 3n² + 5n + 20

You would generally expect to see this in cases where a nested loop does the heavy lifting, with some additional linear or constant work on the side. At small n, each of the three terms appears to contribute. But push n to a few thousand, and 3n² is running laps around the others. Strip the constant, drop the slower terms, and you get n², so f(n) is O(n²).
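
A hypothetical instrumented version of such a nested loop makes this visible: count the steps and watch the total roughly quadruple when n doubles, the signature of quadratic growth.

```python
# Count the "work" done by a nested loop plus a linear pass:
# total operations = n^2 + n, dominated by the n^2 term.
def count_ops(n):
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1   # inner-loop body runs n * n times
    for i in range(n):
        ops += 1       # extra linear pass: n more steps
    return ops

print(count_ops(100))  # 10100
print(count_ops(200))  # 40200 (about 4x: quadratic growth)
```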

How to Calculate Big O?

Computing the Big O notation for an algorithm is not as daunting as it sounds once you know what to look for. The process comes down to a few straightforward steps.

1. Count Your Operations

Go through your code and identify how many steps it performs as a function of the input size n. A single operation that runs once regardless of input is O(1). A loop that iterates through every element is O(n). A loop nested inside another loop is O(n²). The structure of your code usually tells the story pretty clearly.

2. Focus on the Worst Case

Always assume that the input will cause your algorithm to work as hard as it can. When you are going through a list, assume the item you are searching for is at the end. Best-case scenarios are always good to know; however, they don't predict how the code will actually perform in real conditions.

3. Drop the Constants

If a loop runs 4n times, that's still O(n). If something runs 200 times flat, that's still O(1). Constants get stripped out because they don't change the fundamental shape of the growth; they just shift the line up or down slightly, which doesn't matter at scale.

4. Keep Only the Dominant Term

If your analysis produces something like n² + 5n + 100, drop everything except n². As n scales up, those smaller terms shrink into insignificance compared to the leading one. Holding onto them would just add noise without adding accuracy.

"What's My Big O?" - Quick Complexity Classifier

Ask these questions about your code in order:

  • Does it run the same number of steps regardless of input? → O(1)
  • Does it halve the problem at each step? → O(log n)
  • Does it loop through the input once? → O(n)
  • Does it sort efficiently by dividing and merging? → O(n log n)
  • Does it use a loop inside a loop? → O(n²)
  • Does it branch into two sub-problems recursively? → O(2ⁿ)
  • Does it generate every possible permutation? → O(n!)

Time Complexity vs Space Complexity

When evaluating how efficient an algorithm is, two factors consistently come up: how long it takes to run and how much memory it requires.

  • Time complexity refers to the time an algorithm takes to complete its execution as a function of the input size. It helps you understand how an algorithm's runtime scales with different inputs, and it is typically expressed using Big O notation to describe the upper bound of the runtime.
  • Space complexity refers to the amount of memory an algorithm uses during execution as a function of the input size. It helps you understand how much memory an algorithm needs to store data and carry out its operations. Like time complexity, it is expressed using Big O notation to describe the upper bound of memory usage.
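
The trade-off between the two is easy to see in practice. A minimal sketch: the naive recursive Fibonacci runs in O(2ⁿ) time, while caching results (memoization, here via the standard-library `lru_cache`) cuts the time to O(n) at the cost of O(n) extra memory for the cache.

```python
from functools import lru_cache

# Trading space for time: the cache stores every computed result,
# so each fib(k) is evaluated only once instead of exponentially often.
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(90))  # instant with the cache; infeasible without it
```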


Time vs Space Complexity: At a Glance

| Aspect | Time Complexity | Space Complexity |
|---|---|---|
| Measures | How runtime scales with input size | How memory usage scales with input size |
| Expressed as | Big O notation | Big O notation |
| Common examples | O(1), O(n), O(n²) | O(1), O(n), O(n²) |
| Optimization goal | Fewer operations | Less memory usage |
| Trade-off | Faster algorithms often use more memory | Lower memory use can mean more processing |

Big O in Data Structures

Different data structures aren't just different ways of organizing data; they're different performance trade-offs. The one you pick determines how fast your code can search, insert, delete, and access elements at scale when analyzed using Big O notation in data structures.

The following is a quick cheatsheet of the most common data structures and their respective complexities for the four core operations:

| Data Structure | Access | Search | Insertion | Deletion |
|---|---|---|---|---|
| Array | O(1) | O(n) | O(n) | O(n) |
| Dynamic Array (ArrayList) | O(1) | O(n) | O(n) | O(n) |
| Singly Linked List | O(n) | O(n) | O(1) | O(1) |
| Doubly Linked List | O(n) | O(n) | O(1) | O(1) |
| Stack | O(n) | O(n) | O(1) | O(1) |
| Queue | O(n) | O(n) | O(1) | O(1) |
| Hash Table | N/A | O(1) avg | O(1) avg | O(1) avg |
| Binary Search Tree | O(log n) avg | O(log n) avg | O(log n) avg | O(log n) avg |
| Balanced BST (AVL/Red-Black) | O(log n) | O(log n) | O(log n) | O(log n) |
| Heap | O(n) | O(n) | O(log n) | O(log n) |

A few caveats as you read this table. Hash tables look almost too good to be true, and in the average case they live up to it. However, worst-case performance (when collisions accumulate) can degrade to O(n), so they are not automatically the default choice.

The broader takeaway is that no single data structure is best for all operations.

  • Arrays give you instant access, but slow insertion
  • Linked lists flip that trade-off
  • Hash tables are fast at lookup, but don't maintain order

The first step in identifying the right one for your situation is almost always to ask which of these four operations your code performs most frequently, and then to let the complexity numbers guide the decision.
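
A small sketch of that decision in action: if the dominant operation is membership testing, a list searches in O(n) while a set (a hash table under the hood) averages O(1), and a rough timing comparison makes the gap visible.

```python
import timeit

# Same data, two structures; the dominant operation is "is x present?"
data_list = list(range(100_000))
data_set = set(data_list)

needle = 99_999  # worst case for the list: the last element

# Time 100 membership tests against each structure
list_time = timeit.timeit(lambda: needle in data_list, number=100)
set_time = timeit.timeit(lambda: needle in data_set, number=100)
print(f"list: {list_time:.4f}s  set: {set_time:.6f}s")
```

Absolute numbers vary by machine, but the list lookup should be slower by several orders of magnitude, exactly what the O(n) vs O(1) avg rows in the table predict.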


Big O for Common Algorithms

Algorithms also have complexity profiles just as data structures do. Let's look at these three categories briefly in the following cheatsheets:

  • Sorting Algorithms
  • Searching Algorithms
  • Graph Algorithms

Sorting Algorithms

| Algorithm | Best Case | Average Case | Worst Case | Space |
|---|---|---|---|---|
| Bubble Sort | O(n) | O(n²) | O(n²) | O(1) |
| Selection Sort | O(n²) | O(n²) | O(n²) | O(1) |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(1) |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) |
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) | O(1) |
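
To see where the O(n log n) guarantee comes from, here is a minimal sketch of merge sort: split the list in half, sort each half recursively (log n levels of splitting), then merge in linear time at each level.

```python
def merge_sort(items):
    # Base case: a list of 0 or 1 elements is already sorted
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # sort each half recursively
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in O(n)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```

The O(n) space in the table comes from the temporary `merged` lists created during merging.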

Searching Algorithms

| Algorithm | Best Case | Average Case | Worst Case |
|---|---|---|---|
| Linear Search | O(1) | O(n) | O(n) |
| Binary Search | O(1) | O(log n) | O(log n) |
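
Counting comparisons makes the gap between the two rows concrete. A sketch with built-in step counters (binary search assumes the input list is sorted):

```python
def linear_search(items, target):
    # O(n): check elements one by one, counting comparisons
    steps = 0
    for i, x in enumerate(items):
        steps += 1
        if x == target:
            return i, steps
    return -1, steps

def binary_search(items, target):
    # O(log n): halve the search range on every comparison
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid, steps
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

data = list(range(1_000_000))
print(linear_search(data, 999_999)[1])  # 1,000,000 comparisons
print(binary_search(data, 999_999)[1])  # ~20 comparisons
```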

Graph Algorithms

| Algorithm | Time Complexity | Space Complexity |
|---|---|---|
| Breadth-First Search (BFS) | O(V + E)* | O(V) |
| Depth-First Search (DFS) | O(V + E) | O(V) |
| Dijkstra’s Shortest Path | O(V²) or O(E log V) | O(V) |

*V = number of vertices, E = number of edges
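
A sketch of BFS on an adjacency-list graph shows where O(V + E) comes from: each vertex is enqueued at most once, and each edge is examined at most once. The example graph is hypothetical, chosen only for illustration.

```python
from collections import deque

def bfs(graph, start):
    # Visit vertices level by level from the start vertex.
    # O(V + E) time; O(V) space for the visited set and queue.
    visited = [start]
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                visited.append(neighbor)
                queue.append(neighbor)
    return visited

# Hypothetical adjacency list: A -> B, C; B -> D; C -> D
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```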

Big O vs Big Theta (Θ) vs Big Omega (Ω)

| Aspect | Big O (O) | Big Theta (Θ) | Big Omega (Ω) |
|---|---|---|---|
| What it describes | Upper bound | Tight bound | Lower bound |
| What it answers | How bad can it get? | What's the exact growth rate? | How good can it get? |
| Scenario | Worst case | Average/exact case | Best case |
| Practical use | Most common in CS and interviews | Precise academic analysis | Describing minimum performance |
| Example (Linear Search) | O(n) | Θ(n) | Ω(1) |

  • Big O (O) puts a ceiling on performance. It describes the maximum growth of an algorithm’s runtime.
  • Big Omega (Ω) puts a floor on performance. It shows the minimum time an algorithm will take.
  • Big Theta (Θ) pins down the exact growth rate. It applies when the upper and lower bounds match the same function, so the runtime is sandwiched between constant multiples of it.

Example: Linear Search

  • Worst case: The algorithm scans every element → O(n)
  • Best case: The element is found immediately → Ω(1)
  • Average behavior: Runtime grows proportionally with input size → Θ(n)
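
The three bullets above can be demonstrated with a step-counting sketch of linear search: one comparison in the best case (Ω(1)), n comparisons in the worst case (O(n)).

```python
def search_steps(items, target):
    # Count comparisons until the target is found (or the list ends)
    steps = 0
    for x in items:
        steps += 1
        if x == target:
            return steps
    return steps

data = list(range(1, 101))
print(search_steps(data, 1))    # 1 step: best case, matching Omega(1)
print(search_steps(data, 100))  # 100 steps: worst case, matching O(n)
```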

Key Takeaways

  • The Big O notation is straightforward: it describes the upper bound on how complex an algorithm may become as the size of the input approaches infinity
  • The various complexity classes, O(1), O(log n), O(n), O(n²), etc., are not simply labels. They denote dramatic differences in how code behaves in the real world at scale
  • The selection of the appropriate data structure and algorithm at the beginning nearly always counts more than code optimization at a later stage
  • Time and space complexity are two sides of a coin; accelerating an algorithm can result in higher memory costs, and reducing memory can slow the runtime back down

FAQs

1. How do you write big-O?

Big-O is written using the letter O followed by parentheses around the growth rate, such as O(1), O(n), or O(log n). It describes how time or space grows as input size increases.

2. What does big-O stand for?

Big-O refers to the order of growth. It expresses how an algorithm’s running time or memory usage scales with input size, focusing on the dominant trend rather than exact numbers.

3. What's the difference between O(1) and O(n)?

O(1) means constant time, so performance stays the same as input grows. O(n) means linear time, so work increases directly with the number of input elements.

4. Big O vs worst case: are they the same?

Not always. Big-O describes an upper bound on growth, while worst case describes the most expensive scenario. Big-O is often used for worst-case analysis, but the two terms are not identical.

5. What are the common Big O complexities for arrays and hash maps?

Arrays commonly have O(1) access and O(n) search. Hash maps usually have O(1) average lookup, insert, and delete, but O(n) in the worst case due to collisions.
