Big O notation is the language of algorithm efficiency. It describes the upper bound of an algorithm's growth rate (in time or space) as the input size n increases. In plain terms, it tells you how much slower a program gets as its input becomes big.
"Think of it as a way to measure the speed and efficiency of your code."
📌 Why Use Big O?
- To compare the efficiency of algorithms.
- To understand how your code behaves as input size grows.
- To identify and optimize bottlenecks.
- To reason about performance independent of hardware, compiler, or language.
📖 Real-Life Example:
Imagine you're looking for a name in:
- A diary with 10 names – it’s quick.
- A book with 10,000 names – it takes more time.
Big O helps you predict how much time it will take as the list grows.
💡 Big O = Growth Rate
It tells you:
"If I give your code more data (more input), how much more time or memory will it need?"
🧠 Core Idea
Big O answers:
"How does the number of operations grow as input grows?"
It abstracts away constants and low-level details, so we focus only on growth trends.
📊 Key Concepts of Big O

| Concept | Meaning |
|---|---|
| Upper Bound | Big O shows the worst-case scenario. |
| Asymptotic Analysis | Behavior as n → ∞ (arbitrarily large input). |
| Ignore Constants | O(2n) → O(n), O(1000n + 300) → O(n) |
| Dominant Term | In O(n² + n), n² dominates, so it simplifies to O(n²) |
| Scalability | Shows how well an algorithm scales with input size |
📐 Types of Notations in Asymptotic Analysis

| Notation | Name | Description |
|---|---|---|
| O | Big O | Upper bound (worst case) |
| Ω | Big Omega | Lower bound (best case) |
| Θ | Big Theta | Tight bound (upper and lower bounds coincide) |
| o | Little o | Strict upper bound (growth strictly slower) |
| ω | Little omega | Strict lower bound (growth strictly faster) |
📈 Common Big O Complexities (Time & Space)

| Big O | Name | Description | Example |
|---|---|---|---|
| O(1) | Constant | Doesn’t grow with input | Accessing an array by index |
| O(log n) | Logarithmic | Reduces the problem size each step | Binary search |
| O(n) | Linear | Grows directly with input | A single for-loop |
| O(n log n) | Linearithmic | Efficient comparison sorts | Merge Sort, Quick Sort (average) |
| O(n²) | Quadratic | Nested loops | Bubble Sort, Insertion Sort |
| O(n³) | Cubic | Three nested loops | Matrix multiplication (naive) |
| O(2ⁿ) | Exponential | Work doubles with each extra input element (often naive recursion) | Fibonacci (naive recursive) |
| O(n!) | Factorial | Tries every permutation – worst growth | Travelling Salesman Problem (brute force) |
🐍 Detailed Python Examples by Big O Type
✅ O(1) – Constant Time
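A minimal sketch: one operation regardless of input size (the function name is illustrative).

```python
def get_first(items):
    # A single index access costs the same no matter how large items is -> O(1)
    return items[0]
```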
✅ O(n) – Linear Time
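A minimal sketch: the loop visits each element once, so work grows in step with n (the function name is illustrative).

```python
def linear_sum(items):
    # One pass over all n elements -> O(n)
    total = 0
    for x in items:
        total += x
    return total
```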
✅ O(n²) – Quadratic Time
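A minimal sketch: two nested loops compare every pair of elements, roughly n²/2 comparisons (the function name is illustrative).

```python
def has_duplicate(items):
    # Nested loops over the same input -> O(n^2) comparisons
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```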
✅ O(log n) – Logarithmic Time
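A minimal sketch of binary search: the search window halves each iteration, so at most about log₂ n steps (assumes the input list is already sorted).

```python
def binary_search(sorted_items, target):
    # The search window halves every iteration -> O(log n)
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found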
✅ O(n log n) – Merge Sort
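A simple top-down sketch (not optimized): the list is split about log n times, and each level does O(n) merge work, giving O(n log n) overall.

```python
def merge_sort(items):
    # Base case: a list of 0 or 1 elements is already sorted
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])   # log n levels of splitting...
    right = merge_sort(items[mid:])
    # ...with O(n) merge work per level -> O(n log n) total
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```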
⚠️ Drop Constants & Non-Dominant Terms
Two consecutive passes over the same input do O(n + n) = O(2n) work; drop the constant and it is just O(n), as in the sketch below.
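A sketch of the pattern (hypothetical function, for illustration only):

```python
def print_twice(items):
    # First pass: O(n)
    for x in items:
        print(x)
    # Second pass: another O(n)
    for x in items:
        print(x)
    # Total: O(n + n) = O(2n) -> drop the constant -> O(n)
```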
📉 Graphical View
- O(1): flat line
- O(log n): rises slightly
- O(n): straight diagonal
- O(n log n): curves upward
- O(n²): parabola
- O(2ⁿ), O(n!): explode upward
📦 Big O in Data Structures

| Data Structure | Access | Search | Insert | Delete | Space |
|---|---|---|---|---|---|
| Array (static) | O(1) | O(n) | O(n) | O(n) | O(n) |
| List (dynamic array) | O(1) | O(n) | O(1) amortized append | O(n) | O(n) |
| Stack/Queue | O(n) | O(n) | O(1) | O(1) | O(n) |
| HashMap | O(1) avg | O(1) avg | O(1) avg | O(1) avg | O(n) |
| Binary Search Tree (balanced) | O(log n) | O(log n) | O(log n) | O(log n) | O(n) |
| Graph (Adj Matrix) | O(1) | O(1) | O(1) | O(1) | O(n²) |
| Graph (Adj List) | O(n) | O(n) | O(1) | O(1) | O(n + e) |

Here n is the number of elements (or vertices) and e the number of edges; HashMap figures assume good hashing.
🧪 Measuring Big O Empirically in Python
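One rough way to check growth in practice is to time a function at increasing input sizes with the standard timeit module; if the runtime roughly doubles when n doubles, the function is likely O(n). A minimal sketch:

```python
import timeit

def linear_sum(items):
    total = 0
    for x in items:
        total += x
    return total

# Time the same function at growing input sizes
for n in (1_000, 10_000, 100_000):
    data = list(range(n))
    t = timeit.timeit(lambda: linear_sum(data), number=100)
    print(f"n={n:>7}: {t:.4f}s")
```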
❓ Interview Tricks
- Drop everything except the dominant term.
- Ignore constants.
- A single loop → O(n), nested loops → O(n²), and so on.
- Recursive calls → roughly the branching factor raised to the recursion depth, O(bᵈ); see the Fibonacci sketch below.
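For example, naive recursive Fibonacci makes two recursive calls per step (branching factor 2) to a depth of about n, so it runs in O(2ⁿ). A minimal sketch:

```python
def fib(n):
    # Two recursive calls per step, recursion depth ~n -> O(2^n)
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```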
📚 Bonus Tips
- Keep the Big-O cheat sheet handy: https://www.bigocheatsheet.com
- Learn to visualize time vs space trade-offs.
- Practice spotting Big O from code quickly.
✅ Summary
| Term | Meaning |
|---|---|
| Big O | Worst-case performance bound |
| Asymptotic | Long-term behavior as n grows |
| Drop constants | Focus on the dominant term only |
| O(1) → O(n!) | Constant to factorial growth |
| Practical use | Analyze loops, recursion, data structure ops |