Time Complexity
When we write programs, some solutions run faster than others.
Time complexity is a way to measure how fast an algorithm grows as the input gets larger.
You don’t measure it in seconds; you measure how the number of steps grows as the input size grows.
Time complexity helps you compare algorithms without actually running them.
Let’s say we have this code:
for (int i = 0; i < n; i++)
    cout << i << " ";
If \(n = 10\), it prints \(10\) numbers.
If \(n = 1000\), it prints \(1000\) numbers.
If \(n\) doubles, the number of steps also doubles.
We say this code runs in \(O(n)\) time, because it performs roughly \(n\) operations.
That means the time grows linearly with the size of the input.
What \(O(\ldots)\) means
Big \(O\) notation (read as “big oh”) shows how the number of steps grows as \(n\) grows.
We only care about the main growth, not constants or small details.
| Name | Example | Description |
|---|---|---|
| \(O(1)\) | x = a + b; | Constant time – the same number of steps no matter how big \(n\) is |
| \(O(\log n)\) | set::insert | Each step cuts the problem in half |
| \(O(n)\) | Loop over array | Time grows linearly |
| \(O(n \log n)\) | Sorting | Slightly slower than \(O(n)\), but much faster than \(O(n^2)\) |
| \(O(n^2)\) | Nested loops | Time grows very fast as \(n\) increases |
| \(O(2^n)\) | Trying all combinations | Exponential – gets huge very quickly |
The smaller the complexity, the better the algorithm scales.
Constant time – \(O(1)\)
Increase the Number
You are given a number \(x\). Increase \(x\) by \(1\), and then print it.
Solution
#include <iostream>
using namespace std;
int main(){
    int x;
    cin >> x;
    x += 1;
    cout << x << '\n';
}
Here, we did three things:
- read an integer from the input
- increase it by one
- print the result
These are all simple operations. However, we wouldn’t say that this is \(O(3)\) time complexity. We do not care about constants; we only care about the dominant growth term. So this is \(O(1)\) time complexity.
Linear time – \(O(n)\)
Money Sum
You are given \(n\) bills worth \(arr[0]\) riyals, \(arr[1]\) riyals, …, \(arr[n-1]\) riyals. How many riyals do you have in total?
Solution
#include <iostream>
#include <vector>
using namespace std;
int main(){
    int n;
    cin >> n;
    vector<int> arr(n);
    for (int i = 0; i < n; i++)
        cin >> arr[i];
    long long sum = 0;
    for (int i = 0; i < n; i++)
        sum += arr[i];
    cout << sum << '\n';
}
The loop runs once per element.
If the array doubles in size, time doubles too.
Notice that we also create the array and read its elements, which takes two operations per element. We also read the number of elements, set the \(sum\) variable to zero, and print the result at the end. In total we have \(1 + n + n + 1 + n + 1 = 3n + 3\) operations. However, we said that we don’t care about constants, so this is simply \(O(3n + 3) = O(n)\) - a linear-time algorithm.
Quadratic time – \(O(n^2)\)
Copying
You are given \(n\) bills worth \(arr[0]\) riyals, \(arr[1]\) riyals, …, \(arr[n-1]\) riyals. For \(n\) days, you increase the value of each bill by one. So, after the first day you have bills worth \(arr[0]+1\) riyals, \(arr[1]+1\) riyals, …, \(arr[n-1]+1\) riyals. After the second day you have bills worth \(arr[0]+2\) riyals, \(arr[1]+2\) riyals, …, \(arr[n-1]+2\) riyals. After \(n\) days you will have bills worth \(arr[0]+n\) riyals, \(arr[1]+n\) riyals, …, \(arr[n-1]+n\) riyals.
You should output the values of all bills for each of the \(n\) days.
Solution
#include <iostream>
#include <vector>
using namespace std;
int main(){
    int n;
    cin >> n;
    vector<int> arr(n);
    for (int i = 0; i < n; i++)
        cin >> arr[i];
    for (int days = 1; days <= n; days++){
        for (int i = 0; i < n; i++){
            cout << arr[i] + days << ' ';
        }
        cout << '\n';
    }
}
There are \(n\) days, and \(n\) bills to update on each day. In total, we do \(O(n^2)\) operations (again ignoring constants).
If \(n = 10\), it does 100 steps. If \(n = 1000\), it does a million steps.
That’s why \(O(n^2)\) algorithms slow down very quickly as the input grows.
Comparing growth rates
Let’s see how fast these complexities grow as \(n\) increases:
| \(n\) | \(O(1)\) | \(O(\log n)\) | \(O(n)\) | \(O(n^2)\) |
|---|---|---|---|---|
| 10 | 1 | 3 | 10 | 100 |
| 100 | 1 | 7 | 100 | 10,000 |
| 1000 | 1 | 10 | 1000 | 1,000,000 |
Even though the differences seem small for small \(n\), they explode for big \(n\).
A quick rule of thumb
| \(n\) | Safe maximum complexity |
|---|---|
| \(\leq 10\) | \(O(n!)\) |
| \(\leq 20\) | \(O(2^n)\) |
| \(\leq 1000\) | \(O(n^2)\) |
| \(\leq 10^5\) | \(O(n \log n)\) or better |
| \(\leq 10^7\) | \(O(n)\) or better |
| \(\geq 10^9\) | \(O(1)\) or \(O(\log n)\) only |
This helps you guess what kind of solution is acceptable for a given input size.
Optimizing
Sometimes it is not obvious, but you can make your program faster just by looking for a cleverer approach.
Simple Sum
You are given a number \(n\). Output the sum \(-n + (-n+1) + (-n+2) + \ldots + -1 + 0 + 1 + \ldots + (n-2) + (n-1) + n\).
Solution
A naive solution would be the following:
#include <iostream>
using namespace std;
int main(){
    int n;
    cin >> n;
    long long sum = 0;
    for (int i = -n; i <= n; i++)
        sum += i;
    cout << sum << '\n';
}
This solution takes linear time - \(O(n)\).
Solution
Notice that we can do much better. Each integer appears once with a positive sign and once with a negative sign - \(+n\) and \(-n\), \(+(n-1)\) and \(-(n-1)\), and so on. This means that in the end all numbers cancel out, leaving exactly zero. This leads us to a much simpler solution:
#include <iostream>
using namespace std;
int main(){
    int n;
    cin >> n;
    cout << 0 << '\n';
}
This solution is \(O(1)\) - constant time, and much faster than the naive one. Always try to think about the fastest possible solution.
Space complexity
Sometimes we also care about how much memory the program uses.
That’s called space complexity.
The idea is the same: how memory usage grows as input size grows.
Example:
vector<int> arr(n); // O(n) space
int x; // O(1) space