5. A **tree** resembles a family tree, starting from one ancestor (the root) and branching out into multiple descendants (nodes), each of which can have children of its own. Formally, trees are hierarchical structures organized into levels. They’re excellent for showing hierarchical relationships, such as organizing files on your computer or visualizing company structures.
6. Consider a **graph** like a network of cities connected by roads. Each city represents a node, and the roads connecting them are edges, which can either be one-way (directed) or two-way (undirected). Graphs effectively illustrate complex relationships and networks, such as social media connections, website link structures, or even mapping transportation routes.
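To make these last two structures concrete, here is a minimal Python sketch (the class, variable, and city names are illustrative, not from the text): a tree as nodes holding lists of children, and a graph as an adjacency list.

```python
class TreeNode:
    """A tree node: one value plus any number of children."""
    def __init__(self, value):
        self.value = value
        self.children = []          # each child is itself a TreeNode

    def add_child(self, child):
        self.children.append(child)

# A tiny company hierarchy: the CEO is the root, reports are children.
ceo = TreeNode("CEO")
cto, cfo = TreeNode("CTO"), TreeNode("CFO")
ceo.add_child(cto)
ceo.add_child(cfo)
cto.add_child(TreeNode("Engineer"))

# An undirected graph as an adjacency list: cities are nodes, roads are edges.
roads = {
    "Berlin":  ["Hamburg", "Munich"],
    "Hamburg": ["Berlin"],
    "Munich":  ["Berlin", "Vienna"],
    "Vienna":  ["Munich"],
}

print([child.value for child in ceo.children])  # ['CTO', 'CFO']
print(roads["Munich"])                          # ['Berlin', 'Vienna']
```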
The choice of algorithm has a direct impact on an application's performance. Imagine you are building an app where every millisecond counts: the right algorithm can make it lightning fast, while the wrong one makes it tediously slow, so a solid grasp of time complexity pays off.
| Notation | Name | Meaning | Typical examples |
| --- | --- | --- | --- |
| $O(1)$ | Constant time | Running time does not depend on input size $n$. | Array indexing, hash-table lookup |
| $O(\log n)$ | Logarithmic time | Time grows proportionally to the logarithm of $n$. | Binary search, operations on balanced BSTs |
| $O(n)$ | Linear time | Time grows linearly with $n$. | Single loop over an array, scanning for max/min |
| $O(n \log n)$ | Linearithmic time | Combination of linear and logarithmic growth. | Merge sort, heap sort, FFT |
| $O(n^2)$ | Quadratic time | Time grows proportionally to the square of $n$. | Bubble sort, selection sort, nested loops |
| $O(n^3)$ | Cubic time | Time grows proportionally to the cube of $n$. | Naïve matrix multiplication (3 nested loops) |
| $O(2^n)$ | Exponential time | Time doubles with each additional element in the input. | Recursive Fibonacci, brute-force subset enumeration |
| $O(n!)$ | Factorial time | Time grows factorially with $n$. | Brute-force permutation generation, brute-force TSP |
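To see why these classes matter in practice, consider a small illustrative Python comparison (the function names and data are made up for this sketch): checking a list for duplicates with two nested loops is quadratic, while a single pass using a set is linear.

```python
def has_duplicates_quadratic(items):
    """O(n^2): compare every pair of elements with two nested loops."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    """O(n): one pass, remembering what we have already seen in a set."""
    seen = set()
    for item in items:
        if item in seen:            # average O(1) hash-table lookup
            return True
        seen.add(item)
    return False

data = list(range(10_000)) + [42]
print(has_duplicates_quadratic(data), has_duplicates_linear(data))  # True True
# Worst case: ~n^2/2 comparisons for the quadratic version vs. n set lookups for the linear one.
```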
#### Interpreting Big O Notation
- Sometimes, we can create an illusion of $O(1)$ complexity by precomputing the results for all possible inputs and storing them in a lookup table (such as a hash table). The problem is then solved in constant time by retrieving the result directly from the table. This precompute-and-cache approach, closely related to memoization, is limited by memory constraints and is only practical when the number of distinct inputs is small and manageable (see the sketch after this list).
- Often, the lower-bound complexity for a class of problems is $O(n)$ or $O(n \log n)$. This bound covers problems where you must at least examine each element once (as in $O(n)$) or perform a more involved operation on every input, such as sorting (as in $O(n \log n)$). Under certain conditions or assumptions, a more efficient algorithm might still be achievable.
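As an illustration of the first point, here is a minimal Python sketch (the function and table names are illustrative): a small table precomputed once so that later queries are answered in constant time, plus an on-demand memoized variant.

```python
from functools import lru_cache

# Precompute results for a small, bounded input domain once (here: 0..255),
# so every later query becomes a constant-time table lookup.
POPCOUNT_TABLE = [bin(i).count("1") for i in range(256)]

def popcount_byte(b: int) -> int:
    """O(1) per query after the one-off O(256) precomputation."""
    return POPCOUNT_TABLE[b]

# Memoization: compute on demand, but cache results so repeated calls are O(1).
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(popcount_byte(200))  # 3  (200 = 0b11001000)
print(fib(50))             # 12586269025, fast thanks to caching
```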
#### Recognising $O(\log n)$ and $O(n \log n)$ Running Times
The growth rate of an algorithm almost always comes from **how quickly the remaining work shrinks** as the algorithm executes. Two common patterns are:
| Pattern | How the work shrinks | Resulting complexity |
| --- | --- | --- |
| *Halve (or otherwise divide) the problem each step* | $n \to n/2 \to n/4 \to \dots$ | $\Theta(\log n)$ |
| *Do a linear amount of work, but each unit of work is itself logarithmic* | outer loop counts down one by one, inner loop halves | $\Theta(n \log n)$ |
Below are four miniature algorithms written in language-neutral *pseudocode* (no Python syntax), followed by the intuition behind each bound.
I. Linear - $\Theta(n)$
```text
procedure Linear(n)
    while n > 0 do
        n ← n − 1
    end while
end procedure
```
*Work left* drops by **1** each pass, so the loop executes exactly $n$ times.
II. Logarithmic - $\Theta(\log n)$
```text
procedure Logarithmic(n)
    while n > 0 do
        n ← ⌊n / 2⌋    ▹ halve the problem
    end while
end procedure
```
Each pass discards half of the remaining input, so only $\lfloor\log_2 n\rfloor + 1$ iterations are needed.
*Common real examples: binary search, finding the height of a complete binary tree.*
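For instance, a standard iterative binary search (shown here as a minimal Python sketch) halves the remaining range on every pass, which is exactly the shrinking pattern above:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.
    Each iteration halves the remaining range, so it runs in O(log n)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1        # discard the lower half
        else:
            hi = mid - 1        # discard the upper half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1
```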
III. Linear-logarithmic - $\Theta(n \log n)$
```text
procedure LinearLogarithmic(n)
    m ← n
    while m > 0 do            ▹ runs n times
        k ← n
        while k > 0 do        ▹ runs log n times
            k ← ⌊k / 2⌋
        end while
        m ← m − 1
    end while
end procedure
```
* **Outer loop:** $n$ iterations.
* **Inner loop:** $\lfloor\log_2 n\rfloor + 1$ iterations for each outer pass.
* Total work $\approx n \cdot \log n$.

Classic real-world instances: merge sort, heap sort, many divide-and-conquer algorithms, and building a heap followed by $n$ delete-min operations.
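As a concrete companion, here is a compact merge sort in Python (a sketch, not code from the original text): it splits the input in half over roughly $\log n$ levels and does $\Theta(n)$ merging work per level, which multiplies out to $n \log n$.

```python
def merge_sort(items):
    """Sort a list in O(n log n): split in half, sort each half, merge."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```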
IV. Squared-logarithmic - $\Theta(\log^2 n)$
```text
procedure LogSquared(n)
    m ← n
    while m > 0 do            ▹ outer loop: log n times
        k ← n
        while k > 0 do        ▹ inner loop: log n times
            k ← ⌊k / 2⌋
        end while
        m ← ⌊m / 2⌋
    end while
end procedure
```
Both loops cut their control variable in half, so each contributes a $\log n$ factor, giving $\log^2 n$. Such bounds appear in some advanced data structures (e.g., range trees) where *two* independent logarithmic dimensions are traversed.
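To check the bound empirically, a tiny Python translation of the pseudocode above (illustrative, with made-up sample sizes) can count iterations and compare them against $(\log_2 n)^2$:

```python
import math

def log_squared_steps(n):
    """Count inner-loop iterations of the LogSquared pseudocode above."""
    steps = 0
    m = n
    while m > 0:
        k = n
        while k > 0:
            k //= 2
            steps += 1
        m //= 2
    return steps

for n in (10, 1_000, 1_000_000):
    print(n, log_squared_steps(n), round(math.log2(n) ** 2, 1))
# Exact counts are 16, 100, 400 = (⌊log2 n⌋ + 1)^2, tracking (log2 n)^2 as n grows.
```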
Rules of thumb:
1. **Log factors come from repeatedly shrinking a quantity by a constant factor.** Any loop of the form `while x > 1: x ← x / c` (for a constant $c > 1$) takes $\Theta(\log x)$ steps.
2. **Multiplying two independent loops multiplies their costs.** An outer loop that runs $n$ times with an inner loop that runs $\log n$ times per pass gives $n \cdot \log n$ total work.
3. **Divide-and-conquer often yields $n \log n$.** Splitting the problem into two sub-problems of half the size and doing $\Theta(n)$ work to combine them leads to the *Master Theorem* case $T(n) = 2\,T\bigl(n/2\bigr) + \Theta(n) = \Theta(n \log n)$ (expanded just after this list).
4. **Nested logarithmic loops stack.** Two independent halving loops give $\log^2 n$; three give $\log^3 n$, and so on.
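For rule 3, expanding the recurrence level by level makes the $n \log n$ bound concrete; here $c$ is a constant standing in for the $\Theta(n)$ combine cost:

$$
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + cn
     = 4\,T\!\left(\tfrac{n}{4}\right) + 2cn
     = \dots
     = 2^{k}\,T\!\left(\tfrac{n}{2^{k}}\right) + k\,cn .
$$

Choosing $k = \log_2 n$ (so the sub-problem size reaches 1) gives $T(n) = n\,T(1) + cn\log_2 n = \Theta(n \log n)$.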
### Misconceptions
* A formal proof of Big O complexity is rarely necessary in everyday programming or software engineering. However, a working understanding of theoretical complexity is important when selecting algorithms, especially for complex problems: it clarifies the trade-offs between different solutions and helps predict an algorithm's performance.