Data structures and algorithms are foundational concepts in computer science, playing an essential role in designing efficient software. A data structure defines how we store and organize data on a computer, while an algorithm delineates a step-by-step procedure to perform a task or solve a problem. This article introduces the fundamental aspects of data structures and algorithms, their importance, and how they are applied in computing.

### Data Structures

A data structure organizes data on a computer in a manner that enables efficient access and modification. The choice of the appropriate data structure depends on the specific use case and can significantly impact the performance of an application. Here are some common data structures, described in simple terms alongside their formal names:
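Among the most common are arrays (lists), stacks, queues, and hash maps (dictionaries); linked lists, trees, and graphs are also widespread. As a minimal sketch, here is how a few of these look in Python (the selection and values are illustrative):

```python
from collections import deque

# Array (Python list): ordered collection with O(1) access by index.
scores = [86, 92, 75]

# Stack: last-in, first-out (LIFO); push and pop from the same end.
stack = [1, 2, 3]
stack.append(4)    # push
top = stack.pop()  # pop -> 4

# Queue: first-in, first-out (FIFO); deque gives O(1) operations at both ends.
queue = deque([1, 2, 3])
queue.append(4)          # enqueue
front = queue.popleft()  # dequeue -> 1

# Hash map (Python dict): key-value pairs with O(1) average-case lookup.
ages = {"ada": 36, "alan": 41}

print(scores[1], top, front, ages["ada"])  # 92 4 1 36
```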
![ds](https://user-images.githubusercontent.com/37275728/185381435-9335db5b-8c9a-4e74-87dc-deac3c0356f1.png)

### Algorithms

Algorithms are step-by-step instructions to solve specific problems or perform tasks. They are everywhere in fields like computer science, mathematics, and engineering. To evaluate how good an algorithm is, we often look at its efficiency in terms of time complexity (how long it takes to run) and space complexity (how much memory it uses).

Think of an algorithm like a recipe for cooking. It consists of a series of steps to follow to achieve a specific result. Here are the key characteristics of a good algorithm:

- The data that an algorithm works with is referred to as its **input**, much like the ingredients required to prepare a dish in a recipe.
- Just as a recipe leads to a completed dish, the algorithm produces an **output**, which represents the final result after processing the input.
- Every step of the algorithm must be defined with clarity and precision, leaving no ambiguity about what to do next, a property known as **definiteness**.
- Just as a recipe has a defined end point when the dish is ready, an algorithm must exhibit **finiteness**, meaning it stops after a finite number of steps.
- For an algorithm to be **effective**, each step should be simple, executable, and contribute directly toward the final result, just as every action in a recipe is purposeful and achievable.
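
To ground these characteristics, here is a small sketch using Euclid's GCD algorithm (chosen purely as an illustration), with each property noted in the comments:

```python
def gcd(a, b):
    """Euclid's algorithm: greatest common divisor of two positive integers."""
    # Input: two positive integers a and b (the "ingredients").
    while b != 0:
        # Definiteness: each step is precisely specified.
        # Effectiveness: each step is a simple, executable operation.
        # Finiteness: b strictly decreases, so the loop always terminates.
        a, b = b, a % b
    # Output: the finished "dish".
    return a

print(gcd(48, 18))  # 6
```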

#### Algorithms vs. Programs

Understanding the difference between an algorithm and a program is essential. An algorithm is an abstract, language-independent description of the steps needed to solve a problem, while a program is a concrete implementation of one or more algorithms in a specific programming language.
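
To make the distinction concrete, here is a minimal sketch of a program implementing the simple "sum a list of numbers" algorithm; the input values are assumed for illustration:

```python
# The algorithm (abstract): start from 0 and add each number in turn.
# The program (concrete): that recipe written out in Python.
numbers = [1, 2, 3, 4, 5]  # assumed input values

sum = 0  # note: this shadows Python's built-in sum(); kept for simplicity
for number in numbers:
    sum = sum + number

print("The sum is", sum)  # The sum is 15
```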
Here’s a key point to remember: algorithms are abstract steps that always terminate after a finite number of steps. In contrast, some programs can run indefinitely until an external action stops them. For example, an operating system is a program designed to run continuously in a loop until the computer is turned off.

#### Types of Algorithms

Algorithms can be classified into various types based on the problems they solve and the strategies they use. Here are some common categories with consistent explanations and examples:

I. **Sorting Algorithms** arrange data in a specific order, such as ascending or descending. Examples include bubble sort, insertion sort, selection sort, and merge sort.

Example: Bubble Sort

```
...
After 3rd Pass: [3, 2, 4, 5, 8]
After 4th Pass: [2, 3, 4, 5, 8] (Sorted)
```
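
To complement the trace above, here is a minimal bubble sort sketch in Python, using an input consistent with the passes shown (the exact original input is an assumption):

```python
def bubble_sort(items):
    """Sort a list in ascending order by repeatedly swapping adjacent out-of-order pairs."""
    values = list(items)           # work on a copy
    n = len(values)
    for i in range(n - 1):         # after pass i, the last i+1 elements are in place
        for j in range(n - 1 - i):
            if values[j] > values[j + 1]:
                values[j], values[j + 1] = values[j + 1], values[j]
    return values

print(bubble_sort([5, 3, 8, 4, 2]))  # [2, 3, 4, 5, 8]
```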

II. **Search Algorithms** are designed to find a specific item or value within a collection of data. Examples include linear search, binary search, and depth-first search.

Example: Binary Search

```
...
New mid element: 11
The remaining element is 33, which is the target.
```
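
A compact iterative binary search sketch in Python; the list must already be sorted, and the example data is assumed:

```python
def binary_search(sorted_values, target):
    """Return the index of target in sorted_values, or -1 if absent."""
    low, high = 0, len(sorted_values) - 1
    while low <= high:
        mid = (low + high) // 2         # middle of the remaining range
        if sorted_values[mid] == target:
            return mid
        elif sorted_values[mid] < target:
            low = mid + 1               # discard the left half
        else:
            high = mid - 1              # discard the right half
    return -1

print(binary_search([2, 5, 11, 17, 33, 48], 33))  # 4
```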

III. **Graph Algorithms** address problems related to graphs, such as finding the shortest path between nodes or determining if a graph is connected. Examples include Dijkstra's algorithm and the Floyd-Warshall algorithm.

Example: Dijkstra's Algorithm

```
Starting from A:
...
- Shortest path to D: A -> B -> C -> D (4)
```
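
A minimal sketch of Dijkstra's algorithm in Python using a priority queue; the example graph is assumed, chosen so that the shortest path A -> B -> C -> D costs 4, matching the trace above:

```python
import heapq

def dijkstra(graph, start):
    """Return the shortest distance from start to every node (non-negative weights)."""
    distances = {node: float("inf") for node in graph}
    distances[start] = 0
    queue = [(0, start)]                  # (distance so far, node)
    while queue:
        dist, node = heapq.heappop(queue)
        if dist > distances[node]:
            continue                      # stale entry; a shorter path was found
        for neighbor, weight in graph[node]:
            candidate = dist + weight
            if candidate < distances[neighbor]:
                distances[neighbor] = candidate
                heapq.heappush(queue, (candidate, neighbor))
    return distances

# Assumed example graph, consistent with the trace above.
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```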

IV. **String Algorithms** deal with problems related to strings, such as finding patterns or matching sequences. Examples include the Knuth-Morris-Pratt (KMP) algorithm and the Boyer-Moore algorithm.

Example: Boyer-Moore Algorithm

```
Steps:
...
Pattern matched starting at index 10 in the text.
```
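
A simplified Boyer-Moore sketch in Python using only the bad-character heuristic (the full algorithm adds the good-suffix rule); the text and pattern here are assumed:

```python
def boyer_moore_search(text, pattern):
    """Return the first index where pattern occurs in text, or -1 (bad-character rule only)."""
    last = {ch: i for i, ch in enumerate(pattern)}  # last position of each pattern character
    m, n = len(pattern), len(text)
    shift = 0
    while shift <= n - m:
        j = m - 1
        while j >= 0 and pattern[j] == text[shift + j]:
            j -= 1                       # compare right to left
        if j < 0:
            return shift                 # full match found
        bad_char = text[shift + j]
        # Skip ahead so the mismatched text character aligns with its last occurrence.
        shift += max(1, j - last.get(bad_char, -1))
    return -1

print(boyer_moore_search("here is a simple example", "example"))  # 17
```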


#### Essential Algorithms for Software Engineers

* As a software engineer, mastering every algorithm isn't expected or necessary. Instead, it is more valuable to be proficient in leveraging libraries and packages that encapsulate widely-used algorithms. However, the ability to discern the most effective algorithm for a particular task based on its efficiency, the nature of the problem, and other relevant factors is crucial.
* Understanding algorithms can significantly augment your problem-solving capabilities, particularly when you're beginning your programming journey. It provides a strong foundation in logical thinking, exposes you to various strategies for problem-solving, and helps you appreciate the nuances involved in choosing the most appropriate solution. After grasping the fundamentals of algorithms, the focus generally shifts towards using pre-built libraries and packages for problem-solving rather than creating algorithms from scratch.

### Understanding Algorithmic Complexity

Algorithmic complexity helps us understand the computational resources (time or space) an algorithm needs as the input size increases. Here’s a breakdown of different types of complexity:

* **Space complexity** represents the total amount of memory an algorithm needs relative to the input size. This becomes important when memory resources are limited and the algorithm's efficiency is crucial.
* **Time complexity** measures the computational time an algorithm takes as the input size grows. This is the most frequently analyzed type of complexity because the speed of an algorithm often determines its usability.

#### Analyzing Algorithm Growth Rates

Understanding how the running time or space complexity of an algorithm scales with increasing input size is pivotal in algorithm analysis. To describe this rate of growth, we employ several mathematical notations that offer insights into the algorithm's efficiency under different conditions.

##### Big O Notation (O-notation)

The Big O notation represents an asymptotic upper bound, indicating the worst-case scenario for an algorithm's time or space complexity. Essentially, it signifies an upper limit on the growth of a function.
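
For reference, the standard formal definition of this bound reads:

$$f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 > 0 \ \text{such that}\ 0 \le f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0$$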

##### Omega Notation (Ω-notation)

Omega notation represents an asymptotic lower bound on an algorithm's time or space complexity. If $f(n) = Ω(g(n))$, this means that $f(n)$ grows at a rate that is at least as fast as $g(n)$.

For example, if an algorithm has a time complexity of $Ω(n)$, it implies that the running time is at least proportional to the input size, even in the best-case scenario.

##### Theta Notation (Θ-notation)

Theta notation provides an asymptotically tight bound on an algorithm's time or space complexity: the function grows neither faster nor slower than the bound, up to constant factors.

Stating $f(n) = Θ(g(n))$ signifies that $f(n)$ grows at the same rate as $g(n)$ for sufficiently large inputs; the complexity is bounded both above and below by constant multiples of $g(n)$.

Remember, these notations primarily address the growth rate as the input size becomes significantly large. While they offer a high-level comprehension of an algorithm's performance, the actual running time in practice can differ based on various factors, such as the specific input data, the hardware or environment where the algorithm is operating, and the precise way the algorithm is implemented in the code.

#### Diving into Big O Notation Examples

Big O notation is a practical tool for comparing the worst-case scenario of algorithm complexities. Here are examples of various complexities:

- The time complexity **$O(1)$**, known as constant time complexity, means that regardless of the input size, the algorithm performs its task in a fixed amount of time. A common example is retrieving an item by its index from an array or accessing a key-value pair in a hash map.
- When an algorithm has **$O(\log n)$** time complexity, the time taken grows logarithmically with input size: doubling the input adds only a constant amount of extra work. Binary search and operations on balanced binary trees are typical examples.
- An algorithm with **$O(n)$** time complexity exhibits linear behavior, where the running time scales directly with the input size. This is seen in simple, single-pass processes like iterating over an array or a linked list.
- In cases of **$O(n \log n)$** time complexity, also called log-linear complexity, the running time grows proportionally to the input size multiplied by its logarithm. Sorting algorithms such as QuickSort, MergeSort, and HeapSort are prime examples.
- With **$O(n^2)$** time complexity, the running time increases quadratically, often due to nested loops. Algorithms like Bubble Sort and Insertion Sort fall into this category.
- When an algorithm has **$O(n^3)$** time complexity, its running time scales cubically with the input size. This is common in algorithms involving three nested loops, such as naive matrix multiplication.
- **$O(2^n)$** represents exponential time complexity, where the running time doubles with each additional unit of input size. This is typical of brute-force approaches like generating all subsets of a set or solving the Travelling Salesman Problem naively.
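
As a small illustration of two of these classes in everyday Python (collection size and repetition count are arbitrary choices): membership testing in a list scans elements one by one, which is $O(n)$, while a set uses hashing, which is $O(1)$ on average:

```python
import timeit

n = 100_000
as_list = list(range(n))
as_set = set(as_list)

# O(n): worst case scans the entire list to find the last element.
print(timeit.timeit(lambda: n - 1 in as_list, number=100))

# O(1) on average: a hash lookup, independent of the collection's size.
print(timeit.timeit(lambda: n - 1 in as_set, number=100))
```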

The graph below illustrates the growth of these different time complexities:

![big_o](https://user-images.githubusercontent.com/37275728/185381461-ec062561-1f55-4cf5-a3fa-d4cc0a2c06df.png)

The choice of an algorithm significantly impacts the application's performance, making an understanding of time complexity crucial.

#### Interpreting Big O Notation: Key Rules

- We focus on the rate of growth rather than the exact number of operations, which is why constant factors are typically ignored. For example, the function $5n$ is expressed as **$O(n)$**, neglecting the constant factor of 5.
- When an algorithm has multiple terms, only the term with the fastest growth rate is kept. For example, if the running time is $n^2 + n$, the time complexity simplifies to **$O(n^2)$**, since $n^2$ grows faster than $n$.
- Big O notation describes an upper limit on the growth rate of a function, meaning that if an algorithm has a time complexity of **$O(n)$**, it can also be described as $O(n^2)$ or higher. However, an algorithm with **$O(n^2)$** complexity cannot be described as **$O(n)$**, because Big O does not imply a lower bound on growth.
- Terms that grow as fast as or faster than **$n$** or **$\log n$** dominate constant terms. For example, in the complexity **$O(n + k)$** with constant $k$, the term **$n$** dominates, simplifying the overall complexity to **$O(n)$**.

#### Can every problem have an O(1) algorithm?

- Not every problem has an algorithm that can solve it, irrespective of the complexity. For instance, the Halting Problem is undecidable: no algorithm can accurately predict whether a given program will halt or run indefinitely on every possible input.
- Sometimes, we can create an illusion of $O(1)$ complexity by precomputing the results for all possible inputs and storing them in a lookup table (like a hash table). Then we can solve the problem in constant time by retrieving the result directly from the table, as sketched below. This approach, known as memoization or caching, is limited by memory constraints and is only practical when the number of distinct inputs is small and manageable.
- Often, the lower-bound complexity for a class of problems is $O(n)$ or $O(n \log n)$. The former covers problems where every element must be examined at least once, while the latter covers tasks such as comparison-based sorting, which require more work per element. Under certain conditions or assumptions, a more efficient algorithm might be achievable.
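
A minimal sketch of that lookup-table idea; the precomputed function and the input range are assumptions chosen for illustration:

```python
# Precompute the answer for every input we expect to see (assumed range: 0..999).
lookup = {n: n ** 3 - n for n in range(1000)}  # stand-in for an expensive computation

def solve(n):
    """Answer in O(1) via table lookup; valid only for precomputed inputs."""
    return lookup[n]

print(solve(12))  # 1716, retrieved in constant time
```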

#### When do algorithms have O(log n) or O(n log n) complexity?

The exact time complexity of an algorithm usually stems from how the size of the input affects the execution flow of the algorithm, particularly the loop iterations.

Consider four example algorithms with differing complexities:

I. **First Algorithm** $O(n)$: Here, the running time is directly proportional to the input size ($n$), as each loop iteration reduces $n$ by 1. Hence, the number of iterations equals the initial value of $n$.

```python
while n > 0:   # executes exactly n times for an initial positive integer n
    n = n - 1
```

II. **Second Algorithm** $O(\log n)$: In this case, the running time is proportional to the number of times the loop can iterate before $n$ reaches 0. Each iteration halves $n$, and $n$ can only be halved about $\log_2 n$ times before it reaches 0 (using integer division).

```python
while n > 0:     # executes about log2(n) times
    n = n // 2   # integer halving; with true division, n would never reach 0
```

III. **Third Algorithm** $O(n \log n)$: Here, the outer loop iterates $n$ times, and the inner loop iterates $\log n$ times for each outer loop iteration. Hence, the total number of iterations is $n \cdot \log n$.

```python
m = n
while m > 0:       # outer loop: n iterations
    k = n
    while k > 0:   # inner loop: about log2(n) iterations
        k = k // 2
    m = m - 1
```

IV. **Fourth Algorithm** $O(\log^2 n)$: In this scenario, the outer loop iterates $\log n$ times, and the inner loop also iterates $\log n$ times for each outer loop iteration. Consequently, the total number of iterations equals $\log^2 n$.

```python
m = n
while m > 0:       # outer loop: about log2(n) iterations
    k = n
    while k > 0:   # inner loop: about log2(n) iterations
        k = k // 2
    m = m // 2     # halving drives the logarithmic outer loop
```
