Commit 8594cb3

Update dynamic_programming.md
1 parent 026fcf6

File tree

1 file changed: +6 −6 lines


notes/dynamic_programming.md

Lines changed: 6 additions & 6 deletions
@@ -6,9 +6,9 @@ DP works best for problems that have two key features. The first is **optimal su
 
 This method was introduced by Richard Bellman in the 1950s and has become a valuable tool in areas like computer science, economics, and operations research. It has been used to solve problems that would otherwise take too long by turning slow, exponential-time algorithms into much faster polynomial-time solutions. DP is practical and powerful for tackling real-world optimization challenges.
 
-### Fundamental Principles of Dynamic Programming
+### Principles
 
-To effectively apply dynamic programming, a problem must satisfy two key properties:
+To effectively apply dynamic programming, a problem must satisfy two properties:
 
 #### 1. Optimal Substructure
 

@@ -40,7 +40,7 @@ Let $S(n)$ be the set of subproblems for problem size $n$. If there exists $s \i
 
 The recursive computation of Fibonacci numbers $F(n) = F(n - 1) + F(n - 2)$ involves recalculating the same Fibonacci numbers multiple times. For instance, to compute $F(5)$, we need to compute $F(4)$ and $F(3)$, both of which require computing $F(2)$ and $F(1)$ multiple times.
 
-### Dynamic Programming Techniques
+### Techniques
 
 There are two primary methods for implementing dynamic programming algorithms:
 
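(Aside, not part of the committed diff.) The overlapping-subproblems point in the hunk above — $F(5)$ recomputing $F(2)$ and $F(1)$ — disappears once results are cached. A minimal memoized sketch in Python, using the standard-library `functools.lru_cache`:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    # Each F(k) is computed once and cached, so the naive
    # exponential recursion collapses to O(n) subproblem solves.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(5))  # → 5
```

Without the cache, `fib(50)` would be infeasible; with it, the call returns immediately.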

@@ -160,9 +160,9 @@ We fill the table $dp[0..n][0..W]$ iteratively based on the state transition.
 - The **time complexity** is $O(nW)$, where $n$ is the number of items and $W$ is the capacity of the knapsack, as the algorithm iterates through both items and weights.
 - The **space complexity** is $O(nW)$, but this can be optimized to $O(W)$ because each row in the table depends only on the values from the previous row, allowing for space reduction.
 
-## Advanced Dynamic Programming Concepts
+### Advanced Concepts
 
-### Memory Optimization
+#### Memory Optimization
 
 In some cases, we can optimize space complexity by noticing dependencies between states.
 
@@ -175,7 +175,7 @@ for i in range(1, n + 1):
         dp[w] = max(dp[w], dp[w - w_i] + v_i)
 ```
 
-### Dealing with Non-Overlapping Subproblems
+#### Dealing with Non-Overlapping Subproblems
 
 If a problem has **optimal substructure** but does not have **overlapping subproblems**, it is often better to use **Divide and Conquer** instead of Dynamic Programming. Divide and Conquer works by breaking the problem into independent subproblems, solving each one separately, and then combining their solutions. Since there are no repeated subproblems to reuse, storing intermediate results (as in Dynamic Programming) is unnecessary, making Divide and Conquer a more suitable and efficient choice in such cases.
 
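(Aside, not part of the committed diff.) The $O(W)$ memory optimization shown as context in the hunk above — a single 1D table updated with capacities iterated downward — can be wrapped into a self-contained sketch. The function name and the sample items are illustrative, not taken from the notes file:

```python
def knapsack(weights, values, W):
    # 1D table: dp[w] = best value achievable with capacity w,
    # replacing the full O(nW) table since each row depends
    # only on the previous one.
    dp = [0] * (W + 1)
    for w_i, v_i in zip(weights, values):
        # Iterate capacities downward so each item is used at most
        # once (upward iteration would give the unbounded variant).
        for w in range(W, w_i - 1, -1):
            dp[w] = max(dp[w], dp[w - w_i] + v_i)
    return dp[W]

print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))  # → 9
```

Here capacity 7 is best filled by the items of weight 3 and 4, for a total value of 9.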
