# notes/backtracking.md
Main idea:
1. **Base Case (Termination Condition)** is the condition under which the recursion stops. It prevents infinite recursion by providing an explicit solution for the simplest instance of the problem.
2. **Recursive Case** is the part of the function where it calls itself with a modified parameter, moving towards the base case.
### Mathematical Foundation
Recursion closely relates to mathematical induction, where a problem is solved by assuming that the solution to a smaller instance of the problem is known and building upon it.
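For the factorial function used below, this inductive structure can be written out explicitly:

$$
n! = \begin{cases} 1, & n = 0 \\ n \cdot (n-1)!, & n > 0 \end{cases}
$$

The base case anchors the induction, and the recursive case assumes $(n-1)!$ is known and builds $n!$ from it.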
```python
def factorial(n):
    if n == 0:
        return 1  # Base case
    return n * factorial(n - 1)  # Recursive case
```
#### Detailed Computation for $n = 5$
Let's trace the recursive calls for `factorial(5)`:
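One way to see this trace concretely is to instrument the function; the `depth` parameter below is an illustrative addition, not part of the original definition:

```python
def factorial(n, depth=0):
    indent = "  " * depth
    print(f"{indent}factorial({n}) called")
    if n == 0:
        print(f"{indent}factorial(0) returns 1")
        return 1
    result = n * factorial(n - 1, depth + 1)
    print(f"{indent}factorial({n}) returns {result}")
    return result

factorial(5)  # Shows the descent to the base case, then the backtracked returns
```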
Now, we backtrack and compute the results:

- `factorial(0)` returns $1$
- `factorial(1)` returns $1 \times 1 = 1$
- `factorial(2)` returns $2 \times 1 = 2$
- `factorial(3)` returns $3 \times 2 = 6$
- `factorial(4)` returns $4 \times 6 = 24$
- `factorial(5)` returns $5 \times 24 = 120$
Thus, $5! = 120$.
### Visualization with Recursion Tree
Each recursive call can be visualized as a node in a tree:
Main idea:
- **Implementation** of DFS can be achieved either through recursion, which implicitly uses the call stack, or by using an explicit stack data structure to manage the nodes.
- **Applications** of DFS include tasks such as topological sorting, identifying connected components in a graph, solving puzzles like mazes, and finding paths in trees or graphs.
### Algorithm Steps
- **Start at the root node** by marking it as visited to prevent revisiting it during the traversal.
- **Explore each branch** by recursively performing DFS on each unvisited neighbor, diving deeper into the graph or tree structure.
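These steps can be sketched as a recursive traversal over an adjacency list; the small example graph below is hypothetical:

```python
def dfs(graph, node, visited=None):
    if visited is None:
        visited = set()
    visited.add(node)  # Mark the node as visited
    for neighbor in graph.get(node, []):
        if neighbor not in visited:
            dfs(graph, neighbor, visited)  # Dive deeper before trying siblings
    return visited

# Hypothetical example graph as an adjacency list
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(dfs(graph, 'A'))
```

An explicit-stack variant replaces the recursive call with a `list` used as a stack, which avoids hitting Python's recursion limit on deep graphs.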
Objective:
- Ensure that no two queens attack each other.
- Find all possible arrangements that satisfy the above conditions.
#### Visual Representation
To better understand the problem, let's visualize it using ASCII graphics.
One of the possible solutions for placing 4 queens on a $4 \times 4$ chessboard:
- `Q` represents a queen.
- Blank spaces represent empty cells.
#### Constraints
- Only one queen per row.
- Only one queen per column.
- No two queens share the same diagonal.
#### Approach Using Backtracking
Backtracking is an ideal algorithmic approach for solving the N-Queens problem due to its constraint satisfaction nature. The algorithm incrementally builds the solution and backtracks when a partial solution violates the constraints.
High-Level Steps:
8. When $N$ queens have been successfully placed without conflicts, record the solution.
9. Continue the process to find all possible solutions.
#### Python Implementation
Below is a Python implementation of the N-Queens problem using backtracking.
```python
def solve_n_queens(N):
    solutions = []
    board = [-1] * N  # board[row] = column of the queen placed in that row

    def is_safe(row, col):
        for r in range(row):
            c = board[r]
            # Same column, or same diagonal (equal row and column offsets)
            if c == col or abs(c - col) == row - r:
                return False
        return True

    def place_queen(row):
        if row == N:
            solutions.append(board[:])  # All N queens placed: record the solution
            return
        for col in range(N):
            if is_safe(row, col):
                board[row] = col
                place_queen(row + 1)
                board[row] = -1  # Backtrack and try the next column

    place_queen(0)
    return solutions

# Example usage
N = 4
solutions = solve_n_queens(N)
print(f"Number of solutions for N={N}: {len(solutions)}")
for index, sol in enumerate(solutions):
    print(sol)
```
4. If no safe column is found, backtrack to the previous row.
5. When a valid placement is found for all $N$ rows, record the solution.

The algorithm explores the solution space as a tree, where each node represents a partial solution (queens placed up to a certain row). The branches represent the possible positions for the next queen.
The backtracking occurs when a node has no valid branches (no safe positions in the next row), prompting the algorithm to return to the previous node and try other options.
#### Analysis
I. The **time complexity** of the N-Queens problem is $O(N!)$ as the algorithm explores permutations of queen placements across rows.
II. The **space complexity** is $O(N)$, where:
- The `board` array stores the positions of the $N$ queens.
- The recursion stack can go as deep as $N$ levels during the backtracking process.
#### Applications
- **Constraint satisfaction problems** often use the N-Queens problem as a classic example to study and develop solutions for placing constraints on variable assignments.
- In **algorithm design**, the N-Queens problem helps illustrate the principles of backtracking and recursive problem-solving.
- In **artificial intelligence**, it serves as a foundational example for search algorithms and optimization techniques.
#### Potential Improvements
- Implementing more efficient conflict detection methods.
- Using heuristics to choose the order of columns to try first.
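As a sketch of the first improvement (assuming the row-by-row placement used above), the occupied columns and diagonals can be kept in sets, so each safety check is an $O(1)$ lookup instead of a scan over previous rows. `count_n_queens` is a hypothetical variant that only counts solutions:

```python
def count_n_queens(N):
    cols, diag1, diag2 = set(), set(), set()

    def place(row):
        if row == N:
            return 1
        total = 0
        for col in range(N):
            # row - col identifies a "\" diagonal, row + col a "/" diagonal
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            total += place(row + 1)
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)  # Backtrack
        return total

    return place(0)

print(count_n_queens(4))  # 2
```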
Given a maze represented as a 2D grid, find a path from the starting point to the goal using backtracking. The maze consists of open paths and walls, and movement is allowed in four directions: up, down, left, and right (no diagonal moves). The goal is to determine a sequence of moves that leads from the start to the goal without crossing any walls.
#### Maze Representation
**Grid Cells:**
Objective:
Find a sequence of moves from `S` to `G`, navigating only through open paths (`.`) and avoiding walls (`#`). The path should be returned as a list of grid coordinates representing the steps from the start to the goal.
#### Python Implementation
```python
def solve_maze(maze, start, goal):
    # maze is the 2D grid (list of lists of characters) defined above
    rows, cols = len(maze), len(maze[0])
    path = []

    def explore(x, y):
        # Base cases: outside the grid, a wall, or a cell already on the path
        if not (0 <= x < rows and 0 <= y < cols):
            return False
        if maze[x][y] in ('#', '+'):
            return False
        path.append((x, y))
        if (x, y) == goal:
            return True
        maze[x][y] = '+'  # Mark the cell as part of the current path
        # Try the four directions: down, up, right, left
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if explore(x + dx, y + dy):
                return True
        maze[x][y] = '.'  # Unmark the cell (backtrack)
        path.pop()
        return False

    return path if explore(*start) else None

path = solve_maze(maze, start, goal)
if path:
    print("Path to goal:")
    print(path)
else:
    print("No path found.")
```
#### Recursive Function `explore(x, y)`
I. **Base Cases:**
III. **Backtracking:**
- Unmark the cell by setting `maze[x][y] = '.'`.
- Return `False` to indicate that this path does not lead to the goal.
#### Execution Flow
I. **Start at `(0, 0)`**:
V. **Reaching the Goal**:
- If a path is found, it prints "Path to goal:" followed by the list of coordinates in the path.
- If no path exists, it prints "No path found."
#### Final Path Found
The path from start to goal:
```
[..., (5, 5)]
```
#### Visual Representation of the Path
Let's overlay the path onto the maze for better visualization. We'll use `*` to indicate the path.
Legend:

```
. - Open path
```
#### Advantages of Using Backtracking for Maze Solving
- Ensures that all possible paths are explored until the goal is found.
- Only the current path and visited cells are stored, reducing memory usage compared to storing all possible paths.
- Recursive implementation leads to clean and understandable code.
#### Potential Improvements
- This algorithm finds a path but not necessarily the shortest path.
- To find the shortest path, algorithms like Breadth-First Search (BFS) are more suitable.
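For contrast, here is a BFS sketch using the same grid conventions (walls `#`, coordinates as `(row, col)` tuples; these assumptions mirror the maze above). Because BFS expands cells in order of distance from the start, the first time it dequeues the goal, the reconstructed path is a shortest one:

```python
from collections import deque

def bfs_shortest_path(maze, start, goal):
    rows, cols = len(maze), len(maze[0])
    parent = {start: None}  # Doubles as the visited set
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == goal:
            # Walk parent links back to the start to reconstruct the path
            path = []
            cell = (x, y)
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < rows and 0 <= ny < cols \
                    and maze[nx][ny] != '#' and (nx, ny) not in parent:
                parent[(nx, ny)] = (x, y)
                queue.append((nx, ny))
    return None  # Goal unreachable

# Hypothetical 3x3 grid, not the maze from above
maze = [list("S.#"), list("#.."), list("..G")]
print(bfs_shortest_path(maze, (0, 0), (2, 2)))  # [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]
```

Unlike the backtracking solver, BFS never needs to unmark a cell: the first route that reaches a cell is already a shortest one.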
# notes/graphs.md
To efficiently keep track of the traversal, BFS employs two primary data structures:
* A queue, typically named `unexplored` or `queue`, to store nodes that are pending exploration.
* A hash table or a set called `visited` to ensure that we do not revisit nodes.
#### Algorithm Steps
1. Begin from a starting vertex, $i$.
2. Mark the vertex $i$ as visited.
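A sketch of the full traversal using the `unexplored` queue and `visited` set described above (the adjacency-list graph is a hypothetical example):

```python
from collections import deque

def bfs(graph, start):
    visited = {start}            # Mark the starting vertex as visited
    unexplored = deque([start])  # Queue of nodes pending exploration
    order = []
    while unexplored:
        node = unexplored.popleft()
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                unexplored.append(neighbor)
    return order

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(graph, 'A'))  # ['A', 'B', 'C', 'D']
```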
Dijkstra's algorithm is a cornerstone in graph theory, designed to compute the shortest paths from a starting vertex to all other vertices in a weighted graph.
* **Input**: A weighted graph (where each edge has a value associated with it, representing the cost or distance) and a starting vertex `A`.
* **Output**: An array `distances` where `distances[v]` represents the shortest path from `A` to vertex `v`.
#### Containers and Data Structures
* An array `distances`, initialized to `∞` for all vertices except the starting vertex which is initialized to `0`.
* A hash table `finished` to keep track of vertices for which the shortest path has been determined.
* A priority queue to efficiently select the vertex with the smallest tentative distance.
#### Algorithm Steps
I. Initialize `distances[A] = 0` and `distances[v] = ∞` for all other vertices `v`.
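The remaining steps can be sketched with Python's `heapq` as the priority queue (the example graph is hypothetical):

```python
import heapq

def dijkstra(graph, A):
    # distances[v] = ∞ for all vertices except the starting vertex
    distances = {v: float('inf') for v in graph}
    distances[A] = 0
    finished = set()  # Vertices whose shortest path is settled
    pq = [(0, A)]     # Priority queue of (tentative distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if u in finished:
            continue
        finished.add(u)
        for v, w in graph[u]:
            if d + w < distances[v]:
                distances[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return distances

# Hypothetical example graph: vertex -> list of (neighbor, weight)
graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('C', 2), ('D', 5)],
    'C': [('D', 1)],
    'D': [],
}
print(dijkstra(graph, 'A'))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```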
While the basic implementation of Dijkstra's algorithm runs in `O(n^2)` time, it can be improved by using a priority queue to select the next vertex.
The Bellman-Ford algorithm is a graph search algorithm that finds the shortest path from a single source vertex to all vertices in a weighted graph. Unlike Dijkstra's algorithm, which works only for graphs with non-negative weights, Bellman-Ford is versatile enough to handle graphs in which some of the edge weights are negative.
#### Input & Output
* **Input**: A weighted graph (where each edge has an associated cost or distance) and a starting vertex `A`.
* **Output**: An array `distances` where `distances[v]` represents the shortest path from `A` to vertex `v`.
#### Containers and Data Structures
* An array `distances`, initialized to `∞` for all vertices except the starting vertex which is initialized to `0`.
* A predecessor array, often used to reconstruct the shortest path.
#### Algorithm Steps
I. Initialize `distances[A] = 0` for the starting vertex and `distances[v] = ∞` for all other vertices.
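The relaxation loop can be sketched over an explicit edge list (hypothetical example graph; the predecessor array from above is included for path reconstruction):

```python
def bellman_ford(vertices, edges, A):
    distances = {v: float('inf') for v in vertices}
    distances[A] = 0
    predecessor = {v: None for v in vertices}
    # Relax every edge |V| - 1 times
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            if distances[u] + w < distances[v]:
                distances[v] = distances[u] + w
                predecessor[v] = u
    # One more pass: any further improvement means a negative-weight cycle
    for u, v, w in edges:
        if distances[u] + w < distances[v]:
            raise ValueError("Graph contains a negative-weight cycle")
    return distances, predecessor

# Hypothetical example with one negative edge (but no negative cycle)
vertices = ['A', 'B', 'C', 'D']
edges = [('A', 'B', 4), ('A', 'C', 2), ('C', 'B', -1), ('B', 'D', 3)]
distances, _ = bellman_ford(vertices, edges, 'A')
print(distances)  # {'A': 0, 'B': 1, 'C': 2, 'D': 4}
```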
Such a subgraph is called a minimal spanning tree.
Prim's Algorithm is a greedy algorithm used to find a minimum spanning tree (MST) for a weighted undirected graph. The goal of the algorithm is to include every vertex in the graph into a tree while minimizing the total edge weights.
#### Input & Output
* **Input**: A connected, undirected graph with weighted edges.
* **Output**: A minimum spanning tree, which is a subset of the edges that connects all the vertices together without any cycles and with the minimum possible total edge weight.
#### Containers and Data Structures
* An array `key[]` to store weights. Initially, `key[v] = ∞` for all `v` except the first vertex.
* A boolean array `mstSet[]` to keep track of vertices included in MST. Initially, all values are `false`.
* An array `parent[]` to store the MST.
#### Algorithm Steps
I. Start with an arbitrary node as the initial MST node.
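A sketch using the `key`, `mstSet`, and `parent` containers described above, over a hypothetical adjacency-map graph:

```python
def prim_mst(graph):
    # graph: {vertex: {neighbor: weight}}, undirected and connected
    vertices = list(graph)
    key = {v: float('inf') for v in vertices}  # key[v] = ∞ initially
    parent = {v: None for v in vertices}
    mst_set = {v: False for v in vertices}     # mstSet[]: in-tree flags
    key[vertices[0]] = 0                       # Start from an arbitrary node

    for _ in vertices:
        # Pick the cheapest vertex not yet in the MST
        u = min((v for v in vertices if not mst_set[v]), key=lambda v: key[v])
        mst_set[u] = True
        for v, w in graph[u].items():
            if not mst_set[v] and w < key[v]:
                key[v] = w      # Cheaper edge into v found
                parent[v] = u
    return [(parent[v], v, key[v]) for v in vertices if parent[v] is not None]

# Hypothetical example graph
graph = {
    'A': {'B': 1, 'C': 3},
    'B': {'A': 1, 'C': 2, 'D': 4},
    'C': {'A': 3, 'B': 2, 'D': 5},
    'D': {'B': 4, 'C': 5},
}
print(prim_mst(graph))  # [('A', 'B', 1), ('B', 'C', 2), ('B', 'D', 4)]
```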
The edges selected by Prim's algorithm in this case are: A-B, B-D, D-E, and A-C.
### Kruskal's Algorithm
Kruskal's Algorithm is another method to find the minimum spanning tree (MST) of a connected, undirected graph with weighted edges. It works by sorting all the edges from the lowest to highest weight, and then picking edges one by one, ensuring that the inclusion of each edge doesn't form a cycle.
#### Input & Output
* **Input**: A connected, undirected graph with weighted edges.
* **Output**: A minimum spanning tree composed of a subset of the edges.
#### Containers and Data Structures
* A list or priority queue to sort all the edges based on their weights.
* A disjoint-set (or union-find) structure to help in cycle detection and prevention.
#### Algorithm Steps
I. Sort all the edges in increasing order based on their weights.
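The sorting and cycle-checking steps can be sketched with a minimal union-find structure (the example graph is hypothetical):

```python
def kruskal_mst(vertices, edges):
    parent = {v: v for v in vertices}

    def find(v):
        # Path-halving find for the disjoint-set structure
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    mst = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):  # Step I: sort by weight
        ru, rv = find(u), find(v)
        if ru != rv:          # Adding this edge does not form a cycle
            parent[ru] = rv   # Union the two components
            mst.append((u, v, w))
    return mst

# Hypothetical example graph as an edge list
vertices = ['A', 'B', 'C', 'D']
edges = [('A', 'B', 1), ('B', 'C', 2), ('A', 'C', 3), ('C', 'D', 4)]
print(kruskal_mst(vertices, edges))  # [('A', 'B', 1), ('B', 'C', 2), ('C', 'D', 4)]
```

The `A-C` edge is skipped because both endpoints already belong to the same component, which is exactly the cycle-prevention role of the disjoint-set structure.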