
Commit c1fd270

Lesson 8 and fixed the broken links
1 parent c19db22 commit c1fd270

16 files changed: +101 additions, −45 deletions

etc/quiz-src/questions-en.txt

Lines changed: 57 additions & 6 deletions
@@ -1,4 +1,4 @@
-Lesson 1B Introduction to AI - Pre Quiz
+Lesson 1B Introduction to AI: Pre Quiz
 * A famous 19th century proto-computer engineer was
 - Charles Barkley
 + Charles Babbage
@@ -11,7 +11,7 @@ Lesson 1B Introduction to AI - Pre Quiz
 - true, they are usually considered to be 'intelligent
 + false, but they are increasingly able to pass Turing tests as they become more sophisticated.
 
-Lesson 1E Introduction to AI - Post-Quiz
+Lesson 1E Introduction to AI: Post-Quiz
 * A top-down approach to AI is a model of reasoning called
 - strategic reasoning
 + symbolic reasoning
@@ -76,7 +76,7 @@ Lesson 3E Introduction to Neural Networks - Perceptron: Post-Quiz
 + weights
 - gradient
 
-Lesson 4B Neural Networks - Pre Quiz
+Lesson 4B Neural Networks: Pre Quiz
 * The quality of prediction is measured by Loss function
 + True
 - False
@@ -89,7 +89,7 @@ Lesson 4B Neural Networks - Pre Quiz
 - multiple propagation
 - front propagation
 
-Lesson 4E Neural Networks - Post Quiz
+Lesson 4E Neural Networks: Post Quiz
 * We use ____ for regression loss functions
 - absolute error
 - mean squared error
@@ -105,7 +105,7 @@ Lesson 4E Neural Networks - Post Quiz
 + True
 - False
 
-Lesson 5B Frameworks - Pre Quiz
+Lesson 5B Frameworks: Pre Quiz
 * Deep Neural Network training requires a lot of computations
 + True
 - False
@@ -118,7 +118,7 @@ Lesson 5B Frameworks - Pre Quiz
 + algorithm
 - computer
 
-Lesson 5E Frameworks - Post Quiz
+Lesson 5E Frameworks: Post Quiz
 * After compiling our model object, we train by calling ____ function
 + fit
 - train
@@ -133,3 +133,54 @@ Lesson 5E Frameworks - Post Quiz
 * Pred is the values predicted by the network
 + True
 - False
+
+Lesson 7B Convolutional Neural Networks: Pre Quiz
+* To extract patterns from images we use?
++ convolutional filters
+- extractor
+- filters
+* One of these is not a CNN Architecture
+- ResNet
+- MobileNet
++ Tensorflow
+* CNN are mostly used for computer vision tasks.
++ true
+- false
+
+Lesson 7E Convolutional Neural Networks: Post Quiz
+* Which pooling layer is used "scale down" the size of the image
+- average pooling
+- max pooling
++ a and b
+* Convolutional networks generalizes much better
++ True
+- False
+* To train our neural network, we need to convert images to tensors
++ true
+- false
+
+Lesson 8B Pre-trained Networks and Transfer Learning: Pre Quiz
+* Transfer learning approach uses untrained models for classification
+- true
++ false
+* One of these is not a normalization technique?
++ height normalization
+- weight normalization
+- layer normalization
+* We choose Stochastic Gradient Descent(SGD) in deep learning because classical gradient descent can be ____
+- fast
++ slow
+
+Lesson 8E Pre-trained Networks and Transfer Learning: Post Quiz
+* Dropout layers act as a ____ technique
+- gradient boosting
+- training
++ regularization
+* freezing weights of convolutional feature extractor can be done by ____
+- setting `requires_grad` property to `False`
+- setting `trainable` property to `False`
++ a and b
+* Batch normalization is to bring values that flow through the ____ to right interval
+- algorithms
+- batches
++ neural network
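The Lesson 8 answers above reference freezing the weights of a convolutional feature extractor. As a minimal illustration of the `requires_grad` option, here is a PyTorch-style sketch; the choice of ResNet-18 and the 10-class head are illustrative assumptions, not part of the commit:

```python
import torch.nn as nn
from torchvision import models  # assumes torchvision is installed

# Transfer learning starts from a pre-trained model (not an untrained one):
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the convolutional feature extractor -- in PyTorch via requires_grad;
# in Keras the equivalent is setting a layer's trainable property to False.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head; the new layer is trainable by default.
model.fc = nn.Linear(model.fc.in_features, 10)  # 10 classes, illustrative
```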

lessons/2-Symbolic/README.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@
 
 The quest for artificial intelligence is based on a search for knowledge, to make sense of the world similar to how humans do. But how can you go about doing this?
 
-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/201)
+## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/102)
 
 In the early days of AI, the top-down approach to creating intelligent systems (discussed in the previous lesson) was popular. The idea was to extract the knowledge from people into some machine-readable form, and then use it to automatically solve problems. This approach was based on two big ideas:
 
lessons/3-NeuralNetworks/03-Perceptron/README.md

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 # Introduction to Neural Networks: Perceptron
 
-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/301)
+## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/103)
 
 One of the first attempts to implement something similar to a modern neural network was done by Frank Rosenblatt from Cornell Aeronautical Laboratory in 1957. It was a hardware implementation called "Mark-1", designed to recognize primitive geometric figures, such as triangles, squares and circles.
 
@@ -76,7 +76,7 @@ In this lesson, you learned about a perceptron, which is a binary classification
 
 If you'd like to try to build your own perceptron, try [this lab on Microsoft Learn](https://docs.microsoft.com/en-us/azure/machine-learning/component-reference/two-class-averaged-perceptron?WT.mc_id=academic-57639-dmitryso) which uses the [Azure ML designer](https://docs.microsoft.com/en-us/azure/machine-learning/concept-designer?WT.mc_id=academic-57639-dmitryso).
 
-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/302)
+## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/203)
 
 ## Review & Self Study

lessons/3-NeuralNetworks/04-OwnFramework/README.md

Lines changed: 5 additions & 5 deletions
@@ -10,7 +10,7 @@ In this section we will extend this model into a more flexible framework, allowi
 
 We will also develop our own modular framework in Python that will allow us to construct different neural network architectures.
 
-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/401)
+## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/104)
 
 ## Formalization of Machine Learning
 
@@ -64,15 +64,15 @@ Note that the left-most part of all those expressions is the same, and thus we c
 
 ## Conclusion
 
-In this lesson, we have built our own neural network library, and we have used it for a simple two-dimensional classification task.
+In this lesson, we have built our own neural network library, and we have used it for a simple two-dimensional classification task.
 
 ## 🚀 Challenge
 
-In the accompanying notebook, you will implement your own framework for building and training multi-layered perceptrons. You will be able to see in detail how modern neural networks operate.
+In the accompanying notebook, you will implement your own framework for building and training multi-layered perceptrons. You will be able to see in detail how modern neural networks operate.
 
 Proceed to the [OwnFramework](OwnFramework.ipynb) notebook and work through it.
 
-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/402)
+## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/204)
 
 ## Review & Self Study
 
@@ -83,4 +83,4 @@ Backpropagation is a common algorithm used in AI and ML, worth studying [in more
 In this lab, you are asked to use the framework you constructed in this lesson to solve MNIST handwritten digit classification.
 
 * [Instructions](lab/README.md)
-* [Notebook](lab/MyFW_MNIST.ipynb)
+* [Notebook](lab/MyFW_MNIST.ipynb)

lessons/3-NeuralNetworks/05-Frameworks/IntroPyTorch.ipynb

Lines changed: 1 addition & 1 deletion
@@ -1026,7 +1026,7 @@
 "source": [
 "## Training One-Layer Perceptron\n",
 "\n",
-"Let's use Tensorflow gradient computing machinery to train one-layer perceptron.\n",
+"Let's use PyTorch gradient computing machinery to train one-layer perceptron.\n",
 "\n",
 "Our neural network will have 2 inputs and 1 output. The weight matrix $W$ will have size $2\\times1$, and bias vector $b$ -- $1$.\n",
 "\n",

lessons/3-NeuralNetworks/05-Frameworks/Overfitting.md

Lines changed: 7 additions & 4 deletions
@@ -25,7 +25,7 @@ It is very important to strike a correct balance between the richness of the mod
 
 As you can see from the graph above, overfitting can be detected by a very low training error, and a high validation error. Normally during training we will see both training and validation errors starting to decrease, and then at some point validation error might stop decreasing and start rising. This will be a sign of overfitting, and the indicator that we should probably stop training at this point (or at least make a snapshot of the model).
 
-![overfitting]("../images/Overfitting.png")
+![overfitting](../images/Overfitting.png)
 
 ## How to prevent overfitting
 
@@ -38,10 +38,11 @@ If you can see that overfitting occurs, you can do one of the following:
 ## Overfitting and Bias-Variance Tradeoff
 
 Overfitting is actually a case of a more generic problem in statistics called [Bias-Variance Tradeoff](https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff). If we consider the possible sources of error in our model, we can see two types of errors:
+
 * **Bias errors** are caused by our algorithm not being able to capture the relationship between training data correctly. It can result from the fact that our model is not powerful enough (**underfitting**).
 * **Variance errors**, which are caused by the model approximating noise in the input data instead of meaningful relationship (**overfitting**).
 
-During training, bias error decreases (as our model learns to approximate the data), and variance error increases. It is important to stop training - either manually (when we detect overfitting) or automatically (by introducing regularization) - to prevent overfitting.
+During training, bias error decreases (as our model learns to approximate the data), and variance error increases. It is important to stop training - either manually (when we detect overfitting) or automatically (by introducing regularization) - to prevent overfitting.
 
 ## Conclusion
 
@@ -51,16 +52,18 @@ In this lesson, you learned about the differences between the various APIs for t
 
 In the accompanying notebooks, you will find 'tasks' at the bottom; work through the notebooks and complete the tasks.
 
-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/502)
+## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/205)
 
 ## Review & Self Study
 
 Do some research on the following topics:
+
 - TensorFlow
 - PyTorch
 - Overfitting
 
-Ask yourself the following questions:
+Ask yourself the following questions:
+
 - What is the difference between TensorFlow and PyTorch?
 - What is the difference between overfitting and underfitting?
 
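The file's advice to stop training when validation error starts rising is easy to make concrete. A minimal early-stopping sketch in PyTorch, assuming a toy linear model and random data purely for illustration:

```python
import torch
import torch.nn as nn

# Toy regression problem with a train/validation split, for illustration only
torch.manual_seed(0)
X, y = torch.randn(200, 4), torch.randn(200, 1)
X_train, y_train, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

model = nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

best_val, bad_epochs, patience = float("inf"), 0, 5
best_state = None

for epoch in range(200):
    # Training step: bias error shrinks as the model fits the data
    opt.zero_grad()
    loss_fn(model(X_train), y_train).backward()
    opt.step()

    # Validation: when this stops improving, variance error is taking over
    with torch.no_grad():
        val = loss_fn(model(X_val), y_val).item()
    if val < best_val:
        best_val, bad_epochs = val, 0
        best_state = {k: v.clone() for k, v in model.state_dict().items()}  # snapshot
    else:
        bad_epochs += 1
        if bad_epochs >= patience:   # validation error keeps rising: stop early
            break

model.load_state_dict(best_state)    # restore the best model seen during training
```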

lessons/3-NeuralNetworks/05-Frameworks/README.md

Lines changed: 2 additions & 2 deletions
@@ -5,7 +5,7 @@ As we have learned already, to be able to train neural networks efficiently we n
 * To operate on tensors, eg. to multiply, add, and compute some functions such as sigmoid or softmax
 * To compute gradients of all expressions, in order to perform gradient descent optimization
 
-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/501)
+## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/105)
 
 While the `numpy` library can do the first part, we need some mechanism to compute gradients. In [our framework](../04-OwnFramework/OwnFramework.ipynb) that we have developed in the previous section we had to manually program all derivative functions inside the `backward` method, which does backpropagation. Ideally, a framework should give us the opportunity to compute gradients of *any expression* that we can define.
 
@@ -23,7 +23,7 @@ High-level API| [Keras](https://keras.io/) | [PyTorch Lightning](https://pytorch
 
 **High-level APIs** pretty much consider neural networks as a **sequence of layers**, and make constructing most of the neural networks much easier. Training the model usually requires preparing the data and then calling a `fit` function to do the job.
 
-The high-level API allows you to construct typical neural networks very quickly without worrying about lots of details. At the same time, low-level API offer much more control over the training process, and thus they are used a lot in research, when you are dealing with new neural network architectures.
+The high-level API allows you to construct typical neural networks very quickly without worrying about lots of details. At the same time, low-level API offer much more control over the training process, and thus they are used a lot in research, when you are dealing with new neural network architectures.
 
 It is also important to understand that you can use both APIs together, eg. you can develop your own network layer architecture using low-level API, and then use it inside the larger network constructed and trained with the high-level API. Or you can define a network using the high-level API as a sequence of layers, and then use your own low-level training loop to perform optimization. Both APIs use the same basic underlying concepts, and they are designed to work well together.
 
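The README's point about mixing the two API levels can be sketched concretely. Assuming Keras, and with the layer name `MyDense` and the toy data as illustrative inventions, a custom low-level layer slots directly into a high-level `Sequential` model trained with `fit`:

```python
import tensorflow as tf

# Low-level part: a custom layer that owns its weights and forward computation,
# computing relu(x @ W + b)
class MyDense(tf.keras.layers.Layer):
    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer="glorot_uniform", trainable=True)
        self.b = self.add_weight(shape=(self.units,),
                                 initializer="zeros", trainable=True)

    def call(self, inputs):
        return tf.nn.relu(inputs @ self.w + self.b)

# High-level part: the custom layer becomes one element in a sequence of
# layers, and training is just compile() followed by fit()
model = tf.keras.Sequential([
    MyDense(16),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

x = tf.random.normal((100, 4))   # toy data for illustration
y = tf.cast(tf.reduce_sum(x, axis=1, keepdims=True) > 0, tf.float32)
model.fit(x, y, epochs=5, verbose=0)
```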

lessons/3-NeuralNetworks/05-Frameworks/lab/README.md

Lines changed: 1 addition & 2 deletions
@@ -4,7 +4,7 @@ Lab Assignment from [AI for Beginners Curriculum](https://github.com/microsoft/a
 
 ## Task
 
-Solve two classification problems using single- and multi-layered fully-connected networks using PyTorch or TensorFlow:
+Solve two classification problems using single and multi-layered fully-connected networks using PyTorch or TensorFlow:
 
 1. **[Iris classification](https://en.wikipedia.org/wiki/Iris_flower_data_set)** problem - an example of problem with tabular input data, which can be handled by classical machine learning. You goal would be to classify irises into 3 classes, based on 4 numeric parameters.
 1. **MNIST** handwritten digit classification problem which we have seen before.
@@ -14,4 +14,3 @@ Try different network architectures to achieve the best accuracy you can get.
 ## Stating Notebook
 
 Start the lab by opening [LabFrameworks.ipynb](LabFrameworks.ipynb)
-

lessons/4-ComputerVision/07-ConvNets/CNN_Architectures.md

Lines changed: 2 additions & 0 deletions
@@ -41,3 +41,5 @@ Here is [a good blog post](https://medium.com/analytics-vidhya/talented-mr-1x1-c
 MobileNet is a family of models with reduced size, suitable for mobile devices. Use them if you are short in resources, and can sacrifice a little bit of accuracy. The main idea behind them is so-called **depthwise separable convolution**, which allows representing convolution filters by a composition of spatial convolutions and 1x1 convolution over depth channels. This significantly reduces the number of parameters, making the network smaller in size, and also easier to train with less data.
 
 Here is [a good blog post on MobileNet](https://medium.com/analytics-vidhya/image-classification-with-mobilenet-cc6fbb2cd470).
+
+## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/207)
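The MobileNet paragraph in that hunk describes depthwise separable convolution. A minimal PyTorch sketch of the idea (the class name and channel sizes are illustrative) shows the parameter savings it mentions:

```python
import torch.nn as nn

# Depthwise separable convolution: a per-channel spatial convolution
# (groups=in_channels) followed by a 1x1 convolution across depth channels.
class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size,
                                   padding=kernel_size // 2, groups=in_channels)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Parameter comparison for a 64-to-128 channel, 3x3 convolution
standard = nn.Conv2d(64, 128, 3, padding=1)
separable = DepthwiseSeparableConv(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), count(separable))  # 73856 vs 8960 parameters
```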
