
Commit 87ff424

Polish the story in the third notebook
1 parent c750d61 commit 87ff424

3 files changed: +15 -7 lines changed


notebooks/2_Neural_Network-based_Image_Classifier-1.ipynb

Lines changed: 1 addition & 1 deletion
@@ -302,7 +302,7 @@
     "editable": true
    },
    "source": [
-    "Using the magic of blackbox optimisation algorithms provided by TensorFlow, we can define a single step of the stochastic gradient descent optimiser to improve our parameters for our score function and reduce the loss."
+    "Using the magic of blackbox optimisation algorithms provided by TensorFlow, we can define a single step of the stochastic gradient descent optimiser (to improve our parameters for our score function and reduce the loss) in one line of code."
    ]
   },
   {
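For context, the "one line of code" this cell now advertises is the optimiser step. A minimal sketch of the idea, assuming the TF 1.x graph API; the linear score function, the learning rate of 0.5, and all variable names here are illustrative assumptions, not taken from the notebook:

```python
import tensorflow as tf

# Assumed setup: flattened 28x28 MNIST images, 10 classes, a linear score
# function, and a softmax cross-entropy loss (all names are hypothetical).
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
scores = tf.matmul(x, W) + b
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=scores))

# The one-line optimiser definition: each run of `train_step` performs a
# single stochastic gradient descent update of W and b to reduce the loss.
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
```

Running `sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})` in a loop then trains the model one mini-batch at a time.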

notebooks/3_Neural_Network-based_Image_Classifier-2.ipynb

Lines changed: 14 additions & 6 deletions
@@ -17,8 +17,6 @@
     "editable": true
    },
    "source": [
-    "## Getting a feel for the data\n",
-    "\n",
     "Let's start by importing some packages we need."
    ]
   },
@@ -52,7 +50,13 @@
     "editable": true
    },
    "source": [
-    "MNIST is a dataset that contains 70,000 labelled images of handwritten digits. We're going to train a linear classifier on a part of this data set, and test it against another portion of the data set to see how well we did.\n",
+    "## Getting a feel for the data\n",
+    "\n",
+    "MNIST is a dataset that contains 70,000 labelled images of handwritten digits that look like the following.\n",
+    "\n",
+    "![MNIST Data Sample](images/mnist-sample.png \"MNIST Data Sample\")\n",
+    "\n",
+    "We're going to train a linear classifier on a part of this data set, and test it against another portion of the data set to see how well we did.\n",
     "\n",
     "The TensorFlow tutorial comes with a handy loader for this dataset."
    ]
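The "handy loader" the new cell refers to ships with the TF 1.x tutorials package; roughly (the cache directory name is an assumption):

```python
from tensorflow.examples.tutorials.mnist import input_data

# Downloads and caches the MNIST archives on first use; one_hot=True
# returns each label as a one-hot vector of length 10.
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# The 70,000 images are split into train, validation and test portions.
print(mnist.train.num_examples, mnist.validation.num_examples,
      mnist.test.num_examples)
```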
@@ -236,7 +240,9 @@
     "editable": true
    },
    "source": [
-    "We define a nonlinear model for the score function (a vanilla neural network) after introducing two sets of parameters, **W1**, **b1** and **W2**, **b2**."
+    "We define a nonlinear model for the score function (a vanilla neural network) after introducing two sets of parameters, **W1**, **b1** and **W2**, **b2**.\n",
+    "\n",
+    "![Neural network 1 hidden layer](images/neural-network-1-hidden.png \"Neural network with 1 hidden layer\")"
    ]
   },
   {
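A sketch of what a score function with the two parameter sets **W1**, **b1** and **W2**, **b2** could look like; the hidden width of 400 is borrowed from the exercise code later in this diff, and the ReLU nonlinearity is an assumption:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])  # flattened 28x28 images

# First parameter set: input layer -> hidden layer.
W1 = tf.Variable(tf.truncated_normal(shape=[784, 400], stddev=0.1))
b1 = tf.Variable(tf.constant(0.1, shape=[400]))
# Second parameter set: hidden layer -> 10 class scores.
W2 = tf.Variable(tf.truncated_normal(shape=[400, 10], stddev=0.1))
b2 = tf.Variable(tf.constant(0.1, shape=[10]))

hidden = tf.nn.relu(tf.matmul(x, W1) + b1)  # the nonlinearity
y = tf.matmul(hidden, W2) + b2              # unnormalised class scores
```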
@@ -275,7 +281,7 @@
     "\n",
     "````\n",
     "\n",
-    "We define our loss function to measure how poorly this model performs on images with known labels. We think of the scores we have as unnormalized log probabilities of the classes, and take the cross entropy loss of the softmax of the class scores determined by our score function."
+    "We define our loss function to measure how poorly this model performs on images with known labels. We use a specific form called the [cross entropy loss](https://jamesmccaffrey.wordpress.com/2013/11/05/why-you-should-use-cross-entropy-error-instead-of-classification-error-or-mean-squared-error-for-neural-network-classifier-training/)."
    ]
   },
   {
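Continuing the sketch above, the cross entropy of the softmax of the class scores `y` is one fused op in TF 1.x (`y_` is an assumed placeholder for the one-hot labels):

```python
y_ = tf.placeholder(tf.float32, [None, 10])  # one-hot ground-truth labels

# Softmax and cross entropy combined in one numerically stable op; the mean
# over the mini-batch is the scalar loss the optimiser will minimise.
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
```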
@@ -298,7 +304,7 @@
     "editable": true
    },
    "source": [
-    "Using the magic of blackbox optimisation algorithms provided by TensorFlow, we can define a single step of the stochastic gradient descent optimiser to improve our parameters for our score function and reduce the loss."
+    "Using the magic of blackbox optimisation algorithms provided by TensorFlow, we can define a single step of the stochastic gradient descent optimiser (to improve our parameters for our score function and reduce the loss) in one line of code."
    ]
   },
   {
@@ -425,6 +431,8 @@
     "1. Play around with the length of the hidden layer and see how the accuracy changes.\n",
     "\n",
     "2. Try extending the model to two hidden layers and see how much the accuracy increases:\n",
+    "\n",
+    "   ![Neural network 2 hidden layers](images/neural-network-2-hidden.png \"Neural network with 2 hidden layers\")\n",
     " \n",
     " ````\n",
     " W1 = tf.Variable(tf.truncated_normal(shape=[784, 400], stddev=0.1))\n",
(Third changed file: binary image, 155 KB; not shown.)
