Tip: If you are on a slow or old machine like mine, or if you want to run many different examples to explore the design space, you can speed up the calculations by removing a border from the MNIST image data. Every image has a 1-pixel white border; removing it reduces the number of input variables by 108, or more than 13%. In fact, you can drop even a 3-pixel border without any impact that I can notice. Dropping more is also possible, but then the expected maximum accuracy starts to drop. Still, it is quite remarkable that even using only the innermost 8x8 image fragment one can easily get above 80% accuracy.
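In case it is useful, here is a minimal sketch of the cropping, assuming the images arrive as flattened 784-element rows (the usual MNIST layout); the function name and border widths are my own choices:

```python
import numpy as np

def crop_border(X, border):
    """Remove a `border`-pixel frame from flattened 28x28 MNIST images.

    X: array of shape (n, 784). Returns shape (n, (28 - 2*border)**2).
    """
    side = 28 - 2 * border
    imgs = X.reshape(-1, 28, 28)
    return imgs[:, border:28 - border, border:28 - border].reshape(-1, side * side)

# A 1-pixel border shrinks 784 inputs to 26*26 = 676 (108 fewer, ~13.8%).
X = np.zeros((5, 784))
print(crop_border(X, 1).shape)   # (5, 676)
print(crop_border(X, 10).shape)  # (5, 64) -- the innermost 8x8 fragment
```

Apply the same crop to the training, validation, and test images, and the rest of the network code only needs its input size adjusted.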

Gotcha: I ran the one-hidden-layer, 100-node scenario with the original test set of 10,000 examples, without splitting it into 5,000 for validation and 5,000 for testing. I was surprised that the maximum accuracy I could achieve was only 97.8%, not the 98.6% stated in the book. However, this is purely an effect of the test set: when I did split it into a validation set and a 5,000-example test set, I got the 98.6% accuracy with the same network weights. It surprised me that the size of the test set makes that big a difference in measured accuracy.

About the tip: That's smart, unconventional thinking. I'll be honest: I never thought about that. I wouldn't have put that technique in the book anyway, because it might confuse matters (and result in different outcomes from the examples and end-of-chapter exercises), but if you're willing to sacrifice a few percentage points for speed, it might be worth it. Just out of curiosity: in the exercise where you aim for 99% accuracy, how much do you lose by removing the border?

About the gotcha: if I understand correctly, you trained over the whole MNIST (training and test sets together) and tested over all of it as well. If so, then I'm surprised by this result. If anything, I'd expect training/testing over the same exact set to give an unrealistically high %, because of overfitting. Can you please confirm that you're training and testing over the same set of 10,000 examples? Or maybe you're training over 10,000, and then testing over the 5,000 test examples only?

I haven't yet fully explored the 99% exercise with varying borders removed, but from the testing I have done so far, removing a 3-pixel border from the images doesn't reduce the maximum accuracy at all; the changes are within the noise. Even removing a bigger border, say 5 or 6 pixels, has only a relatively moderate impact on maximum accuracy, about 0.2-0.4%. There doesn't seem to be a big drop-off: removing even a 10-pixel border, leaving only the central 8x8 pixels, still achieves accuracies in the mid-80% range.

Gotcha: No, I did not train over the entire 70,000 MNIST examples. I trained over the 60,000-example training set and then tested over the 10,000 validation + test examples. In other words, I trained exactly as in the book, but I tested over the validation and test sets combined, and this produces almost 1% lower accuracy. Testing only over the 5,000-example test set produces a higher accuracy. So the choice to split the original 10,000 test examples into 5,000 for validation and 5,000 for testing was a lucky decision; otherwise it would not be possible to reach 99% with a one-hidden-layer net.

About the border removal: that is indeed interesting. Today I learned! I wonder what happens if one removes random pixels from the image instead (keeping the choice consistent across images). It would be fun to find the breaking point where inference really starts to suffer. I expect that the border pixels are less important than the central ones (at least for the carefully resized and centered MNIST digits), but I'd be curious to see how much information the algorithm actually needs.
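For anyone who wants to try, here is one way that experiment could be set up. The fixed random subset of pixel indices, chosen once and then reused for every image, is my own reading of "keeping it consistent across images":

```python
import numpy as np

def make_pixel_mask(n_keep, n_pixels=784, seed=0):
    """Pick a fixed random subset of pixel indices, shared by all images."""
    rng = np.random.default_rng(seed)
    return np.sort(rng.choice(n_pixels, size=n_keep, replace=False))

def drop_random_pixels(X, keep):
    """Keep only the selected pixel columns of flattened images X (n, 784)."""
    return X[:, keep]

keep = make_pixel_mask(n_keep=400)        # discard roughly half the pixels
X = np.random.default_rng(1).random((3, 784))
print(drop_random_pixels(X, keep).shape)  # (3, 400)
```

Sweeping n_keep downward while tracking test accuracy would locate the breaking point.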

About the gotcha: Aaah, OK, I see. I was expecting that a few thousand test cases would basically level off random variations like those, but apparently that is not the case. On the other hand, 99% is a pretty arbitrary number: I checked how far I could get with one hidden layer, and used that number as a challenge. Did you counter-check by testing over the validation set only, instead of the test set? From your result, it seems that more of the "harder" images ended up in the validation set.

By the way, thank you for sharing this information. I'm enjoying reading about your experiments.

Here is another small tip regarding the loss function. In the book the loss function is implemented with

-np.sum(Y * np.log(y_hat)) / Y.shape[0]

Numerically that isn't sound, because y_hat can be zero and the logarithm of zero is undefined. What I have seen some people do is add a small number to y_hat, like

-np.sum(Y * np.log(y_hat+1e-8)) / Y.shape[0]

That works, but it introduces a small error in the reported loss. For a well-trained model the loss can get pretty small, and then that small error becomes noticeable. It isn't a big deal, since this is only a reporting function, but I think a better way to deal with it is the masked-array feature of NumPy:

-np.sum(Y * np.ma.log(y_hat)) / Y.shape[0]

So instead of np.log, simply use np.ma.log. This masks out the elements for which y_hat is zero. In most cases, for a well-trained network, when y_hat is zero so is Y, which means the product should be zero as well: the limit of x log(x) is zero as x goes to zero. The masking does the right thing here, because for the sum it doesn't matter whether a zero is added or the element is ignored; the result is exactly the same.

This implementation is only wrong for y_hat=0 and Y non-zero. While this can potentially happen early on in the training with randomly initialized weights, once a network has been sufficiently trained I think this essentially never happens.
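A small demonstration of the difference between the three variants, with made-up one-hot labels and predictions in which one prediction is exactly zero:

```python
import numpy as np

Y = np.array([[0., 1.], [1., 0.]])        # one-hot labels
y_hat = np.array([[0., 1.], [0.8, 0.2]])  # y_hat[0, 0] is exactly zero

with np.errstate(divide='ignore', invalid='ignore'):
    naive = -np.sum(Y * np.log(y_hat)) / Y.shape[0]        # 0 * -inf -> nan
epsilon = -np.sum(Y * np.log(y_hat + 1e-8)) / Y.shape[0]   # slightly biased
masked = -np.sum(Y * np.ma.log(y_hat)) / Y.shape[0]        # zero entry masked

print(naive)   # nan
print(epsilon) # close to the true loss, but off by roughly 1e-8
print(masked)  # exact: -(log(1.0) + log(0.8)) / 2
```

The masked version drops the Y=0, y_hat=0 entry from the sum, which is the same as adding a zero, so the reported loss matches the mathematical value exactly.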

This is a very interesting corner case. For some reason, I love to read about numerical issues with implementation.

As you say, it's unlikely to happen often in practice, and I try to keep the number of digressions/sidebars to a minimum... but I'll take note of this and consider it for a next edition.

If you like numerical issues, then I will describe a problem I chased for three days. While implementing dropout regularization, I encountered an issue with the implementation of softmax that cost me a three-day delay. In your book the implementation of softmax is fine but basic, meaning it does not protect against overflow or underflow in the exponentials. What some do, for example, is subtract the maximum value before applying the exponential. Mathematically this is equivalent, because it simply multiplies the numerator and denominator of the softmax formula by the same constant factor; nothing changes. Online I even found Python code for it that was something like

e = np.exp(x - np.max(x))

The problem with this code is subtle but numerically serious. What happens is the following: np.max(x) returns the maximum of the entire matrix, meaning the maximum across the entire mini-batch. But we only need the maximum for each input (image), not the maximum across several inputs. Numerically this causes problems, because in some cases it can push the argument of the exponential so far into negative values that all the exponentials in a row underflow and return zero. The solution is to subtract only the row maximum, not the maximum across the entire mini-batch. Something like

e = np.exp(x - np.max(x,axis=1).reshape(-1,1))
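Here is a small self-contained comparison of the two variants; the example values are mine, chosen so that the two rows of the mini-batch live on very different scales:

```python
import numpy as np

def softmax_batch_max(x):
    """Problematic: subtracts the global maximum of the whole mini-batch."""
    e = np.exp(x - np.max(x))
    return e / e.sum(axis=1, keepdims=True)

def softmax_row_max(x):
    """Stable: subtracts each row's own maximum."""
    e = np.exp(x - np.max(x, axis=1).reshape(-1, 1))
    return e / e.sum(axis=1, keepdims=True)

x = np.array([[1000.0, 1001.0],   # large logits
              [   0.0,    1.0]])  # ordinary logits in the same mini-batch

with np.errstate(divide='ignore', invalid='ignore'):
    bad = softmax_batch_max(x)    # second row underflows to 0/0 -> nan
good = softmax_row_max(x)

print(np.isnan(bad[1]).all())  # True: the second row is destroyed
print(good.sum(axis=1))        # [1. 1.]: every row is a valid distribution
```

With the global maximum, the second row's arguments become -1001 and -1000, both of which underflow to zero; the row-wise maximum keeps every row's largest argument at zero.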

This numerical issue manifested itself in the following way. Initially, the network was training perfectly fine; it reached about the accuracy it should. Then the accuracy started to drop, first slowly and then very quickly, and over the course of a few epochs the entire network blew up, with all weights increasing until everything was saturated. Nothing could stop it. I tried clipping the gradients, limiting the weight norms, etc. The cause was the badly implemented softmax function described above.

Here is another numerical improvement I found. When using ReLU in a multi-layer network, the weights get bigger on average with each layer. With two or three hidden layers this isn't a big problem, but in deeper networks it becomes an issue. Typically this is corrected with some kind of normalization layer. However, I found a simple solution that doesn't require any normalization strategy. Rather than a ReLU, I use a shifted-down ReLU:

max(-1,x)

Instead of being zero for negative values, this function is -1 for anything below -1, and x for anything larger. It is exactly the same function, just shifted down and to the left by 1. Using it as an activation function eliminates the progressively growing weights in deeper layers; no normalization is needed. On top of this, my first tests indicate that this version of ReLU works somewhat better on the MNIST data in combination with dropout regularization. I have no idea why, but it does.
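For concreteness, the shifted-down ReLU and its gradient might look like this in NumPy (the function names and the convention for the gradient at the kink are my own choices):

```python
import numpy as np

def shifted_relu(x):
    """ReLU shifted down and left by 1: -1 for x < -1, x otherwise."""
    return np.maximum(-1.0, x)

def shifted_relu_gradient(x):
    """Derivative: 0 on the flat part, 1 elsewhere (0 chosen at x = -1)."""
    return (x > -1.0).astype(float)

x = np.array([-3.0, -1.0, 0.0, 2.0])
print(shifted_relu(x))           # [-1. -1.  0.  2.]
print(shifted_relu_gradient(x))  # [0. 0. 1. 1.]
```

Swapping this in for a plain ReLU in a from-scratch network only requires changing the activation function and its gradient in the forward and backward passes.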

@wasshuber, this idea sounds brilliant. Did you borrow it from somewhere else, or did you come up with it yourself? It seems to me that some of your ideas deserve further investigation by people with an academic bent. I love your experimental approach!

Do you have a hypothesis for why exactly a shifted-down ReLU would partially eliminate the need for normalization? Might that alternatively be achieved by picking different initialization values for the weights?

I discovered this myself by experimenting with all kinds of activation functions. It was easy to change the code from sigmoid to other activation functions, and I was curious what would change if I used different ones. I tried some really weird ones, too.

This is why I chose your path of coding it myself: it is much easier to change the things I want to change. With a library, one is in a straitjacket and can only change what the library allows.

What made me analyze it more carefully was the fact that this shifted ReLU learned better in combination with dropout. So I tried to see why, and noticed that the magnitude of the weights stayed about the same from layer to layer, whereas with ReLU they keep growing. I don't have a good explanation for why this is better, except that if there is a sort of additional bias the weights have to learn (their magnitude increasing with deeper layers), then this takes longer in the learning process than if they do not have to learn this bias.

Then again, this is such a simple modification that I would be surprised if nobody has tried it before and noted the improvement. Searching online, I do see shifted ReLUs mentioned in lists of activation functions, but I have not found anything about the improvement to learning they achieve or how this may be connected to the weight magnitudes staying the same. We should also not forget that I only applied this to the MNIST data set; I don't know if my observations hold in general.