Last week, untapt hosted the inaugural session of a study group bent on mastering Deep Learning.
It proved far more popular than we anticipated: we first had to split the session across two evenings, and later to cap the group size.
Thank you to Ed Donner, untapt’s neural network-obsessed CEO (pictured above), who offered up our office space to the group, and who kindly provided nourishment and refreshments for hungry minds.
In addition, many thanks to the fifty-five Deep Learners who attended and contributed on both the Wednesday and Thursday evenings. I learned a lot from the in-depth discussions around the textbook exercises we worked through, and I’m excited for our next session.
As detailed in our GitHub repository, we covered the theory of:
- sigmoid neurons
- the general structure and terminology of neural networks
- (stochastic) gradient descent
- converting a network’s ten-output one-hot digit representation into four-bit binary (sketched in the code below)
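
To make the first and last of those concrete, here is a minimal numpy sketch of a single sigmoid neuron, one stochastic-gradient-descent step on it, and a linear readout for the one-hot-to-binary exercise. All inputs, weights, and the learning rate are hypothetical values chosen for illustration, and Nielsen’s exercise technically asks for sigmoid weights rather than a linear readout, but the idea is the same:

```python
import numpy as np

def sigmoid(z):
    """Squash a weighted input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# A single sigmoid neuron with hypothetical inputs, weights, and bias.
x = np.array([0.5, 0.8, 0.1])
w = np.array([0.4, -0.2, 0.7])
b = -0.1
a = sigmoid(np.dot(w, x) + b)

# One stochastic-gradient-descent step on the quadratic cost
# C = (a - y)^2 / 2 for a single (hypothetical) training pair.
y = 1.0
delta = (a - y) * a * (1 - a)  # dC/dz via the chain rule
eta = 0.5                      # learning rate
w -= eta * delta * x
b -= eta * delta

# The one-hot-to-binary exercise: a fixed layer whose column d holds
# the bits of digit d converts a ten-output one-hot representation
# into four bits (least-significant bit first).
bits = np.array([[(d >> i) & 1 for d in range(10)] for i in range(4)])
one_hot = np.zeros(10)
one_hot[6] = 1.0               # the network fired for the digit 6
print(bits @ one_hot)          # [0. 1. 1. 0.], i.e. the digit 6
```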
Using Yann LeCun’s classic MNIST data set, we also worked through straightforward tutorials to classify digits with varying degrees of accuracy:
- networks with only an input and output layer (up to 83% accuracy)
- softmax regression in TensorFlow (92% accuracy; see the sketch after this list)
- a multilayer (“deep”) convolutional network (99% accuracy)
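
The tutorials themselves are in the repository; as a self-contained illustration of the softmax-regression idea behind the 92% result, here is a plain-numpy sketch. The synthetic data, learning rate, and step count are stand-ins so the snippet runs without downloading MNIST; only the shapes (784 pixels, 10 classes) match the real task.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((256, 784))              # stand-in "images"
labels = rng.integers(0, 10, size=256)  # stand-in digit labels
Y = np.eye(10)[labels]                  # one-hot targets

W = np.zeros((784, 10))                 # one weight per pixel per class
b = np.zeros(10)

def softmax(z):
    """Turn a row of scores into class probabilities."""
    z = z - z.max(axis=1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

eta = 0.5
for step in range(100):
    probs = softmax(X @ W + b)          # predicted class probabilities
    grad = (probs - Y) / len(X)         # cross-entropy gradient w.r.t. scores
    W -= eta * (X.T @ grad)             # gradient-descent updates
    b -= eta * grad.sum(axis=0)

accuracy = (probs.argmax(axis=1) == labels).mean()
print(f"training accuracy on synthetic data: {accuracy:.2f}")
```

Swapping in mini-batches of real MNIST images turns this into the tutorial’s model; the extra layers of the convolutional network are what push accuracy toward 99%.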
From a theoretical perspective, we aim eventually to complete Nielsen’s neural-network textbook and Goodfellow et al.’s forthcoming Deep Learning tome, filling in the requisite mathematical, statistical, and computer-science foundations along the way. In parallel, we’ll combine our broad mix of technical backgrounds to develop novel deep networks.
At present, we’d like to keep the study group intimate and conversational. We are, however, exploring alternative formats that could accommodate larger numbers in future. Please email me (firstname.lastname@example.org) if you’re interested in the latter.