As detailed in our study group’s GitHub repository, the theoretical focus of this session was backpropagation, the principal algorithm for computing the error gradients used to train artificial neural networks. In particular, econometrician Katya Vasilaky led an illuminating discussion, rich in real-world examples, of its:
- four fundamental equations (reproduced after this list)
- intuitive purpose
- relationship with the cost function and gradient descent, including key assumptions (sketched after this list)
- role in calculating layer-by-layer error within a feedforward network
- various algorithmic implementations (a minimal NumPy sketch follows this list)
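
For reference, here are the four equations, assuming the discussion followed the standard presentation (the phrase "four fundamental equations" echoes Michael Nielsen's *Neural Networks and Deep Learning*). Here $\delta^l$ is the error at layer $l$, $z^l$ its weighted inputs, $a^l$ its activations, $C$ the cost, and $\odot$ the elementwise product:

```latex
\begin{align*}
\delta^L &= \nabla_a C \odot \sigma'(z^L)
  && \text{(BP1: error in the output layer } L\text{)} \\
\delta^l &= \big((w^{l+1})^\top \delta^{l+1}\big) \odot \sigma'(z^l)
  && \text{(BP2: error pushed back one layer)} \\
\frac{\partial C}{\partial b^l_j} &= \delta^l_j
  && \text{(BP3: gradient with respect to the biases)} \\
\frac{\partial C}{\partial w^l_{jk}} &= a^{l-1}_k \, \delta^l_j
  && \text{(BP4: gradient with respect to the weights)}
\end{align*}
```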
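
On the cost-function side, the standard derivation rests on two assumptions: that the total cost is an average of per-example costs, $C = \frac{1}{n} \sum_x C_x$, and that each $C_x$ can be written as a function of the network's output activations. Granting those, backpropagation supplies exactly the partial derivatives that a mini-batch gradient descent step consumes (here $\eta$ is the learning rate and $m$ the mini-batch size):

```latex
w^l \;\to\; w^l - \frac{\eta}{m} \sum_x \delta^{x,l} \, (a^{x,\,l-1})^\top,
\qquad
b^l \;\to\; b^l - \frac{\eta}{m} \sum_x \delta^{x,l}
```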
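
To make the layer-by-layer error calculation concrete, here is a minimal NumPy sketch of one forward and backward pass through a fully connected sigmoid network with a quadratic cost. It is one implementation among the several we discussed, and every name in it is illustrative rather than drawn from the session's code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

def backprop(x, y, weights, biases):
    """One forward/backward pass; returns per-layer gradients.

    weights[l] has shape (n_out, n_in); x and y are column vectors.
    The quadratic cost makes the output error (a - y) * sigma'(z).
    """
    # Forward pass: record each layer's weighted input z and activation a.
    activation, activations, zs = x, [x], []
    for w, b in zip(weights, biases):
        z = w @ activation + b
        zs.append(z)
        activation = sigmoid(z)
        activations.append(activation)

    grad_w = [np.zeros_like(w) for w in weights]
    grad_b = [np.zeros_like(b) for b in biases]

    # BP1: error in the output layer.
    delta = (activations[-1] - y) * sigmoid_prime(zs[-1])
    grad_b[-1] = delta                      # BP3
    grad_w[-1] = delta @ activations[-2].T  # BP4

    # BP2: push the error back one layer at a time.
    for l in range(2, len(weights) + 1):
        delta = (weights[-l + 1].T @ delta) * sigmoid_prime(zs[-l])
        grad_b[-l] = delta                          # BP3
        grad_w[-l] = delta @ activations[-l - 1].T  # BP4
    return grad_w, grad_b
```

The backward loop is the code-level counterpart of BP2: each iteration converts the error at layer $l+1$ into the error at layer $l$, and BP3/BP4 then read the bias and weight gradients directly off that error.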
In addition, we reviewed our initial efforts at developing familiarity with Keras, the high-level deep learning library, including experimenting with some of its simpler example networks for classifying MNIST digits and Reuters newswires.
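
The notebooks themselves live in the repository; for a flavor of what these simpler networks look like, here is a rough Keras MNIST classifier in the spirit of the library's own MLP example (layer sizes and hyperparameters here are illustrative, not the exact ones we ran):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Load the MNIST digits, flatten to 784-pixel vectors, scale to [0, 1].
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype("float32") / 255.0
x_test = x_test.reshape(10000, 784).astype("float32") / 255.0

# A small fully connected classifier over the ten digit classes.
model = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(512, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(10, activation="softmax"),
])

# Keras runs backpropagation under the hood; the optimizer applies
# the resulting gradients with a variant of gradient descent.
model.compile(optimizer="rmsprop",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, batch_size=128, epochs=5,
          validation_data=(x_test, y_test))
```

The Reuters newswire example follows the same pattern, swapping in `keras.datasets.reuters` and a 46-way softmax output.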