This is a great write-up by Colin Raffel, who was a Brain Resident and is now a Google Brain Scientist. Raffel's description of his technical work is certainly interesting, but more fascinating is how Google Brain works as a research institute.
That includes the algorithms Actor Critic using Kronecker-factored Trust Region (ACKTR) and Asynchronous Advantage Actor Critic (A3C).
This is a great note on the Ising model, as well as on how physics and ML are related in general. I think it is an interesting read for anyone who is studying probabilistic graphical models.
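If you have not met it before, the Ising model is a grid of ±1 spins whose joint distribution is a Markov random field, which is why it shows up in the graphical-models literature. A minimal Gibbs-sampling sketch (our own illustration, not code from the note) looks like this:

```python
import math
import random

def gibbs_sweep(spins, beta, rng):
    """One Gibbs-sampling sweep over an n x n Ising lattice of +1/-1 spins."""
    n = len(spins)
    for i in range(n):
        for j in range(n):
            # Sum of the four nearest neighbours (periodic boundaries).
            nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
                  + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
            # Conditional probability that spin (i, j) is +1 given its neighbours.
            p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * nb))
            spins[i][j] = 1 if rng.random() < p_up else -1
    return spins

rng = random.Random(0)
spins = [[rng.choice([-1, 1]) for _ in range(16)] for _ in range(16)]
for _ in range(50):
    spins = gibbs_sweep(spins, beta=1.0, rng=rng)
magnetisation = sum(map(sum, spins)) / 256.0
print(magnetisation)  # tends toward +/-1 at low temperature (large beta)
```

Each spin is resampled from its exact conditional given its neighbours, which is what makes this a Gibbs sampler rather than a Metropolis one.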
Here is a point-by-point comparison between PyTorch and TensorFlow. It confirms our feeling that while TensorFlow is still the most prominent toolkit, PyTorch is gaining ground, and its debugging capabilities are earning more love from researchers.
Backpropagation is a deep concept. Thousands of tutorials will teach you to "understand ANN" by using a very small network and doing the differentiation explicitly. To Dr. Karpathy, a much more sophisticated way of thinking is to see the data as flowing backward through the computational graph. The theory of backpropagation is fascinating, and as Tim Vieira said, it is not simply about the chain rule.
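To make the "flowing backward" picture concrete, here is a toy reverse-mode sketch (our own illustration, not Karpathy's code): each node remembers its parents along with the local derivative on each edge, and `backward()` pushes upstream gradients back through the graph via the chain rule.

```python
# Minimal scalar reverse-mode autodiff: a Node stores its value plus
# (parent, local_gradient) edges; backward() propagates gradients backward.
class Node:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # tuples of (parent_node, local_gradient)
        self.grad = 0.0

    def __add__(self, other):
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

    def backward(self, upstream=1.0):
        # Accumulate, because a node may feed several downstream ops.
        self.grad += upstream
        for parent, local in self.parents:
            parent.backward(upstream * local)

x, y = Node(2.0), Node(3.0)
z = x * y + x          # z = x*y + x
z.backward()
print(x.grad, y.grad)  # dz/dx = y + 1 = 4.0, dz/dy = x = 2.0
```

Note how `x.grad` accumulates contributions from both paths (`x*y` and the `+ x` term); that accumulation over paths is exactly what the chain rule looks like on a graph.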
In this post, the OP describes the significance of using backprop to do auto-differentiation, which is significantly different from what we learn in calculus, i.e. symbolic differentiation. He also poses differentiation as a Lagrangian problem. The blog describes several important results, such as the fact that the gradient of a function can provably be computed as fast as the function itself. We found it very important to grok, especially if you want to learn the details of modern deep learning frameworks.
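A one-formula sketch of that Lagrangian view (our summary of the standard construction, not a quote from the post): treat the forward pass as equality constraints, one per intermediate variable, and let multipliers enforce them.

```latex
% Forward pass as constraints: intermediate z_i = g_i(z_{\pi(i)}) over
% parents \pi(i); the loss is f(z_N).
\mathcal{L}(z, \lambda) = f(z_N) - \sum_i \lambda_i \bigl( z_i - g_i(z_{\pi(i)}) \bigr)
% Stationarity in z_N gives \lambda_N = \partial f / \partial z_N, and
% stationarity in each z_i gives the backprop recursion:
\lambda_i = \sum_{j \,:\, i \in \pi(j)} \lambda_j \, \frac{\partial g_j}{\partial z_i}
% so the multipliers \lambda_i are exactly the adjoints that backprop computes.
```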
An interesting blog post which discusses being a data scientist without having formal credentials. I bet it resonates with many of our readers.