For his role as the leader of Nvidia and its impact on artificial intelligence and deep learning. Also see fortune.com's write-up.
Andrej Karpathy wrote a new piece on why neural networks are actually the new software. Some question whether we have really reached the point where DNNs can simply replace programming. That's a valid question, yet if you read the article closely, Karpathy is really arguing for using neural networks to build ML components such as ASR, CV, and translation, which traditionally would require a huge amount of programming effort but can now be built with significantly less of it thanks to deep learning.
We think that Karpathy is playing the role of futurist here, much like in some of his past articles, e.g. Short Story on AI, in which he speculates about what AI will look like when we scale up supervised learning.
This is a new article by Prof. Rachel Thomas discussing how to create a good validation set. She delves deeper than the usual "train-validation-test" type of discussion and asks when a random validation set might not work. We found it a thought-provoking piece.
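One concrete case in that vein: when the data is time-ordered, a random validation set lets the model peek at the future, so you should split by time instead. A minimal sketch in plain Python (the function name and parameters are ours, not from the article):

```python
def time_based_split(records, valid_frac=0.2):
    """Split time-ordered records so validation comes strictly after training.

    records: a list already sorted oldest-to-newest.
    valid_frac: fraction of the most recent records held out for validation.
    A random split here would leak future information into training.
    """
    cut = int(len(records) * (1 - valid_frac))
    return records[:cut], records[cut:]
```

With ten chronologically ordered records and the default `valid_frac`, the last two records form the validation set.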
This is a project from Stanford which shows that pneumonia detection can be done by deep learning at the level of radiologists.
The model is trained on the recently released ChestX-ray14 dataset, which has 14 types of diseases annotated for each of ~112k images. The architecture is a 121-layer DenseNet. The authors show that CheXNet exceeds the performance of human radiologists in both specificity and sensitivity. The original paper can be found here.
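As a refresher on the two metrics in that claim: sensitivity is the true-positive rate (fraction of diseased cases flagged) and specificity is the true-negative rate (fraction of healthy cases cleared). A minimal sketch in plain Python, not code from the paper:

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute (sensitivity, specificity) for binary labels.

    y_true, y_pred: sequences of 0/1 labels (1 = disease present).
    sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return sens, spec
```

Exceeding radiologists on both metrics at once is the strong version of the claim, since the two normally trade off against each other as the decision threshold moves.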