After AlphaGo retired, there was a void in the discussion of what AlphaGo means to the development of machine learning. I think Dr. Karpathy's article is the first analysis piece that looks at the research merit of AlphaGo and, more importantly, asks how much of AlphaGo's research can be applied to the next big problem.
Dr. K is quite blunt:
While AlphaGo does not introduce fundamental breakthroughs in AI algorithmically, and while it is still an example of narrow AI, AlphaGo does symbolize Alphabet’s AI power: in both the quantity/quality of the talent present in the company, the computational resources at their disposal, and the all in focus on AI from the very top.
I (Arthur Chan) have to agree. Before the Summit, I wondered whether there was any real chance AlphaGo would lose. Further development would only advance the narrow field of computer Go, which is why I appreciate Hassabis's decision to retire the program. There are more unsolved problems in ML, such as robotic picking or true understanding of language. Even within games, an RTS such as StarCraft II is a much more interesting problem.
We post several more interesting articles on the technical side. The first, written by Alex Honchar, is about how neural networks can solve differential equations. He starts by explaining how to use a NN to solve an ODE, then moves on to partial differential equations. Honchar also includes TensorFlow code as an example.
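To make the idea concrete, here is a minimal self-contained sketch of the trick such articles describe (this is not Honchar's TensorFlow code, and the network size, optimizer, and the specific ODE are our own illustrative choices): solve y' = -y with y(0) = 1 on [0, 1] by building a trial solution psi(x) = 1 + x * N(x), which satisfies the initial condition by construction, and training a tiny network N so that the ODE residual psi'(x) + psi(x) vanishes at collocation points.

```python
import math
import random

# Toy sketch, not the article's code: solve y' = -y, y(0) = 1 on [0, 1].
# Trial solution psi(x) = 1 + x * N(x) satisfies psi(0) = 1 for any
# parameters, so training only needs to shrink the ODE residual.

H = 4  # hidden tanh units (illustrative choice)
random.seed(0)
params = [random.uniform(-1, 1) for _ in range(3 * H)]  # [w | b | v]

def net(x, p):
    """One-hidden-layer tanh network N(x)."""
    return sum(p[2 * H + i] * math.tanh(p[i] * x + p[H + i])
               for i in range(H))

def psi(x, p):
    """Trial solution; psi(0) = 1 exactly, by construction."""
    return 1.0 + x * net(x, p)

def loss(p, xs, eps=1e-4):
    """Mean squared ODE residual psi' + psi, psi' by central difference."""
    total = 0.0
    for x in xs:
        dpsi = (psi(x + eps, p) - psi(x - eps, p)) / (2 * eps)
        total += (dpsi + psi(x, p)) ** 2
    return total / len(xs)

xs = [i / 8 for i in range(9)]  # collocation points on [0, 1]

# Crude finite-difference gradient descent -- a real implementation
# would use automatic differentiation, as the TensorFlow version does.
lr, h = 0.02, 1e-5
loss0 = loss(params, xs)
for _ in range(2000):
    base = loss(params, xs)
    grad = []
    for j in range(len(params)):
        params[j] += h
        grad.append((loss(params, xs) - base) / h)
        params[j] -= h
    params = [p - lr * g for p, g in zip(params, grad)]

print(loss0, loss(params, xs))           # residual loss should drop
print(psi(1.0, params), math.exp(-1.0))  # compare with exact e^{-x}
```

The same recipe extends to PDEs: the trial function is shaped to absorb the boundary conditions, and the loss becomes the squared PDE residual over a grid of collocation points.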
This is a great article on variational Bayes (VB), which is widely used in both deep learning and non-deep-learning problems. The article starts from the tractable cases of the VB method, such as binomial and Gaussian models, and then explains why the evidence lower bound (ELBO) is so commonly used in the literature. It's helpful if you want to understand the basic principles of VB.
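The defining property of the ELBO can be checked by hand in a fully tractable case. Here is a small sketch (our own toy model, not taken from the article) with a two-value latent variable z, where the evidence p(x) can be computed exactly: the ELBO, E_q[log p(x, z) - log q(z)], is at most log p(x), with equality exactly when q is the true posterior p(z | x).

```python
import math

# Toy model with a binary latent z (illustrative numbers, not from the
# article).  Because z takes only two values, the evidence p(x) is
# tractable and we can verify the ELBO's defining property directly.

p_z = [0.5, 0.5]          # prior over z
p_x_given_z = [0.9, 0.2]  # likelihood of the observed x under each z

p_xz = [p_z[z] * p_x_given_z[z] for z in (0, 1)]  # joint p(x, z)
evidence = sum(p_xz)                              # p(x) = 0.55 here
posterior = [p / evidence for p in p_xz]          # exact p(z | x)

def elbo(q):
    """E_q[log p(x, z)] - E_q[log q(z)] for a distribution q over z."""
    return sum(q[z] * (math.log(p_xz[z]) - math.log(q[z]))
               for z in (0, 1) if q[z] > 0)

print(math.log(evidence))   # log evidence
print(elbo(posterior))      # equals log evidence: q is the posterior
print(elbo([0.5, 0.5]))     # strictly smaller: KL(q || posterior) > 0
```

The gap between log p(x) and the ELBO is exactly KL(q || p(z|x)), which is why maximizing the ELBO over q is equivalent to finding the best tractable approximation to the intractable posterior.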
I have been following Prof. Eugenio Culurciello on Google+ and Twitter for quite some time, mainly because his skill set is rare even in our exotic world of deep learning. For 10 years he researched implementations of neuromorphic chips, and he also started a deep learning class, so you can say he wields knowledge of both VLSI design and deep learning. In this piece, he analyzes the advantages of spiking neural networks versus deep-learning-based approaches: for example, why a spiking neural system could reduce power consumption, and how its computation differs all the way down to the level of individual transistors. It's a good piece if you want to read beyond the IEEE piece.
This is an interesting IEEE piece, interviewing technologists and visionaries on where we stand on AGI. The interviewees fall into essentially two camps. One group is more conventionally in the business of predicting the future, such as Robin Hanson and Ray Kurzweil. The other consists of working technologists such as Jürgen Schmidhuber. We believe the latter group's thinking is more concrete and perhaps more indicative of the future.