Apple has started a new ML blog! This is surprising because Apple is known for its defense-level secrecy. We couldn't help but feel, "Ah, Apple really wants to jump on the A.I. bandwagon."
But hints of secrecy are still there. For example, whereas Google is very open about core technologies such as speech recognition and machine translation, Apple chose research subjects such as GAN-based synthetic images. That way, nothing relevant to its core IP is disclosed.
Looking at the technical content, it is mostly about GANs and how to use them well. It would be no surprise if Apple has its own captain of deep learning, someone like Facebook's Soumith Chintala. But who is he/she? No one knows! Once again, Apple's secrecy is at work - all the posts are written by "Apple engineers". That way, Apple leaves its competitors no way to poach any of its very competent employees.
All in all, we learn that Apple is taking steps in A.I. and deep learning, but we couldn't help but wonder: will Apple's culture of secrecy work well with the more open culture of today's AI/DL community? Would this clash impede AI development within Apple?
The ImageNet 2017 results came out last week. The classification error went from 2.94% last year to 2.25% this year. Talking about the stars of the last two ImageNet competitions, we have to talk about teams from China: the winner this year is a team from Beijing's Momenta.ai together with a postdoctoral researcher from the University of Oxford. While we no longer see the dramatic improvements of the first few ImageNets, the improvement is still fairly impressive (*).
At first we thought it was just another ensemble-type work. Yet the Momenta.ai+Oxford team did design a new learning block called "squeeze-and-excite", and they also improved the GPU memory pipeline. The team promises to release a report later on.
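Since the team's report is not yet out, here is only a rough NumPy sketch of the squeeze-and-excite idea as it has been described: average-pool each channel down to a single descriptor, pass it through a small two-layer bottleneck, and use the resulting sigmoid gates to rescale the channels. The weight shapes and reduction ratio below are our own assumptions, not the winning entry's actual configuration.

```python
import numpy as np

def squeeze_and_excite(x, w1, w2):
    """Recalibrate the channels of a feature map x with shape (C, H, W).

    Squeeze: global average pooling gives one descriptor per channel.
    Excite: a two-layer bottleneck produces per-channel gates in (0, 1),
    which rescale the original channels.
    """
    z = x.mean(axis=(1, 2))                  # squeeze: (C,)
    s = np.maximum(w1 @ z, 0.0)              # bottleneck + ReLU: (C//r,)
    gates = 1.0 / (1.0 + np.exp(-(w2 @ s)))  # sigmoid gates: (C,)
    return x * gates[:, None, None]          # rescale each channel

# Toy usage: 8 channels, assumed reduction ratio r = 4.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 5, 5))
w1 = rng.standard_normal((2, 8)) * 0.1  # squeeze 8 channels down to 2
w2 = rng.standard_normal((8, 2)) * 0.1  # expand back to 8 gates
y = squeeze_and_excite(x, w1, w2)
```

The appeal of the design is that the extra cost is tiny (two small matrix-vector products per block) while letting the network modulate channels based on global context.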
It's unfortunate that this is the last ImageNet, and as we know, the major players haven't participated since last year. But we are always grateful for the pioneering effort of collecting the database, as it has led to many breakthroughs in deep learning and computer vision.
Footnote: Normally, we measure relative improvement when working on a task. So with a 0.69% absolute improvement starting from 2.94%, you can think of the improvement as roughly 23% relative.
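To make the footnote's arithmetic concrete, here is the calculation in plain Python, using the 2.94% and 2.25% error rates quoted above:

```python
# Relative improvement of the 2017 ImageNet classification error over
# 2016: the absolute drop divided by the 2016 baseline.
prev_err, new_err = 2.94, 2.25
absolute = prev_err - new_err         # about 0.69 percentage points
relative = absolute / prev_err * 100  # relative reduction, in percent
print(round(absolute, 2), round(relative, 1))
```

The relative number is the more meaningful one: once error rates are this low, each remaining mistake is much harder to eliminate.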
For a long time, the SDCs we knew were at L2 autonomy, the so-called "hands-off" level: the driver can take their hands off the wheel but must keep monitoring and engage in driving when necessary. As we know, Google was the champion at L2 - it requires engagement only about once every 5000 miles. (Uber? Every mile.) But L2 was known to be the status quo.
Then Audi came in and claimed that it has L3 on its new A8, the "eyes-off" level, which allows drivers to take their eyes off the road and requires them to attend to driving only in limited situations. More surprising here is that Audi is a large automaker with a reputation for strong quality control.
One thing is for sure: Germany's auto companies were among the first to research SDCs. Of course, the field became much more popular as luminaries such as Prof. Sebastian Thrun competed in the DARPA Grand Challenge and eventually brought SDC technology to many research institutions in the States. So we shouldn't be too surprised that Audi has also researched SDCs, and apparently the result is now for sale.
The A8 costs 98k euros, so it's certainly for the die-hard fans of SDC; it will first go on sale in selected areas of Europe.
Here is an interview from The Verge with DeepMind founder Demis Hassabis on his views about the current separation of two fields: artificial intelligence and neuroscience. Indeed, while neuroscience classes these days still concern how computation can emerge from human biology, AI nowadays usually focuses on practical applications. Should it be that way? We share Hassabis' view - it shouldn't. There are many directions of AI research that can be inspired by neuroscience, and modern AI techniques that process large sets of data can also shed light on unsolved problems in neuroscience. We can see much of this fusion in both our main FB group AIDL and our satellite group Computational Neuroscience and Neurobiology.
Hassabis's paper was published in Neuron, a prestigious Cell Press journal. Here is the link: http://linkinghub.elsevier.com/retrieve/pii/S0896627317305093