The Moat of Nvidia - Thoughts From Your Humble Curators

There are many tech conferences each year, but none impressed us as much as GTC 2017. We curated four pieces about the conference, but in this editorial, we'd like to explain Nvidia's incredible moat, and why we think it is getting stronger.

First, by "moat", we mean competitive advantage. So what is Nvidia's moat? Some of you might quickly point to its hardware platforms, such as the GTX, Quadro, and Tesla (Pascal or Volta) series of GPU cards, and its software platform, CUDA. Beyond the obvious IP and chip-design moat, there is also powerful software lock-in. Indeed, as developers, we compile code with CUDA daily. CUDA is an easy-to-learn extension of C that produces results quickly. The rich surrounding software support makes it easy to get up and running, and it creates high switching costs once enough effort has been invested on top of it.
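To give a flavor of why CUDA is easy to pick up, here is a minimal sketch of a CUDA vector-add program. It is our illustrative example, not code from Nvidia; the kernel and variable names are our own. Anyone who knows C can read it: the `__global__` qualifier and the `<<<blocks, threads>>>` launch syntax are essentially the only additions.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A kernel: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory keeps host/device plumbing to a minimum.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

That low barrier to entry is exactly the lock-in: once a team has tuned kernels like this against Nvidia's toolchain and libraries, moving off the platform is expensive.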

But increasingly, Nvidia is branching out into new areas of computing, creating new moats. It just tripled its data center business on a year-over-year basis, helped by the fact that it owns both the hardware and the software platforms. And deep learning is not going anywhere soon.

Now, this moat was further strengthened at GTC 2017. Why? First, Nvidia announced that it will train 100,000 developers this year alone, creating more potential customers steeped in its wares. This is a smart move: behaviors are hard to change. Second, it announced a new cloud platform initiative (curated under "Nvidia GPU Cloud"), which makes it easier for newcomers to start building on Nvidia's platform. It remains to be seen how the competitive dynamics will play out with the other large cloud platforms, Google, Amazon, and Microsoft, which are also Nvidia's customers. Nvidia might see its own platform more as an educational vehicle than as a major long-term revenue contributor like AWS.

Currently, Nvidia faces two sets of potential competitors. One is AMD, which is still struggling to come up with a GPU that can compete. The other is the ASIC platforms, but most of these are still under development (Intel's Nervana) or proprietary (Google's TPU). So Nvidia has a virtual monopoly on the deep learning computing platform.

In this issue, we further analyze Nvidia's training plan, the new V100, new partners on Drive PX, and its cloud move. We also cover Medical ImageNet and other news.

As always, if you like our letter, please subscribe and forward it to your colleagues!

Edit (2017-05-14): Peter Morgan was kind enough to correct us: both Nervana and the TPU are based on ASICs, rather than FPGAs. We have corrected the web version.

Artificial Intelligence and Deep Learning Weekly


Blog Posts

Open Source


About Us

This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook's most active A.I. group, with 19,000+ members, and host a weekly "office hour" on YouTube.

