“We’ll soon see the power of computing increase way more than any other period. There will be trillions of products with tiny neural networks inside.”

Intel long ago ceded leadership of Moore’s Law, and so, understandably, it has trumpeted the end of Moore’s Law for years. To me, this sounds a lot like Larry Ellison’s op-ed declaring the end of innovation in enterprise software just before cloud computing and SaaS took off. In both cases, the giants missed the organic innovation bubbling up all around them.

For the past seven years, it has not been Intel but NVIDIA that has pushed the frontier of Moore’s processor performance/price curve. For a 2016 data point, consider the NVIDIA Titan X. It delivers roughly 10^13 FLOPS per $1,000 (11 trillion calculations per second at a $1,200 list price) and is the workhorse of deep learning and scientific supercomputing today. NVIDIA is already sampling much more powerful systems that should ship soon. The fine-grained parallel architecture of a GPU maps to the needs of deep learning better than a CPU does. There is a poetic beauty in the computational similarity between a processor optimized for graphics and the computational needs of a sensory cortex, as commonly modeled in neural networks today.
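As a quick sanity check on that price/performance figure (using only the numbers quoted above; the variable names are just illustrative):

```python
# Sanity-check the Titan X price/performance figure quoted above.
# Numbers are from the text: ~11 TFLOPS at a $1,200 list price.
flops = 11e12        # ~11 trillion single-precision operations per second
price_usd = 1200     # list price in dollars

flops_per_1k_usd = flops / (price_usd / 1000)
print(f"{flops_per_1k_usd:.2e} FLOPS per $1,000")  # ≈ 9.2e12, i.e. ~10^13
```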

Here are some of Huang’s provocative prognostications from WSJ.D Live (and some photos I took of him):

It turns out we created an AI company. We power deep learning algorithms in the car. To drive autonomously, the car needs to do perception, reasoning, planning, and learning. These are a big part of AI.

Deep learning has taken perception to a superhuman level. The car has eyes all around it. It is never intoxicated or angry. It can look around corners and see things you can’t see, helping you even when you are the one driving.

To drive autonomously, we have to predict where everything will be in the near future. We do path planning and detect objects. But we need to invert the logic. When humans drive, we don’t keep checking “there is no tree in the way; there is no boat in the way,” and so on. We need to train on what is safe, not on all the things to avoid.

Elon is right. The AI gets better and better over use. It needs road miles.

We want to turn the car into an AI itself. I want to talk to it and have it respond with a sultry voice. You can ask it to make a call for you from your calendar.

The autonomous car improves its driving over time, whereas the human capability decreases over time as we age.

The AI should get a driver’s license.

Computing power will increase at the product of Moore’s Law and Metcalfe’s Law. We’ll soon see the power of computing increase more than in any other period.
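As a toy illustration of that compounding claim (my sketch, not Huang’s math; the growth rates and device counts below are assumptions): if per-device performance doubles on a Moore’s Law cadence while the value of the network grows with the square of connected devices per Metcalfe’s Law, the combined curve outpaces either law alone.

```python
# Toy model: Moore's Law (per-device performance doubling every 2 years)
# compounded with Metcalfe's Law (network value ~ n^2 for n devices).
# Purely illustrative; the rates here are assumptions, not data.

def moore(years, doubling_period=2.0):
    """Per-device performance multiplier after `years`."""
    return 2 ** (years / doubling_period)

def metcalfe(n_devices):
    """Network value scales with the square of connected devices."""
    return n_devices ** 2

for years in (0, 4, 8):
    n = 1000 * 2 ** (years / 2)   # assume device count also doubles every 2 years
    combined = moore(years) * metcalfe(n)
    print(f"year {years}: combined growth factor {combined:.3g}")
```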

AI takes a different process, a new software approach, new tools, a new computational architecture. Microsoft just announced the Cognitive Toolkit (CNTK) yesterday.

To enable AI, we will shift deep learning from CPUs to a new computing architecture with GPUs.

We will embed inference engines into everything: little robots, sensors, into the factory, the machines building the machines. There will be trillions of products with tiny neural networks inside.

Every search query that uses Hadoop today will move to deep learning. Every query will invoke a billion calculations when we add AI to our apps.

2 responses to “Jen-Hsun Huang, CEO of NVIDIA, carrying the torch for Moore’s Law”

  1. I was going to update the Kurzweil Curve (the meaningful version of Moore’s Law) to include the latest data points, and found that he was doing the same thing. And here it is… the 7 most recent data points are all NVIDIA, with CPU architectures dominating the prior 30 years. Here is an update to Moore’s Law, post Tesla AI Day: [image: 122 Years of Moore’s Law + Tesla AI Update]. And here is the prior version, with labels: [image: 115 Years of Moore’s Law, Transcending Silicon].

  2. Excerpts from Huang’s 2016 post on the NVIDIA blog:

    This is truly an extraordinary time. In my three decades in the computer industry, none has held more potential, or been more fun. The era of AI has begun.

    Just as human imagination and intelligence are linked, computer graphics and artificial intelligence come together in our architecture. Two modes of the human brain, two modes of the GPU.

    Our new Pascal GPU is a $2 billion investment and the work of several thousand engineers over three years. It is the first GPU optimized for deep learning. Pascal can train networks that are 65 times larger or faster than the Kepler GPU that Alex Krizhevsky used in his 2013 paper. [Wow. 65x in under 4 years. Moore’s Law would suggest 2^4 = 16x]
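    To put that bracketed comparison in numbers (a back-of-the-envelope check that takes the 65x figure and a roughly four-year window at face value):

```python
import math

# Back-of-the-envelope: how fast would performance have to double
# to reach 65x in roughly 4 years?
speedup = 65
years = 4

doublings = math.log2(speedup)              # ≈ 6.0 doublings
months_per_doubling = years * 12 / doublings
print(f"{doublings:.1f} doublings, one every {months_per_doubling:.1f} months")
# Moore's Law at one doubling per year over 4 years gives only 2**4 = 16x.
```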

    Soon, the tens of billions of internet queries made each day will require AI, which means that each query will require billions more math operations.

    Someday, billions of intelligent devices will take advantage of deep learning to perform seemingly intelligent tasks. [he now says trillions]

    AI Transportation: At $10 trillion, transportation is a massive industry that AI can transform.

    Driving is a learned behavior that we do as second nature. Yet one that is impossible to program a computer to perform. Autonomous driving requires every aspect of AI — perception of the surroundings, reasoning to determine the conditions of the environment, planning the best course of action, and continuously learning to improve our understanding of the vast and diverse world.

    Xavier is 7 billion transistors — more complex than the most advanced server-class CPU. Miraculously, 20 trillion operations per second of deep learning performance — at just 20 watts.

    AI City: There will be 1 billion cameras in the world in 2020.

    AI Factory: There are 2 billion industrial robots worldwide.

    The Next Phase of Every Industry: GPU deep learning is inspiring a new wave of startups — 1,500+ around the world — in healthcare, fintech, automotive, consumer web applications and more.
