I think this is the most important chart in technology business.

(It’s an updated version of Ray Kurzweil’s published work, posted with permission, and updated here through 2016. Further UPDATE here, post Tesla AI Day.)

In this abstraction of Moore’s Law, Kurzweil plots computational power on a logarithmic scale, and finds a double exponential curve that holds over 100 years (a straight line would represent a geometrically compounding curve of progress).

In the modern era of accelerating change in the tech industry, it is hard to find even five-year trends with any predictive value, let alone trends that span the centuries.

Ray argues that through five paradigm shifts – such as electro-mechanical calculators and vacuum tube computers – the computational power that $1000 buys has doubled every two years. For the past 30 years, it has been doubling every year.
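The compounding described above can be sketched numerically. The doubling periods are the ones cited in the text; the baseline of 1.0 is arbitrary, so only the relative gains are meaningful:

```python
# A sketch of compounding price-performance per constant $1,000.
# Doubling periods come from the text above; the baseline is illustrative.

def price_performance(years_elapsed, doubling_period_years, baseline=1.0):
    """Computational power per constant $1,000 after compounding growth."""
    return baseline * 2 ** (years_elapsed / doubling_period_years)

# A century of 2-year doublings is 50 doublings: roughly a 10^15-fold gain.
print(f"{price_performance(100, 2.0):.2e}")  # ~1.13e+15

# Thirty years of 1-year doublings: roughly a 10^9-fold gain.
print(f"{price_performance(30, 1.0):.2e}")   # ~1.07e+09
```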

Each dot is the frontier of computational price performance of the day. One machine was used in the 1890 Census; one cracked the Nazi Enigma cipher in World War II; one predicted Eisenhower’s win in the 1956 Presidential election.

Each dot represents a human drama. They did not realize that they were on a predictive curve. Each dot represents an attempt to build the best computer with the tools of the day. Of course, we use these computers to make better design software and manufacturing control algorithms. And so the progress continues.

Notice that the pace of innovation is exogenous to the economy. The Great Depression, the World Wars, and various recessions do not introduce a meaningful change in the long-term trajectory of Moore’s Law. Certainly, the adoption rates, revenue, profits, and economic fates of the computer companies behind the various dots on the graph may go through wild oscillations, but the long-term trend emerges nevertheless.

Any one technology, such as the CMOS transistor, follows an elongated S-shaped curve of slow progress during initial development, upward progress during a rapid adoption phase, and then slower growth from market saturation over time. But a more generalized capability, such as computation, storage, or bandwidth, tends to follow a pure exponential – bridging across a variety of technologies and their cascade of S-curves.
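That bridging can be sketched numerically: summing successive logistic S-curves, each saturating an order of magnitude above the last, traces a smooth exponential envelope. The spacing, steepness, and ceilings below are invented purely for illustration:

```python
import math

def logistic(t, midpoint, ceiling, steepness=1.0):
    """One technology's S-curve: slow start, rapid rise, then saturation."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

def capability(t, n_paradigms=5, spacing=10.0):
    """Total capability: a cascade of S-curves with growing ceilings."""
    return sum(
        # Each new paradigm saturates an order of magnitude above the last.
        logistic(t, midpoint=spacing * (k + 0.5), ceiling=10.0 ** k)
        for k in range(n_paradigms)
    )

# The envelope climbs roughly tenfold per paradigm, even though each
# individual S-curve flattens out.
for t in range(0, 60, 10):
    print(f"t={t:2d}  capability ~ {capability(t):.3g}")
```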

Moore’s Law is commonly reported as a doubling of transistor density every 18 months. But this is not something the co-founder of Intel, Gordon Moore, ever said. It is a nice blending of his two predictions: in 1965, he predicted an annual doubling of transistor counts in the most cost-effective chip, and in 1975 he revised it to every 24 months. With a little hand waving, most reports attribute 18 months to Moore’s Law, but there is quite a bit of variability. The popular perception of Moore’s Law is that computer chips are compounding in their complexity at near-constant per-unit cost. This is one of the many abstractions of Moore’s Law, and it relates to the compounding of transistor density in two dimensions. Others relate to speed (the signals have less distance to travel) and computational power (speed × density).
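The spread between those quoted periods is larger than it sounds. A quick comparison of decade-scale growth under each doubling period:

```python
# Decade-scale growth under the commonly quoted doubling periods:
# Moore's 1965 figure (12 months), the popular blend (18 months),
# and his 1975 revision (24 months).

def growth_over_decade(doubling_period_months):
    """Multiplicative growth over 120 months of compounding."""
    return 2 ** (120 / doubling_period_months)

for months in (12, 18, 24):
    print(f"{months}-month doubling: {growth_over_decade(months):7.0f}x per decade")
```

The same "Moore's Law" thus implies anywhere from 32x to 1,024x growth per decade, depending on which doubling period you pick.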

Unless you work for a chip company and focus on fab-yield optimization, you do not care about transistor counts. Integrated circuit customers do not buy transistors. Consumers of technology purchase computational speed and data storage density. When recast in these terms, Moore’s Law is no longer a transistor-centric metric, and this abstraction allows for longer-term analysis.

What Moore observed in the belly of the early IC industry was a derivative metric, a refracted signal, from the bigger trend, the trend that begs various philosophical questions and predicts mind-bending futures.

Moore’s Law is a primary driver of disruptive innovation, such as the iPod usurping the Sony Walkman franchise. It drives not only IT and communications; it has become the primary driver in drug discovery and bioinformatics, medical imaging and diagnostics. As Moore’s Law crosses critical thresholds, a formerly lab science of trial-and-error experimentation becomes a simulation science, and the pace of progress accelerates dramatically, creating opportunities for new entrants in new industries.

This non-linear pace of progress has been the primary juggernaut of perpetual market disruption, spawning wave after wave of opportunities for new companies.

I just watched Transcendent Man, so I have Kurzweil on the mind.

40 responses to “Transcending Moore’s Law”

  1. It’s funny, and it always makes me ponder, that this curve, just like the general acceleration curve the singularity argument draws on to make its point, is congruent with the increasing possibilities of recording and storing information.

    I wonder if this congruence is because they are all part, cause and consequence, of the same event… or if this singularity curve might be biased by the very simple fact that before the 1900s the possibilities of recording and storing information, apart from books in libraries, were essentially null, and after the 1900s, when the curve takes on this acceleration, we find ourselves with several tons more information to process to add to the line of "significant" events (those things that make sense semantically to these curves).

    I don’t have a definitive position on this… I haven’t finished thinking the thought through… but I smell some connection between these two elements (the acceleration of technological advancement and the increasing possibilities of recording and storing information), and their relation is, at the least, a two-way road.

    (Of course, I am not talking about the obvious and undeniable natural impact of information recording and storage on technological advancement.)

    For whatever that means! |-)

  2. It would be interesting to see this chart juxtaposed with other data ala Hans Rosling and his Gapminder charts.

  3. I am assuming the $1,000 is adjusted for inflation?

  4. Yes, it is in constant dollars.

    Alieness: interesting, just like the major jump in bio evolution when code (DNA) became the vector of information capture and transmission… ritualized copying vs. ad hoc localized transmission… like the web vs. word of mouth for memes…

    Kurzweil theorizes:
    "• Evolution applies positive feedback in that the more capable methods resulting from one stage of evolutionary progress are used to create the next stage. As a result, the rate of progress of an evolutionary process increases exponentially over time. Over time, the "order" of the information embedded in the evolutionary process (i.e., the measure of how well the information fits a purpose, which in evolution is survival) increases.

    • A correlate of the above observation is that the "returns" of an evolutionary process (e.g., the speed, cost-effectiveness, or overall "power" of a process) increase exponentially over time.

    • In another positive feedback loop, as a particular evolutionary process (e.g., computation) becomes more effective (e.g., cost effective), greater resources are deployed toward the further progress of that process. This results in a second level of exponential growth (i.e., the rate of exponential growth itself grows exponentially).

    • Biological evolution is one such evolutionary process.

    • Technological evolution is another such evolutionary process. Indeed, the emergence of the first technology creating species resulted in the new evolutionary process of technology. Therefore, technological evolution is an outgrowth of–and a continuation of–biological evolution.

    • A specific paradigm (a method or approach to solving a problem, e.g., shrinking transistors on an integrated circuit as an approach to making more powerful computers) provides exponential growth until the method exhausts its potential. When this happens, a paradigm shift (i.e., a fundamental change in the approach) occurs, which enables exponential growth to continue."

  5. Blimey – nice writing. If you ever give up the money printing business you could be a full-time mighty wordsmith!

  6. Great quotes, thank you. They help my thinking a great deal; I hadn’t read them before.

    And I also hadn’t thought about the connection between what I said and the biological breakthrough of DNA encoding… Good one there indeed!!!
    :—————–////

    Food for thought… uhm.

  7. X more years until the first human to successfully expand hir long-term and working memory.

  8. I have found your notes on this chart to be very enlightening and stimulating. I could write a 20 page essay about this subject in response to your comments, but I’ll only mention a couple things.

    Just a couple of days ago I read an essay by Alfred Russel Wallace with the title On the Tendency of Varieties to Depart Indefinitely From the Original Type. It was sent from Ternate, Indonesia, to Charles Darwin in around 1858. Wallace outlined his concept of evolution by natural selection, which generally paralleled Darwin’s (without picking nits as to the differences). The type of evolution described, for animals and plants, was and continues to be linear. This is in opposition to technological evolution, which, as you have noted, is dynamically exponential, and consistently so since its beginning. It would be interesting to be alive 100-200 years from now to witness what has happened.

    During my US Navy career I was assigned to the USS Saratoga, CVA-60, and worked in the IOIC, Integrated Operational Intelligence Center. The years for this duty were 1972-74. We had a UNIVAC computer (see this link for specs on the monster). This computer used magnetic core storage, was very slow, and had limited computing capacity. The purpose of the machine was to create an electronic database of military targets. The database was generated from IBM cards, and by the time we departed from Vietnam, we had over 500,000 of these cards in the paper and electronic database. The original price for these CPUs was $500,000.

    While the computer system was functional, it was far from up to date. For the last 2 years of my military duty I was assigned to a command center that had a WWMCS Honeywell 6000 system. In 1971 the military ordered 35 of these systems for $46,000,000. Their computing ability was miles ahead of the UNIVAC’s. We worked from CRT terminals, and the system had multi-processing capabilities. It was, for its time, the top of the technological evolutionary chain.

    A couple months ago I purchased a Toshiba laptop (for $1,000) that probably has the same computing capacity as the huge Honeywell 6000 with much faster speed.

    I think that these 3 machines are quite clear examples of the exponential growth curve of computing technology.

  9. Great plot, and very interesting discussion. What I find also fascinating is the tendency to compensate for any gain in processing speed by using increasingly inefficient software.
    The above plot mostly applies to scientific computing tasks where code efficiency is important. Equally, in embedded systems the advances in technology are amazing: not just the speed increase, but also the much improved energy efficiency. I just bought the Parallax Propeller microcontroller, which costs $7.99, has 8 CPU cores, and runs at 80 MHz. It is faster than my desktop computer of 12 years ago, can do things like multichannel real-time audio processing, synthesize 1600×1200 VGA signals in software, and yet it only needs about 100 milliwatts.

    However, current desktop PCs with current software don’t feel faster than they did 2, 5, or 10 years ago. Not that long ago it was common wisdom that directly measuring the time of flight of light is difficult, as there are no counters available that are fast enough to count at GHz rates. Now we have laptops that can execute dozens of instructions during the time it takes a ray of light to travel the 50 cm from your monitor to your eyes. Yet the new generation of students thinks that computers are "slow", and the worst thing is, it’s understandable why.

    Why does it take many seconds to start an application or to open a text file? Half a second between clicking a menu and its opening? Why are software developers struggling to synchronize audio and video in a video player to within 0.1 sec, or within 1 millisecond for professional audio, on a computer that should be able to achieve sub-microsecond timing?
    During 1 second, the computer I’m typing on can execute over 20 billion instructions, transfer 6 billion bytes to main memory, and read another 40 million bytes from hard disk. And yet, for some reason, it needs to do all that multiple times to be able to start a text editor (a few MB on disk) and open a text file (a few KB). And that’s the same on any of the three major current desktop operating systems.
    Being a computing professional, I of course know about software development paradigms and the key reasons for inefficient software: modularity, code re-use, object factories, generic code, and code generators that make it easier and quicker to develop new software products. However, the throttle this puts on computer speed seems like a very high price to pay, and I think it’s time to rethink and develop new tools.
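The commenter’s back-of-envelope can be checked directly. The rates below are the round numbers quoted in the comment, not measurements, and the file sizes are assumed interpretations of "a few Mb" and "a few kb":

```python
# Time, at the rates quoted in the comment, just to read a text editor
# binary and a small file from disk -- launching should be nearly instant.

disk_bytes_per_sec = 40e6   # "40 million bytes from hard disk" per second
editor_bytes = 5e6          # "a few Mb" editor binary (assumed 5 MB)
file_bytes = 5e3            # "a few kb" text file (assumed 5 KB)

load_seconds = (editor_bytes + file_bytes) / disk_bytes_per_sec
print(f"Raw disk time: ~{load_seconds * 1000:.0f} ms")  # ~125 ms

# Multi-second launch times therefore spend the vast majority of their
# time in the software stack, not waiting on the hardware.
```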

  10. Another thing that I read about in an article recently is predictability and dependability of computers. This is an important topic in embedded systems (i.e. when computers are controlling jet aircraft, train networks, etc.), however there is the increasingly wide-spread attitude that computers in general are not reliable, crash all the time, and are all in all fairly non-deterministic machines that you can’t trust.

    Ironically, computers and logic circuits are probably the only human invention that has true 100% determinism within their specification, meaning that the average processor can perform at GHz frequencies for decades and never make a single mistake. Any other machine may get 99.9(99..) percent within the specifications, but there is always noise, tolerances, wear and tear, etc.
    All the perceived non-determinism and unpredictable behaviour is in fact caused by the complexity of the software. The tools, languages and compilers to deal with a good part of that complexity exist and have been around for quite a while, but the shift in attitude seems to make people forget about that huge benefit they are throwing away, of truly dependable, predictable software – they seem to forget that it is possible.

    Despite the improvements in technology, the quality of service seems to go down. TVs need time to "boot" now, cars may stop working but will reset when you turn the ignition key once more, phone connections may have dropouts and worse audio quality than some years ago when digital audio was first introduced. YouTube videos look far worse than TV in the ’60s, and the clips now find their way into professional TV productions as well. And it appears to be accepted by the general population. Is it that people forget what they used to get? Do they accept the benefits (choice, quantity, …) over aesthetics and quality? Or do they simply not care?

  11. The software consumer naively assumes that the vendor is motivated to provide bug free, complete software. In truth the system rewards buggy software with incomplete features. The flaws build a support relationship with the consumer where he appreciates the company more than if there were no flaws, and he is motivated to pay for maintenance, support, and the next version. Look at the success of Microsoft Word over WordStar and Word Perfect.

  12. mmh, good point. Sounds a bit like Stockholm syndrome… 🙂

  13. Nice writing. I work in the ‘business’ – appreciate the points
    there-in 🙂

  14. And of course all who are interested in these subjects should also check out singularityu.org

    The inauguration is at NASA on June 29:
    http://www.nasa.gov/centers/ames/news/releases/2009/M09-69.html

  15. very rapid growth in the era of circuits

  16. very interesting graph.

    glad to see some singularitarians in the house here.

  17. I would actually attribute this to an early but profound observation by Gordon Bell (Bell’s Law): en.wikipedia.org/wiki/Bell’s_Law_of_Computer_Classes

    He tied together the economics of computing (what we can afford) and what Bill Buxton calls the Order of Magnitude rule (you get a new product by changing something by a magnitude). Putting them together, he predicted that every doubling (Moore’s Law) leads to a new computing class (or revolution, evolution, etc.).

  18. Jurvetson, how did you get to see Transcendent Man? The fan page on Facebook says it’s not released yet. Dying to see this film since I started tracking it a while back.

  19. Friend of Ray. It’s really good.
    (Hola from Prague.)

  20. Prague, lovely, I’ve got to go next time I am in Europe…
    and Vienna…
    and Bruges…
    and Budapest…

    😀

  21. To everyone: If you haven’t already, read Ray Kurzweil’s "The Singularity is Near". Great book.

  22. Isn’t a relay an electro-mechanical device as well? It’s electrical, and it has mechanical movement…

  23. A relay is definitely an electro-mechanical device. The picture above shows them being developed for computing around the late 1930s. IBM used them in their punched card machines, including (I think) the last key punch machine, the 129, which had integrated circuit boards for the "front-end" stuff but still used relays to control the printing and punching of the cards.

    When I learned to do repair and maintenance on IBM punched card machines in the early 1970s, IBM used a 2-relay circuit system to control almost everything in the machines (the 029, 088, 407, etc.). These machines were used into the 1970s, until the punched card went out of use.

    Relays are still used extensively today. All starters in gas or diesel powered vehicles have a solenoid, or starter relay, that controls the operation of the starter.

  24. Future robots will be as cute as a teddy bear
    @epsos

  25. epSos: they will try…

    Zeno - Robotic Friend Cuddle-bot Daydreaming

    mannyabraham.. yes…

    Symbolic Immortality

  26. i want my iphone to be 100 times more powerful

  27. whoa, steve.. over 86,000 views!! …have never seen a flickr photo with as many!

    on another note, so much we owe to doug engelbart and his early influence:
    query.nytimes.com/gst/fullpage.html?res=9E05EFD91E3EF93BA…

  28. peculiar patterns in popularity…. These got over 300K views.

    Strange Creatures Digging It

  29. In the image I recognize the Hollerith card from ca. 1890 (used by the Nazis to keep track of their victims) but the electromechanical relay threw me. After the integrated circuit there ought to be an image of an artificial Mind.

  30. OK…

    Brainstorm

    …and they are Moore’s Law dependent. My friend uses NVIDIA cell processors to dramatically accelerate cortical column simulations.

    P.S. Kevin Kelly has a nice post on Moore’s Law++

  31. This is an interesting concept. However, I feel that it is bio-exogenous. And the fact is that "we" live in a biosphere. If we think that unending material and technological growth is a "given", then "we" are in trouble. Think about what this way of thinking has done to damage the climate. I like an interesting technological innovation as much as anyone else, but I often think that "soft technologies" might have been a better evolutionary path. (For simplicity’s sake, take the example of fast food and the now-called slow foods. Most sensible people see that slow foods are a better way. Now this simplistic analogy might be applied to other areas, I suggest!) Or maybe it is not such a simplistic idea. The third law of thermodynamics might be a good tool to interpret this evolution.

  32. I do! Interesting…

    Romulo Zanini Hammer

  33. interesting to see how far this has gone MainStream now: http://www.livestream.com/facebookannouncements/video?clipId=pla... #LawOfSharing

  34. It seems that the singularity idea, together with the noosphere, emerged in the works of Pierre Teilhard de Chardin and Vladimir Vernadsky. Thus it is 70-80 years old at least. I wonder if there was anything prior to this.
    Another interesting thing: the graph and the curve itself can be understood better if you look at it in 3D as a spiral; each new mini-paradigm would then be parallel to the old one, on another turn of the spiral.
    Also, Kurzweil thinks that the singularity will arrive built on the three pillars of existing technologies, when humans and machines (artificial intelligence) merge… but all this together will probably mean creating the right background for something we cannot even imagine now, and thus the word "machine" could be outdated in this regard… it will be the end of technological progress as we understand it today and the beginning of the singularity itself… the leap to infinity in the full flourishing bloom of the noosphere. Human beings have the potential to be infinite.

  35. P.S. The baton is now in NVIDIA’s hands. Consider the GTX Titan X. 11 TFLOPS for $1,200. That would be on the order of 10^13 for 2016, perfectly on the line.
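That placement is simple arithmetic, normalizing the quoted GPU to the chart’s metric of operations per second per constant $1,000 (figures are the ones quoted above):

```python
# The quoted figures: 11 TFLOPS at $1,200, normalized to the chart's
# constant-$1,000 metric.

flops = 11e12        # 11 TFLOPS
price = 1200         # dollars

ops_per_sec_per_1000 = flops * (1000 / price)
print(f"~{ops_per_sec_per_1000:.1e} ops/sec per $1,000")  # ~9.2e+12
```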

  36. I was updating the Kurzweil Curve (the meaningful version of Moore’s Law) to include the latest data points, and found that he was doing the same thing. Here is the preliminary version. The 7 most recent data points are all NVIDIA, with CPU architectures dominating the prior 30 years.
