Bill Gates kicked off his TED session with one of his favorite speakers, David Christian, who takes a long-term perspective on big history: on life, the universe, and everything.

Clearly the gestures of communication are infectious… (from opening comments, individually above, to closing together onstage below)

David’s talk just went online:

We are the only species with collective learning.

The globally connected brain is learning at warp speed, and when combined with fossil-fuel energy it leads to staggering complexity.

It is very powerful. I’m not sure humans are in charge.

15 responses to “Bill Gates & Big History”

  1. Nice talk. That is a difficult message to spread.

    I wonder where the happy medium between virtual learning and physical schools lies.

  2. Great talk.
    I think the comments at the video link are great reading…!!

    As an astronomy nut, I feel it's too bad we don't have access to what happened in the first 300,000 years… (!!)
    Everything prior to this (as I understand it) is never going to be studied by any technology we currently have…
    en.wikipedia.org/wiki/Cosmic_microwave_background_radiation
    As for his optimism about the internet saving the future… (at "warp speed")… fingers crossed.
    I will say that the progress in the field of astronomy has certainly exceeded the progress in dealing with more important "earth-bound" problems… and that is a shame.

  3. =)
    That was one of the best TED talks this year, and an amazing topic, definitely !!!
    Funny, I had dinner with one of my old teachers today and we talked about him and Khan =)
    Cool visual memetic too =)

  4. Excellent. I’m sending the link to educator friends.

  5. see some synchronized moves in public speaking… great portraits!

    really enjoyed the video too, joining other comments… matter was created by energy… DNA has a collective learning (consciousness), so human consciousness is the next step in collective learning… humans are bio-machines… another step – designing a more efficient bio-machine combined with a new quality of consciousness and a higher ability for collective learning?? possible…
    also yes, we can destroy ourselves through wars, but as the recent events in Japan show, we need to be more aware of creating catastrophic events when the destructive force of our technology gets triggered by natural disasters (of above-average scale)… not sure we do enough to be prepared…

  6. @solerena I would say we are limited by the ability of individual members of our species to attain knowledge, process it, and discover new knowledge. While I think interaction among members, a kind of human crowdsourcing, can somewhat compensate, I think our own evolved limitations will motivate AI. If we can create a silicon analog of the human mind, we can improve upon it (in the sense of information storage and breadth of learned processing algorithms).

    Also, along the lines of natural disasters triggering failures in our infrastructure: a professor of mine (one specializing in complex systems) pointed out that the more we try to protect ourselves from infrastructure failures, the more vulnerable we become to catastrophes. When you have eliminated all the small-scale failure modes, the only ones left are large-scale.
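That complex-systems point maps neatly onto percolation models. As a rough illustration (my own toy sketch, not anything from the talk or the professor — the function name and parameters are made up): below a critical density, failures stay confined to small pockets; above it, one giant cluster spans the system, so any spark reaches almost everything.

```python
import random
from collections import deque

def largest_cluster(p, n=50, seed=1):
    """Size of the largest 4-connected cluster of occupied sites on an
    n x n grid where each site is independently occupied with prob p."""
    rng = random.Random(seed)
    grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    seen = [[False] * n for _ in range(n)]
    best = 0
    for i in range(n):
        for j in range(n):
            if grid[i][j] and not seen[i][j]:
                size, queue = 0, deque([(i, j)])
                seen[i][j] = True
                while queue:                    # flood-fill one cluster
                    x, y = queue.popleft()
                    size += 1
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        u, v = x + dx, y + dy
                        if 0 <= u < n and 0 <= v < n and grid[u][v] and not seen[u][v]:
                            seen[u][v] = True
                            queue.append((u, v))
                best = max(best, size)
    return best

# Below the ~0.593 site-percolation threshold, clusters stay small;
# above it, a single cluster spans most of the grid.
print(largest_cluster(0.40), largest_cluster(0.75))
```

The analogy to the professor's point: suppressing every small failure lets the "density" of coupled components keep rising, until the whole system becomes one connected failure domain.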

  7. agree with both statements…
    thinking further: the only thing we do understand about the human mind is a very small portion of what we do not (this is rather obvious), so analog, quantum, or something else – in a new paradigm – probably something else… just plain observation of how fast things move… another step will be not in the realm of evolution but a total transformation of the existing system… I see it as something truly beautiful… cannot leave the aesthetic component behind… as a matter of personal preference…

    also loved the illustrations of collective learning… make perfect sense:D

  8. Reminds me of monocultures (like Windows or modern corn) being prone to catastrophic collapse… We may be better off co-evolving with our GMOs than hoping to sequester them. And I suspect we will evolve an AI rather than design it, which raises the same questions. (more on this here (my Google Tech Talk) and long ago)

  9. It might be like evolving a caterpillar which will turn itself into a butterfly:)

  10. Fantastic talk!

    "FPGA RF coupled to neighboring cells" <– That is soooo cool.

    I understand an evolved AI not being able to cope outside its original selection-pressure environment. We could evolve an AI whose selection pressure was the ability to control and utilize a complex and robust robotic body (i.e., environmental adaptation is the selection pressure). I suppose that is at odds with your "brain in the box" statement. We could suppose a situation where the box co-evolves with the brain. This brings to mind a particular theory of human evolution where our brain co-evolved with our hand in a kind of self-reinforcing process. In essence, the ability to adapt drives a selection process which selects for a more robust adaptation scheme.

    Really great talk. I was hooked after the bit about networking and load distribution (which I thought was a fantastic infrastructure paradigm), and then it just got better.
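The selection-pressure framing in this exchange can be made concrete with a toy evolutionary loop (a sketch of my own; `evolve`, the bit-string target, and all parameters here are hypothetical, not from the talk): the "environment" is just a fitness function, and whatever it rewards is all the evolved artifact ever learns to do.

```python
import random

def evolve(target, pop_size=40, mut=0.02, gens=500, seed=0):
    """Toy elitist loop: the 'environment' scores a genome by how many
    bits match a hidden target; the best half survive each generation."""
    rng = random.Random(seed)
    fitness = lambda g: sum(a == b for a, b in zip(g, target))
    pop = [[rng.randint(0, 1) for _ in range(len(target))]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == target:       # perfectly adapted to this environment
            break
        parents = pop[: pop_size // 2]
        # refill the population with mutated copies of random survivors
        pop = parents + [
            [bit ^ (rng.random() < mut) for bit in rng.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return pop[0]
```

Swap in a different fitness function and the identical loop evolves something entirely different — which is the commenter's point: the evolved thing is only "fit" relative to the environment that shaped it.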

  11. thanks!

    Rodney Brooks and several robot folk would agree with you, and would conclude that we will only design an AI with human-recognizable interfaces to the world if it grows up with human-recognizable interfaces to the world (with visual-spatial interaction and the resulting representations being one example).

    From humanoids to helicopters:
    Wandering in Willow Garage, Strandbeest, Early Replicants, Robo Helicopter Dance

    When we evolve a complex system, we get a black box defined by its interfaces. We cannot easily apply our design intuition to improve upon its inner workings. (from blog)

    It's more like parenting than design, more of a process than a product… We'll know if we succeed when the AI is able to predict the future.

  12. ha, and if all AIs follow Bill Gates, they will talk with their hands like in the portraits above – that would be funny:D most human communication is non-verbal:) a large part of human mental processes is subconscious… wonder what AI will dream about?

    baby AIs could be like little ducks following whatever appears to be a big duck… who knows… it will be easier for them to recognize basic trends as well as patterns and slice and dice data very fast, so from this rational analytical angle they will be able to predict the future… but as far as the dance of the subconscious goes, I am not really sure… how it works… will they develop their own pool or start tapping into the human one first… this is where it is a mystery and a black box…

  13. thanks… and now this photo illustrates the big history sites.
