
The debut of DigiDoug, a 3D digital twin rendered in real time in incredible detail, on stage at TED2019 and backstage in VR conferencing. The current 1/6-second time lag will shrink with each turn of Moore's Law. See a supercool demo of the deepfake future, today.
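To put that 1/6-second lag in perspective, here is a back-of-the-envelope sketch of how it might shrink over time. It assumes (hypothetically) that the lag halves with each two-year Moore's Law cycle; the function name and doubling period are illustrative assumptions, not figures from the talk.

```python
# Hypothetical projection of DigiDoug's render lag, assuming it
# halves with each two-year Moore's Law cycle.

INITIAL_LAG_S = 1 / 6  # ~167 ms, the lag reported at TED2019

def projected_lag_ms(years_out: float, doubling_period_years: float = 2.0) -> float:
    """Projected lag in milliseconds after `years_out` years of halving."""
    halvings = years_out / doubling_period_years
    return INITIAL_LAG_S * 1000 / (2 ** halvings)

for years in (0, 2, 4, 6, 8):
    print(f"{years} years out: ~{projected_lag_ms(years):.0f} ms")
```

Under that assumption, the lag drops from roughly 167 ms today to about 10 ms in eight years, comfortably below the motion-to-photon latency usually targeted for VR.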

“In an astonishing talk and tech demo, software researcher Doug Roble debuts “DigiDoug”: a real-time, 3-D, digital rendering of his likeness that’s accurate down to the scale of pores and wrinkles. Powered by an inertial motion capture suit, deep neural networks and enormous amounts of data, DigiDoug renders the real Doug’s emotions (and even how his blood flows and eyelashes move) in striking detail. Learn more about how this exciting tech was built — and its applications in movies, virtual assistants and beyond.” — TED Talks

One response to “DigiDoug DeepFake at TED2019”

  1. Backstage, in the VR interaction: my reaction when he leaned in… it felt like he got right up into my personal space. I then got up close to his face and inspected his facial pores and eyelashes. This could allow for VR videoconferencing, using the digital double to render the face of each participant (since the VR rig would obscure the face from any simple camera view).

    Pulling the headgear off made for an interesting transition for DigiDoug.
