From today’s news: “AI can now read emotions—but should it? Specifically, the researchers said emotion recognition technology should not be used in decisions that ‘impact people’s lives and access to opportunities,’ such as hiring decisions or pain assessments, because it is not sufficiently accurate and can lead to biased decisions.”

It reminds me of Yuval Noah Harari’s warning in 21 Lessons for the 21st Century: “AI is now beginning to outperform humans in the understanding of human emotions. In particular, AI can be better at jobs that demand intuitions about other people.

Feelings guide not just voters but their leaders as well. This reliance on the heart might prove to be the Achilles’ heel of liberal democracy. For once somebody (whether in Beijing or San Francisco) gains the technological ability to hack and manipulate the human heart, democratic politics will mutate into an emotional puppet show.” (pp. 20, 21, 46)

And here’s the crazy part… that photo is 8 years old, from before the deep learning boom. My late dad is to my left.

Here’s my Flickr summary from 2012:

As we walk by the camera, the TV tracks the faces and displays in real-time:
• Emotional attributes of affect: I generally peg the happiness scale
• Age: 38 +/- 12 (their mean error is 6.85 years, and this guess was accurate to within 6 years)
• Gender: Male (94.3% accuracy)
• ID: Assigned on the fly I presume, to track me over time (91.5% detection rate)
• Uptime: how long it has been tracking me, often unawares, as I had conversations off in the distance.
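For the technically curious, the per-face record a tracker like that displays can be sketched as a small data structure. This is purely illustrative Python, assuming nothing about the actual TV’s software; the field names are hypothetical, and the figures just mirror the numbers above.

```python
import time
from dataclasses import dataclass, field
from typing import Optional, Tuple


@dataclass
class TrackedFace:
    """One face as a real-time attribute tracker might represent it.
    All names and numbers here are illustrative, not a real product's API."""
    face_id: int               # assigned on the fly at first detection, reused across frames
    age_estimate: float        # e.g. 38
    age_error: float           # vendor's reported mean error, e.g. 6.85 years
    gender: str                # e.g. "Male"
    gender_confidence: float   # e.g. 0.943
    happiness: float           # affect score in [0.0, 1.0]
    first_seen: float = field(default_factory=time.time)

    def age_range(self) -> Tuple[float, float]:
        """Age as a +/- interval, the way the screen displayed it."""
        return (self.age_estimate - self.age_error,
                self.age_estimate + self.age_error)

    def uptime(self, now: Optional[float] = None) -> float:
        """Seconds this face has been continuously tracked."""
        if now is None:
            now = time.time()
        return now - self.first_seen
```

A tracker would update one such record per detected face on every frame, matching new detections to existing IDs so the uptime keeps accumulating even while you chat off in the distance.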

4 responses to “Ghost in the Machine — me and my Dad”

  1. From a recent Podcast with Tristan Harris:
    "We were all keeping an eye out for 1984, and we thought the dystopia we would get was the Big Brother one. But alongside Orwell’s dark vision, there was this other, slightly older, less well-known but equally chilling vision: Aldous Huxley’s Brave New World. It was summarized this way, and it’s beautiful:

    “What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture, preoccupied with some equivalent of the feelies, the orgy porgy, and the centrifugal bumblepuppy. As Huxley remarked in Brave New World Revisited, the civil libertarians and rationalists who are ever on the alert to oppose tyranny ‘failed to take into account man’s almost infinite appetite for distractions.’”

    He ends by saying:

    “Orwell feared that what we fear will ruin us. Huxley feared that what we desire will ruin us.”

    And that’s essentially the premise of the work. There are two ways to fail here; as in most systems, there are almost always two ways to fail. One is the authoritarian, Big Brother, censorship mode: so little information that we don’t have any, and we’re all restricted and controlled top-down, et cetera.

    But the bottom-up way to fail is to be overwhelmed by irrelevance, by distraction, by overstimulating our magic-trick sort of brain with paleolithic social validation and tribal warfare and moral outrage, and all that stuff that isn’t actually adding up to anything. And human agency, which is unique in the world, is the thing sitting between those two failure modes: informed, effective choice, good choice.

    That’s where we need to get to right now as a human civilization, because those other two models are really bad, and in some cases self-terminating. My biggest fear about these issues is that we have to be able to agree on a common reality, a common truth, because that’s the only way. If we don’t agree on what’s real, if we don’t believe there is truth, then we can’t construct shared agendas to solve problems like inequality or climate change or whatever. We have real problems, and we have to find ways to actually reach those agreements and then construct actions together to change things.

    I think that right now the technology is kind of taking us away from that. But the reason we work on these topics is that I want to live in a world where technology gives us the superpowers to do that: superpowers for common ground, superpowers for constructing shared agendas, superpowers so that instead of learned helplessness from climate change news pounded into our nervous systems, dosed to two billion people a day, we have mass empowerment, mass coordinated action that we can all take and feel optimistic about: all the progress we’re making, and all the things we can do next.

    That’s kind of the project here. It’s like we are trapped in this one paleolithic meat suit that’s got these kinds of bends and contortions that bend reality in a way that can be hacked. And we can also use those bends and contortions in a way that gives us the most empowerment. And if we ever needed those superpowers, it’s right now.

    Tim Ferriss: This is a perfect segue. I’m going to start by finishing the quote that I ended up only reading partially. It’s Big Brother. This is from Chuck Palahniuk:

    “Big Brother isn’t watching. (This is very close to what you were just saying with Huxley.) He’s singing and dancing. He’s pulling rabbits out of a hat. Big Brother’s busy holding your attention every moment you’re awake. He’s making sure you’re always distracted. He’s making sure you’re fully absorbed. He’s making sure your imagination withers. Until it’s as useful as your appendix.”

  2. Communists and libertarians keep pointing fingers and avoiding introspection… From George Soros at WEF:

    "I want to call attention to the mortal danger facing open societies from the instruments of control that machine learning and artificial intelligence can put in the hands of repressive regimes. I’ll focus on China, where Xi Jinping wants a one-party state to reign supreme.

    A lot of things have happened since last year and I’ve learned a lot about the shape that totalitarian control is going to take in China.

    All the rapidly expanding information available about a person is going to be consolidated in a centralized database to create a “social credit system.” Based on that data, people will be evaluated by algorithms that will determine whether they pose a threat to the one-party state. People will then be treated accordingly.

    China isn’t the only authoritarian regime in the world, but it’s undoubtedly the wealthiest, strongest and most developed in machine learning and artificial intelligence. This makes Xi Jinping the most dangerous opponent of those who believe in the concept of open society. But Xi isn’t alone. Authoritarian regimes are proliferating all over the world and if they succeed, they will become totalitarian.

    What I find particularly disturbing is that the instruments of control developed by artificial intelligence give an inherent advantage to authoritarian regimes over open societies. For them, instruments of control provide a useful tool; for open societies, they pose a mortal threat.

    I’ve been concentrating on China, but open societies have many more enemies, Putin’s Russia foremost among them. And the most dangerous scenario is when these enemies conspire with, and learn from, each other on how to better oppress their people.

    Since Xi is the most dangerous enemy of the open society, we must pin our hopes on the Chinese people, and especially on the business community and a political elite willing to uphold the Confucian tradition.

    This doesn’t mean that those of us who believe in the open society should remain passive. The reality is that we are in a Cold War that threatens to turn into a hot one. On the other hand, if Xi and Trump were no longer in power, an opportunity would present itself to develop greater cooperation between the two cyber-superpowers.

    It is possible to dream of something similar to the United Nations Treaty that arose out of the Second World War. This would be the appropriate ending to the current cycle of conflict between the US and China. It would reestablish international cooperation and allow open societies to flourish. That sums up my message."

  3. Good to see the 38 year old guy in the pic pegged Happy! Great write ups shared. Thank you Steve.

  4. With all the "legally obtained" data on each individual from "free" applications, and the illegally obtained data from hackers (medical data), AI will not only be precise but able to steer your every decision in life, like business investments, soul-mate selection, etc., through in-depth knowledge of your unique emotions and your cognitive reactions. By associating your pulse rate from your smartwatch with the article you are reading and your facial reactions from your PC camera, there will be no room for error.
