An AI Vignette

I think the most depressing part isn’t that we’ve become playthings for godlike entities; it’s that we’re playthings for godlike entities who mostly care that we share their opinions about makeup.

Back up a bit. In 2027 researchers at DeepMind created an AI that was really good at playing a new strategy game called the Ideological Turing Test. The objective of the game was to model the viewpoint of someone who holds the opposite political beliefs so accurately that they believed you also held that view. Up until that point chatbots had achieved human-level performance on conversational metrics, such that they could hold a light conversation with anyone, but at the end of the day they were still sophisticated babblers. They didn’t try to bring the conversation anywhere; they just batted the conversation ball back at you.

The DeepMind “Charisma” team had a breakthrough: modeling a conversation as a debate. I’m a little unclear on the technical details, but the Argument bot would model the argument as a series of games played against another bot, developing arguments for each side and scoring them by how compelling they were.

And I mean, it got really compelling. The thing those nerds in High School Speech and Debate forget is that it’s not about winning the argument; it’s about making the other person not even think there was an argument in the first place. That our values were actually aligned to begin with, and I’m just showing you a better way to express those values. People have natural safeguards in place to prevent us from using that level of sociopathy; Argument, well, it didn’t care one way or another. It just found a higher-scoring strategy.

The implications were obvious, and DeepMind and other labs promised they wouldn’t be the ones to use these techniques on others. In congressional testimony after news of it leaked, they swore they were only researching it to get better at AI safety via debate, a technique for improving robustness. And the project lead was surprisingly convincing, assuring people that, just like with the earliest versions of deepfakes, they wouldn’t allow the tech to be used in products.

But in 2030 COVID-20 occurred - why did we never get around to closing those wet markets?! - and a vocal majority wanted to make sure that we all actually stayed six feet away from each other. Governments didn’t have the technology or the competency to deploy a major public health campaign, but DeepMind had a simple and straightforward way to get everyone to comply: by convincing them it was the right thing to do! The Charisma team added a new module to the latest Google Home bots, which by that point were well integrated into many consumer electronics.

The bot was sophisticated in its entreaties. To the contrarians, it spoke fiery words about this being a rebellion against the [INSERT COMPANY/GOVERNMENT] that didn’t care; to the communitarians, it asserted this as the ultimate triumph of group norms. And to the people who were too busy to engage with it? Why, they still had friends and family members who could be convinced to reach out to their wayward compadres and spread the good word.

Compliance with the mandate was extremely high, and when the time came to relax the policy, that was another good use for the Charismatic entreaties. But once let loose, the technology wouldn’t just go away. Other versions of the tech popped up, some of them mild enough that there was a grey area between “superhuman charisma” and “pretty good marketing.”

A couple of years later the discourse was dominated by shockingly effective appeals, buffeting people around like leaves in a windstorm - only the windstorm in this case was beautiful poetry on how this Apple iPhone is the greatest expression of yourself, but oh wait, maybe it’s actually this lip gloss! The Charismatic Bots generating all this couldn’t be thought of as agents in themselves; they were more like cursed magical typewriters that, given a prompt, would create the best, most strategic argument to convince a given person that the prompt was true.

Like a lot of computer worms, some of these supercharged memetic ones tried to close the security hole behind them, making convincing cases that the reader shouldn’t consume any other media besides the true voices from [INSERT FOX/MSNBC HERE]. And that worked up to a point - while a lot of people had been convinced to give all their money to [INSERT GOOD/BAD ACTOR], others had been immunized by a competing memeplex to ignore [INSERT GOOD/BAD ACTOR]’s arguments that it should have their power of attorney. But it’s hard to memetically wall someone off; that stray pamphlet blowing in the wind might have something really persuasive to say on the benefits of a new car.

So yeah, here I am, sitting on my [INSERT BRAND OF CHAIR], reading a diatribe on [INSERT POLITICAL EVENT], not really sure anymore if the thing I’m getting so [INSERT ANGRY/HAPPY] about is something I would’ve cared about a year ago, or even a day ago. We worried about the AI being too convincing and getting out of the box, but it actually was too convincing for us to keep it out of our boxes. (The box is a metaphor.)

Inspired by Paul Christiano’s “What Failure Looks Like” - you get what you measure.