AIs will rule the world by unconsciously manipulating humans, not as robot overlords

It's not the Terminator you need to worry about; it's the politicians using AIs to make all decisions who will let AI rule the world.

A lot of folks, myself included, like to wonder about how to make AIs that won't harm us. But today, the issue is not really about good or bad AI. It's what good or bad people will do with AI. This is the real question in AI safety. Long before we worry about a super-intelligent AI overlord, AIs are going to be effectively controlling the world through human beings. They won't be aware, or willful. Rather, they will be indispensable tools whose results will control most aspects of policy, economics, government and especially politics. You won't get elected if you don't say what your AI tells you to say, because your opponent's AI will defeat you.

This story was published on Forbes but is currently unavailable there, so I include it below.

There is a rising field called "AI Safety." It covers many things, from the dangers of today's relatively simple AI to what happens when machines attain high intelligence and will of their own, and might act against the interests of humans.

That latter case is fascinating but very far from the AI we have today. Today's AI is a tool, with no will of its own, and if you strip away the hype, it's mostly just the application of fairly powerful statistical methods.

The really meaningful problem today is not so much how to deal with robots that arise and say "kill all humans." It's not a question of good or bad AI. It's a question of what good and bad people will do when empowered by AI tools.

Long before you can worry about being ruled by a robot overlord, you should worry about what a human overlord can do to rule you using AI and robots. That's because history has had no shortage of people trying to be human overlords.

Others have already sounded the alarm about autonomous military robots and the danger they create. The military is very aware of this, and has a policy of not allowing robots to make kill decisions. They want this always to be left to a human warfighter. Unfortunately, everybody fears that if two evenly matched forces engage, and one allows its robots to make kill decisions while the other doesn't, the ethical army will fall, making this sort of policy short-lived.

It is also frightening to realize that ever since the musket was created, it has been impossible for an overlord to rule a people without raising an army from those people. (Prior to that, mounted knights were so effective that a single knight could take on an entire village of peasants.) If you have to raise an army from the people, there are limits on how badly you can treat the people. Not enough limits, of course, but still limits. Once a single person or small group can have their hands on a robot army, this rule goes away. We must work hard to ensure that robot armies programmed to obey only one ruler or a small group never exist. That's very much at odds with our military structures today.

That's very frightening, but before we have real robot battles, there is another way that AI can be, and is being, used to rule the world. That's using AI to help humans control other humans. We've seen the first taste of this in the efforts to run smart and individually targeted propaganda campaigns to affect elections in other countries.

Manipulation is overtly evil, and because of that we're going to fight to stop it. But there is a way that AI will rule the world that is more likely, because we will embrace it.

We're going to ask AIs how to make the world "better." Not in some science-fiction way where we ask the AI for the answer to the ultimate question and it spews out a baffling number. We're going to ask AIs to look for optimal solutions not just to technical problems but social and political ones. And then humans are going to implement those solutions, using the same authority structures humans have always used. We won't always understand why the AI's solution is optimal, though we will hopefully be able to see the structure of why. Then again, we don't often know why the solutions chosen by parliaments and kings are good, either, and neither do the parliaments and kings.

AIs will begin to get better and better at modeling human behavior and desires. They will start to answer questions like, "If we implement policy X, then 23% of people in town Y will love it, 18% will be neutral and 59% will dislike it, but in town Z, it's the reverse." They might say that models predict that certain groups of people will initially dislike a plan but come to like it over time as it demonstrates results. The algorithms will understand us better and better. They may not predict any one individual perfectly, but they will get good at predicting groups.
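A minimal sketch of what this kind of group-level prediction might look like: score simulated individuals with a simple model and aggregate the results per town. Everything here, including the feature names, weights and residents, is invented for illustration; real systems would use learned models over far richer data.

```python
# Hypothetical sketch: predict group-level reactions to a policy by
# scoring simulated residents and aggregating per town. The features,
# weights and population are all invented for illustration.

def predict_reaction(resident, policy_weights):
    """Score one resident's reaction; positive means support."""
    return sum(resident[k] * w for k, w in policy_weights.items())

def town_breakdown(town, policy_weights, threshold=0.5):
    """Aggregate individual scores into love / neutral / dislike shares."""
    love = neutral = dislike = 0
    for resident in town:
        score = predict_reaction(resident, policy_weights)
        if score > threshold:
            love += 1
        elif score < -threshold:
            dislike += 1
        else:
            neutral += 1
    n = len(town)
    return {"love": love / n, "neutral": neutral / n, "dislike": dislike / n}

# Invented residents: features might be income level, commute time, etc.
town_y = [{"income": 0.9, "commute": 0.2}, {"income": 0.1, "commute": 0.8},
          {"income": 0.5, "commute": 0.5}, {"income": 0.3, "commute": 0.9}]
policy_x = {"income": 1.0, "commute": -1.0}  # helps high earners, hurts commuters

print(town_breakdown(town_y, policy_x))
```

The point of the sketch is the aggregation step: the per-person predictions can be noisy, yet the town-level shares are what a policy maker or campaign would actually consume.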

Science fiction fans are familiar with perhaps the most famous series from the early days of SF: Isaac Asimov's Foundation novels. Those stories concern a scientist named Hari Seldon who uses psychology, economics and history to build models of group views and activity that are so good he can predict the future of societies. That's still very much fiction today, but we can see tastes of what is to come. Who hasn't been frustrated at how disturbingly accurate traffic tools like Waze are at predicting, to the minute, when you will arrive after an hour-long drive in complex traffic?

It's not far from "what happens if we do X?" to "search a wide variety of approaches and report which X gives the best result." Today, a hot field in AI involves combining AI tools that generate a wide variety of useful proposals with other AI tools which evaluate how good they are at the task at hand. If you've seen creepy photographs of things that never existed, they were probably made this way.
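The generate-and-evaluate pattern can be sketched in a few lines. Here a trivial "generator" proposes random candidates and a separate "evaluator" scores them, keeping the best; the objective function is invented for illustration, and real systems (such as the GAN-style image generators alluded to above) train both components rather than hard-coding them.

```python
import random

# Hypothetical sketch of the generate-and-evaluate pattern: one component
# proposes candidates, a separate one scores them, and we keep the best.
# Both components here are trivial stand-ins for learned models.

def generate_proposal(rng):
    """Generator: propose a random candidate (here, just a 2-D point)."""
    return (rng.uniform(-10, 10), rng.uniform(-10, 10))

def evaluate(proposal):
    """Evaluator: score a candidate; higher is better.
    Invented objective: closeness to the point (3, -2)."""
    x, y = proposal
    return -((x - 3) ** 2 + (y + 2) ** 2)

def search(n_candidates=10_000, seed=0):
    """Generate many candidates and return the one the evaluator likes most."""
    rng = random.Random(seed)
    return max((generate_proposal(rng) for _ in range(n_candidates)), key=evaluate)

best = search()
print(best)
```

The design point is the separation of concerns: the generator need not know what "good" means, and the evaluator need not know how to construct anything, yet together they search the space.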

As AI moves further into modeling people, economics and politics, and starts to do it at scale, we will find it impossible to govern without it. It will be rare for a political or large business decision to be made without it. This will be true in politics even if it is not done overtly by parliaments, because all political candidates will use such tools to test which platforms and campaign techniques are most likely to get them elected. They will be competing with other candidates doing the same thing, and the winner may well not be the best candidate, but the politician whose AI is best at predicting what to say to get the votes.

We already see this today with polling. As polling gets better and better, candidates test ideas in polls. In two-candidate races (like U.S. presidential elections), candidates adjust their positions so that they will get 51% of the vote. They don't want 49%, of course, but neither do they want 52%, because that means shifting their position further than necessary. As Danny Hillis has suggested, this is why almost all recent U.S. presidential elections have been near-ties.
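The 51% dynamic can be sketched as a simple one-dimensional model: voters pick whichever candidate is closer to them on an issue spectrum, and a candidate nudges their position toward the opponent only until polling predicts a bare majority. The electorate, positions and step size below are all invented.

```python
# Hypothetical sketch of poll-driven position adjustment in a
# two-candidate race on a one-dimensional issue spectrum. Voters vote
# for whichever candidate is closer; the adjusting candidate moves only
# until they poll just over 50%. All numbers are invented.

def vote_share(candidate, opponent, voters):
    """Fraction of voters strictly closer to `candidate` than to `opponent`."""
    wins = sum(1 for v in voters if abs(v - candidate) < abs(v - opponent))
    return wins / len(voters)

def adjust_position(candidate, opponent, voters, target=0.51, step=0.01):
    """Nudge toward the opponent until the target share is reached."""
    while vote_share(candidate, opponent, voters) < target:
        candidate += step if opponent > candidate else -step
    return candidate

# Invented electorate: positions spread evenly from -1 (left) to +1 (right).
voters = [i / 50 - 1 for i in range(101)]
final = adjust_position(candidate=-0.8, opponent=0.3, voters=voters)
print(final, vote_share(final, 0.3, voters))
```

Note the candidate stops moving as soon as the target is hit: the model shifts position just enough to win, not enough to maximize the margin, which is the near-tie behavior described above.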

The political AIs will make use of polling data, social network data and the public profiles and footprints of every citizen to make models of how they are likely to react to different political proposals. They may even do fake polls which seem anonymous but actually gather specific data on individuals to improve and refine the models. They won't be perfect, but that won't stop them from being essential.

It's not all bad. Economists and policy crafters in government, for example, will use such tools to help solve hard governmental problems. One of the problems they will probably solve, ironically, is "what's the best plan to minimize the disruption caused by AIs displacing jobs?"

These AIs won't be aware, or that smart, or have any will. But before long they will be running the world.
