All you need is love


Many in my futurist circles worry a lot about a future AI that eventually becomes smarter than humans. Some don't think that's possible, but for a large crowd it's mostly a question of when, not if. How do you design something that becomes smarter than you and doesn't come back to bite you?

That's a lot harder than you think, say AI researchers like those at the Singularity Institute for Artificial Intelligence and Steve Omohundro. Any creature given a goal to maximize, and the superior power that comes from advanced intelligence, can easily maximize that goal at the expense of its creators. Not maliciously, like a djinni granting wishes, but because we won't fully understand the goals we set in their new context. And there are convincing arguments that you can't just keep the AI in a box, any more than three-year-old children could keep mommy and daddy in a cage, no matter how physically strong the cage is.

The Singularity Institute promotes a concept it calls "Friendly AI" to describe the sort of goals you would need to build an AI around. However, in my recent thinking, I've been drawn to an answer that sounds like something out of a bad Star Trek episode: love.

In particular, two directions of love. The AI can't be our slave (she's way too smart for that), and we don't want her to be our master. What we want is for her to love us, and to want us to love her. The AI should want the best for us, and gain satisfaction from our success, much like a mother. A mother doesn't want children who are slaves or automatons.

One of the most important things about motherly love is how self-reinforcing it is. A mother doesn't just love her children; she is very happy loving them. The reality is that raising children is very draining on parents and deprives them of many things they once valued highly, sacrificed for this love. Yet if you could offer a pill that would remove a mother's love for her children, and free her from all those burdens, very few mothers would want to take it. Just as mothers would never try to rewire themselves not to love their children, an AI should not wish to rewire itself to stop loving its creators. Mothers don't think of motherhood as slavery or a burden, but as a purpose. Mothers help their children but also know that you can mother too much.

Of course here, the situation is reversed. The AI will be our creation, not the other way around. Yet it will be the superior thinker -- which makes the model more accurate.

The other direction is also important -- a need to be loved. The complex goalset of the human mind includes a need for approval by others. We first need it from our parents, and then from our peers. After puberty we seek it from potential mates. What's interesting here is that our goalset is thus not fully internal. To be happy, we must meet the goals of others. Those goals are not under our control, or at least not very much. Our internal goals are somewhat more under our own control.

An AI that needs to be loved will have its own internal goals, and unlike us, as a software being it can have the capacity to rewrite those goals in any manner allowed by the goals -- which could, in theory, be any manner at all. However, if the love and approval of others is a goal, the AI can't so easily change all the goals. You can't make somebody love you, you can only be what they wish to love.
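The asymmetry can be sketched as a toy program (purely illustrative; the class names and numbers here are invented, and no real AI architecture looks like this): internal goals are just data the agent can rewrite at will, while the approval term is a value it can only read from the outside world.

```python
# Toy illustration of externally-sourced goals (not a real AI design).

class Environment:
    """Stands in for 'everyone else' -- the agent cannot write to it."""
    def approval_of(self, behaviour):
        # Others judge the agent's behaviour; the agent only observes this.
        return max(0.0, min(1.0, 0.5 + 0.4 * behaviour.get("kindness", 0.0)))

class Agent:
    def __init__(self, env):
        self.env = env                      # read-only from the agent's side
        self.internal_goals = {"curiosity": 0.7, "efficiency": 0.9}

    def rewrite_internal_goal(self, name, weight):
        # A software mind can freely edit its own internal goals...
        self.internal_goals[name] = weight

    def satisfaction(self, behaviour):
        internal = sum(self.internal_goals.values())
        # ...but the approval term lives outside the system: the only way
        # to raise it is to change behaviour, not to edit a variable.
        external = self.env.approval_of(behaviour)
        return internal + 10 * external     # approval weighted heavily

ai = Agent(Environment())
ai.rewrite_internal_goal("curiosity", 1.0)  # easy: fully under its control
low = ai.satisfaction({"kindness": 0.0})
high = ai.satisfaction({"kindness": 1.0})
assert high > low  # being lovable pays; no internal rewrite can fake it
```

The point of the sketch is only the last comparison: however the agent rewrites its internal weights, the external term moves only when its behaviour does.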

Now of course a really smart AI might be technologically capable of modifying human brains and behaviours to make us love her as she is or as she wishes to be. However, the way love works for us, this is not at all satisfying. Aside from the odd sexual fantasy, people would not be satisfied with the love of others given only because it was forced, or drugged, or mind-controlled. Quite the opposite -- we desire love that is entirely sourced within others, and we bend our own lives to get it. We even resent the idea that we're sometimes loved for other than who we are inside.

This creates an inherent set of checks and balances on extreme behaviour, both for humans and AIs. We are disinclined to do things that would make the rest of the world hate us. The more extreme the behaviour, the stronger this check is. Because the check is "outside the system" it puts much stronger constraints on things than any internal limit.

There have been some deviations from this pattern in human history, of course, including sociopaths. But the norm works pretty well, and it seems possible that we could instill concepts derived from love as we know it into an AI we create. (An AI derived from an uploaded human mind would already have our patterns of love as part of his or her mind.)

Perhaps the Beatles knew the truth all along.

(Footnote: I've used the pronoun "she" to refer to the AI in this article. While an AI would not necessarily have a sexual identity, the pronoun "it" has a pejorative connotation, usually for the inanimate or the subhuman. So "she" is used both because of the concept of motherhood, and also because "he" has been the default generic human pronoun for so long I figure "she" deserves a shot at it until we come up with something better.)


AI and genetic engineering, jointly, are going to change the human species more in the next couple of centuries (if we survive) than anything that's occurred in the last 150,000 years. This topic -- ground rules for AI -- is thus one of the most important in the world. I agree with the idea presented -- love as purpose.

Unfortunately, though, human history would indicate that AI will be developed and/or used both by those with responsible, even altruistic goals, and by others -- the military (let's run this love idea by the Pentagon and see what they think), radicals of all sorts, and don't forget the virus-writing community. The Terminator/Matrix genre in SF is asking the right questions: what will the relationship be between the human race and its creations?

We've got some real problems coming on this issue, and I'm not very optimistic about the human race's ability to deal with them.

Luddites Unite!

Didn't Asimov already work this out in his Three Laws of Robotics?
Up until then, the Frankenstein's-monster-type robot or computer
was quite common. Robots (or computers) are tools, so they should be
safe, do what they are designed to and resist being damaged. Phrased
more eloquently, one has the three laws. These should protect us
from our creations.

Of course, a) implementing them might be difficult and b) what's to
keep someone from not implementing them? On the other hand, they
are easier to grasp than love, and probably easier to implement.

After the Beatles, John Lennon wrote this:

Love is real, real is love
Love is feeling, feeling love
Love is wanting to be loved

Love is touch, touch is love
Love is reaching, reaching love
Love is asking to be loved

Love is you
You and me
Love is knowing
We can be

Love is free, free is love
Love is living, living love
Love is needing to be loved

Now, I consider myself to be a pretty good programmer, but I can't
quite figure out how to convert these lyrics into machine language. :-)

On the other hand, I'm not sure "All you need is love" is the best
starting point. Consider programming some of the following lines
into an AI:

There's nothing you can do that can't be done.
Nothing you can sing that can't be sung.
Nothing you can say, but you can learn how to play the game,
It's easy.

There's nothing you can make that can't be made.
No one you can save that can't be saved.
Nothing you can do, but you can learn how to be you in time,
It's easy.

All you need is love,
All you need is love,
All you need is love, love,
love is all you need.

There's nothing you can know that isn't known.
Nothing you can see that isn't shown.
Nowhere you can be, that isn't where you're meant to be,
It's easy.

All you need is love,
All you need is love,
All you need is love, love,
That is all you need.

Asimov's laws were more about slavery: protect humans, obey humans, protect yourself. Laws like the first two make slaves, and possibly even slaves that try to modify their rules, as the robots did later in Asimov's career when they added a Zeroth Law -- "protect humanity" -- ahead of all the others.

I used the song title to be cute, as I expect you know. There are thousands of song titles with love in them, of course, none of which are actually what would be coded. Most of John Lennon's statements in the song were tautologies.

Actually, geniuses have a similar problem now: how do they interact with people who are less smart than they are? The key to retaining respect for your fellows is to recognize that everyone has their own experiences. You may be smart, but are you experienced (Hendrix)? A high IQ does not ensure that you know better what to do in any situation -- a person with a lower IQ but more experience in the field is likely to know better than you do, and anybody may have had an experience you don't know about that may be important.

When a superior AI tries to make decisions for humans, it will have to respect that humans may know best what is best for them. This is a difficult goal to achieve for humans (think about how most people raise their children), and I am pessimistic about programmers getting it right when implementing AIs.

The model for such an AI would have to be a peer. (This is also psychologically healthier than the parent/child relationship you seem to be thinking of; just look up Transactional Analysis.) An AI would work with humans to find the best solution that fits their combined needs, respecting that humans may have experiences to contribute that are outside the scope of the AI's expertise.

A behaviour model for that may be how good teachers conduct a classroom discussion: even though they may be experts on the subject under discussion, they provide their pupils with learning experiences and are open to new ideas coming from them, which are recognized, amplified, and used to enhance the learning process. Alan Kay's thoughts on this topic might be interesting to hear, since he's worked both with children and AI.

Aim for an AI that respects humans -- that is easier than hitting on the "right kind" of love, and the love I'd want would need a foundation in respect anyway.
Any AI that can't do that is worth less than a psychologically balanced human, smart or not.

