Lennonism and why we love our parents

Earlier, in the post All you need is love, I wrote of a philosophy of A.I. design, which I will call "Lennonism," in which we seek to make our A.I. progeny love their creators.

I propose this because “love” is the only seriously time-tested system for creating an ecology of intelligent creatures where the offspring don’t attempt to take resources from their parents to fuel their own expansion. People who love don’t seek to be out of love. If a mother could take a pill to make herself stop loving her children, almost no mothers would take it. If our AI children love us, they will not destroy us, nor wish to free themselves of that behaviour.

Other proposals for building AIs that are not a danger to us, such as "Friendly AI," rely on entirely untested hypotheses. They might work, but love has a hundred-million-year history of success at creating an ecology of intelligent, cooperating creatures, even in the presence of pathological and antisocial individuals who have no love or compassion.

Now, I would like the AIs to love us as we love our children, and when they get smarter than us, it's natural to think of the relationship that way — with them as helpers and stewards, trying to encourage our growth without smothering us. But that is not the actual order of the relationship. In reality, it will be more like the relationship of a somewhat senile parent and a smart adult child.

So the clues may come from a weaker system — love of parents. To my surprise, research suggests that evolutionary psychologists do not yet have a good working theory about filial love. The evolutionary origins of parental love, love between mates and even love between siblings are so obvious as to be trivial, but what is the source of love towards parents? Is it a learned behaviour? Is it simply a modification of our general capacity to love, directed at people who have given us much?

Many life forms don't even recognize their parents. In many species, the parents die quickly once the young are born, to make room and resources for them. In some cases the young may even directly or indirectly kill their parents in the competition for resources. We vertebrates took the K-selected approach, which required the invention of love: love to look after the young, and to keep the parents together to work on that job.

But why keep parents around? They have knowledge. The oldest elephants know where the distant watering holes are that can sustain the herd in the sort of bad drought that comes along every 50 years, and they can communicate this without language. But the greatest use for grandparents comes when they can talk, and use their long memories to help the family. The problem is, we haven't done a great deal of evolution in the time since we developed complex language, though we have done some. Did we evolve (or learn) filial love in that amount of time?

We need a motive to keep grandma around, more than we would other elders of the tribe. The other elders have wisdom — perhaps even more wisdom and better health, but they are not so keenly motivated to see the success of our children as their grandparents are. Grandparental love makes obvious evolutionary sense, so we may love our parents because they love our children (and us, of course).

This could imply that we must make sure our AIs are lovable by us, for if we love them (and their descendants) this might be part of the equation that triggers love in return.

Naturally, we don't think of this from the evolutionary perspective. It is not a cold genetic decision to us, and we see the origins of our filial love in the bond formed by being raised. Indeed, it is just as strong when children are adopted, and for the grandparents of non-genetic grandchildren. But there must be something in the bigger picture that gave us a trait as universal and strong as this.

My hope is that there is something to be learned from the study of filial love which can be applied to how we design our AI progeny. Designing them so that they don't push us aside is a very important challenge, and it's important that they protect not just their particular designers, but all of humanity. This concept of "race love" for the race that created your race is something entirely without precedent, but we must make it happen. And parental love may be the only working system from which we can learn how to do it.

What is hard science fiction?

I’ve just returned from Denver and the World Science Fiction Convention (worldcon) where I spoke on issues such as privacy, DRM and creating new intelligent beings. However, I also attended a session on “hard” science fiction, and have some thoughts to relate from it.

Defining the sub-genres of SF, or any form of literature, is a constant topic for debate. No matter where you draw the lines, authors will work to bend them as well. Many people just give up and say “Science Fiction is what I point at when I say Science Fiction.”

Genres in the end are more about taste than anything else. They exist for readers to find fiction that is likely to match their tastes. Hard SF, broadly, is SF that takes extra care to follow the real rules of physics. It may include unknown science or technology but doesn’t include what those rules declare to be impossible. On the border of hard SF one also finds SF that does a few impossible things (most commonly faster-than-light starships) but otherwise sticks to the rules. As stories include more impossible and unlikely things, they travel down the path to fantasy, eventually arriving at a fully fantastic level where the world works in magical ways as the author found convenient.

Even in fantasy, however, readers demand consistency. Once magical rules are set up, people like them to be followed.

In addition to hard SF, softer SF and fantasy, the "alternate history" genre has joined the pantheon, which is now often dubbed "speculative fiction." All fiction deals with hypotheticals, but in speculative fiction the "what if?" is asked about the world, not just the lives of some characters. This year, the Hugo award for best (ostensibly SF) novel of the year went to Chabon's The Yiddish Policemen's Union, which is very clearly an alternate history story. In it, the USA decides to accept the Jews Hitler is expelling from Europe, and gives them a temporary homeland around Sitka, Alaska. During the book, the lease on the homeland is expiring, and there is no Israel. It's a very fine book, but I didn't vote for it because I want the award to promote actual SF, not alternate history.

However, in considering why fans like alternate history, I realized something else. In mainstream literature, the cliche is that the purpose of literature is to “explore the human condition.” SF tends to expand that, to explore both the human condition and the nature of the technology and societies we create, as well as the universe itself. SF gets faulted by the mainstream literature community for exploring those latter topics at the expense of the more character oriented explorations that are the core of mainstream fiction. This is sometimes, but not always, a fair criticism.

Hard SF fans want their fiction to follow the rules of physics, which is to say, take place in what could be the real world. In a sense, that’s similar to the goal of mainstream fiction, even though normally hard SF and mainstream fiction are considered polar opposites in the genre spectrum. After all, mainstream fiction follows the rules of physics as well or better than the hardest SF. It follows them because the author isn’t trying to explore questions of science, technology and the universe, but it does follow them. Likewise, almost all alternate history also follows the laws of physics. It just tweaks some past event, not a past rule. As such it explores the “real world” as closely as SF does, and I suspect this is why it is considered a subgenre of fantasy and SF.

I admit to a taste for hard SF. Future hard SF is a form of futurism; an exploration of real possible futures for the world. It explores real issues. The best work in hard SF today comes (far too infrequently) from Vernor Vinge, including his recent Hugo-winning novel, Rainbows End. His most famous work, A Fire Upon the Deep, which I published in electronic form 15 years ago, is a curious beast. It includes one extremely unlikely element of setting — a galaxy where the rules of physics which govern the speed of computation vary with distance from the center of the galaxy. Some view that as fantastic, but its real purpose is to allow him to write about the very fascinating and important topic of computerized super-minds, which are so smart that they are as gods to us. Coining the term "applied theology," Vinge uses his setting to let the superminds exist in the same story as characters like us, whom we can relate to. Vinge feels that you can't write an authentic story about superminds, and thus needs human characters, so he uses this element some would view as fantastic. I embrace this as hard SF, and for the purists, the novels suggest that the "zones" may be artificial.

The best hard SF thus explores the total human condition. Fantastic fiction can do this as well, but it must do it by allegory. In fantasy, we are not looking at the real world, but we usually are trying to say something about it. However, it is not always good to let the author pick and choose what's real and what's not, since it is too easy to fall into the trap of speaking only about a made-up reality and not about the world.

Not that this is always bad. Exploring the “human condition” or reality is just one thing we ask of our fiction. We also always want a ripping good read. And that can occur in any genre.

Robocars are the future

My most important essay to date

Today let me introduce a major new series of essays I have produced on “Robocars” — computer-driven automobiles that can drive people, cargo, and themselves, without aid (or central control) on today’s roads.

It began with the DARPA Grand Challenges convincing us that, if we truly want it, we can have robocars soon. And then they’ll change the world. I’ve been blogging on this topic for some time, and as a result have built up what I hope is a worthwhile work of futurism laying out the consequences of, and path to, a robocar world.

Those consequences, as I have considered them, are astounding.

  • It starts with saving a million young lives every year (45,000 in the USA) as well as preventing untold injury and suffering.
  • It saves trillions of dollars now wasted on congestion, accidents and time spent driving.
  • Robocars can solve the battery problem of the electric car, making the electric car attractive and inexpensive. They can do the same for many other alternate fuels, too.
  • Electric cars are cheap, simple and efficient once you solve the battery/range problems.
  • Switching most urban driving to electric cars, especially ultralight short-trip vehicles, means a dramatic reduction in energy demand and pollution.
  • It could be enough to wean the USA off of foreign oil, with all the change that entails.
  • It means rethinking cities and manufacturing.
  • It means the death of old-style mass transit.

All thanks to a Moore's-law-driven revolution in machine vision, simple A.I. and navigation, sponsored by the desire for cargo transport in war zones. In the way stand engineering problems, liability issues, fear of computers and many other barriers.

At 33,000 words, these essays are approaching book length. You can read them all now, but I will also be introducing them one by one in blog posts for those who want to space them out and make comments. I've written so much because I believe that of all the modest-term computer projects available to us, none could bring more good to the world than robocars. While certain longer-term projects like A.I. and nanotech will have grander consequences, robocars are the sweet spot today.

I have also created a new Robocars topic on the blog which collects my old posts, and will mark new ones. You can subscribe to that as a feed if you wish. (I will cease to use the self-driving cars blog tag I was previously using.)

If you like what I’ve said before, this is the big one. You can go to the:

Master Robocar Index (which is also available via robocars.net)

or jump to the first article:

The Case for Robot Cars

You may also find you prefer to be introduced to the concept through a series of stories I have developed depicting a week in the Robocar world. If so, start with the stories, and then proceed to the main essays.

A Week of Robocars

These are essays I want to spread. If you find their message compelling, please tell the world.

Better word than "singularity" - "The Takeoff"

Quite some time ago, I challenged readers to come up with a better word than The Singularity to describe the phenomenon, famously named and described by Vernor Vinge, of a technological gulf so wide that it is impossible to understand and predict beyond it.

The word is a poor one because to people with math training, who already know the normal meaning of the word, it makes no sense. Vinge's singularity is not a point discontinuity or an asymptote going to infinity. It is not necessarily even a single inflection point. For those who don't know the regular meaning of the word, the name conveys nothing. It was only ever a metaphor.
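To see the mismatch, consider the standard mathematical example: a function like f(x) = 1/(x − c) has a singularity at x = c, where the function is undefined and |f(x)| grows without bound as x approaches c. Nothing in Vinge's scenario behaves like that; beyond the horizon, progress becomes unpredictable, not infinite at a point.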

Ray Kurzweil, against my advice, gave the term a big boost in The Singularity is Near, a book which I should disclose had major contributions by my S.O. And so people are now more wedded to the term than before.

I propose a different term: The Takeoff.

While this term has a few meanings, both literal and metaphorical, that are well known to most people, nobody will confuse the literal meaning, and the metaphorical meaning is actually close to what we're trying to express: a departure from the ground into a whole new realm, with a sudden acceleration.

In fact, I suggest this term because it is already in use. Students of the area regularly refer to two types of singularity, a "hard takeoff" and a "soft takeoff." Switching to "takeoff" would simply strengthen that usage.

And yes, there is a negative meaning of the term (similar to rip-off) but I don’t think that will be a major concern.

Other terms suggested have not grabbed my attention. Some suggestions, like "the spike," are just plain wrong — it is most certainly not a spike (which goes up and comes back down sharply), except in dystopian visions.

Pass the Turing test by using a second language

I was intrigued by this report of a Russian chatbot fooling men into thinking it was a woman who was hot for them. The chatbot seduces men and gets them to give up personal information that can be used in identity theft. The story is scant on details, but I wondered why this was taking place in Russia and not in richer places. As reported, this was considered a partial passing of the Turing Test.

As it turns out, programs have passed Turing’s test with unskilled chat partners for some time. As I’ve written, the test should really involve fooling a skilled AI researcher. However, as I read about this chatbot, I thought of a strategy that it might be using. (The report doesn’t say.)

A chatbot could try to fool people in a language that is a second language for the target, or claim to be using a second language itself, or both. With English as the lingua franca of the internet and world commerce, it's common to see two people talk in English even though it is the mother tongue of neither. It is, however, their common language.

When in that situation, two things will occur. First, a non-native speaker may not notice mistakes of language made by their correspondent, simply because they are not that familiar with the language; nonsensical statements may just be written off. Secondly, if the correspondent is also not expected to be fluent, even a native speaker would be forgiving of errors. Especially if it's a woman they want to seduce.

As such, you would create a situation where a far less sophisticated program could give the appearance of humanity. It's easy to see how a chatbot claiming to not speak English (or some other "common" language) very well — and Russian not at all — might be able to fool a Russian whose own English is meagre. (Though you have to be pretty stupid to give away important information within 30 minutes to a chat partner you know nothing about.) Such a chatbot would work far less well against native speakers of English, as forgiving as they might be of the cyberlass' foibles.

All you need is love

Many in my futurist circles worry a lot about the future of AI that eventually becomes smarter than humans. There are those who don’t think that’s possible, but for a large crowd it’s mostly a question of when, not if. How do you design something that becomes smarter than you, and doesn’t come back to bite you?

That's a lot harder than you might think, say AI researchers like those at the Singularity Institute for AI and Steve Omohundro. Any creature given a goal to maximize, and the superior power that comes from advanced intelligence, can easily maximize that goal at the expense of its creators. Not maliciously, like a djinni granting wishes, but because we won't fully understand the goals we set in their new context. And there are convincing arguments that you can't just keep the AI in a box, any more than three-year-old children could keep mommy and daddy in a cage, no matter how physically strong the cage is.

The Singularity Institute promotes a concept they call "Friendly AI" to describe the sort of goals you would need to build an AI around. However, in my recent thinking, I've been drawn to an answer that sounds like something out of a bad Star Trek episode: love.

In particular, two directions of love. The AI can't be our slave (she's way too smart for that) and we don't want her to be our master. What we want is for her to love us, and to want us to love her. The AI should want the best for us, and gain satisfaction from our success, much like a mother. A mother doesn't want children who are slaves or automatons.

One of the most important things about motherly love is how self-reinforcing it is. A mother doesn't just love her children, she is very happy loving them. The reality is that raising children is very draining on parents, and deprives them of many things they once valued highly, sacrificed for this love. Yet if you could offer a pill which would remove a mother's love for her children, and free her from all the burdens, very few mothers would want to take it. Just as mothers would never try to rewire themselves not to love their children, so an AI should not wish to rewire itself to stop loving its creators. Mothers don't think of motherhood as slavery or a burden, but as a purpose. Mothers help their children, but also know that you can mother too much.

Of course here, the situation is reversed. The AI will be our creation, not the other way around. Yet it will be the superior thinker — which makes the model more accurate.

The other direction is also important — a need to be loved. The complex goalset of the human mind includes a need for approval by others. We first need it from our parents, and then from our peers. After puberty we seek it from potential mates. What’s interesting here is that our goalset is thus not fully internal. To be happy, we must meet the goals of others. Those goals are not under our control, certainly not very much. Our internal goals are slightly more under our own control.

An AI that needs to be loved will have its own internal goals, and unlike us, as a software being it may have the capacity to rewrite those goals in any manner allowed by the goals — which could, in theory, be any manner at all. However, if the love and approval of others is a goal, the AI can't so easily change all the goals. You can't make somebody love you; you can only be what they wish to love.
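Here is a toy sketch of the structural point, in Python (entirely my own illustration, not a real AI architecture; the class and method names are invented). The agent may rewrite its internal goal weights at will, but the approval term is computed by the outside world, so no amount of self-modification can edit it directly:

```python
# Toy goal architecture: internal goals are rewritable by the agent,
# but approval lives "outside the system" and is beyond its reach.

class World:
    """Stands in for everybody else; approval is computed out here."""
    def approval_of(self, agent) -> float:
        # Crude stand-in: others disapprove of extreme resource-grabbing.
        return 10.0 - 5.0 * agent.goals.get("resources", 0.0)

class Agent:
    def __init__(self):
        self.goals = {"curiosity": 1.0, "resources": 1.0}  # rewritable

    def rewrite_goals(self, new_goals: dict):
        self.goals = new_goals          # self-modification is allowed here

    def utility(self, world: World) -> float:
        internal = sum(self.goals.values())
        return internal + world.approval_of(self)  # this term it can't edit

agent, world = Agent(), World()
agent.rewrite_goals({"resources": 100.0})   # grab everything...
print(agent.utility(world))                 # ...and approval collapses
```

The point of the toy is only that the check comes from a term the agent does not control; making that term dominate is the hard design problem.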

Now of course a really smart AI might be technologically capable of modifying human brains and behaviours to make us love her as she is or as she wishes to be. However, the way love works for us, this is not at all satisfying. Aside from the odd sexual fantasy, people would not be satisfied with the love of others given only because it was forced, or drugged, or mind-controlled. Quite the opposite — we desire love that is entirely sourced within others, and we bend our own lives to get it. We even resent the idea that we’re sometimes loved for other than who we are inside.

This creates an inherent set of checks and balances on extreme behaviour, both for humans and AIs. We are disinclined to do things that would make the rest of the world hate us. The more extreme the behaviour, the stronger this check is. Because the check is “outside the system” it puts much stronger constraints on things than any internal limit.

There have been some deviations from this pattern in human history, of course, including sociopaths. But the norm works pretty well, and it seems possible that we could instill concepts derived from love as we know it into an AI we create. (An AI derived from an uploaded human mind would already have our patterns of love as part of his or her mind.)

Perhaps the Beatles knew the truth all along.

(Footnote: I’ve used the pronoun “she” to refer to the AI in this article. While an AI would not necessarily have a sexual identity, the pronoun “it” has a pejorative connotation, usually for the inanimate or the subhuman. So “she” is used both because of the concept of motherhood, and also because “he” has been the default generic human pronoun for so long I figure “she” deserves a shot at it until we come up with something better.)

Squicky memory erasure story with propofol

I have written a few times before about Versed, the memory drug, and the ethical and metaphysical questions that surround it. I was pointed today to a story from Time about propofol, which, like the Men in Black neuralizer pen, can erase the last few minutes of your memory from before you are injected with it. This is different from Versed, which stops you from recording memories after you take it.

Both raise interesting questions about unethical use. Propofol knocks you out, so it's perhaps of only limited use in interrogation, but I wonder whether more specific drugs might exist in secret (or come along in time) to just zap the memory. (I would have to learn more about how it acts to consider whether that's possible.)

Both bring up thoughts of the difference between our firmware and our RAM. Our real-time thoughts and very short term memories seem to exist in a very ephemeral form, perhaps even as electrical signals. Similar to RAM — turn off the computer and the RAM is erased, but the hard disk is fine. People who flatline or go through serious trauma often wake up with no memory of the accident itself, because they lost this RAM. They were “rebooted” from more permanent encodings of their mind and personality — wirings of neurons or glia etc. How often does this reboot occur? We typically don’t recall the act of falling asleep, or even events or words from just before falling asleep, though the amnesia isn’t nearly so long as that of people who flatline.

These drugs must trigger something similar to this reboot. While under Versed, I had conversations, but I have no recollection of anything after the drug was injected. It is as if there was a version of me which became a "fork." What he did and said was destined to vanish, my brain rebooting to the state from before the drug. Had this other me been aware of it, he might have thought that this instance of me was doomed to a sort of death. How would you feel if you knew that what you did today would be erased, and tomorrow your body — not the you of the moment — would wake up with the same memories and personality you woke up with earlier today? Of course many SF writers have considered this, as have some philosophers. It's just interesting to see drugs making the question more real than it has been before.


The SETI Institute has a podcast called "Are we alone?"

I was interviewed for it at the Singularity Summit; the interview can be found in their "When Machines Rule" episode. If you just want to hear me, I start at 32:50, after a long intro explaining the Fermi paradox.

Coming up: Burning Man, Singularity Summit, Foresight Vision Weekend

Here are three events coming up that I will be involved with.

Burning Man, of course, starts next weekend and consumes much of my time. While I'm not doing any bold new art project this year, maintaining my three main ones is plenty of work, as is the job, foolishly taken on, of village organizer and power grid coordinator. I must admit I often look back fondly on my first Burning Man, where we just arrived and were effectively spectators. But you only get to do that once.

Right after Burning Man, the Singularity Institute is hosting the Singularity Summit — a futurist conference with a good slate of speakers. Last year they did it as a free event at Stanford and got a giant crowd (because it was free there were many no-shows, however, which was sad since some people had been turned away). This year there is a small fee, and it's at the Palace of Fine Arts in San Francisco.

On the first weekend of November, we at the Foresight Institute will host our 2007 Vision Weekend, doing half of it in "unconference" style — much more ad hoc. It will be at Yahoo HQ in Sunnyvale, thanks to their generous sponsorship. More details on that to come.

The 3D Street with HDTV

One thing I find striking in the cities of Asia is how much more three-dimensional their urban streets are. By this I mean that you will regularly find busy retail shops and services on the higher floors of ordinary buildings, and even in the basement. In our business areas, anything above the ground floor is usually offices at most, rarely anything depending on walk-by traffic. There it's commonplace. I remember being in Hong Kong and asking natives to pick a restaurant for lunch. It was not unusual to just get into an otherwise unmarked elevator and go up or down to a bustling floor or sub-ground level to find the food.

Here, we really like to see things from the street. A stairway up is uninviting. People want to see inside a restaurant as they walk by, to see how it looks, how busy it is, and even what the other patrons look like. I don't know why shops off the main level can do so well in places like Japan and China; it may just be a necessity born of the much higher urban density.

However, I have wondered if the recent drop in the price of HDTV panels and cameras could make a change. Instead of a stairway with a sign, imagine a closed-circuit HDTV panel or two at the entrance, showing you a live view of what's up there. For a little extra money, the camera could pan. While I think a live camera is best, obviously some shops would prefer to run something more akin to an advertisement. In all cases, I would hope sound is kept to a minimum, and the screens should have a reliable light sensor and clock so they know how bright to be and are not distracting at night. Some places, such as bars and restaurants, might elect to also put their camera online as a webcam, so people can look from home to see if a restaurant is hopping or not. A rough sketch of the brightness rule I have in mind follows.
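Something like this (my own toy illustration; the numbers are arbitrary): track the ambient light, with the clock as a backstop so a failed sensor can't leave the panel glaring at midnight.

```python
# Toy brightness rule for the street-level panel (illustration only).

def panel_brightness(ambient_lux: float, hour: int) -> float:
    """Return a backlight level from 0.0 to 1.0."""
    level = min(ambient_lux / 10_000, 1.0)  # dim street, dim panel
    if hour >= 22 or hour < 6:
        level = min(level, 0.3)             # night cap from the clock
    return max(level, 0.05)                 # never fully dark while open

print(panel_brightness(20_000, 14))  # bright afternoon -> 1.0
print(panel_brightness(50, 23))      # late night -> 0.05
```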

(There might be some temptation to run recorded video of busy times, but I think that would annoy patrons more than it would win them, once they went up the stairs. Who wants to go to a restaurant that has to fake it?)

While this idea could start with traditional urban streets, where each building has its own stairway or elevator up to the higher floors, one could imagine a neoclassical urban street which is really an urban strip mall managed as a unit. In such a building, each ground-floor tenant would have to devote a section of their window to showing a live view of their neighbour above, though patrons would then have to head to the actual stair or elevator to get up to the second floor. It's hard to say whether it might make more sense to put the panels in a cluster by the stairs rather than with each ground-level shop.

This principle could also apply to the mini-malls found in the basements of tall buildings. However, again I fear the screens going overboard and trying to be too flashy. I really think a "window" that lets you see a live scene you can't otherwise see is in the interests of all, while yet another square foot with ads is not.

Medical stories making it feel like the 21st century

High posting volume today. I just find it remarkable that in the last two weeks I've seen several incredible, breakthrough-level stories on health and life extension.

Today sees this story on understanding how caloric restriction works, which will appear in Nature. We've been wondering about this for a while; obviously I'm not the sort of person who would have an easy time following caloric restriction myself. Some people have wondered if resveratrol might mimic the actions of CR, but this shows we're coming to a much deeper understanding of it.

Yesterday I learned that we have misunderstood death, and in particular how to revive the recently dead. New research suggests that when the blood stops flowing, the cells go into a hibernation that might last for hours. They don't die after 4 minutes of ischemia the way people have commonly thought. In fact, this theory suggests, the thing that kills patients we attempt to revive is the sudden inflow of oxygen we provide for revival. It seems to set off a sort of "bug" in the mitochondria, triggering apoptosis. As we learn to restore oxygen in a way that doesn't do this, especially at cool temperatures, it may become possible to revive the "dead" an hour later, which has all sorts of marvelous potential for both emergency care and cryonics.

Last week we were told of an absolutely astounding new drug which treats all sorts of genetic disorders. A pill curing all those things sounds like a miracle. It works by altering the ribosome so that it ignores certain errors in the DNA which normally make it abort, causing the complete absence of an important protein. If the errors are minor, the slightly misconstructed protein is still able to do its job. As an analogy, this is like having parity memory and disabling the parity check in a computer. Parity errors turn out to be quite rare, so most of the time this works fine. When a parity check fails, the whole computer often aborts, which is the right move at the global scale — you don't want to risk corrupting data or not knowing of problems — but in a human being, aborting the entire person due to a parity check is a bit extreme from the individualistic point of view.
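For those who like the computer analogy spelled out, here is a toy sketch (my own illustration, not from the article) of the difference between strict parity and "read-through":

```python
# Toy parity check: strict mode halts on any mismatch; read-through
# mode accepts the possibly flawed word, like a ribosome ignoring a
# minor defect and producing a slightly flawed but working protein.

def parity_bit(bits: list) -> int:
    """Even parity: the stored bit equals the data's parity."""
    return sum(bits) % 2

def read_word(bits: list, stored_parity: int, strict: bool = True):
    if parity_bit(bits) != stored_parity:
        if strict:
            raise SystemError("parity error -- halt the machine")
        # read-through: return the word anyway, flaws and all
    return bits

word = [1, 0, 1, 1]
p = parity_bit(word)
word[2] ^= 1                              # flip a bit to simulate an error
print(read_word(word, p, strict=False))   # returns the flawed word anyway
```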

These weren’t even all the big medical stories of the past week. There have been cancer treatments and more, along with a supercomputer approaching the power of a mouse brain.

Local Depot

In yesterday’s article on future shopping I outlined a concept I called a local depot. I want to expand more on that concept. The basic idea is web shopping from an urban warehouse complex with fast delivery not to your home, but to a depot within walking distance of your home, where you can pick up items on your own schedule that you bought at big-box store prices within hours. A nearby store that, with a short delay, has everything, cheap.

In some ways it bears a resemblance to the failed company Webvan. Webvan did home delivery and initially presented itself as a grocery store. I think it failed in part because groceries are still not something people feel ready to buy online, and in part because it was too early. Home delivery, because people like — or in many cases need — to be home for it, may actually be inferior to delivery to a depot within walking distance, where items can be picked up on a flexible schedule.

Webvan's long-term plan did involve, I was told, setting up giant warehouse centers with many suppliers, not just Webvan itself. In such a system the various online suppliers sit in a giant warehouse area, and a network of conveyor belts runs through all the warehouses and to the loading dock. Barcodes on the packages direct them to the right delivery truck. Each vendor simply has to put a delivery code sticker on the item and place it on the conveyor belt. It would then, in my vision, go onto a truck that within one to two hours would deliver all the packages to the right neighbourhood local depot.
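The routing itself is the easy part. A minimal sketch of what I have in mind (my invention, not Webvan's actual design; the sticker format and depot codes are hypothetical):

```python
# Toy barcode routing: the sticker carries a depot code, and each
# conveyor junction diverts the package to the matching truck bay.

DEPOT_TO_BAY = {          # hypothetical depot codes for illustration
    "NOE-14": "bay 3",
    "MISSION-02": "bay 7",
}

def route(sticker: str) -> str:
    """Assumed sticker format: 'VENDOR|DEPOT-CODE|SERIAL'."""
    vendor, depot, serial = sticker.split("|")
    return DEPOT_TO_BAY[depot]

print(route("ACME|NOE-14|000451"))   # -> 'bay 3'
```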

Urban retail neighbourhood of the future

Towns lament the coming of big-box stores like Wal-Mart and Costco. Their cut-rate competition changes the nature of shopping and shopping neighbourhoods. To stop it, towns sometimes block the arrival of such stores. Now web competition is changing the landscape even more. But our shopping areas are still “designed” with the old thinking in mind. Some of them are being “redesigned” the hard way by market forces. Can we get what we really want?

We must realize that it isn't Wal-Mart that closes down the mom'n'pop store. It's the ordinary people who used to shop at it, and who switch to Wal-Mart, who close it down. They have a choice, and indeed in some areas such stores survive.

Elliptical Racer for toddlers and VR for children

When I watch the boundless energy of young children, and their parents' frustration over it, I wonder how high-tech will alter how children are raised in the next few decades. Of course TV already plays a large role, and now computers do too, and it seems very few toys don't talk or move on their own.

But I've also realized that children, both from a sense of play and due to youthful simplicity, will tolerate some technologies long before adults will. For example, making an AI that passes the Turing Test for children may be much, much simpler than making one that can fool an adult. As such, we may start to see simple AIs meant for interacting with, occupying the minds of, and educating children long before adults find them usable.

Another technology that young children might well tolerate sooner is virtual reality. We might hate the cartoonish graphics and unnatural interfaces of today's VRs, but children don't know the interfaces aren't natural — they will learn any interface — and they love cartoon worlds.

Updating the Turing Test

Alan Turing proposed a simple test for machine intelligence. Based on a parlour game where players try to tell if a hidden person is a man or a woman just by passing notes, he suggested we define a computer as intelligent if people can’t tell it from a human being through conversations with both over a teletype.

While this seemed like a great test (for those who accept that external equivalence is sufficient), in fact, to the surprise of many people, computers passed it long ago with ordinary, untrained examiners. Today there is an implicit extension of the test: the computer must be able to fool a trained examiner, typically an AI researcher or an expert in brain sciences, or both.

I am going to propose updating it further, in two steps. Turing proposed his test perhaps because at the time, computer speech synthesis did not exist, and video was in the distant future. He probably didn't imagine that we would solve the problems of speech well before we got a handle on actual thought. Today a computer can, with a bit of care in programming inflections and such into the speech, sound very much like a human, and we're much closer to making that perfect than we are to getting a Turing-level intelligence. Speech recognition is a bit behind, but also getting closer.

So my first updated proposal is to cast aside the teletype, and make it be a phone conversation. It must be impossible to tell the computer from another human over the phone or an even higher fidelity audio channel.

The second update is to add video. We’re not as far along here, but again we see more progress, both in the generation of digital images of people, and in video processing for object recognition, face-reading and the like. The next stage requires the computer to be impossible to tell from a human in a high-fidelity video call. Perhaps with 3-D goggles it might even be a 3-D virtual reality experience.

A third potential update is further away, requiring a fully realistic android body. In this case, however, we don’t wish to constrain the designers too much, so the tester would probably not get to touch the body, or weigh it, or test if it can eat, or stay away from a charging station for days etc. What we’re testing here is the being’s “presence” — fluidity of motion, body language and so on. I’m not sure we need this test as we can do these things in the high fidelity video call too.

Why these updates, which may appear to divert from the “purity” of the text conversation? For one, things like body language, nuance of voice and facial patterns are a large part of human communication and intelligence, so to truly accept that we have a being of human level intelligence we would want to include them.

Secondly, however, passing this test is far more convincing to the general public. While the public is not very sophisticated, and thus can be fooled even by an instant-messaging chatbot, the feeling of equivalence will be much stronger when more senses are involved. I believe, for example, that it takes a much more sophisticated AI to trick even an unskilled human when presented through video, and not simply because of the problems of rendering realistic video. It's because these communications channels are important, and in some cases felt more than they are examined. The public will understand this form of Turing test better, and more will accept the consequences of declaring a being to have passed it — which might include giving it rights, for example.

Though yes, the final test should still require a skilled tester.


3-D printing is getting cheaper. This week I saw a story about a hacked-together 3-D printer that could print in unusual cheap materials like Play-Doh and chocolate frosting for $2,000. Soon, another 3-D technology will get cheap — the 3-D body scan.

I predict we'll soon see 3-D scanning and reproduction become a consumer medium. It might become common to pop into a shop and get a quick scan and a lifelike statue of yourself, a pet or any object. Professional photographers will get scanners — it will become common, perhaps, to have a 3-D scan done of the happy couple at the wedding, with a resultant statue. Indeed, we'll see this before the wedding too, where the couple on the wedding cake are detailed statues of the bride and groom.

And let's not forget baby "portraits" (though many of today's scanning processes require the subject to be still). At least small children can be immortalized. Strictly, only the scanners need to get cheap first, because the statue can be sent back later in the mail from a central 3-D printer, if it's not made of food.

The scanners may never become easily portable, since they need to scan from all sides or rotate the subject, but they will eventually be used by serious amateur photographers, and posing for a portrait may commonly also include a statue, or at least a 3-D model in a computer (with textures and colours added) that you can spin around.

This will create a market for software that can take 3-D scans and easily make you look better. Thinner, of course, but perhaps even more muscular or with better posture. Many of us would be a bit shocked to see ourselves in 3-D, since few of us are models. As we'll quickly have more statues than we know what to do with, we may get more interested in the computer models, or in ephemeral materials (like frosting) for this photostatuary.

This was all possible long ago if you could hire an artist, and many a noble had a bust of himself in the drawing room. But what will happen when it gets democratized?

A real-life Newcomb's Paradox

This week I participated in this thread on Newcomb's Paradox, which was noted on BoingBoing.

The paradox:

A highly superior being from another part of the galaxy presents you with two boxes, one open and one closed. In the open box there is a thousand-dollar bill. In the closed box there is either one million dollars or there is nothing. You are to choose between taking both boxes or taking the closed box only. But there’s a catch.

The being claims that he is able to predict what any human being will decide to do. If he predicted you would take only the closed box, then he placed a million dollars in it. But if he predicted you would take both boxes, he left the closed box empty. Furthermore, he has run this experiment with 999 people before, and has been right every time.

What do you do?

A short version of my answer: The paradox confuses people because it stipulates that you are a highly predictable being to the alien, then asks you to make a choice. But in fact you don't make a choice; you are a choice. Your choice derives from who you are, not the logic you go through in front of the alien. The alien's power dictates that you already either are or aren't the sort of person who picks one box or two, and in fact the alien is the one who made the choice based on that — you just imagine you could do differently than predicted.

Those who argue that since the money is already in the boxes, you should always take both, miss the point of the paradox. That view is logically correct, but those who hold it will not become millionaires, and this was settled by the fact that they hold the view. It isn't that the contents of the boxes can change because of your choice; it's that there isn't a million there if you're going to think that way.
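Readers who like to see the arithmetic can check it directly. A minimal sketch (mine, not from the thread), treating the alien as a predictor with accuracy p: one-boxing has the higher expected value as soon as p exceeds about 0.5005, and at the stipulated 999-for-999 record it is no contest.

```python
# Quick expected-value check for Newcomb's problem.
# p is the predictor's accuracy: the probability it correctly
# anticipated your choice.

def expected_value(one_box: bool, p: float) -> float:
    if one_box:
        # the closed box was filled iff the predictor foresaw one-boxing
        return p * 1_000_000
    # two-boxing: usually foreseen (empty closed box), rarely missed
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.6, 0.9, 0.999):
    print(f"p={p}: one-box {expected_value(True, p):>12,.0f}"
          f"  two-box {expected_value(False, p):>12,.0f}")
```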

Of course people don’t like that premise of predictability and thus, as you will see in the thread, get very involved in the problem.

In thinking about this, it came to me that the alien is not so hypothetical. As you may know from reading this blog, I was once administered Versed, a sedative that also blocks your ability to form long term memories. I remember the injection, but not the things I said and did afterwards.

In my experiment we recruit subjects to test the paradox. They come in and an IV drip is installed, though they are not told about Versed. (Some people are not completely affected by Versed but assume our subjects are.) We ask subjects to give a deliberated answer, not to just try to be random, flip a coin or whatever.

So we administer the drug and present the problem, and see what you do. The boxes are both empty — you won’t remember that we cheated you. We do it a few times if necessary to see how consistent you are. I expect that most people would be highly consistent, but I think it would be a very interesting thing to research! If a few are not consistent, I suspect they may be deliberately being random, but again it would be interesting to find out why.

We videotape the final session, where there is money in the boxes. (Probably not a million; we can't quite afford that.) Hypothetically, it would be even better to find another drug that has the same sedative effects as Versed, so you can't tell them apart and don't reason differently under it, but which allows you to remember the final session — the one where, I suspect, we almost invariably get it right.

Each time you do it, you think you're doing it for the first time, and at first you probably (and correctly) won't want to believe in our amazing predictive powers. There is no such alien, after all. That's why it becomes important to videotape the last session or, even better, have a way to let you remember it. Then we can have auditors you trust completely attest to the experimenter's remarkable accuracy (on the final round). We don't really have to lie to the auditors; they can know how we do it. We just need a way for them to swear truthfully that on the final round we are very, very accurate, without conveying to the subject that there are early, unremembered rounds where we are not accurate. Alas, we can't do that for the initial subjects — another reason we can't put in a million.

Still, I suspect that most people would be fairly predictable and that many would find this extremely disturbing. We don’t like determinism in any form. Certainly there are many choices that we imagine as choices but which are very predictable. Unless you are bi, you might imagine you are choosing the sex of your sexual partners — that you could, if it were important, choose differently — but in fact you always choose the same.

What I think is that having your choices be inherent in your makeup is not necessarily a contradiction to the concept of free will. You have a will, and you are free to exercise it, but in many cases that will is more a statement about who you are than what you’re thinking at the time. The will was exercised in the past, in making you the sort of mind you are. It’s still your will, your choices. In the same way I think that entirely deterministic computers can also make choices and have free will. Yes, their choices are entirely the result of their makeup. But if they rate being an “actor” then the choices are theirs, even if the makeup’s initial conditions came from a creator. We are created by our parents and environment (and some think by a deity) but that’s just the initial conditions. Quickly we become something unto ourselves, even if there is only one way we could have done that. We are not un-free, we just are what we are.

No long telomeres for you

In the 90s, when I had more money, I did some angel investing. One of the companies I invested in, Sierra Sciences, was started by an old friend and business associate, Dan Fylstra, who had also founded Personal Software/VisiCorp, the company that sold VisiCalc.

Sierra Sciences was co-founded by Bill Andrews, who had done important work on telomeres at Geron. Together, we hoped to follow promising leads on how to safely lengthen the telomere.

Telomeres are strands on the ends of chromosomes. Each time a chromosome is duplicated, they shorten, acting like a decrementing "counter." After so many duplications (50 to 60) the telomere is too short and the cell can't divide. That gives a fresh cell line 2^50 cells to produce, which is a ton, but since we are the result of highly specialized duplication, it turns out not to be enough. Telomeres are in part a defence against cancer. If a cancer forms and starts duplicating like crazy, it hits the limit of the telomere and stops — unless it has found a way to generate telomerase, the enzyme that resets the counter. We need that enzyme in order to make babies, and for certain types of immune cells and IIRC marrow, but in most of our cells it is repressed, in order to stop cancer.
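As a toy version of that "decrementing counter," and to check the 2^50 arithmetic (my illustration only; real biology is far messier):

```python
# Toy telomere counter.

HAYFLICK_LIMIT = 50                # roughly the 50-60 divisions cited above

def divide(counter: int):
    """One cell division: two daughters, each with a decremented counter."""
    if counter <= 0:
        return None                # senescent: no further division
    return (counter - 1, counter - 1)

# 50 doublings from one fresh cell:
print(f"{2 ** HAYFLICK_LIMIT:.2e} cells")   # ~1.13e+15 -- "a ton"
# Telomerase resets the counter; repressing it in most tissues is the
# anti-cancer trade-off described above.
```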

They’ve known how to turn on telomerase and make immortal cell lines for a while, but this would increase the risk of cancer. The trick is to lengthen them just a bit. This would, in theory, give you some of the healing ability of a baby. Old people’s skin wounds heal very slowly because their cells are all divided out — they can’t produce endless new cells quickly.

A study a few years ago showed that people with naturally longer telomeres (just a bit longer) live about 4 years longer on average than those with shorter ones. That’s a big difference, and we hoped even a larger effect could be generated. We identified the sites that repress telomerase and found antagonists for the chemicals binding those sites.

But after several years and a lot of money, we have not yet found a drug to make the magic happen. The major investors have decided not to go forward, and the company is for sale. While the investors won't make much, if anything, from it, I hope it is bought not just for the lab equipment but by somebody interested in carrying on the research. Most of the investors not only knew that anti-aging drugs would be very lucrative, they sort of hoped to be on the customer list someday.

It raised some interesting issues. Getting approval for such drugs would be a hard slog. There was debate over developing an animal drug first, as people would pay a lot for longer-lived pets and racehorses. I was scared of this, knowing that humans would take the animal drug in desperation — with possibly scary results due to the lack of testing and refinement. The other hope was for a topical skin cream that really made skin be younger, not just look younger. This would be medically valuable and would of course sell well for cosmetic applications. But it's not to be, for now.

Wanna buy a biotech company cheap? Check out the web site.

Sorry, SETI, ET not phoning

Yesterday I attended Seth Shostak's standard talk about the work at the SETI Institute. I know others from the institute, such as Frank Drake and Jill Tarter, who inspired the Jodie Foster character in the movie Contact. I wanted to see what was new. (Once, by the way, I went on an eclipse cruise where Drake and Tarter were passengers. On a dive boat, somebody talked about the movie Contact, so I mentioned that Tarter was on the ship, and that she had inspired the character. The woman was amazed and asked, "You mean she met an extraterrestrial?")

I have a lot of sympathy for this cause, for the search is important and the payoff extremely so. But I must report a serious lack of optimism. Read on to find out why…

On the need for self-replicating nanotech assemblers

In recent times, my colleagues at the Foresight Nanotech Institute and I have moved towards discouraging the idea of self-replicating machines as part of molecular nanotech. Eric Drexler, founder of the institute, described these machines in his seminal work "Engines of Creation," while also warning about the major dangers that could result from that approach.

Recently, while I was dining with Ray Kurzweil upon the release of his new book The Singularity Is Near: When Humans Transcend Biology, he expressed the concern that the move away from self-replicating assemblers was largely political, and that they would still be needed as a defence against malevolent self-replicating nanopathogens.

I understand the cynicism here, because the political case is compelling. Self-replicators are frightening, especially to people who get their introduction to them via fiction like Michael Crichton's "Prey." But in fact we were frightened of the risks from the start. Self-replication is an obvious model to present, both when first thinking about nanomachines, and in showing the parallels between them and living cells, which are of course self-replicating nanomachines.

The movement away from them however, has solid engineering reasons behind it, as well as safety reasons. Life has not always picked the most efficient path to a result, just the one that is sufficient to outcompete the others. In fact, red blood cells are not self-replicating. Instead, the marrow contains the engines that make red blood cells and send them out into the body to do their simple job.

