On the need for self-replicating nanotech assemblers

In recent times, my colleagues at the Foresight Nanotech Institute and I have moved towards discouraging the idea of self-replicating machines as part of molecular nanotech. Eric Drexler, founder of the institute, described these machines in his seminal work "Engines of Creation," while also warning about the major dangers that could result from that approach.

Recently, when I dined with Ray Kurzweil on the release of his new book, The Singularity Is Near: When Humans Transcend Biology, he expressed the concern that the move away from self-replicating assemblers was largely political, and that they would still be needed as a defence against malevolent self-replicating nanopathogens.

I understand the cynicism here, because the political case is compelling. Self-replicators are frightening, especially to people who get their introduction to them via fiction like Michael Crichton's "Prey." But in fact we were frightened of the risks from the start. Self-replication is an obvious model to present, both when first thinking about nanomachines and when showing the parallels between them and living cells, which are of course self-replicating nanomachines.

The movement away from them, however, has solid engineering reasons behind it, as well as safety reasons. Life has not always picked the most efficient path to a result, just the one that is sufficient to outcompete the others. In fact, red blood cells are not self-replicating. Instead, the marrow contains the engines that make red blood cells and send them out into the body to do their simple job.

For a given nanomachine, especially a self-replicating one, many components are needed. It must be able to obtain, possibly store, and use the fuel to power its operations. It must have the "tools" to perform its functions, such as detectors that look for proteins, sites to release chemicals, etc. A self-replicator must also have all the tools needed to build a copy of itself, and it must carry a copy of its own design (its genetic code). In natural life, each cell keeps a copy of the code of the entire organism, along with instructions on how to interpret just the right parts of it to specialize. It's not that efficient.

A special-purpose nanomachine, like the red blood cell, does not have to contain either the considerably more complex mechanisms needed to make a copy of itself or the design code. It may even be designed to live for a short time on stored fuel and then die, eliminating the need for a means to capture fuel. It can be simple and easy to make.

Let's say we need a machine, a nanophage, to attack some sort of new problem (biological or nano). Like an immune system, we would identify the new pathogen and prepare a design for a phage to seek out and neutralize it. Then we would want lots of these phages, and fast.

When building a self-replicator, the duplication cycle will consist of T-w, the time to build the working parts, and T-r, the time to build the self-replication parts. There will be some overlap in some cases, but by and large the total time to build the self-replicator will be T-make = T-w + T-r. (I'm ignoring what you can parallelize for the moment.) The big virtue of self-replicators is that they can grow exponentially to address a problem, but this still requires T-make per generation, and the time to make a billion replicators from one is still 30 iterations of T-make. If T-make is an hour, that's 30 hours.
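As a rough check of that arithmetic, here is a minimal sketch in Python. The 30-minute figures for T-w and T-r and the one-billion target are the illustrative numbers used in this post, not measured values.

```python
import math

# Assumed illustrative numbers: 30 minutes each for the working parts and
# the replication machinery, so one full duplication cycle takes an hour.
T_w = 0.5                    # hours to build the working ("phage") parts
T_r = 0.5                    # hours to build the self-replication machinery
T_make = T_w + T_r           # one full duplication cycle

target = 1_000_000_000                        # phages wanted
generations = math.ceil(math.log2(target))    # each cycle doubles the population

print(generations)            # 30 doublings to pass a billion
print(generations * T_make)   # 30.0 hours at one hour per cycle
```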

The alternative is to have an expandable factory, typically a set of manipulators on a movable substrate. The factory, like bone marrow, is around all the time, with billions of constructors in it. So it can make those first billion in just half an hour (that's quite a win against a self-replicating enemy). On the other hand, if you need many orders of magnitude more phages than you can make in one factory cycle, the self-replicators will eventually go way past you. (Note: I'm assuming T-w and T-r are about even at 30 minutes each, but in fact T-w is probably much smaller.)
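To make the comparison concrete, here is a rough toy model under those same assumed numbers: a fixed factory of one billion constructors, each turning out a phage every T-w = 0.5 hours, against a lone self-replicator whose population doubles every T-make = 1 hour.

```python
FACTORY_CONSTRUCTORS = 1_000_000_000   # assumed size of the standing factory
T_w, T_make = 0.5, 1.0                 # hours, as above

def factory_phages(hours: float) -> float:
    # Constant output: one phage per constructor every T_w.
    return FACTORY_CONSTRUCTORS * (hours / T_w)

def replicator_phages(hours: float) -> float:
    # Exponential growth from a single seed, doubling every T_make.
    return 2 ** (hours / T_make)

for h in (0.5, 1, 10, 30, 36, 37):
    print(h, f"{factory_phages(h):.3g}", f"{replicator_phages(h):.3g}")

# The factory delivers its first billion in half an hour; in this toy model the
# replicators only pass the factory's running total somewhere after hour 36.
```

That late crossover is what "eventually go way past you" means: a fixed factory is a linear producer racing an exponential one.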

In that emergency circumstance, you need a factory that can expand its capacity. In this case, the billion constructors make another billion constructors (in less than T-r, because a constructor, relying on its substrate and its external master control computer, is simpler than a full self-replication engine). The number of constructors in the factory can thus grow exponentially, with a shorter generation time. However, the factory is not self-replicating -- the control mechanisms and the design code to build both the phage and the factory are not duplicated, nor (except rarely) are some of the macro-scale components.
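Here is the same kind of sketch for the expandable factory. The constructor generation time T-c = 0.4 hours is my own illustrative assumption; the post only says it is shorter than T-r.

```python
T_c = 0.4       # hours for a constructor to build another constructor (assumed)
T_make = 1.0    # hours for a full self-replicator to copy itself (assumed)

CONSTRUCTORS_START = 1_000_000_000   # the standing factory
REPLICATORS_START = 1                # a lone self-replicating phage

def grown(start: float, gen_time: float, hours: float) -> float:
    # Pure capacity growth: the population doubles every gen_time hours.
    return start * 2 ** (hours / gen_time)

for hours in (1, 5, 10):
    print(hours,
          f"{grown(CONSTRUCTORS_START, T_c, hours):.3g}",
          f"{grown(REPLICATORS_START, T_make, hours):.3g}")

# Starting a billion ahead and doubling faster, the factory's capacity never
# falls behind the lone self-replicator in this model.
```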

The factory has economies of scale the individual self-replicator can't have, and so it is able to produce phages far faster and cheaper. The only issue is travel time, as the phages must now travel to the battle zone, while the self-replicators are duplicating right there. But this is a constant, and hopefully a minor one. Factories would be scattered around the body and the environment to minimize it.

Also note:

  • Though tiny, factories are large enough to use radio, which nanomachines are not, being much smaller than radio wavelengths. They can thus coordinate quickly, and adapt immediately to information learned in the battle. If a phage discovers a better strategy and can get back to any factory with the news, suddenly the entire system can move to the new strategy.

  • At the factory, you also have pre-planned supplies of raw materials and fuel which may not be present for machines you want to replicate "in the wild."

  • Factory constructors are actually more specialized, not machines able to create an entire phage or other constructor on their own. Rather, as on an assembly line, either the under-construction phages move past the constructors or the constructors move over them, bringing all the tools and materials needed in sequence.

  • Of course there would be super-factories able to make factories, but these would remain under human control, and not autonomous self-replicators.

  • The factory-made phages immediately begin the attack. Self-replicators may not be able to attack while they are busy replicating, and if they can, that's extra complexity. This dampens the exponential growth a bit.

The factories would be sized to deal with typical threats without growing the factory -- growth is an emergency strategy. In these cases, the response of the factories (massive numbers of phages before even the first generation of a single self-replicator) is vastly, vastly superior in dealing with an initial small incursion.

There are hybrid strategies. For example, generalized self-replicator components could be pre-manufactured so they can be inserted quickly into self-replicating phages, cutting down the duplication time as long as the supply of these parts holds up.
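A toy model of that hybrid, assuming (again, purely for illustration) a stockpile of one million pre-built replication-part kits, each of which cuts one duplication cycle down to roughly T-w:

```python
T_w, T_r = 0.5, 0.5         # hours, as before
STOCKPILE = 1_000_000       # prefab replication-part kits on hand (assumed)
TARGET = 1_000_000_000      # phages wanted

population, kits_left, hours = 1, STOCKPILE, 0.0
while population < TARGET:
    # Each phage in this generation builds one copy; a copy built from a
    # prefab kit skips the T_r step.  Treat a generation as taking as long
    # as its slowest member.
    kits_used = min(population, kits_left)
    kits_left -= kits_used
    hours += T_w if kits_used == population else T_w + T_r
    population *= 2

print(hours)   # about 20.5 h here, versus 30 h with no prefab kits at all
```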

Comments

Actually, I think that a more compelling argument against nanoreplicators (I'm no expert) is space, not time. If you build a billion, you overcome the time problem, but you replicate the factory one billion times.

A separate factory should allow a more efficient nanobot.

Furthermore, a separate factory should enable a better (and larger) factory that is also easier to design, given today's technology.

So I agree that there are good reasons to avoid nanoreplication, but argue that the problem is ease of design second, and wasted resources through unnecessary duplication first.

OTOH, you could design different generations of nanobots, with the early versions containing factories, and later generations consisting purely of the lean, mean, efficient machines.

The idea that a nano replicator will be under the control of a larger system has other interesting facets.

To be useful, we effectively need a nano replicator to be both a "universal constructor" (capable of making any molecule with its appendages) and a "universal computer" (capable of doing the thinking necessary to build something).

Unfortunately it's very difficult to build something so small which can think as well as do tasks!

Some models call for the thinking part to be a single separate macroscopic computer which sends commands to the machines that are slaved to it. Eventually the computer would need to control a very large number of replicators indeed, but by then a new control centre could possibly be built. Humans would be in control of the control centres.

This avoids the problem of the "stray infecting pathogen," because such a machine would be helpless without the control centre. Of course, replicators such as viruses do exist, so we can't rule out an entirely self-contained machine, but I suspect it would be a lot less versatile, larger, bulkier, and slower to replicate than machines which don't need to think for themselves. The smaller, faster dumb robots would probably wipe them out.
