Submitted by brad on Sun, 2005-09-25 12:05.
In recent times, my colleagues at the Foresight Nanotech Institute and I have moved towards discouraging the idea of self-replicating machines as part of molecular nanotechnology. Eric Drexler, founder of the institute, described such machines in his seminal work “Engines of Creation,” while also warning about the major dangers that could result from that approach.
Recently I dined with Ray Kurzweil on the release of his new book The Singularity Is Near: When Humans Transcend Biology. He expressed the concern that the move away from self-replicating assemblers was largely political, and that they would still be needed as a defence against malevolent self-replicating nanopathogens.
I understand the cynicism here, because the political case is compelling. Self-replicators are frightening, especially to people who get their introduction to them via fiction like Michael Crichton’s “Prey.” But in fact we were frightened of the risks from the start. Self-replication is an obvious model to present, both when first thinking about nanomachines and when showing the parallels between them and living cells, which are of course self-replicating nanomachines.
The movement away from them, however, has solid engineering reasons behind it, as well as safety reasons. Life has not always picked the most efficient path to a result, just one sufficient to outcompete the others. In fact, red blood cells are not self-replicating. Instead, the marrow contains the engines that make red blood cells and send them out into the body to do their simple job.
Submitted by brad on Mon, 2005-05-30 17:36.
There’s a lot of talk about the coming threat of Avian H5N1 flu, how it might kill many millions, far beyond the 1918 flu and others, because of how much people travel in the modern world. Others worry about bioterrorism.
Plans are underway to deal with it, but are they truly considering the tools the modern world has that it didn’t have in 1918, tools which might make up for our added risks? We have the internet, and a lot of dot-coms, both living and dead, created all sorts of interesting tools for living in the world without having to leave your house.
In the event of an outbreak, we’ll have limited vaccine available, if there’s much at all. Everybody will want it, and society will have to prioritize who gets what. While some choices are obvious — medical staff and other emergency crews — there may be other ideas worth considering.
Today, a significant fraction of the population can work from home, with phone, computer and internet. The economy need not shut down just because people must avoid congregating. Plans should be made, even at companies that prefer not to allow telecommuting, to be able to switch to it in an emergency.
Schools might have to close, but education need not stop. We can easily devote TV channels in each area to the basic curriculum for each grade. Individual schools can modify that for students who have internet access, or even just a DVD player or VCR. For example, teachers could teach their class to a camera, and computers can quickly burn DVDs for distribution. Students can watch the DVDs, pause them and phone questions to the teacher. (Ideally, though, most students would be able to make use of the live lectures on TV, and could phone their particular teacher, or chat online, to ask questions.) Parents, stuck at home, would also help their children more.
Delivery people (USPS, UPS, etc.) would be high on the list for vaccination, to keep goods flowing to people in their homes. You can of course already buy almost anything online. Systems like Webvan, for efficient grocery ordering and delivery, could be brought back up, with extra vaccinated delivery drivers making rounds of every street.
Of course not everybody has a computer, but that need not be a problem. With so many people at home, volunteers who did have broadband would come forward. They would take calls from those without computers and do their computer tasks for them, making sure they got in their orders for food and other supplies. Of course all food handlers would need to be vaccinated and use more sterile procedures.
Submitted by brad on Mon, 2005-03-21 07:26.
Here John Dunn suggests sending an AI to negotiate with any aliens we discover via SETI.
This raises an interesting question. If SETI worked, and we got a signal from an alien intelligence, and the signal was understood to be a description of a computer architecture and then a big, long, and undecipherably complex computer program -- possibly an AI -- would we dare run it?
Oh, it would be so tempting to run it. Contact with an alien species, possible untold wealth of knowledge, solutions to all our problems and more. But if it can contain those things, it's probably smarter than us. And being alien, it has its own goals, which are alien to ours.
AI pundit Eliezer Yudkowsky spends much of his time warning about the dangers of even a human-designed AI, and has developed a convincing argument that it's next to impossible to keep something much smarter than you locked up in a box no matter how much you resolve to do so. It's probable we couldn't keep the alien AI in a box either as it does a superhumanly good job of convincing us just what wonderful things it could do for humanity (or just the people with keys to the box) if released.
Indeed, a good strategy for a growth-oriented AI creature would be to broadcast itself out at lightspeed, in the hope that other creatures would run it, and it could then use their resources to build more computers on which to run itself and transmitters with which to transmit itself. It might even do that at the same time as providing wonderful benefits for the host culture, or of course it could toss them by the wayside as it saw fit.
Remind you of Pandora? In Contact by Carl Sagan, the aliens send plans for an FTL transporter, which presumably is a physical device with no AI, so the humans are able to build it. They debate building even that, worrying whether it's a weapon, but the debate over an AI would be much fiercer, and would probably end in the negative.
Submitted by brad on Mon, 2004-09-20 09:23.
Vernor Vinge (Vin-GEE)(whose 1992 novel "A Fire Upon the Deep" I published in hypertext form) coined the term "singularity" to refer to a future social and technological shift so profound and vast that those who come before it are actually incapable of understanding it.
This is an important concept, one that plays out in his novels and the writings of many others, and it needs a term. But this term has ended up not being ideal.
Scientists already have a meaning for the word, of course, but it is more specific. It refers to a point where a function is undefined. For example, 1/x has a singularity at 0, since 1/0 is undefined. More to the point, 1/x also grows without bound as you approach 0. These concepts of rapid growth, and the inability to extrapolate past a singularity, inspired the metaphor Vinge was trying to convey.
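To make the 1/x example concrete, here is a throwaway numeric sketch (plain Python, my own illustration, not anything from Vinge):

```python
# 1/x is undefined at 0 itself, and its value blows up toward
# infinity as x approaches 0 from the positive side.
values = [(x, 1 / x) for x in (1.0, 0.1, 0.01, 0.001)]
for x, y in values:
    print(f"1/{x} = {y}")
```

Each step closer to zero multiplies the output tenfold, and no amount of tabulating values on one side tells you what happens at the singular point itself.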
Other forms of singularity include any sharp corner in a function (where the derivative is undefined) and regions within a black hole (where our normal equations of physics are undefined). However, the non-scientific public does not understand these mathematical meanings, and thus doesn't quickly grasp even the metaphor.
An example of such a metaphorical singularity would be the creation of language. Pre-verbal proto-humans simply can't understand the beauty of poetry at all, no matter how much time you had to explain it.
The "Vinge" singularity does not involve a discontinuity or undefined point in history. Instead, the path is continuous. You can't easily point to a specific second and say "There is the singularity where language capable of Shakespeare arose."
So the term is wrong for those who understand the mathematical meaning and meaningless to those who don't. We should seek a better term.
I welcome suggestions from readers. I think the important thing to convey is perhaps the metaphor of the "blind corner" -- a sharp, but not impossibly sharp turn which you can't see around until you get there. The ideal metaphor should also convey the acceleration of change which causes the phenomenon, and this does not. That is more akin to flying off a cliff, or the planes that turn into submarines in the new "Sky Captain" movie.
Submitted by brad on Wed, 2004-07-21 10:18.
Back in June I did a short experiment in nomading: a trip that was just a change of home, not a vacation. My sister was going to Rome to shoot a war documentary for a couple of weeks, so we flew to Toronto to live in her house while she was away.
She had the main things I needed: a house, a car, and of course a DSL connection. But could I get my home environment? I brought a wireless access point, and the ATA for my Vonage phone account. The Vonage account has both a Silicon Valley number and a Toronto number, so it moved quite easily. People could still call me on the regular numbers, and I could make calls without concern for the cost. I borrowed a local cell phone, since my efforts to get my own spare phone unlocked and with a local NAM didn't work out.
Also vital for me was a big screen. I'm used to a very nice 1600 x 1200 21" screen, and that's not portable. I was able to borrow a 19". My servers at home kept running, and in fact I did a lot of things on them remotely from 2,500 miles away. At one point the DSL flaked out and I had to find a friend to come in and reboot it, but otherwise that was fine.
Toronto is a town I've lived in, so this is cheating, but I haven't really lived there since I was young, so it's halfway to a foreign town in terms of knowing my way around. At your own base, you learn a lot about your area. You learn all the traffic patterns, and you know where all the shops are that have the things you want at the prices you like. It takes a lot of time to duplicate that.
I've also learned that as I've gotten older I've gotten too dependent on stuff. I think back to the first time I moved cross country, putting everything in the back of my hatchback and feeling great. The last time, I used 20 linear feet of transport truck.
Submitted by brad on Thu, 2004-06-17 03:55.
Emory University scientists, working with a species of vole that is one of the extremely rare mammals to be genuinely monogamous, found a gene that boosts the effect of vasopressin, one of the love hormones. Inserting this gene into other voles made them more socially monogamous.
I had heard of this before, and there has been science fiction about couples taking love drugs, but this story made me wonder about how people might try to alter the concept of marriage.
Imagine there were a gene therapy that would improve the chances you would remain in love with the one you currently love. Might couples want to take it when getting married? (Or, more practically, after a few years of test marriage and before having children.)
And more to the point, if this became popular, might there arise pressure to do so, even for those who don't particularly want it?
One can imagine injecting the virus to deliver the gene at the wedding, truly sealing the bonds of love. (It's unlikely that the romantic idea of transmitting the virus in the first marital kiss would be a good idea.)
But what if it starts coming down to "Honey, why won't you take the gene therapy? Don't you love me enough? I'll take it for you!"
How will we answer that?
Submitted by brad on Thu, 2004-04-29 12:54.
In 1965, Gordon Moore, later a co-founder of Intel, published a paper suggesting that the number of transistors on a chip would double every year. Later the figure was revised to 18 months, which held true in part due to marketing pressure to meet the law.
Recently, Intel revised the law to set the time at two years.
So this suggests a new law, that the time period in Moore's Law doubles about every 40 years.
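For fun, the arithmetic behind that last line can be sketched in a few lines of Python. The endpoints below (12 months in 1965, 24 months around 2005) are rounded assumptions of mine based on the dates above:

```python
import math

# Period of Moore's Law: 12 months in the 1965 paper,
# 24 months after Intel's recent revision (call it ~2005).
start_period, end_period = 12, 24   # months
years_elapsed = 2005 - 1965

# How long does the period itself take to double?
doublings = math.log2(end_period / start_period)  # = 1.0, one doubling
doubling_time = years_elapsed / doublings
print(f"Moore's Law period doubles about every {doubling_time:.0f} years")
```

One doubling over roughly four decades gives the "about every 40 years" figure, with the caveat that two data points hardly make a law.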