HBO released a new version of “Westworld” based on the old movie about a robot-based western theme park. The show hasn’t excited me yet — it repeats many of the old tropes on robots/AI becoming aware — but I’m interested in the same thing the original talked about — simulated experiences for entertainment.
The new show misses what’s changed since the original. I think it’s more likely they will build a world like this with a combination of VR, AI and specialty remotely controlled actuators rather than with independent self-contained robots.
One can understand the appeal of presenting the simulation in a mostly real environment. But the advantages of the VR experience are many. With the top-quality, retinal-resolution light-field VR we hope to see in the future, the big advantage is that you don’t need to make the physical things look real. You will have synthetic bodies, but they only have to feel right, and only just where you touch them. They don’t have to look right. For instance, they can have cables coming out of them connecting them to external computing and power. You don’t see the cables, nor the other manipulators that are keeping the cables out of your way (even briefly unplugging them) as you and they move.
This is important to get data to the devices — they are not robots as their control logic is elsewhere, though we will call them robots — but even more important for power. Perhaps the most science fictional thing about most TV robots is that they can run for days on internal power. That’s actually very hard.
The VR has to be much better than we have today, but it’s not as much of a leap as the robots in the show. It needs to be at full retinal resolution (though only in the spot your eyes are looking) and it needs to be able to simulate the “light field” which means making the light from different distances converge correctly so you focus your eyes at those distances. It has to be lightweight enough that you forget you have it on. It has to have an amazing frame-rate and accuracy, and we are years from that. It would be nice if it were also untethered, but the option is also open for a tether which is suspended from the ceiling and constantly moved by manipulators so you never feel its weight or encounter it with your arms. (That might include short disconnections.) However, a tracking laser combined with wireless power could also do the trick to give us full bandwidth and full power without weight.
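To get a feel for why “full retinal resolution” is such a leap, here is a back-of-envelope calculation. The figures are my assumptions, not the post’s: roughly 60 pixels per degree matches 20/20 acuity, and a near-human field of view is on the order of 200 by 135 degrees.

```python
# Back-of-envelope: pixels needed for "retinal resolution" across the
# whole field of view. All figures are illustrative assumptions.
PIXELS_PER_DEGREE = 60   # approx. 20/20 visual acuity
H_FOV_DEG = 200          # assumed horizontal field of view
V_FOV_DEG = 135          # assumed vertical field of view

h_px = PIXELS_PER_DEGREE * H_FOV_DEG   # 12,000
v_px = PIXELS_PER_DEGREE * V_FOV_DEG   # 8,100
total = h_px * v_px                    # 97,200,000 pixels per eye

print(f"{h_px} x {v_px} = {total / 1e6:.1f} MP per eye")
```

The result, near 100 megapixels per eye, is why the post’s parenthetical matters: rendering that everywhere is hopeless, but rendering it “only in the spot your eyes are looking” (foveated rendering) cuts the requirement enormously.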
It’s probably not possible to let you touch the area around your eyes and not feel a headset, but add a little SF magic and it might be reduced to feeling like a pair of glasses.
The advantages of this are huge:
You don’t have to make anything look realistic; you just need to be able to render it in VR.
You don’t even have to build things that nobody will touch, or go to, including most backgrounds and scenery.
You don’t even need to keep rooms around, if you can quickly have machines place the props as needed before a player enters the room.
In many cases, instead of some physical objects, a very fast manipulator might be able to quickly place textures and surfaces in your path just as you are about to touch them. For example, imagine if, instead of a wall, a machine with a few squares of wall surface quickly holds one out anywhere you’re about to touch. Instead of a door, there is just a robot arm holding a handle that moves as you push and turn it.
Proven tricks in VR can get people to turn around without realizing it, letting you create vast virtual spaces in small physical ones. The spaces will be designed to match what the technology can do, of course.
You will also control the audio and cancel sounds, so your behind-the-scenes manipulations don’t need to be fully silent.
You do it all with central computers, you don’t try to fit it all inside a robot.
You can change it all up any time.
In some cases, you need the player to “play along” and remember not to do things that would break the illusion. Don’t try to run into that wall or swing from that light fixture. Most people would play along.
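One of the “proven tricks” mentioned above for turning people around without their noticing is what VR researchers call redirected walking (my label; the post doesn’t name it). A minimal sketch of its rotation-gain idea follows; the gain value and the claim that gains in this range go unnoticed are assumptions, not figures from the post.

```python
# Sketch of a "rotation gain" from redirected-walking research: the
# virtual scene rotates slightly faster than the user's real head turn,
# so walking a "straight" virtual line bends the real path back into
# the physical room.

def virtual_yaw(real_yaw_deg: float, gain: float = 1.25) -> float:
    """Map a real head rotation to an amplified virtual rotation.

    Gains modestly above 1.0 are commonly reported as unnoticeable;
    the exact threshold used here is an assumption.
    """
    return real_yaw_deg * gain

# A real 288-degree turn reads as a full 360 in the headset:
print(virtual_yaw(288.0))  # 360.0
```

Chained over many turns, this is how a vast virtual space can be folded into a small physical one, as the post suggests.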
For a lot more money, you might some day be able to do something more like Westworld. That has its advantages too:
Of course, the player is not wearing any gear, which will improve the reality of the experience. They can touch their faces and ears.
Superb rendering and matching are not needed, nor the light field or anything else. You just need your robots to get past the uncanny valley.
You can use real settings (like a remote landscape for a western) though you may have a few anachronisms. (Planes flying overhead, houses in the distance.)
The same transmitted-power and laser tricks could work for the robots, but the power needed to run a robotic horse is a great deal more than the power needed to run a headset. All this must be kept fully hidden.
The latter experience will be made too, but it will be more static and cost a lot more money.
Yes, there will be sex
Warning: We’re going to get a bit squicky here for some folks.
Westworld is on HBO, so of course there is sex, though mostly just a more advanced vision of the classic sex-robot idea. I think that VR will change sex much sooner. In fact, there is already a small VR porn industry, and even some primitive haptic devices which tie into what’s going on in the porn. I have not tried them, and I don’t imagine they are very sophisticated as yet, but that will change. Indeed, it will change to the point where porn of this sort becomes a substitute for prostitution, with some strong advantages over the real thing (including, of course, the questions of legality and exploitation of humans).
While a lot of press attributed the idea to him, Musk is actually restating almost exactly the well-known thesis of Nick Bostrom on this topic, which has spawned much debate (some of which can be seen at the site linked). A short précis of the thesis is as follows:
If you accept that the eventual progression of our work in creating digital (or “simulated”) worlds is to make ones that match our reality, then you probably accept that once we can do this, we will do it a whole lot, and that eventually there will be very large numbers of created digital worlds, many based on our own. If that’s true, then the probability that any particular world (including this one, of course) is the original one is vanishingly small.
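The counting argument above can be reduced to one line of arithmetic. The world count below is an arbitrary illustrative assumption, not a figure from Bostrom or the post:

```python
# Bostrom-style counting argument: if the worlds eventually include one
# root plus a great many synthetic ones, a randomly chosen world is
# almost certainly synthetic.

def p_root(num_synthetic_worlds: int) -> float:
    """Probability that a randomly chosen world is the root one."""
    return 1.0 / (1 + num_synthetic_worlds)

print(p_root(999_999))  # 1e-06: one chance in a million of being root
```

The whole force of the thesis lives in the assumption that `num_synthetic_worlds` eventually gets very large; reject that premise and the probability collapses back toward 1.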
Like many, I find the argument interesting, though not quite so compelling, as it contains some logical fallacies. For one, inhabitants of the “root” universe would find the argument equally compelling, yet for them it is clearly false.
I also oppose the term “simulation.” For far too many, “simulated” means “not real” or “less real.” This world is clearly “real” even if it is synthetic and based on computation. If you accept the truth of “I think therefore I am,” then you are thinking, not engaging in a simulation of thinking. (Just as AlphaGo doesn’t simulate playing Go, it plays Go.)
Better terms include “computational” and “synthetic,” or other synonyms like “digital,” “emulated,” or “artificial.”
Leaving aside the debate over the merits of the argument, let’s assume it’s true for the moment. The biggest consequence of a synthetic world is that it is a created one. As in, “there is a creator/god” in the sense of a being who created this universe and who is in some limited way omnipotent over it and in another limited way omniscient about it. I say a limited way, because this “god” is perhaps a programmer named Martha who has a few hundred digital Earths running in her dorm room. A being perhaps (but not surely) exactly like us in her world, but with the potential ability to observe and change anything about this one.
That is a theistic view, though quite unlike typical theist doctrines. (It bears a small and bizarre similarity to Mormon theology which teaches that our god was once an ordinary being on another world who was rewarded with his own new world to be god of.)
From what we can observe, Martha doesn’t interfere overtly with this world. As such, the first conclusion is that even if you believe in this, it should not change very much about how you live your life. If you have no shot at interaction with the “parent” universe, and there is always the chance this whole thesis is false, you should go about being you as though you felt you lived in the root or “first” universe — what you might incorrectly call the “real” one.
There are some changes that are justified if you believe this, though. They are grand philosophical changes, but some apply to Elon Musk himself.
You see, Elon has made it his prime life goal to get humanity off the Earth. To stop us from being a “one planet species” which would be wiped out if something catastrophic happens here. History shows that bad things have happened naturally (like asteroid strikes) and more bad things could happen due to the works of humanity, like killer diseases or nuclear winters. As such, Elon’s goal of getting a self-supporting colony on Mars is a grand one, well worthy of being a prime life-goal for a world-shaker.
But it’s taken down a peg if you accept the synthetic world hypothesis. Now, you conclude it’s very likely that this is very much not the only cradle of humanity. That there are probably millions or billions of them. That even this one quite probably has backups taken every so often, so that even if we wipe ourselves out, all can be preserved and even restarted, if Martha wants to.
We don’t know anything about Martha’s motives, other than that she appears to do no noticeable interference. Martha might not even be remotely human, though once again, the probability is (at least from our viewpoint) that beings would create more synthetic worlds like their own than entirely different experiments. But if you believe in Martha, then you believe we are not alone, and that alters goals about the future of humanity.
If you want to get more extreme, there is also an issue with Mars. While again, we have no information on Martha’s goals for this project, it seems likely, unless resources are truly free, that most synthetic worlds will be just the surface of the Earth, just the interesting part in question. Running an entire galaxy or an entire universe is many orders of magnitude more costly. Sure, you might run some of them, but if you can run a trillion Earths for the cost of a couple of galaxies, that’s gotta bend things a bit.
As such, the rest of the universe truly is “simulated” in that it’s just being computed with barely enough resources to make the few photons which reach us be realistic. (Or it’s just a playback of an earlier run.) Many fans of this theory like that it explains Fermi’s famous paradox — no aliens have visited because there are not any — in this universe.
It’s hard to imagine, unless computation is totally free, that there would not be any “optimization” of the computation. Now, at the extreme, this would mean the parts of your house that nobody is looking at would be computed at a lower resolution, and that indeed, if a tree fell in the forest and nobody was there to hear it, it truly would not make a sound in a full way. That’s very philosophically spooky, but less spooky is the idea that until we went to it, Mars the planet would not even be “booted up” into our universe. When probes arrived, it might have been fully started, but more likely only where the probes went — the rest would just be a recorded copy of the original Mars, presuming there is such a thing in Martha’s world, as the whole sky would be.
As such, it would mean going to a place that only “fully exists” (which is to say is being computed at full resolution) because we went there. Somewhat less satisfying.
Still worth going?
People imagining the idea of a synthetic, computed Earth do like to speculate about the motives of its creation. If Martha is just like us, then her people probably have rules and ethics about doing this. There are huge ethical questions about all the suffering and evil that comes with creating a universe. One rule I’ve imagined is that the creator really has some duties to the people inside. Those might include having a heaven of some sort, or even letting people graduate up to the parent universe and gaining rights there. The most impressive might even get to chat with Martha, though she only has time for a few. Perhaps somebody who does something truly great, like taking humanity off-planet, gets some reward for it. We can suppose this because we might do something like that if we were making these computed places. But we really have no evidence for any of it. Some would argue there is almost nothing ethical about creating a world with so much misery, and keeping the inhabitants in the dark about the reality to boot. At least by our standards — not theirs.
Is there a root?
One popular theme is to suggest that Martha’s universe is also synthetic, and there is another creator above her. I describe this by saying, “It’s turtles all the way up.” Nobody can truly be sure they are in the root of the tree.
This is particularly interesting if you speculate that the rules of our universe, when we finally learn them in depth, will show that computation lies at the bottom of everything. This has often been speculated, and most quests for a unified “theory-o’-everything” try to express the rules in simpler and simpler mathematics. It matters because, for now, this theory rests on the idea that computation is being used to simulate the physics of a “real” universe, one made of particles and forces. We can only see the particles and forces, and so might conclude we aren’t digital, particularly since emulating the activity of subatomic particles is today very expensive computationally. It makes implementing a synthetic universe at that deep level seem impossible to us. If there are deeper rules that are computational, then you can also postulate that the “root” universe is itself computational. In fact, you sort of need that, because it’s hard to see how to get the resources for worlds within worlds if you must implement particles with computation done on particles which are themselves implemented with computation, and so on; you quickly run out. If, on the other hand, you are in a universe of computation and you create sub-worlds, you can simply give those sub-worlds access to the computational substrate of your own, and it scales much better.
We like to believe our universe is made of particles which are physical and bounce off one another and follow analog rules. But we don’t know that’s true. The rules of our universe are a mystery to us. We don’t know where they came from, and we can’t even declare that whatever they are, any parent or root universe might not run on the same rules or a variant of them.
So should you believe this is a synthetic, computational universe — or simulation if you insist? Well, you can, but unless you are leading a mission to Mars it is not greatly productive. When the time comes — as it will — that we make our own small digital worlds that match our own for reality, doubt will of course increase, but as long as Martha remains hands-off, live your life as you always would have. One of the more spooky ideas in this theory is Last Thursdayism — the idea that there is no way to tell this world wasn’t forked from a backup last Thursday, that all of your memories before then happened to a predecessor. Perhaps that’s true, but again it doesn’t alter how you should spend your days. Indeed, it is not my goal to convince Elon to abandon his quest for Mars at all; that’s worthy even if it doesn’t help save humanity.
This weekend I went to Pomona, CA for the 2015 DARPA Robotics Challenge which had robots (mostly humanoid) compete at a variety of disaster response and assistance tasks. This contest, a successor of sorts to the original DARPA Grand Challenge which changed the world by giving us robocars, got a fair bit of press, but a lot of it was around this video showing various robots falling down when doing the course:
What you don’t hear in this video are the cries of sympathy from the crowd of thousands watching — akin to when a figure skater might fall down — or the cheers as each robot would complete a simple task to get a point. These cheers and sympathies were not just for the human team members, but in an anthropomorphic way for the robots themselves. Most of the public reaction to this video included declarations that one need not be too afraid of our future robot overlords just yet. It’s probably better to watch the DARPA official video which has a little audience reaction.
Don’t be fooled as well by the lesser-known fact that there was a lot of remote human tele-operation involved in the running of the course.
What you also don’t see in this video is just how very far the robots have come since the first round of trials in December 2013. During those trials the amount of remote human operation was very high, and there weren’t a lot of great fall videos because the robots had tethers that would catch them if they fell. (These robots are heavy and many took serious damage when falling, so almost all testing is done with a crane, hoist or tether able to catch the robot during the many falls which do occur.)
We aren’t yet anywhere close to having robots that could do tasks like these autonomously, so for now the research is in making robots that can do tasks with more and more autonomy, with higher-level decisions made by remote humans. The tasks in the contest were:
Starting in a car, drive it down a simple course with a few turns and park it by a door.
Get out of the car (one of the harder tasks, as it turns out, and one that demanded a more humanoid form).
Go to a door and open it.
Walk through the door into a room.
In the room, go up to a valve with a circular handle and turn it 360 degrees.
Pick up a power drill, and use it to cut a large enough hole in a sheet of drywall.
Perform a surprise task (on day one, throwing a lever; on day two, unplugging a power cord and plugging it into another socket).
Either walk over a field of cinder blocks, or roll through a field of light debris.
Climb a set of stairs.
The robots have an hour to do this, so they are often extremely slow, and yet to the surprise of most, the audience (a crowd of thousands in person, and thousands more online) watched with fascination and cheering. Even when robots would take a step once a minute, or pause at a task for several minutes, or would get into a problem and spend 10 minutes getting fixed by humans as a penalty.
As some of you may know, I have been working as chair of computing and networking at Singularity University. The most rewarding part of that job is our ten week summer Graduate Studies Program. GSP15 will be our 7th year of it. This program takes 80 students from around the world (typically over 30 countries and only 10-15% from North America) and gives them 5 weeks of lectures on technology trends in a dozen major fields, and then 5 weeks of forming into teams to try to apply that knowledge and thinking to launch projects that can seriously change the world. (We set them the goal of having the potential to help a billion people in 10 years.)
The classes have all been fantastic, and many of the projects have gone on to be going concerns. A lot of the students come in with one plan for their life and leave with another.
It’s about to get better. One big problem has been that the program is expensive. Last year we charged almost $30,000 (which includes room and board), and most of the scholarships were sponsored competitions in different countries and regions. This limits who can come.
Larry Page and Google helped found Singularity U in 2009, and have stepped up massively this year with a scholarship fund that assures that all accepted students will attend free of charge. Students will either get in through one of the global contests, or be accepted by the admissions team and given a full scholarship. It means we’ll be able to select from the best students in the world, regardless of whether they can afford the cost.
In spite of the name, SU is not really about “the singularity” and not anything like a traditional university. The best way to figure it out is to read the testimonials of the graduates.
Students come in many age ranges — we have had early 20s to late 50s, with a mix of backgrounds in technology, business, design and art. Show us you’re a rising star (or a star that has done it before and is ready to do it again even bigger) and consider applying.
Speaking at SU
In the rest of the year we do a lot of shorter programs, from a couple of days to a week, aimed at providing a compressed view of the future of technology and its implications to a different crowd, typically corporate, entrepreneur, and investor audiences. As that grows, we need more speakers, and I’m particularly interested in finding new folks to add related to computing and networking technologies. We do this all over the planet, which can be a mix of rewarding and draining, though about half the events are in Silicon Valley. There are three things I am looking for:
The chops and expertise in your field to do a cutting edge talk — why do we start listening to you?
Great speaking skills — why do we keep listening to you?
All else being equal, I seek more great female and minority speakers to reverse Silicon Valley’s imbalances, which we suffer as well.
Is this you, or do you have somebody to recommend? Contact me (firstname.lastname@example.org) for more details. While top-flight people generally have some of their own work to talk about, and I do use speakers sometimes on very specific topics, the ideal speaker is a great teacher who can cover many topics for audiences who are very smart but not always from engineering backgrounds.
Our next public event is March 12-14 in Seville, Spain — if you’re in Europe try to make it.
In August, I attended the World Science Fiction Convention (WorldCon) in London. I did it while in Coeur D’Alene, Idaho by means of a remote Telepresence Robot(*). The WorldCon is half conference, half party, and I was fully involved — telepresent there for around 10 hours a day for 3 days, attending sessions, asking questions, going to parties. Back in Idaho I was speaking at a local robotics conference, but I also attended a meeting back at the office using an identical device while I was there.
After doing this, I have written up a detailed account of what it’s like to attend a conference and social event using these devices, how fun it is now, and what it means for the future.
For those of you in the TL;DR crowd, the upshot is that it works. No, it’s not as good as being there in person. But it is a substantial fraction of the way there, and it’s going to get better. I truly feel I attended that convention, but I didn’t have to spend the money and time required to travel to London, and I was able to do other things in Idaho and California at the same time.
When you see a new technology that seems not quite there yet, you have to decide: is this going to get better and explode, or is it going to fizzle? I’m voting for improvement. It won’t replace being there all of the time, but it will replace being there some of the time, and thus have big effects on travel (particularly air travel) and socialization. There are also interesting consequences for the disabled, for the use of remote labour, and for many other things.
(*)As the maker will point out, this is not technically a robot, just a remote controlled machine. Robots have sensors and make some of their own decisions on how they move.
I don’t know who the person or people are who, under the name Satoshi Nakamoto, created the Bitcoin system. The creator(s) want to keep their privacy, and given the ideology behind Bitcoin, that’s not too surprising.
There can only ever be 21 million bitcoins. It is commonly speculated that Satoshi did much of the early mining, and owns between 1 million and 1.5 million unspent bitcoins. Today, thanks in part to a speculative bubble, bitcoins are selling for $800, and have been north of $1,000. In other words, Satoshi holds nearly a billion dollars worth of bitcoin. Many feel that this is not an unreasonable thing, that a great reward should go to Satoshi for creating such a useful system.
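The arithmetic behind that figure is straightforward, using the holding and price ranges the post itself cites:

```python
# Rough value of the presumed Satoshi hoard, using the post's figures:
# 1.0-1.5 million unspent early coins, at prices from ~$800 to ~$1,000.
low = 1_000_000 * 800       # $0.8 billion at the low end
high = 1_500_000 * 1_000    # $1.5 billion at the high end

print(f"${low / 1e9:.1f}B to ${high / 1e9:.1f}B")  # $0.8B to $1.5B
```

So “near a billion dollars” is the middle of a fairly wide range, sensitive to both the holding estimate and the volatile price.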
For Satoshi, the problem is that it’s very difficult to spend more than a small portion of this block, possibly ever. Bitcoin addresses are generally anonymous, but all transactions are public. Things are a bit different for the first million bitcoins, which went only to the earliest adopters. People know those addresses, and the ones that remain unspent are commonly believed to be Satoshi’s. If Satoshi starts spending them in any serious volume, it will be noticed and will be news.
The fate of Bitcoin
Whether Bitcoin becomes a stable currency in the future or not, today few would deny that it is unstable and undergoing speculative bubbles. Some think that because nothing backs the value of bitcoins, it will never become stable, but others are optimistic. Regardless, today the value of a bitcoin is fragile. The news that “Satoshi is selling his bitcoins!” would trigger panic selling, and that’s bad news in any bubble.
If Satoshi could sell, it is hard to work out exactly when the time to sell would be. Bitcoin has several possible long term fates:
1. It could become the world’s dominant form of money. If it replaced all of the “M1” money supply in the world (cash and very liquid deposits), a bitcoin could be worth $1 million each!
2. It could compete with other currencies (digital and fiat) for that role. If it captured 1% of world money supply, it might be $10,000 a coin. While there is a limit on the number of bitcoins, the limit on the number of cryptocurrencies is unknown, and as bitcoin prices and fees increase, competition is to be expected.
3. It could be replaced by one or more successors of superior design, with some ability to exchange during a modest window, and then drift down to minimal value.
4. It could collapse entirely and quickly in the face of government opposition, competition, and other factors during its bubble phase.
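The per-coin figures in the first two scenarios follow from simple division by the 21 million coin cap. The world M1 estimate below is my assumption (chosen so the post’s “$1 million each” comes out exactly), not a measured figure:

```python
TOTAL_COINS = 21_000_000   # hard cap in the Bitcoin protocol

# Assumed world M1 money supply of roughly $21 trillion (illustrative).
WORLD_M1_USD = 21e12

full_replacement = WORLD_M1_USD / TOTAL_COINS            # $1,000,000/coin
one_percent = (WORLD_M1_USD / 100) / TOTAL_COINS         # $10,000/coin

print(f"full M1: ${full_replacement:,.0f}/coin; 1% share: ${one_percent:,.0f}/coin")
```

The point of the exercise is how linearly the valuation scales with the share of money supply captured, which is why the scenarios diverge so wildly.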
My personal prediction is #3 — that several successor currencies will arise which fix issues with Bitcoin, with exchange possible for a while. However, just as bitcoins had their sudden rushes and bubbles, so will this exchange rate, and as momentum moves into this currency it could move very fast. Unlike exchanges that trade bitcoins for dollars, inter-cryptocurrency exchanges will be fast (though the settlement times of the currencies will slow things down.) It could be even worse if the word got out that “Satoshi is trading his coins for [Foo]Coin” as that could cause complete collapse of Bitcoin.
Perhaps he could move some coins through randomizing services that scramble the identity association, but moving the early coins into such a system would itself be seen as selling them.
It’s been a while since I’ve done a major new article on long-term consequences of Robocars. For some time I’ve been puzzling over just how our urban spaces will change because of robocars. There are a lot of unanswered questions, and many things could go both ways. I have been calling for urban planners to start researching the consequences of robocars and modifying their own plans based on this.
While we don’t know enough to be sure, there are some possible speculations about potential outcomes. In particular, I am interested in the future of the city and suburb as robocars make having ample parking less and less important. Today, city planners are very interested in high-density development around transit stops, known as “transit oriented development” or TOD. I now forecast a different trend I will call ROD, or robocar oriented development.
The result is a view of how the future of the city might be quite interesting, in contrast to the WALL-E car-dominant vision we often see.
Earlier I wrote an essay on robocar changes affecting urban planning which outlined various changes and posed questions about what they meant. In this new essay, I propose answers for some of those questions. This is a somewhat optimistic essay, but I’m not saying this is a certain outcome by any means.
As always, while I do consult for Google’s project, they don’t pay me enough to be their spokesman. This long-term vision is a result of the external work found on this web site, and should not be taken to imply any plans for that project.
There’s been much debate in the USA about High Speed Rail (HSR) and most notably the giant project aimed at moving 20 to 24 million passengers a year through the California central valley, and in particular from downtown LA to downtown San Francisco in 2 hours 40 minutes.
There’s been big debate about the projected cost ($68B to $99B) and the inability of projected revenues to cover interest on the capital let alone operating costs. The project is beginning with a 130 mile segment in the central valley to make use of federal funds. This could be a “rail to nowhere” connecting no big towns and with no trains on it. By 2028 they plan to finally connect SF and LA.
The debate about the merits of this train is extensive and interesting, but its biggest flaw is that it is rooted in the technology of the past and present day. Indeed, HSR itself is around 50 years old, and the 350 kph top speed of the planned line was attained by the French TGV over 30 years ago.
The reality of the world, however, is that technology is changing very fast, and in some fields like computing at an exponential rate. Transportation has not been used to such rapid rates of change, but that protection is about to end. HSR planners are comparing their systems to other 20th century systems and not planning for what 2030 will actually hold.
At Singularity University, our mission is to study and teach about the effects of these rapidly changing technologies. Here are a few areas where new technology will disrupt the plans of long-term HSR planners:
Cars that can drive and deliver themselves left the pages of science fiction and entered reality in the 2000s thanks to many efforts, including the one at Google. (Disclaimer: I am a consultant to, but not a spokesman for that team.)
Readers of my own blog will know it is one of my key areas of interest.
By 2030 such vehicles are likely to be common, and in fact it’s quite probable they will be able to travel safely on highways at faster speeds than we trust humans to drive. They could also platoon to become more efficient.
Their ability to deliver themselves is both boon and bane to rail transit. They can offer an excellent “last/first mile” solution to take people from their driveways to the train stations — for it is door-to-door travel time that people care about, not airport-to-airport or downtown-to-downtown. The HSR focus on a competitive downtown-to-downtown time ignores the fact that only a tiny fraction of passengers will want that precise trip.
Self-delivering cars could offer the option of mobility on demand in a hired vehicle that is the right vehicle for the trip — often a light, efficient single passenger vehicle that nobody would buy as their only car today. These cars will offer a more convenient and faster door-to-door travel time on all the modest length trips (100 miles or less) in the central valley. Because the passenger count estimates for the train exceed current air-travel counts in the state, they are counting heavily on winning over those who currently drive cars in the central valley, but they might not win many of them at all.
The cars won’t beat the train on the long haul from downtown SF to downtown LA. But they may well be competitive or superior (if they can go 100 mph on I-5 or Highway 99) on the far more common suburb-to-suburb, door-to-door trips. And this will be a private vehicle with no schedule to worry about, a nice desk and screen, and all the usual advantages of a private vehicle.
Improved Air Travel
The air travel industry is not going to sit still. The airlines aren’t going to just let their huge business on the California air corridor disappear to the trains the way the HSR authority hopes. These are private companies, and they will cut prices and innovate to compete. They will find better solutions to the security nightmare that has taken away their edge, and they’ll produce innovative products we have yet to see. The reality is that good security is possible without requiring that people arrive at airports an hour before departure, if we are driven to make it happen. And the trains may not remain immune from the same security needs forever.
On the green front, we already see Boeing’s new generation of carbon fiber planes operating with less fuel. New turboprops are quiet and much more efficient, and there is more to come.
The fast trains and self-driving cars will help the airports. Instead of HSR from downtown SF to downtown LA, why not take that same HSR just to the airport, clearing security while on the train and being dropped off close to the gate? Or imagine a self-driving car that picks you up on the tarmac as you walk off the plane and whisks you directly to your destination. Driven by competition, the airlines will find a way to take advantage of their huge speed advantage in the core part of the journey.
Self-driving cars that whisk people to small airstrips and pick them up at other small airstrips also offer the potential for good door-to-door times on all sorts of routes away from major airports. The flying car may never come, but the seamless transition from car to plane is on the way.
We may also see more radical improvements here. Biofuels may make air travel greener, and lighter weight battery technologies, if they arrive thanks to research for cars, will make the electric airplane possible. Electric aircraft are not just greener — it becomes more practical to have smaller aircraft and do vertical take-off and landing, allowing air travel between any two points, not just airports.
These are just things we can see today. What will the R&D labs of aviation firms come up with when necessity forces them towards invention?
Improved Rail
Rail technology will improve, and in fact is already improving. Even with right-of-way purchased, adapting traditional HSR to other rail forms may be difficult. Maglev trains, though expensive, have seen only limited deployment, and many, including the famous Elon Musk, have proposed enclosed tube trains (evacuated or pneumatic), still expensive and theoretical, which could do the trip faster than planes. How modern will the 1980s-era CHSR technology look to 2030s engineers?
Videoconferencing and Telepresence
Decades after its early false start, video conferencing is going HD and starting to take off. High-end video meeting systems are already causing people to skip business trips, and this trend will increase. At high-tech companies like Google and Cisco, people routinely use video conferencing to avoid walking to buildings 10 minutes away.
Telepresence robots, which let a remote person wander around a building, go up to people, and act more as if they are really there, are taking off, and they may make more and more people decide that even a 3-hour one-way train or plane trip is too much. This isn’t a certainty, but it would be wrong to bet that all the trips taken today will still happen in the future.
Increasing Sprawl
Like it or not, in many areas, sprawl is increasing. You can’t legislate it away. While there are arguments on both sides as to how urban densities will change, it is again foolish to bet that sprawl won’t increase in many areas. More sprawl means even less value in downtown-to-downtown rail service, or even in big airports. Urban planners are now realizing that the “polycentric” city which has many “downtowns” is the probable future in California and many other areas.
That Technology Nobody Saw Coming
While it may seem facile to say it, it’s almost assured that some new technology we aren’t even considering today will arise by 2030 and have some big impact on medium-distance transportation. How do you plan for the unexpected? The best way is to keep your platform as simple as possible, and delay decisions and implementations where you can. Do as much of the work as possible with the knowledge of 2030, and as little as possible with the knowledge of 2012.
That’s the lesson of the internet and the principle known as the “stupid network.” The internet itself is extremely simple and has survived mostly unchanged from the 1980s while it has supported one of history’s greatest whirlwinds of innovation. That’s because of the simple design, which allowed innovation to take place at the edges, by small innovators. Simpler base technologies may seem inferior but are actually superior because they allow decisions and implementations to be delayed to a time when everything can be done faster and smarter. Big projects that don’t plan this way are doomed to failure.
None of these future technologies outlined here are certain to pan out as predicted — but it’s a very bad bet to assume none of them will. California planners and the CHSR authority need to do an analysis of the HSR operating in a world of 2030s technology and sprawl, not today’s.
For years I have posed the following question at parties and salons:
By the 25th century, who will be the household names of the 20th century?
My top contender, Armstrong, died today. I pick him because the best-known name of the 15th century is probably Columbus, likewise famed as the first explorer to reach a major new land — even though he probably wasn’t actually the first.
Oddly, while we will celebrate him today and for years to come, Armstrong could walk down the street for the past few decades largely unrecognized in his own time. Yet I had his photo on my wall as a child (along with Aldrin and Collins). They were the only faces I ever put on my wall, my childhood heroes, and I was not alone in this.
Unlike Columbus, who led his expedition, Armstrong was one of a very large team, the one picked for the most prominent role. He was no mere cog of course, and his flying made the difference in having a successful mission.
Others of the 15th century who are household names today are:
Henry V (thanks to Shakespeare, I suspect) and Richard III
Vlad the Impaler (thanks to legends)
Some artists (Bosch, Botticelli)
Amerigo Vespucci (only by virtue of getting two continents named after him)
As we see, some are famous by accident (writers and others picked up their stories). That may even be true for Jeanne d’Arc, whose story might otherwise have been preserved only in French lore.
The great inventors and scientists like Gutenberg and Leonardo give a clue to help. Guru Nanak founded a major religion, but his name is not well known outside that religion.
So while many people suggest Hitler will be one of the names, I am more doubtful. I think it would be appropriate if his evil were forgotten; after all, he wasn’t even the greatest butcher of the 20th century.
No, I think the fame will go to explorers and scientists, and possibly some artists from our time. We may not even know which names will be romanticised. Some candidates I suspect are:
Drexler or Feynman if nanotechnology as they envisioned it arrives
Crick and Watson (or even Venter) if control of DNA is seen as central
Von Neumann, Turing or others if computers are seen as the great invention of the 20th century (which they may be.)
It’s hard to say what music, writing, movies or other art will endure and be remembered. Did the 20th century get a Shakespeare?
What are your nominations? Of the people I list above, once again all of them were capable of walking down the street without being recognized, just as Armstrong could. I suspect that in the pre-camera days, so could Columbus and Gutenberg.
A month ago I hosted Vernor Vinge for a Bay Area trip. This included my interview with Vinge at Google on his career and also a special talk that evening at Singularity University. In the 1980s, Vinge coined the term “the singularity” even though Ray Kurzweil has become the more public face of the concept of late.
He did not disappoint, with an interesting talk on what he called “group minds.” He was not referring to the literal group minds of his characters known as the Tines in his Zone novels, but rather to all the various technologies that are allowing ordinary humans to collaborate in new ways, solving problems at a speed and scale not seen before. In puzzling over the various paths to the singularity — which means to him the arrival of an intelligence beyond our own — he and others have mostly put the focus on the creation of AI at human level and beyond. He points out that tools which use elements of AI to combine human thinking may offer a path to the singularity that is more likely to be benign.
In the talk he outlines a taxonomy of group minds, different ways in which they might form and exist, to help understand the space.
Vernor Vinge is perhaps the greatest writer of hard SF and computer-related SF today. He has won 5 Hugo awards, including 3 in a row for best novel (nobody has done 4 in a row) and his novels have inspired many real technologies in cyberspace, augmented reality and more.
I invited him up to speak at Singularity University, but before that he visited Google to talk in the Authors@Google series. I interviewed him about his career and major novels and stories, including True Names, A Fire Upon the Deep, Rainbows End and his latest novel The Children of the Sky. We also talked about the concept of the Singularity, for which he coined the term.
It’s been interesting to see how TV shows from the 60s and 70s are being made available in HDTV formats. I’ve watched a few of Classic Star Trek, where they not only rescanned the old film at better resolution, but also created new computer graphics to replace the old 60s-era opticals. (Oddly, because the relative budget for these graphics is small, some of the graphics look a bit cheesy in a different way, even though much higher in technical quality.)
The earliest TV was shot live. My mother was a TV star in the 50s and 60s, but this was before videotape was cheap. Her shows were all done live, and the only recording was a kinescope — a film shot off the TV monitor. These “kinnies” are low quality and often blown out. The higher-budget shows were all shot and edited on film, and can all be turned into HD. Then broadcast-quality videotape got cheap enough that cheaper shows, and eventually even expensive shows, began being shot on it. This period will be known in the future as a strange resolution “dark ages,” when the quality of the recordings dropped. No doubt future viewers will find today’s HD recordings low-res as well, and many productions are now being shot on “4K” cameras, which have about 8 megapixels.
But I predict the future holds a surprise for us. We can’t do it yet, but I imagine software will arise that can take old, low-quality videos and turn them into something better. It will do this by actually modeling the scenes that were shot, creating higher-resolution images and models of everything that appears in the scene. For this to work, everything must move: either the object has to move (as people do) or the camera must pan over it. In some cases having multiple camera views may help.
When an object moves relative to a video camera, it is possible to capture a static image of it at sub-pixel resolution, because the multiple frames can be combined to extract more information than is visible in any one frame. A video taken with a low-res camera that slowly pans over an object (in both dimensions) can produce a hi-res still. In addition, for most TV shows a variety of production stills are also taken, at high resolution and from a variety of angles, both for publicity and for continuity. If these exist, the situation becomes even easier.
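The combining-frames idea can be shown in a toy 1-D form. This sketch assumes the simplest possible case — point-sampled low-res frames whose sub-pixel offsets are known exactly — whereas real footage would need motion estimation and deblurring:

```python
# Toy 1-D shift-and-add super-resolution: several low-res frames of the same
# scene, each offset by a known sub-pixel shift, interleave back into the
# full-resolution signal. Real video needs motion estimation and
# deconvolution; this assumes perfect point sampling and known shifts.

def capture(scene, factor, shift):
    """One low-res frame: every `factor`-th high-res sample, offset by `shift`."""
    return scene[shift::factor]

def shift_and_add(frames, factor):
    """Place each frame's samples back onto the high-res grid."""
    out = [0.0] * (len(frames[0]) * factor)
    for shift, frame in enumerate(frames):
        for i, value in enumerate(frame):
            out[i * factor + shift] = value
    return out

scene = [float(x * x % 17) for x in range(12)]     # stand-in for fine detail
frames = [capture(scene, 3, s) for s in range(3)]  # three shifted low-res frames
assert shift_and_add(frames, 3) == scene           # the detail is recovered
```

No single 4-sample frame contains the 12-sample scene, but the three shifted frames together do — which is exactly why panning footage carries more resolution than any one of its frames.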
Not much new to report after the second game of the Watson Jeopardy Challenge. I’ve added a few updates to yesterday’s post on Watson. The result was as expected, though Watson struggled more in this game than in the prior round, declining to answer many questions due to low confidence and making a few mistakes. In a few cases it was saved by not buzzing fast enough on clues where it had over 50% confidence but would have answered slightly wrong.
Some quick updates from yesterday you will also find in the comments:
Toronto’s 2nd-busiest airport, the small Island airport, has the official but rarely used name of Billy Bishop. Bishop was one of the top flying aces of WWI, not WWII. Why Watson gave the answer it did is still not clear, but that it made mistakes like this is not surprising. That it made so few is.
You can buzz in as soon as Trebek stops speaking. If you buzz early, you can’t buzz again for 0.2 seconds. Watson gets an electronic signal when it is time to buzz, and then physically presses the button. The humans get a light, but they don’t bother looking at it, they try timing when Trebek will finish. I think this is a serious advantage for Watson.
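A crude Monte Carlo makes the size of that advantage concrete. All the timings here are my assumptions, not measured values: Watson presses with a small fixed latency after the enable signal, while each human anticipates the end of the clue with some jitter, and an early press locks them out for 0.2 seconds:

```python
import random

# Monte Carlo sketch of the buzzer race described above. All timings are
# assumed: Watson's press latency, the humans' anticipation jitter, and the
# 0.2 s lockout for pressing early.
random.seed(1)

WATSON_LATENCY = 0.01   # assumed signal-to-button delay, in seconds

def human_buzz():
    t = random.gauss(0.0, 0.08)       # anticipation error around the signal
    return t if t >= 0 else t + 0.2   # buzzed early: locked out, then re-press

trials = 10_000
watson_wins = sum(
    WATSON_LATENCY < min(human_buzz(), human_buzz()) for _ in range(trials)
)
print(f"Watson wins the buzz in ~{100 * watson_wins / trials:.0f}% of races")
```

Under these assumptions Watson takes the vast majority of buzzer races: the humans only win when their anticipation lands in the narrow window between the enable signal and Watson's press.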
This IBM Blog Post gives the details on the technical interface between Watson and the game.
Watson may have seemed confident with its large bet of $17,973. But in fact the bet was fixed in advance:
Had Jennings bet his whole purse (and got it right) he would have ended up with $41,200.
If Watson had lost its bet of $17,973, it would have ended up with $41,201 and a bare victory.
Both got it right, and Jennings bet low, so it ended up being $77,147 to $24,000.
Jennings’ low bet was wise, as it assured him of 2nd place and a $300K purse instead of $200K. Knowing he could not beat Watson unless Watson bet stupidly, he did the right thing.
Jennings still could have bet more and kept 2nd, but there was no value to it; the purse is always $300K.
If Watson had wanted to second-guess, it might have realized Jennings would do this and bet accordingly, but that’s not something you can do more than once.
It still sure seemed like a program sponsored by IBM. But I think it would have been nice if the PI of DeepQA was allowed up on stage for the handshake.
I do wish they had programmed a bit of sense of humour into Watson. Fake, but fun.
Amusingly Watson got a category about computer keyboards and didn’t understand it.
Unlike the human players who will hit the buzzer before they have formed the answer in their minds, in hope that they know it, Watson does not hit unless it has computed a high confidence answer.
Watson would have bombed on visual or audio clues. The show has a rule allowing those to be removed from the game for a disabled player, and these rules were applied.
A few of the questions had some interesting ironies based on what was going on. I wonder if that was deliberate or not. To be fair, I would think the question-writers would not be told what contest they were writing for.
The computer science world is abuzz, along with the game show world, over the showdown between IBM’s “Watson” question-answering system and the best human players ever to play the game Jeopardy. The first game has been shown, with a crushing victory by Watson (in spite of a tie after the first half of the game).
Tomorrow’s outcome is not in doubt. IBM would not have declared itself ready for the contest without being confident it would win, and they wouldn’t be putting all the advertising out about the contest if they had lost. What’s interesting is how they did it and what else they will be able to do with it.
Dealing with a general question has long been one of the hard problems in AI research. Watson isn’t quite there yet but it’s managed a great deal with a combination of algorithmic parsing and understanding combined with machine learning based on prior Jeopardy games. That’s a must because Jeopardy “answers” (clues) are often written in obfuscated styles, with puns and many idioms, exactly the sorts of things most natural language systems have had a very hard time with.
Watson’s problem is almost all understanding the question. Looking up obscure facts is not nearly so hard if you have a copy of Wikipedia and other databases on hand, particularly one parsed with other state-of-the-art natural language systems, which is what I presume they have. In fact, one would predict that Watson would do the best on the hardest $2,000 questions because these are usually hard because they refer to obscure knowledge, not because it is harder to understand the question. I expect that an evaluation of its results may show that its performance on hard questions is not much worse than on easy ones. (The main thing that would make easy questions easier would be the large number of articles in its database confirming the answer, and presumably boosting its confidence in its answer.) However, my intuition may be wrong here, in that most of Watson’s problems came on the high-value questions.
Its confidence is important. If it does not feel confident, it doesn’t buzz in. And it has a serious advantage at buzzing in, since you can’t buzz in right away on this game, and if you’re an encyclopedia like the two human champions and Watson, buzzing in is a large part of the game. In fact, a fairer game, which Watson might not do as well at, would involve randomly choosing which of the players who buzz in within the first few tenths of a second gets to answer the question, eliminating any reaction-time advantage. Watson gets the questions as text, which is also a bit unfair, unless it is given them one word at a time at human reading speed. It could do OCR on the screen, but chances are it would read faster than the humans. Its confidence numbers and results are extremely impressive. One reason it doesn’t buzz in is that even with 3,000 cores it takes 2-6 seconds to answer a question.
Indeed a totally fair contest would have no buzzing-in time competition at all, and would just allow all players who buzz in to answer and gain or lose points based on their answers. (Answers would need to be given in parallel.)
Watson’s coders know by now that they probably should have coded it to receive the wrong answers from other contestants. In one instance it repeated a wrong answer, and in another case it said “What is leg?” after Jennings had incorrectly answered “What is missing an arm?” in a question about an Olympic athlete. The host declared that right, but the judges reversed it, saying it would have been right from a human following up on the wrong answer, but was wrong without that context. This was edited out. Also edited out were 4 crashes by Watson that made the game take 4 hours instead of 30 minutes.
It did not happen in what has aired so far, but in the trials another error I saw Watson make was declining a request to be more specific about an answer. Watson was programmed to give minimalist answers, which the host will often accept as correct, so why take a risk? If the host doesn’t think you said enough, he asks for a more specific answer. Watson sometimes said “I can be no more specific.” From a pure gameplay standpoint, that’s like saying, “I admit I am wrong.” For points, one should give the best longer phrase containing the one-word answer, because it just might be right, though it has a larger chance of looking really stupid (see below for thoughts on that).
The shows also contain total love-fest pieces about IBM which make me amazed that IBM is not listed as a sponsor for the shows, other than perhaps in the name “The IBM Challenge.” I am sure Jeopardy is getting great ratings (just having their two champs back would do that on its own but this will be even more) but I have to wonder if any other money is flowing.
Being an idiot savant
Watson doesn’t really understand the Jeopardy clues, at least not as a human does. Like so many AI breakthroughs, this result comes from figuring out another way to attack the problem, different from the method humans use. As a result, Watson sometimes puts out answers that are nonsense “idiot” answers from a human perspective. The team cut back a lot on this by only having it answer when it has 50% confidence or higher, and in fact for most of its answers it has very impressive confidence numbers. But sometimes it gives such an answer. To the consternation of the Watson team, it did this on the Final Jeopardy clue, where it answered “Toronto” in the category “U.S. Cities.”
There are many fields that people expect robotics to change in the consumer space. I write regularly about transportation, and many feel that robots to assist the elderly will be the other big field. The first successful consumer robot (outside of entertainment) was the Roomba, a house-cleaning robot. So I’ve often wondered about how far we are from a robot that can tidy up the house. People got excited when a PR2 robot was programmed to fold towels.
This is a hard problem because it seems such a robot needs to do general object recognition and manipulation, something we’re pretty far from doing. Special purpose household chore robots, like the Roomba, might appear first. (A gutter cleaner is already on the market.)
Recently I was pondering what we might do with a robot that is able to pick up objects gently, but isn’t that good at recognizing them. Such a robot might not identify the objects, but it could photograph them, and put them in bins. The members of the household could then go to their computers and see a visual catalog of all the things that have been put away, and an indicator of where it was put. This would make it easy to find objects.
The catalog could trivially be sorted by when the items were put away, which might well make it easy to browse for something put away recently. But the fact that we can’t do general object recognition does not mean we can’t do a lot of useful things with photographs and sensor readings (including precise weight and other factors) beyond that. One could certainly search by colour, by general size and shape, and by weight and other characteristics like rigidity. The item could be photographed in a 360° view by being spun on a table or in the grasping arm, or with a rotating camera. It could also be laser-scanned or 3D-photographed with new cheap 3D camera techniques.
When looking for a specific object, one could find it by drawing a sketch of the object — software is already able to find photos that are similar to a sketch. But more is possible. Typing in the name of what you’re looking for could bring up the results of a web image search on that string, and you could find a photo of a similar object, and then ask the object search engine to find photos of objects that are similar. While ideally the object was photographed from all angles, there are already many comparison algorithms that survive scaling and rotation to match up objects.
The result would be a fairly workable search engine for the objects of your life that were picked up by the robot. I suspect that you could quickly find your item and learn just exactly where it was.
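A minimal sketch of such an attribute search follows. The fields, the bin labels, the data and the scoring weights are all made up for illustration — nothing here reflects a real catalog format:

```python
# Minimal sketch of a search engine over objects a tidying robot has put
# away. The attribute set, score weights and data are all illustrative
# assumptions, not a real system.

from dataclasses import dataclass

@dataclass
class Item:
    bin: str            # where the robot put it
    weight_g: float     # from the gripper's scale
    size_cm: float      # longest dimension from the 3-D scan
    color: tuple        # average RGB from the photo

catalog = [
    Item("bin 3", 120, 14, (200, 30, 30)),  # reddish, light
    Item("bin 1", 480, 22, (40, 40, 45)),   # dark, heavier
    Item("bin 7", 95, 12, (210, 40, 35)),   # small and reddish
]

def score(item, weight_g, size_cm, color):
    """Lower is better: crude normalized distance across sensed attributes."""
    color_dist = sum((a - b) ** 2 for a, b in zip(item.color, color)) ** 0.5
    return (abs(item.weight_g - weight_g) / 100
            + abs(item.size_cm - size_cm) / 10
            + color_dist / 100)

def find(weight_g, size_cm, color):
    return min(catalog, key=lambda item: score(item, weight_g, size_cm, color))

# "I'm looking for something small, light and red."
print(find(100, 12, (220, 30, 30)).bin)   # bin 7
```

The point is that fuzzy attribute matching over crude sensor data can locate an item without the robot ever having recognized what it is.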
Certain types of objects could be recognized by the robot, such as books, papers and magazines. For those, bar codes could be read, or printing scanned with OCR. Books might be shelved at random in the library but still be easily found. Papers might be hard to manipulate but could at least be stacked, possibly with small numbered divider sheets inserted between them, so that you could look for the top page of any collected group of papers and be told, “it’s under divider 20 in the stack of papers.”
A number of people have been hiring “virtual” assistants in lower-wage countries to do all the tasks in their life that don’t require a personal presence. Such assistants are found starting at a few bucks an hour. I have not done it myself, since for some reason most of the things I feel I could pass on to such an assistant are things that involve some personal presence. (Though I suppose I could just ship off all the papers I need scanned and filed every few weeks to get that out of my life, but I want to have a scanner here too.)
Anyway, last weekend I was talking to an acquaintance about his use of such services. He has his assistant seducing women for him. His assistant, who is female and lives in India, logs onto his account on a popular dating site, browses profiles and (pretending to be him) makes connections with women on the site. She has e-mail conversations and arranges first dates. Then her employer reads the e-mail conversation and goes to the date. (Perhaps he also does a quick vet before arranging a date to be sure the assistant has chosen well, but I did not confirm that.)
I don’t often write about robots that don’t go on roads, but last night I stopped by Willow Garage, the robot startup created by my old friend Scott Hassan. Scott is investing in building open robotics platforms, and giving much of it out free to the world, because he thinks progress in robotics has been far too slow.
Last night they unveiled their beta PR2 robots and gave 11 of them to teams from 11 different schools and labs. Those institutions will all be trying to do something creative with the robots, just as a Berkeley team quickly taught one to fold towels a few months ago.
I must admit, as they marched out the 11 robots and had them do a synchronized dance, there was a moment (about 2 minutes 20 seconds into that video) when it reminded me of a scene from some techno-thriller, where the evil overlord unveils his new robots to an applauding crowd, and the robots then turn and kill all the humans. Fortunately this did not happen. The real world is very different, and these robots will do a lot of good. They have a lot of processing power, various nice sensors and two arms with 7 degrees of freedom. They run ROS, an open-source robot operating system which now runs on many other robots.
I was interested because I have proposed that having an open simulator platform for robocars could also spur development from people without the budgets to build their own robocars (and crash them during testing.) A robocar test model is going to involve at least $150,000 today and will get damaged in development, and that’s beyond small developers. The PR2 beta models cost more than that, but Willow Garage’s donations will let these teams experiment in personal robotics.
Of course, it would be nice for robocars if there were an inexpensive robocar that teams could get and test. Right now though, everybody wants a sensor as nice as the $75,000 Velodyne LIDAR that powered most of the top competitors in the DARPA urban challenge, and you can’t get that cheaply yet — except perhaps in simulator.
It is no coincidence that two friends of mine have both founded companies recently to build telepresence robots. These are easy-to-drive, remote-controlled robots which have a camera and screen at head height. You can inhabit the robot, drive it around a flat area and talk to people by videoconferencing. You can join meetings, go visit people or inspect a factory. Companies building these robots, initially at high prices, intend to sell them both to executives who want to remotely tour remote offices and to companies who want to give cheaper remote employees a more physical presence back at HQ.
There are also a few super-cheap telepresence robots, such as the Spykee, which runs Skype video conferencing and can be had for as low as $150. It’s not very good, and the camera is very low down, and there’s no screen, but it shows just how cheap such a product can get.
“Anybots” QA telepresence robot
When they get down to a price like that, it seems inevitable to me that we will see an emergency services robot on every block, primarily for use by the police. When there is a police, fire or ambulance call to an address, an officer could immediately connect to the robot on that block and drive it to the scene, to be telepresent. The robot would live in a small, powered protective closet either paid for by the city or, more likely, donated by some neighbour on the block who wants the fastest possible emergency response. Called into action, the robot’s garage door would open and the robot would drive out, and probably be at the location of the emergency within 60 to 120 seconds, depending on how densely the robots are placed. In the meantime actual first responders might also be on the way.
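The 60-to-120-second figure is easy to sanity-check. The spacing, speed and startup delay below are my assumed values, not numbers from any deployment:

```python
# Back-of-envelope response time for a per-block emergency robot. Spacing,
# speed and startup delay are all assumed values for illustration.

def worst_case_seconds(spacing_m=150, speed_mps=2.5, startup_s=10):
    """Garage opens, robot drives at most one spacing interval to the scene."""
    return startup_s + spacing_m / speed_mps

print(f"dense placement:  ~{worst_case_seconds():.0f} s")     # ~70 s
print(f"sparse placement: ~{worst_case_seconds(275):.0f} s")  # ~120 s
```

At walking pace, one robot every block or two is enough to hit the claimed window; halving the density roughly doubles the travel portion of the response.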
Watching and managing children is one of the major occupations of the human race. A true robot babysitter is still some time in the future, and getting robocars to the level where we will trust them to carry children safely is also somewhat in the future, but it will happen much sooner.
Today I want to explore the implications of a robocar that is ready to safely carry children of certain age ranges. This may be far away because people are of course highly protective of their children. They might trust a friend to drive a child, even though human driving records are poor, because the driver is putting her life on the line just as much as the child’s, while the robot is just programmed to be safe, with no specific self-interest.
A child’s robocar can be designed to higher safety standards than an adult’s, with airbags in all directions, crumple zones designed for a single occupant in the center and the child in a 5-point seatbelt. As you know, with today’s modern safety systems, racecar drivers routinely walk away from crashes at 150mph. Making a car that won’t hurt the child in a 40mph crash is certainly doable, though not without expense. A robocar’s ability to anticipate an accident might even allow it to swivel the seat around so that the child’s back is to the accident, something even better than an airbag.
The big issue is supervision of smaller children. It’s hard to say what age ranges of children people might want to send via robocar. In some ways infants are easiest, as you just strap them in and they don’t do much. All small children today are strapped in solidly, and younger ones are in a rear facing seat where they don’t even see the parent. (This is now recommended as safest up to age 4 but few parents do that.) Children need some supervision, though real problems for a strapped in child are rare. Of course, beyond a certain age, the children will be fully capable of riding with minimal supervision, and by 10-12, no direct supervision (but ability to call upon an adult at any time.)
One of the things that’s harder to predict about robocars is what they will mean for how cities are designed and how they evolve. We’re notoriously bad at predicting such things, but it is still tempting.
A world of robocars offers the potential for something I am dubbing the “poor man’s teleporter.” That’s a fleet of comfortable robotaxis that are, while you are in them, a fully functional working or relaxing environment. Such robotaxis would have a desk and large screen and very high speed wireless net connection. They have a comfy reclining chair (or bed) and anything else you need from the office environment. (Keyboards and mice are problematic, as I have discussed elsewhere, but there may be ways to solve that.)
The robotaxi will deliberately pick the most comfortable route for a trip, with few turns, few stops and gentle acceleration. It will gimbal in corners and have an active suspension system eliminating bumps. The moment you enter it, your desktop could appear on the screen, copied from the desk you left (thanks to communication with one of your wearable devices, probably.) You can do high quality videoconferencing, work on the net, or just watch a video or read a book — the enclosed book reader could be set to the page you were last reading elsewhere. If you work in a building with a lobby, the electric robotaxi could enter the lobby and meet you right at the elevator. It might even go vertical and ride up the elevator to get you during less busy times. (For some real science fiction, the robotaxis in Minority Report somehow climbed the buildings and parked in people’s homes.)
For many it would be as though they had not left their desks. Almost all the trip will be productive time. As such, while people won’t want to spend forever in the car, many might find distance and trip time to not be particularly important, at least not for trips around town during the workday. While everybody wants to get home to family sooner, even commute times could become productive times with employers who let the employee treat the travel time as work time. Work would begin the moment you stepped into the car in the morning.
We’ve seen a taste of this in Silicon Valley, as several companies like Google and Yahoo run a series of commute vans for their employees. These vans have nice chairs, spaces for laptops and wireless connectivity into the corporate network. Many people take advantage of these vans and live in places like San Francisco, which may be an hour-long trip to the office. The companies pay for the van because the employees start the workday when they get on it.
This concept will continue to expand, and I predict it will expand into robocars. The question is: what does it mean for how we live if we eliminate the time-cost of distance from many trips? What if we started viewing our robotaxis almost like a teleporter, something that takes almost no time to get us where we want to go? It’s not really no time, of course, and if you have to make a meeting you still have to leave in time to get there. It might be easier for some to view typical 15-minute trips around a tight urban area as no time, while viewing 30-60 minute trips as productive but “different time.”
Will this make us want to sprawl even more, with distance not being important? Or will we want to live closer, so that the trips are more akin to teleportation by being productive, short and highly predictable in duration? It seems likely that if we somehow had a real Star Trek-style transporter, we might all live in country homes and transport on demand to where the action is. That’s not coming, but the no-lost-time ride is. We might not be able to afford a house on the nice-walkable-shops-and-restaurants street, but we might live 2 miles from it and always be able to get to it, with no parking hassle, in 4 minutes of productive time.
What will the concept of a downtown mean in such a world? “Destination” retailers and services, like a movie house, might decide they have no real reason to be in a downtown when everybody is coming by robotaxi. Specialty providers will also see no need to pay a premium to be in a downtown. Right now they don’t get walk-by traffic, but they do like to be convenient to the customers who seek them out. Stores that do depend on walk-by traffic (notably cafes and many restaurants) will want to be in places of concentration and walking.
But what about big corporate offices that occupy the towers of our cities? They go there for prestige, and sometimes to make it easy to have meetings with other downtown companies. They like having lots of services for their employees and for the business. They like being near transit hubs to bring in those employees who like transit. What happens when many of these needs go away?
For many people, the choice of where to live is overwhelmingly dominated by their children — getting them nice, safe neighbourhoods to play in, and getting them to the most desired schools. If children can go to schools anywhere in a robocar, how does that alter the equation? Will people all want bigger yards in which to cocoon their children, relying on the robocar to take the children to play-dates and supervised parks? Might they create a world where the child goes into the garage, gets in the robocar and tells it to go to Billy’s house, and it deposits the child in that garage, never having been outside — again like a teleporter to the parents? Could this mean a more serious divorce between community and geography?
While all this is going on, we’re also going to see big strides in videoconferencing and virtual reality, both for work and as play-spaces for adults and children. In many cases people will be interacting through a different sort of poor man’s teleporter, one that takes zero time but offers no physical contact.
Clearly, not all of these changes match our values today. But what sensible steps could we actually take to promote those values? It doesn’t seem possible to ban the behaviours discussed above, or even to bend them much. What do you think the brave new city will look like?
It is often said that cars caused the suburbanization of cities. However, people didn’t decide they wanted a car lifestyle and thus move where they could drive more. They sought bigger lots and yards, and larger detached houses. They sought quieter streets. While it’s not inherent to suburbs, they also sought better schools for their kids and safer neighbourhoods. They gave up having shops, restaurants and people nearby in order to get those things, and accepted the (fairly high) cost of the car as part of the price, most often for the kids. Childless and young people like urban life; the flight to the suburbs was led by the parents.
This doesn’t mean they stopped liking the aspects of the “livable city.” Having stuff close to you. Having your friends close to you. Having pleasant and lively spaces to wander, and in which you regularly see your friends and meet other people. Walking areas with interesting shops and restaurants and escape from the hassles of parking and traffic. They just liked the other aspects of sprawl more.
They tried to duplicate these livable areas with shopping malls. Malls are too sterile and corporate, but they are also climate-controlled and safer, and they caused the downfall of many downtowns. Big box stores, more accessible from the suburbs, continued in the same vein.
The robotaxi will allow people to get more of what they sought from the “livable city” while still in sprawl. It will also let them get more of what they sought from the suburbs, in terms of safety and options for their children. They may still build pleasant pedestrian malls in which one can walk and wander among interesting things, but people who live 5 miles away will be able to get to them in under 10 minutes. They will be delivered right into the pedestrian zone, not to a sprawling parking lot. They won’t have to worry about parking, and what they buy could be sent to their home by delivery robot — no need to even carry it while walking among shops. They will seek to enjoy the livable space from 5 miles away the same way that people today who live 4 blocks away enjoy those spaces.
But there’s also no question that private malls will continue trying to meet this need. Indeed, the private malls will probably offer free or validated robotaxi service to the mall, along with delivery, if robotaxi service is as cheap as I predict it can be. Will the public spaces, with their greater variety and character, be able to compete? They will also have weather, homeless people and other aspects of street life that private malls push away.
The arrival of the robocar baby-sitter, which I plan to write about more, will also change urban family life. Stick the kid in the taxi and send him to the other parent, or a paid sitter service, all while some adult watches on video and redirects the vehicle to one of a network of trusted adults if some contingency arises. Talk about sending a kid to a time-out!