The instinct of many transportation planners is to build "smart infrastructure," and to plan for it 30 years out. That's impossible; nobody knows what "smart" will mean in 5 years. The internet solved this problem by making the infrastructure as stupid as possible, and it revolutionized the world. The internet teaches a lesson for all future infrastructure planning: keep the physical layer as simple as possible, and do everything in the virtual, software layer.
This weekend I went to the finals of the GoFly prize, a Boeing-sponsored contest for personal VTOL flying machines. Sadly, nobody was able to build one that could meet all the requirements in the rules, and only a few of the contestants could even fly. That was disappointing, but then so was the first DARPA Grand Challenge.
Most of the world was wowed by the Google Duplex demo, where their system was able to cold-call a hairdresser and make an appointment with her, with the hairdresser unaware she was talking to an AI. The system included human speech mannerisms and the ability to respond to the unscripted phrases the hairdresser threw back.
The primary purpose of the city is transportation. Sure, we share infrastructure like sewers and power lines, but the real reason we live in dense cities is so we can have a short travel time to the things in our lives, be they jobs, friends, shopping or anything else.
Sometimes that trip is a walking one, and indeed only the dense city allows walking trips to be short and also interesting. The rest of the trips involve some technology, from the bicycle to the car to the train. All that is about to change.
Earlier I posted my gallery of CES gadgets, and included a photo of the eHang 184 from China, a "personal drone" able, in theory, to carry a person weighing up to 100 kg.
Whether the eHang is real or not, some version of the personal automated flying vehicle is coming, and it's not that far away. When I talk about robocars, I am often asked "what about flying cars?" and there will indeed be competition between them. There are a variety of factors that will affect that competition, and many other social effects not yet much discussed.
The VTOL Multirotor
There are two visions of the flying car. The most common is VTOL -- vertical takeoff and landing -- something that may have no wheels at all because it's more a helicopter than a car or airplane. The recent revolution in automation and stability for multirotor helicopters -- better known as drones -- is making people wonder when we'll get one able to carry a person. Multirotors almost exclusively use electric motors because you must adjust speed very quickly to get stability and control. You also want the redundancy of multiple motors and power systems, so you can lose a rotor or a battery and still fly.
This creates a problem because electric batteries are heavy, and it takes a lot of power to fly this way. Carrying more batteries means more weight -- and thus more power needed to carry the batteries. The returns diminish quickly, and you can't get much speed, payload or range before the batteries are dead. That's fine in a 3 kilo drone, not fine in a 150 kilo one.
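The diminishing returns are easy to see with a little rotor physics. As a rough sketch (all the numbers below are illustrative assumptions, not a real aircraft design), momentum theory says hover power grows with weight to the 3/2 power, while the energy a battery carries grows only linearly with its mass:

```python
# Illustrative sketch of battery diminishing returns for a person-carrying
# multirotor. All constants are assumptions for illustration only.
import math

RHO = 1.225                      # air density, kg/m^3
G = 9.81                         # gravity, m/s^2
DISK_AREA = 0.5                  # total rotor disk area, m^2 (assumed)
DRY_MASS = 100.0                 # airframe + passenger, kg (assumed)
ENERGY_PER_KG = 200.0 * 3600     # ~200 Wh/kg lithium battery, in joules (assumed)

def hover_endurance_minutes(battery_kg: float) -> float:
    """Endurance from momentum theory: P = (m*g)^1.5 / sqrt(2*rho*A)."""
    mass = DRY_MASS + battery_kg
    power_w = (mass * G) ** 1.5 / math.sqrt(2 * RHO * DISK_AREA)
    return battery_kg * ENERGY_PER_KG / power_w / 60

# Each doubling of battery mass buys less and less extra hover time,
# because the battery must lift itself.
for kg in (20, 40, 80, 160):
    print(f"{kg:>4} kg battery -> {hover_endurance_minutes(kg):5.1f} min hover")
```

With these assumed numbers, doubling the battery from 80 kg to 160 kg adds only a couple of minutes of hover, which is the feedback loop described above.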
Lots of people are experimenting with designs that combine multirotor lift for takeoff and landing with traditional "fixed wing" (standard airplane) flight to cover distance. That is a great deal more efficient, but still a challenge to do on batteries for long flights. Other ideas include using liquid fuels in some way: either running a regular liquid-fuel motor as a generator (not very efficient), or combining direct drive of a master propeller with fine-control electric drive of smaller propellers for the dynamic control needed.
Another interesting option is the autogyro, which looks like a helicopter but needs a small runway for takeoff.
The traditional aircraft
Some "flying car" efforts have made airplanes whose wings fold up so they can drive on the road. These have never "taken off" -- they usually end up a compromise that is neither a very good car nor a very good plane. They need airports, but you can keep driving from the airport. They are not, for now, autonomous.
Some designs fly most of their miles and drive just short distances. Others are mostly for driving, but can make a "short hop" via parasailing or autogyro flight when desired.
HBO released a new version of "Westworld" based on the old movie about a robot-based western theme park. The show hasn't excited me yet -- it repeats many of the old tropes on robots/AI becoming aware -- but I'm interested in the same thing the original talked about -- simulated experiences for entertainment.
The new show misses what's changed since the original. I think it's more likely they will build a world like this with a combination of VR, AI and specialty remotely controlled actuators rather than with independent self-contained robots.
One can understand the appeal of presenting the simulation in a mostly real environment. But the advantages of the VR experience are many. In particular, with the top-quality, retinal resolution light-field VR we hope to see in the future, the big advantage is you don't need to make the physical things look real. You will have synthetic bodies, but they only have to feel right, and only just where you touch them. They don't have to look right. In particular, they can have cables coming out of them connecting them to external computing and power. You don't see the cables, nor the other manipulators that are keeping the cables out of your way (even briefly unplugging them) as you and they move.
This is important to get data to the devices -- they are not really robots, since their control logic is elsewhere, though we will call them robots -- but even more important for power. Perhaps the most science-fictional thing about most TV robots is that they can run for days on internal power. That's actually very hard.
The VR has to be much better than we have today, but it's not as much of a leap as the robots in the show. It needs to be at full retinal resolution (though only in the spot your eyes are looking) and it needs to be able to simulate the "light field" which means making the light from different distances converge correctly so you focus your eyes at those distances. It has to be lightweight enough that you forget you have it on. It has to have an amazing frame-rate and accuracy, and we are years from that. It would be nice if it were also untethered, but the option is also open for a tether which is suspended from the ceiling and constantly moved by manipulators so you never feel its weight or encounter it with your arms. (That might include short disconnections.) However, a tracking laser combined with wireless power could also do the trick to give us full bandwidth and full power without weight.
It's probably not possible to let you touch the area around your eyes and not feel a headset, but add a little SF magic and it might be reduced to feeling like a pair of glasses.
The advantages of this are huge:
- You don't have to make anything look realistic; you just need to be able to render it in VR.
- You don't have to physically build anything nobody will touch or go to, including most backgrounds and scenery.
- You don't even need to keep rooms around, if you can quickly have machines put in the props when needed before a player enters the room.
- In many cases, instead of some physical objects, a very fast manipulator might quickly place textures and surfaces in your way just before you touch them. For example, imagine if, instead of a wall, a machine with a few squares of wall surface quickly holds one out anywhere you're about to touch. Instead of a door there is just a robot arm holding a handle that moves as you push and turn it.
- Proven tricks in VR can get people to turn around without realizing it, letting you create vast virtual spaces in small physical ones. The spaces will be designed to match what the technology can do, of course.
- You will also control the audio and cancel sounds, so your behind-the-scenes manipulations don't need to be fully silent.
- You do it all with central computers, you don't try to fit it all inside a robot.
- You can change it all up any time.
In some cases, you need the player to "play along" and remember not to do things that would break the illusion. Don't try to run into that wall or swing from that light fixture. Most people would play along.
For a lot more money, you might some day be able to do something more like Westworld. That has its advantages too:
- Of course, the player is not wearing any gear, which will improve the reality of the experience. They can touch their faces and ears.
- Superb rendering and matching are not needed, nor the light field or anything else. You just need your robots to get past the uncanny valley.
- You can use real settings (like a remote landscape for a western) though you may have a few anachronisms. (Planes flying overhead, houses in the distance.)
- The same transmitted power and laser tricks could work for the robots, but transmitting enough power to power a horse is a great deal more than enough to power a headset. All this must be kept fully hidden.
The latter experience will be made too, but it will be more static and cost a lot more money.
Yes, there will be sex
Warning: We're going to get a bit squicky here for some folks.
Westworld is on HBO, so of course there is sex, though mostly just a more advanced vision of the classic sex robot idea. I think that VR will change sex much sooner. In fact, there is already a small VR porn industry, and even some primitive haptic devices which tie into what's going on in the porn. I have not tried them and do not imagine they are very sophisticated yet, but that will change. Indeed, it will change to the point where porn of this sort becomes a substitute for prostitution, with some strong advantages over the real thing (including, of course, the questions of legality and exploitation of humans).
Elon Musk likes to say pretty controversial things off the cuff, and so do I, but he inspired a number of threads by saying at Re:Code that there's a billion to one chance we're living in base reality. To use his word, this world is almost surely a "simulation."
This weekend I went to Pomona, CA for the 2015 DARPA Robotics Challenge which had robots (mostly humanoid) compete at a variety of disaster response and assistance tasks. This contest, a successor of sorts to the original DARPA Grand Challenge which changed the world by giving us robocars, got a fair bit of press, but a lot of it was around this video showing various robots falling down when doing the course:
What you don't hear in this video are the cries of sympathy from the crowd of thousands watching -- akin to the gasps when a figure skater falls -- or the cheers as each robot would complete a simple task to get a point. These cheers and sympathies were not just for the human team members, but in an anthropomorphic way for the robots themselves. Most of the public reaction to this video included declarations that one need not be too afraid of our future robot overlords just yet. It's probably better to watch the DARPA official video, which has a little audience reaction.
Also note the lesser-known fact that there was a lot of remote human tele-operation involved in the running of the course.
What you also don't see in this video is just how very far the robots have come since the first round of trials in December 2013. During those trials the amount of remote human operation was very high, and there weren't a lot of great fall videos because the robots had tethers that would catch them if they fell. (These robots are heavy and many took serious damage when falling, so almost all testing is done with a crane, hoist or tether able to catch the robot during the many falls which do occur.)
We aren't yet anywhere close to having robots that could do tasks like these autonomously, so for now the research is in making robots that can do tasks with more and more autonomy with higher level decisions made by remote humans. The tasks in the contest were:
- Starting in a car, drive it down a simple course with a few turns and park it by a door.
- Get out of the car -- one of the harder tasks as it turns out, and one that demanded a more humanoid form
- Go to a door and open it
- Walk through the door into a room
- In the room, go up to a valve with a circular handle and turn it 360 degrees
- Pick up a power drill, and use it to cut a large enough hole in a sheet of drywall
- Perform a surprise task -- in this case throwing a lever on day one, and on day two unplugging a power cord and plugging it into another socket
- Either walk over a field of cinder blocks, or roll through a field of light debris
- Climb a set of stairs
The robots have an hour to do all this, so they are often extremely slow. Yet to the surprise of most, the audience -- a crowd of thousands, with thousands more online -- watched with fascination and cheering, even when robots would take one step per minute, pause at a task for several minutes, or get into trouble and spend a 10-minute penalty being fixed by humans.
In August, I attended the World Science Fiction Convention (WorldCon) in London. I did it from Coeur d'Alene, Idaho by means of a remote Telepresence Robot(*). The WorldCon is half conference, half party, and I was fully involved -- telepresent there for around 10 hours a day for 3 days, attending sessions, asking questions, going to parties. Back in Idaho I was speaking at a local robotics conference, and I also attended a meeting back at the office through an identical device while there.