In the wake of the election, the big nerd story is the perfect stats-based prediction that Nate Silver of the 538 blog made of the results in every single state. I was following the blog and, like everyone, am impressed with his work. The perfection gives the wrong impression, however. Silver would be the first to point out that he predicted Florida as very close with a slight lean for Obama, and while that is what happened, that’s really just luck. His actual prediction was that it was too close to call. But people won’t see that; they see the perfection. I hope he realizes he should try to downplay this. For his own sake, if he doesn’t, he has nowhere to go but down in 2014 and 2016.
But the second reason is stronger. People will put even more faith in polls. Perhaps even not faith, but reasoned belief, because polls are indeed getting more accurate. Good polls that are taken far in advance are probably accurate about what the electorate thinks then, but the electorate itself is not that accurate far in advance. So the public and politicians should always be wary about what the polls say before the election.
Silver’s triumph means they may not be. And as the metaphorical Heisenberg predicts, the observations will change the results of the election.
There are a few ways this can happen. First, people change their votes based on polls. They are less likely to vote if they think the election is decided, and they sometimes file protest votes when they feel their vote won’t change things. Conversely, a close poll is one way to increase turnout, as both sides push their voters to make the difference. People are going to think the election is settled because 538 has told them what people are feeling.
The second big change has already been happening. Politicians change their platforms due to the polls. Danny Hillis observed some years ago that the popular vote is almost always a near tie for a reason. In a two party system, each side regularly runs polls. If the polls show them losing, they move their position in order to get to 51%. They don’t want to move to 52% as that’s more change than they really want, but they don’t want to move to less than 50% or they lose the whole game. Both sides do this, and to some extent the one with better polling and strategy wins the election. We get two candidates, each with a carefully chosen position designed to (according to their own team) just beat the opposition, and the actual result is closer to a random draw driven by chaotic factors.
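Danny’s observation can be sketched as a toy simulation. Voters are ideal points on a 0-to-1 spectrum, each voting for the nearer candidate, and whichever candidate trails in the polls nudges toward the center. The numbers and the adjustment rule here are my own illustrative assumptions, not anything from Hillis:

```python
# Toy model of the poll-chasing dynamic: two candidates adjust their
# positions toward the center whenever polls show them losing.
# Voters sit on a 0..1 spectrum and vote for the nearer candidate.

def vote_share(a, b, voters):
    """Fraction of voters strictly closer to candidate at position a."""
    return sum(1 for v in voters if abs(v - a) < abs(v - b)) / len(voters)

def campaign(a, b, voters, rounds=50, step=0.01):
    """Each round, whichever candidate is trailing nudges toward 0.5."""
    for _ in range(rounds):
        share_a = vote_share(a, b, voters)
        if share_a < 0.5:
            a += step if a < 0.5 else -step   # a trails: move centerward
        elif share_a > 0.5:
            b += step if b < 0.5 else -step   # b trails: move centerward
    return a, b, vote_share(a, b, voters)

# Candidates starting far apart still converge to a near tie -- and to
# positions roughly equidistant from the center, not identical ones.
voters = [i / 999 for i in range(1000)]
a, b, share = campaign(0.2, 0.9, voters)
```

Note that in this sketch the candidates end up equally far from the hypothetical center rather than in the same place, which matches the point about Danny’s theory below.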
Well, not quite. As Silver shows, the electoral college stops that from happening. The electoral college means different voters have different value to the candidates, and it makes the system pretty complex. Instead of aiming for a total of voters, you have to worry that position A might help you in Ohio but hurt you in Florida, and the electoral votes happen in big chunks, which makes the effect of swing states more chaotic. Thus poll analysis can tell you who will win but not so readily how to tweak things to make the winner be you. The college turns small differences in overall support into huge differences in electoral votes.
In Danny’s theory, the two candidates do not have to be the same, they just have to be the same distance from a hypothetical center. (Of course to 3rd parties the two candidates do tend to look nearly identical but to the members of the two main parties they look very different.)
Show me the money?
Many have noted that this election may have cost $6B but produced a very status quo result. Huge money was spent, but opposed forces also spent their money, and the arms race just led to a similar balance of power. Except a lot of rich donors spent a lot of their money, got valuable access to politicians for it, and some TV stations in Ohio and a few other states made a killing. The fear that corporate money would massively swing the process does not appear to have gained much evidence, but it’s clear that influence was bought.
I’m working on a solution to this, however. More to come later on that.
While there have been some fairly good ballot propositions (such as last night’s wins for marijuana and marriage equality) I am starting to doubt the value of the system itself. As much as you might like the propositions you like, if half of the propositions are negative in value, the system should be scrapped. Indeed, if only about 40% are negative, it should still be scrapped because of the huge cost of the system itself.
Last month, I invited Gregory Benford and Larry Niven, two of the most respected writers of hard SF, to come and give a talk at Google about their new book “Bowl of Heaven.” Here’s a YouTube video of my session. They did a review of the history of SF about “big dumb objects” — stories like Niven’s Ringworld, where a huge construct is a central part of the story.
On Tuesday I stopped by the Atlantic’s Big Science day where Chris Gerdes of Stanford’s CARS centre announced results of their race between a robocar and humans on a racetrack. The winner — the humans, but only by a small margin. The CARS team actually studied human driver actions to program their car, but found the human drivers have gotten very good at squeezing most of the available performance out of the vehicles, leaving little room for the robot to improve.
Another result reaffirmed studies of passenger reactions. People taken through the first lap were quite scared, but by the next lap they relaxed and gained confidence in the system. This result shows up time and time again, and has convinced me that while many people tell me they think robocars will not become popular because people will be too scared to ride them, those people are wrong, even about their own behaviour. Most of them, at least.
Also on the Stanford front, Bryant Walker Smith, who has decided to make robocar law a specialty, has released an analysis of the legality of robocars in the USA. The conclusion — robocars which have a human occupant who can take the wheel in the event of a problem are probably legal in almost all states, not just the states that have explicitly made them legal.
DARPA humanoid robot contest includes driving
DARPA ran the 3 grand challenges for robocars but stopped in 2007 after the urban challenge. Their latest challenge contest involves making humanoid robots, and the DARPA Robotics Challenge includes a phase where your robot should be able to do a variety of tasks on rough terrain, including getting into a car and driving it. There are 4 tracks to the challenge. Three are in the physical world, with either provided robots or team-built robots. The 4th is in the virtual world, which will allow smaller teams to compete without the cost of working with a physical robot. I have written before about the opportunities of a robocar simulator for testing and contests, and so I am eager to see how this simulator develops.
Research in China has advanced. The National Natural Science Foundation has announced the goal of doing a short drive near Beijing and finally a long trip all the way to Shenzhen, 2400km away. This project primarily uses vision and radar, so it will be interesting to see if they can do this reliably without lasers.
Three big automaker announcements — and not about V2V even though the ITS World Congress is going on this week.
First, Nissan, whose self-parking Leaf I just wrote about has also announced a steer-by-wire system and tests of a car that will swerve to avoid a sudden obstacle. Of course almost all cars have power steering, but in a steer-by-wire car there is no mechanical linkage by default from the steering wheel to the steering motors. This allows a wheel to have “software defined feel” and is good for eventual robocars. In such cars a fail-safe restores a mechanical link if the main system fails.
However, the swerving car, which is demonstrated avoiding a cardboard pedestrian which jumps out into the road, is a new level of technology for major car makers. (Braking for these obstacles has been done for a while.)
Not much later, Jerusalem company MobilEye announced they had converted an Audi A7 to self-drive using 5 of their cameras as well as radar. MobilEye makes the vision system found in a lot of different cars — their specialty is a dedicated chip for vision processing. This article, which is in Hebrew, outlines the car, which cost 588K NIS to build.
Volvo, which uses MobilEye, announced today that their 2014 cars would feature a traffic jam assist. Several companies have announced traffic jam assist (which is a low speed lane-keeping plus ACC) but Volvo has put a firm date on it. Also new in Volvo’s system is doing more than following lane markers — it also swerves to follow the car in front of it, as long as that car stays in the lane.
Of course, this does leave open the question of what happens if 2 or more of these start following one another, but that’s some time in the future and they have time to work on it.
In other news, the NHTSA has announced a grant to a team at Virginia Tech to research safety standards for robocar user interfaces. They have in the past stated they think the handoff between manual and automatic is an important safety function they might regulate.
And yes, there is lots of V2V news from the ITS world congress, but my skepticism for most forms of V2V remains high.
Nissan is showing a modified Leaf able to do “valet” park in a controlled parking lot. The leaf downloads a map of the lot, and then, according to Nissan engineers, is able to determine its position in the lot with 4 cameras, then hunt for a spot and go into it. We’ve seen valet park demonstrations before, but calculating position entirely with cameras is somewhat new, mainly because of the issues with how lighting conditions vary. In an indoor parking garage it’s a different story, and camera based localization under the constant lighting should be quite doable.
This other video from Engadget with a more detailed demo shows the view from the car’s cameras, which appear to be on the side mirrors as well as front and back for a synthetic 360 degree view. They also have an Android app for control and the ability to view through the cameras. Alas, chances are low you would get that bandwidth in the parking garage, but it’s a cool demo.
There was a huge raft of press coverage after last week’s signing of the California law. This ranged from polls showing strong acceptance of the tech to editorial critiques about the law being too fanciful or the technology taking jobs. (It is true that there will be job displacement, but at the same time, Americans spend about 50 billion hours driving which is a much larger sink on the GDP.)
Tonight I will be on a panel at the Palo Alto International Film Festival at 5pm. Not on robocars, but on the role of science fiction in movies in changing the world. (In a past life, I published science fiction and am on this panel by virtue of my faculty position at Singularity University.)
A follow-up thought about yesterday’s shuttle fly-by and panorama. I was musing, might this be perhaps the most photographed single thing in human history to date?
Here’s the reasoning. Today there are more cameras and more photographers than ever, and people use them all the time in a way that continues to grow. To be a candidate for a most-photographed event, you would need to be recent, and you would need to take place in front of a ton of people, ideally with notice. It seemed like just about everybody in Sacramento, the Bay Area and LA was out for this and holding up a phone or camera.
Of course, many objects are more photographed, like the Golden Gate Bridge the shuttle flew over, but I’m talking here of the event rather than the object. Admittedly, this is an event which moved over the course of thousands of miles.
The other shuttle fly-overs done over New York and Washington — also with large populations
Total eclipses of the sun which go over highly populated areas. The 2009 eclipse went over Shanghai, Varanasi and many other hugely populated areas but was clouded out for many. Nobody has yet made a photo of an eclipse that looks like an eclipse, of course — I’ve seen them all, including many of the clever HDRs and overlays — but that doesn’t stop people from trying.
The 1999 eclipse did go over a number of large European cities, but this was before the everybody-is-photographing era
Most lunar eclipses are seen by as much as half the world, though they are hard to photograph with consumer camera gear, and only a fraction of people go out to watch and photograph them, but they could easily be a winner.
Prior to the digital era, a possible winner might be the moon landing. Back in 1969, every family had a camera, though usage wasn’t nearly what it is today. However, I remember the TV giving lessons on how to photograph a TV screen. Everybody was shooting their TV for the launches and the walk on the moon. Terrible pictures (much like early camera phone pictures) but people took them to be a part of the event. I recall taking one myself though I have no idea where it is.
Of course there may be objective ways to measure this today, by tracking the number of photos on photo sharing and social sites, and extrapolating the winner. If the shuttle is the winner for now, it won’t last long. Photography is going to grow even more.
I should also note that remote photography, like we did for Apollo, is clearly much larger, in the form of recording video. For those giant events viewed by billions — World Cup, Olympics, Oscars etc. — huge numbers of people are recording them, at least temporarily.
Today marked the last trip through the air for the space shuttle, as the Endeavour was carried to LA to be installed in a museum. The trip included fly-overs of the Golden Gate bridge and many other landmarks in SF and LA, and also a low pass over NASA Ames at Moffett Field, where I work at Singularity University. A special ceremony was done on the tarmac, and I went to get a panoramic photo. We all figured the plane would come along the airstrip, but they surprised us, having it fly a bit to the west so it suddenly appeared from behind the skeleton of Hangar One, the old dirigible hangar. That turned out to be bad for my photography, as I didn’t get much advance notice, and the shot of the crowd I had done a few minutes before had everybody expectantly looking along the runway, and not towards the west where the plane and shuttle appear in my photo.
However, it did make for a very dramatic arrival. So while different parts of this shot are at slightly different times, it does capture the scene of Moffett Field and the crowd awaiting the shuttle, and its arrival. I do however have a nice hi-res photo for you to enjoy as well as the panoramic shot of the Endeavour shuttle fly-by.
Tomorrow (Wed Sep 19) I will give a robocars talk at Dorkbot SF in San Francisco. Dorkbot is a regular gathering of “People doing strange things with electricity” and there will be two other sessions.
Last week, the SARTRE project announced it was concluding after a long period of work on highway platooning. Volvo led the project, which demonstrated platoons on test tracks and on some real roads. They also did a number of worthwhile user studies in simulation.
People have been interested in platooning for a while. The main upsides they are looking for are:
It’s much easier than a robocar — the platoon is led by a truck with a professional driver who handles everything with human intelligence
Putting the cars at short spacings can result in a huge increase in highway capacity, though you tend to want somewhat larger headways around the convoys
There is fuel saving — about 10% or so for the lead vehicle, and up to 30% for following vehicles, at spacings of about 4 to 6m. This is not quite as much as people hoped but it is real.
The equipment in the following cars is simple — V2V radios and possibly some radar for backup.
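The capacity upside can be put in rough numbers: lane throughput is speed divided by the road space each vehicle occupies (vehicle length plus gap). The figures below (car length, speed, the human 2-second headway) are my own illustrative assumptions, not SARTRE results:

```python
# Rough lane-capacity arithmetic for platooning: compare a human driver's
# ~2 second following distance with the 4-6 m gaps quoted for fuel saving.
# All input numbers are illustrative assumptions.

def lane_capacity(speed_ms, vehicle_len_m, gap_m):
    """Vehicles per hour through one lane at a steady speed and spacing."""
    return 3600 * speed_ms / (vehicle_len_m + gap_m)

speed = 27.0     # ~60 mph, in m/s
car_len = 4.5    # typical sedan, in m

# Human drivers keeping a 2 second headway (54 m gap at this speed).
human = lane_capacity(speed, car_len, 2.0 * speed)

# Platooned cars at a 5 m gap, mid-range of the fuel-saving spacing.
platoon = lane_capacity(speed, car_len, 5.0)
```

The naive result is several times the human-driven capacity, though as noted above the larger headways needed around each convoy eat into that gain in practice.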
Unfortunately, platooning comes with some downsides as well
If you have an accident, it can be catastrophic as you might crash a whole convoy of vehicles.
Non-platoon drivers may interfere with the convoy. The gaps must be kept small enough that nobody tries to enter them. A non-member in the middle of the convoy is bad news. You need small gaps to save fuel too.
Trucks must go only at the front of the convoy due to their longer stopping distance. New trucks must insert in the middle. Cars can insert more easily at the end of the convoy.
Convoys in the right lane can make it harder for people merging, and in general they can present a barrier to traffic.
Driving with a short gap is disconcerting. Behind a truck, you can’t even see the lane markers.
In rain, your windshield gets completely washed out with spray (and sometimes salt spray) which is even more disconcerting.
Following cars get hit by small stones and debris from the forward vehicle. After a long period of following, windshields are unacceptably chipped or cracked.
While radar is the primary means of tracking the car in front, and almost all vehicles do a nice radar reflection from the rear licence plate, many vehicles have other reflections further forward. You must avoid trying to follow 4m behind the front of a truck! To help this, vehicles in the tests had superior radar reflectors mounted on them.
For good workable convoys, some of these problems need to be solved. It could be that in rain convoys must spread out (losing a lot of the fuel saving) though there is the danger of cars cutting in.
Convoys with longer gaps can still increase road capacity a lot, but they probably have to be robocar convoys. Robocar convoys can handle cars trying to cut into the gaps. They may wish to start honking if somebody cuts in (and the car in front might also flash its rear lights and slow slightly to make it very clear to the offending driver that they should not have done this.) This would be a problem when convoys are new, as people might not know what it all means, though they would have tried to go into a space that is clearly too small to safely enter. Cars in convoys might need to have a screen on the back that can display a sign: “You have barged into a convoy, change lanes immediately or be reported to police.”
Robocars could handle the rain to some degree, but even their laser sensors would not like operating in heavy spray, though their radars would get excellent returns from a reflector on the vehicles.
The stone chip problem is harder to solve. Robocars capable of full auto operation could try to protect their windshields, but this is disconcerting to occupants. And the rest of the car gets stone chips too.
It could be that platooning is only practical with vehicles that are dedicated to it, such as highway commute vehicles and long distance highway vehicles. Built for this purpose, they would just accept the stone chips as part of life. They might come with extra heavy duty wipers or other ways to deal with the rain. And they would be full robocars, able to handle disconnects and independent operation.
This result will disappoint those who felt platoons were a good early technology. I have felt they also suffered from a critical mass problem. To use a platoon, you would need to find one, and until the density of lead vehicles was high enough, you might not find one. You could do it at rush hour with mobile apps that track the presence of lead vehicles so you can time your departure to find one — you might even have an appointment for every commute. And they might run only on nice clean highways on dry days and still be valuable. But less valuable, I am afraid.
On lower speed roads the fuel saving is not much, but the problems are less. There are traffic lights on most low speed roads though which present another problem.
A round-up of just some of the recent robocar news:
Stanford Shelly at 120mph
While the trip up Pikes Peak by Stanford’s Audi TT did not offer the high speeds we had hoped for, they have recently been doing some real race driving tests, clocking the car around a track at 120mph. Even more impressive because this car drives with limited sensors. Here the goal is to test computer-driven high-speed tactics — rounding corners, climbing hills and more. While they didn’t quite reach the times of professional drivers, chances are someday they will, just from the perfect understanding of physics.
Driving this fast is hard in the real world because you’re going beyond the range of most sensors (radar and special lidars can go further, and cameras can see very far but are not reliable in all lighting). The Stanford team had a closed track, so they were able to focus on cornering and skidding.
KPMG report on self-driving cars
The consulting firm KPMG has released an extensive report on self-driving cars. While it doesn’t contain too much that is new to readers of this site and blog, it joins the group which believes that car-to-car communication is going to be necessary for proper deployment of robocars. I don’t think so, and in fact think the idea of waiting for it is dangerous.
Speaking of V2V communication
For some time the V2V developers have been planning a testbed project in Ann Arbor, MI. They’ve equipped 3000 cars with “here I am” transponders that will broadcast their GPS data (position and velocity) along with other car data (brake application, turn signals, etc.) using DSRC. It is hoped that while these 3000 civilian cars will mostly wander around town, there will be times when the density of them gets high enough that some experiments on the success of DSRC can be made. Most of the drivers of the cars work in the same zone, making that possible.
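As a sketch of what such a “here I am” broadcast carries, here is a minimal fixed-size encoding of position, velocity, and car state. The layout is purely illustrative — the real transponders use the standardized SAE J2735 Basic Safety Message, not this format:

```python
# Illustrative (NOT standard) encoding of a V2V "here I am" heartbeat:
# position, speed, heading, and basic car state in a 26-byte packet.
import struct

# little-endian: lat, lon (doubles), speed m/s, heading deg (floats),
# brake flag and turn-signal code (bytes)
FMT = "<ddffBB"

def pack_heartbeat(lat, lon, speed, heading, brake_on, turn_signal):
    return struct.pack(FMT, lat, lon, speed, heading, brake_on, turn_signal)

def unpack_heartbeat(data):
    lat, lon, speed, heading, brake, signal = struct.unpack(FMT, data)
    return {"lat": lat, "lon": lon, "speed": speed,
            "heading": heading, "brake": bool(brake), "signal": signal}

# A car in Ann Arbor heading east at ~30 mph with the brakes on.
msg = pack_heartbeat(42.2808, -83.7430, 13.4, 90.0, True, 0)
decoded = unpack_heartbeat(msg)
```

Even a message this small, broadcast 10 times a second by every car, is enough for neighbours to project trajectories and flag imminent conflicts, which is what the testbed aims to measure.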
If they don’t prove the technology, they probably won’t get the hoped-for 2013 mandate that all future cars have this technology in them. If they don’t get that, the 75 MHz of coveted spectrum allocated to DSRC will get other hungry forces going after it.
I owe readers a deeper analysis of the issues around vehicle-to-vehicle communications.
Google cars clock 300,000 miles
Google announced that our team (they are a consulting client) has now logged 300,000 miles of self-driving, with no accidents caused by the software. It was also acknowledged that the team has converted a hybrid Lexus RX-450h in addition to the Toyota Prius. Certainly a more comfortable ride, and the new system has very nice looks.
Google will also begin internal testing with team members doing solo commutes in the vehicles. Prior policy was that vehicles were always operated off-campus with two staff on board, as is appropriate for prototype systems.
Political attack ad goes after robocars
Jeff Brandes pushed Florida’s legislation to allow robocar testing and operations in that state, the second after Nevada. Now his political opponents have produced an ad which suggests robocars are dangerous and you shouldn’t vote for Mr. Brandes because of his support of them. While we should expect just about anything in attack ads, this is a harbinger of the real debate to come. I doubt the authors of the ads really care about robocars — they just hope to find anything that might scare voters. My personal view, as I have said many times, is that while the technology does have to go through a period where it is less safe because it is being prototyped and developed, the hard truth is that the longer we wait to deploy it, the more years we rack up with 34,000 killed on the roads in the USA and 1.2 million worldwide. And Florida’s seniors are among the first on the list to need robocars. Is Jim Frishe’s campaign thinking about that?
Collision Warning strongly pushed in Europe
The EU is considering changing its crash-safety rules so that a car can’t get a 5-star rating unless it has forward collision warning, or even forward-collision mitigation (where the system brakes if you don’t.) These systems are already proving themselves, with data suggesting 15% to 25% reductions in crashes — which is pretty huge. While the law would not force vendors to install this, there are certain car lines where a 5-star rating is considered essential to sales.
For years I have posed the following question at parties and salons:
By the 25th century, who will be the household names of the 20th century?
My top contender, Armstrong, died today. I pick him because the best known name of the 15th century is probably Columbus, also known as the first explorer to reach a major location — even though he probably wasn’t the actual first.
Oddly, while we will celebrate him today and for years to come, Armstrong has for the past few decades been able to walk down the street unlikely to be recognized, in his own time. I had his photo on my wall as a child (along with Aldrin and Collins). They were the only faces I ever put on my wall, my childhood heroes. I was not alone in this.
Unlike Columbus, who led his expedition, Armstrong was one of a very large team, the one picked for the most prominent role. He was no mere cog of course, and his flying made the difference in having a successful mission.
Others of the 15th century who are household names today are:
Henry V (thanks to Shakespeare, I suspect) and Richard III
Vlad the Impaler (thanks to legends)
Some artists (Bosch, Botticelli)
Amerigo Vespucci (only by virtue of getting two continents named after him)
As we see, some are famous by accident (writers etc. picked up their stories). That may even be true for Jeanne d’Arc, whose story might otherwise have been preserved mostly in French lore.
The great inventors and scientists like Gutenberg and Leonardo give a clue to help. Guru Nanak founded a major religion but his name is not well known outside that religion.
So while many people suggest Hitler will be one of the names, I am more doubtful. I think it would be appropriate if his evil is forgotten, after all he wasn’t even the greatest butcher of the 20th century.
No, I think the fame will go to explorers and scientists, and possibly some artists from our time. We may not even know what names will be romanticised. Some candidates I suspect are:
Drexler or Feynman if nanotechnology as they envisioned it arrives
Crick and Watson (or even Venter) if control of DNA is seen as central
Von Neumann, Turing or others if computers are seen as the great invention of the 20th century (which they may be.)
It’s hard to say what music, writing, movies or other art will endure and be remembered. Did the 20th century get a Shakespeare?
What are your nominations? Of the people I list above, once again all of them were capable of walking down the street without being recognized, just as Armstrong could. I suspect in the pre-camera days, so could Columbus and Gutenberg.
I’m watching the Olympics, and my primary tool as always is MythTV. Once you do this, it seems hard to imagine watching them almost any other way. Certainly not real time with the commercials, and not even with other DVR systems. MythTV offers a really wide variety of fast forward speeds and programmable seeks. This includes the ability to watch at up to 2x speed with the audio still present (pitch adjusted to be natural) and a smooth 3x speed which is actually pretty good for watching a lot of sports. In addition you can quickly access 5x, 10x, 30x, 60x, 120x and 180x for moving along, as well as jumps back and forth by some fixed amount you set (like 2 minutes or 10 minutes) and random access to any minute. Finally it offers a forward skip (which I set to 20 seconds) and a backwards skip (I set it to 8 seconds.)
MythTV even lets you customize these numbers so you use different numbers for the Olympics compared to other recordings. For example the jumps are normally +/- 10 minutes and plus 30 seconds for commercial skip, but Myth has automatic commercial skip.
A nice mode allows you to go to smooth 3x speed with closed captions, though it does not feature the very nice ability I’ve seen elsewhere of turning on CC when the sound is off (by mute or FF) and turning it off when sound returns. I would like a single button to put me into 3xFF + CC and take me out of it.
Anyway, this is all very complex but well worth learning because once you learn it you can consume your sports much, much faster than in other ways, and that means you can see more of the sports that interest you, and less of the sports, commercials and heart-warming stories of triumph over adversity that you don’t. With more than 24 hours a day of coverage it is essential you have tools to help you do this.
I have a number of improvements I would like to see in MythTV like a smooth 5x or 10x FF (pre-computed in advance) and the above macro for CC/FF swap. In addition, since the captions tend to lag by 2-3 seconds it would be cool to have a time-sync for the CC. Of course the network, doing such a long tape delay, should do that for you, putting the CC into the text accurately and at the moment the words are said. You could write software to do that even with human typed captions, since the speech-recognition software can easily figure out what words match once it has both the audio and the words. Nice product idea for somebody.
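The word-matching idea above can be sketched with a standard sequence aligner: match the caption words against the recognizer’s timestamped words and re-stamp each caption word it can pair up. The recognizer output format and all the data here are made-up assumptions; a real system would feed in actual ASR output:

```python
# Sketch of caption re-timing: align human-typed caption words with a
# speech recognizer's word timestamps, so each caption word gets an
# accurate time even when the recognizer hears fillers the captions omit.
import difflib

def retime_captions(caption_words, asr_words):
    """caption_words: list of words; asr_words: list of (word, seconds).
    Returns (word, seconds) pairs for the caption words that aligned."""
    sm = difflib.SequenceMatcher(
        a=[w.lower() for w in caption_words],
        b=[w.lower() for w, _ in asr_words])
    timed = []
    for op, a1, a2, b1, b2 in sm.get_opcodes():
        if op == "equal":  # matched runs of identical words
            for i, j in zip(range(a1, a2), range(b1, b2)):
                timed.append((caption_words[i], asr_words[j][1]))
    return timed

# Hypothetical data: the recognizer heard an "uh" the captioner dropped.
captions = "the gold medal goes to the home team".split()
asr = [("the", 10.0), ("gold", 10.3), ("medal", 10.6), ("goes", 11.0),
       ("to", 11.2), ("uh", 11.4), ("the", 11.6), ("home", 11.8),
       ("team", 12.1)]
timed = retime_captions(captions, asr)
```

Unmatched caption words could then be interpolated between their timed neighbours, which is enough to fix a steady 2-3 second caption lag.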
Watching on the web
This time, various networks have put up extensive web offerings, and indeed on NBC this is the only way to watch many events live, or at all. Web offerings are good, though not quite at the quality of over-the-air HDTV, and quality matters here. But the web offerings have some failings read more »
A month ago I hosted Vernor Vinge for a Bay Area trip. This included my interview with Vinge at Google on his career and also a special talk that evening at Singularity University. In the 1980s, Vinge coined the term “the singularity” even though Ray Kurzweil has become the more public face of the concept of late.
He did not disappoint with an interesting talk on what he called “group minds.” He does not refer to the literal group minds that his characters known as the Tines have in his Zone novels, but rather to all the various technologies that are allowing ordinary humans to collaborate in new ways that allow problems to be solved at a speed and scale not seen before. In puzzling over the various paths to the singularity — which means to him the arrival of an intelligence beyond our own — he and others have mostly put the focus on the creation of AI at human level and beyond. He points out that tools which use elements of AI to combine human thinking may offer a path to the singularity that is more likely to be benign.
In the talk he outlines a taxonomy of group minds, different ways in which they might form and exist, to help understand the space.
Any speaker or lecturer is familiar with a modern phenomenon. A large fraction of your audience is using their tablet, phone or laptop doing email or surfing the web rather than paying attention to you. Some of them are taking notes, but it’s a minority. And it seems we’re not going to stop this; even speakers do it when attending the talks of others.
However, while we have open wireless networks (which we shouldn’t) there is a trick that could be useful. Build a tool that sniffs the wireless net and calculates what fraction of the computers are doing something that suggests distraction — or doing anything on the internet at all.
While you could get creepy here and do internal packet inspection to see precisely what people are doing (for example, are they searching wikipedia for something you just talked about?) you don’t need to go that far. The simple fact that more people in the room are doing stuff on the internet, or doing heavy stuff on the internet, is a clue. You can also tell when people are doing a few core functions, like web surfing vs. SMTP vs. streaming, based on the port numbers they are going to. You can also tell if they are using a common web-mail service from the IP address. All of this works even if they are encrypting all their traffic like they should be (to stop prying tools like this!)
Only if they have set up a VPN (which they also should) will you be unable to learn things like ports and IP addresses, but again, it’s a nice indicator to know just what total traffic is, and how many different machines it’s coming from, and that will almost never be hidden.
When the display tells you that most of your audience is using the internet, you could pause and ask for questions, or ask why they are surfing. The simple act of asking when distraction gets high will reduce it, since people will be embarrassed to be caught. Of course, a sneaky program that learns the MACs of various students could result in the professor asking, “What’s so fascinating on the internet, Mr. Wilson?” At the very least it would encourage the people in the audience to use more encryption. But you don’t have to get that precise. The broad traffic patterns are plenty of information.
I’m here in Newport Beach at the Transportation Research Board’s conference on self-driving vehicles. Today in a pre-session there was discussion of pre-robocar technologies, and in particular applications of “managed lanes” and what they might mean for these technologies. Managed lanes are things like HOV/carpool lanes, HOT (carpool+toll) lanes, reversible lanes etc. Many people imagine these lanes would be used with pre-robocar technologies like convoys, super-cruise, cooperative ACC, Bus Rapid Transit etc.
As I’ve said before, the first rule of robocars is “you don’t change the infrastructure.” First you must make the vehicles operate fully on the existing infrastructure. And people are doing that. But we can also investigate what happens next.
Robocars as many envision them thus do not need dedicated lanes, even though some of the simpler technologies might. Earlier we talked about electrification, which is a pretty expensive adaptation. Let’s talk about high speed lanes.
Robocars (or any car) would be of much greater interest to people if they could go very fast in them. On one hand, the ability to work, read, watch video and possibly sleep in a robocar will mean to some that trip time is less important than comfort, and they might actually be happy with a slower trip with fewer disturbances. But sometimes a faster trip is very important, particularly on the long haul.
Today people are working hard to make robocars safe. Eventually they should be able to make them safe even at higher speeds, particularly on freeways that were designed for fairly high speeds. Even human drivers routinely see over 100mph on the autobahns of Germany. The problem is that if you want to go 120mph outside of Germany, there’s no road where you can easily do that. The other cars, going 65 to 80mph in the fast lane, will get in the way, creating an uncomfortable ride and possibly dangerous situations.
Many of today’s “managed lanes” are primarily for use in rush hour, from 5am to 9am and 3pm to 7pm. In other hours, traffic is very light. What if that special lane does not just become an ordinary lane after rush hour, but instead is converted to another special purpose? There are a lot of different technologies that might become viable with such a lane.
The most interesting one to me is high speed. If the carpool lane switched to being the high-speed-car lane at 9:30am, I think a lot of people might very well delay their commutes and shift their hours. A one-hour commute at 8am or a 15-minute trip at 9:30am — not a hard choice for many. And lots of people travel mid-day for various purposes.
The high-speed lane would actually mandate a minimum speed, perhaps 100mph when the road is clear. To get in this lane you would need a car that is certified safe at that speed or above. This might be a robocar, but it might also be a human-driven car with sufficient driver-assist technologies to certify it safe at that speed. The lane would probably only be open in good weather, and would probably revert to ordinary status in the event the main road got congested for whatever reason. Vehicles in the lane would have to be connected vehicles, ready to receive signals about changes to the dynamic status of the lane.
There probably would also be a requirement for efficient vehicles. Wind drag at 120mph costs 4 times as much fuel per mile as wind drag at 60mph. These cars would have to be highly aerodynamic designs. They might also be capable of platooning to further reduce drag, though you would want to wait a while to assure safety before platooning at 120mph. You might insist on alternate fuels or even that they be electric vehicles or other low emission vehicles. It doesn’t matter — I think there are a lot of people who would pay a lot of money to be able to go 120mph.
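The factor of four follows from basic aerodynamics: drag force grows with the square of speed, so the drag energy spent per mile does too. A quick check (the CdA value is an illustrative assumption for a typical sedan, not a figure from the post):

```python
RHO = 1.225   # air density at sea level, kg/m^3
CD_A = 0.6    # drag coefficient * frontal area, m^2 (assumed typical sedan)

def drag_energy_per_mile(speed_mph: float) -> float:
    """Aerodynamic drag energy spent per mile, in joules.

    Drag force = 0.5 * rho * CdA * v^2, and energy = force * distance,
    so energy per unit distance scales with v^2.
    """
    v = speed_mph * 0.44704            # mph -> m/s
    force = 0.5 * RHO * CD_A * v ** 2  # newtons
    return force * 1609.34             # joules per mile

ratio = drag_energy_per_mile(120) / drag_energy_per_mile(60)
# ratio == (120/60)**2 == 4.0 exactly, independent of the CdA assumption
```

The CdA constant cancels out of the ratio, which is why the 4x figure holds for any car; a more slippery body only lowers the absolute energy, not the scaling.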
The lanes in general would need to be separated from the main lanes. Most carpool lanes are already like that, though most of the ones in the SF Bay Area are not this style. Ideally they would be the style that even has a special merge lane at the points where entry and egress from the main lanes are possible.
If such a program were a success we could see more. For example, one could imagine adding an extra lane to Interstate 5 in the California central valley and have it be a high speed lane most of the time. The planned California High Speed Rail, which probably will never be finished, is forecast to cost $68 Billion. 2 extra lanes on I-5 in the central valley south of Sacramento would cost well under a billion, and offer fairly high speed travel to those in the valley — faster door to door than the HSR. And my calculations even suggest that aerodynamic electric vehicles would use less energy per passenger-mile than the HSR. (Definitely if they are shared by as few as 2-3 people or when designed for a platoon.) These teardrop-shaped cars would also be much more efficient than today’s cars when they slow down and ply the ordinary highways and streets.
It is not trivial to go 120mph in a robocar though. Your sensors must be long range so you can stop if they see something. If you want to build infrastructure, here is where the road might have sensors which can report on road obstacles and other vehicles to assure safety. If you’re building a whole high speed lane this is not an issue. The first rule of robocars is written to avoid needing new infrastructure to do ordinary driving and get most places — not to prevent you from taking advantage of new spending that justifies itself.
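To see why the sensors must be long range, a simple stopping-distance estimate helps. The deceleration and perception-latency figures below are illustrative assumptions, not measured values for any real vehicle:

```python
def required_sensor_range(speed_mph: float,
                          decel_mps2: float = 6.0,   # assumed firm braking, m/s^2
                          latency_s: float = 0.5) -> float:
    """Minimum sensor range (meters) to detect an obstacle and stop.

    Distance = reaction distance (speed * perception latency) plus
    braking distance from the kinematic relation v^2 = 2 * a * d.
    """
    v = speed_mph * 0.44704              # mph -> m/s
    reaction = v * latency_s             # distance covered before braking begins
    braking = v ** 2 / (2 * decel_mps2)  # distance covered while braking
    return reaction + braking
```

Under these assumptions, stopping from 120mph needs roughly 270 meters of clear sensing range, and because braking distance scales with the square of speed, doubling the speed from 60mph roughly quadruples that component.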
An MIT team has been working on a car that is “hard to crash.” Called the intelligent co-pilot, it is not a self-driving car, but rather a collection of related systems designed to detect if you are about to hit something and try to avoid it. To some extent, it actually wrests control from the driver.
When I first puzzled over the roadmap to robocars I proposed this might be one of the intermediary steps. In particular, I imagined a car where, in a danger situation, the safest thing to do is to let go of the wheel and have the car get you to a safe state. This car goes further, actually resisting you if you try to drive the car off the road or towards an obstacle.
This is a controversial step, and the reasons are understood by the MIT team. First of all, from a legal liability standpoint, vendors are afraid of overriding the human. If a person is in control of a vehicle and makes a mistake, they are liable. If a machine takes over and saves the day, it’s great, but if the machine takes over and there is an accident — an accident the human could have avoided — there could be high risks to the maker of the machine as well as the occupant. In most designs, the system is set up so that the human has the opportunity for control at all times.
Actually, it’s even worse. A number of car makers are building freeway autopilots which still require attention from the driver in case the lane markers disappear or other problems ensue. One way some of them have built this is to require the driver to touch the wheel every so often to show they are alert. They will beep if the driver does not touch the wheel, and they will even disengage if the driver waits too long after the beep. Consider what the companies have interpreted the liability system to require: that the right course of action, when the system is driving and the driver has her hands off the wheel, is to disengage and let the vehicle wander freely and possibly careen off the road! Of course, they don’t want the vehicle to do that, but they want to make it clear to the driver that she can’t depend on the system and can’t decide to type a long e-mail while it is running.
And this relates to the final problem of human accommodation. When a system makes people safer, they compensate by being more reckless. For example, anti-lock brakes are great and prevent wheel lock-up on slippery roads — but they cause drivers to feel they have invincible brakes and studies show they drive more aggressively because of them. Only a safe robocar avoids this problem; its decisions will always be based on a colder analysis of the situation.
A hard-to-crash car is still a very good idea. Before a full robocar is available, it can make a lot of sense, particularly for aging people and teens with new licences. But it may never come to market due to liability concerns.
Vernor Vinge is perhaps the greatest writer of hard SF and computer-related SF today. He has won 5 Hugo awards, including 3 in a row for best novel (nobody has done 4 in a row) and his novels have inspired many real technologies in cyberspace, augmented reality and more.
I invited him up to speak at Singularity University, but before that he visited Google to talk in the Authors@Google series. I interviewed him about his career and major novels and stories, including True Names, A Fire Upon the Deep, Rainbows End and his latest novel The Children of the Sky. We also talked about the concept of the Singularity, for which he coined the term.
There have been experiments with dedicated lanes in the past, including a special automated lane back in the 90s in San Diego. The problem is much easier to solve (close to trivial by today’s standards) if you have a dedicated lane, but this violates the first rule of robocars in my book — don’t change the infrastructure.
Aside from the huge cost of building the dedicated lanes, once you have built a lane you now have a car which can only drive itself in that dedicated lane. That’s a lot less valuable to the customer; effectively you only get customers who happen to commute on that particular route, rather than appealing to everybody. And you can’t self-drive on the way to or from the highway, so it is not clear what they mean when they say the driver sets a destination, other than perhaps the planned exit.
Yes, the car is a lot cheaper but this is a false economy. Robocar sensors are very expensive today but Moore’s law and volume will make them cheaper and cheaper over time. Highway lanes are not on any Moore’s law curve, in fact they are getting more expensive with time. And if the lane is dedicated, that has a number of advantages, though it comes with a huge cost.
Of course, today, nobody has a robocar safe enough to sell to consumers for public streets. But I think that by the early 2020s, when this study might recommend completing a highway, the engineers would open up the new lane and find that while it’s attractive for its regular nature and especially attractive if it is restricted and thus has lighter and more regular traffic, the cars are already able to drive on the regular lanes just fine.
A better proposal, once robocars start to grow in popularity, would be to open robocar lanes during rush hour, like carpool lanes. These lanes would not be anything special, though they would feature a few things to make the car’s job easier, such as well maintained markings, magnets in the road if desired, no changes in signage or construction without advance notice etc. But most of all they would be restricted during rush hour so that cars could take advantage of the smooth flow and predictable times that would come with all cars being self-driving. Unless humans kept taking over the cars and braking when they got scared or wanted to look at an accident in the other lanes, these lanes would be metered and remain free of traffic jams. However, you need enough robocar flow to justify them since if you only use half the capacity of a lane it is wasteful. On the other hand, such lanes could be driven by the more common “super cruise” style cars that just do lane following and ACC.
Hats off to the video embedded below, which was prepared for a futuristic transportation expo in my home town of Toronto.
Called the PAT (People and Things) this video outlines the UI and shows a period in the day of a robotic taxi/delivery vehicle as it moves around Toronto picking up people and packages.
I first learned about the video from a new blog on the subject of consumer self driving cars — as far as I know the second serious one to exist after this one. The Driverless Car HQ started up earlier this year and posts with a pretty solid volume. They are more comprehensive in posting various items that appear in the media than I am, and cover some areas I don’t, so you may want to check them out. (That’s a conscious choice on my part, as I tend not to post links to stories that I judge don’t tell us much new. An example would be that the SARTRE road train just did a demo in Spain last month, but it was not much different from demos they had done before.)
Of course, as I said earlier, sadly “Driverless Car” is one of my least favourite terms for this technology, but that doesn’t impede the quality of the blog. In addition, while I do report news on the Google car on this blog, I tend to refrain from commentary due to being on that team, and the folks at DCHQ are not constrained this way.
Features shown in the video include:
Face recognition of passengers as they approach the car
Automatic playing of media for the passengers (apparently resuming from media paused earlier in some cases)
Doing package delivery work when needed
Self-cleaning after each passenger
Optional ride-share with friends
In-car video conferencing on the car’s screens
Offering the menu of a cafe which is the destination of a trip. (Some suspect this is a location-based ad spam, but I think it’s a more benign feature because the passenger is picking up his ride-share friend at the cafe.)
And the UIs are slick and nicely done, if a bit busy.
The concept vehicle at the Brickworks is fairly simple but does present some ideas I have felt are worthwhile, such as single passenger vehicles, face to face seating etc. It’s a bit too futuristic, and not aerodynamic. In the concept, it adjusts for the handicapped. I actually think that’s the reverse of what is likely to happen. Rather than making all cars able to meet all needs, it makes more sense to me to have specialized cars that are cheaper and more cost effective at their particular task, and have dedicated (more expensive) vehicles for wheelchairs. (For example, I like the hollow vehicles like the Kenguru.) I think you serve the disabled better for the same money by having these specialized vehicles — the wait may be slightly longer, but the vehicle can be much better at serving the individual’s needs.
Ford, which has already touted the value of robocars, has announced plans to do a traffic-assist autopilot system sometime mid-decade. Ford joins Mercedes, VW/Audi and Cadillac in announcing such systems. Ford’s vehicle will also offer automatic parking in perpendicular parking spots. For some time many cars have offered automated parallel parking. Since most people do not find perpendicular parking all that difficult, perhaps their goal here is very tight spaces (though that would require getting out of the car and blocking the rude driver, which I have found out only gets your car vandalized) or possibly parking in a personal garage that is very thin.
AUVSI and Mercedes
On the negative front, Mercedes appears to have backed off their plan to offer a traffic jam assistant in the 2013 S class. Earlier in June I attended the AUVSI “Driverless Car Summit” in Detroit, and Mercedes indicated that while they do have that technology in their F.800 concept car, this is only a prototype. As currently set up, the Mercedes system requires you to touch the wheel every 8 seconds. Honda was promoting this in 2006. Mercedes also showed their “6D” stereo vision based system which demonstrated impressive object tracking. They also claimed it does as well in differing light conditions, which would be a major breakthrough.
Some other notes from the conference:
There was effectively universal hate for the term “driverless car.” I join the haters, since the car has a driver, but it’s a computer. No other term won big support, though.
While AUVSI is about unmanned military vehicles, they put on a nicely demilitarized conference, which was good.
There were still a lot of fans of DSRC (a car data radio protocol) and V2V communications. Some from that community have now realized they went down the wrong path but a lot had made major career investments and will continue to push it, including inside the government.
The NHTSA is doing a research project on how they might regulate safety standards. They have not laid out a strategy but will be looking at sensor quality, low-level control system quality, UI for the handoff between manual and self-driving, and testing methodology.
I liked Mercedes’ terms for various modes of self-driving: Feet off, Hands off, Eyes off and Body out. The car companies are aiming at Hands off, and Google is working on Eyes off, but Body out (which means being so good that the car can operate without anybody in it or without any attention from the occupant) is the true robocar and the long term goal for many but not all projects.
Continental showed more about their own cruising system that combines lane-keeping and automatic cruise-control. They now say they have the 10,000 miles of on-road testing needed for the Nevada testing licence, but have not yet decided if they will get one. There is some question whether what they are doing requires a licence under the Nevada regulations. (I suspect it does not.) However, they were quizzed as to whether they were testing in Nevada without a licence, which they deny. Continental says their system is built entirely from parts that will be “production parts” as of early 2013.
Legal and states panels showed progress but not too much news. States seem to be pleased so far.
The National Federation for the Blind showed off their blind driving challenge. They have become keen on building a car which has enough automation for a blind person to operate but still uses the blind driver’s skills (such as hearing and thinking) to make the task possible. This is an interesting goal for the feeling of autonomy, but I suspect it is more likely they will just get full-auto cars sooner, and they accept this is likely.