I’m back from CES, and there was certainly a lot of press over two pre-robocar announcements there:
The first was the Toyota/Lexus booth, which was dominated by a research car reminiscent of the sensor-stacked vehicles of the DARPA grand challenges. It featured a Velodyne on top (like almost all the high capability vehicles today) and a very large array of radars, including six looking to the sides. Toyota was quite understated about the vehicle, saying they had low interest in full self-driving, but were doing this in order to research better driver assist and safety systems.
The Lexus booth also featured a car that used ultrasonic sensors to help you when backing out of a blind parking space, letting you know if somebody is coming down the lane of the parking lot.
Audi did two demos for the press, which I went to see. Audi also emphasized that this is long-term concept work, meant as research to enhance their “driver in the loop” systems. They are branding these projects “Piloted Parking” and “Piloted Driving” to suggest the idea of an autopilot with a human overseer. However, the parking system is unmanned, and was demonstrated in the lot of the Mandarin Oriental, though the demo area was closed off to pedestrians.
The parking demo was quite similar to the Junior 3 demo I saw 3 years ago, and no surprise, because Junior 3 was built at the lab which is a collaboration between Stanford and VW/Audi. Junior 3 had a small laser sensor built into it. By contrast, the Piloted Parking car had only ultrasonic sensors and cameras, and relied on a laser mounted in the parking lot. In this approach, the car has a wifi link which it uses to download a parking lot map, as well as commands from its owner, and it also gets data from the laser. Audi produced a mobile app which could command the car to move, on its own, into the lot to find a space, and then back to pick up the owner. The car also had a slick internal display with pop-up screen.
The question of where to put the laser is an interesting one. In this approach, you only park in lots that are pre-approved and prepared for self-parking. Scanning lasers are currently expensive, and if parking is your only application, there are a lot more cars than there are parking lots, so it might make sense to put the expensive sensor in the lots. However, if the cars want to have the laser anyway for driving, it’s better to have the sensor in the car. In addition, it’s more likely that car buyers will be early adopters than parking lot owners.
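As a toy illustration of that economics argument, here is a back-of-envelope sketch; the laser price and the number of cars served per lot are invented numbers, not real figures:

```python
# Back-of-envelope: a shared laser in the lot vs. a laser in every car.
# Both figures below are invented for illustration, not real prices.
LASER_COST = 8000   # assumed cost of a scanning laser, in dollars
CARS_PER_LOT = 500  # assumed number of cars a single equipped lot serves

cost_per_car_if_in_car = LASER_COST                  # every car carries one
cost_per_car_if_in_lot = LASER_COST / CARS_PER_LOT   # one laser, shared

print(f"laser in each car: ${cost_per_car_if_in_car:,} per car")
print(f"laser in the lot:  ${cost_per_car_if_in_lot:,.2f} per car served")
```

Of course, the comparison flips if the car wants the laser anyway for driving, since then its marginal cost for self-parking is zero.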
In the photo you see the Audi highway demo car sporting the Nevada Autonomous Vehicle testing licence #007. Audi announced they just got this licence, the first car maker to do so. This car offers “Piloted Driving” — the driver must stay alert, while a lane-keeping system steers the car between the lane markers and an automatic cruise control maintains distance from other cars. This is similar to systems announced by Mercedes, Cadillac, VW, Volvo and others. Audi already has announced such a system for traffic jams — the demo car also handled faster traffic.
Audi also announced their use of a new smaller LIDAR sensor. The Velodyne found on the Toyota car and Google cars is a large, roof-mounted device. However, they did not show a car using this sensor.
Audi also had a simulator in their booth showing a future car that can drive in traffic jams, and lets you take a video phone call while it is driving. If you take control of the car, it cuts off the video, but keeps the audio.
Happy 2013: Here are some articles I bookmarked last year that you may find of interest this year.
An NBC video about the AutoNOMOS team in Berlin, which is one of the more advanced academic teams, featuring on-street driving, lane changes and more.
An article about “dual mode transport,” which in this case means all sorts of folding bikes and scooters that fit into cars. This is of interest both as competition to robocars (you can park remotely and scoot in, competing with one of the robocar benefits) and as a pointer to what happens if you give limited self-driving to some of these scooters, so they can deliver themselves to you and you can take one-way trips. The robocar world greatly enables the ability to switch modes on different legs of a trip, taking a car on one leg, a bike on another, a subway on a third and a car back home. Now add a scooter for medium-length trips.
While I’ve pointed to many videos and sources on the Google car, rather than talk about it myself, if you want a fairly long lecture, check out this talk by Sebastian Thrun at the University of Alberta.
The Freakonomics folks have caught the fever, and ask the same question I have been asking: why are urban and transportation planners blind to this revolution in their analysis?
You may have read my short report last year on the Santa Clara Law conference on autonomous vehicles. The Law Review Issue is now out with many of those papers. I found the insurance and liability papers to be of the most use — so many other articles on those topics miss the boat.
It’s been a while since I’ve done a major new article on long-term consequences of Robocars. For some time I’ve been puzzling over just how our urban spaces will change because of robocars. There are a lot of unanswered questions, and many things could go both ways. I have been calling for urban planners to start researching the consequences of robocars and modifying their own plans based on this.
While we don’t know enough to be sure, there are some possible speculations about potential outcomes. In particular, I am interested in the future of the city and suburb as robocars make having ample parking less and less important. Today, city planners are very interested in high-density development around transit stops, known as “transit oriented development” or TOD. I now forecast a different trend I will call ROD, or robocar oriented development.
It suggests a future city that might be quite interesting, in contrast to the WALL-E car-dominant vision we often see.
Earlier I wrote an essay on robocar changes affecting urban planning which outlined various changes and posed questions about what they meant. In this new essay, I propose answers for some of those questions. This is a somewhat optimistic essay, but I’m not saying this is a certain outcome by any means.
As always, while I do consult for Google’s project, they don’t pay me enough to be their spokesman. This long-term vision is a result of the external work found on this web site, and should not be taken to imply any plans for that project.
There’s been much debate in the USA about High Speed Rail (HSR) and most notably the giant project aimed at moving 20 to 24 million passengers a year through the California central valley, and in particular from downtown LA to downtown San Francisco in 2 hours 40 minutes.
There’s been big debate about the projected cost ($68B to $99B) and the inability of projected revenues to cover interest on the capital, let alone operating costs. The project is beginning with a 130 mile segment in the central valley to make use of federal funds. This could be a “rail to nowhere” connecting no big towns and with no trains on it. By 2028 they plan to finally connect SF and LA.
The debate about the merits of this train is extensive and interesting, but its biggest flaw is that it is rooted in the technology of the past and present day. Indeed, HSR itself is around 50 years old, and the 350 kph top speed of the planned line was attained by the French TGV over 30 years ago.
The reality of the world, however, is that technology is changing very fast, and in some fields like computing at an exponential rate. Transportation is not accustomed to such rapid rates of change, but that protection is about to end. HSR planners are comparing their systems to other 20th-century systems and not planning for what 2030 will actually hold.
At Singularity University, our mission is to study and teach about the effects of these rapidly changing technologies. Here are a few areas where new technology will disrupt the plans of long-term HSR planners:
Cars that can drive and deliver themselves left the pages of science fiction and entered reality in the 2000s thanks to many efforts, including the one at Google. (Disclaimer: I am a consultant to, but not a spokesman for that team.)
Readers of my own blog will know it is one of my key areas of interest.
By 2030 such vehicles are likely to be common, and in fact it’s quite probable they will be able to travel safely on highways at faster speeds than we trust humans to drive. They could also platoon to become more efficient.
Their ability to deliver themselves is both boon and bane to rail transit. They can offer an excellent “last/first mile” solution to take people from their driveways to the train stations — for it is door to door travel time that people care about, not airport-to-airport or downtown-to-downtown. The HSR focus on a competitive downtown-to-downtown time ignores the fact that only a tiny fraction of passengers will want that precise trip.
Self-delivering cars could offer the option of mobility on demand in a hired vehicle that is the right vehicle for the trip — often a light, efficient single passenger vehicle that nobody would buy as their only car today. These cars will offer a more convenient and faster door-to-door travel time on all the modest length trips (100 miles or less) in the central valley. Because the passenger count estimates for the train exceed current air-travel counts in the state, they are counting heavily on winning over those who currently drive cars in the central valley, but they might not win many of them at all.
The cars won’t beat the train on the long haul from downtown SF to downtown LA. But they might well be competitive or superior (if they can go 100mph on I-5 or Highway 99) on the far more common suburb-to-suburb, door-to-door trips, and in a private vehicle with no schedule to worry about, a nice desk and screen, and all the usual advantages of a private vehicle.
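As a hedged sketch of that door-to-door argument: the 2:40 ride time is the project’s own target, while the car speed and first/last-mile times below are invented assumptions:

```python
# Door-to-door comparison for a hypothetical suburb-to-suburb trip of
# about 380 miles. Only the 2:40 ride time comes from the HSR plan;
# everything else is an assumed, illustrative number.
def hours(h, m=0):
    return h + m / 60

car_trip = 380 / 100            # self-driving car, assumed 100 mph, door to door

train_trip = (hours(0, 40)      # assumed first mile: suburb to downtown station
              + hours(0, 20)    # assumed boarding buffer
              + hours(2, 40)    # planned downtown-to-downtown ride
              + hours(0, 40))   # assumed last mile: downtown to suburban door

print(f"car:   {car_trip:.1f} hours door to door")
print(f"train: {train_trip:.1f} hours door to door")
```

Under these assumptions the car wins outright; even with slower cars, the train’s edge shrinks to far less than its headline speed suggests.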
Improved Air Travel
The air travel industry is not going to sit still. The airlines aren’t going to just let their huge business on the California air corridor disappear to the trains the way the HSR authority hopes. These are private companies, and they will cut prices, and innovate, to compete. They will find better solutions to the security nightmare that has taken away their edge, and they’ll produce innovative products we have yet to see. The reality is that good security is possible without requiring people arrive at airports an hour before departure, if we are driven to make it happen. And the trains may not remain immune from the same security needs forever.
On the green front, we already see Boeing’s new generation of carbon fiber planes operating with less fuel. New turboprops are quiet and much more efficient, and there is more to come.
The fast trains and self-driving cars will help the airports. Instead of HSR from downtown SF to downtown LA, why not take that same HSR just to the airport, and clear security while on the train to be dropped off close to the gate. Or imagine a self-driving car that picks you up on the tarmac as you walk off the plane and whisks you directly to your destination. Driven by competition, the airlines will find a way to take advantage of their huge speed advantage in the core part of the journey.
Self-driving cars that whisk people to small airstrips and pick them up at other small airstrips also offer the potential for good door-to-door times on all sorts of routes away from major airports. The flying car may never come, but the seamless transition from car to plane is on the way.
We may also see more radical improvements here. Biofuels may make air travel greener, and lighter weight battery technologies, if they arrive thanks to research for cars, will make the electric airplane possible. Electric aircraft are not just greener — it becomes more practical to have smaller aircraft and do vertical take-off and landing, allowing air travel between any two points, not just airports.
These are just things we can see today. What will the R&D labs of aviation firms come up with when necessity forces them towards invention?
Rail technology will improve, and in fact already is improving. Even with right-of-way purchased, adapting traditional HSR to other rail forms may be difficult. Maglev trains are expensive and have seen only limited deployment, and many, including the famous Elon Musk, have proposed enclosed tube trains (evacuated or pneumatic), also expensive and still theoretical, which could do the trip faster than planes. How modern will the 1980s-era CHSR technology look to 2030s engineers?
Decades after its early false start, video conferencing is going HD and starting to take off. High end video meeting systems are already causing people to skip business trips, and this trend will increase. At high-tech companies like Google and Cisco, people routinely use video conferencing to avoid walking to buildings 10 minutes away.
Telepresence robots, which let a remote person wander around a building, go up to people and act more like they are really there, are taking off, and will make more and more people decide that even a 3-hour one-way train or plane trip is too much. This isn’t a certainty, but it would also be wrong to bet against it: many trips that take place today just won’t happen in the future.
Like it or not, in many areas, sprawl is increasing. You can’t legislate it away. While there are arguments on both sides as to how urban densities will change, it is again foolish to bet that sprawl won’t increase in many areas. More sprawl means even less value in downtown-to-downtown rail service, or even in big airports. Urban planners are now realizing that the “polycentric” city which has many “downtowns” is the probable future in California and many other areas.
That Technology Nobody Saw Coming
While it may seem facile to say it, it’s almost assured that some new technology we aren’t even considering today will arise by 2030 which has some big impact on medium distance transportation. How do you plan for the unexpected? The best way is to keep your platform as simple as possible, and delay decisions and implementations where you can. Do as much work with the knowledge of 2030 as you can, and do as little of your planning with the knowledge of 2012 as you can.
That’s the lesson of the internet and the principle known as the “stupid network.” The internet itself is extremely simple and has survived mostly unchanged from the 1980s while it has supported one of history’s greatest whirlwinds of innovation. That’s because of the simple design, which allowed innovation to take place at the edges, by small innovators. Simpler base technologies may seem inferior but are actually superior because they allow decisions and implementations to be delayed to a time when everything can be done faster and smarter. Big projects that don’t plan this way are doomed to failure.
None of the future technologies outlined here is certain to pan out as predicted — but it’s a very bad bet to assume none of them will. California planners and the CHSR authority need to do an analysis of the HSR operating in a world of 2030s technology and sprawl, not today’s.
While there had been many rumours that Mercedes would introduce limited self-driving in the 2013 S-class, that was not to be. However, plans for the 2014 S-class seem much more firm. This car will feature “steering assist,” which uses stereo cameras and radar to follow lanes and follow cars, along with standard ACC functions. Reportedly it will operate at very high speeds.
There’s also a nice article on the Mercedes test facility. They are well known for their interesting test facilities, and this one uses an inflatable car being towed on a test track, making it safe to hit the car if there is a problem.
Media sources are also reporting that Google (disclaimer: they are a client) has hired Ron Medford, deputy administrator of the National Highway Traffic Safety Administration, which sets the vehicle safety standards and is currently researching how to certify self-driving cars.
I’m on the board of the Foresight Institute, which at over 25 years old has been promoting nanotech since long before people knew the word. This January, we will be holding our technical conference on nanotechnology and related fields. Foresight’s focus is on the potential for molecular manufacturing — doing things at the atomic level — and not simply on fine structure materials.
It may surprise you just how much research is going on in the field of atomically precise manufacturing, and the positive results that are coming from it. Today people (including me) are excited by 3-D printers that can reproduce macroscopic shapes with good precision, but the holy grail is to build structures at the atomic level, as it has the potential to produce anything that can be formed, cheaply and in small volumes.
Foresight hosts two conferences — the other is a more general futurist conference on the implications of these technologies, while this one offers the results of in-depth research. Check out the program page for a list of speakers including Fraser Stoddart, George Church, John Randall, William Goddard and many others.
Update: Blog readers can get a $100 discount on registration with this code: 2013QDFP
In the wake of the election, the big nerd story is the perfect stats-based prediction that Nate Silver of the 538 blog made on the results in every single state. I was following the blog and, like everyone, am impressed with his work. The perfection gives the wrong impression, however. Silver would be the first to point out he predicted Florida as very close with a slight lean for Obama, and while that is what happened, that’s really just luck. His actual prediction was that it was too close to call. But people won’t see that; they see the perfection. I hope he realizes he should try to downplay this. For his own sake, if he doesn’t, he has nowhere to go but down in 2014 and 2016.
But the second consequence is stronger: people will put even more faith in polls. Perhaps not even faith, but reasoned belief, because polls are indeed getting more accurate. Good polls taken far in advance are probably accurate about what the electorate thinks then, but the electorate itself changes its mind as the election nears. So the public and politicians should always be wary about what the polls say before the election.
Silver’s triumph means they may not be. And as the metaphorical Heisenberg predicts, the observations will change the results of the election.
There are a few ways this can happen. First, people change their votes based on polls. They are less likely to vote if they think the election is decided, or they sometimes file protest votes when they feel their vote won’t change things. Conversely, a close poll is one way to increase turnout, and both sides push their voters to make the difference. People are going to think the election is settled because 538 has said what people are feeling.
The second big change has already been happening. Politicians change their platforms due to the polls. Danny Hillis observed some years ago that the popular vote is almost always a near tie for a reason. In a two party system, each side regularly runs polls. If the polls show them losing, they move their position in order to get to 51%. They don’t want to move to 52% as that’s more change than they really want, but they don’t want to move to less than 50% or they lose the whole game. Both sides do this, and to some extent the one with better polling and strategy wins the election. We get two candidates, each with a carefully chosen position designed to (according to their own team) just beat the opposition, and the actual result is closer to a random draw driven by chaotic factors.
Well, not quite. As Silver shows, the electoral college stops that from happening. The electoral college means different voters have different value to the candidates, and it makes the system pretty complex. Instead of aiming for a total of voters, you have to worry that position A might help you in Ohio but hurt you in Florida, and the electoral votes happen in big chunks which makes the effect of swing states more chaotic. Thus poll analysis can tell you who will win but not so readily how to tweak things to make the winner be you. The college makes small differences in overall support lead to huge differences in the college.
In Danny’s theory, the two candidates do not have to be the same, they just have to be the same distance from a hypothetical center. (Of course to 3rd parties the two candidates do tend to look nearly identical but to the members of the two main parties they look very different.)
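Hillis’s near-tie observation can be sketched as a toy simulation; the voter distribution, starting positions and step sizes below are all invented for illustration:

```python
import bisect
import random

# Toy model of Danny Hillis's observation: each party polls, and whichever
# side is losing shifts its position toward the centre until it is winning.
random.seed(7)
# One-dimensional electorate, leaning slightly right of zero (assumed).
voters = sorted(random.gauss(0.1, 1) for _ in range(10_000))

def share_for_a(a, b):
    """Fraction voting for A: voters left of the midpoint between A and B."""
    return bisect.bisect_left(voters, (a + b) / 2) / len(voters)

a, b = -1.5, 1.5                 # candidates start far apart
for _ in range(400):             # each round, the trailing side inches inward
    s = share_for_a(a, b)
    if s < 0.5:
        a += 0.01
    elif s > 0.5:
        b -= 0.01

print(f"final share for A: {share_for_a(a, b):.1%}")  # ends near a tie
```

However far the electorate leans, the two positions converge until the poll reads roughly 50/50, which is the near-tie Hillis describes.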
Show me the money?
Many have noted that this election may have cost $6B but produced a very status quo result. Huge money was spent, but opposed forces also spent their money, and the arms race just led to a similar balance of power. Except a lot of rich donors spent a lot of their money, got valuable access to politicians for it, and some TV stations in Ohio and a few other states made a killing. The fear that corporate money would massively swing the process does not appear to have been borne out, but it’s clear that influence was bought.
I’m working on a solution to this, however. More to come later on that.
While there have been some fairly good ballot propositions (such as last night’s wins for marijuana and marriage equality) I am starting to doubt the value of the system itself. As much as you might like the propositions you like, if half of the propositions are negative in value, the system should be scrapped. Indeed, if only about 40% are negative, it should still be scrapped because of the huge cost of the system itself.
Last month, I invited Gregory Benford and Larry Niven, two of the most respected writers of hard SF, to come and give a talk at Google about their new book “Bowl of Heaven.” Here’s a YouTube video of my session. They did a review of the history of SF about “big dumb objects” — stories like Niven’s Ringworld, where a huge construct is a central part of the story.
On Tuesday I stopped by the Atlantic’s Big Science day where Chris Gerdes of Stanford’s CARS centre announced results of their race between a robocar and humans on a racetrack. The winner — the humans, but only by a small margin. The CARS team actually studied human driver actions to program their car, but found the human drivers have gotten very good at squeezing most of the available performance out of the vehicles, leaving little room for the robot to improve.
Another result reaffirmed studies of passenger reactions. People taken through the first lap were quite scared, but by the next lap they relaxed and gained confidence in the system. This result shows up time and time again, and has convinced me that while many people tell me they think robocars will not become popular because people will be too scared to ride them, those people are wrong, even about their own behaviour. Most of them, at least.
Also on the Stanford front, Bryant Walker Smith, who has decided to make robocar law a specialty, has released an analysis of the legality of robocars in the USA. The conclusion — robocars which have a human occupant who can take the wheel in the event of a problem are probably legal in almost all states, not just the states that have explicitly made them legal.
DARPA humanoid robot contest includes driving
DARPA ran the three grand challenges for robocars but stopped in 2007 after the urban challenge. Their latest contest involves making humanoid robots, and the DARPA Robotics Challenge includes a phase where your robot should be able to do a variety of tasks on rough terrain, including getting into a car and driving it. There are four tracks to the challenge. Three are in the physical world, with either provided robots or team-built robots. The fourth is in the virtual world, which will allow smaller teams to compete without the cost of working with a physical robot. I have written before about the opportunities of a robocar simulator for testing and contests, and so I am eager to see how this simulator develops.
Research in China has advanced. The National Natural Science Foundation has announced the goal of doing a short drive near Beijing and finally a long trip all the way to Shenzhen, 2400km away. This project primarily uses vision and radar, so it will be interesting to see if they can do this reliably without lasers.
Three big automaker announcements — and not about V2V even though the ITS World Congress is going on this week.
First, Nissan, whose self-parking Leaf I just wrote about, has also announced a steer-by-wire system and tests of a car that will swerve to avoid a sudden obstacle. Of course almost all cars have power steering, but in a steer-by-wire car there is no mechanical linkage by default from the steering wheel to the steering motors. This allows a wheel to have “software-defined feel” and is good for eventual robocars. In such cars a fail-safe restores a mechanical link if the main system fails.
However, the swerving car, which is demonstrated avoiding a cardboard pedestrian which jumps out into the road, is a new level of technology for major car makers. (Braking for these obstacles has been done for a while.)
Not much later, Jerusalem company MobilEye announced they had converted an Audi A7 to self-drive using 5 of their cameras as well as radar. MobilEye makes the vision system found in a lot of different cars; their specialty is a dedicated chip for vision processing. This article, which is in Hebrew, outlines the car, which cost 588K NIS to build.
Volvo, which uses MobilEye, announced today that their 2014 cars would feature a traffic jam assist. Several companies have announced traffic jam assist (which is a low speed lane-keeping plus ACC) but Volvo has put a firm date on it. Also new in Volvo’s system is doing more than following lane markers — it also swerves to follow the car in front of it, as long as that car stays in the lane.
Of course, this does leave open the question of what happens if 2 or more of these start following one another, but that’s some time in the future and they have time to work on it.
In other news, the NHTSA has announced a grant to a team at Virginia Tech to research safety standards for robocar user interfaces. They have in the past stated they think the handoff between manual and automatic is an important safety function they might regulate.
And yes, there is lots of V2V news from the ITS world congress, but my skepticism for most forms of V2V remains high.
Nissan is showing a modified Leaf able to do “valet” park in a controlled parking lot. The Leaf downloads a map of the lot, and then, according to Nissan engineers, is able to determine its position in the lot with 4 cameras, then hunt for a spot and go into it. We’ve seen valet park demonstrations before, but calculating position entirely with cameras is somewhat new, mainly because of the issues with how lighting conditions vary. In an indoor parking garage it’s a different story, and camera-based localization under the constant lighting should be quite doable.
This other video from Engadget with a more detailed demo shows the view from the car’s cameras, which appear to be on the side mirrors as well as front and back for a synthetic 360 degree view. They also have an Android app for control and the ability to view through the cameras. Alas, chances are low you would get that bandwidth in the parking garage, but it’s a cool demo.
There was a huge raft of press coverage after last week’s signing of the California law. This ranged from polls showing strong acceptance of the tech to editorial critiques about the law being too fanciful or the technology taking jobs. (It is true that there will be job displacement, but at the same time, Americans spend about 50 billion hours driving which is a much larger sink on the GDP.)
Tonight I will be on a panel at the Palo Alto International Film Festival at 5pm. Not on robocars, but on the role of science fiction in movies in changing the world. (In a past life, I published science fiction and am on this panel by virtue of my faculty position at Singularity University.)
A follow-up thought about yesterday’s shuttle fly-by and panorama. I was musing, might this be perhaps the most photographed single thing in human history to date?
Here’s the reasoning. Today there are more cameras and more photographers than ever, and people use them all the time in a way that continues to grow. To be a candidate for a most-photographed event, you would need to be recent, and you would need to take place in front of a ton of people, ideally with notice. It seemed like just about everybody in Sacramento, the Bay Area and LA was out for this and holding up a phone or camera.
Of course, many objects are more photographed, like the Golden Gate Bridge the shuttle flew over, but I’m talking here of the event rather than the object. Granted, this is an event which moved over the course of thousands of miles.
The other shuttle fly-overs, done over New York and Washington, also with large populations.
Total eclipses of the sun which go over highly populated areas. The 2009 eclipse went over Shanghai, Varanasi and many other hugely populated areas but was clouded out for many. Nobody has yet made a photo of an eclipse that looks like an eclipse, of course — I’ve seen them all, including many of the clever HDRs and overlays — but that doesn’t stop people from trying.
The 1999 eclipse did go over a number of large European cities, but this was before the everybody-is-photographing era.
Most lunar eclipses are seen by as much as half the world, though they are hard to photograph with consumer camera gear, and only a fraction of people go out to watch and photograph them, but they could easily be a winner.
Prior to the digital era, a possible winner might be the moon landing. Back in 1969, every family had a camera, though usage wasn’t nearly what it is today. However, I remember the TV giving lessons on how to photograph a TV screen. Everybody was shooting their TV for the launches and the walk on the moon. Terrible pictures (much like early camera phone pictures) but people took them to be a part of the event. I recall taking one myself though I have no idea where it is.
Of course there may be objective ways to measure this today, by tracking the number of photos on photo sharing and social sites, and extrapolating the winner. If the shuttle is the winner for now, it won’t last long. Photography is going to grow even more.
I should also note that remote photography, like we did for Apollo, is clearly much larger, in the form of recording video. For those giant events viewed by billions — World Cup, Olympics, Oscars etc. — huge numbers of people are recording them, at least temporarily.
Today marked the last trip through the air for the space shuttle, as the Endeavour was carried to LA to be installed in a museum. The trip included fly-overs of the Golden Gate bridge and many other landmarks in SF and LA, and also a low pass over NASA Ames at Moffett Field, where I work at Singularity University. A special ceremony was done on the tarmac, and I went to get a panoramic photo. We all figured the plane would come along the airstrip, but they surprised us, having it fly a bit to the west so it suddenly appeared from behind the skeleton of Hangar One, the old dirigible hangar. That turned out to be bad for my photography, as I didn’t get much advance notice, and the shot of the crowd I had done a few minutes before had everybody expectantly looking along the runway, and not towards the west where the plane and shuttle appear in my photo.
However, it did make for a very dramatic arrival. So while different parts of this shot are at slightly different times, it does capture the scene of Moffett field and the crowd awaiting the shuttle, and its arrival. I do however have a nice hi-res photo for you to enjoy as well as the panoramic shot of the Endeavour shuttle fly-by.
Tomorrow (Wed Sep 19) I will give a robocars talk at Dorkbot SF in San Francisco. Dorkbot is a regular gathering of “People doing strange things with electricity” and there will be two other sessions.
Last week, the SARTRE project announced it was concluding after a long period of work on highway platooning. Volvo led the project, which demonstrated platoons on test tracks and on some real roads. They also did a number of worthwhile user studies in simulation.
People have been interested in platooning for a while. The main upsides they are looking for are:
It’s much easier than a robocar — the platoon is led by a truck with a professional driver who handles everything with human intelligence
Putting the cars at short spacings can result in a huge increase in highway capacity, though you tend to want somewhat larger headways around the convoys
There is fuel saving — about 10% or so for the lead vehicle, and up to 30% for following vehicles, at spacings of about 4 to 6m. This is not quite as much as people hoped but it is real.
The equipment in the following cars is simple — V2V radios and possibly some radar for backup.
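As a rough illustration of the capacity claim above, here is a back-of-envelope sketch. The vehicle length, human headway, and speed are assumptions for illustration, not SARTRE figures; only the 4–6m platoon gap comes from the numbers above:

```python
# Back-of-envelope per-lane capacity: vehicles/hour ~ speed / (length + gap).
# All specific numbers below are illustrative assumptions.

def lane_capacity(speed_kmh, vehicle_len_m, gap_m):
    """Vehicles per hour past a point, assuming uniform spacing."""
    speed_m_per_h = speed_kmh * 1000
    return speed_m_per_h / (vehicle_len_m + gap_m)

# Typical human driving: roughly a 2-second headway at 100 km/h -> ~55 m gap.
human = lane_capacity(100, 4.5, 55)
# Platooned: ~5 m gap (mid-range of the 4-6 m figure above) at the same speed.
platoon = lane_capacity(100, 4.5, 5)

print(f"human: {human:.0f} veh/h, platoon: {platoon:.0f} veh/h, "
      f"ratio: {platoon / human:.1f}x")
```

The real-world gain would be smaller than this ratio suggests, since (as noted above) you want larger headways around the convoys themselves.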
Unfortunately, platooning comes with some downsides as well:
If you have an accident, it can be catastrophic as you might crash a whole convoy of vehicles.
Non-platoon drivers may interfere with the convoy. The gaps must be kept small enough that nobody tries to enter them. A non-member in the middle of the convoy is bad news. You need small gaps to save fuel too.
Trucks must go only at the front of the convoy due to their longer stopping distance. New trucks must insert in the middle. Cars can insert more easily at the end of the convoy.
Convoys in the right lane can make it harder for people merging, and in general they can present a barrier to traffic.
Driving with a short gap is disconcerting. Behind a truck, you can’t even see the lane markers.
In rain, your windshield gets completely washed out with spray (and sometimes salt spray) which is even more disconcerting.
Following cars get hit by small stones and debris from the forward vehicle. After a long period of following, windshields are unacceptably chipped or cracked.
While radar is the primary means of tracking the car in front, and almost all vehicles do a nice radar reflection from the rear licence plate, many vehicles have other reflections further forward. You must avoid trying to follow 4m behind the front of a truck! To help this, vehicles in the tests had superior radar reflectors mounted on them.
For good workable convoys, some of these problems need to be solved. It could be that in rain convoys must spread out (losing a lot of the fuel saving) though there is the danger of cars cutting in.
Convoys with longer gaps can still increase road capacity a lot, but they probably have to be robocar convoys. Robocar convoys can handle cars trying to cut into the gaps. They may wish to start honking if somebody cuts in (and the car in front might also flash its rear lights and slow slightly to make it very clear to the cut-in driver that they should not have done this.) This would be a problem when convoys are new, as people might not know what it all means, though they would have tried to go into a space that is clearly too small to safely enter. Cars in convoys might need a screen on the back that can display a sign: “You have barged into a convoy, change lanes immediately or be reported to police.”
Robocars could handle the rain to some degree, but even their laser sensors would not like operating in heavy spray, though their radars would get excellent returns from a reflector on the vehicles.
The stone chip problem is harder to solve. Robocars capable of full auto operation could try to protect their windshields, but this is disconcerting to occupants. And the rest of the car gets stone chips too.
It could be that platooning is only practical with vehicles that are dedicated to it, such as highway commute vehicles and long distance highway vehicles. Built for this purpose, they would just accept the stone chips as part of life. They might come with extra heavy duty wipers or other ways to deal with the rain. And they would be full robocars, able to handle disconnects and independent operation.
This result will disappoint those who felt platoons were a good early technology. I have felt they also suffered from a critical mass problem. To use a platoon, you would need to find one, and until the density of lead vehicles was high enough, you might not find one. You could do it at rush hour with mobile apps that track the presence of lead vehicles so you can time your departure to find one — you might even have an appointment for every commute. And they might run only on nice clean highways on dry days and still be valuable. But less valuable, I am afraid.
On lower speed roads the fuel saving is not much, but the problems are less. There are traffic lights on most low speed roads though which present another problem.
A round-up of just some of the recent robocar news:
Stanford Shelly at 120mph
While the trip up Pikes Peak by Stanford’s Audi TT did not offer the high speeds we had hoped for, they have recently been doing some real race driving tests, clocking the car around a track at 120mph. This is even more impressive because the car drives with limited sensors. Here the goal is to test computer-driven high-speed tactics — rounding corners, climbing hills and more. While they didn’t quite reach the times of professional drivers, chances are someday they will, just from their perfect understanding of physics.
Driving this fast is hard in the real world because you’re going beyond the range of most sensors (radar and special lidars can go further, and cameras can see very far but are not reliable in all lighting.) The Stanford team had a closed track, so they were able to focus on cornering and skidding.
KPMG report on self-driving cars
The consulting firm KPMG has released an extensive report on self-driving cars. While it doesn’t contain too much that is new to readers of this site and blog, it joins the group which believes that car-to-car communication is going to be necessary for proper deployment of robocars. I don’t think so, and in fact think the idea of waiting for it is dangerous.
Speaking of V2V communication
For some time the V2V developers have been planning a testbed project in Ann Arbor, MI. They’ve equipped 3000 cars with “here I am” transponders that will broadcast their GPS data (position and velocity) along with other car data (brake application, turn signals, etc.) using DSRC. It is hoped that while these 3000 civilian cars will mostly wander around town, there will be times when the density of them gets high enough that some experiments on the success of DSRC can be made. Most of the drivers of the cars work in the same zone, making that possible.
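The “here I am” payload described above can be pictured as a small record of position, velocity and car state. This is a hypothetical sketch only — the field names and JSON encoding are my own assumptions; real DSRC deployments use the SAE J2735 Basic Safety Message, which is considerably more complex:

```python
# Hypothetical sketch of the "here I am" beacon described above.
# Field names and encoding are illustrative, not the real SAE J2735 format.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class HereIAm:
    vehicle_id: str        # temporary, rotating ID (for privacy)
    timestamp: float       # seconds since epoch
    lat: float             # GPS position
    lon: float
    speed_mps: float       # velocity
    heading_deg: float
    brake_applied: bool    # "other car data" such as brake application
    turn_signal: str       # "left", "right", or "off"

    def encode(self) -> bytes:
        return json.dumps(asdict(self)).encode()

# An Ann Arbor-ish position, for illustration.
beacon = HereIAm("tmp-4421", time.time(), 42.2808, -83.7430,
                 13.4, 90.0, False, "off")
packet = beacon.encode()   # would be broadcast several times a second over DSRC
```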
If they don’t prove the technology, they probably won’t get the hoped-for 2013 mandate that all future cars have this technology in them. If they don’t get that, the 75MHz of coveted spectrum allocated to DSRC will get other hungry forces going after it.
I owe readers a deeper analysis of the issues around vehicle-to-vehicle communications.
Google cars clock 300,000 miles
Google announced that its team (they are a consulting client) has now logged 300,000 miles of self-driving, with no accidents caused by the software. It was also acknowledged that the team has converted a hybrid Lexus RX-450h in addition to the Toyota Prius. Certainly a more comfortable ride, and the new system has very nice looks.
Google will also begin internal testing with team members doing solo commutes in the vehicles. Prior policy was that vehicles were always operated off-campus with two staff on board, as is appropriate for prototype systems.
Political attack ad goes after robocars
Jeff Brandes pushed Florida’s legislation to allow robocar testing and operations in that state, the second after Nevada. Now his political opponents have produced an ad which suggests robocars are dangerous and you shouldn’t vote for Mr. Brandes because of his support of them. While we should expect just about anything in attack ads, this is a harbinger of the real debate to come. I doubt the authors of the ads really care about robocars — they just hope to find anything that might scare voters. My personal view, as I have said many times, is that while the technology does have to go through a period where it is less safe because it is being prototyped and developed, the hard truth is that the longer we wait to deploy the technology, the more years we rack up with 34,000 killed on the roads in the USA and 1.2 million worldwide. And Florida’s seniors are among the first on the list to need robocars. Is Jim Frishe’s campaign thinking about that?
Collision Warning strongly pushed in Europe
The EU is considering changing its crash-safety rules so that a car can’t get a 5-star rating unless it has forward collision warning, or even forward-collision mitigation (where the system brakes if you don’t.) These systems are already proving themselves, with data suggesting 15% to 25% reductions in crashes — which is pretty huge. While the law would not force vendors to install this, there are certain car lines where a 5-star rating is considered essential to sales.
For years I have posed the following question at parties and salons:
By the 25th century, who will be the household names of the 20th century?
My top contender, Armstrong, died today. I pick him because the best-known name of the 15th century is probably Columbus, also known as the first explorer to reach a major location — even though he probably wasn’t the actual first.
Oddly, while we will celebrate him today and for years to come, Armstrong could walk down the street for the past few decades and go unrecognized in his own time. Though I had his photo on my wall as a child (along with Aldrin and Collins.) They were the only faces I ever put on my wall, my childhood heroes. I was not alone in this.
Unlike Columbus, who led his expedition, Armstrong was one of a very large team, the one picked for the most prominent role. He was no mere cog of course, and his flying made the difference in having a successful mission.
Others of the 15th century who are household names today are:
Henry V (thanks to Shakespeare, I suspect) and Richard III
Vlad the Impaler (thanks to legends)
Some artists (Bosch, Botticelli)
Amerigo Vespucci (only by virtue of getting two continents named after him)
As we see, some are famous by accident (writers etc. picked up their stories.) That may even be true for Jeanne d’Arc whose story would mostly only have been preserved in French lore.
The great inventors and scientists like Gutenberg and Leonardo give a clue to help. Guru Nanak founded a major religion but his name is not well known outside that religion.
So while many people suggest Hitler will be one of the names, I am more doubtful. I think it would be appropriate if his evil is forgotten, after all he wasn’t even the greatest butcher of the 20th century.
No, I think the fame will go to explorers and scientists, and possibly some artists from our time. We may not even know which names will be romanticised. Some candidates I suspect are:
Drexler or Feynman if nanotechnology as they envisioned it arrives
Crick and Watson (or even Venter) if control of DNA is seen as central
Von Neumann, Turing or others if computers are seen as the great invention of the 20th century (which they may be.)
It’s hard to say what music, writing, movies or other art will endure and be remembered. Did the 20th century get a Shakespeare?
What are your nominations? Of the people I list above, once again all of them were capable of walking down the street without being recognized, just as Armstrong could. I suspect in the pre-camera days, so could Columbus and Gutenberg.
I’m watching the Olympics, and my primary tool as always is MythTV. Once you do this, it seems hard to imagine watching them almost any other way. Certainly not real time with the commercials, and not even with other DVR systems. MythTV offers a really wide variety of fast forward speeds and programmable seeks. This includes the ability to watch at up to 2x speed with the audio still present (pitch adjusted to be natural) and a smooth 3x speed which is actually pretty good for watching a lot of sports. In addition you can quickly access 5x, 10x, 30x, 60x, 120x and 180x for moving along, as well as jumps back and forth by some fixed amount you set (like 2 minutes or 10 minutes) and random access to any minute. Finally it offers a forward skip (which I set to 20 seconds) and a backwards skip (I set it to 8 seconds.)
MythTV even lets you customize these numbers so you use different numbers for the Olympics compared to other recordings. For example, the jumps are normally +/- 10 minutes and plus 30 seconds for commercial skip, but Myth has automatic commercial skip.
A nice mode allows you to go to smooth 3x speed with closed captions, though it does not feature the very nice ability I’ve seen elsewhere of turning on CC when the sound is off (by mute or FF) and turning it off when sound returns. I would like a single button to put me into 3xFF + CC and take me out of it.
Anyway, this is all very complex but well worth learning because once you learn it you can consume your sports much, much faster than in other ways, and that means you can see more of the sports that interest you, and less of the sports, commercials and heart-warming stories of triumph over adversity that you don’t. With more than 24 hours a day of coverage it is essential you have tools to help you do this.
I have a number of improvements I would like to see in MythTV like a smooth 5x or 10x FF (pre-computed in advance) and the above macro for CC/FF swap. In addition, since the captions tend to lag by 2-3 seconds it would be cool to have a time-sync for the CC. Of course the network, doing such a long tape delay, should do that for you, putting the CC into the text accurately and at the moment the words are said. You could write software to do that even with human typed captions, since the speech-recognition software can easily figure out what words match once it has both the audio and the words. Nice product idea for somebody.
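The caption time-sync idea in the last paragraph could work roughly like this: run speech recognition over the audio, match the recognized words against the caption text in order, and take the median timing difference as the lag to correct for. A minimal sketch, assuming you already have word lists with timestamps (a real system would need an ASR engine and much fuzzier matching):

```python
from statistics import median

def caption_offset(asr_words, caption_words):
    """Estimate how far captions lag behind the audio.

    asr_words, caption_words: lists of (word, seconds) pairs.
    Greedily matches words in order and returns the median lag,
    which a player could subtract from caption display times.
    """
    lags = []
    j = 0
    for word, spoken_at in asr_words:
        # find the next caption occurrence of this word
        for k in range(j, len(caption_words)):
            cap_word, shown_at = caption_words[k]
            if cap_word.lower() == word.lower():
                lags.append(shown_at - spoken_at)
                j = k + 1
                break
    return median(lags) if lags else 0.0

# Toy data: captions arrive roughly 2.6 seconds after the words are spoken.
asr = [("gold", 10.0), ("medal", 10.4), ("ceremony", 11.0)]
caps = [("gold", 12.6), ("medal", 13.0), ("ceremony", 13.5)]
print(f"{caption_offset(asr, caps):.1f}")   # 2.6
```

Using the median rather than the mean keeps one badly matched word from skewing the estimate, which matters with human-typed captions.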
Watching on the web
This time, various networks have put up extensive web offerings, and indeed on NBC this is the only way to watch many events live, or at all. Web offerings are good, though not quite at the quality of over-the-air HDTV, and quality matters here. But the web offerings have some failings.
A month ago I hosted Vernor Vinge for a Bay Area trip. This included my interview with Vinge at Google on his career and also a special talk that evening at Singularity University. In the 1980s, Vinge coined the term “the singularity” even though Ray Kurzweil has become the more public face of the concept of late.
He did not disappoint with an interesting talk on what he called “group minds.” He does not refer to the literal group minds that his characters known as the Tines have in his Zone novels, but rather all the various technologies that are allowing ordinary humans to collaborate in new ways that allow problems to be solved at a speed and scale not seen before. In puzzling over the various paths to the singularity — which means to him the arrival of an intelligence beyond our own — he and others have mostly put the focus on the creation of AI at human level and beyond. He points out that tools which use elements of AI to combine human thinking may generate a path to the singularity that is more probably benign.
In the talk he outlines a taxonomy of group minds, different ways in which they might form and exist, to help understand the space.
Any speaker or lecturer is familiar with a modern phenomenon. A large fraction of your audience is using their tablet, phone or laptop doing email or surfing the web rather than paying attention to you. Some of them are taking notes, but it’s a minority. And it seems we’re not going to stop this; even speakers do it when attending the talks of others.
However, while we have open wireless networks (which we shouldn’t) there is a trick that could be useful. Build a tool that sniffs the wireless net and calculates what fraction of the computers are doing something that suggests distraction — or doing anything on the internet at all.
While you could get creepy here and do internal packet inspection to see precisely what people are doing (for example, are they searching wikipedia for something you just talked about?) you don’t need to go that far. The simple fact that more people in the room are doing stuff on the internet, or doing heavy stuff on the internet, is a clue. You can also tell when people are doing a few core functions, like web surfing vs. SMTP vs. streaming, based on the port numbers they are going to. You can also tell if they are doing a common web-mail from the IP address. All of this works even if they are encrypting all their traffic like they should be (to stop prying tools like this!)
Only if they have set up a VPN (which they also should) will you be unable to learn things like ports and IP addresses, but again, it’s a nice indicator to know just what total traffic is, and how many different machines it’s coming from, and that will almost never be hidden.
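A distraction meter of the kind described could be built on exactly these coarse signals. Here is a minimal sketch of the classification step, working from already-captured packet metadata (the capture itself would need something like tcpdump and a monitor-mode interface; the port-to-label table is a rough assumption):

```python
from collections import defaultdict

# Rough port-based labels, as described above. Payloads can stay encrypted;
# we only look at metadata.
PORT_LABELS = {80: "web", 443: "web", 25: "smtp", 587: "smtp",
               143: "imap", 993: "imap", 1935: "streaming"}

def distraction_report(packets):
    """packets: list of (client_mac, dst_port, n_bytes) tuples.

    Returns per-machine traffic totals by label, plus the number of
    machines active at all -- the coarse signal a speaker's display
    would show.
    """
    per_machine = defaultdict(lambda: defaultdict(int))
    for mac, port, n_bytes in packets:
        label = PORT_LABELS.get(port, "other")
        per_machine[mac][label] += n_bytes
    return per_machine, len(per_machine)

# Toy capture: two web surfers, one mailer, one heavy streamer.
pkts = [("aa:1", 443, 1500), ("aa:1", 443, 900),
        ("bb:2", 25, 400), ("cc:3", 1935, 60_000)]
report, active_machines = distraction_report(pkts)
print(active_machines, dict(report["cc:3"]))   # 3 {'streaming': 60000}
```

As the text notes, a VPN hides the ports and addresses, but the per-machine byte totals (and hence `active_machines`) would still be visible.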
When the display tells you that most of your audience is using the internet, you could pause and ask for questions or find out why they are surfing. The simple act of asking when distraction gets high will reduce it, and make people embarrassed to have done so. Of course, a sneaky program that learns the MACs of various students could result in the professor asking, “What’s so fascinating on the internet, Mr. Wilson?” At the very least it would encourage the people in the audience to use more encryption. But you don’t have to get that precise. The broad traffic patterns are plenty of information.