Brad Templeton is Chairman Emeritus of the EFF, Singularity U computing chair, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, photographer and Burning Man artist.
This is an "ideas" blog rather than a "cool thing I saw today" blog. Many of the items are not topical. If you like what you read, I recommend you also browse back in the archives, starting with the best-of-blog section. There are also various "topic" and "tag" sections (see menu on right), and some are sub-blogs like Robocars, photography and Going Green. Try my home page for more info and contact data.
A follow-up thought about yesterday’s shuttle fly-by and panorama. I was musing: might this be the most photographed single event in human history to date?
Here’s the reasoning. Today there are more cameras and more photographers than ever, and people use them all the time in a way that continues to grow. To be a candidate for a most-photographed event, you would need to be recent, and you would need to take place in front of a ton of people, ideally with notice. It seemed like just about everybody in Sacramento, the Bay Area and LA was out for this and holding up a phone or camera.
Of course, many objects are more photographed, like the Golden Gate Bridge the shuttle flew over, but I’m talking here of the event rather than the object. And this was an event which moved over the course of thousands of miles. Other contenders:
The other shuttle fly-overs done over New York and Washington — also with large populations
Total eclipses of the sun which go over highly populated areas. The 2009 eclipse went over Shanghai, Varanasi and many other hugely populated areas but was clouded out for many. Nobody has yet made a photo of an eclipse that looks like an eclipse, of course — I’ve seen them all, including many of the clever HDRs and overlays — but that doesn’t stop people from trying.
The 1999 eclipse did go over a number of large European cities, but this was before the everybody-is-photographing era.
Most lunar eclipses are visible to as much as half the world, though they are hard to photograph with consumer camera gear, and only a fraction of people go out to watch and photograph them. Still, one could easily be a winner.
Prior to the digital era, a possible winner might be the moon landing. Back in 1969, every family had a camera, though usage wasn’t nearly what it is today. However, I remember the TV giving lessons on how to photograph a TV screen. Everybody was shooting their TV for the launches and the walk on the moon. Terrible pictures (much like early camera phone pictures) but people took them to be a part of the event. I recall taking one myself though I have no idea where it is.
Of course there may be objective ways to measure this today, by tracking the number of photos on photo sharing and social sites, and extrapolating the winner. If the shuttle is the winner for now, it won’t last long. Photography is going to grow even more.
I should also note that remote photography, as we did for Apollo, is clearly much larger in scale, in the form of recording video. For those giant events viewed by billions — World Cup, Olympics, Oscars etc. — huge numbers of people are recording them, at least temporarily.
Today marked the last trip through the air for the space shuttle, as the Endeavour was carried to LA to be installed in a museum. The trip included fly-overs of the Golden Gate Bridge and many other landmarks in SF and LA, and also a low pass over NASA Ames at Moffett Field, where I work at Singularity University. A special ceremony was held on the tarmac, and I went to get a panoramic photo. We all figured the plane would come along the airstrip, but they surprised us, having it fly a bit to the west so it suddenly appeared from behind the skeleton of Hangar One, the old dirigible hangar. That turned out to be bad for my photography, as I didn’t get much advance notice, and the shot of the crowd I had taken a few minutes before had everybody expectantly looking along the runway, not towards the west where the plane and shuttle appear in my photo.
However, it did make for a very dramatic arrival. So while different parts of this shot are at slightly different times, it does capture the scene of Moffett Field and the crowd awaiting the shuttle, and its arrival. I do, however, have a nice hi-res photo for you to enjoy, as well as the panoramic shot of the Endeavour shuttle fly-by.
Tomorrow (Wed Sep 19) I will give a robocars talk at Dorkbot SF in San Francisco. Dorkbot is a regular gathering of “People doing strange things with electricity” and there will be two other sessions.
Last week, the SARTRE project announced it was concluding after a long period of work on highway platooning. Volvo led the project, which demonstrated platoons on test tracks and on some real roads. They also did a number of worthwhile user studies in simulation.
People have been interested in platooning for a while. The main upsides they are looking for are:
It’s much easier than a robocar — the platoon is led by a truck with a professional driver who handles everything with human intelligence.
Putting the cars at short spacings can result in a huge increase in highway capacity, though you tend to want somewhat larger headways around the convoys.
There is a fuel saving — about 10% for the lead vehicle, and up to 30% for following vehicles, at spacings of about 4 to 6m. This is not quite as much as people hoped, but it is real.
The equipment in the following cars is simple — V2V radios and possibly some radar for backup.
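To see how those fuel numbers work out across a whole convoy, here is a minimal sketch. The 10% and 30% figures are the rough ones quoted above; the function itself is just illustrative arithmetic, not anything from the SARTRE project:

```python
def convoy_fuel_saving(n_vehicles, lead_saving=0.10, follower_saving=0.30):
    """Average per-vehicle fuel saving for a platoon at ~4-6m spacing.

    Uses the rough SARTRE-era figures: ~10% for the lead vehicle and
    up to ~30% for each follower (illustrative numbers only).
    """
    if n_vehicles < 2:
        return 0.0  # a convoy of one saves nothing
    total = lead_saving + follower_saving * (n_vehicles - 1)
    return total / n_vehicles

# A 5-vehicle convoy averages (0.10 + 4 * 0.30) / 5 = 26% per vehicle.
print(round(convoy_fuel_saving(5), 2))
```

Note how the average rises toward the follower figure as the convoy grows, which is one reason longer convoys are attractive despite their other problems.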
Unfortunately, platooning comes with some downsides as well:
If you have an accident, it can be catastrophic as you might crash a whole convoy of vehicles.
Non-platoon drivers may interfere with the convoy. The gaps must be kept small enough that nobody tries to enter them. A non-member in the middle of the convoy is bad news. You need small gaps to save fuel too.
Trucks must go only at the front of the convoy due to their longer stopping distance. New trucks must insert in the middle. Cars can insert more easily at the end of the convoy.
Convoys in the right lane can make it harder for people merging, and in general they can present a barrier to traffic.
Driving with a short gap is disconcerting. Behind a truck, you can’t even see the lane markers.
In rain, your windshield gets completely washed out with spray (and sometimes salt spray) which is even more disconcerting.
Following cars get hit by small stones and debris from the forward vehicle. After a long period of following, windshields are unacceptably chipped or cracked.
While radar is the primary means of tracking the car in front, and almost all vehicles give a nice radar reflection from the rear licence plate, many vehicles have other reflections further forward. You must avoid trying to follow 4m behind the front of a truck! To help with this, vehicles in the tests had superior radar reflectors mounted on them.
For good workable convoys, some of these problems need to be solved. It could be that in rain convoys must spread out (losing a lot of the fuel saving) though there is the danger of cars cutting in.
Convoys with longer gaps can still increase road capacity a lot, but they probably have to be robocar convoys. Robocar convoys can handle cars trying to cut into the gaps. They might start honking if somebody cuts in, and the car in front might also flash its rear lights and slow slightly to make it very clear to the cut-in driver that they should not have done this. This would be a problem when convoys are new, as people might not know what it all means, though anyone cutting in would have tried to enter a space that is clearly too small to safely enter. Cars in convoys might need a screen on the back that can display a sign: “You have barged into a convoy, change lanes immediately or be reported to police.”
Robocars could handle the rain to some degree, but even their laser sensors would not like operating in heavy spray, though their radars would get excellent returns from a reflector on the vehicles.
The stone chip problem is harder to solve. Robocars capable of full auto operation could try to protect their windshields, but this is disconcerting to occupants. And the rest of the car gets stone chips too.
It could be that platooning is only practical with vehicles that are dedicated to it, such as highway commute vehicles and long distance highway vehicles. Built for this purpose, they would just accept the stone chips as part of life. They might come with extra heavy duty wipers or other ways to deal with the rain. And they would be full robocars, able to handle disconnects and independent operation.
This result will disappoint those who felt platoons were a good early technology. I have felt they also suffered from a critical mass problem. To use a platoon, you would need to find one, and until the density of lead vehicles was high enough, you might not find one. You could do it at rush hour with mobile apps that track the presence of lead vehicles so you can time your departure to find one — you might even have an appointment for every commute. And they might run only on nice clean highways on dry days and still be valuable. But less valuable, I am afraid.
On lower speed roads the fuel saving is not as large, but the problems are smaller too. Most low-speed roads do have traffic lights, though, which present another problem.
A round-up of just some of the recent robocar news:
Stanford Shelly at 120mph
While the trip up Pikes Peak by Stanford’s Audi TT did not offer the high speeds we had hoped for, they have recently been doing some real race driving tests, clocking the car around a track at 120mph. This is even more impressive because the car drives with limited sensors. Here the goal is to test computer-driven high-speed tactics — rounding corners, climbing hills and more. While they didn’t quite reach the times of professional drivers, chances are someday they will, just from the perfect understanding of physics.
Driving this fast is hard in the real world because you’re going beyond the range of most sensors (radar and special lidars can go further, and cameras can see very far but are not reliable in all lighting.) The Stanford team had a closed track, so they were able to focus on cornering and skidding.
KPMG report on self-driving cars
The consulting firm KPMG has released an extensive report on self-driving cars. While it doesn’t contain too much that is new to readers of this site and blog, it joins the group which believes that car-to-car communication is going to be necessary for proper deployment of robocars. I don’t think so, and in fact think the idea of waiting for it is dangerous.
Speaking of V2V communication
For some time the V2V developers have been planning a testbed project in Ann Arbor, MI. They’ve equipped 3000 cars with “here I am” transponders that will broadcast their GPS data (position and velocity) along with other car data (brake application, turn signals, etc.) using DSRC. It is hoped that while these 3000 civilian cars will mostly wander around town, there will be times when the density of them gets high enough that some experiments on the success of DSRC can be made. Most of the drivers of the cars work in the same zone, making that possible.
If they don’t prove the technology, they probably won’t get the hoped-for 2013 mandate that all future cars include it. If they don’t get that, the 75 MHz of coveted spectrum allocated to DSRC will have other hungry forces going after it.
I owe readers a deeper analysis of the issues around vehicle-to-vehicle communications.
Google cars clock 300,000 miles
Google announced that our team (they are a consulting client) has now logged 300,000 miles of self-driving, with no accidents caused by the software. It was also acknowledged that the team has converted a hybrid Lexus RX450h in addition to the Toyota Prius. The Lexus certainly gives a more comfortable ride, and the new system looks very nice.
Google will also begin internal testing with team members doing solo commutes in the vehicles. The prior policy was that vehicles were always operated off-campus with two staff members aboard, as is appropriate for prototype systems.
Political attack ad goes after robocars
Jeff Brandes pushed Florida’s legislation to allow robocar testing and operation in that state, the second after Nevada. Now his political opponents have produced an ad which suggests robocars are dangerous and that you shouldn’t vote for Mr. Brandes because of his support for them. While we should expect just about anything in attack ads, this is a harbinger of the real debate to come. I doubt the authors of the ads really care about robocars — they just hope to find anything that might scare voters. My personal view, as I have said many times, is that while the technology does have to go through a period where it is less safe because it is being prototyped and developed, the hard truth is that the longer we wait to deploy it, the more years we rack up with 34,000 killed on the roads in the USA and 1.2 million worldwide each year. And Florida’s seniors are among the first on the list to need robocars. Is Jim Frishe’s campaign thinking about that?
Collision Warning strongly pushed in Europe
The EU is considering changing its crash-safety rules so that a car can’t get a 5-star rating unless it has forward collision warning, or even forward-collision mitigation (where the system brakes if you don’t.) These systems are already proving themselves, with data suggesting 15% to 25% reductions in crashes — which is pretty huge. While the law would not force vendors to install this, there are certain car lines where a 5-star rating is considered essential to sales.
For years I have posed the following question at parties and salons:
By the 25th century, who will be the household names of the 20th century?
My top contender, Armstrong, died today. I pick him because the best known name of the 15th century is probably Columbus, also known as the first explorer to reach a major new land — even though he probably wasn’t the actual first.
Oddly, while we will celebrate him today and for years to come, Armstrong was able to walk down the street for the past few decades without being recognized. Yet I had his photo on my wall as a child (along with Aldrin and Collins.) They were the only faces I ever put on my wall, my childhood heroes. I was not alone in this.
Unlike Columbus, who led his expedition, Armstrong was one of a very large team, the one picked for the most prominent role. He was no mere cog of course, and his flying made the difference in having a successful mission.
Others of the 15th century who are household names today are:
Henry V (thanks to Shakespeare, I suspect) and Richard III
Vlad the Impaler (thanks to legends)
Some artists (Bosch, Botticelli)
Amerigo Vespucci (only by virtue of getting two continents named after him)
As we see, some are famous by accident (writers etc. picked up their stories.) That may even be true for Jeanne d’Arc, whose story might otherwise have been preserved only in French lore.
The great inventors and scientists like Gutenberg and Leonardo give a clue to help. Guru Nanak founded a major religion, but his name is not well known outside that religion.
So while many people suggest Hitler will be one of the names, I am more doubtful. I think it would be appropriate if his evil is forgotten, after all he wasn’t even the greatest butcher of the 20th century.
No, I think the fame will go to explorers and scientists, and possibly some artists from our time. We may not even know which names will be romanticised. Some candidates I suspect are:
Drexler or Feynman if nanotechnology as they envisioned it arrives
Crick and Watson (or even Venter) if control of DNA is seen as central
Von Neumann, Turing or others if computers are seen as the great invention of the 20th century (which they may be.)
It’s hard to say what music, writing, movies or other art will endure and be remembered. Did the 20th century get a Shakespeare?
What are your nominations? Of the people I list above, once again all of them were capable of walking down the street without being recognized, just as Armstrong could. I suspect in the pre-camera days, so could Columbus and Gutenberg.
I’m watching the Olympics, and my primary tool as always is MythTV. Once you do this, it seems hard to imagine watching them almost any other way. Certainly not real time with the commercials, and not even with other DVR systems. MythTV offers a really wide variety of fast forward speeds and programmable seeks. This includes the ability to watch at up to 2x speed with the audio still present (pitch adjusted to be natural) and a smooth 3x speed which is actually pretty good for watching a lot of sports. In addition you can quickly access 5x, 10x, 30x, 60x, 120x and 180x for moving along, as well as jumps back and forth by some fixed amount you set (like 2 minutes or 10 minutes) and random access to any minute. Finally it offers a forward skip (which I set to 20 seconds) and a backwards skip (I set it to 8 seconds.)
MythTV even lets you customize these numbers, so you can use different numbers for the Olympics compared to other recordings. For example, the jumps are normally +/- 10 minutes, with a 30-second forward skip for commercials, though Myth also has automatic commercial skip.
A nice mode allows you to go to smooth 3x speed with closed captions, though it does not feature the very nice ability I’ve seen elsewhere of turning on CC when the sound is off (by mute or FF) and turning it off when sound returns. I would like a single button to put me into 3xFF + CC and take me out of it.
Anyway, this is all very complex but well worth learning because once you learn it you can consume your sports much, much faster than in other ways, and that means you can see more of the sports that interest you, and less of the sports, commercials and heart-warming stories of triumph over adversity that you don’t. With more than 24 hours a day of coverage it is essential you have tools to help you do this.
I have a number of improvements I would like to see in MythTV like a smooth 5x or 10x FF (pre-computed in advance) and the above macro for CC/FF swap. In addition, since the captions tend to lag by 2-3 seconds it would be cool to have a time-sync for the CC. Of course the network, doing such a long tape delay, should do that for you, putting the CC into the text accurately and at the moment the words are said. You could write software to do that even with human typed captions, since the speech-recognition software can easily figure out what words match once it has both the audio and the words. Nice product idea for somebody.
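The caption re-sync idea above can be sketched simply: once speech recognition has timestamped the words in the audio, matching them against the caption stream gives a lag to subtract from every caption timestamp. A toy version, assuming both word lists have already been produced (nothing here is a real MythTV feature):

```python
from statistics import median

def caption_offset(recognized, captions):
    """Estimate the constant lag between captions and the audio.

    `recognized`: (word, seconds) pairs from speech recognition of the
    audio track; `captions`: (word, seconds) pairs from the caption
    stream. Returns the median lag over matching words, which could
    then be subtracted from every caption timestamp to re-sync them.
    """
    first_heard = {}
    for word, t in recognized:
        first_heard.setdefault(word.lower(), t)  # keep earliest occurrence
    lags = [t - first_heard[w.lower()]
            for w, t in captions if w.lower() in first_heard]
    return median(lags) if lags else 0.0

audio = [("welcome", 1.0), ("to", 1.3), ("the", 1.5), ("olympics", 1.8)]
caps  = [("welcome", 3.5), ("to", 3.8), ("the", 4.0), ("olympics", 4.3)]
print(caption_offset(audio, caps))  # ~2.5 seconds of lag
```

A real tool would need to handle typos in human-typed captions and a drifting rather than constant lag, but the median-of-matches trick is the core of it.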
Watching on the web
This time, various networks have put up extensive web offerings, and indeed on NBC this is the only way to watch many events live, or at all. Web offerings are good, though not quite at the quality of over-the-air HDTV, and quality matters here. But the web offerings have some failings.
A month ago I hosted Vernor Vinge for a Bay Area trip. This included my interview with Vinge at Google on his career and also a special talk that evening at Singularity University. In the 1980s, Vinge coined the term “the singularity” even though Ray Kurzweil has become the more public face of the concept of late.
He did not disappoint, giving an interesting talk on what he called “group minds.” He was not referring to the literal group minds of his characters known as the Tines in his Zone novels, but rather to all the various technologies that are allowing ordinary humans to collaborate in new ways, letting problems be solved at a speed and scale not seen before. In puzzling over the various paths to the singularity — which to him means the arrival of an intelligence beyond our own — he and others have mostly put the focus on the creation of AI at human level and beyond. He points out that tools which use elements of AI to combine human thinking may offer a path to the singularity that is more probably benign.
In the talk he outlines a taxonomy of group minds, different ways in which they might form and exist, to help understand the space.
Any speaker or lecturer is familiar with a modern phenomenon. A large fraction of your audience is using their tablet, phone or laptop doing email or surfing the web rather than paying attention to you. Some of them are taking notes, but it’s a minority. And it seems we’re not going to stop this, even speakers do it when attending the talks of others.
However, while we have open wireless networks (which we shouldn’t) there is a trick that could be useful. Build a tool that sniffs the wireless net and calculates what fraction of the computers are doing something that suggests distraction — or doing anything on the internet at all.
While you could get creepy here and do internal packet inspection to see precisely what people are doing (for example, are they searching Wikipedia for something you just talked about?), you don’t need to go that far. The simple fact that more people in the room are doing stuff on the internet, or doing heavy stuff on the internet, is a clue. You can also tell when people are using a few core functions, like web surfing vs. SMTP vs. streaming, based on the port numbers they connect to. And you can often tell if they are using a common web-mail service from the IP address. All of this works even if they are encrypting all their traffic, as they should be (to stop prying tools like this!)
Only if they have set up a VPN (which they also should) will you be unable to learn things like ports and IP addresses, but again, it’s a nice indicator to know just what total traffic is, and how many different machines it’s coming from, and that will almost never be hidden.
When the display tells you that most of your audience is using the internet, you could pause and ask for questions or find out why they are surfing. The simple act of asking when distraction gets high will reduce it, and make people embarrassed to have done so. Of course, a sneaky program that learns the MACs of various students could result in the professor asking, “What’s so fascinating on the internet, Mr. Wilson?” At the very least it would encourage the people in the audience to use more encryption. But you don’t have to get that precise. The broad traffic patterns are plenty of information.
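A minimal sketch of the coarse, payload-free tallying described above, assuming some sniffer has already reduced each packet to a (MAC address, destination port) pair. The port-to-category table uses standard port assignments; everything else is invented for illustration:

```python
# Ports per the standard IANA assignments; no payloads are inspected,
# only (source MAC, destination port) pairs captured by a sniffer.
PORT_CATEGORIES = {
    80: "web", 443: "web",
    25: "mail (SMTP)", 993: "mail (IMAPS)",
    1935: "streaming (RTMP)",
}

def distraction_report(packets):
    """Given (mac, dst_port) pairs, return the set of active machines
    and a rough packet count per traffic category."""
    machines = set()
    counts = {}
    for mac, port in packets:
        machines.add(mac)
        cat = PORT_CATEGORIES.get(port, "other")
        counts[cat] = counts.get(cat, 0) + 1
    return machines, counts

pkts = [("aa:1", 443), ("aa:1", 443), ("bb:2", 25), ("cc:3", 80)]
machines, counts = distraction_report(pkts)
print(len(machines), counts["web"])  # 3 machines active, 3 web packets
```

The speaker’s display would only need the machine count and the category totals; per-MAC detail is exactly the creepy part you can leave out.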
I’m here in Newport Beach at the Transportation Research Board’s conference on self-driving vehicles. Today in a pre-session there was discussion of pre-robocar technologies, and in particular applications of “managed lanes” and what they might mean for these technologies. Managed lanes are things like HOV/carpool lanes, HOT (carpool + toll) lanes, reversible lanes, etc. Many people imagine these lanes would be used with pre-robocar technologies like convoys, super-cruise, cooperative ACC, Bus Rapid Transit, etc.
As I’ve said before the first rule of robocars is “you don’t change the infrastructure.” First you must make the vehicles operate fully on the existing infrastructure. And people are doing that. But we can also investigate what happens next.
Robocars as many envision them thus do not need dedicated lanes, even though some of the simpler technologies might. Earlier we talked about electrification, which is a pretty expensive adaptation. Let’s talk about high speed lanes.
Robocars (or any car) would be of much greater interest to people if they could go very fast in them. On one hand, the ability to work, read, watch video and possibly sleep in a robocar will mean to some that trip time is less important than comfort, and they might actually be happy with a slower trip with fewer disturbances. But sometimes a faster trip is very important, particularly on the long haul.
Today people are working hard to make robocars safe. Eventually they should be able to make them safe even at higher speeds, particularly on freeways that were designed for fairly high speeds. Even human drivers routinely see over 100mph on the autobahns of Germany. The problem is that if you want to go 120mph outside of Germany, there’s no road you can easily do that on. The other cars, going 65 to 80mph in the fast lane, will get in the way, creating an uncomfortable ride and possibly dangerous situations.
Many of today’s “managed lanes” are primarily for use in rush hour, from 5am to 9am and 3pm to 7pm. In other hours, traffic is very light. What if that special lane did not just become an ordinary lane after rush hour, but instead were converted to another special purpose? There are a lot of different technologies that might become viable with such a lane.
The most interesting one to me is high speed. If the carpool lane switched to being the high-speed-car lane at 9:30am, I actually think a lot of people might very well delay their commutes and shift their hours. A one-hour commute at 8am or a 15 minute trip at 9:30am — not a hard choice for many. And lots of people travel mid-day for various purposes.
The high-speed lane would actually mandate a minimum speed, perhaps 100mph when the road is clear. To get in this lane you would need a car that is certified safe at that speed or above. This might be a robocar, but it might also be a human-driven car with sufficient driver-assist technologies to certify it safe at that speed. The lane would probably only be open in good weather, and would probably revert to ordinary status in the event the main road got congested for whatever reason. Vehicles in the lane would have to be connected vehicles, ready to receive signals about changes to the dynamic status of the lane.
There probably would also be a requirement for efficient vehicles. Wind drag at 120mph costs 4 times as much fuel per mile as wind drag at 60mph. These cars would have to be highly aerodynamic designs. They might also be capable of platooning to further reduce drag, though you would want to wait a while to assure safety before platooning at 120mph. You might insist on alternate fuels or even that they be electric vehicles or other low emission vehicles. It doesn’t matter — I think there are a lot of people who would pay a lot of money to be able to go 120mph.
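The “4 times as much fuel” figure is just the square law for aerodynamic drag. Drag force scales with the square of speed, and energy per mile is force times distance, so it scales the same way (drag power scales with the cube of speed, but you cover the mile in proportionally less time). A quick check:

```python
def drag_energy_ratio(v_fast, v_slow):
    """Ratio of aerodynamic drag energy *per mile* at two speeds.

    Drag force F ~ v**2, and energy per mile is F x distance, so the
    per-mile energy ratio is (v_fast / v_slow)**2. (Drag *power* scales
    as v**3, but the mile takes less time at higher speed.)
    """
    return (v_fast / v_slow) ** 2

print(drag_energy_ratio(120, 60))  # 4.0
```

This is only the drag term; rolling resistance and drivetrain losses scale differently, which is why the real-world penalty for speed is somewhat less dramatic than the aerodynamic term alone.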
The lanes in general would need to be separated from the main lanes. Most carpool lanes are already like that, though most of the ones in the SF Bay Area are not this style. Ideally they would be the style that even has a special merge lane at the points where entry and egress from the main lanes are possible.
If such a program were a success we could see more. For example, one could imagine adding an extra lane to Interstate 5 in the California central valley and having it be a high speed lane most of the time. The planned California High Speed Rail, which probably will never be finished, is forecast to cost $68 billion. Two extra lanes on I-5 in the central valley south of Sacramento would cost well under a billion, and offer fairly high speed travel to those in the valley — faster door to door than the HSR. And my calculations even suggest that aerodynamic electric vehicles would use less energy per passenger-mile than the HSR (definitely if they are shared by as few as 2-3 people or when designed for a platoon). These teardrop-shaped cars would also be much more efficient than today’s cars when they slow down and ply the ordinary highways and streets.
It is not trivial to go 120mph in a robocar though. Your sensors must be long range so you can stop if they see something. If you want to build infrastructure, here is where the road might have sensors which can report on road obstacles and other vehicles to assure safety. If you’re building a whole high speed lane this is not an issue. The first rule of robocars is written to avoid needing new infrastructure to do ordinary driving and get most places — not to prevent you from taking advantage of new spending that justifies itself.
An MIT team has been working on a car that is “hard to crash.” Called the intelligent co-pilot it is not a self-driving car, but rather a collection of similar systems designed to detect if you are about to hit something and try to avoid it. To some extent, it actually wrests control from the driver.
When I first puzzled over the roadmap to robocars I proposed this might be one of the intermediary steps. In particular, I imagined a car where, in a danger situation, the safest thing to do is to let go of the wheel and have the car get you to a safe state. This car goes further, actually resisting you if you try to drive the car off the road or towards an obstacle.
This is a controversial step, and the reasons are understood by the MIT team. First of all, from a legal liability standpoint, vendors are afraid of overriding the human. If a person is in control of a vehicle and makes a mistake, they are liable. If a machine takes over and saves the day, it’s great, but if the machine takes over and there is an accident — an accident the human could have avoided — there could be high risks to the maker of the machine as well as the occupant. In most designs, the system is set up so that the human has the opportunity for control at all times.
Actually, it’s even worse. A number of car makers are building freeway autopilots which still require attention from the driver in case the lane markers disappear or other problems ensue. One way some of them have built this is to require the driver to touch the wheel every so often to show they are alert. They will beep if the driver does not touch the wheel, and they will even disengage if the driver waits for too long after the beep. Consider what the companies have interpreted the liability system to require: That the right course of action, when the system is driving and the driver has her hands off the wheel, is to disengage and let the vehicle wander freely and possibly careen off the road! Of course, they don’t want the vehicle to do that, but they want to make it clear to the driver that they can’t depend on the system, can’t decide to type a long E-mail while it is running.
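The touch-the-wheel logic described above amounts to a small state machine. A sketch, with invented timeout values (real systems differ, and this is my reading of the behavior, not any vendor’s code):

```python
from enum import Enum

class State(Enum):
    ENGAGED = 1      # autopilot driving; driver recently touched the wheel
    WARNING = 2      # beeping at the driver to touch the wheel
    DISENGAGED = 3   # autopilot hands control back (the troubling part)

def autopilot_state(seconds_since_touch, warn_after=15.0, disengage_after=25.0):
    """Touch-the-wheel dead-man logic. Timeout values are invented
    for illustration; real systems vary by vendor."""
    if seconds_since_touch < warn_after:
        return State.ENGAGED
    if seconds_since_touch < disengage_after:
        return State.WARNING
    return State.DISENGAGED

print(autopilot_state(5.0).name)   # ENGAGED
print(autopilot_state(20.0).name)  # WARNING
print(autopilot_state(30.0).name)  # DISENGAGED
```

The liability-driven oddity is that final transition: rather than stopping safely, the system’s answer to an inattentive driver is to stop driving.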
And this relates to the final problem of human accommodation. When a system makes people safer, they compensate by being more reckless. For example, anti-lock brakes are great and prevent wheel lock-up on slippery roads — but they cause drivers to feel they have invincible brakes and studies show they drive more aggressively because of them. Only a safe robocar avoids this problem; its decisions will always be based on a colder analysis of the situation.
A hard-to-crash car is still a very good idea. Before a full robocar is available, it can make a lot of sense, particularly for aging people and teens with new licences. But it may never come to market due to liability concerns.
Vernor Vinge is perhaps the greatest writer of hard SF and computer-related SF today. He has won 5 Hugo awards, including 3 in a row for best novel (nobody has done 4 in a row) and his novels have inspired many real technologies in cyberspace, augmented reality and more.
I invited him up to speak at Singularity University, but before that he visited Google to talk in the Authors@Google series. I interviewed him about his career and major novels and stories, including True Names, A Fire Upon the Deep, Rainbows End and his latest novel The Children of the Sky. We also talked about the concept of the Singularity, for which he coined the term.
There have been experiments with dedicated lanes in the past, including a special automated lane back in the 90s in San Diego. The problem is much easier to solve (close to trivial by today’s standards) if you have a dedicated lane, but this violates the first rule of robocars in my book — don’t change the infrastructure.
Aside from the huge cost of building the dedicated lanes, once you have built a lane you have a car which can only drive itself in that dedicated lane. That’s a lot less valuable to the customer: effectively you only get customers who happen to commute on that particular route, rather than being attractive to everybody. And you can’t self-drive on the way to or from the highway, so it is not clear what they mean when they say the driver sets a destination, other than perhaps the planned exit.
Yes, the car is a lot cheaper but this is a false economy. Robocar sensors are very expensive today but Moore’s law and volume will make them cheaper and cheaper over time. Highway lanes are not on any Moore’s law curve, in fact they are getting more expensive with time. And if the lane is dedicated, that has a number of advantages, though it comes with a huge cost.
Of course, today, nobody has a robocar safe enough to sell to consumers for public streets. But I think that by the early 2020s, when a lane recommended by this study might actually be completed, the engineers would open up the new lane and find that while it’s attractive for its regular nature (and especially attractive if it is restricted and thus has lighter and more regular traffic), the cars are already able to drive on the regular lanes just fine.
A better proposal, once robocars start to grow in popularity, would be to open robocar lanes during rush hour, like carpool lanes. These lanes would not be anything special, though they would feature a few things to make the car’s job easier, such as well maintained markings, magnets in the road if desired, no changes in signage or construction without advance notice etc. But most of all they would be restricted during rush hour so that cars could take advantage of the smooth flow and predictable times that would come with all cars being self-driving. Unless humans kept taking over the cars and braking when they got scared or wanted to look at an accident in the other lanes, these lanes would be metered and remain free of traffic jams. However, you need enough robocar flow to justify them since if you only use half the capacity of a lane it is wasteful. On the other hand, such lanes could be driven by the more common “super cruise” style cars that just do lane following and ACC.
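The capacity argument for such a restricted lane can be put in rough numbers. Here is a back-of-envelope sketch; the speed, car length and headway figures are all illustrative assumptions, not measured values:

```python
# Back-of-envelope lane throughput: vehicles per hour at a constant speed,
# given the time headway (following gap) each vehicle keeps.
def lane_throughput(speed_mph, headway_s, car_length_ft=15.0):
    speed_fps = speed_mph * 5280 / 3600        # convert mph to feet per second
    gap_ft = speed_fps * headway_s             # distance covered in one headway
    return speed_fps * 3600 / (car_length_ft + gap_ft)

human = lane_throughput(60, 1.5)   # a typical human headway: roughly 2,150 veh/hour
robo = lane_throughput(60, 0.5)    # a tighter robocar headway: roughly 5,400 veh/hour
```

Even this crude model shows both sides of the trade-off: tight, predictable headways can more than double a lane’s flow, but if robocar volume only fills a fraction of that, reserving the lane wastes capacity.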
Hats off to the video embedded below, which was prepared for a futuristic transportation expo in my home town of Toronto.
The vehicle, called the PAT (People and Things), is a robotic taxi/delivery vehicle; the video outlines its UI and shows a period in its day as it moves around Toronto picking up people and packages.
I first learned about the video from a new blog on the subject of consumer self driving cars — as far as I know the second serious one to exist after this one. The Driverless Car HQ started up earlier this year and posts with a pretty solid volume. They are more comprehensive in posting various items that appear in the media than I am, and cover some areas I don’t, so you may want to check them out. (That’s a conscious choice on my part, as I tend not to post links to stories that I judge don’t tell us much new. An example would be that the SARTRE road train just did a demo in Spain last month, but it was not much different from demos they had done before.)
Of course, as I said earlier, sadly “Driverless Car” is one of my least favourite terms for this technology, but that doesn’t impede the quality of the blog. In addition, while I do report news on the Google car on this blog, I tend to refrain from commentary due to being on that team, and the folks at DCHQ are not constrained this way. Among the features shown in the video:
Face recognition of passengers as they approach the car
Automatic playing of media for the passengers (apparently resuming from media paused earlier in some cases)
Doing package delivery work when needed
Self-cleaning after each passenger
Optional ride-share with friends
In-car video conferencing on the car’s screens
Offering the menu of a cafe which is the destination of a trip. (Some suspect this is a location-based ad spam, but I think it’s a more benign feature because the passenger is picking up his ride-share friend at the cafe.)
The UIs are slick and nicely done, if a bit busy.
The concept vehicle at the Brickworks is fairly simple but does present some ideas I have felt are worthwhile, such as single passenger vehicles, face to face seating etc. It’s a bit too futuristic, and not aerodynamic. In the concept, it adjusts for the handicapped. I actually think that’s the reverse of what is likely to happen. Rather than making all cars able to meet all needs, it makes more sense to me to have specialized cars that are cheaper and more cost effective at their particular task, and have dedicated (more expensive) vehicles for wheelchairs. (For example, I like the hollow vehicles like the Kenguru.) I think you serve the disabled better for the same money by having these specialized vehicles — the wait may be slightly longer, but the vehicle can be much better at serving the individual’s needs.
Ford, which has already touted the value of robocars, has announced plans to do a traffic-assist autopilot system sometime mid-decade. Ford joins Mercedes, VW/Audi and Cadillac in announcing such systems. Ford’s vehicle will also offer automatic parking in perpendicular parking spots. For some time many cars have offered automated parallel parking. Since most people do not find perpendicular parking all that difficult, perhaps their goal here is very tight spaces (though that would require getting out of the car and blocking the rude driver, which I have found out only gets your car vandalized) or possibly parking in a personal garage that is very thin.
AUVSI and Mercedes
On the negative front, Mercedes appears to have backed off their plan to offer a traffic jam assistant in the 2013 S class. Earlier in June I attended the AUVSI “Driverless Car Summit” in Detroit, and Mercedes indicated that while they do have that technology in their F.800 concept car, this is only a prototype. As currently set up, the Mercedes system requires you to touch the wheel every 8 seconds. Honda was promoting this in 2006. Mercedes also showed their “6D” stereo vision based system which demonstrated impressive object tracking. They also claimed it does as well in differing light conditions, which would be a major breakthrough.
Some other notes from the conference:
There was effectively universal hate for the term “driverless car.” I join the haters, since the car has a driver, but it’s a computer. No other term won big support, though.
While AUVSI is about unmanned military vehicles, they put on a nicely demilitarized conference, which was good.
There were still a lot of fans of DSRC (a car data radio protocol) and V2V communications. Some from that community have now realized they went down the wrong path but a lot had made major career investments and will continue to push it, including inside the government.
The NHTSA is doing a research project on how they might regulate safety standards. They have not laid out a strategy but will be looking at sensor quality, low-level control system quality, UI for the handoff between manual and self-driving, and testing methodology.
I liked Mercedes’ terms for various modes of self-driving: Feet off, Hands off, Eyes off and Body out. The car companies are aiming at hands off, Google is working on Eyes Off but Body out (which means being so good that the car can operate without anybody in it or without any attention from the occupant) is the true robocar and the long term goal for many but not all projects.
Continental showed more about their own cruising system that combines lane-keeping and automatic cruise-control. They now say they have the 10,000 miles of on-road testing needed for the Nevada testing licence, but have not yet decided if they will get one. There is some question whether what they are doing requires a licence under the Nevada regulations. (I suspect it does not.) However, they were quizzed as to whether they were testing in Nevada without a licence, which they deny. Continental says their system is built entirely from parts that will be “production parts” as of early 2013.
Legal and states panels showed progress but not too much news. States seem to be pleased so far.
The National Federation for the Blind showed off their blind driving challenge. They have become keen on building a car which has enough automation for a blind person to operate but still uses the blind driver’s skills (such as hearing and thinking) to make the task possible. This is an interesting goal for the feeling of autonomy, but I suspect it is more likely they will just get full-auto cars sooner, and they accept this is likely.
Know me by my flyer number and don’t repeat things to me I’ve already certified as knowing, like safety rules
Know my language (I input it, after all) and don’t bother me with announcements other than in my best understood language
Show me most things as text, perhaps in a crawl under my show. If need be, have me confirm I understand.
Tailor the message to my age and my location in the plane. Show me exits on the screen for my seat.
Cut back on the spam about how great your airline is, how wonderful the FF plan is or why I should buy duty free.
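The wishlist above amounts to a per-passenger filter in front of the announcement system. A minimal sketch, with entirely illustrative field names (no airline system described here exposes such an API):

```python
# Route an announcement to one passenger: skip categories they've already
# acknowledged, skip promos they opted out of, and deliver as an on-screen
# text crawl in their preferred language, falling back to English.
def deliver(announcement, passenger):
    if announcement["category"] in passenger["acknowledged"]:
        return None                      # e.g. safety rules already certified as known
    if announcement["category"] == "promo" and not passenger["wants_promos"]:
        return None                      # no FF-plan or duty-free spam
    text = (announcement["text"].get(passenger["language"])
            or announcement["text"]["en"])   # fall back to English
    return {"seat": passenger["seat"], "mode": "text-crawl", "text": text}
```

The design point is that filtering happens per seat, so one passenger’s Russian translation never interrupts another passenger’s movie.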
Today, instead, you can see the visible frustration on the faces of flyers as their movie is interrupted so that they can hear the translation into Russian of the long announcement they just heard in German and English.
Having good custom in-flight entertainment is good, and considered a major competitive feature, but already I see more and more people preferring to put out a tablet, even when they have a super-fancy system in the first-class seats. The tablet of course does not have the interruptions (even for the tiny number of real announcements such as in our case last week, “we can’t get the landing gear doors to close so we’re dumping fuel and returning to the airport”) and it also has, if you prepare it, customized entertainment that you know you want to watch.
Frankly, I am not sure who programs the video selections on many of the airlines but I have to suspect they don’t just try to get the best movies with good reviews. They either try to get the cheapest movies or have deals with certain studios — it’s amazing how few quality films they might have in a selection of 100 movies.
I also remain disappointed at how badly implemented most of the in-flight systems I have seen are. They are all slow, with highly noticeable lags after keypresses, poor touchscreens, freezes and crashes. Any tablet or phone puts them to shame when it comes to UI and responsiveness. And to top it off, they are huge — on many airlines, many of the seats have reduced footroom to fit the box for the video system. (It also has other in-seat electronics I presume but still, it’s about 10x bigger than it needs to be.) This is particularly odd since in planes floor space and weight are at such a premium. A tablet computer, either fixed in the seat or on an anti-theft power/data tether, would provide a better system — smaller, lighter, better UI, cheaper, better screen — in just about every way. Of course when they first designed these seats years ago they did not have cheap tablets, but there is little excuse to continue installing the old ones.
Wait, how could they have known? How could they have not known? It’s 2012. We’ve known for decades now that each year computer products get smaller, faster, cheaper and superior in major ways. When you are designing a system to install in the future, it’s a mistake to design it based on the current technology. You should bet that something better will be along, and make your design adaptable to it. If nothing else your standard design is going to get faster and higher resolution — which makes the slow response time of the existing systems inexplicable.
Many airlines are starting to offer satellite TV. That’s better than the old limited selections (or in particular a single bad movie) but actually not too appealing. Aside from being full of commercials and ignoring your schedule, with TV the announcements and interruptions make you miss crucial parts of your show as they talk over them. More than once I’ve been watching a show on an airline to have them talk over the climax of the film.
I’m whining a lot, but it’s because I do believe this is important. The truth is that on a flight you are often tired and cramped, and reading and working are not tremendously comfortable. I bring a book but read at a reduced speed. Having nice noise-cancelling headphones and a good in-flight entertainment system with quality content can make a flight much better, and it’s a shame that so many things are obviously wrong with the systems they have built. Today’s flights are stressful in any cabin, and a quiet and uninterrupted experience would do a lot to increase customer satisfaction.
There’s a lot of excitement about the potential of autonomous drones, be they nimble quadcopters or longer-range fixed wing or hybrid aircraft. A group of students from Singularity University, for example, has a project called MatterNet working to provide transportation infrastructure for light cargo in regions of Africa where roads wash out for half the year.
Closer to home, these drones are not yet legal for commercial use, while government agencies are using them secretly.
Here’s one useful idea: a small set of medical drones scattered around the city. Upon an emergency call, they can fly, via a combination of autonomous navigation and remote-human-operated flying at the end, to any destination in the city within a couple of minutes. Call 911 and as soon as you say it’s a medical emergency, the drone is on the way. When it gets there, the human operator lands it, or even sends it onto a balcony of a tall building. Somebody has to carry it to the patient if they are far from the outside.
When it gets to the patient it has a camera and conferencing ability so a remote doctor can examine the patient and talk to people around the patient to ask them questions or give them instructions. It also could contain one of those “foolproof defibrillator” modules able to deal with many kinds of heart attacks. They are already in many buildings, but this way they could be anywhere. It’s more useful than a taco.
The remote doctor could advise any medical staff who come, or give advice to the ambulance that’s on the way but not getting there for a few minutes. If a medicine that can be administered by a layperson is needed, there might be some in the drone, but a second drone could be loaded and dispatched within a few minutes as well — that might take longer to fly but less time than an ambulance. You might not put any valuable medicines in the first drone, to prevent people from summoning one just to steal them, though the drone itself might be stolen unless steps are taken to make that non-productive.
This should be combined with something I have felt is long overdue in the world of our mobile phones. People who are able to be on-call EMTs and doctors should have their phones updating their locations with a medical service while they are on call for such action. Then anybody with an emergency should be able to summon or get to the closest professional very quickly. (Of course there is no need to record this data after it changes, to avoid making a life-log of the doctor.) Nobody should ever have to ask “is there a doctor in the house?” 911 should be able to say, “There is a doctor 3 doors down, she’s been notified.” But the drone can always come, and bring a remote specialist if need be.
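The registry idea above is easy to sketch. A minimal illustration, under stated assumptions: the names are invented, the distance math is a flat-plane approximation that is fine at city scale, and a real service would need authentication and geodesic distance:

```python
# On-call professional registry: keep only each person's latest position
# (overwritten on every update, so no life-log accumulates) and find the
# nearest one when an emergency comes in.
import math

registry = {}   # pro_id -> (lat, lon); prior fixes are discarded

def update_location(pro_id, lat, lon):
    registry[pro_id] = (lat, lon)   # replaces the old fix, never logged

def nearest(lat, lon):
    def dist(pro_id):
        plat, plon = registry[pro_id]
        return math.hypot(plat - lat, plon - lon)   # flat-plane approximation
    return min(registry, key=dist) if registry else None
```

Storing only the latest fix is the privacy design choice called out above: the service can answer “who is closest right now?” without ever being able to reconstruct where a doctor has been.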
The other barrier to this is network dead zones. A map would need to be made of network dead zones, and the drone would not land in them, though it could fly through them. It would land just outside the dead zone, and warn people not to carry it into one if the remote doctor’s services are needed.
Someday, the drone could contain a winner of the X-prize “Medical Tricorder” contest with sensors to diagnose all sorts of conditions, and it might even eventually be a robot able to administer emergency drugs — but the actual delivery and video feed is something we can do today.
One of my first rules of robocars is “you don’t change the infrastructure.” Changing infrastructure is very hard, very expensive, requires buy-in from all sorts of parties who are slow to make decisions, and even if you do change it, you then have a functionality that only works in the places you have managed to change it. New infrastructure takes many decades — even centuries, to become truly ubiquitous.
That’s why robocar enthusiasts have been skeptical of things like ITS plans for roadside to vehicle and vehicle to vehicle communications, plans for dedicated highway lanes with special markers, and for PRT which needs newly built guideways. You have to work with what you have.
There are some ways to bend this rule. Some infrastructure changes are not too hard — they might just require something as simple and cheap as repainting. Some new infrastructures might be optional — they make things better in the places you put them, but they are not necessary to operations. Some might focus on specific problem areas — like special infrastructure in heavy pedestrian areas or parking lots, enabling or improving optional forms of operation in those areas.
Another possibility is to have robocars enable a form of new infrastructure, turning it upside down. The infrastructure might need the robocars rather than the other way around. I wrote about that sort of plan when discussing a solar panel on a robocar.
A recent proposal from Siemens calls for having overhead electric wires for trucks. Trolley buses and trams use overhead electric wires, and there are hybrid trolley buses (like the Boston T line) which can run either on the wires or on an internal diesel. These trucks are of that type. The main plan for this is to put overhead wires in things like shipping ports, where trucks are running around all the time, and they would benefit greatly from this.
I’ve seen many proposals for electrification of the roads. Overhead wires are problematic because they need to be high enough to go over the trucks and other high vehicles, but that makes them harder to reach from low vehicles. You need two wires and must get good contact. They are also damn ugly. This has led to proposals for inductive power supplies buried in the road. This is very expensive as it requires tearing up the road. There are also inductive losses, and while you don’t need to make contact, precise driving is important for efficiency. In these schemes, battery-electric cars would be able to avoid using their batteries (and in fact charge them) while on the highway, vastly increasing their range and utility.
Robocars offer highly precise driving. This would make it easier to line up on overhead wires or inductive coils in the road. It even would make it possible to connect with rails in the roadbed, though right now people don’t want to consider having a high voltage rail on the ground, even on a highway.
It was proposed to me (I’m trying to remember by whom — my apologies) that one new option would be a rail on the side of the highway. This lane would be right up against the guardrail, and normally would be the shoulder. In the guardrail would be power rails, and a connector would come from the left side of the vehicle. Only a robot would be able to drive precisely enough to do this safely. Even with a long pole and more distance I am not sure people would enjoy trying to drive like this. A grounding rail in the roadbed might also be an option — though again tearing up the roadbed is very expensive to do and maintain.
There is still the problem of having a live rail or wire at reachable height. The system might be built with an enclosed master cable and then segments of live wire which are only live when a vehicle is passing over them. Obviously a person doesn’t want to be there when a car is zooming through. This requires robust switching equipment for the thousands of watts one wishes to transfer. You also have to face the potential that a car from the regular lanes could crash into the rail and wires, and while that’s never going to be safe, you don’t want to make it worse. You also need switching if you are going to have accounting, so only those who pay for it get power. (Alternately it could be sold by subscription, so you don’t account for the usage, and you identify cars without a subscriber tag that are sucking juice and fine them.)
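The segment-switching logic described above can be sketched as follows. This is purely illustrative (the class and method names are invented, and real traction-power switchgear involves far more than boolean flags), but it captures the two rules: a segment is live only while an authorized vehicle is over it, and any ground fault kills the whole feed:

```python
# Sketch of a segmented live-rail controller. A segment is energized only
# while a present, authorized (paying) vehicle is detected over it; a ground
# fault anywhere (e.g. a crash exposing a conductor) cuts the master feed.
class SegmentController:
    def __init__(self, n_segments):
        self.live = [False] * n_segments
        self.master_on = True

    def vehicle_update(self, segment, present, authorized):
        # Energize only for a present, authorized vehicle; otherwise dead.
        self.live[segment] = self.master_on and present and authorized

    def ground_fault(self):
        # Trip everything; vehicles fall back to their batteries or engines.
        self.master_on = False
        self.live = [False] * len(self.live)
```

The accounting question from the text maps onto the `authorized` flag: an unauthorized vehicle simply never gets a live segment under it, rather than being billed after the fact.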
There is also the problem that this removes the shoulder, which provides safety to other cars and a breakdown lane. If a vehicle does have to stop in this lane for emergency reasons, sensors in the rail could make sure that all robocars would know and leave the lane with plenty of margin. They would all have batteries or engines and be able to operate off the power — indeed the power lines need not be continuous; you don’t have to build them in sections of the road where it’s difficult. If other cars are allowed to enter the lane, brushing the wires must pose no danger to them beyond the physical contact itself.
It’s also possible that the rail could be inductive. The robocar could drive and keep its inductor contact just a short distance from the coils in the rail. This is more expensive than direct contact, and not as efficient, but it’s a lot cheaper than burying inductors in the roadbed. It’s safe for pedestrians and most impacts, and while a hard impact could expose conductors, a ground fault circuit could interrupt the power. Indeed, because all vehicles on the line will have alternate power, interruption in the event of any current not returning along the return is a reasonable strategy.
For commuters with electric cars, there is a big win. You can get by with far less battery and still go electric. The battery costs a lot of money — more than enough to justify the cost of installing the connection equipment. And having less battery means less weight, and that’s the big win for everybody, as you make the vehicles more efficient when you cut out that weight. Of course, if this lane is only for use by electrified robocars, it becomes a big incentive to get one just to use the special lane.
The power requirements are not small. Cars will want 20 kW to go at highway speed, and trucks a lot more. This makes it hard to offer charging as well as operating current, but smaller cars might be able to get a decent charge while driving.
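A quick worked figure on that 20 kW number, assuming 60 mph for the arithmetic (both numbers are the rough ballpark used above; real draw varies with vehicle and speed):

```python
# Energy math for a 20 kW highway-cruise draw at an assumed 60 mph.
draw_kw = 20.0
speed_mph = 60.0
kwh_per_mile = draw_kw / speed_mph                # about 0.33 kWh per mile
commute_miles = 30
battery_saved_kwh = kwh_per_mile * commute_miles  # ~10 kWh not needed on board
```

In other words, powering just a 30-mile highway leg from the wire spares roughly 10 kWh of battery, which is the economic case made above for installing the connection equipment.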
Like most people, I have a lot of different passwords in my brain. While we really should have used a different system from passwords for web authentication, that’s what we are stuck with now. A general good policy is to use the same password on sites you don’t care much about and to use more specific passwords on sites where real harm could be done if somebody knows your password, such as your bank or email.
The problem is that over time you develop many passwords, and sometimes your browser does not remember them for you. So you go back to a site and try to log in, and you end up trying all your old common passwords. The problem: At many sites, if you enter the wrong password too many times, they lock you out, or at least slow you down. That’s not unwise on their part, but a problem for you.
One solution: Sites can remember hashes of your old passwords. If you type in an old password, they can say, “No, that used to be your password but you have a new one now.” And not count that as a failed attempt by a password cracker. This adds a very slight risk, in that it lets a very specific attacker who knows you super well get a few free hits if they have managed to learn your old passwords. But this risk is slight.
Of course they should store a hash of the password, not the actual password. No site should store the actual password. If a site can offer to mail you your old password rather than offering a link to reset the password, it means they are keeping it around. That’s a security risk for you, and also means if you use a common password on such sites, they now know it and can log in as you on all the other sites you use that password at. Alas, it’s hard to tell when creating an account whether a site stores the password or just a hash of it. (A hash allows them to tell if you have typed in the right password by comparing the hash of what you typed and the stored hash of the password back when you created it. A hash is one-way so they can’t go from the hash to the actual password.) Alas, only a small minority of sites do this right.
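The scheme in the last two paragraphs can be sketched in a few lines. This is illustrative only: it uses plain salted SHA-256 for brevity where a real site should use a deliberately slow hash (bcrypt, scrypt or similar), and it reuses one salt per account for simplicity:

```python
# Store only salted hashes, keep hashes of retired passwords, and answer
# "that used to be your password" without counting it as a cracking attempt.
import hashlib
import os

def hash_pw(password, salt):
    # Note: a real system should use a slow KDF, not a single SHA-256 pass.
    return hashlib.sha256(salt + password.encode()).hexdigest()

class Account:
    def __init__(self, password):
        self.salt = os.urandom(16)
        self.current = hash_pw(password, self.salt)
        self.old = set()          # hashes of retired passwords only
        self.failures = 0         # counts toward lockout

    def change_password(self, new_password):
        self.old.add(self.current)
        self.current = hash_pw(new_password, self.salt)

    def login(self, attempt):
        h = hash_pw(attempt, self.salt)
        if h == self.current:
            return "ok"
        if h in self.old:
            return "old-password"   # helpful reply; no lockout penalty
        self.failures += 1
        return "wrong"
```

Since only hashes are kept, the site can never mail you your password back, and learning the old hashes tells an attacker nothing directly usable — only the narrow extra hint described above.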
This is just one of many things wrong with passwords. The only positive about them is that you can keep a password entirely in your memory, and thus go to a random computer and log in with nothing but your brain. That is also part of what is wrong with them, in that others can do that too, and the remote computer can quite easily be compromised and recording the password. The most secure systems use the combination of something in your memory and information in a device. Even today, though, people are wary of solutions that require them to carry a device. Pretty soon that will change, and not having your device will be so rare as to not be an issue.
I’m doing a former-cold-war tour this month and talking about robocars.
This Friday, May 11, I will be giving the 2301st lecture for the Philosophical Society of Washington with my new, Prezi-enabled robocars talk. This takes place around 8pm at the John Wesley Powell Auditorium. This lecture is free.
A week later it’s off to Moscow to enjoy the wonders of Russia.
There will be a short talk locally in between at a private charity event on May 14.
I found this recent article from the editor of the MIT Tech Review on why apps for publishers are a bad idea touched on a number of key issues I have been observing since I first got into internet publishing in the 80s. I recommend the article, but if you insist, the short summary is that publishers of newspapers and magazines flocked to the idea of doing iPad apps because they could finally make something that they sort of recognized as similar to a traditional publication; something they controlled and laid out, that was a combined unit. So they spent lots of money, ran into nightmares (having to design for both landscape and portrait on the tablet, as well as possibly on the phones or even Android), and didn’t end up selling many subscriptions.
Since the dawn of publishing there has been a battle between design and content. This is not a battle that has or should have a single winner. Design is important to enjoyment of content, and products with better design are more loved by consumers and represent some of the biggest success stories. Creators of the content — the text in this case — point out that it is the text where you find the true value, the thing people are actually coming for. And on the technology side, the value of having a wide variety of platforms for content — from 30” desktop displays to laptops to tablets to phones, from colour video displays to static e-ink — is essential to a thriving marketplace and to innovation. Yet design remains so important that people will favour the iPhone just because they are all the same size, and most Android apps still can’t be used on Google TV.
This is also the war between things like PDF, which attempts to bring all the elements of paper-based design onto the computer, and the purest form of SGMLs, including both original and modern HTML. Between WYSIWYG and formatting languages, between semantic markup and design markup. This battle is quite old, and still going on. For many designers, layout is all they do, and the idea that a program should lay out text and other elements to fit a wide variety of display sizes and properties is anathema. To technologists, the idea that layout should be fixed is equally anathema.
Also included in this battle are the forces of centralization (everything on the web or in the cloud) and the distributed world (custom code on your personal device) and their cousins online and offline reading. A full treatise on all elements of this battle would take a book for it is far from simple.
I sit mostly with the technologists, eager to divide design from content. I still write all my documents in text formatting languages with visible markup and use WYSIWYG text editors only rarely. An ideal system that does both is still hard to find. Yet I can’t deny the value and success of good design, and believe the best path is compromise in this battle. We need compromises in design and layout, and compromises between the cloud and the dedicated application. End-user control leads to some amount of chaos. It’s a chaos that is feared by designers and publishers and software creators, but it is also the chaos that gives us most of our good innovations, which come from the edge.
Let’s consider all the battles I perceive for the soul of how computing, networks and media work:
The design vs. semantics battle (outlined above)
The cloud vs. personal device
Mobile, small and limited in input vs. tethered, large screen and rich in input
Central control vs. the distributed bazaar (with so many aspects, such as)
The destination (facebook) vs. the portal (search engine)
The designed, uniform, curated experience (Apple) vs. the semi-curated (Android) vs. the entirely open (free software)
The social vs. the individual (and social comment threads vs. private blogs and sites)
The serial (email/blogs/RSS/USENET) vs. the browsed (web/wikis) vs. the sampled (facebook/twitter)
The reader-friendly (fancy sites, well filtered feeds) vs. writer friendly (social/wiki)
In most of these battles both sides have virtues, and I don’t know what the outcomes will be, but the original MITTR article contained some lessons for understanding them.