Submitted by brad on Sat, 2011-01-29 16:50.
As readers of this blog surely know, for several years I have been designing, writing about and forecasting the technology of self-driving “robocars.” I’m pleased to announce that I have recently become a consultant to the robot car team working at Google.
Of course all that work will be done under NDA, and so until such time as Google makes more public announcements, I won’t be writing about what they or I are doing. I am very impressed by the team and their accomplishments, and to learn more I will point you to my blog post about their announcement and the article I added to my web site shortly afterward. It also means I probably won’t blog in any detail about certain areas of technology, and in some cases won’t comment on the work of other teams, because of conflict of interest. However, as much as I enjoy writing and reporting on this technology, I would rather be building it.
I have been delivering my philosophical message about robocars for years, but it should be clear that I am simply consulting on the project, not setting its policies or acting as a spokesman.
My primary interest at Google is robocars, but many of you also know my long history in online civil rights and privacy, an area in which Google is often involved in both positive and negative ways. Indeed, while I was chairman of the EFF I felt there could be a conflict in working for a company which the EFF frequently has to either praise or criticise. I will be recusing myself from any EFF board decisions about Google, naturally.
Submitted by brad on Fri, 2011-01-14 20:58.
Every day I get into my car and drive somewhere. My mobile phone has a lot of useful apps for travel, including maps with traffic and a lot more, and I find myself calling them up constantly.
I believe that my phone should notice when I am driving off from somewhere, or about to, and automatically do some things for me. Of course, it could notice this if it ran the GPS all the time, but that’s expensive from a power standpoint, so there are other ways to identify this:
- If the car has bluetooth, the phone usually associates with the car. That’s a dead giveaway, and can at least be a clue to start looking at the GPS.
- Most of my haunts have wireless, and the phone associates with the wireless at my house and all the places I work. So it can notice when it disassociates and again start checking the GPS. To get smart, it might even notice the MAC addresses of wireless networks it can’t see inside the house, but which it does see outside or along my usual routes.
- Of course, moving out to the car involves jostling and walking in certain directions (the phone has a compass).
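The cues above could be combined into a simple score before the phone commits power to the GPS. Here is a minimal sketch of that heuristic; the cue names, weights and threshold are my own illustrative assumptions, not any real phone API.

```python
# Sketch of a low-power "am I driving off?" detector combining the cues above.
# Cue names, weights and the threshold are illustrative assumptions.

DRIVE_SCORE_THRESHOLD = 2  # wake the GPS once cues accumulate past this

CUE_WEIGHTS = {
    "car_bluetooth_connected": 3,   # near-certain giveaway on its own
    "home_wifi_disassociated": 1,   # could also just be a walk outside
    "street_wifi_mac_seen": 1,      # a MAC only visible along usual routes
    "jostle_and_walking": 1,        # accelerometer plus compass pattern
}

def should_wake_gps(active_cues):
    """Return True if the observed cues justify turning on the GPS."""
    score = sum(CUE_WEIGHTS.get(cue, 0) for cue in active_cues)
    return score >= DRIVE_SCORE_THRESHOLD
```

The point of the weighting is that the car's Bluetooth alone is decisive, while the weaker cues need to corroborate each other before spending power.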
Once it thinks it might be in the car, it should go to a mode where my “in the car” apps are easy to get to, in particular the live map of the location with the traffic displayed, or the screen for the nav system. Android has a “car mode” that tries to make it easy to access these apps, and it should enter that mode.
It should also now track me for a while to figure out which way I am going. Depending on which way I head and the time of day, it can probably guess which of my common routes I am going to take. For regular commuters, this should be a no-brainer. This is where I want it to be really smart: instead of me having to call up the traffic, it should see that I am heading towards a given highway, and then check to see if there are traffic jams along my regular routes. If it sees one, then it should beep to signal that, and if I turn it on, I should see that traffic jam. This way if I don’t hear it beep, I can feel comfortable that there is light traffic along the route I am taking. (Or that if there is traffic, it’s not traffic I can avoid with alternate routes.)
This is the way I want location based apps to work. I don’t want to have to transmit my location constantly to the cloud, and have the cloud figure out what to do at any given location. That’s privacy invading and uses up power and bandwidth. Instead the phone should have a daemon that detects location “events” that have been programmed into it, and then triggers programs when those events occur. Events include entering and leaving my house or places I work, driving certain roads and so on.
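Such a daemon could be sketched as a small geofence engine living entirely on the phone: places and callbacks are registered locally, and nothing is sent to any cloud. The class and its API below are my own invented illustration of the idea, not an existing framework.

```python
import math

# Sketch of a local "location events" daemon: places are registered on the
# phone, and callbacks fire on enter/leave events. No position ever leaves
# the device. The API names here are illustrative assumptions.

class LocationEventDaemon:
    def __init__(self):
        self.places = {}      # name -> (lat, lon, radius_m)
        self.handlers = []    # list of (event, place, callback)
        self.inside = set()   # places we are currently inside

    def register_place(self, name, lat, lon, radius_m):
        self.places[name] = (lat, lon, radius_m)

    def on(self, event, place, callback):
        """Register a callback for 'enter' or 'leave' at a named place."""
        self.handlers.append((event, place, callback))

    def _distance_m(self, lat1, lon1, lat2, lon2):
        # Equirectangular approximation; fine at city scales.
        dlat = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1) * math.cos(math.radians(lat1))
        return 6371000 * math.hypot(dlat, dlon)

    def update_fix(self, lat, lon):
        """Feed in an occasional position fix; fire enter/leave events."""
        for name, (plat, plon, radius) in self.places.items():
            now_inside = self._distance_m(lat, lon, plat, plon) <= radius
            was_inside = name in self.inside
            if now_inside and not was_inside:
                self.inside.add(name)
                self._fire("enter", name)
            elif was_inside and not now_inside:
                self.inside.discard(name)
                self._fire("leave", name)

    def _fire(self, event, place):
        for ev, pl, cb in self.handlers:
            if ev == event and pl == place:
                cb()
```

Usage would look like `daemon.on("leave", "home", launch_car_mode)`; the same mechanism could key off wireless or Bluetooth MACs instead of coordinates.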
And yes, for tools like shopkick, they can even be entering stores I have registered. And as I blogged at the very beginning of this blog many years ago, we can even have an event for when we enter a store with a bad reputation. The phone can download a database of places and wireless and Bluetooth MACs that should trigger events, and as such the network doesn’t need to know my exact location to make things happen. But most importantly, I don’t want to have to know to ask if there is something important near me, I want the right important things to tell me when I get near them.
Submitted by brad on Sun, 2011-01-09 20:09.
Like me, you probably have a dozen “universal” remote controls gathered over the years. With each new device and remote you go through a process to try to figure out special codes to enter into the remote to train it to operate your other devices. And it’s never very good, except perhaps in the expensive remotes with screens and macros.
The first universal remotes had to do this because they were made after the TVs and other devices, and had to control old ones. But the idea’s been around for decades, and I think we have it backwards. It’s not the remote that should work with any TV, it’s the TV that should work with any remote. I’m not even sure in most cases we need to have the remote come with the TV, though I know they like designing special magic buttons and layouts for each new remote.
It would be trivial for any TV or other device that displays video to figure out exactly what sort of remote you are pointing at it, and then figure out what to do with all its buttons. Since these devices now all have USB plugs and internet connections, they can even get their data updated. With the TV in a remote-setting mode (which you must of course reach via the few keys on the TV itself), a few buttons from any remote should let the TV figure out what it’s seeing. If it can’t tell the difference, it can ask on the screen for you to push specific buttons until you see a picture of your remote on the screen and confirm.
If it can’t figure it out, it can still learn the codes from any device by remembering them. This would let it prompt you “push the button you want to change the channel” and you would push it and it would know. You could also tweak any remotes. But most people would see the very simple interface of “press these keys and we’ll figure out which remote you have.” It also makes it easy to have more than one device of the same type. In particular, it eliminates the need for so many “modes” where you have to tell the remote you want to control the TV now, then the satellite box, then the stereo, then the dvd player. Instead, just tell the TV “ignore the buttons I am about to press” (for example the volume buttons) and tell the stereo to obey them. Or program a button to do different things on different devices: not a macro where a smart remote sends all the codes needed to tell the TV and stereo to switch inputs while turning on the DVD player, but just each box responding in its own way.
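The identification step amounts to narrowing a database of known remotes with each decoded button press. A toy sketch of the TV-side matcher; every database entry here is made up for illustration, and a real one would also match on pulse timings and full code tables.

```python
# Sketch of a TV-side "which remote is this?" matcher. The TV decodes a few
# button presses into (protocol, device_code) pairs and narrows down a local
# database of known remotes. All entries below are invented examples.

REMOTE_DB = {
    "AcmeCable RC-100": ("NEC", 0x40),
    "Sony TV RM-1":     ("SIRC", 0x01),
    "GenericDVD X9":    ("NEC", 0x6B),
}

def match_remote(observed_presses):
    """Return the set of remote models consistent with every press seen."""
    candidates = set(REMOTE_DB)
    for protocol, device_code in observed_presses:
        candidates = {model for model in candidates
                      if REMOTE_DB[model] == (protocol, device_code)}
    return candidates
```

If more than one candidate survives, the TV would do exactly what the post describes: show pictures on screen and ask for specific button presses until one remains.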
For outlying cases, you could tell the user to program their universal remote for some well-established old device. Every universal remote out there can control a Sony TV, for example. That ensures the TV will know at least one set of codes.
The TVs and other devices might as well recognize all the infrared keyboards out there while they are at it.
Of course, as TVs figure out how to do this, the remotes can change. They can become a bit more standardized, and instead of trying to figure everything out, they can be the dumb device and the AV equipment can be the smart device. It’s the AV equipment that has storage, a screen, audio and so much more.
You can also train devices to understand there are multiple remotes that belong to some people. For example, the adult remote can be different from the child’s remote, and only the adult remote can see the Playboy channel, and is kept private. The child’s remote can also be limited to a number of hours of TV as I first suggested six years ago at the birth of this blog.
You can even fix the annoying problem of most remote protocols — “on” and “off” are the same button. This makes it very hard to do things like macro control because you can’t be sure what that code can do. You can have a “turn everything off” button that really works (I presume some of the ones out there use hidden non-toggle codes when they can) or codes to do things like switch on the DVD if it’s not already on, switch video and audio inputs to it, and start playing — something many systems have tried to do but rarely done well.
There are a few things to tweak to make sure “IR blasters” work properly. (These are IR outputs found on DVRs which send commands to cable and satellite boxes to change their channel etc. They are a horrible kludge, and the best way to get rid of them is the new protocols that connect the devices up over IP or the new IP over HDMI 1.4, or failing that the badly-done Anynet.)
But the key point here is this: Remotes put the smarts in the wrong place.
Submitted by brad on Sat, 2011-01-08 16:36.
The “burning” question for electric cars is how to compare them with gasoline. Last month I wrote about how wrong the EPA’s 99mpg number for the Nissan Leaf was, and I gave the 37mpg number you get from the Dept. of Energy’s methodology. More research shows the question is complex and messy.
So messy that the best solution is for electric cars to publish their efficiency in electric terms, which means a number like “watt-hours/mile.” The EPA measured the Leaf as about 330 watt-hours/mile (or .33 kwh/mile if you prefer.) For those who really prefer an mpg type number, so that higher is better, you would do miles/kwh.
Then you would get local power companies to publish local “kwh to gallon of gasoline” figures for the particular mix of power plants in that area. This also is not very easy, but it removes the local variation. The DoE or EPA could also come up with a national average kwh/gallon number, and car vendors could use that if they wanted, but frankly that national number is poor enough that most would not want to use it in the above-average states like California. In addition, the number in other countries is much better than in the USA.
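The arithmetic behind such a label is a one-liner, which is part of why wh/mile plus a local conversion factor is the honest way to do it. A sketch, using the EPA's theoretical heat-content equivalence as the example factor; treat the constant as illustrative, since your local utility's figure is what would actually matter.

```python
# The conversion the article describes, as a small helper. The kWh-per-gallon
# figure below is the commonly cited theoretical heat content of gasoline;
# a local utility would publish its own (and very different) number.

THEORETICAL_KWH_PER_GALLON = 33.7  # perfect-conversion equivalence

def mpg_equivalent(wh_per_mile, kwh_per_gallon):
    """Convert an EV's wh/mile rating to an mpg-equivalent number."""
    miles_per_kwh = 1000.0 / wh_per_mile
    return miles_per_kwh * kwh_per_gallon
```

For the Leaf at 330 wh/mile, the perfect-conversion method gives roughly 102 mpg-e, in the neighbourhood of the EPA's 99 once charging losses are counted; plugging in a heat-methodology or coal-heavy local factor instead is what drives the number down toward the 37 mpg figure.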
The local mix varies a lot. Nationally it’s about 50% coal, 20% gas, 20% nuclear and 10% hydro with a smattering of other renewables. In some places, like Utah, New Mexico and many midwestern areas, it is 90% or more coal (which is bad.) In California, there is almost no coal — it’s mostly natural gas, with some nuclear, particularly in the south, and some hydro. In the Pacific Northwest, there is a dominance by hydro and electricity has far fewer emissions. (In TX, IL and NY, you can choose greener electricity providers which seems an obvious choice for the electric-car buyer.)
Understanding the local mix is a start, but there is more complexity. Let’s look at some of the different methods, starting with an executive summary for the 330 wh/mile Nissan Leaf and the national average grid:
- Theoretical perfect conversion (EPA method): 99 mpg-e(perfect)
- Heat energy formula (DoE national average): 37 mpg-e(heat)
- Cost of electricity vs. gasoline (untaxed): 75 mpg-e($)
- Pollution, notably PM2.5 particulates: Hard to calculate, could be very poor. Hydrocarbons and CO: very good.
- Greenhouse Gas emissions, g CO2 equivalent: 60 mpg-e(CO2)
Submitted by brad on Thu, 2011-01-06 16:16.
Like just about everybody, I hate the way travel through airports has become. Airports get slower and bigger and more expensive, and for short-haul flights you can easily spend more time on the ground at airports than you do in the air. Security rules are a large part of the cause, but not all of it.
In this completely rewritten essay, I outline the design of a super-cheap airport with very few buildings, based on a fleet of proto-robocars. I call them proto models because these are cars we know how to build today, which navigate on prepared courses on pavement, in controlled situations and without civilian cars to worry about.
In this robocar airport, which I describe first in a narrative and then in detail, there are no terminal buildings or gates. Each plane just parks on the tarmac and robotic stairs and ramps move up and dock to all its doors. (Catering trucks, fuel trucks and luggage robots also arrive.) The passengers arrive in a perfect boarding order in robocars that dock at the ramps/steps to let them get on the plane through every entrance. Luggage is handled by different robots, and is checked and picked up not in carousels and check-in desks, but at curbs, parking lots, rental car centers and airport hotels.
The change is so dramatic that (even with security issues) people could arrive at airports for flights under 20 minutes before take-off, and get out even faster. Checked luggage would add time, but not much. I also believe you could build a high capacity airport for a tiny fraction of the cost of today’s modern multi-billion dollar edifices. I believe the overall experience would also be more pleasant and more productive for all.
This essay is a long one, but I am interested in feedback. What will work here, and what won’t? Would you love to fly through this airport or hate it? This is an airport designed not to give you a glorious building in which to wait but to get you through it without waiting most of the time.
The airport gets even better when real robocars, that can drive on the streets to the airport, come on the scene.
Give me your feedback on The Robocar Airport.
Key elements of the design are detailed in the full essay.
Submitted by brad on Fri, 2010-12-31 11:39.
This year, I bought Microsoft Kinect cameras for the nephews and niece. At first they will mostly play energetic X-box games with them but my hope is they will start to play with the things coming from the Kinect hacking community — the videos of the top hacks are quite interesting. At first, MS wanted to lock down the Kinect and threaten the open source developers who reverse engineered the protocol and released drivers. Now Microsoft has official open drivers.
This camera produces a VGA colour video image combined with a Z (depth) value for each pixel. This makes it trivial to isolate objects in the view (like people and their hands and faces), and splitting foreground from background is easy. The camera is $150 today (when even a simple one-line LIDAR cost a fortune not long ago) and no doubt cameras like it will be cheap $30 consumer items in a few years’ time. As I understand it, the Kinect works using a mixture of triangulation (the sensor being in a different place from the emitter) combined with structured light (sending out arrays of dots and seeing how they are bent by the objects they hit). An earlier report that it used time-of-flight is disputed, and the structured-light approach implies it will get cheaper fast. Right now it doesn’t do close up or very distant, however. While projection takes power, meaning it won’t be available full time in mobile devices, it could still show up eventually in phones for short-duration 3-D measurement.
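The triangulation geometry is worth seeing in miniature: because the emitter and sensor are separated by a baseline, a projected dot's apparent shift (its disparity) encodes depth. The baseline and focal length below are made-up numbers for illustration, not the Kinect's actual calibration.

```python
# Depth from structured-light triangulation, in one line of geometry.
# The default baseline and focal length are invented illustrative values.

def depth_from_disparity(disparity_px, baseline_m=0.075, focal_px=580.0):
    """Depth in metres from the pixel disparity of a projected dot."""
    if disparity_px <= 0:
        raise ValueError("dot not matched; no depth estimate")
    return baseline_m * focal_px / disparity_px
```

Because depth varies as one over disparity, a one-pixel matching error costs little up close and a lot far away, which is consistent with such cameras doing poorly at long range.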
I agree with those that think that something big is coming from this. Obviously in games, but also perhaps in these other areas.
Gestural interfaces and the car
While people have already made “Minority Report” interfaces with the Kinect, studies show these are not very good for desktop computer use — your arms get tired and are not super accurate. They are good for places where your interaction with the computer will be short, or where using a keyboard is not practical.
One place that might make sense is in the car, at least before the robocar. Fiddling with the secondary controls in a car (such as the radio, phone, climate system or navigation) is always a pain and you’re really not supposed to look at your hands as you hunt for the buttons. But taking one hand off the wheel is OK. This can work as long as you don’t have to look at a screen for visual feedback, which is often the case with navigation systems. Feedback could come by audio or a heads up display. Speech is also popular here but it could be combined with gestures.
A Gestural interface for the TV could also be nice — a remote control you can’t ever misplace. It would be easy to remember gestures for basic functions like volume and channel change and arrow keys (or mouse) in menus. More complex functions (like naming shows etc.) are best left to speech. Again speech and gestures should be combined in many cases, particularly when you have a risk that an accidental gesture or sound could issue a command you don’t like.
I also expect gestures to possibly control what I am calling the “4th screen” — namely an always-on wall display computer. (The first 3 screens are Computer, TV and mobile.) I expect most homes to eventually have a display that constantly shows useful information (as well as digital photos and TV) and you need a quick and unambiguous way to control it. Swiping is easy with gesture control so being able to just swipe between various screens (Time/weather, transit arrivals, traffic, pending emails, headlines) might be nice. Again in all cases the trick is not being fooled by accidental gestures while still making the gestures simple and easy.
In other areas of the car, things like assisted or automated parking, though not that hard to do today, become easier and cheaper.
Small scale robotics
I expect an explosion in hobby and home robotics based on these cameras. Forget about Roombas that bump into walls, finally cheap robots will be able to see. They may not identify what they see precisely, though the 3D will help, but they won’t miss objects and will have a much easier time doing things like picking them up or avoiding them. LIDARs have been common in expensive robots for some time, but having it cheap will generate new consumer applications.
There will be some gestural controls for phones, particularly when they are used in cars. I expect things to be more limited here, with big apps to come in games. However, history shows that most of the new sensors added to mobile devices cause an explosion of innovation so there will be plenty not yet thought of. 3-D maps of areas (particularly when range is longer which requires power) can also be used as a means of very accurate position detection. The static objects of a space are often unique and let you figure out where you are to high precision — this is how the Google robocars drive.
Security & facial recognition
3-D will probably become the norm in the security camera business. It also helps with facial recognition in many ways (both by isolating the face and allowing its shape to play a role) and recognition of other things like gait, body shape and animals. Face recognition might become common at ATMs or security doors, and be used when logging onto a computer. It also makes “presence” detection reliable, allowing computers to see how and where people are in a room and even a bit of what they are doing, without having to do full object recognition. (Though as the Kinect hacks demonstrate, depth cameras help object recognition as well.)
Face recognition is still error-prone of course, so its security uses will be limited at first, but it will get better at telling people apart.
Virtual worlds & video calls
While some might view this as gaming, we should also see these cameras heavily used in augmented reality and virtual world applications. It makes it easy to insert virtual objects into a view of the physical world and have a good sense of what’s in front and what’s behind. In video calling, the ability to tell the person from the background allows better compression, as well as blanking of the background for privacy. Effectively you get a “green screen” without the need for a green screen.
You can also do cool 3-D effects by getting an easy and cheap measurement of where the viewer’s head is. Moving a 3-D viewpoint in a generated or semi-generated world as the viewer moves her head creates a fun 3-D effect without glasses and now it will be cheap. (It only works for one viewer, though.) Likewise in video calls you can drop the other party into a different background and have them move within it in 3-D.
With multiple cameras it is also possible to build a more complete 3-D model of an entire scene, with textures to paint on it. Any natural scene can suddenly become something you can fly around.
Amateur video production
Some of the above effects are already showing up on YouTube. Soon everybody will be able to do it. The Kinect’s firmware already does “skeleton” detection, to map out the position of the limbs of a person in the view of the camera. That’s good for games but also allows motion capture for animation on the cheap. It also allows interesting live effects, distorting the body or making light sabres glow. Expect people in their own homes to be making their own Avatar-like movies, at least on a smaller scale.
These cameras will become so popular we may need to start worrying about interference by their structured light. These are apps I thought of in just a few minutes. I am sure there will be tons more. If you have something cool to imagine, put it in the comments.
Happy Seasons to all! and a Merry New Year.
Submitted by brad on Mon, 2010-12-20 17:27.
I’ve written frequently about how driving fatalities are the leading cause of death for people from age 5 to 45, and one of the leading overall causes of death. I write this because we hope that safe robocars, with a much lower accident rate, can eliminate much of this death.
Today I sought to calculate the toll in terms not of lives, but of years of life lost. Car accidents kill people young, while the biggest killers like heart disease/stroke, cancer and respiratory disease kill people when they are older. The CDC’s injury prevention dept. publishes a table of “Years of Potential Life Lost,” which I have had calculated for a lifespan of 80 years. (People who die after 80 are not counted as having lost years of life, though a more accurate accounting might involve judging the average expected further lifespan for each age cohort and counting that as the YPLL.)
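The YPLL metric itself is simple arithmetic: each death before the cutoff contributes the years remaining to it. A sketch with invented death counts (not CDC data) to show why a cause that kills the young climbs the rankings:

```python
# Years of Potential Life Lost before a cutoff age. The death counts in the
# example are invented for illustration, not real CDC figures.

def ypll(deaths_by_age, cutoff=80):
    """Sum, over all deaths, the years remaining to the cutoff age."""
    return sum(max(0, cutoff - age) * count
               for age, count in deaths_by_age.items())

crash_like = {30: 1000}    # 1000 deaths at age 30 -> 50,000 years lost
cardiac_like = {75: 1000}  # 1000 deaths at age 75 ->  5,000 years lost
```

The same body count costs ten times the years when it strikes at 30 rather than 75, which is exactly why auto accidents jump from #7 to #3 under this metric.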
The core result of the table, though, is quite striking. Auto accidents jump to #3 on the list from #7, and the ratios become much smaller. While each year almost a million die from cardiovascular causes and 40,000 from cars, the ratio of total years lost is closer to 4 to 1 for both cardiovascular disease and cancer, and the other leading causes are left far behind. (The only ones to compete with the cars are suicides and accidental poisoning, which is much worse than I expected.)
The lesson: Work on safe robocars is even more vital than we might have thought, if you use this metric. It also seems that those interested in saving years of life may want to address the problem of accidental poisoning. Perhaps smart packaging or cheap poison detection could have a very big effect. (Update: This number includes non-intentional drug overdoses and deaths due to side effects of prescription drugs.) For suicide, this may suggest that our current approaches to treating depression need serious work. (For example, there are drugs that have surprising effectiveness on depression such as ketamine which are largely unused because they have recreational uses at higher doses and are thus highly controlled.) And if you can cure cancer, you would be doing everybody a solid.
Note: Stillbirths are not counted here. I would have expected the Perinatal causes to rank higher due to the large number of years erased. If you only do it to 65, thus counting what might get called “productive years” the motor vehicle deaths take on a larger fraction of the pie. Productivity lost to long term disability is not counted here, though it is very common in non-fatal motor vehicle accidents. Traffic deaths are dropping though so the 2009 figures will be lower.
Submitted by brad on Fri, 2010-12-17 12:25.
Passwords are in the news thanks to Gawker Media, which had its database of userids, emails and passwords hacked and published on the web. A big part of the fault is Gawker’s, which was saving user passwords (so it could email them) and thus was vulnerable. As I have written before, you should be very critical of any site that is able to email you your password if you forget it.
Some of the advice in the wake of this to users has been to not use the same password on multiple sites, and that’s not at all practical in today’s world. I have passwords for many hundreds of sites. Most of them are like gawker — accounts I was forced to create just to leave a comment on a message board. I use the same password for these “junk accounts.” It’s just not a big issue if somebody is able to leave a comment on a blog with my name, since my name was never verified in the first place. A different password for each site just isn’t something people can manage. There are password managers that try to solve this, creating different passwords for each site and remembering them, but these systems often have problems when roaming from computer to computer, or trying out new web browsers, or when sites change their login pages.
The long term solution is not passwords at all, it’s digital signatures (though those have all the problems listed above), and it’s not to even have logins at all, but instead to use authenticated actions, so we are neither creating accounts to do simple actions nor using a federated identity monopoly (like Facebook Connect). This is better than OpenID too.
Submitted by brad on Thu, 2010-12-16 15:29.
I decided to gather together all my thoughts on how robocars will affect urban design. There are many things that might happen, though nobody knows enough about urban planning to figure out just what will happen. However, I felt it worthwhile to outline the forces that might be at work so that urban geographers can speculate on what they will mean. It is hard to make firm predictions. For example, does the ability for a short pleasant trip make people want a Manhattan where everybody can get anywhere in 10 minutes, or does the ability to work or relax during trips make people not care about the duration and lead to more sprawl? It can go either way, or both.
Read Robocar influence on the future of cities.
In other notes, now that Masdar’s PRT is in limited operation, there are more videos of it. Here is a CNN Report with good shots of the cars moving around. As noted before, the system is massively scaled back, and runs at ground level, underneath elevated pedestrian streets. The cars are guided by magnets but there is LIDAR to look for pedestrians and obstacles.
City of Apple
The designer of Masdar, Foster + Partners, has been retained to design the new “City of Apple” which is going to spring up literally a 5 minute walk from my house. Apple has purchased the large Cupertino tract that was a major HP facility (and which also held Tandem, which HP eventually bought) and a few other companies. This is about a mile from Apple’s main HQ in Cupertino. Speculation about the plan includes a transportation system of some kind, possibly a PRT like in Masdar. However, strangely, there is talk of an underground tunnel between the buildings, which makes almost no sense in this area, particularly since I can’t imagine it would be too hard to run elevated guideway along the side of Interstate 280 or even on the very wide Stevens Creek Boulevard.
Sadly, aside from Apple, there’s not a lot for the system to visit if it’s to be more than intra-company transport. The Valco mall and the Cupertino Village are popular but Cupertino doesn’t really have a walkable downtown to speak of.
Of course if Apple wants to tear down all the HP buildings and put up a new massive complex, it will be hard to call that a green move. The energy and greenhouse gases involved in replacing buildings are huge. For transportation, robocars could just make use of the existing highway between the two campuses. It’s not even impossible to imagine Apple building its own exits and bridges on the interstate — much cheaper than an underground tunnel.
Submitted by brad on Thu, 2010-12-09 00:01.
There are many fields that people expect robotics to change in the consumer space. I write regularly about transportation, and many feel that robots to assist the elderly will be the other big field. The first successful consumer robot (outside of entertainment) was the Roomba, a house cleaning robot. So I’ve often wondered about how far we are from a robot that can tidy up the house. People got excited when a PR2 robot was programmed to fold towels.
This is a hard problem because it seems such a robot needs to do general object recognition and manipulation, something we’re pretty far from doing. Special purpose household chore robots, like the Roomba, might appear first. (A gutter cleaner is already on the market.)
Recently I was pondering what we might do with a robot that is able to pick up objects gently, but isn’t that good at recognizing them. Such a robot might not identify the objects, but it could photograph them, and put them in bins. The members of the household could then go to their computers and see a visual catalog of all the things that have been put away, and an indicator of where it was put. This would make it easy to find objects.
The catalog could trivially be sorted by when the items were put away, which might well make it easy to browse for something put away recently. But the fact that we can’t do general object recognition does not mean we can’t do a lot of useful things with photographs and sensor readings (including precise weight and other factors) beyond that. One could certainly search by colour, by general size and shape, and by weight and other characteristics like rigidity. The item could be photographed in a 360 view by being spun on a table or in the grasping arm, or with a rotating camera. It could also be laser-scanned or 3D photographed with new cheap 3D camera techniques.
When looking for a specific object, one could find it by drawing a sketch of the object — software is already able to find photos that are similar to a sketch. But more is possible. Typing in the name of what you’re looking for could bring up the results of a web image search on that string, and you could find a photo of a similar object, and then ask the object search engine to find photos of objects that are similar. While ideally the object was photographed from all angles, there are already many comparison algorithms that survive scaling and rotation to match up objects.
The result would be a fairly workable search engine for the objects of your life that were picked up by the robot. I suspect that you could quickly find your item and learn just exactly where it was.
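A minimal sketch of that attribute search: each stowed object gets a small feature record, and a query ranks the catalog by weighted distance. The feature names, weights and catalog entries are all illustrative assumptions; a real system would add image similarity on top.

```python
# Sketch of searching the robot's object catalog by measured attributes.
# Features, weights and the sample catalog are invented for illustration.

def _distance(query, item, weights):
    """Weighted L1 distance over whichever features the query specifies."""
    return sum(w * abs(query[k] - item[k]) for k, w in weights.items())

def search_catalog(catalog, query, weights, top_n=3):
    """Return the bin labels of the stowed objects most like the query."""
    ranked = sorted(catalog, key=lambda item: _distance(query, item, weights))
    return [item["bin"] for item in ranked[:top_n]]

catalog = [
    {"bin": "A1", "weight_g": 150, "hue": 0.95, "length_cm": 10},
    {"bin": "B2", "weight_g": 420, "hue": 0.60, "length_cm": 25},
    {"bin": "C3", "weight_g": 30,  "hue": 0.10, "length_cm": 14},
]
weights = {"weight_g": 0.01, "hue": 5.0, "length_cm": 0.1}
```

A query of "smallish, reddish, about 150 grams" lands on bin A1 without the robot ever knowing what the object is, which is the whole trick.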
Certain types of objects could be recognized by the robot, such as books, papers and magazines. For those, bar-codes could be read, or printing could be scanned with OCR. Books might be shelved at random in the library but be easily found. Papers might be hard to manipulate but could at least be stacked, possibly with small divider sheets inserted between them with numbers on them, so that you could look for the top page of any collected group of papers and be told, “it’s under divider 20 in the stack of papers.”
Submitted by brad on Mon, 2010-12-06 09:41.
The folks at the SARTRE road train project have issued an update one year into their 3-year project. This is an EU-initiated project to build convoy technology, where a professional lead driver in a truck or bus is followed by a convoy of closely packed cars which automatically follow based on radio communications (and other signals) with the lead. Volvo has released a new video on their progress.
I have written before about the issues involved in this project and many of them remain. It’s the easiest way to get a robocar on the highway, but comes with a particularly high risk if it fails — and failure in the earliest stages of robocar projects is very likely.
In the video, some interesting elements include:
- The building of a simulator to test driver attitudes and reactions. Reactions were generally quite positive, in that people are happy to trust the driving to the system and the lead driver. This will change a bit in a real car, since a simulator can only do so much.
- They imagine people eating, drinking, listening to music and reading while in the convoys, but they don’t talk about the elephant in the car: sleeping. People doing anything else can quickly take the controls in a problem, but sleepers may not. And there’s also the act that we metaphorically call “sleeping together.”
- Their simulations depict cars leaving the convoy from the middle. However, in this situation it seems you can’t give the driver much brake and accelerator control for the difficult task of changing lanes when you are just a few feet from the cars in front of and behind you. You must maintain the speed of the train until you have fully left its lane, but that means you can’t do the usual task of changing speed as you enter your new lane. Exit from the trains will need some work. (There are suggestions in the comments that make sense.)
- They expect to have to make legal changes to allow this. However, since it’s an EU initiated project, they have a leg-up on that. This might pave the way for more robocar-friendly laws in Europe.
- While they plan to do a live test by 2012, they are much more cautious on predicting when the trains might be common on the roads.
- They do speculate whether a simple robocar function for “stop and go” traffic, which is able to follow the car in front of you at lower speeds, might come first. Indeed, this is pretty easy, and not much more than a smarter version of existing auto-follow cruise control with steering and lane-following added.
- Their main pitch is environmental, as drafting should produce decent fuel savings. However, I think most people will be interested in the time savings, and I’ll be interested in how the public accepts it.
Submitted by brad on Mon, 2010-11-29 15:28.
Two bits of robocar news from last week. I had been following the progress of the Stanford/VW team that was building a robotic Audi TT to race to the top of Pikes Peak. They accomplished their run in September, but only now made the public announcement of it. You can find photos and videos with the press release or watch a video on youtube.
This project began with the team teaching the vehicle to “drift” — make controlled turns while wheels are skidding, something needed on the windy curves and dirt/gravel/pavement mix on the way up to Pikes Peak. Initial impressions were that they had the goal of being a competitor in the famous Pikes Peak Hill Climb — a time trial race to the top by human drivers, the fastest of whom have climbed it in 10 minutes, 3 seconds in major muscle cars. The best standard cars have done it in about 11.5 minutes, and Audi says a stock TT would take a bit under 17 minutes.
The autonomous Audi’s time of 27 minutes, with a top speed of 45mph, is thus a bit disappointing for those who were hoping for some real man vs. machine competition. The team leader, Burkhard Huhnke, downplayed this, saying that the goal was to come to a better understanding of computer controlled cornering and skidding, in order to make better driver assist systems for production vehicles. Indeed, that is a good goal and it is expected that robocar technologies will first appear as driver assist and safety features in production cars.
The actual run was also marred by tragedy when the helicopter filming it crashed.
Earlier, I spoke with James Gosling — more famous as the creator of the Java language — about his role in the project. Gosling knows languages and compilers very well, and he helped the team develop a compiler for the interpreted scripts they were writing in languages like Matlab. Gosling’s compiler was able to run the resulting code around 100x faster than the interpreter, allowing them to do a lot more with less hardware.
There is strong interest in man vs. machine robocar contests. Such contests, aside from setting a great bar for the robots, will demonstrate their abilities to the public and generate strong public interest. This turned out not to be such a contest, but someday a robot will race to the top of Pikes Peak in better than 10 minutes. It will have a bigger engine, and many more sensors than the Audi in this run, which mostly relied on augmented GPS (extra transmitters were put by the roadside for full accuracy.)
A future car will have a complete map in its head of where all road surfaces are, and their characteristics. It will know the physics of the car and the road better than any human driver. The main thing humans will be able to do is use their eyes to judge changing road conditions, but they don’t change very much, and computer vision or sensor systems to make such judgments don’t seem like an impossible project.
Masdar PRT in operation
In other news, the greatly-shrunk Masdar PRT system, built by 2getthere Inc. of the Netherlands, has entered production operation in Masdar, an experimental city project just outside Abu Dhabi. The project only has 2 stops for passengers (and 3 more for cargo) at this point. It runs at ground level, and pedestrians use an artificial level one floor up.
These pods have many robocar features. They use rubber tires and run on open, unmarked pavement, guiding themselves via odometry and sensing magnets embedded every 5 feet or so in the pavement. They also have laser sensors which see obstructions on the roadway and any pedestrians. They will stop for pedestrians, and even follow you if you walk ahead, maintaining a fixed distance. The system is not designed to mix with pedestrians, however, and the control software shuts down the relevant section of the track if passengers exit their vehicle outside a station.
The tracking is accurate enough that, as you can see, the tires have left black trails on the pavement by constantly running in the same place.
Photos and video can currently be found at the PRT Consulting site and this video shows it pulling out of a station. There is only one other video — I hope more will arrive soon.
The economy has scaled Masdar’s plans back greatly. The original plan called for a whole city done one floor up with a network of these proto-robocar PRT pods running underneath, and no traditional cars in the whole city.
Submitted by brad on Tue, 2010-11-23 18:29.
Nissan is touting that the EPA gave the new Leaf a mileage rating of 99mpg “gasoline equivalent”. What is not said in some stories (though Nissan admits it in the press release) is that this is based on the EPA rating a gallon of gasoline as equivalent to 33.7 kwh, and the EPA judging that the car only goes 73 miles on its 24kwh battery.
There is a huge problem with these numbers. If it were possible to convert perfectly, a gallon of gasoline actually has about 36kwh, so possibly the EPA is factoring in the 7% loss of electrical distribution. But in reality it isn’t even remotely possible to convert fuel to electricity perfectly.
I have written an update on comparing gasoline and electricity with more details.
The Department of Energy, for example, offers a figure of just under 13kwh as the energy equivalent of a gallon of gasoline. That’s how many kwh you get out of the plug if you burn coal, gas or oil with roughly the same energy as that gallon of gas. With the DoE’s number, the Leaf is getting a combined mileage of around 36 mpg-equivalent. That’s not a bad number, but there are many gasoline cars that do better than that. Even a Lexus hybrid does about the same. This is no minor error, it’s a massive one, and it’s highly unlikely that Nissan or the EPA are unaware of it. It gives the impression of an attempt to make the Leaf seem way, way better than it is in order to promote electric cars. The problem is that when people learn the truth, they are going to be unhappy, and will be soured on electric cars, Nissan and the EPA.
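The arithmetic behind these competing figures is simple to check. A sketch using the numbers quoted above (73 miles on a 24 kwh battery); note the raw calculation with the EPA’s 33.7 kwh figure comes out a little above the published 99 mpg, because the EPA also measures energy at the wall plug and weights city and highway cycles:

```python
# mpg-equivalent = miles driven per gallon's-worth of kwh consumed.
# The result depends entirely on what you accept as "a gallon's worth".

def mpg_equivalent(miles, kwh_consumed, kwh_per_gallon):
    gallons_equivalent = kwh_consumed / kwh_per_gallon
    return miles / gallons_equivalent

# Leaf numbers from the post: 73 miles on a 24 kwh battery.
epa = mpg_equivalent(73, 24, 33.7)   # EPA equivalence: about 102.5
doe = mpg_equivalent(73, 24, 13.0)   # DoE plug-energy equivalence: about 39.5
```

The same formula with the same car thus yields anywhere from the high 30s to over 100 mpg-equivalent, which is exactly why the choice of kwh-per-gallon factor matters so much.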
Now I will agree that there is justifiable debate over the right way to do this calculation. The DoE works from its calculation of the average efficiency of power plants in the USA. People in areas with more efficient power will do better using electricity than those close to old coal plants (which are the big drag-down here.) The DoE also counts BTUs in nuclear plants (which provide about 20% of U.S. electricity) as BTUs even though no fossil fuel is burned and no greenhouse gas is emitted. People must judge for themselves how “dirty” they think nuclear BTUs are, and how to value an electric car in areas where most of the electricity is nuclear. Even harder to judge are the 10% of US kwh that come from hydro. Hydro doesn’t even have BTUs or pollution, though it does come with environmental destruction. If you live in the Pacific Northwest or parts of Canada where most of the power is from hydro, you may judge the 99mpg number as more realistic, though in this case the concept of a gasoline equivalent is stretched pretty thin.
If you live in California, which burns almost no coal and gets most of its power from natural gas, and then nuclear, the real number isn’t as bad as the national average, but it’s still nowhere close to 99mpg. If you live in a place that is almost all-coal, like Utah or New Mexico, electric cars are not so great an idea — their only environmental advantage is that the fuel source is domestic rather than imported, and the coal is burned elsewhere, not right next to you.
There are other electric cars that are more efficient than the Leaf, but the big reality is that to really beat out the 50mpg gasoline hybrids you need to make your car lighter.
“But wait,” some people say. We can run our electric car on solar or renewables and all is wonderful! Don’t get me started on this. There are no solar electrons. Installing renewable generation can be a good idea, but you must tie it to the grid for it to work. Not tying solar or wind or other sources to the grid is highly wasteful, because the power is discarded any time the battery is not empty (or worse, not connected.) Grid tie makes the grid greener, and people who do that can feel good about it if they do it well, but it doesn’t make driving more than a tiny smidgen of a percent greener than it was.
Shame on Nissan and the EPA. I hope that at least, Nissan will only sell the car in places with electricity that is well above average in quality, and refuse to sell it in places where the power is mostly from coal.
Not that I don’t understand the motivation. Had the EPA rated the car with the DoE methodology number of 36mpg, it might well have killed the car at the starting gate. It’s an interesting moral question whether it’s right to lie to kickstart a technology which will become better with time. They could also have lobbied for a more reasonable but generous mpg, perhaps derived from the best natural gas plants, which would have offered a number in the 50s. Not nearly as exciting but not a car-killer, though the comparison to the Prius or Insight would not look so good.
It would have been best if they had just developed a new standard, like watt-hours/mile or miles/kwh, and left it to the press and local power utilities to publish local conversions between “kwh” and gallons. (Not the dealers, they can’t be trusted of course.) It actually would be quite handy if every power utility were to publish, for each zone, the local efficiency of the power grid in terms of BTU/kwh or greenhouse effect/kwh.
Update on Chevy Volt: The numbers for the Volt were released. As a plug-in Hybrid that can go 35 miles on its batteries and then has a gasoline engine, they rated it as 97mpg while on the battery (similar false number to the Leaf) and 37mpg while on gasoline. These numbers are actually roughly the same when using electricity at the grid national average.
Sad to say, but if you live in a place where the power comes from coal, the math seems to say you should remove most of the batteries and save the weight.
Submitted by brad on Sat, 2010-11-20 13:58.
You’re driving down the road. You see another car on the road with you that has a problem. The lights are off and it’s dusk. There is something loose that may break off. There’s something left on the roof or the trunk is not closed — any number of things. How do you tell the driver that they need to stop and check? I’ve tried sometimes and they mostly think you are some sort of crazy person, driving too close to them, waving at them, honking or shouting. Perhaps after a few people do it they figure it out.
We have a few signals. Oncoming cars flash lights on and off to warn you your lights are off. (Sometimes they are also warning of a speed trap.) High beams mean, “I want to pass and you’re impeding the lane,” and while many think that’s rude it’s better than tailgating.
We need a signal for “There is a problem with your car, you should check it out.” This signal should be taught in driving schools, and even be on the driving test. A publicity campaign should educate existing drivers.
One proposal that might make sense is the SCUBA signal for “I have a problem.” This is holding your hand flat, palm down, and wiggling it side to side (ie. rotating your wrist.) Then you point to the source of the problem, like your regulator or whatever. (There are specific SCUBA signals for well known problems, like being low on air, nitrogen narcosis etc.)
For this signal you would waggle the hand and then point at the place on the other person’s car. To those untrained, the signal often means “dicey” or uncertain. Shaking of the head could also strengthen the signal.
Anybody have a better signal to propose?
Submitted by brad on Fri, 2010-11-19 01:32.
Today, I was challenged with the question of how well robocars would deal with deer crossing the road. There are 1.5 million collisions with deer in the USA every year, resulting in 200 deaths of people and of course many more deer. Many of the human injuries and crashes have come from trying to swerve to avoid the deer, and skidding instead during the panic.
At present there is no general purpose computer vision system that can just arbitrarily identify things — which is to say you can’t show it a camera view of anything and ask, “what is that?” CV is much better at looking for specific things, and a CV system that can determine if something is a deer is probably something we’re close to being able to make. However, I made a list of a number of techniques that robots might use to do a better job of avoiding collisions with animals, and started investigating one more, the “flying bumper,” which I will detail below.
Spotting and avoiding the deer
- There are great techniques for spotting animal eyes using infrared light bouncing off the retinas. If you’ve seen a cheap flash photo with the “red eye” effect you know about this. An IR camera with a flash of IR light turns out to be great at spotting eyes and figuring out if they are looking at you, especially in darkness.
- A large number of deer collisions do take place at dusk or at night, both because deer move at these times and humans see badly during them. LIDAR works superbly in darkness, and can see 100m or more. On dry pavement, a car can come to a full stop from 80mph in 100m, if it reacts instantly. The robocar won’t identify a deer on the road instantly but it will do so quickly, and can thus brake to be quite slow by the time it travels 100m.
- Google’s full-map technique means the robocar will already have a complete LIDAR map of the road and terrain — every fencepost, every bush, every tree — and of course, the road. If there’s something big in the LIDAR scan at the side of the road that was not there before, the robocar will know it. If it’s moving and more detailed analysis with a zoom camera is done, the mystery object at the side of the road can be identified quickly. (Radar will also be able to tell if it’s a parked or disabled vehicle.)
- They are expensive today, but in time deep infrared cameras which show temperature will become cheap and appear in robocars. Useful for spotting pedestrians and tailpipes, they will also do a superb job on animals, even animals hiding behind bushes, particularly in the dark and cool times of deer mating season.
- Having spotted the deer, the robocar will never panic, the way humans often do.
- The robocar will know its physics well, and unlike the human, can probably plot a safe course around the deer that has no risk of skidding. If the ground is slick with leaves or rain, it will already have been going more slowly. The robocar can have a perfect understanding of the timings involved with swerving into the oncoming traffic lane if it is clear. The car can calculate the right speed (possibly even speeding up) where there will be room to safely swerve.
- If the oncoming traffic lane is not clear, but the oncoming car is also a robocar, it might some day in the far future talk to that car both to warn it and to make sure both cars have safe room to swerve into the oncoming lane.
- Areas with major deer problems put up laser sensors along the sides of the road, which detect if an animal crosses the beam and flash warning lights. A robocar could get data from such sensors for more advance warning of animal risk areas.
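The braking claim in the list above follows from simple friction physics. A sketch, assuming a dry-pavement friction coefficient of about 0.7 (an illustrative value) and the standard stopping-distance formula:

```python
# Stopping distance = reaction-time travel + v^2 / (2 * mu * g).
# With instant reaction, 80 mph stops in a bit under 100 m on dry
# pavement; a human's ~1.5 s reaction delay adds over 50 m.

G = 9.81            # gravitational acceleration, m/s^2
MPH_TO_MS = 0.44704 # miles per hour -> metres per second

def stopping_distance_m(speed_mph, mu=0.7, reaction_s=0.0):
    """Total distance to stop, including any reaction delay."""
    v = speed_mph * MPH_TO_MS
    return v * reaction_s + v * v / (2 * mu * G)

robot = stopping_distance_m(80)                  # ~93 m, instant reaction
human = stopping_distance_m(80, reaction_s=1.5)  # ~147 m with human delay
```

This is why 100 m of LIDAR range is enough: even when the car cannot identify the deer instantly, every fraction of a second saved over a human’s reaction time converts directly into tens of metres of margin.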
Getting the deer to move
There might be some options to get the deer to get out of the way. Deer sometimes freeze; a “deer in the headlights.” A robocar, however, does not need to have visible headlights! It may have them on for the comfort of the passengers who want to see where they are going and would find it spooky driving in the dark guided by invisible laser light, but those comfort lights can be turned off or dimmed during the deer encounter, something a human driver can’t do. This might help the deer to move.
Submitted by brad on Mon, 2010-11-15 15:20.
Many people wonder whether robocars will just suffer the curse of regular cars, namely traffic congestion. They are concerned that while robocars might solve many problems of the automobile, in many cities there just isn’t room for more roads. Can robocars address the problems of congestion and capacity? What about combined with ITS (Intelligent Transportation Systems) efforts to make roads smarter for human driven cars?
I think the answer is quite positive, for a number of different reasons. I have added a new Robocar essay:
Traffic Congestion and Capacity with Robocars
In short, a wide variety of factors (promotion of small, single passenger cars, ability to reverse streets during rush-hour, elimination of accidents and irrational congestion-fostering behaviour, shorter headways, metering of road usage and load balancing of roads and several others) could amount to a severalfold increase in the capacity of our roads, with minimal congestion. If you add the ability to do convoys, the increase can be 5 to 10 fold. (About 20-fold in theory.) The use of on-demand pooling into buses over congested sections allows a theoretical (though unlikely) 100-fold increase in highway capacity.
While these theoretical limits are unlikely, the important lesson is that once most of the cars on the roads are robotic, we have more than enough road capacity to handle our current needs and needs well into the future. In general, overcapacity causes building, so in time we’ll start to use it up — and have much larger cities, if we wish them — but unlike today’s roads which add capacity until they collapse from congestion, advanced metering can assure that no road accepts more vehicles than it can handle without major risk of congestion collapse.
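Since the factors above are largely independent, their gains compound multiplicatively. The multipliers below are invented stand-ins, purely to show how a handful of modest factors reach severalfold, and with convoys roughly tenfold:

```python
# Hypothetical per-factor capacity multipliers (not measured values),
# compounded to show how modest independent gains stack up.
factors = {
    "narrow single-passenger cars, two per lane": 2.0,
    "shorter headways from faster reactions": 1.5,
    "reversible lanes at rush hour": 1.3,
    "metering and load balancing of roads": 1.2,
}

combined = 1.0
for name, multiplier in factors.items():
    combined *= multiplier      # independent gains multiply

with_convoys = combined * 2.0   # convoys could roughly double again
```

With these illustrative values the base factors compound to a bit under 5x, and adding convoys lands in the 5-to-10-fold range quoted above; the point is the multiplication, not the particular numbers.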
Even before most cars are robotic, various smart-road efforts will work to improve capacity and traffic flow. The appearance of robotic safety systems in human driven cars will also reduce accidents and congestion along the way. Free market economist Robin Hanson believes the ability of cities to grow much larger will be one of the biggest consequences of robocar capacity improvements.
Submitted by brad on Sun, 2010-11-14 16:47.
For many years I have had a popular article on what lenses to buy for a Canon DSLR. I shoot with Canon, but much of the advice is universal, so I am translating the article into Nikon.
If you shoot Nikon and are familiar with a variety of lenses for them, I would appreciate your comments. At the start of the article I indicate the main questions I would like people’s opinions on, such as moderately priced wide angle lenses, as well as regular zooms.
If you “got a Nikon camera and love to take photographs” please read the article on what lens to buy for your Nikon DSLR and leave comments here or send them by email to firstname.lastname@example.org. I’m also interested in lists of “what’s in your kit” today.
Submitted by brad on Mon, 2010-11-08 16:43.
I’ve written before about solutions to “range anxiety” — the barrier to adoption of electric cars which derives from fear that the car will not have enough range and, once out of power, might take a very long time to recharge. It’s hard to compete with gasoline’s 3 minute fill-up and 300 mile ranges. Earlier I proposed an ability to quickly switch to a rental gasoline car if running out of range.
A company called EMAV has proposed a self-propelled battery trailer to solve this problem. While I am not sure how real the company is, the idea has value, particularly when it comes to robotics. As I have written, robocars can solve the “range anxiety” problem in several ways; mainly that robots don’t care about how convenient charging is, and people don’t worry about the range of a taxi beyond the current trip. But batteries are still an issue, even there.
The trailer proposal has the car hitch on the small trailer (which has room for cargo as well) and it provides the extra batteries you need when doing a long trip. The trailer is also motorized so it puts no load on the possibly small car that is “towing” it. EMAV imagines you might buy this, keep it charged, and only put it on when you need to do a long trip.
That could work, but presents a few problems. First of all, cars are much less nimble when they have a trailer on them. Backing up is much harder, and in fact novices will get completely stymied by it. You take an extra-long parking space if you can fit at all. There’s also extra drag.
We might solve the maneuvering problem a bit with a mildly robotic trailer that has a link to the car controls, making backups and turns more natural. This can be done either with steerable wheels on the trailer or just independent motor wheels which can be turned at different speeds. Such a trailer might be able to couple much more closely with the car, possibly going right on the tail so that it acts like an extension of the vehicle. This might also solve the parking problem.
Things could also be aided by making the couple and decouple very simple and easy. That’s a tall order because of safety issues, and the need for a high-current wire. The ideal would be an automatic decouple, so you could temporarily drop the trailer off somewhere if you needed to handle roads and parking where a trailer isn’t workable. Even better but harder would be an automatic recouple, obviously requiring some more sophisticated robotics in the trailer, and a fully safe coupling system.
With standardization, trailers like this could be left on lots all over a city. Anybody with a compatible electric car could, if they needed it, stop off at a convenient lot to grab a trailer. (The trailer would also be in a charging station, making automatic coupling even harder.) With the trailer grabbed there would be no range anxiety. The trailer could simply provide power, or it could go further and charge the car at high speed, allowing the trailer to be dropped off at another charging station an hour or so later. (While this sounds nice, battery chemistries may doom this plan, since you now are putting two batteries through heavy use cycles to get one unit of charge into the car, doubling the battery lifetime cost of the energy.)
While eventually trailers would need to get back to their base after one-way trips, there are lots of ways to encourage various drivers to do that. As long as the dropped trailer is not entirely empty, you can offer drivers who take it back a ride without using their own battery, for example.
This approach might be better than the battery-swap stations planned by Better Place. The Better Place battery swap is cool, but requires that all cars that use it be designed around its one particular battery configuration, and that people not own their own batteries. The swap stations are expensive and land intensive, while trailer depots would require nothing but a little land and a charging station for the trailer. A special trailer hitch is a much smaller modification of a car, too.
(One variation of the “PRU” trailer has the trailer contain a diesel generator rather than a battery pack. This of course has the range of liquid fuel, and doesn’t even need a charging station where you drop it off. It’s not being particularly green when used in this fashion of course, a bit worse than a serial hybrid car. If the trailer is heavy enough it could physically push the car and not need an electrical connection to it, though people might get highly confused by steering in such situations.)
As a cheaper and more flexible version of battery swap, this approach could be good for robocars too. Robots, unlike people, will not feel too burdened by the issues of driving a vehicle with a trailer, especially if they can control the trailer’s motors or steering. Parking’s easier too, especially if they can do robotic docking and undocking. While I have written how important it is that people don’t care about the range of a taxi, the owner of a taxi cares about the duty cycle. If the robotic taxi has to spend too much of its time recharging, the return on investment is not nearly as quick. The trailer approach, like the battery swap approach, means downtime only for the batteries, not the vehicle. If the trailers are themselves simple robocars, they can move at low and safe speeds to come meet robocars that need them for a range boost. Even if not, they need not take up much space and they’re easy to scatter everywhere for quick access. Indeed, the car itself might always use a trailer and thus have only enough battery power within it to get from one trailer to the next.
Submitted by brad on Tue, 2010-11-02 12:23.
There’s a problem I have seen at a number of free events, particularly “unconference” events which have a limited capacity. There will be a sign-up list, and once it fills up, people are turned away or get on a waiting list. (Some online ticket services now support the idea of free tickets for this purpose.)
Then you get to the event and 1/3 of the seats are empty. Because it did not cost anything to sign up, people were quite willing to no-show, and many other people signed up “just in case.” Unfortunately many who would have come decided not to go because the event was full.
To counter this, many events have started putting on a small charge “just for the sake of having a charge.” This charge is in the range of $10 to $30. It discourages signing up just in case, and makes people feel a little more strongly that they should come, but it’s not a burden for most people and raises a small amount of money for the event. (Usually such events are really paid for by sponsors or donors.)
Here’s another idea: Set a price for the event and take and authorize a credit card, but only charge the credit cards of the no-shows. This requires some sort of on-site desk where people can register to not get charged (or get a refund if they used another mechanism like paypal, cheque or cash.)
The big question is, what should the price be? Many factors change as you change the price:
- If the price is very high, you start scaring people away from registering, but you will get very few no-shows.
- If the price is very low, you may still get plenty of no-shows, but now there is at least revenue for it … and empty seats.
- For some price ranges, a large fraction of the crowd may elect not to refund even though they are at the event, either because it’s a hassle, or they feel like donating. They may feel cheap going to ask for their $30 back from a non-profit ad-hoc event. This can help pay for the event.
While it will vary based on the type of event and wealth of the crowd, there is probably an optimal price, which can only be found by experimentation, that both comes as close as possible to filling the room and generating the most revenue from no-shows. It is not out of the question that there could be a price which (combined with a subtle pressure on people to donate rather than refund) pays for the conference.
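The tradeoff can be made concrete with a toy model. The demand and no-show curves below are invented, and as the paragraph above says, only experimentation reveals the real ones; the sketch just shows why a revenue-maximizing price exists at all:

```python
# Toy model: registrations fall as the deposit rises, and so does the
# no-show rate; revenue comes only from charging the no-shows.
import math

CAPACITY = 100

def signups(price):
    """Hypothetical demand curve: registrations decay with price."""
    return 180 * math.exp(-price / 60)

def no_show_rate(price):
    """Hypothetical: no-shows drop from ~35% toward ~5% as price rises."""
    return 0.05 + 0.30 * math.exp(-price / 20)

def outcome(price):
    n = min(signups(price), CAPACITY)          # list is capped at capacity
    attending = n * (1 - no_show_rate(price))
    revenue = n * no_show_rate(price) * price  # only no-shows pay
    return attending, revenue

# Scan candidate deposits for the revenue-maximizing price.
best = max(range(0, 101, 5), key=lambda p: outcome(p)[1])
```

With these made-up curves the optimum lands in the $30-40 range, and a free event (price 0) both raises nothing and seats the fewest people, which matches the empty-seats problem described above.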
People who plan to no-show could cancel before the event, possibly just a day before if there is a waiting list. People on the waiting list would not have to pay, but could be told on the morning of the event if they are in. A well managed, real-time waiting list with good predictions on whether people will make it can help assure the room is full.
People who are spending other money to get the conference (ie. booking a flight or hotel) might not have to pay, as they have other penalties for not showing. It’s mostly locals who do the “just in case” sign-up.
If anybody tries this, I would be interested in getting reports about the price and how people reacted to it and how many refunded. Slightly harder is figuring out how many people are scared away by the price, even with the refund promise. Events that are free tend to be free for a reason, and this system might not meet those goals.
It would also be nice if ticket services supported this model. It makes sense, as they would get a small cut of any ticket not refunded. Refunds to paypal tend to cost you nothing, though I could see those services getting upset at merchants who refund almost all purchases, using them as a free deposit mechanism. With cheques, one can also simply not deposit the cheque and even hand it back to the attendee at the conference. But since credit cards and paypal make it so easy, it is tempting to insist on those, and just allow a small fraction of the people to plead that they have no accounts, warning them the exceptions are personally reviewed.
You want to be able to process refunds without a large cost of volunteer or staff time. Of course if there is a registration desk you know who showed up and who didn’t, but most free events don’t want to have such a desk. If everybody uses a credit card, a number of options exist for a self-service desk. For example, they could just swipe the credit card they used at a self-swipe station, as counter-intuitive as “swipe to not be charged” might be. A station which photographs a person’s card or ID could also be self-serve, but requires post-processing.
The web page could also offer a QR code to print, and that printout could be brought and scanned to assure the refund. This could be done by a volunteer’s smartphone, or a self service station with PC and webcam. They need not actually print the code, as cameras can read a QR code from the attendee’s phone screen. Printouts, though, can also include a pre-printed attendee badge, allowing the person to just cut that out and pick up a badge-holder for it.
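One refinement, sketched below with illustrative names and a made-up secret: if the token embedded in the QR code is signed with an organizer-held secret, a scanner can verify codes offline, and nobody can fabricate a valid code from a guessed attendee id (though it can’t stop someone presenting a genuinely shared printout):

```python
# HMAC-signed refund token: "attendee_id:signature". The scanner only
# needs the same secret to verify; no database lookup at scan time.
import hmac
import hashlib

SECRET = b"event-secret-rotate-per-event"  # hypothetical organizer secret

def refund_code(attendee_id):
    """Token to embed in the QR code on the attendee's printout."""
    sig = hmac.new(SECRET, attendee_id.encode(), hashlib.sha256).hexdigest()[:12]
    return attendee_id + ":" + sig

def verify(code):
    """Return the attendee id if the scanned code is genuine, else None."""
    attendee_id, _, sig = code.partition(":")
    expected = hmac.new(SECRET, attendee_id.encode(),
                        hashlib.sha256).hexdigest()[:12]
    return attendee_id if hmac.compare_digest(sig, expected) else None
```

The scanner station then just records each verified id as “showed up, do not charge,” and the post-event email pass works from that list.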
This does allow a small amount of cheating, where a no-show asks a friend to print out and show their refund page, but if the fee is low, I doubt there will be much of this. If there is already a staff desk, as most events have, placing the self-serve refund scanner there will discourage people from using it twice just to save a friend some money.
Note that having a refund desk where people have to come in person to ask for their refund will mean that more people decide to donate, so depending on the goals of the event, it may make sense to deliberately not make it trivial to get the refund. Some sponsored events may truly not wish the money, some may be secretly happy for it.
You do want accurate records, so that people don't complain after the fact that they never got a refund. Again, I think cheating will be rare here, so it may not be a big concern.
Then, at the end of the conference, send everybody an email with their refund status. This allows protests from those who thought they had refunded. If the scanner is online, it can email a confirmation at the moment of the scan, which many attendees will see right away. For a small amount of money you can also send a text-message confirmation; just about anybody can receive that.
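The end-of-event reconciliation described above is just a set difference between everyone who paid the deposit and everyone whose code was scanned. A sketch, with hypothetical status wording:

```python
def refund_statuses(registered: set[str], scanned: set[str]) -> dict[str, str]:
    """Map each attendee ID to the body line for their status email.

    registered: all attendee IDs who paid the deposit
    scanned: IDs whose refund code was scanned at the event
    """
    statuses = {}
    for attendee in registered:
        if attendee in scanned:
            statuses[attendee] = "You checked in; your deposit is being refunded."
        else:
            statuses[attendee] = "You did not check in; your deposit becomes a donation."
    return statuses
```

Emailing every registrant, not just no-shows, is what gives people who believe they were scanned a chance to protest before the books are closed.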
Submitted by brad on Thu, 2010-10-28 18:52.
I’m at the Pod Car City conference, taking place today and tomorrow in San Jose for PRT developers and customers. Some news tidbits from the conference:
- There were interesting presentations with videos from the three main vendors of working systems: ULTra (Heathrow), 2GetThere (Masdar) and Vectus (Swedish test track and more). Some were new videos showing the systems in action at a level not seen before.
- Sebastian Thrun of the Google robocar team gave his first outside talk on that project, with some great videos (not released to the public, unfortunately). Quite impressive to see the vehicle handling all sorts of traffic, even deciding, when the way is clear, to cross over the solid centre line to avoid getting too close to parked cars, just as human drivers do.
- Sadly, during the public session (before Thrun’s talk) when several audience members sent in questions about the Google cars, both the host from San Jose and the leaders of the 3 PRT companies all punted saying they knew little about them.
- In spite of that there was intense interest in Thrun’s talk, with lots of questions and not nearly as much negativity as is sometimes directed at robocars from the PRT community.
- All vendors punted on my question about the current cost of pods (which external estimates suggest is around $100K since they are made in small quantities.)
- Lots of action in Sweden. Soon, a city will be chosen for a trial PRT system, probably either Stockholm or Uppsala. Then, a company will be picked — most people think it will be Vectus.
- Vectus, which makes a rail-based PRT, will be installing a system in Suncheon City, South Korea, which will be a people mover into the wetlands park there. Vectus showed many films of how well their system handles bad weather, though they are the only ones to use rails.
- In Masdar, one of the biggest challenges has been the oppressive heat, and the power needed for air conditioning. To make the PRT work, stations must be close together, as people simply will not walk long distances outside when it's 40 degrees and humid.
- Interesting note about the rationale that helped sell ULTra at Heathrow: The big advantage is the predictable time of a PRT trip, which normally involves a pod already waiting and a direct trip. Even if that trip is no faster than a parking shuttle, not knowing when the parking shuttle bus will come is a major negative for those going to flights.
- Rod Diridon of the California High Speed Rail board declares that HSR will be a complete failure if there isn't something like PRT around the HSR stations to disperse people into the towns. He's half right: HSR is likely to be a big failure, PRT or not, though the PRT would help.
- San Jose is doing intensive study of a PRT to serve the airport, the nearby Caltrain and Light Rail stations, along with parking lots, rental cars and a couple hotels. This might well be useful but still is just a parking shuttle mostly. Few people take Caltrain or light rail to the airport (in spite of the existing free bus) and I doubt a lot more will.
- At the same time, thanks to ULTra, San Jose and other towns are starting to accept PRT as something costing $10-15 million per mile. That's a lot cheaper than light rail, and in the bay area, hugely cheaper than the 50-year-old BART system, which people think of as modern.
- Attended a session on lessons from air traffic management for pod management. Interesting stuff but I don’t think that useful for the problem. Planes get spaced by 5 miles and 2,000 feet. Cars and pods will be spaced by tens of feet.
- Attended another session on modelling passenger loads. It was much concerned with surge loads in many markets: a class lets out and suddenly 100 people are at the PRT station trying to use it, removing the no-wait benefit (and the associated high-predictability benefit). Robocars will probably handle this better, since they have no concept of stations; you can bring as many cars into an area as the roads in and out will hold. Planners predict that if PRT waits get long in a campus setting, people will walk instead, but with a robocar you would never have to walk the whole way; you could just walk away from the crowd to get one.
- Still too much “transit” oriented thinking in the PRT crowd, I think. In fact, many are hoping to pitch PRT as a feeder which will increase usage of other transit lines. I think transit will fade away in about 25 years.
Vislab completes their autonomous drive to Shanghai
The team from the Vislab autonomous challenge made it to Shanghai, and their cars are now in the Italian pavilion at the World’s Fair. Congratulations to them. They sent me a nice PDF press release. It details elements from their blog about how they almost got a ticket, gave up driving at night, blew through toll booths, picked up hitchhikers, and could not handle crazy drivers.