Submitted by brad on Tue, 2013-09-03 20:18.
I’m back from Burning Man, and this year, for the first time in a while, we didn’t get internet up in our camp, so I only did occasional email checks while wandering other places. And thus, of course, there are many hundreds of messages backed up in my inbox. I will look at the most important, but some will just be ignored or discarded.
We all know it’s getting harder and harder to deal with email backlog after travel, even connected travel. If you don’t check in it gets even worse. Vacation autoreplies can help a little, but I think they are no longer enough.
Some years ago a friend tried something radical. He had his autoreply say that he was away for 2 months and could not possibly handle the email upon his return. It said that the email you had sent had therefore been discarded, and that if it was still important when he returned, you should send it again then. His correspondents were completely furious at the temerity of this, though it has a lot of attractions. They had taken the time to write an email, and to have it discarded and left in their hands to resend seemed rude. (I believe the reply included a copy of the email, at least.)
Worse, because we are always connected, vacation replies sometimes lie. People scan their email anyway, responding to the most important messages if they can, even though a vacation autoreply was sent. And so senders always hope their message will be one of the ones that gets through.
I think the time has come for an extra internet protocol as a companion to mail. When you type an E-mail address into your mail client, it should be able to query a server that handles information for that domain — something like an MX record — and query it about the email that is about to be written, including the sender address and recipient address, and possibly a priority. If the recipient is in a vacation mode or other do not disturb mode, the sender would be told immediately, before writing the e-mail. They would have the option of not writing it, writing it for delivery at the designated date in the future, or writing it with various tags about its urgency in case the recipient is doing some checking of mail.
This could be an LDAP derived protocol or something else. Indeed it could be combined, when trusted, with directory lookup and autocomplete directory services. It’s not easy because often (with things like MX) the server that handles mail for a user may not have a strong link to the user in order to serve this data. In the old e-mail regime of store and forward, live connections were not expected. Still, I think it can be done, and it would not be a mandatory thing.
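To make this concrete, here is a minimal sketch (in Python) of what the query side might look like. The record format, field names and addresses are all invented for illustration; no such standard exists yet.

```python
from datetime import date

# Hypothetical availability records a domain's directory server might
# publish for its users. The schema here is made up for illustration.
AVAILABILITY = {
    "alice@example.com": {"status": "vacation", "until": date(2013, 9, 15)},
    "bob@example.com": {"status": "available"},
}

def pre_send_query(recipient, today):
    """Answer the question a mail client would ask before composing:
    is the recipient in vacation or do-not-disturb mode right now?"""
    record = AVAILABILITY.get(recipient)
    if record is None:
        # Unknown users get a generic answer, so the query can't be
        # used to confirm which addresses exist (the spammer problem).
        return {"status": "unknown"}
    if record["status"] == "vacation" and today <= record["until"]:
        return {"status": "vacation", "resume": record["until"]}
    return {"status": "available"}
```

The client could then offer the sender the three choices described above: don't write, schedule delivery for the resume date, or tag the mail as urgent.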
There are some security and privacy implications here that are challenging:
- Spammers will try to use this information to confirm addresses or hunt for them
- This lets the recipient know if somebody just typed in their name to send mail, and when they did so, and thus how long they took to write a mail, or if they aborted one. To avoid this, the directory servers could be trusted 3rd parties.
- This provides a reliable IP address for the sender’s client, or at least a proxy acting for the sender.
- It could be misused to build a general database of many people’s vacation status, invading their privacy, unless there are tools to prevent broad spidering of this sort.
Mail servers would remember who queried, and in fact senders might be encouraged to include a header in the email that came from the query, to officially tie the two together. This would allow clients to know who queried and who did not, giving priority to messages from people who queried and acted upon the result (for example, waiting to send) over those who just sent mail without checking. Users could get codes allowing them to declare the message higher (or lower) priority, codes that would not be available to those who just did plain SMTP.
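One way the query-to-message tie-in could work is with an HMAC token the directory server hands back at query time. The header name and token scheme here are my own invention, just a sketch of the idea:

```python
import hashlib
import hmac

def issue_query_token(server_secret, sender, recipient):
    """Issued by the recipient's directory server when the pre-send
    query arrives. The sender would place it in a header (say,
    X-Presend-Token) so the eventual mail can be tied to the query."""
    msg = f"{sender}|{recipient}".encode()
    return hmac.new(server_secret, msg, hashlib.sha256).hexdigest()

def verify_query_token(server_secret, sender, recipient, token):
    """Run by the receiving mail server on delivery. A valid token
    means this sender queried first, so the message can be given
    priority over plain SMTP mail."""
    expected = issue_query_token(server_secret, sender, recipient)
    return hmac.compare_digest(expected, token)
```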
Mailing lists might also make use of this data, and the response could tell mailing lists what the user wants to do, including temporarily unsubscribing until a given date, or asking for a digest of threads to be sent upon return, or other useful stuff. Responsible corporate bulk mailers could also accept that you don’t want customer satisfaction surveys or useful coupon offers during your vacation and just not send them. Ok, I’m dreaming on that one, perhaps.
For security, it could be that only past correspondents could do this query, or only users with some amount of authentication. Anonymous email and mail from strangers would still be possible, but not with a pre-query. The response could also be sent back via a special email that servers know to intercept, so it can’t be used to gain information that would not be gained by mailing a person today. (You could get a report of people who queried you and never mailed you when not on vacation.)
We might see some features in mailers, like a pop-up in your mailers that says, “Brad just started writing you a message” the way instant messaging programs do. I am not sure this is a good idea, but it would happen. Readers: what other consequences do you see happening?
Submitted by brad on Mon, 2013-08-19 16:59.
Probably the most expensive add-on that people get in their cars today is the stereo. Long ago, cars often came without stereos and there was a major aftermarket. The aftermarket is still here but most people elect for factory stereos which fit in seamlessly with the car and often cost a huge amount of money.
The car’s not a great place to listen to music — it’s noisy and you are distracted and you often stop and have to get out in the middle of a song. But because people find they listen to more music in their cars than at home, they often pay huge bucks for a fancy car stereo. (Not counting the people who deliberately buy a system so loud it’s meant for other people outside the car to hear.)
While you could put a nice stereo system in a robocar, and some people will, another way they can save money is they don’t need to have much audio at all, not once they can do full-auto operation. The prohibition on headphones by the driver should go away, and it could become popular to just use nice headphones — possibly noise cancelling headphones or in-ear noise-blocking phones. A better audio experience with much less noise, and a lot cheaper too. And there is the option for each person in the car to have their own headphones and tune their own audio stream.
People will like to share, so the car might contain a simple audio distribution system to feed audio streams to people who are sharing, though the source of the music should still be somebody’s phone or device, not something built into the car. In addition, there could be a system to mix in some of the in-cabin audio, so you can still hear the other people when they talk. Microphones on each person’s headphones could pick up their voices and actually provide a clearer read of their voices. Headphones with position sensors could allow simulation of stereo on the other people. Alternately a microphone array could exist around the car, particularly at each seat.
There are some downsides that may push things back toward the traditional way:
- Wearing headphones is uncomfortable on long trips
- They are a pain to remember to put on. You want to avoid cords, so they would be wireless, but then you must be sure to put them in their charging dock.
- On small aircraft, there is so much noise that everybody uses headsets, but those tend to be bulky (to block the high noise) and unpopular for that reason
So people might elect to still have decent speakers and listen to music without headphones. But there is less need to buy a really expensive sound system, since if you want the top quality you probably want to go for the headphones. This may also apply to decisions to do expensive sound elimination in the car. For some, nothing may change, but that’s OK. What’s interesting is the option to do car sound in ways never done before.
Submitted by brad on Wed, 2013-08-14 13:03.
In reporting on robocars, it is often cited that one of their key benefits will be the way they enable car sharing, greatly reducing the number of cars that need to exist to serve the population. It is sometimes predicted that we’ll need to make fewer cars, which is good for the environment.
It is indeed true — robotaxi service, with cars that deliver themselves and drop you off, does greatly enable car sharing. But from the standpoint of modern car sharing, it may enable it too well, and we may end up having to manufacture more cars, not fewer.
Today’s car sharing companies report statistics that they replace around 13 privately owned cars for every car in the carsharing fleet. Some suggest it’s even as high as 20.
This number is impossible for average drivers, however. The average car is driven 12,000 miles/year. To replace 13 average cars would require a vehicle that was actively driving, not just signed out, 11 hours/day, and each vehicle would wear out in 1-2 years.
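The arithmetic behind that claim, assuming an average speed of around 40 mph (my assumption for mixed city and highway driving):

```python
# Back-of-envelope check: what it would take for one carshare vehicle
# to replace 13 average private cars.
cars_replaced = 13
miles_per_car_per_year = 12_000
avg_speed_mph = 40        # assumed average speed, not from the post
lifetime_miles = 200_000  # typical car lifetime used later in the post

fleet_miles_per_year = cars_replaced * miles_per_car_per_year   # 156,000
hours_driving_per_day = fleet_miles_per_year / 365 / avg_speed_mph  # ~10.7
years_to_wear_out = lifetime_miles / fleet_miles_per_year           # ~1.3
```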
Three things are happening.
- Carsharing is replacing the more marginal, less used vehicles. A household replaces a 2nd or 3rd car. Carsharing is almost always used by people who do not commute by car.
- Carsharing is often considerably less convenient than a private car. It discourages driving, pushing its users into other modes of transport, or selecting for customers who can do that.
- Related to that, carsharing shows the true cost of car ownership and makes it incremental. That cost is around $20/hour, and people rethink trips when they see the full cost laid out per mile or per hour. With private cars, they ignore most of the cost and focus only on the gasoline, if that.
The “problem” with robocars is that they’re not going to be worse than having a private car. In many ways they will be better. So they will do very little of the discouragement of car use caused by present day carshare models. The “dark secret” of carsharing is that it succeeds so well at replacing cars because of its flaws, not just its virtues.
Robotic taxis can be priced incrementally, with per-mile or per-hour costs, and these costs will initially be similar to the mostly unperceived per-mile or per-hour costs of private car ownership, though they will get cheaper in the future. This revelation of the price will discourage some driving, though robotaxi companies, hoping to encourage more business, will likely create pricing models which match the way people pay for cars (such as monthly lease fees with only gasoline costs during use) to get people to use more of the product.
There is an even stronger factor when it comes to robotaxis. A hard-working robotaxi will indeed serve many people, and as such it will put on a lot of miles every year. It will thus wear out much faster, and be taken out of service within 4-5 years. This is the case with today’s human driven taxicabs, which travel about 60,000 miles/year in places like New York.
The lifetime of a robotaxi will be measured almost exclusively in miles or engine-hours, not years. The more miles people travel, the more vehicles will need to be built. It doesn’t matter how much people are sharing them.
The core formula is simple.
Cars made = Vehicle Miles Travelled (VMT) / Car lifetime in miles
The amount of sharing of vehicles is not a factor in this equation, other than when it affects VMT.
Today the average car lasts 200,000 miles in California. To be clear, if you have 8,000 customers and they will travel two billion miles in 20 years (that’s the average), then they are going to need roughly 8,000 cars over those years. It almost doesn’t matter if you serve them with their own private cars, which last all 20 years, or if you get 2,000 cars that serve 4 people each on average and wear out after 5 years.
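Here is the formula worked through in code. To make the round numbers in the example come out, I use a per-car lifetime of 240,000 miles (20 years of average driving) rather than the 200,000-mile figure:

```python
def cars_made(total_vmt, lifetime_miles):
    # Core formula: sharing only matters insofar as it changes VMT.
    return total_vmt / lifetime_miles

customers = 8_000
miles_per_year = 12_000
years = 20
vmt = customers * miles_per_year * years  # 1.92 billion miles ("two billion")

# Private ownership: each car serves one person for all 20 years.
private_cars = cars_made(vmt, miles_per_year * years)

# Robotaxis: 4 riders per car, worn out after 5 years. Each car still
# lives 4 * 12,000 * 5 = 240,000 miles, so the same number get built.
shared_cars = cars_made(vmt, 4 * miles_per_year * 5)
```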
Submitted by brad on Mon, 2013-08-12 11:20.
I’ve been a little skeptical of many augmented reality apps I’ve seen, feeling they were mostly gimmick and not actually useful.
I’m impressed by this new one from Audi where you point your phone (iPhone only, unfortunately) at a feature on your car, and you get documentation on it. An interesting answer to car user manuals that are as thick as the glove compartment and the complex UIs they describe.
Like so many apps, however, this one will suffer the general problem of the amount of time it takes to fumble for your phone, unlock it, invoke an app, and then let the app do its magic. Of course fumbling for the manual and looking up a button in the index takes time too.
I’ve advocated for a while that phones become more aware of their location, not just in the GPS sense, but in the sense of “I’m in my car” and know what apps to make very easy to access, and even streamline their use. This can include allowing these apps to be right on the lock screen — there’s no reason to need to unlock the phone to use an app like this one. In fact, all the apps you use frequently in your car that don’t reveal personal info should be on the lock screen when you get near the car, and some others just behind it. The device can know it is in the car via the bluetooth in the car. (That bluetooth can even tell you if you’re in another car of a different make, if you have a database mapping MAC addresses to car models.)
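A sketch of that MAC-to-car lookup. The address prefixes below are made up; a real database would map actual manufacturer OUI prefixes to car radios:

```python
# Hypothetical database: Bluetooth OUI prefix (the first three octets
# of a MAC address, which identify the manufacturer) -> car make.
OUI_TO_CAR = {
    "00:1A:2B": "Audi",
    "00:3C:4D": "Toyota",
}

def car_for_bluetooth_mac(mac):
    """Return the car make for a seen Bluetooth MAC address, or None
    if it is not a known car radio."""
    oui = mac.upper()[:8]  # normalize case, keep the first three octets
    return OUI_TO_CAR.get(oui)
```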
Bluetooth transmitters are so cheap and with BT Low Energy they can last a year on a watch battery, so one of the more compelling “Internet of Things” applications — that’s also often a gimmick term — is to scatter these devices around the world to give our phones this accurate sense of place.
Some of this philosophy is expressed in Google Now, a product that goes the right way on many of these issues. Indeed, the Google Now cards are one of the more useful aspects of Glass, which otherwise is inherently limited in its user interface making it harder for you to ask Glass things than it is to ask a phone or desktop.
The car app has some wrinkles of course. Since you don’t always have an iPhone (or may not have your phone even if you own an iPhone) you still need the thick manual, though perhaps it can be in the trunk. And I will wager that some situations, like odd lighting, may make it not as fast as in the video.
By and large, pointing your phone at QR codes to learn more has not caught on super well, in part again because it takes time to get most phones to the point where they are scanning the code. Gesture interfaces can help there, but you can only remember and parse a limited number of gestures, so many applications call out to be the special one. Still, there could be a special shake which means, “Look around you in all the ways you can to figure out if there is something in this location, time or camera view that I might want you to process.” Constant looking eats batteries, which is why you need such a shake.
Even though phones have slowly been losing their physical buttons, I’ve proposed putting one back: a physical button I call the “context” button. It means, “Figure out the local context, and offer me the things that might be particularly important in this context.” This would offer many things:
- Standing in front of a restaurant or shop, the reviews, web site or app of the shop
- In the car, all the things you like in the car, such as maps/nav, the manual etc.
- In front of a meeting room, the schedule for that room and ability to book it
- At a tourist attraction, info on it.
- In a hotel, either the ability to book a room, or if you have a room, hotel services
There are many contexts, but you can usually sort them so that the most local and the most rare come first. So if you are in a big place you are frequently, such as the office complex you work at, the general functions for your company would not be high on the list unless you manually bumped them.
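One way such a sort could work, sketched in code. The scoring function (place size times visit frequency) is purely my own illustration of the “most local and most rare first” rule:

```python
def rank_contexts(contexts, bumped=()):
    """Order candidate contexts for a 'context button'. Each context is
    (name, radius_metres, visits_per_month); smaller places and rarer
    visits rank first, and manually bumped names beat everything."""
    def sort_key(ctx):
        name, radius_m, visits_per_month = ctx
        # False sorts before True, so bumped names come first; then
        # score by size of place scaled by how often you are there.
        return (name not in bumped, radius_m * (1 + visits_per_month))
    return sorted(contexts, key=sort_key)

# Example: a rarely visited restaurant beats the meeting room you use
# daily, which in turn beats the whole office complex.
contexts = [
    ("office complex", 500, 22),
    ("meeting room", 5, 20),
    ("restaurant", 10, 0),
]
```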
Of course, one goal is that car UIs will become simpler and self-documenting, as cars get screens. Buttons will still do the main functions you do all the time — and which people already understand — but screens will do the more obscure things you might need to look up in the manual, and document them as they go. You should never be doing something that requires looking it up in the manual while driving, in any event.
There is probably a trend that the devices in our lives with lots of buttons and complex controls and modes, like home electronics, cars and some appliances, will move to having screens in their UIs and thus not need the augmented reality.
Submitted by brad on Mon, 2013-08-05 12:38.
Our technology is having trouble with settling on a name. That’s OK before it’s mainstream but will eventually present a problem. When people in the field are polled on what name they like, there is no clear winner. Let’s look at some of the commonly used candidates:
Driverless Car
Recently, this has become the most common term used in the press. There is a “Driverless Car Summit” and the Wikipedia page has used that name for some time.
In spite of this popularity, the term is very rarely used by people actually building the vehicles. Attendees at the “Driverless Car Summit” when polled all said they dislike it. Until recently, the most common news story about a driverless car would say, “then the driverless car rolled down the hill and careened into the other lane, hitting a tree.”
My personal view is that this term is like “horseless carriage.” Long ago the most remarkable thing about the automobile was that it had no horse. Here it’s the lack of driver (or at least lack of action by the driver.) Of course, these cars have something driving them, but it’s a computer system. While this term is most popular, I am confident it will fade away and seem quaint, like horseless carriage did.
Self-Driving Car
This term is popular among developers of the cars. Its main problem is that it’s too long to be a popular term. The acronym SDC is a reasonable one. In web hits, this is tied with Driverless Car, but falls behind that name in searches and news mentions.
Autonomous Vehicle
This term was most popular in the early years, though it is most commonly found in research environments and in the military sphere. In the military they also use “unmanned ground vehicle” — another term too unwieldy for the public — though they usually refer to remote controlled vehicles, not self-driving ones.
Annoyingly, the acronym “AV” has another popular meaning today. Most of the terms here are too long to become common use terms, and so will be turned into acronyms or shortened, but this one has an acronym problem.
Automated Road Vehicle
This term has minor traction, almost entirely due to the efforts of Steve Shladover of UC Berkeley. In his view, the word autonomous is entirely misused here and the correct term is automated. Roboticists tend to differ — they have been using “autonomous” to mean “not remote controlled” for many years. There are two meanings of autonomous in common use. One is to be independent of direct control (which these cars are) and the other one, “self-governing” is the one Steve has the issue with. As a member of the program committee for TRB’s conference on the area, he has pushed the “automated” name and given it some traction.
Unfortunately, to roboticists, “automated” is how you describe a dishwasher or a pick-and-place robot; it’s a lower level of capability. I don’t expect this terminology to gain traction among them.
Robocar
I selected this term for these pages for a variety of reasons:
- Talking to teams, I found they usually just called their vehicle “the robot” or “the car.”
- It is short, easy to say, and clear about what it means
- It is distinct and thus can easily be found in online searches
- It had some amount of existing use, notably as the title of a documentary on the Science Channel about the DARPA challenges
However, it is doing poorly in popularity and only has about 21,000 web pages using it, so I may need to switch away from it as well if a better term appears. Today it reminds people too much of robotics, and the trend is to move away from that association.
On the other hand, no other term satisfies the criteria above, which I think are very good criteria.
Submitted by brad on Fri, 2013-08-02 10:52.
I’m often asked whether robocars will keep themselves to the speed limit and refuse to go faster, unlike cruise controls, which let the driver set the automated speed. In many countries, the majority of human drivers routinely exceed the limit, which could present issues. On the other hand, vendors may fear liability over programming their cars to do this, or even over programming them to allow their human overlord to demand it.
While the right answer is a speed-limit doctrine like the French Autoroute, where the limit is 130 kph/80 mph and few disobey it, until we can come to that answer, the math suggests that travel might be overall safer if the robocars are allowed to speed in the same way humans do, at the request of humans. And indeed, that is how prototype implementations have been built.
I felt this subject (and related subjects about how cars should deal with laws that are routinely broken by human drivers) deserved a special article. Read about it at:
Robocars and the Speed Limit
Submitted by brad on Wed, 2013-07-31 12:34.
Southwest recently announced a very different approach to providing in-flight entertainment. Partnering with Dish Network, they will offer live TV and on-demand programming over the in-plane WIFI to people’s personal devices. Sadly, for now, it’s just Apple devices. I presume they will extend this to other platforms, including laptops, soon, and they should consider also allowing you to rent a tablet one-way if you don’t have your own.
Everywhere else, we see airlines putting in “fancy” and expensive in-flight entertainment systems. In coach they use small screens in the headrests, and business class and 1st class seats have fairly large displays. I’ve tried a number of these, and uniformly, in spite of all the money, they suck compared to just having pre-loaded video on your own tablet, laptop or DVD player. Even your phone with its small screen is better. Why?
- Almost every one of the systems I’ve seen has been badly written and underpowered, resulting in atrociously slow response time and poor UI
- The ones that charge you sit there all flight advertising to you if you don’t pay. Clever people can figure out how to turn off the screen, but it doesn’t matter, because most of the other screens are showing the same very distracting synchronized spam video. Worse, during boarding, they turn up the audio on these ads.
- They pause your video for every little announcement, including non-safety announcements, spam to shop duty free or join the FF club, and translations of announcements into other languages. I can almost accept doing this for safety announcements (I would rather take a safety quiz online or at my seat and be free from the routine ones) but if you start your movie before take-off (which is a nice thing to do) you will be interrupted literally dozens of times.
- The video, game and music selections are often quite lame compared to what you can get in any online store for your phone or tablet
- The live TV has advertising in it, and you can’t FF or get up for a snack like at home. Unless it’s news or sports, why watch live?
- There is often a surprisingly large box under the seat in every seat cluster for the in-flight computer. That takes away foot room and storage space and adds weight to the plane. Plus, why are the boxes so large, considering these devices seem to have far less power than a typical tablet?
- If they have a touchscreen, the guy behind you is always pushing on the back of your seat. Otherwise they have a fairly hard-to-use hand remote (and, for unknown reasons, long latency on button pushes.)
- Disturbingly, movies are often played in the wrong aspect ratio on these screens, and you can’t do anything about it but watch fat characters.
- The small screen ones tend to be fairly low resolution, mostly because they are older. Your phone or tablet is usually not that old and has HD resolution.
That’s a pretty astonishing list of failings. Your own tablet has one main downside — they force you to shut it off on takeoff and landing, for no good reason, since tests all show a tablet does not interfere with the plane. It also may have battery limitations, though those are fixed with a USB charge port in the seatback. You do need to bring a stand for it; it would be nice if there were something on the seatback to mount your tablet. You would need an app to do plane-related stuff like the moving map or safety training.
What’s amazing is that all the other airlines have paid a lot of money to install these bad systems, and more to the point carry the weight of them everywhere. This is the classic battle between custom technology, which gets obsolete very quickly, and consumer technology like phones and tablets which are generic but replaced frequently so always modern. The consumer tech will always win, but people don’t realize that.
At first, they might have worried that they needed to provide a screen for everybody. This could easily have been solved with rentals, both out in the terminal and, to a lesser extent, on board, especially if power jacks were installed so recharging is not an issue.
Today the airlines would all be wise to tear out their systems and follow Southwest. I don’t care about the Dish Network streaming that much, but better (and more popular) would be on-board servers which offer a local version of the Google Play store and iTunes store containing the most popular movies and new releases. I venture those companies would be OK with providing that if allowed, and if not, somebody else would.
As a side note, let me say that it would be nice if the online movie stores offered a form of rental more amenable to flying. Most offer a 24 hour rental, which starts when you start playing (so you can download in advance.) However, they don’t offer the ability to start a movie on your flight out and finish it on your flight back. So you dare not start a rental movie unless you are sure you are going to finish it on the flight. Another case where the DRM doesn’t really match what people want to do. (I don’t want to “buy” the movie just to finish it later.)
I will admit one nice feature of the rental is that if I am on a flight, I can watch a movie, and that activates the same 24 hour rental period at home, so those at home can watch it there too. That way, if there is a movie we all wanted to see, we can all see it — if those at home are willing to watch it that particular day.
Submitted by brad on Mon, 2013-07-22 16:08.
A nice result for Vislab of Parma, Italy. They have completed a trial run on public roads using their mostly vision-based driving system. You can see a report on the Vislab site for full details. The run included urban, rural and highway streets. While the press release tries to make a big point that they did this with a vacant driver’s seat, the video shows a safety driver in that seat at all times, so it’s not clear how the test was done. They indicate that the passenger had an emergency brake, and a chase car had a remote shutoff as well.
The Vislab car uses a LIDAR for forward obstacle detection, but their main thrust is the use of cameras. An FPGA-based stereo system is able to build point clouds from the two cameras. Driving appears to have been done in noonday sunlight. (This is easy in terms of seeing things but hard in terms of the harsh shadows.)
The article puts a focus on how the cameras are cheaper and less obtrusive. I continue to believe that is not particularly interesting — lasers will get cheaper and smaller, and what people want here is the best technology in the early adopter stages, not the cheapest. In addition, they will want it to look unusual. Cheaper and hidden are good goals once the cars have been deployed for 5-10 years.
This does not diminish the milestone of their success, making the drive with this sensor set and in these conditions.
Submitted by brad on Mon, 2013-07-22 13:27.
Had my second RAID failure last week. In the end, things were OK, but the reality is that many RAID implementations are much more fragile than they should be. Write failures on a drive caused the system to hang. A hard reset caused the RAID to be marked dirty, which meant it would not boot until falsely marked clean (and a few other hoops), leaving it with some minor filesystem damage that was repairable. Still, I believe that a proper RAID-like system should have as its maxim that the user is never worse off because they built a RAID than if they had not done so. This is not true today, both due to the fragility of systems and due to the issues I have outlined before with deliberately replacing a disk in a RAID, where the rebuild does not make use of the still-good but aging old disk.
A few years ago I outlined a plan for disks to come as two-packs for easy, automatic RAID because disks are so cheap that everybody should be doing it. The two-pack would have two SATA ports on it, but if you only plugged in one, it would look like a single disk, and be a RAID-1 inside. If you gave it a special command, it could look like other things, including a RAID-0, or two drives, or a JBOD concatenation. If you plugged into the second port it would look like two disks, with the RAID done elsewhere.
I still want this, but RAID is not enough. It doesn’t save you from file deletion, or destruction of the entire system. The obvious future trend is network backup, which is both backup and offsite. The continuing issue with network backup is that some people (most notably photographers and videographers) generate huge amounts of data. I can come back from a weekend with 16GB of new photos, and that’s a long slog over DSL with limited upstream. To work well, network backup also needs to understand databases, as a common database file might be gigabytes and change every time there is a minor update to a single record. (Some block-level incrementalism can work here even if the database is not directly understood.)
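A minimal sketch of that block-level incrementalism: hash fixed-size blocks and upload only the ones the backup server hasn’t seen. A real system would likely use content-defined chunking, but this shows the idea:

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks; the size is an arbitrary choice

def changed_blocks(data, known_hashes):
    """Split a file into blocks and return (changed, all_hashes).
    Only blocks whose hash the backup server hasn't seen need to go
    over the slow upstream link."""
    changed, all_hashes = [], []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        all_hashes.append(digest)
        if digest not in known_hashes:
            changed.append((offset, block))
    return changed, all_hashes
```

The first backup sends everything; after a one-byte edit to a multi-gigabyte database file, only the affected block crosses the DSL line.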
Network backup is also something that should be automatic. There are already peer-to-peer network backup systems that make use of the disks of friends or strangers (encrypted, of course), but it would be nice if this could “just happen” on any freshly installed computer unless you turn it off. The user must keep the key stored somewhere safe, which is not zero-UI, though if all they want is to handle file deletion and rollback, they can get away without it.
Another option that might be interesting would be the outdoor NAS. Many people now like to use NAS boxes over gigabit networks. This is not as fast as SATA with a flash drive, or RAID, or even modern spinning disk, but it’s fast enough for many applications.
An interesting approach would be a NAS designed to be placed outdoors, away from the house, such as in the back corner of a yard, so that it would survive a fire or earthquake. The box would be waterproof and modestly fireproof, but ideally it is located somewhere a fire is unlikely to reach. It could either be powered by power-over-ethernet or could have its own power and even use WIFI (in which case it is only suitable for backup, not as a live NAS.)
This semi-offsite backup would be fast and cheap (network storage tends to be much more expensive than local drives.) It would be encrypted, of course, so that nobody can steal your data. Encryption would be done in the clients, not the NAS, so even somebody who taps the outside wire would get nothing.
This semi-offsite backup could be used in combination with network backup. Large files and new files would be immediately sent to the backyard backup. The most important files could then go to network backup, or all of them, just much more slowly.
A backyard backup could also be shared by neighbours, especially on wifi, which might make it quite cost effective. Due to encryption, nobody could access their neighbour’s data.
If neighbours are going to cooperate, this can also be built by just sharing servers or NAS boxes in 2 or more houses. This provides decent protection and avoids having to be outside, but there is the risk that some fires burn down multiple houses depending on the configuration.
A backyard backup would be so fast that many would reverse what I said above, and have no need for RAID. Files would be mirrored to the backyard backup within seconds or minutes. RAID would only be needed for those who need to have systems that won’t even burp in a disk failure (which is a rare need in the home) or which must not lose even a few minutes of data.
Submitted by brad on Thu, 2013-07-18 19:56.
This week I attended the Transportation Research Board Workshop on Automated Road Vehicles which has an academic focus but still has lots of industry-related topics. TRB’s main goal is to figure out what various academics should be researching or getting grants for, but this has become the “other” conference on robocars. Here are my notes from it.
Bryant Walker Smith told of an interesting court case in Ontario, where a truck driver sued over the speed limiter put in his truck and the court ruled that the enforced speed limiter was a violation of fundamental rights of choice. One wonders if a similar ruling would occur in the USA. I have an article pending on what the speed limit should be for robocars with some interesting math.
Cliff Nass expressed skepticism over the ability to have easy handover from self-driving to human driving. This transfer is a “valence transfer” and if the person is watching a movie in a tense scene that makes her sad or angry, she will begin driving with that emotional state. More than one legal scholar felt that quickly passing control to a human in an urgent situation would not absolve the system of any liability under the law, and it could be a dangerous thing. Nass is still optimistic — he notes that in spite of often expressed fears, no whole field has been destroyed because it caused a single fatality.
There were reports on efforts in Europe and Japan. In both cases, government involvement is quite high, with large budgets. On the other hand, this seems to have led in most cases to more impractical research that suggests vehicles are 1-2 decades away.
Volkswagen described a couple of interesting projects. One was the eT! — a small van that would follow a postman around as he did his rounds. The van had the mail, and the postman did not drive it but rather had it follow him so he could go and get new stacks of mail to deliver. I want one of those in the airport to have my luggage follow me around.
VW has plans for a “traffic jam pilot” which is more than the traffic jam assist products we’ve seen. This product would truly self-drive at low speeds in highway traffic jams, allowing the user to not pay attention to the road, and thus get work done. In this case, the car would give 10 seconds warning that the driver must take control again. VW eventually wants to have a full vehicle which gives you a 10 minute warning but that’s some distance away.
Submitted by brad on Thu, 2013-07-11 21:06.
The Vislab team from Parma, Italy, which you may remember did the intermittently autonomous drive from Italy to Shanghai a couple of years ago, is back with a new vehicle, dubbed BRAiVE, which tomorrow begins testing on real urban streets.
The difference is this car is mostly based on vision systems, the specialty of Vislab. You can see a photo gallery of the car but it deliberately does not look particularly different. You can see a few low profile sensors. They claim the car uses “mostly cameras” so it’s not clear if there is still a LIDAR on the vehicle or it’s just cameras and radar. The cars to Shanghai used an array of both cameras and single plane LIDARs. It is said that the sensors are “low cost” though an exact list is not given.
This will be an interesting experiment. Previous vision based systems have not proven adequate for urban driving. They have been able to do it but not reliably enough to trust people’s lives to it. Cameras remain attractive for their low cost and other reasons outlined in my recent article on LIDAR vs. cameras.
The sensors on this vehicle are not that obvious. There remain two schools of thought on this. One believes that a significant change in the car form factor with obvious sensors will be a turn-off for buyers. Others think buyers, especially early adopters, will actually consider unusual looking sensors a huge plus, wanting the car to stand out. I’m in the latter camp, and think the Prius is evidence of this. Its unusual shape outsells all other hybrids combined, including the more ordinary looking Camry hybrid, even though the Camry is the best-selling car there is. However, there will be markets for both designs.
It will be interesting to see the results of this research, and what rates of accuracy they gain for their vision system. Lots of competing approaches is good for everybody.
Submitted by brad on Tue, 2013-07-02 10:42.
BART, one of the SF Bay Area’s transit systems, is on strike today, and people are scrambling for alternatives. The various new car-based transportation companies like Uber, Lyft and Sidecar are all trying to bump their service to help with the demand, but in the future I think there will be a much bigger opportunity for these companies.
The average car has 1.47 people in it, and the number is lower on urban commutes. Since most cars hold 4-5 people, the packed roads have a huge amount of excess capacity in empty seats. While Lyft and Sidecar call themselves ridesharing companies, they are really clever hacks at providing taxi service. Lyft’s original product, Zimride, is more ridesharing but aimed at the long-distance market. Many companies have tried to coordinate true ridesharing for commuters and people in a city, but with only limited success.
A transit strike offers an interesting opportunity. Without commenting on the merits of the sides in the strike, the reality is that we can do much better with the empty seat resource than we do, and a transit strike can prompt that.
Of course, the strike is already naturally increasing carpooling, and casual carpooling (also known as slugging) also gets a large boost. In the Bay Area, things are complicated because BART is the main alternative to the Bay Bridge, and that bridge is going to get very heavily loaded. Ferry service is increasing but it’s still a 25 minute trip every 45 minutes from the various Ferry docks. The bridge and highways are increasing incentives for HOV-3+ carpools.
Casual carpooling tends to only get you to a rough area near your destination. In this case that may be OK, as other transit is still running, only BART is out. At the semi-official casual carpool stations, there are signed waiting places that get long lines for all the general destinations. You take what you can get, and it’s also efficient in moving cars in and out.
Computer assisted carpooling could schedule people together who are both starting and ending their trip fairly close together, for maximum convenience and efficiency. If the trip starts at people’s houses, or some common point, you don’t have the casual carpool concentration issue. If you start from stops of the transit lines which are running, you still have a problem.
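As a sketch of how such matching might work, here is a toy greedy grouper: riders whose origins and destinations both fall within some tolerance of a seed rider share a car. The distance function, tolerance, and data layout are all invented for illustration; a real system would route on the street network rather than straight-line distance:

```python
import math

def dist(a, b):
    """Straight-line distance between two (x, y) points (in km, say)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def group_riders(riders, max_detour=2.0, seats=4):
    """Greedily group riders whose origins AND destinations are both
    within max_detour of the group's first (seed) rider."""
    groups = []
    unassigned = list(riders)
    while unassigned:
        seed = unassigned.pop(0)
        group = [seed]
        for r in unassigned[:]:
            if (len(group) < seats
                    and dist(r["origin"], seed["origin"]) <= max_detour
                    and dist(r["dest"], seed["dest"]) <= max_detour):
                group.append(r)
                unassigned.remove(r)
        groups.append(group)
    return groups
```

Greedy grouping is far from optimal, but it captures the key point: matching on both ends of the trip avoids the casual-carpool problem of only getting you to a rough area near your destination.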
Because of the load on the bridge, the ferry seems attractive, though there you have a chokepoint, particularly in picking up people from the boat. To do that, you would need a parking lot with numbered spaces. People allocated to a car because of a common destination would be given a spot number, and walk to the car there as they get off the ferry. A simple curb (which suffices for casual carpools) would not be enough.
Companies like Lyft and Sidecar make use of people who want to become part-time taxi drivers. While they pretend (for legal reasons) that they are people who were “already going that way” who take along others for a donation, that fiction could become reality in a transit strike. Most carpoolers would probably take along extras for no money, or gas money, especially when they gain a special carpool lane or toll saving as they do on the Bay Bridge. There would also be value in Jitney service, where a “professional” driver (who is just driving for the money, officially or not) takes 3-4 passengers along the common route, and they all pay a reasonable share.
Within a city, that share could be competitive, even with the subsidized cost of transit, which tends to be close to $2/ride. Taxi fares are $2.50/mile plus a flag drop, which means a trip of 3-4 miles could be competitive if split among 4 people, and not that bad (considering the higher level of service) even on trips that are twice as long. (The Bay Bridge is 10 miles long so taxi fares will have a hard time competing with even the higher BART fare.)
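The arithmetic can be made concrete with a quick sketch (the $3.50 flag drop is an assumed figure for illustration; check your city's actual tariff):

```python
def per_person_fare(miles, riders, rate=2.50, flag_drop=3.50):
    """Split a metered taxi fare evenly among the riders.
    rate is dollars per mile; flag_drop is the fixed starting charge."""
    return (flag_drop + rate * miles) / riders

# A 4-mile trip split four ways:
# (3.50 + 2.50 * 4) / 4 = 3.375, i.e. about $3.38 each,
# in the ballpark of a ~$2 subsidized transit fare.
```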
Jitney service (shared door to door or on-demand fixed route) is quite popular outside the USA, and indeed there are many cities with active private transit systems and jitney systems. But most Americans are not interested in the inconvenience of going slightly out of their way to deal with the needs of other passengers, and so attempts at such ridesharing here have not caught on. It’s probably too late for this strike, but the next transit strike might end up demonstrating there are other systems aside from transit that are efficient and cost-effective.
The interface would not be too different from existing systems, except people would specify how much inconvenience they would tolerate from having others in the vehicle and going out of their way, in exchange for savings.
When it comes to robocars, this might happen as well, and it could even happen with vans to provide a very effective shared system that still offers door-to-door. Robocars also offer the potential for mixed-mode vanpool trips. In such a trip, a single-person robocar takes you to a parking lot, where 12 other people all arrive within the same minute and you all get into a van. The van does the bulk of the trip, and stops near your set of destinations in a parking lot where a set of small single-person robocars sit waiting to take people the last mile. This highly efficient mode should be able to beat any existing transit because of its flexibility and door-to-door service. The vans offer the ability to be luxury vans, with business class seats with privacy screens, so that upscale transit is also possible.
Submitted by brad on Sun, 2013-06-30 12:50.
Yahoo announced that in a few days they will shut down the altavista web site. This has prompted a few posts on the history of internet search, to which I will add an anecdote.
The first internet search engine predated the “web” and was called Archie. Archie (the name a play on “archive”) was basic by today’s standards. The main protocol for getting files on the internet in those days was FTP. Many sites ran an open FTP server, which you could connect to and download files from. If you had files or software to share with people, you put it up on an FTP server, in particular one that allowed anonymous login to get public files. The Archie team (from Montreal) built a tool to go to all the open servers, read their indexes and generate a database. You could then search, and get a pointer to all the places you could get a file. It was hugely popular for the day.
(You will probably note that this is almost exactly the way Napster worked, the only difference being that Napster was a bit more sophisticated and people used it to share files that were copyrighted. FTP servers had copyrighted material, but mostly they had open source software and documents.)
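The heart of Archie was an index from filenames back to the servers that held them. A toy sketch of the idea (the server names and file listings here are made up; the real Archie crawled anonymous FTP sites to collect its listings):

```python
def build_index(server_listings):
    """Build a filename -> [servers] index from per-server file lists,
    as Archie did after crawling the open FTP servers."""
    index = {}
    for server, files in server_listings.items():
        for name in files:
            index.setdefault(name.lower(), []).append(server)
    return index

def search(index, query):
    """Find servers offering any file whose name contains the query."""
    q = query.lower()
    return sorted({server
                   for name, servers in index.items() if q in name
                   for server in servers})
```

The query tells you where the file is, not what is in it; fetching it was still up to you and your FTP client.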
Around the same time, a lot of folks were building full-text search engines for use on large collections of documents. You could find these on private databases around the world, and the WAIS protocol was developed by Brewster Kahle to make a standardized interface to text search and his own text search tools.
Not long after the web started to grow, Fuzzy Mauldin at CMU made Lycos which was a full-text search engine applied to documents gathered from the web. The ability to search the web generated much attention, and a few other competing spiders and search engines appeared. Everybody had a favourite. (To add to my long list of missed opportunities, in April of 95 I wrote a few notes to Fuzzy looking to get his spider index so we could sort web pages based on how many incoming links they had. Nothing ever came of that but as you may know that concept later had some value. :-) And I also turned down a $4M offer from Lycos to buy ClariNet (which would have turned into $40M when their stock shot up in the bubble. Sigh.)
In 1995, for many people that favourite changed to Alta Vista, a new search engine from Digital Equipment Corp. DEC was a huge name at the time, the biggest name in minicomputers, and it was just losing the Unix crown to Sun. The team at DEC put a lot of computing power into Alta Vista, and so it had two useful attributes. First, they spidered a lot more pages, and thus were more likely to find stuff. They were also fast compared to most of the other engines. In a precursor to other rapid turnarounds in the internet business, you could switch your favourite search engine in a heartbeat and many did. It was big and fast due to DEC putting a lot of fancy computer hardware on it, and DEC eventually justified the money they were spending on it (there was no revenue for search in those days) by saying it showed off just how powerful DEC’s computers with big address spaces were. Indeed the limits of Alta Vista were the limits of the architecture, using the 64-bit Alpha to address 130GB of RAM and 500GB of disk — huge for the day.
On Alta Vista’s home page, they gave you a sample query to type in the search box, to show you how to use it. That query was:
kayak sailing “san juan islands”
Indeed, if you typed that, you got a nice array of pages which talked about kayaking up in the San Juan islands, tour operators, etc. — just what you wanted to get from a query.
My devious mind wondered, “what if I put up a page on my own web site with this as the title?” I created the Kayak Sailing “San Juan Islands” home page on the rec.humor.funny site, which was already a very popular site in those days. (Indeed it’s around 1995 that RHF fell behind Yahoo as the most widely read thing on the internet, but that refers to the USENET group, not the page.)
You will note as you look at the page that it contains the words in the title and headers, and repeated many times in invisible comments. In those days the search engines were ranking higher simply based on where words were, and if they were repeated many times. So I gave it a whirl. This was an early attempt at what is now called “black hat search engine optimization” though I was doing it for fun, rather than nefarious gain.
The results didn’t change right away, though. Alta Vista relied on huge computer power, but it rebuilt its index only periodically, by hand. It would be a month or more before Alta Vista recalculated its index. One day I went to type in the query and bingo — there was my page on the first page of search results. Along with a dozen other people who had tried the same thing, and a few pages that were articles writing about Alta Vista and giving the example query, or which were copying its search page which of course had that string.
More to the point, not a single item on the results page was about actual kayaking! The sample query was ruined, though the results were quite amusing. Not long after, Alta Vista changed the example to Pizza “deep dish” Chicago and of course I added it to my page as well. Not much later, AV switched to showing different examples from a rotating and changing collection so people could not play this game any more.
While Alta Vista ruled search, in spite of efforts from Infoseek, Inktomi/Hotbot and others, we all know that a few years later, Google was born at Stanford, and it proved again how quickly people could switch to a new favourite search engine. Google lives under that fear (but with great success) to this day, and its dominance turned SEO into a giant industry.
Submitted by brad on Wed, 2013-06-26 14:25.
I always feel strange when I see blog and social network posts about the death of a pet or even a relative. I know the author but didn’t know anything about the pet other than that the author cared.
So as I report the end for our kitty, Bijou, I will make it interesting by relaying a fun surveillance related story of how she arrived at our house. She had been rescued as a stray by a distant relative. When that relative died there was nobody else to take the cats, so we took two of them, even though the two would have nothing to do with each other. Upon arrival at our house, both cats discovered that the garage was a good place to hide, but the hiding was quite extreme, and after about 4 days we still could not figure out where Bijou was hiding. Somebody was coming to eat the food, but we could not tell from where.
I had a small wireless camera with an RF transmitter on it. So I set it up near the food bowl, and we went into the TV room to watch. As expected, a few minutes later, the cat emerged — from inside the bottom of the washing machine through a rather small hole. After emerging she headed directly and deliberately to the camera and as she filled the screen, suddenly the view turned to distortion and static. It was the classic scene of any spy movie, as shot from the view of the surveillance camera. The intruder comes in and quickly disables the camera.
What really happened is that the transmitter is not very powerful and you must aim the antenna. When a cat sees something new in her environment, her first instinct is to come up to it and smell it, then rub her cheek on it to scent-mark it. And so this is what she did, bumping the antenna to lose the signal, though it certainly looked like she was the ideal cat for somebody at the EFF.
It’s also a good thing we didn’t run the washing machine. But I really wish I had been recording the video. Worthy of Kittywood studios.
She had happy years in her new home (as well as some visits to her old one before it was sold) and many a sunbeam was lazily exploited and evil bright red dot creature never captured, but it could not be forever.
RIP Bijou T. Cat, 199? - 2013
Submitted by brad on Wed, 2013-06-19 11:14.
The AUVSI summit on “driverless” cars last week contained 2 days of nothing but robocars, and I reported on issues regarding Google and policy in part 1.
As noted, NHTSA released their proposal for how they want to regulate such vehicles. In it, they defined levels 0 through 4. Level 2 is what I (and GM) have been calling “super cruise” — a car which can do limited self driving but requires constant human supervision. Level 3 is a car which can drive without constant attention, but might need to call upon a human driver (non-urgently) to handle certain streets and situations. Level 4 is the fully automatic robocar.
Level 2 issues
Level 2 is coming this year in traffic jams in the Mercedes S and the BMW 5, and soon after from Audi and Volvo. GM had announced super cruise for the 2015 Cadillac line but has pulled back and delayed that to later in the decade. Nonetheless the presentation from GM’s Jeremy Salinger brought home many of the issues with this level.
GM has done a number of user studies in their super cruise cars on the test track. And they learned that the test subjects very quickly did all sorts of dangerous things, definitely not paying attention to the road. They were not told what they couldn’t do, but subjects immediately began texting, fiddling around in the back and even reading (!) while the experimenters looked on with a bit of fear. No big surprise, as people even text today without automatic steering, but the experimental results were still striking.
Because of that GM is planning what they call “countermeasures” to make sure this doesn’t happen. They did not want to say what countermeasures they liked, but in the past, we have seen proposals such as:
- You must touch the wheel every few seconds or it disengages
- A camera looks at your eyes and head and alerts or disengages if you look away from the road for too long
- A task for your hands like touching a button every so often
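The first countermeasure above might be sketched as a simple escalation policy. This is entirely hypothetical (the thresholds and state names are invented for illustration, not GM's actual design):

```python
def countermeasure_state(seconds_hands_off,
                         alert_after=5.0, disengage_after=10.0):
    """Escalate from normal operation to a warning, and then to
    disengagement, as hands-off-wheel time grows."""
    if seconds_hands_off >= disengage_after:
        return "disengage"
    if seconds_hands_off >= alert_after:
        return "alert"
    return "ok"
```

Even in this toy form the annoyance problem is visible: set the thresholds short and attentive drivers get nagged constantly; set them long and the countermeasure no longer stops texting.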
The problem is these countermeasures can also get annoying, reducing the value of the system. It may be the lack of ability to design a good countermeasure is what has delayed GM’s release of the product. There is a policy argument coming up about whether level 2 might be more dangerous than the harder levels 3 and above, because there is more to go wrong with the human driver and the switches between human and machine driving. (Level 4 has no such switches, level 3 has switches with lots of warning.)
On the plus side, studies on existing accidents show that accident-avoidance systems, even just forward collision avoidance, have an easy potential for huge benefits. Already we’re seeing a 15% reduction in accidents in some studies just from FCA, but studies show that in 33% of accidents, the brakes were never applied at all, and in only 1% of accidents were the brakes applied with full force! As such, systems which press the brakes and press them hard when they detect the imminent accident may not avoid the accident entirely, but they will greatly reduce the severity of a lot of accidents.
Submitted by brad on Sun, 2013-06-16 09:38.
I was sadly informed this morning by Ann Lowson that transportation pioneer Martin Lowson has fallen to a stroke this weekend.
Martin had an amazing career but it was more amazing that he was still actively engaged at age 75. We shared a panel last month in Phoenix at the people-mover conference and continued our vigorous debate on the merits of cars like his on closed guideways compared to robocars.
His career included leading a large team on the Apollo project, and building the world’s fastest helicopter, as well as faculty positions at Bristol, and you can read some about it here. For me, his big contribution was to found the ULTra PRT company, the first to commercially deploy a PRT. It runs today at Heathrow, moving people between the terminal and the business parking lot.
PRT was conceived 50 years ago, and many, including Martin and myself, were fascinated by the idea. More recently, as readers know, I decided the PRT vision of personal transportation could be realized on city streets by robocars. It’s easier to do it today on dedicated guideway, but the infrastructure costs tell me the future lies off the guideway.
That doesn’t diminish the accomplishment of being the first to make it work on the guideway. ULTra uses small cars on rubber tires, not a train on rails. They are guided by a laser rangefinder and are fully automated, with no steering wheel.
Last year I invited Martin in to give a talk to Google’s car team, and he got a ride in the car, which he quite enjoyed, even though it didn’t convince him that they were the future. But unlike other skeptics, I gave him the deepest respect for his skill and experience. People who can found companies and lead engineering and public acceptance breakthroughs while senior citizens are a very rare thing, and the world will miss him.
Submitted by brad on Sat, 2013-06-15 15:43.
This week I attended AUVSI’s “Driverless Car Summit” in Detroit. This year’s event, the third, featured a bigger crowd and a decent program, and will generate more than one post.
I would hardly call it a theme, but two speakers expressed fairly negative comments about Google’s efforts, raising some interesting subjects. (As an important disclaimer, the Google car team is a consulting client of mine, but I am not their spokesman and the views here do not represent Google’s views.)
The keynote address came from Bryan Reimer of MIT, and generated the most press coverage and debate, though the recent NHTSA guidelines also created a stir.
Reimer’s main concern: Google is testing on public streets instead of a test track. As such it is taking the risk of a fatal accident, from which the blowback could be so large it stifles the field for many years. Car companies historically have done extensive test track work before going out on real streets. I viewed Reimer’s call as one for near perfection before there is public deployment.
There is a U-shaped curve of risk here. Indeed, a vendor who takes too many risks may cause an accident that generates enough backlash to slow down the field, and thus delay not just their own efforts, but an important life-saving technology. On the other hand, a quest for perfection attempts what seems today to be impossible, and as such also delays deployment for many years, while carnage continues on the roads.
As such there is a “Goldilocks” point in the middle, with the right amount of risk to maximize the widescale deployment of robocars that drive more safely than people. And there can be legitimate argument about where that is.
Reimer also expressed concern that as automation increases, human skill decreases, and so you actually start needing more explicit training, not less. He is as such concerned with the efforts to make what NHTSA calls “level 2” systems (hands off, but eyes on the road) as well as “level 3” systems (eyes off the road but you may be called upon to drive in certain situations.) He fears that it could be dangerous to hand driving off to people who now don’t do it very often, and that stories from aviation bear this out. This is a valid point, and in a later post I will discuss the risks of the level-2 “super cruise” systems.
Maarten Sierhuis, who is running Nissan’s new research lab (where I will be giving a talk on the future of robocars this Thursday, by the way) issued immediate disagreement on the question of test tracks. His background at NASA has taught him that you “fly where you train and train where you fly” — there is no substitute for real world testing if you want to build a safe product. One must suspect Google agrees — it’s not as if they couldn’t afford a test track. The various automakers are also all doing public road testing, though not as much as Google. Jan Becker of Bosch reported their vehicle had only done “thousands” of public miles. (Google reported a 500,000 mile count earlier this year.)
Heinz Mattern, research and development manager for Valeo (which is a leading maker of self-parking systems) went even further, starting off his talk by declaring that “Google is the enemy.” When asked about this, he did not want to go much further but asked, “why aren’t they here? (at the conference)” There was one Google team employee at the conference, but not speaking, and I’m not an employee or rep. It was pointed out that Chris Urmson, chief engineer of the Google team, had spoken at the prior conferences.
Submitted by brad on Sun, 2013-06-09 18:13.
I’m off for AUVSI’s “Driverless Car Summit” in Detroit. I attended and wrote about last year’s summit, which, in spite of being put on by a group that comes out of the military unmanned vehicle space, was very much about the civilian technology. (As I’ve said before, I have a dislike for the term “driverless car” and in fact at the summit last year, the audience expressed the same dislike but could not figure out what the best replacement term was.)
I’ll be reporting back on events at the summit, and making a quick visit to my family in Toronto as well. I will also attend the Transportation Research Board’s conference on automated vehicles at Stanford in July.
Then I’m back for the opening of our Singularity University Graduate Studies Program for 2013 at NASA Ames Research Park this coming weekend. My students will get some fun lectures on robocars, as well as many other technologies. Early bird tickets for the opening ceremony are still available.
On June 20, I will give a talk at a meeting of the new Silicon Valley Autonomous Vehicle Enthusiasts group. This group has had one talk. The talks are being hosted at Nissan’s new research lab in Silicon Valley, where they are researching robocars. I just gave a 10 minute version of my talk at Fujitsu Labs’ annual summit last week, this will be the much longer version!
SU consumes much of my summer. In the fall, you’ll see me giving talks on Robocars and other issues in Denmark, London, Milan as well as at our new Singularity University Summit in Budapest in November, as well as others around the USA.
Submitted by brad on Thu, 2013-06-06 14:20.
There have been a wide variety of announcements of late giving the impression that somebody has “solved the problem” of making a robocar affordable, usually with camera systems. It’s widely reported how the Velodyne LIDAR used by all the advanced robocar projects (including Google, Toyota and many academic labs) costs $75,000 (or about $30,000 in a smaller model) and since that’s more than the cost of the car, it is implied that this is a dead-end approach.
Recent stories include a ride in MobilEye’s prototype car by the New York Times, a number of reports of a claim from the Oxford team (which uses LIDAR today) that they plan to do it for just $150 and many stories about a Romanian teen who won the Intel science fair with a project to build a cheaper self-driving car.
I have written an analysis of the issues comparing LIDARS (which are currently too expensive, but reliable in object detection) and vision systems (which are currently much less expensive, but nowhere near reliable enough in object detection) and why different teams are using the different technologies. Central is the question of which technology will be best at the future date when robocars are ready to be commercialized.
In particular, many take the high cost of the Velodyne, which is hand-made in small quantities, and incorrectly presume this tells us something about the cost of LIDARs a few years down the road, with the benefits of Moore’s Law and high-volume manufacturing. Saying the $75,000 LIDAR is a dead end is like being in 1982, noting that small disk drives cost $3,000, and declaring that planning for disk drive storage of large files is a waste of time.
Cameras or Lasers in the robocar
I will add some notes about Ionut Budisteanu, the 19-year old Romanian. His project was great, but it’s been somewhat exaggerated by the press. In particular, he mistakenly calls LIDAR “3-D radar” (an understandable mistake for a non-native English speaker) and his project was to build a lower-cost, low-resolution LIDAR, combining it with cameras. However, in his project, he only tested it in simulation. I am a big fan of simulation for development, learning, prototyping and testing, but alas, doing something in simulation, particularly with vision, is just the first small step along the way. This isn’t a condemnation of Mr. Budisteanu’s project, and I expect he has a bright future, but the press coverage of the event was way off the mark.
Submitted by brad on Thu, 2013-05-30 13:42.
Today the National Highway Traffic Safety Administration (NHTSA) released their plan on regulation of automated vehicles, a 14 page document on various elements of the technology and how it might be regulated.
No regulations yet of course, but a message that they plan to look hard at the user interface, particularly on the handoff between a human driver and the system. All reasonable stuff. They define levels of autonomy from 0 through 4 (similar to prior lists) and say they don’t expect full unmanned operation for some time, and discourage states from even making it legal for ordinary folks to use level 3 (where the driver can do another task) yet — only testing should be allowed.
It’s good that NHTSA is studying this, and they seem to understand that it’s too early to write regulations. It’s pretty easy to make mistakes if you write regulations before the technologists have even figured out what they intend. For example this document, as well as some Nevada law documents, suggested regulations that required the vehicle to hand over control if the driver used the wheel, brakes or accelerator. Yet consider the other side: if the driver kicks the gas pedal by mistake and does not have her hands on the wheel, would we want the law to demand that the system relinquish the wheel, causing the vehicle to leave the lane if the driver doesn’t get on it quickly?
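The tension can be made concrete with a toy decision rule. Everything here is hypothetical (the signals and the "safer" rule are invented for illustration, not anything NHTSA or a vendor has specified):

```python
def should_hand_over(pedal_pressed, wheel_input, hands_on_wheel):
    """Compare two handover rules. The naive rule (as in the proposed
    regulations) relinquishes on ANY driver input; a safer hypothetical
    rule gives up steering only if the driver's hands are on the wheel."""
    naive = pedal_pressed or wheel_input
    safer = wheel_input or (pedal_pressed and hands_on_wheel)
    return naive, safer

# The accidental-kick case: gas pedal bumped, hands nowhere near the wheel.
# The naive rule hands over steering; the safer rule keeps the system driving.
```

This is exactly the kind of detail that is easy to get wrong if regulations are written before the technologists have worked out the failure cases.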
At this point their goal is lots of research on what to do, and that’s reasonable. Of course, the sooner the vehicles can get on the road safely, the sooner they can save lives, and NHTSA understands that. I also hope that the laws will not push small players out of the market entirely, as long as they can test and demonstrate safety as well as the bigger players.