Topic

Otto and self-driving trucks -- what do they mean?

Today sees the un-stealthing of a new company called Otto, which plans to build self-driving systems for long haul trucks. The company has been formed by a skilled team, including former members of Google’s car team and people I know well. You can see their opening blog post.

My entire focus on this blog, and the focus of most people in this space, has been on cars, particularly cars capable of unmanned operation and door-to-door service. Most of those not working on that have had their focus on highway cars and autopilots. The highway is a much simpler environment and thus much easier to engineer for, but it involves higher speeds, so the cost of accidents is worse.

That goes doubly for trucks, which are both fast and massive. At the same time, 99% of truck driving is actually very straightforward — stay in a highway lane, usually the slow one, with no fancy moving about.

Some companies have done exploration of truck automation. Daimler/Freightliner has been testing trucks in Nevada. Volvo (trucks and cars together) has done truck and platooning experiments, notably the Sartre project some years ago. A recent group of European researchers did a truck demonstration in the Netherlands, leading up to the Declaration of Amsterdam which got government ministers to declare a plan to modify regulations to make self-driving systems legal in Europe. Local company Peloton has gone after the more tractable problem of two-truck platoons with a driver in each truck, aimed primarily at fuel savings and some safety increases.

Safety

While trucks are big and thus riskier to automate, they are also risky for humans to drive. Even though truck drivers are professionals who drive all day, there are still around 4,000 killed every year in the USA in truck accidents. More than half of those are truck drivers, but a large number of ordinary road users are also killed. Done well, self-driving trucks will reduce this toll. Just as with cars, companies will not release the systems until they believe they can match and beat the safety record of human drivers.

The Economics

Self-driving trucks don’t change the way we move, but they will have a big economic effect on trucking. Driver pay accounts for about 25-35% of the cost of truck operation, but in fact early self-driving won’t take away jobs, because there is a serious shortage of truck drivers in the market — companies can’t hire enough of them at the wages they currently pay. It is claimed that there are 50,000 job openings unfilled at the present time. Truck driving is grueling, sometimes mind-numbing work, and it takes the driver away from home and family for over a week on every long-haul run. It’s not very exciting work, and it involves long days (11 hours is the legal limit) and a lot of eating and sleeping in truck stops or the cabin of the truck.

Average pay is about 36 cents/mile for a solo trucker on a common route. Alternatively, loads that need to move fast are driven by a team of two. They split 50 cents/mile between them, but can drive 22 hours/day — one driver sleeps in the back while the other takes the wheel. You make less per mile per driver, but you are also paid for the miles you are sleeping or relaxing.
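To make those numbers concrete, here is a quick back-of-the-envelope comparison in Python, using only the figures above plus an assumed 55 mph average speed (illustrative, not industry data):

```python
# Rough comparison of solo vs. team long-haul pay, using the per-mile
# rates and daily driving-hour limits quoted above (illustrative only).

AVG_SPEED_MPH = 55                  # assumed average highway speed

solo_rate = 0.36                    # $/mile, solo driver
solo_hours = 11                     # legal daily driving limit
solo_daily_pay = solo_rate * AVG_SPEED_MPH * solo_hours

team_rate = 0.50                    # $/mile, split between two drivers
team_hours = 22                     # truck keeps rolling while one driver sleeps
team_daily_pay_each = (team_rate / 2) * AVG_SPEED_MPH * team_hours

print(f"Solo driver, per day: ${solo_daily_pay:,.0f}")        # roughly $220
print(f"Team driver, per day: ${team_daily_pay_each:,.0f}")   # roughly $300
```

Even at a lower per-mile rate, the team driver comes out ahead per day because the truck earns miles around the clock.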

A likely first course is trucks that keep their solo driver who drives up to 11 hours — probably less — and have the software drive the rest. Nonstop team driving speed with just one person. Indeed, that person might be an owner-operator who is paying for the system as a businessperson, rather than a person losing a job to automation. The human would drive the more complex parts of the route (including heavy traffic) while the system can easily handle the long nights and sparse heartland interstate roads.

The economics get interesting when you can do things that are expensive for human drivers and teams. Aside from operating 22 or more hours/day at a lower cost, certain routes will become practical that were not economic with human drivers, opening up new routes and business models.

The Environment

Computer-driven trucks will drive more consistently than humans, effectively driving in “hypermile” style as much as they can. That should save fuel. In addition, while I would not do it at first, the platooning experimented with by Peloton and Sartre does result in fuel savings. Also interesting is the ability to convert trucks to natural gas, which is domestic and burns cleaner (though it still emits CO2). Operators of automated trucks on fixed routes might be more willing to make this conversion.

Road wear

There is strong potential to reduce the damage to roads (and thus the cost of maintaining them, which is immense and seriously in arrears) thanks to the robotruck. That’s because heavy trucks and big buses cause almost all the road wear today. A surprising rule of thumb is that road damage goes up with the 4th power of the weight per axle. As such, an 80,000lb truck with 34,000lb on each of two sets of 2 axles and 12,000lb on the front axle does around 2,000 times the road damage of a typical car!
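For the curious, here is a rough sketch of that 4th-power calculation. The axle split assumes the standard US limits (12,000lb steer axle, two 34,000lb tandems) and a 4,000lb car; the exact multiple is very sensitive to what you assume for the car, which is why published figures vary widely:

```python
# The "4th power" rule of thumb: road damage scales roughly with
# (weight per axle)^4. Sketch comparing a fully loaded semi to a car.

def relative_damage(axle_loads_lb, reference_axle_lb=2_000):
    """Damage relative to a reference axle load, summed over all axles."""
    return sum((w / reference_axle_lb) ** 4 for w in axle_loads_lb)

truck_axles = [12_000, 17_000, 17_000, 17_000, 17_000]   # 80,000 lb on 5 axles
car_axles = [2_000, 2_000]                                # 4,000 lb car

ratio = relative_damage(truck_axles) / relative_damage(car_axles)
print(f"Truck does roughly {ratio:,.0f}x the road damage of the car")
# With these assumptions the ratio lands around 11,000x; the multiple is
# extremely sensitive to the assumed car weight, which is why quoted
# figures range from roughly 1,000x to 10,000x.
```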

I was investigated by the feds for taking a picture of the sun

A week ago, a rather strange event took place. No, I’m not talking about just the Transit of Mercury in front of the sun on May 9, but an odd result of it.

That morning I was staying at the Westin Waterfront in Boston. I like astrophotography, and have shot several transits. I am particularly proud of my gallery of the 2004 Transit of Venus which is unusual because I shot it in a hazy sunrise where it was a naked eye event, so I have photos of the sun with a lake and birds. Indeed, since the prior transit of Venus was in 1882, we may have been among the first alive to deliberately see it as a naked eye event.

I did not have my top lenses with me but I decided to photograph it anyway with my small size Sony 210mm zoom and a welding glass I brought along. I shot the transit, holding the welding glass over the lens, with all mounted on my super-light “3 legged thing” portable tripod. Not wanting to leave the lens pointed at the sun when I removed the glass, I pulled the drape shut, looked at photos and then tilted the camera away. I went off to my meetings in Boston.

At 10am I got a frantic call from the organizer of the Exponential Manufacturing conference I would be speaking at the next day. “You need to talk to the FBI!” he declared. Did they want my advice on privacy and security? “No,” he said, “They saw you taking photos of the federal building with a tripod from your hotel window and want to talk to you.” (Note: It probably wasn’t the FBI, that was just a first impression. The detectives would not name who had reported it.)

Of course, I had no idea there was any federal building out the window and I did not take any photos of the buildings. In fact, I’m not quite sure what the federal facility is, though I presume it’s at the Barnes Building at 495 Summer St. — they never told me. Anybody know what’s there? Google maps shows a credit union and a military recruiting office, and there was suggestion of a Navy facility. Amusingly the web page for the recruiting center features a (small) photo of the building.

Nothing to justify them having a surveillance crew constantly looking into the hotel rooms of guests and going nuts when they see a camera on a mini-tripod.

I talked to hotel security. Turns out they had gone into my room! Sadly, though police can’t enter your room without a warrant, hotel staff usually can. Two Boston detectives were put on the case. After talking to hotel security, I thought it was over, but no, the next day after my talk, I had the detectives waiting for me in the hotel.

First of all, I was concerned the hotel had given them my name. The hotel insisted the Boston innkeeper statutes require they do this. In reality, such statutes were found facially unconstitutional last year by the Supreme Court in City of Los Angeles v. Patel. In a facial challenge, the law is declared inherently invalid regardless of the specific facts of a case. The Boston police don’t believe this ruling applies to their law yet. So now my name is in police records over photographing the sun. Yes, when they met me, they realized I was just an astro-nerd and not a terrorist casing out the sun for an attack. (General conclusion, it’s too bright, so do it at night.)

To scare me, and to justify their actions, they said the unnamed complainers (probably not FBI) had been “unsure if it was just a camera” (i.e. pretending it might be a gun) even though it looks nothing like one. And when I closed the drape — they were watching me live — they imagined it was because I had seen them and was hiding.

Mostly I laugh but the other part of me asks, “what the hell has gone wrong with this country?” Feds peering into our hotel rooms? Being afraid of a cheap lens (on an expensive camera, admittedly) on an ultralight tripod? Getting a police record for taking a photo out your hotel window, not even of the nondescript building that I would have no idea is a federal building? Having to demonstrate to not one, but two detectives that you’re just a harmless nerd? Not good. (They did Google me but did not clue in that I was on the board of the organization suing the NSA and other intelligence groups over the illegal mass wiretapping going on.)

Above you will find my evil picture of the sun — not that bad for a $150 lens, actually — and a picture of my room when I returned to it, with the camera pointing up and into the room. Yes, I took a picture of the buildings after all this, though I did not take one in the morning. That’s Mercury in the lower left corner of the solar disk. The dark area in the middle is a sunspot, another good location for an attack.

Welcome to the new America. And of course I need to add “don’t search my room or give my name to police without contacting me” to my list of things a good hotel should do.

(BTW, I see many duplicate comments pointing to the story of the Economics professor pulled from a plane for doing some diffEQs on paper in the plane seat on his way to a conference. I think the whole nerd world saw that story already.)

What should be in every hotel or AirBNB?

My recent efforts in consulting and speaking have led to a lot more travel — which is great sometimes, but also often a drain. I’ve been staying in so many hotels that I thought it worth enumerating some of the things I think every hotel room should have, and what I often find missing.

Most of these things are fairly inexpensive to do, though a few have higher costs. The cheaper ones, I would hope, can just be included; I realize some might incur extra charges or a slightly more expensive room, or perhaps they can be offered as a perk to loyalty program members.

Desk space for all occupants

Most rooms have a workspace for only one person, even if it’s a double room. The modern couple both have computers, and both need a place to work, ideally not crammed together. That’s also true when two co-workers share a room. And in a perfect room, both desk spaces share the other attributes of a good desk, namely:

  • The surface is not glass. I would say more than half the desks in hotel rooms are glass, which doesn’t work well with optical mice. Sure, you can put down some papers, but this seems kinda silly.
  • Of course, 2 or even 3 power outlets, on the desk or wall above it. Ideally the “universal” kind that accept most of the world’s plugs. (Sure, I bring adapters but this is always handy.) Don’t make me crawl under the desk to plug things in, or have to unplug something else.

To my horror, Marriott has been building some new hotels with no desk space at all. Some person (I would say some idiot) decided that since millennials use fewer laptops and just want to sit on a couch with their tablet, it was better to sacrifice the desk. Those hotels had better have folding desks you can borrow; in fact, all hotels could do that to fix the desk space shortage, particularly if rooms are small. Another option would be a leaf that folds down from the wall.

Surfaces/racks for luggage and other things for everybody.

Many rooms are very lacking in table or surface space beyond the desk. Almost every hotel room comes with only one luggage holder, where a couple might find themselves with 3 or, in rare cases, 4 bags. I doubt these folding luggage holders are that expensive, but if you can’t put more than one in every room, then watch people as they check in, note how many bags they have, and have somebody automatically send up some extra holders to their room. At the very least make it easy for them to ask. I mean, these things are under $30 at quantity one. Get more!

Bathrooms need surface space, too. Too often I’ve seen sinks with nowhere to put your toiletries and freedom bag. In fact, I want space everywhere to unpack the things I want to access.

Power by the bed (and other places)

Sure, I get that older hotel rooms did not load up with power outlets, and modern ones do. But aside from the desk, most people want power by the bed now, for their phone charger if nothing else. If you just have one plug by the bed, put a 3-way splitter (global plug, of course) on that plug so that people can plug things in without unplugging the light or clock. And ideally up high, so I don’t have to crawl behind things to get at it.

A little more controversial is the idea of offering USB charging power. Today, we all carry chargers, but the hope is that if charging becomes commonplace, then like the travel hair dryer people used to carry and no longer do, we might be able to depend on finding a charger. Problem is, charging standards are many and change frequently — we now have USB regular (useless) and fast-charge, along with Qualcomm quick-charge and USB C. More will come. On top of this, strictly you should not plug your device into a random USB port which might try to take it over. You can get what’s called a “USB Condom” to block the data lines, but those might interfere with the negotiation phase of smarter power standards. A wireless “Qi” charging plate could be a useful thing.

As a couple, we have had up to 8 things charging at the same time, when you include phones, cameras, external batteries, headphones, tablets and other devices. So I bring a 5-way USB fast charger and rely on laptops or other chargers to go the distance.

Let me access the HDTV as a monitor, or give me a monitor.

Some rooms block you from any access to the TV. Some have a VGA or HDMI port built into a console on the desk. The latter is great, but usually the TV is mounted in a way that makes it not very useful as a computer monitor for working. It’s primarily useful for watching video. I pretty much never watch video in a hotel room, so given the choice, I would put the monitor by the desk, and it should be 1080p or better — in fact 4K should be the norm for any new installations. If you don’t have one, have one I can call down for, even at a modest fee.

Did a Tesla self-crash in self-park mode?

A recent news story from Utah, describing a Tesla which entered self-park (“summon”) mode and drove itself into the back of a flatbed truck, raises some interesting issues.

Tesla says that the owner of the vehicle initiated auto-Summon, which requires pressing the gear selector stalk twice and then shifting into park, then leaving the vehicle. After that the car goes into its self-park mode in 3 seconds, and the driver is supposed to be watching because the feature is a beta.

The owner says he never activated the self-park, and if somehow he did by accident, he was standing by the car for 20 seconds showing it off to a stranger, and as such he claims he is absolutely certain the car did not begin moving 3 seconds after he got out. Tesla says the logs say otherwise.

Generally, one believes log files over human memory, though these stories are surprisingly at odds. When doing Summon, the Tesla is flashing its hazard lights and moving, so it’s not exactly subtle. And it’s not supposed to work unless the keyfob is close to the car. No doubt there will be back and forth on just what happened.

However, there are some things that are less disputed:

  1. Unless the owner is out and out lying, there is a problem which allowed an owner to activate the auto-summon feature by accident, and to do so when not close to the car. (When you activate it the hazards start blinking and it shows auto-park on the screen.)
  2. The car should not have hit the metal bars on the back of the flatbed. However, Tesla warns that the feature may not detect thin objects or hanging objects. These bars are quite low, but are sticking off the end of the truck by a large amount. Clearly the obstacle detection is indeed very “beta” if it could not see these. Apparently auto-park is done using the ultrasonic sensors, not the camera. Bumper based ultrasound is not enough.

This also adds some fuel to the ongoing debate about maps. The car was in a place where there would be no reason to initiate Tesla’s self-park, which is designed to drive straight into narrow parking spaces. It is not necessary to have a map of every space a car might self-park in, but even a fairly coarse and inaccurate map could allow the car to say, “This seems like an odd place to use the self-park feature, are you sure?” And pretty much all parallel parking spaces on the side of the road qualify as a place you would not use this particular self-park function.
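To illustrate, a coarse-map sanity check could be as simple as the sketch below. The map format, coordinates and thresholds are invented for illustration; this is not how Tesla’s software actually works:

```python
# Illustrative only: warn if self-park ("summon") is requested far from
# any mapped parking area. The "map" is just a list of lat/lon points
# with a radius; a real system would use proper map data.

from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

# Hypothetical coarse map: (lat, lon, radius_m) of known parking areas.
PARKING_AREAS = [
    (40.7610, -111.8910, 80),
    (40.7655, -111.9020, 120),
]

def summon_looks_plausible(lat, lon):
    return any(haversine_m(lat, lon, p_lat, p_lon) <= r
               for p_lat, p_lon, r in PARKING_AREAS)

if not summon_looks_plausible(40.7702, -111.8850):
    print("This seems like an odd place to use self-park. Are you sure?")
```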

So is the owner lying? Was he playing with auto-summon and screwed up? (You have to screw up royally as it drives quite slowly and any touch on the door handles or the fob will stop it.) The problem is that he claims that the car did it while he was not present, which is not supposed to happen, and if he was present, why did he not stop it?

Google develops a Chrysler minivan

If you had asked me recently what big car company was the furthest behind when it came to robocars, one likely answer would be Fiat-Chrysler. In fact, famously, Chrysler ran ads several years ago during the Super Bowl making fun of self-driving cars and Google in particular:

Now Google has announced a minor partnership with Chrysler, under which Chrysler will build 100 custom versions of its hybrid minivans for Google’s experiments. Minivans are a good choice for taxis, with their spacious seating and electric sliding doors — if you want a vehicle to come pick you up, it probably should have something like this.

This is a pretty minor partnership, something closer to a purchase order than a partnership, but it will be touted as a great deal more. My own feeling is it’s unlikely a major automaker will truly partner with a big non-auto player like Google, Uber, Baidu or Apple. Everybody is very concerned about who will own the customer and the brand, and who will be the “Foxconn” and the big tech companies have no great reason to yield on that (because they are big) and the big car companies are unlikely to yield, either. Instead, they will acquire or do deals they control with smaller companies (like the purchase of Cruise or the partnership with Lyft from GM.)

Still, what may change this is an automaker (like FCA) getting desperate. GM got desperate and spent billions. FCA may do the same. Other companies with little underway (like Honda, Peugeot, Mazda, Subaru, Suzuki) may also panic, or hope that the Tier 1 suppliers (Bosch, Delphi, Conti) will save them.

Google custom designed a car for their 3rd generation prototype, with 2 seats, no controls and an electric NEV power train. This has taught them a lot, but I bet it has also taught them that designing a car from scratch is an expensive proposition before you are ready to make many thousands of them.

The coming nightmare for the car industry

I have often written on the challenge facing existing automakers in the world of robocars. They need to learn to completely switch their way of thinking in a world of mobility on demand, and not all of them will do so. But they face serious challenges even if they are among the lucky ones who fully “get” the robocar revolution, change their DNA and make products to compete with Google and the rest of the non-car companies.

Unfortunately for the car companies, their biggest assets — their brands, their experience, their quality and their car manufacturing capacity — are no longer as valuable as they were.

Their brands are not valuable

Today if you summon a car with a company like Uber, you don’t care about what brand of car it is, as long as it’s decent. Even with the “luxury” variants of Uber, you don’t care which type of luxury car shows up, as long as it meets certain standards. For companies who have most of their value in their nameplate, this is nightmare #1. The taxi service (Uber or otherwise) becomes the brand that is seen and valued by the customer.

When you are buying a car for 5 years at the dealership, you care a lot about the brand, both for what it means, and for what it says about you when you show up driving it. When you buy a car by the ride, you don’t care a lot about the brand, because you are only going to use it for a short time.

Their brands might be tarnished

There will be accidents in Robocars, unfortunately. Those accidents will cost money, but they will also cause problems in public image. The problem is, “Mercedes runs over grandmother” is a headline that will make people less likely to buy any type of Mercedes. As such, Mercedes has plans to market self-driving car service under their Car2Go brand. You may not even know that Car2Go is Daimler, and they might like it that way. “Google car runs over grandmother” is bad news for the Google car project, but is not going to make anybody stop doing web searches with Google. (Except the grandmother…)

The non-car companies don’t have a car brand to tarnish, but they do have famous brands. They can use those brands to attract customers without the same risk. Big car companies have famous brands but may be afraid to use them.

They might just be the contract manufacturer

Companies like Uber, Google, Apple and others don’t plan to manufacture cars. Why would they? There is tons of car manufacturing capacity out there. They can just go to carmakers and say, “here’s a purchase order for 100,000 cars — built to our spec with our logo on them.” It will be very hard to turn down such an order. Still, some companies will be too proud to do this, or too unwilling to sign their own suicide note.

If they don’t accept the order, somebody else will. If nobody in the west does, somebody in China will. China is the world’s #1 car manufacturing country, but the cars are rarely exported to the west. They would love to change that.

A likely model for this is the relationship of Apple and Foxconn. Foxconn makes your iPhone, but many don’t know that. Foxconn makes good money, but Apple makes much more, designing the product and owning the customer. The car companies don’t want to be Foxconn in the world of the future, but the alternative may be to be much smaller.

(BTW, Foxconn has said it is interested in making cars.)

First-rate quality might not be that important

Chinese manufacturers don’t have the quality of the current leaders. But they may not need to. Just as Apple taught Foxconn how to make good iPhones, they might follow the same pattern here. But they don’t need to. That’s because a less reliable robocar is not the same sort of problem an unreliable personal car is. Sure, it should not break down while you are riding in it — but even then the company can quickly send you a replacement to pick you up in just a few minutes. If it breaks down otherwise, it just goes out of service. This costs the fleet manager money, but they saved a lot of money with the lower quality manufacturer. When cars can move on demand to service customers, breakdowns are not the same sort of problem. When your own car breaks down it’s a nightmare, and you will pay a lot to avoid it. For a fleet, it’s just a cost. All cars are down for maintenance some of the time. Cheaper cars will be down more, but if they are cheap enough, it still saves money.

Customer perception of quality is still important. The vehicle must maintain the level of comfort and interior quality the customer has paid for. Safety related failures are of course much less tolerable.

New car designs will be radically different

The robocar of the future will look quite different from the cars of the past. Existing car companies can handle this, but they lose some of the advantage that comes from decades of experience. The future robocars are probably electric and much simpler, with hundreds of parts rather than tens of thousands. It’s a new world and experience with the old may actually be a disadvantage. Only Nissan and Tesla have lots of electric car experience today, though GM is building it. Electric platforms are much simpler and ripe for creativity from new players.

The challenge of robotaxis for the poor

While I’m very excited about the coming robocar world, there are still many unsolved problems. One I’ve been thinking about, particularly with my recent continued thinking on transit, is how to provide robotaxi service to the poor, which is to say people without much money and without credit and reputations.

In particular, we want to avoid situations where taxi fleet operators create major barriers to riding by the poor in the form of higher fees, special burdens, or simply not accepting the poor as customers. If you look at services like Uber today, they don’t let you ride unless you have a credit card, though in some cases prepaid debit cards will work.

Today a taxi (or a bus or Uber style vehicle) has a person in it, primarily to drive, but they perform another role — they constrain the behaviour of the rider or riders. They reduce the probability that somebody might trash the vehicle or harass or be violent to another passenger.

Of course, such things happen quite rarely, but that won’t stop operators from asking, “What do we do when it does happen? How can we stop it or get the person who does it to pay for any damage?” And further they will say, “I need a way to know that in the rare event something goes wrong, you can and will pay for it.” They do this in many similar situations. The problem is not that the poor will be judged dangerous or risky. The problem is that they will be judged less accountable for things that might go wrong. Rich people will throw up in the back of cars or damage them as much as the poor, perhaps more; the difference is there is a way to make them pay for it. So while I use the word poor here, I really mean “those it is hard to hold accountable” because there is a strong connection.

As I have outlined in one of my examinations of privacy, a taxi can contain a camera with a physical shutter that is open only between riders. It can do a “before and after” photograph, mostly to spot if you left items behind, but also to spot if you’ve damaged or soiled the vehicle. Then the owner can have the vehicle go for cleaning, and send you the bill.

But they can only send you the bill if they know who you are and have a way to bill you. For the middle class and above, that’s no problem. This is the way things like Uber work — everybody is registered and has a credit card on file. This is not so easy for the poor. Many don’t have credit cards, and more to the point, they can’t show the resources to fix the damage they might do to a car, nor may they have whatever type of reputation is needed so fleet operators will trust them. The actions of a few damn the many.

The middle class don’t even need credit cards. Those of us wishing to retain our privacy could post a bond through a privacy protecting intermediary. The robotaxi company would know me only as “PrivacyProxy 12323423” and I would have an independent relationship with PrivacyProxy Inc. which would accept responsibility for any damage I do to the car, and bill me for it or take money from my bond if I’m truly anonymous.
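To picture what such an intermediary might look like, here’s a toy sketch. The names and fields are invented; no real proxy service works exactly this way:

```python
# Toy sketch of a privacy-preserving accountability proxy. The taxi
# operator only ever sees the pseudonym; the proxy holds the bond and
# may (or, for fully anonymous users, may not) know the real identity.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ProxyAccount:
    pseudonym: str            # e.g. "PrivacyProxy 12323423" -- all the fleet sees
    bond_cents: int           # money held against possible damage
    contact: Optional[str]    # None for a truly anonymous, bond-only account

accounts = {
    "PrivacyProxy 12323423": ProxyAccount("PrivacyProxy 12323423", 50_000, None),
}

def bill_damage(pseudonym: str, amount_cents: int) -> bool:
    """Fleet operator submits a damage claim; the proxy pays it from the bond."""
    acct = accounts.get(pseudonym)
    if acct is None or acct.bond_cents < amount_cents:
        return False          # claim rejected, or escalated out of band
    acct.bond_cents -= amount_cents
    return True
```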

Options for the poor

Without the proxy, robotaxi operators will want some sort of direct accountability from passengers for any problems they might cause. Even for the middle class, it mostly means being identified, so if damage is found, you can be tracked down and made to pay. The middle class have ability to pay, and credit. The poor don’t, at least many of them don’t.

People with some level of identity (an address, a job) have ways to be accountable. If the damage rises to the level where refusing to fix it is a crime at some level, fear of the justice system might work, but it’s unlikely the police are going to knock on somebody’s door for throwing up in a car.

In the future, I expect just about everybody of all income levels will have smartphones and phone plans (though prepaid plans are more common at lower income levels). One could volunteer to be accountable via the phone plan, losing the phone number if you aren’t. Indeed, it’s going to be hard to summon a car without a phone, though it will also be possible using internet terminals, kiosks and borrowed phones.

More expensive rides

A likely solution, seen already in the car rental industry, is to charge extra for insurance for those who can’t prove accountability another way. Car rental company insurance is grossly overpriced, and I never buy it because I have personal insurance and credit cards to cover such issues. Those who don’t have such coverage often have to pay this higher price.

It’s still a sad reality to imagine the poor having to pay more for rides than the rich do.

An option to mitigate this might be cars aimed at carrying those who are higher risk. These cars might be a bit more able to withstand wear and tear. Their interiors might be more like bus interiors, easily cleaned and harder to damage, rather than luxury leather which will probably be only for the wealthier. To get one, you might have to wait longer. While a middle-class customer ordering a cheap car might be sent a luxury car because that’s what’s spare at the time, it is less likely an untrusted and poor customer would get that.

Before we go too far, I predict the cost of robotaxi rides will get well below $1/mile, heading down to 30 cents/mile. Even with a 30% surcharge, that’s still cheaper than what we have today; in fact, it’s cheaper than a bus ticket in many towns, certainly cheaper than an unsubsidized bus ticket, which tends to run $5-$6. Still, my hope for robotaxi service is that it makes good transportation more available to everybody, and having it cost more for the poor is a defect.
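Plugging in the numbers from the paragraph above (my projections, not measured fares, with an assumed 8 mile trip):

```python
# Illustrative comparison using the figures quoted above.
trip_miles = 8                      # assumed typical city trip

base_rate = 0.30                    # $/mile projected robotaxi cost
surcharge = 0.30                    # 30% accountability/insurance surcharge

plain_fare = base_rate * trip_miles
surcharged_fare = plain_fare * (1 + surcharge)
unsubsidized_bus = 5.50             # midpoint of the $5-$6 figure above

print(f"Robotaxi:             ${plain_fare:.2f}")        # $2.40
print(f"Robotaxi + surcharge: ${surcharged_fare:.2f}")   # $3.12
print(f"Unsubsidized bus:     ${unsubsidized_bus:.2f}")  # $5.50
```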

In addition, as long as damage levels remain low, as a comment points out, perhaps the added cost on every ride would be small enough that you don’t need to worry about this for poor or rich. (Though having no cost to doing so does mean more spilled food, drink and, sadly, vomit.)

Reputation

Over time, fortunately, poor riders could develop reputations for treating vehicles well. Build enough reputation and you might have access to the same fleet and prices that the middle class do, or at least much cheaper insurance. Cause a problem and you might lose the reputation. It would be possible to build such a reputation anonymously, though I suspect most people and companies would prefer to tie it to identity, erasing privacy. Anonymous reputations in particular can be sold or stolen, which presents an issue. One option is to tie the reputation to a photo, but not a name. When you get in the car, it would confirm you match the photo, but would not immediately know your name. (In the future, though, police and database companies will be able to turn the photo into a name easily enough.)

Poor riders would still probably have to pay more to start, or suffer the other indignities of the lower class ride. However, a poor rider who develops a sterling reputation might be able to get some of that early surcharge back later. (Not if it’s insurance. You can’t get insurance back if you don’t use it; it doesn’t work that way!)

It could also be possible for the poor to get friends to vouch for them and give them some starter reputation.

Unfortunately, the poor who squander their reputation (or worse, just ride with friends who trash a car) could find themselves unable to travel except at a high cost they can’t afford. It could be like losing your car.

The government

The government will have an interest in making sure the poor are not left out of this mobility revolution. As such, there might be some subsidy program to help people get going, and a safety net for loss of reputation. This of course comes with a cost. Taxes would pay for the insurance to fix cars that are damaged by riders unable to be held accountable.

The alternative, after all, is needing to continue otherwise unprofitable transit services with human drivers just for the sake of these people who can’t get private robocar rides. Transit may continue (though without human drivers) at peak times, but it almost surely vanishes off-peak if not for this.

How would a robocar handle an oncoming tsunami?

Recently a reddit user posted this short video of an amazingly lucky driver in Japan who was able to turn his car around just in time to escape the torrent of the tsunami.

The question asked was, how would a robocar deal with this? It turns out there are many answers to this question. For this particular question, as you’ll see by the end, the answer is probably “very well.”

Let’s start with the bad news. On its own, built in a world where few thought about tsunamis, there is a good chance the vehicle would not handle it well. The instinct for most developers is to be conservative and cautious when facing an unknown situation. The most cautious thing is to do nothing, to just stop and perhaps ask for help from a person in the car or a remote center. Usually if you don’t understand the situation, doing something is much riskier than doing nothing. Usually — but clearly not here.

This situation might be viewed as similar to something you might expect a car to have programming for — something approaching you fast. Cars will probably have logic to deal with a car coming the wrong way down their lane, and this looks a bit like that. It’s actually stuff coming in both lanes. We can imagine the car might have logic to attempt to retreat in that situation, though this isn’t going to look too much like anything the sensors have seen before. With 3D sensors, though, it will be clear that something huge is coming fast. And with a map of what the road should look like, the car can easily tell the wall of water and debris from what it should be seeing.

The best reason the car might handle this, however, is the very existence of this video, and the posts about it — including this blog post here. The reason is that the developers of robocars, in order to test them, are busy building simulators. In these simulators they are programming every crazy situation they can think of, even impossible situations, just to see what each revision of the car software will do. They are programming every situation that their cars have encountered on the road — every situation that caused their software, or anybody else, to make an error.

In other words, if you can think of it after a little bit of thinking, they probably thought of it too. And if it’s in blog posts and famous news stories, they probably heard about it. Flooding and every kind of strange weather ever reported. The details of every accident from every police report that can be turned into a simulation. Earthquakes. Tornadoes. Hurricanes. Alien invasions. Oncoming tanks. If you can think of it without a major effort, and it seems like it could happen, they will put it in. And so every car will indeed be tested. In fact, the developers will probably have fun with the really strange situations which are so rare that they may not have commercial or safety justification, but still are interesting. Scenes from movies. James Bond car chases. You name it.
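In spirit, that testing loop looks something like the sketch below. The scenario names and the tiny “policy” interface are invented; real simulators model full physics and sensor data:

```python
# Toy regression harness: every recorded or imagined scenario is replayed
# against each new software revision, and any regression blocks release.

SCENARIO_LIBRARY = [
    "wrong_way_driver_on_highway",
    "flooded_road",
    "tsunami_wall_of_water",     # added after seeing a video like the one above
    "tornado_debris",
    "oncoming_tank",
]

def simulate(scenario: str, policy) -> bool:
    """Run one scenario; return True if the policy avoided a collision.
    (Stubbed out here -- a real simulator models physics and sensors.)"""
    return policy(scenario) in ("retreat", "stop", "reroute")

def cautious_policy(scenario: str) -> str:
    # Placeholder policy: retreat from anything approaching at speed, else stop.
    if "wall_of_water" in scenario or "wrong_way" in scenario:
        return "retreat"
    return "stop"

failures = [s for s in SCENARIO_LIBRARY if not simulate(s, cautious_policy)]
print("Regressions:", failures or "none")
```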

In this particular case, there is another thing to help with this situation. Tsunamis don’t happen by surprise, not any more. The world, having seen them like this, now has earthquake detection and tsunami warning everywhere robocars are likely to go in the near future. The warnings will be transmitted along the same data stream warning cars about traffic, weather and road conditions. We have maps of the terrain and can predict which areas are low and which areas cars should head to in the event of a tsunami warning, and they will take routes designed to avoid risk. With superhuman knowledge, they will not panic and will do much better than people at taking the route to high ground, and so the odds of them confronting the wall of water would be very slim, unless there was no choice. The robocar simply would not have been going down that road the way the Japanese driver was.

Now we get to a final special ability of robocars — they will be just as capable in reverse gear as they are going forward, other than due to the speed limitations of reverse gear. So while you reverse timidly, a robocar need not do so. It will be able to pull off the fastest 3 point turn you can imagine if it wants to, or even just escape in reverse. Of course if it needs more speed than reverse offers, it would turn around in the best spot to do so. Stanford has even done a lot of research on drifting, and this will go into simulators too, so cars will probably know how to turn around as fast as a stunt driver if they have to. Electric cars may be able to go as fast in reverse as they can going forward to top it all off. (I should note that not all car designs feature sensors that see the same forward and back, so this may not be true for all vehicles, but all vehicles that can reverse at all need not be timid about it the way people are.)

So for this situation, and anything else we know about, robocars should do a superhuman job. That doesn’t mean there aren’t things nobody ever thought of. But the more videos and stories like this that get recorded, the less and less probable unknown events will be, and thus an unknown event where the software does the wrong thing becomes not impossible, but very low probability.

What is the optimum group vehicle size?

My recent article on a future vision for public transit drew some ire from those who viewed it as anti-transit. Instead, the article broke with transit orthodoxy by suggesting that smaller vehicles (including cars and single person pods) might produce more efficient transit than big vehicles. Transitophiles love big vehicles for reasons beyond their potential efficiency, so it’s a hard sell.

Let’s look at the factors which determine what vehicle size makes the best transit.

Before the robocar future arrives, vehicle size is partly dominated by the need for drivers. Consider a bus route which could have one 40 person bus every 30 minutes or a 20 person bus every 15 minutes. The smaller vehicles have the same total capacity, but they will use a little more energy, a little more road space and cost somewhat more to buy. This leads to the intuition that bigger must be better.

At the same time the smaller vehicles need twice as many drivers. Labour is more than half the operating budget of many transit agencies. Look at the Chicago Transit Authority and you see labour listed as 69% — and much labour is actually in other subcontractor categories — while fuel and electricity are only 7% — the capital costs like vehicles are not even included here. Needing twice the drivers dominates the equation.
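A rough sketch of why the driver term dominates, using the CTA labour share above and an assumed all-in cost per bus-hour (the split of the non-labour costs is also assumed):

```python
# Compare one 40-seat bus every 30 min vs. two 20-seat buses every 15 min
# on a route, using the labour-dominated cost structure described above.
# Numbers are illustrative, not a real agency budget.

hourly_cost_large_bus = 100.0          # assumed all-in cost per bus-hour
labour_share = 0.69                    # CTA's labour share of operating cost

labour = hourly_cost_large_bus * labour_share
other = hourly_cost_large_bus - labour

# A 20-seat bus needs the same driver but roughly half the fuel/vehicle cost.
hourly_cost_small_bus = labour + other / 2

one_big = hourly_cost_large_bus        # 1 vehicle-hour per route-hour
two_small = 2 * hourly_cost_small_bus  # 2 vehicle-hours per route-hour

print(f"One 40-seat bus / 30 min:   ${one_big:.0f} per route-hour")    # $100
print(f"Two 20-seat buses / 15 min: ${two_small:.0f} per route-hour")  # ~$169
```

With a driver in every vehicle, doubling frequency costs nearly 70% more; remove the driver term and the gap shrinks dramatically.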

Riders, of course, would have an easy time deciding: they would love having vehicles every 15 minutes! Indeed they would be very pleased to get a 7 person van every 5 minutes if they could; the difference would be qualitative, not just quantitative, because when you get to that frequency you start thinking about it more like a car. In addition, the 2 small vehicles do about 1/8th the damage to the road as the one large vehicle.

Dolmuş

Taking the cost of drivers out, what is the optimum size? More to the point, what provides the optimum balance between rider demand (which would love more frequent service in smaller vehicles) and efficiency (which pushes for larger vehicles, up to a point)? In particular, more, smaller vehicles does not just have to mean more frequent service on one route; it can also mean more routes. More routes can mean both getting places you could not get to before, and also getting there faster because you don’t need as many transfers.

Here’s where big vehicles are better:

  • When near full, or overfull, they use:
    • Less energy per passenger-mile
    • Less road space per passenger
    • Less vehicle cost (depreciation, maintenance etc.) per passenger
  • Less frequent service forces people to bunch their travel together with others, allowing the advantages above.
  • Fewer stops also forces people to bunch together, to live near transit and to walk more.

Here are some of the advantages of more, smaller vehicles

  • As noted, road damage is roughly as the 4th power of vehicle weight per axle.
  • More frequent and/or ubiquitous service as described above
  • Less likely to be lightly loaded (smaller vehicle is sent when demand is light.)
  • When lightly loaded, much more efficient in all factors than large vehicle
  • While the whole fleet takes more total road space than the large vehicles, each vehicle causes much less obstruction of traffic.
  • Able to use smaller bus-stops and navigate tighter turns and narrower roads.
  • Able to park in smaller spaces including many lots for cars (though still taking as much or slightly more total space.)
  • Stops are sometimes fewer, and take less time (fewer people getting on/off any given vehicle.)
  • Each vehicle is considerably less expensive.

The big trade-off comes because the load varies. The full 40 person bus is an efficiency and cost win over two full 20 person buses (or 10 full 4 person cars), but not as much of a win as you might imagine. But the real question involves the frequent issue of a half-full 40 person bus vs. a full 20 person bus. In this case, the smaller vehicle is quite a bit more efficient. Even worse is the 1/4 full 40 person bus vs. the half full 20 person bus or three 4-person cars. Here the winner is probably the cars, and this is important, because the average bus in the USA actually has just under 10 people on it.
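Here is a small sketch of that comparison. The per-vehicle energy figures are placeholders chosen only to show the shape of the trade-off, not measurements:

```python
# Energy per passenger-mile for the load cases discussed above.
# Per-vehicle figures (kWh-equivalent per mile) are assumed, not measured.

VEHICLE_KWH_PER_MILE = {
    "40-seat bus": 4.0,
    "20-seat bus": 2.5,
    "4-seat car":  0.5,
}

def per_passenger_mile(vehicle_kwh, passengers):
    return vehicle_kwh / passengers

cases = [
    ("Full 40-seat bus",             VEHICLE_KWH_PER_MILE["40-seat bus"], 40),
    ("Half-full 40-seat bus",        VEHICLE_KWH_PER_MILE["40-seat bus"], 20),
    ("Full 20-seat bus",             VEHICLE_KWH_PER_MILE["20-seat bus"], 20),
    ("Quarter-full 40-seat bus",     VEHICLE_KWH_PER_MILE["40-seat bus"], 10),
    ("Half-full 20-seat bus",        VEHICLE_KWH_PER_MILE["20-seat bus"], 10),
    ("Three 4-seat cars, 10 riders", 3 * VEHICLE_KWH_PER_MILE["4-seat car"], 10),
]

for name, kwh, riders in cases:
    print(f"{name:32s} {per_passenger_mile(kwh, riders):.3f} kWh/passenger-mile")
```

With these placeholder numbers, the full big bus wins, the half-full big bus loses to the full small bus, and at 10 riders the cars come out ahead — the same ordering argued above.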

The ideal situation would be to send out a fleet of 40 or even 60 person buses at the peak of rush hour, then put those in garages, and send out small buses during the off-peak times and just cars in the off-off-peak times like the night. Have every vehicle run as close to full as possible and you get your greatest efficiency. This is not an option for a few reasons:

  • To do that with buses, you must lower frequency to keep them full, and riders will reject that
  • Agencies usually can’t afford huge fleets of large vehicles as well as huge fleets of medium vehicles just to keep the large vehicles idle for most of the day. They are better off choosing one fleet size and accepting a loss of efficiency.

In the robocar world, they will be able to call upon a large fleet of small vehicles (cars for 1-4 people) at all times, and they won’t need to own them. But the transit companies and agencies still must own the larger (8 to 60 person) vehicles.

In some cities, it may be practical to keep a fleet of large vehicles for use only at rush hour. In fact, that’s what some commuter train lines use, and they are the most efficient transportation lines in the USA. The rush-hour-only commuter trains run full out to the suburbs, spend the night in the suburbs and run full back into town. That’s really efficient. The commuter trains with daytime service are not nearly as good. Train lines that can drop cars off-peak get a win here as well.

How practical it is depends on how long you need the big bus to last. Transit vehicles tend to be robust, heavy and expensive, and they are well maintained to maximize their lifetime. A bus that only works rush hour will last more years than one that works all day. The problem is it may last too many years, to the point that it becomes obsolete or wears out from time rather than just miles. Leaving vehicles idle also means tying up capital for longer, so even if you find a good schedule for depreciation of the vehicles, the cost of money makes it difficult to have two or three different fleets.

So in the end, cities have to choose. Because of the labour cost of drivers, they almost always choose the bigger vehicles. Without that cost, the advantages of the smaller vehicles win out because of the variability of load. If the line regularly runs low-load vehicles, it has chosen a size that is larger than optimal.

This is all general analysis. The next step I would like to see from the transportation research community is to build these models with the actual numbers from real transit systems. For each city, for each route, the optimal size will be different. And of course, the existence of the robocars will change demand, which also changes load. They can change demand down (by being a superior solution) or up (by making it easier to get to the shared vehicle.) They can also replace the big vehicles entirely at off-peak times. That sounds like competition, but it actually can be enabling. One reason transit agencies run their big vehicles all day long (erasing their efficiency) is that riders want assurance they can come in at rush hour and then decide to leave early or late. Thus there has to be off-peak service. If riders can be assured that something else (like a robotic taxi or even an Uber) can get them home inexpensively off-peak, they are more willing to take the transit in.

Indeed, it could make sense for transit agencies to say, “we will have low service after 8pm, but if you can show you rode with us in the morning, we will subsidize a private car for you after hours 10 times a month.” They might actually save money by offering this rather than running a mostly empty bus.

comma.ai's neural network car and the hot new technology in robocars

Perhaps the most exciting new technology in the world today is the deep neural network, in particular the convolutional neural networks used in “deep learning.” These networks are conquering some of the most well known problems in artificial intelligence and pattern matching, and since their development just a few years ago, milestones in AI have been falling as computer systems match or surpass human capability. Playing Go is just the most recent famous example.

This is particularly true in image recognition. Over the past several years, neural network systems have gotten better than humans at problems like recognizing street signs in camera images and even beating radiologists at identifying cancers in medical scans.

These networks are having their effect on robocar development. They are allowing significant progress in the use of vision systems for robotics and driving, making that progress much faster than expected. Two years ago, I declared that the time when vision systems would be good enough to build a safe robocar without LIDAR was still fairly far away. That day has not yet arrived, but it is definitely closer, and it’s much harder to say it won’t be soon. At the same time, LIDAR and other sensors are improving and dropping in price. Quanergy (to whom I am an advisor) plans to ship $250 8-line LIDARs this year, and $100 high resolution LIDARs in the next couple of years.

The deep neural networks are a primary tool of MobilEye, the Jerusalem company which makes camera systems and machine-vision ASICs for the ADAS (Advanced Driver Assistance Systems) market. This is the chip used in Tesla’s autopilot; Tesla claims it has done a great deal of its own custom development, while MobilEye claims the important magic sauce is still mostly theirs. NVIDIA has made a big push into the robocar market by promoting their high end GPUs as the supercomputing tool cars will need to run these networks well. The two companies disagree, of course, on whether GPUs or ASICs are the best tool for this — more on that later.

In comes comma.ai

In February, I rode in an experimental car that took this idea to the extreme. The small startup comma.ai, led by iPhone hacker George Hotz, got some press by building, in a short amount of time, an autopilot similar in capability to many others from car companies. In January, I wrote an introduction to their approach, including how they used quick hacking of the car’s network bus to simplify having the computer control the car. They did it with CNNs, and almost entirely with CNNs. Their car feeds the images from a camera into the network, and out from the network come commands to adjust the steering and speed to keep the car in its lane. As such, there is very little traditional code in the system, just the neural network and a bit of control logic.

Here’s a video of the car taking us for a drive:

The network is not hand-coded; instead it is built by training it. They drive the car around, and the car learns from the humans driving it what to do when it sees things in the field of view. To help in this training, they also give the car a LIDAR, which provides an accurate 3D scan of the environment to more absolutely detect the presence of cars and other users of the road. By letting the network know during training that “there is really something there at these coordinates,” the network can learn how to tell the same thing from just the camera images. When it is time to drive, the network does not get the LIDAR data; however, it does produce outputs of where it thinks the other cars are, allowing developers to test how well it is seeing things.
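For flavor, here is a minimal PyTorch-style sketch of the idea: a small convolutional network that takes a camera frame and outputs a steering command, plus an auxiliary head for object positions that could be supervised against the LIDAR during training. This is an illustration of the general approach, not comma.ai’s actual architecture or code:

```python
# Minimal sketch (assumes PyTorch). Camera image in, steering angle out,
# plus an auxiliary "where are the other cars" head trained against LIDAR.

import torch
import torch.nn as nn

class DrivingNet(nn.Module):
    def __init__(self, max_objects=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),
        )
        self.steering = nn.Linear(48 * 4 * 4, 1)                # steering angle
        self.objects = nn.Linear(48 * 4 * 4, max_objects * 2)   # (x, y) per object

    def forward(self, image):
        h = self.features(image)
        return self.steering(h), self.objects(h)

net = DrivingNet()
frame = torch.randn(1, 3, 160, 320)          # one fake camera frame
steer, objects = net(frame)

# Training (not shown) would minimize something like:
#   loss = mse(steer, human_steering) + mse(objects, lidar_object_positions)
# At drive time only the camera frame is fed in; the LIDAR is absent.
```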

This approach is both interesting and frightening. This allows the development of a credible autopilot, but at the same time, the developers have minimal information about how it works, and never can truly understand why it is making the decisions it does. If it makes an error, they will generally not know why it made the error, though they can give it more training data until it no longer makes the error. (They can also replay all other scenarios for which they have recorded data to make sure no new errors are made with the new training data.)

Everybody should have RAID and a filesystem to manage it

For many years, I have been using RAID for my home storage. With RAID (and its cousins) everything is stored redundantly so that if any disk drive fails, you don’t lose your data, and in fact your system doesn’t even go down. This can come at a cost of anywhere from about 25% to 50% of your disk space (but disk is cheap) and it also often increases disk performance. Some years ago I wrote about how disk drives should be sold in form factors designed for easy RAID in every PC, and I still believe that.

RAID comes with a few costs. One of them is that you need to do too much sysadmin to get it working right. The nastiest cost is there are some edge cases where RAID can cause you to lose all your data where you would not have lost it (or all of it) if you had not used RAID. That’s bad — it should never make things worse.

A few years ago I switched to one of the new filesystems which put the RAID-like functionality right into the filesystem, instead of putting it into a layer underneath. I think that’s the right thing, and in fact, fear of layer violations is generally a mistake here. I am using BTRFS. Others use ZFS and a few other players. BTRFS is new, and so its support for RAID-5 (which only costs 25-33% of your space and is fast) is too young, so I use its RAID-1, where everything is just written twice onto two different disks. Unlike traditional RAID, BTRFS will do RAID-1 on more than 2 drives, and they don’t have to be all of equal size. That’s good, though I ran into some problems with the fairly common operation of increasing the size of my storage by replacing my smallest drive with a much larger one.

The long term goal of such systems should be near-trivial sysadmin. The system should handle all drives and partitions thrown at it in a “just works” way. You give it any amount of drives and it figures out the best thing to do, and adapts as you change. You should only need to tell it a few policies, such as how much need you have for reliability and speed and how much space you are willing to pay for it. The systems should never put you at more risk than you ask for, or more risk than you would have had with having just one drive or a set of non-redundant drives. That’s hard, but it is a worthwhile goal.

But I think we could do more, and we could do it in a way that we get better and better storage with less sysadmin.

Multiple drives, but not too many

I think most users will probably stick to 2 drives, and rarely go above 3. The reality is that 4 or more is for servers and heavy users, because each drive takes power and generates heat. However, adding an SSD to the mix is always a good idea, though it’s not for redundancy.

The OS should understand what’s happening and reflect it in the filesystem

The truth is not all files need as much redundancy and speed. The OS can know a lot about that and identify:

  • Files that are accessed frequently vs. ones not accessed much, or for a long time
  • Files that are accessed by interactive applications which cause those applications to be IO bound. (ie. slowed by waiting for the disk.)
  • Files that have been backed up in particular ways, and when.

Your OS should start by storing everything redundantly (RAID 1 or 5) until such time as the disk starts getting close to full. When that happens, it should of course alert you that it is time to upgrade your drives or add another. But it can also offer another option, which you can explicitly ask for, namely reducing the redundancy on files which are rarely accessed, have not been used for a while, and have been backed up.

It turns out, that’s often a lot of the files on a disk. In particular, the thing that uses up most of the disk space for the ordinary user is their collection of photos and videos. Other than the few that get regular access, there is no actual need for RAID level redundancy on these images. If their own drive is lost, there is a backup where you can get them. They aren’t needed for regular system operation.
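A sketch of that policy might look like the following. The thresholds and the `set_redundancy` hook are invented; no current filesystem exposes exactly this interface:

```python
# Illustrative policy: when the pool gets close to full, drop large,
# rarely-used, already-backed-up files (e.g. old photos) from RAID-style
# redundancy to a single copy. The filesystem hook is hypothetical.

import os
import time

COLD_AGE_DAYS = 365
MIN_SIZE_BYTES = 5 * 1024 * 1024

def is_cold(path, backed_up):
    """A file is 'cold' if it's backed up, big, and untouched for a year."""
    st = os.stat(path)
    age_days = (time.time() - st.st_atime) / 86400
    return backed_up(path) and age_days > COLD_AGE_DAYS and st.st_size > MIN_SIZE_BYTES

def reduce_redundancy(root, backed_up, set_redundancy):
    """set_redundancy(path, copies) is the imagined filesystem call."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if is_cold(path, backed_up):
                set_redundancy(path, copies=1)   # keep only one local copy
```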

The systems already know what files belong to the OS, and can keep them redundant, though most home users are not looking for 100% uptime, they really only want 100% data safety.

To do this right, programs need to tell the OS why they are accessing files. Your photo organizer possibly scans your photo collection regularly, but this scan doesn’t make the files crucial to the system. My goal is not to have users designate these things, though that is one option. Ideally the system should figure it out.

The system can also take the most important files, the ones that cause the system to block, and make sure they are both redundantly stored and found on SSD.

Easier backup

Backup needs to be easy and automatic. When systems boot up, they should offer to do backup for others who are nearby and semi-nearby, and then they should trade backup space. My system should offer space to others, and make use of their space for either general backup (if in the same house/company/LAN) and offsite backup (remote but with good bandwidth.) Of course, ISPs and other providers can also provide this space for money.

The key thing is this should happen with almost no setup by the user. One problem for me is that I can come back from a trip with 50GB of new photos, and they would clog my upstream for remote backup. The system should understand what files have priority, and if the backlog gets too large, request that I plug in an external USB drive to offer a backup until the backlog can be cleared. Otherwise I should not have to deal with it. Of course, the backup I offer others does not need RAID redundancy. Instead, I should be queried regularly to prove I still have the backups, and if not, the person I am backing up should seek another place.
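The “prove you still have my backup” check can be as simple as a salted-hash challenge. Here is a minimal sketch; the protocol is simplified and invented for illustration, and it assumes I keep the original (or precomputed answers):

```python
# I send a fresh random nonce; the peer must hash its stored copy with it.
# Only someone actually holding the bytes can answer correctly, and the
# answer can't be cached because the nonce changes every time.

import hashlib
import os

def proof(backup_bytes: bytes, nonce: bytes) -> str:
    """Answer to a storage challenge: hash of nonce + the stored bytes."""
    return hashlib.sha256(nonce + backup_bytes).hexdigest()

backup = b"...contents of photos-2016.tar.gz..."   # stand-in for the real file

nonce = os.urandom(16)
expected = proof(backup, nonce)        # I compute this from my original
peer_answer = proof(backup, nonce)     # the peer computes it from its copy

if peer_answer != expected:
    print("Peer failed the check -- find another place for this backup")
```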

Encryption

Of course all remote backup must be encrypted by me. In fact, all disks should be encrypted, but too much desire for security can cause risk of losing all your data. Systems must understand the reduced threat model of the ordinary user and make sure keys are backed up in enough places that the chances of losing them are nil, even if it increases the chance that the NSA might get the keys. This is actually pretty hard. The typical “What was your pet’s name” pseudo security questions are not strong enough, but going stronger makes it more likely there can be key loss. Proposals such as my friendscrow can work if the system knows your social network. They have the advantage that there is zero UI to escrowing the key, and a lot of work to recover it. This is the ideal model because if there is ZUI on storing it, you are sure it will be stored. Nobody minds extra work if they have lost all the normal paths to getting their key.
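As a toy illustration of key escrow in that spirit, here is an “all-of-n” secret split: no single friend learns anything about the key, but recovering it requires every share back. A real scheme along the lines of friendscrow would want a k-of-n threshold instead, so that losing touch with one friend isn’t fatal:

```python
# Toy "all-of-n" secret split: XOR the key with random pads and hand one
# piece to each friend. Any subset short of all shares reveals nothing.

import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list:
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, key))   # last share completes the XOR
    return shares

def recover_key(shares: list) -> bytes:
    return reduce(xor_bytes, shares)

key = os.urandom(32)            # the disk encryption key
shares = split_key(key, 4)      # one piece per friend
assert recover_key(shares) == key
```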

The future of transit is self-driving medium sized vehicles with no fixed routes or schedules

Most of our focus these days is on self-driving personal cars. In spite of that focus, the effects on mass transit will also be quite dramatic, in ways far beyond taking the driver out of the bus. Indeed, for various reasons, I believe traditional approaches to mass transit (large vehicles on fixed routes and schedules, sometimes with private right-of-way) will be obsoleted by robocar technology, and that the result will be almost 100% good — transportation that is better, faster, more convenient and even more sustainable. (The latter shocks people, who think that anything with small vehicles is inherently less energy efficient.)

I have a new special article on Robocars.com outlining potential visions for the future of transit, and what they might mean. The vision is a work in progress, but I invite debate.

Click for The Future of Mass Transit

The math says we probably make a lot more robocars -- maybe

I frequently see people claim that one effect of robocars is that because we’ll share the cars (when they work as taxis) and most cars stay idle 95% of the time, that a lot fewer cars will be made — which is good news for everybody but the car industry. I did some analysis of why that’s not necessarily true and recent analysis shows the problem to be even more complex than I first laid out.

To summarize, in a world of robotic taxis, just like today’s taxis, they don’t wear out by the year any more, they wear out by the mile (or km.) Taxis in New York last about 5 years and about 250,000 miles, for example. Once cars wear out by the mile, the number of cars you need to build per year is equal to:

Cars built per year = (Total vehicle-miles per year) ÷ (Average car lifetime in miles)

As you can see, the simple equation does not involve how many people share the vehicle at all! As long as the car is used enough that the car isn’t junked before it wears out from miles, nothing changes. It’s never that simple, however, and some new factors come into play. The actual model is very complex with a lot of parameters — we don’t know enough to make a good prediction.
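As a back-of-envelope with that formula: the total vehicle-mile figure below is a rough approximation for the USA (an assumption, not a measurement), and the lifetime uses the New York taxi number above.

    total_vehicle_miles_per_year = 3.2e12   # approx. annual US vehicle-miles (assumption)
    avg_car_lifetime_miles = 250_000        # roughly what a hard-worked taxi lasts

    cars_built_per_year = total_vehicle_miles_per_year / avg_car_lifetime_miles
    print(f"~{cars_built_per_year / 1e6:.1f} million cars per year")   # ~12.8 million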

People travel more in cars

It’s likely that the number of miles people want to travel goes up for a variety of reasons. Robocars make car travel much more pleasant and convenient. Some people might decide to live further from work now that they can work, read, socialize or even sleep on the commute. They might make all sorts of trips more often. Outside of rush hour, they might also be more likely to switch from other modes, such as public transit, and even flying. Consider two places about a 5 hour drive apart — today flying is going to take just under 3 hours due to all the hassles we’ve added to flying, even with the improvements robocars make to those hassles. Many might prefer an uninterrupted car ride where they can work, watch videos or sleep.

Vehicles run empty to reposition

Regular taxis have wasted miles between rides. Indeed, a New York taxi has no passenger 38% of the time. Fortunately, robocars will be a lot more efficient than that, since they don’t need to cruise around looking for rides. Research suggests a more modest 10% “empty mile” cost, but this will vary from situation to situation. If you need the robotaxi fleet to constantly run empty in the reverse commute direction, it could get worse. Among those who believe robocars will be more personally owned than used as taxis, we often see a story painted of how a household has a car that takes one person to work, and returns home empty to take the 2nd person, and then returns again to take others on daytime errands. This is possible, but pretty inefficient. I think it’s far more likely that in the long term, such families will just use other taxi services rather than have their car return home to serve another family member.

Cars last longer

The bottom part of the equation is likely to increase, which reduces the number of cars made. Today, cars are engineered for their expected life-cycle — 19 years and 190,000 miles in California, for example. Once you know your car is going to have a high duty cycle, you change how you engineer it. In particular, you combine engineering of parts for your new desired life cycle with specific replacement schedules for things that will wear out sooner. You want to avoid junking a car with lots of life in the engine just because the seats are worn out, so you make it easy to replace the seats, and you have the car bring itself to a service center where that’s fast and easy.
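A toy version of that more complex model, combining the three effects just discussed. Every multiplier here is an assumption chosen purely for illustration; depending on which values you believe, the answer lands above or below today’s production, which is exactly the point.

    base_vehicle_miles = 3.2e12     # today's vehicle-miles (same assumption as above)
    induced_travel = 1.2            # people travel 20% more when riding is pleasant (assumption)
    empty_repositioning = 1.1       # ~10% empty miles, the research figure cited above
    lifetime_miles = 500_000        # cars engineered for a higher-duty life (assumption)

    cars_built_per_year = (base_vehicle_miles * induced_travel * empty_repositioning
                           / lifetime_miles)
    print(f"~{cars_built_per_year / 1e6:.1f} million cars per year")   # ~8.4 million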

GM buys "Cruise" for $1B

General Motors has purchased “Cruise,” a small self-driving startup in San Francisco. Rumours suggest the price was over one billion dollars. In addition, other rumours have come to me suggesting that at least one other startup has been seeking a new round of funding at that valuation, but did not succeed due to the market downturn.

I gave Cruise some small assistance when they were getting started, and wrote about them when they showed off their first prototype. Since then, Cruise, as expected, moved away from highway autopilot retrofit into making a proper robocar, and their test Leaf has been running around SF with four Velodyne LIDARs and other sensors for a while.

Even in my wildest dreams, I did not imagine startup valuations this high, this soon. (Time to get my own startup going.) Let’s consider why:

First, GM, as the world’s 2nd largest car company, is way behind on robocars. They were one of the first companies to announce a highway autopilot (called, ironically, “Super Cruise”) for the 2014 Cadillac. However, they quickly pulled back on that announcement, and for the last few years have continued to delay it, recently announcing it would not even appear in the 2017 car, even though Mercedes, Tesla and several other companies had products like that.

GM’s main academic partner was CMU. They sponsored Boss, the CMU team that won the Darpa Urban Challenge, headed by Chris Urmson (who now leads the Google car project.) Recently, Uber moved into Pittsburgh in a big way and poached many of the top people from CMU for their project. This left GM with very little, a poor position for the world’s 2nd largest car company.

Next, we have Kyle Vogt, founder of Cruise. Kyle was on the founding team for justin.tv, and also for Twitch, which had a billion dollar acquisition — in other words, Kyle is not precisely hurting for money. He has not confirmed this to me, but I suspect when GM showed up at his door, he was not interested in joining a big car company, and his resources meant he was not in any hurry. I then presume GM took that as negotiation and bumped the price to where you would have to be crazy to say no.

GM will let Cruise be independent, at least for now. That’s the only sane path. We’ll see where this goes.

Bloomberg (or another moderate) could have walked away with the Presidency due to Trump

Michael Bloomberg, a contender for an independent run for US President, has announced he will not run, though for a reason that just might be completely wrong. As a famous moderate (having been in both the Republican and Democratic parties) he might just have had a very rare shot at being the first independent to win since forever.

Here’s why, and what would have to happen:

  1. Donald Trump would have to win the Republican nomination. (I suspect he won’t, but it’s certainly possible.)
  2. The independent would have to win enough electoral votes to prevent either the Republican or the Democrat from getting 270 (a majority of the electoral college).

If nobody has a majority of the electoral college, the House picks the President from the top 3 electoral-vote winners. The House is Republican, so it seems pretty unlikely it would pick any likely Democratic Party nominee, and the Democrats would know this. Once they did know this, the Democrats would have little choice but to vote for the moderate, since they certainly would not vote for Trump.

Now all it takes is a fairly small number of Republicans to bolt from Trump. (Note that the House votes for President by state delegation, one vote per state, so what matters is tipping a majority of the delegations.) Normally they would not betray their own party’s official nominee, but in this case the party establishment hates Trump, and I think that some of them would take the opportunity to knock him out and vote for the moderate. If enough Republicans join the Democrats to give the moderate 26 state delegations, he or she becomes President.

It would be different for the Vice President, chosen by the Senate. Trump probably picks a mainstream Republican to mollify the party establishment, and that person wins the Senate vote easily.

To be clear, here the independent can win even if all they do is make a small showing, just strong enough to split off some electors from both other candidates. Winning one big state could be enough, for example, if it was won from the candidate who would otherwise have won.

Google's crash is a very positive sign

Reports released reveal that one of Google’s Gen-2 vehicles (the Lexus) had a fender-bender (with a bus) with some responsibility assigned to the system. This is the first crash of this type — all other impacts have been reported as fairly clearly the fault of the other driver.

This crash ties into an upcoming article I will be writing about driving in places where everybody violates the rules. I just landed from a trip to India, which is one of the strongest examples of this sort of road system, far more chaotic than California, but it got me thinking a bit more about the problems.

Google is thinking about them too. Google reports it just recently started experimenting with new behaviours, in this case when making a right turn on a red light off a major street where the right lane is extra wide. In that situation it has become common behaviour for cars to effectively create two lanes out of one, with a straight-through group on the left, and right turners hugging the curb. The vehicle code would have there be only one lane, and the first person not turning would block everybody turning right, who would find it quite annoying. (In India, the lane markers are barely suggestions, and drivers, in vehicles of every width you can imagine, dynamically form their own patterns as needed.)

As such, Google wanted their car to be a good citizen and hug the right curb when doing a right turn. So they did, but found the way blocked by sandbags on a storm drain. So they had to “merge” back with the traffic in the left side of the lane. They did this when a bus was coming up on the left, and they made the assumption, as many would make, that the bus would yield and slow a bit to let them in. The bus did not, and the Google car hit it, but at very low speed. The Google car could have probably solved this with faster reflexes and a better read of the bus’ intent, and probably will in time, but more interesting is the question of what you expect of other drivers. The law doesn’t imagine this split lane or this “merge,” and of course the law doesn’t require people to slow down to let you in.

But driving in so many cities requires constantly expecting the other guy to slow down and let you in. (In places like Indonesia, the rules actually give the right-of-way to the guy who cuts you off, because you can see him and he can’t easily see you, so it’s your job to slow. Of course, robocars see in 360 degrees, so no car has a better view of the situation.)

While some people like to imagine that important ethical questions for robocars revolve around choosing who to kill in an accident, that’s actually an extremely rare event. The real ethical issues revolve around this issue of how to drive when driving involves routinely breaking the law — not once in a 100 lifetimes, but once every minute. Or once every second, as is the case in India. To solve this problem, we must come up with a resolution, and we must eventually get the law to accept it the same way it accepts it for all the humans out there, who are almost never ticketed for these infractions.

So why is this a good thing? Because Google is starting to work on problems like these, and you need to solve these problems to drive even in orderly places like California. And yes, you are going to have some mistakes, and some dings, on the way there, and that’s a good thing, not a bad thing. Mistakes in negotiating who yields to who are very unlikely to involve injury, as long as you don’t involve things smaller than cars (such as pedestrians.) Robocars will need to not always yield in a game of chicken or they can’t survive on the roads.

In this case, Google says it learned that big vehicles are much less likely to yield. In addition, it sounds like the vehicle’s confusion over the sandbags probably made the bus driver decide the vehicle was stuck. It’s still unclear to me why the car wasn’t able to abort its merge when it saw the bus was not going to yield, since the description has the car sideswiping the bus, not the other way around.

Nobody wants accidents — and some will play this accident as more than it is — but neither do we want so much caution that we never learn these lessons.

It’s also a good reminder that even Google, though it is the clear leader in the space, still has lots of work to do. A lot of people I talk to imagine that the tech problems have all been solved and all that’s left is getting legal and public acceptance. There is great progress being made, but nobody should expect these cars to be perfect today. That’s why they run with safety drivers, and did even before the law demanded it. This time the safety driver also decided the bus would yield and so let the car try its merge. But expect more of this as time goes forward. Their current record is not as good as a human, though I would be curious what the accident rate is for student drivers overseen by a driving instructor, which is roughly parallel to the safety driver approach. This is Google’s first caused accident in around 1.5M miles.

It’s worth noting that sometimes humans solve this problem by making eye contact, to know if the other car has seen you. Turns out that robots can do that as well, because the human eye flashes brightly in the red and infrared when looking directly at you — the “red eye” effect of small flash cameras. And there are ways that cars could signal to other drivers, “I see you too” but in reality any robocar should always be seeing all other parties on the road, and this would just be a comfort signal. A little harder to read would be gestures which show intent, like nodding, or waving. These can be seen, though not as easily with LIDAR. It’s better not to need them.

Uber, Lyft and crew should replace public transit at night

I have a big article forthcoming on the future of public transit. I believe that with the robocar (and van) it moves from being scheduled, route-based mass transit to on-demand, ad-hoc route medium and small vehicle transit. That’s in part because of the disturbingly poor economics of current mass transit, especially in the USA. We can do much better.

However, long before that day, there is something else that could be done. Many mass transit systems shut down at night. Demand is low, and that creates a big burden for the “night people” of the world, who are left with taxis and occasional carpooling, or more limited night bus service.

I think transit agencies should make a deal with companies like Uber to operate their carpool services (UberPool and Lyft Line) during transit closure hours, and subsidize the rides to bring them down equal to, or closer to, a transit ticket. This could also be the case for other seriously off-peak times, like weekends and holidays.

Already the typical transit ticket in the USA is heavily subsidized. The real cost of providing a transit ride is much higher. In the transit-heavy cities, fares pay about 50-60% of operating cost, but in some cities it’s only 15-20%. The US national average is around 33%. And that’s just operating cost; it does not include the capital costs in many cases. One thing that pushes the number the wrong way is operation during off-peak hours on lightly loaded vehicles. So while the average ride may cost $6 to provide, it can be more at night. Already the mobile-summoned carpools are close to that price. (With promotions they have actually gone lower, though they are also subsidizing rides to build the market.)
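A quick back-of-envelope of the subsidy idea, using the figures quoted above; the ticket price and the late-night shared-ride price are assumptions for illustration.

    cost_to_provide_ride = 6.00    # the average cost cited above (can be higher at night)
    transit_fare = 2.00            # assumed typical ticket price
    current_subsidy = cost_to_provide_ride - transit_fare   # what the agency pays today

    night_carpool_price = 6.00     # assumed late-night shared-ride price
    rider_pays = night_carpool_price - current_subsidy
    print(f"Rider pays ${rider_pays:.2f} with the same ${current_subsidy:.2f} subsidy")

Under those assumptions the rider pays about the same as a transit ticket, while the agency’s per-ride subsidy is no larger than for the bus it no longer has to run.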

There are some big issues. First, not everybody has a smartphone, a data plan or even a phone. You need a method for those without them to summon a ride. You could start with an 800 number so any phone (or the few remaining payphones) could summon a ride. You could also make mini-kiosks by building a protective case and putting a surplus tablet at every subway stop and many bus stops.

Another issue is that these services, particularly the carpool versions, depend on not having anonymous riders. People feel much safer about carpooling with strangers if those strangers can be identified if there is a problem. Transit riding is anonymous, and should be. The solutions to this are challenging. On top of all this, riding in a mobile-hail car is never paid for with cash, and the drivers are not going to accept cash. At the least, this means you would need to provide tickets that people buy (from machines at stations or in advance) which the driver can scan with their phone, so there is no just deciding to take a ride and paying cash. Transit cards are another issue, though there is no requirement that they work, because at least at first, this service is meant for hours when the transit was not even running, so it’s OK if it’s an extra cost.

Finally, there is the issue that this is too good. A ride in a private car vs. a late night transit bus, for the price of a bus? People will over-use it, and that would of course get the taxis angry, though there is no reason they could not participate, as they are all going to support mobile-app hail. But the subsidy may be too expensive if people over-use it.

One solution to that is to only allow it to take you between transit stops. Even that’s “too good” in that it may be faster than the transit, and much faster if the trip involved changes, especially changes during limited service times. You could get extreme and only allow it between limited sets of stops, or require 2 rides (for the same price) to simulate having to change lines. This also makes carpooling much easier, as the drivers would mostly end up cruising close to the transit lines. If they do it in vans it could be quite efficient, in fact.

We probably don’t need to go that far in limiting it, but we could. You could tune the ease and quality of the service so the demand is what you expect, and the subsidy affordable. And the ride companies could actually use this as a way to gain extra revenue. They could offer you a door to door ride with a subsidy for the portion that would have been along the transit line. For example, today you can take Uber to the subway station, ride the subway for $2 and then take Uber from the end station to your destination, and that can be cheaper than just taking the Uber directly. This ride could be offered at some subsidized price and keep up the volume. The taxi companies can either get into the 21st century and play, or not compete.

Aside from improving transit service (by making it 24 hours) this also lets us experiment with the future world of ad-hoc demand based public transportation, when we get to the future where the vans are driving themselves. More on that to come.

Fears confirmed on failure of fix to Hugo awards

Last year, I wrote a few posts on the attack on Science Fiction’s Hugo awards, concluding in the end that only human defence can counter human attack. A large fraction of the SF community felt that one could design an algorithm to reduce the effect of collusion, which in 2015 dominated the nomination system. (It probably will dominate it again in 2016.) The system proposed, known as “E Pluribus Hugo,” attempted to defeat collusion (or “slates”) by giving each nomination entry less weight when a nomination ballot was doing very well and getting several of its choices onto the final ballot. More details can be found on the blog where the proposal was worked out.

The process passed the first round of approval, but does not come into effect unless it is ratified at the 2016 meeting, and then it applies to the 2017 nominations. As such, the 2016 awards will be as vulnerable to the slates as before; however, there are vastly more slate nominators this year, presuming all those who joined in last year to support the slates continue to do so.

Recently, my colleague Bruce Schneier was given the opportunity to run the new system on the nomination data from 2015. The final results of that test are not yet published, but a summary was reported today in File 770 and the results are very poor. This is, sadly, what I predicted when I did my own modelling. In my models, I considered some simple strategies a clever slate might apply, but it turns out that these strategies may have been naturally present in the 2015 nominations, and as predicted, the “EPH” system only marginally improved the results. The slates still massively dominated the final ballots, though they no longer swept all 5 slots. I consider the slates taking 3 or 4 slots, with only 1 or 2 non-slate nominees making the cut, to be a failure almost as bad as the sweeps that did happen. In fact, I consider even nomination through collusion to be a failure, though there are obviously degrees of failure. As I predicted, a slate of the size seen in the final Hugo results of 2015 should be able to obtain between 3 and 4 of the 5 slots in most cases. The new test suggests they could do this even with the much smaller slate group they had in the 2015 nominations.

Another proposal — that there be only 4 nominations on each nominating ballot but 6 nominees on the final ballot — improves this. If the slates can take only 3, then this means 3 non-slate nominees probably make the ballot.

An alternative - Make Room, Make Room!

First, let me say I am not a fan of algorithmic fixes to this problem. Changing the rules — which takes 2 years — can only “fight the last war.” You can create a defence against slates, but it may not work against modifications of the slate approach, or other attacks not yet invented.

Nonetheless, it is possible to improve the algorithmic approach to attain the real goal, which is to restore the award as closely as possible to what it was when people nominated independently: to allow the voters to see the top 5 “natural” nominees, and award the best one the Hugo award, if one is worthy.

The approach is as follows: When slate voting is present, automatically increase the number of nominees so that 5 non-slate candidates are also on the ballot along with the slates.

To do this, you need a formula which estimates if a winning candidate is probably present due to slate voting. The formula does not have to be simple, and it is OK if it occasionally identifies a non-slate candidate as being from a slate.

  1. Calculate the top 5 nominees by the traditional “approval” style ballot.
  2. If 2 or more pass the “slate test” which tries to measure if they appear disproportionately together on too many ballots, then increase the number of nominees until 5 entries do not meet the slate condition.

As a result, if there is a slate of 5, you may see the total pool of nominees increased to 10. If there are no slates, there would be only 5 nominees. (Ties for last place, as always, could increase the number slightly.)
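A minimal sketch of those two steps, under simplifying assumptions: ties are ignored, and looks_like_slate stands in for whatever slate test is chosen (one candidate test is sketched in the “What formula?” section below).

    from collections import Counter

    def make_room_ballot(ballots, looks_like_slate, base_slots=5):
        """ballots: iterable of sets of works; returns the (possibly expanded) final ballot."""
        counts = Counter(work for ballot in ballots for work in ballot)
        ranked = [work for work, _ in counts.most_common()]     # plain approval-style count
        top = ranked[:base_slots]
        if sum(looks_like_slate(w) for w in top) < 2:
            return top                                          # no expansion needed
        slots = base_slots
        while (sum(1 for w in ranked[:slots] if not looks_like_slate(w)) < base_slots
               and slots < len(ranked)):
            slots += 1                                          # widen until 5 non-slate works appear
        return ranked[:slots]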

Let’s consider the advantages of this approach:

  • While ideally it’s simple, the slate test formula does not need to be understood by the typical voter or nominator. All they need to know is that the nominees listed are the top nominees.
  • Likewise, there is no strategy in nominating. Your ballot is not reduced in strength if it has multiple winners. It’s pure approval.
  • If a candidate is falsely identified as passing the slate test — for example a lot of Doctor Who fans all nominate the same episodes — the worst thing that happens is we get a few extra nominees we should not have gotten. Not ideal, but pretty tame as a failure mode.
  • Likewise, for those promoting slates, they can’t claim their nominations are denied to them by a cabal or conspiracy.
  • All the nominees who would have been nominated in the absence of slate efforts get nominated; nobody’s work is displaced.
  • Fans can decide for themselves how they want to consider the larger pool of nominees. Based on 2015’s final results (with many “No Awards”) it appears fans wish to judge some works as being on the ballot unfairly and discount them. Fans who wish it would have the option of deciding for themselves which nominees are important, and acting as though those are all that was on the ballot.
  • If it is effective, it gives the slates so little that many of them are likely to just give up. It will be much harder to convince large numbers of supporters to spend money to become members of conventions just so a few writers can get ignored Hugo nominations with asterisks beside them.

It has a few downsides, and a vulnerability.

  • The increase in the number of nominees (only while under slate attack) will frustrate some, particularly those who feel a duty to read all works before voting.
  • All the slate candidates get on the ballot, along with all the natural ones. The first is annoying, but it’s hardly a downside compared to having some of the natural ones not make it. A variant could block any work that fits the slate test but scored below 5th, but that introduces a slight (and probably un-needed) bit of bias.
  • You need a bigger area for nominees at the ceremony, and a bigger party, if they want to show up and be sneered at. The meaning of “Hugo Nominee” is diminished (but not as much as it’s been diminished by recent events.)
  • As an algorithmic approach it is still vulnerable to some attacks (one detailed below) as well as new attacks not yet thought of.
  • In particular, if slates are fully coordinated and can distribute their strength, it is necessary to combine this with an EPH style algorithm or they can put 10 or more slate candidates on the ballot.

All algorithmic approaches are vulnerable to a difficult but possible attack by slates. If the slate knows its strength and knows the likely range of the top “natural” nominees, it can in theory choose a number of slots it can safely win, and name only that many choices, and divide them up among supporters. Instead of having 240 people cast ballots with the same 3 choices, they can have 3 groups of 80 cast ballots for one choice only. No simple algorithm can detect that or respond to it, including this one. This is a more difficult attack than the current slates can carry off, as they are not that unified. However, if you raise the bar, they may rise to it as well.

All algorithmic approaches are also vulnerable to a less ambitious colluding group, that simply wants to get one work on the ballot by acting together. That can be done with a small group, and no algorithm can stop it. This displaces a natural candidate and wins a nomination, but probably not the award. Scientologists were accused of doing this for L. Ron Hubbard’s work in the past.

What formula?

The best way to work out the formula would be through study of real data with and without slates. One candidate would be to take all nominees present on more than 5% of ballots, and pairwise compare them to find out what fraction of the time the pair are found together on ballots. Then detect pairs which are together a great deal more than that. How much more would be learned from analysis of real data. Of course, the slates will know the formula, so it must be difficult to defeat it even knowing it. As noted, false positives are not a serious problem if they are uncommon. False negatives are worse, but still better than alternatives.
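As one illustration of that candidate formula, here is a sketch that flags works on more than 5% of ballots whose pairwise co-occurrence is far above typical. The outlier threshold is an arbitrary placeholder; the real cutoff would have to come from study of data with and without slates.

    from collections import Counter
    from itertools import combinations

    def suspected_slate_works(ballots, min_share=0.05, outlier_factor=5.0):
        n = len(ballots)
        counts = Counter(work for ballot in ballots for work in ballot)
        common = [w for w, c in counts.items() if c / n > min_share]
        # Fraction of all ballots on which each pair of common works appears together.
        pair_share = {
            (a, b): sum(1 for ballot in ballots if a in ballot and b in ballot) / n
            for a, b in combinations(common, 2)
        }
        if not pair_share:
            return set()
        typical = sorted(pair_share.values())[len(pair_share) // 2]   # median co-occurrence
        flagged = set()
        for (a, b), share in pair_share.items():
            if share > outlier_factor * max(typical, 1.0 / n):
                flagged.update((a, b))
        return flagged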

So what else?

At the core is the idea of providing voters with information on who the natural nominees would have been, and allowing them to use the STV voting system of the final ballot to enact their will. This was done in 2015, but simply to give No Award in many of the categories — it was necessary to destroy the award in order to save it.

As such, I believe there is a reason why every other system (including the WSFS site selection) uses a democratic process, such as write-in, to deal with problems in nominations. Democratic approaches use human judgment, and as such they are not just a response to slates, but to any attack.

As such, I believe a better system is to publish a longer list of nominees — 10 or more — but to publish them sorted according to how many nominations they got. This allows voters to decide what they think the “real top 5” was and to vote on that if they desire. Because a slate can’t act in secret, this is robust against slates and even against the “slate of one” described above. Revealing the sort order is a slight compromise, but a far lesser one than accepting that most natural nominees are pushed off the ballot.

The advantages of this approach:

  • It is not simply a defence against slates, it is a defence against any effort to corrupt the nominations, as long as it is detected and fans believe it.
  • It requires no algorithms or judgment by officials. It is entirely democratic.
  • It is completely fair to all comers, even the slate members.

The downsides are:

  • As above, there are a lot more nominees, so the meaning of being a nominee changes
  • Some fans will feel bound to read/examine more than 5 nominees, which produces extra work on their part
  • The extra information (sorting order) was never revealed before, and may have subtle effects on voting strategy. So far, this appears to be pretty minor, but it’s untested. With STV voting, there is about as little strategy as can be. Some voters might be very slightly more likely to rank a work that sorted low in first place, to bump its chances, but really, they should not do that unless they truly want it to win — in which case it is always right to rank it first.
  • It may need to add EPH style counting if slates get a high level of coordination.

Human judgment

Another surprisingly strong approach would be simply to add a rule saying, “The Hugo Administrators should increase the number of nominees in any category if their considered analysis leaves them convinced that some nominees made the final ballot through means other than the nominations of fans acting independently, adding one slot for each work judged to fail that test, but adding no more than 6 slots.” This has tended to be less popular, in spite of its simplicity and flexibility - it even deals with single-candidate campaigns — because some fans have an intense aversion to any use of human judgment by the Hugo administrators.

Advantages:

  • Very simple (for voters at least)
  • Very robust against any attempt to corrupt the nominations that the admins can detect. So robust that it makes it not worth trying to corrupt the nominations, since that often costs money.
  • Does not require constant changes to the WSFS constitution to adapt to new strategies, nor give new strategies a 2 year “free shot” before the rules change.
  • If administrators act incorrectly, the worst they do is just briefly increase the number of nominees in some categories.
  • If there are no people trying to corrupt the system in a way admins can see, we get the original system we had before, in all its glory and flaws.
  • The admins get access to data which can’t be released to the public to make their evaluations, so they can be smarter about it.

Disadvantages:

  • Clearly a burden for the administrators to do a good job and act fairly
  • People will criticise and second guess. It may be a good idea to have a post-event release of any methodology so people learn what to do and not do.
  • There is the risk of admins acting improperly. This is already present of course, but traditionally they have wanted to exercise very little judgment.

Will bed-bound seniors experience the world through VR telepresence robots?

I’ve written before about my experiences inhabiting a telepresence robot. I did it again this weekend to attend a reunion, with a different robot that’s still in prototype form.

I’ve become interested in the merger of virtual reality and telepresence. The goal would be to have VR headsets and telepresence robots able to transmit video to fill them. That’s a tall order. On the robot you would have an array of cameras able to produce a wide field view — perhaps an entire hemisphere, or of course the full sphere. You want it in high resolution, so this is actually a lot of camera.

The lowest bandwidth approach would be to send just the field of view of the VR glasses in high resolution, or just a small amount more. You would send the rest of the hemisphere in very low resolution. If the user turned their head, you would need to send a signal to the remote to change the viewing box that gets high resolution. As a result, if you turned your head, you would see the new field, but very blurry, and after some amount of time — the round trip time plus the latency of the video codec — you would start seeing your view sharper. Reports on doing this say it’s pretty disconcerting, but more research is needed.

At the next level, you could send a larger region in high-def, at the cost of bandwidth. Then short movements of the head would still be good quality, particularly the most likely movements, which would be side to side movements of the head. It might be more acceptable if looking up or down is blurry, but looking left and right is not.

And of course, you could send the whole hemisphere, allowing most head motions but requiring a great deal of bandwidth. At least by today’s standards — in the future such bandwidth will be readily available.

If you want to look behind you, you could just have cameras capturing the full sphere, and that would be best, but it’s probably acceptable to have servos move the camera, and also to not be sending the rear information. It takes time to turn your head, and that’s time to send signals to adjust the remote parameters or camera.

Still, all of this is more bandwidth than most people can get today, especially if we want lifelike resolution — 4K per eye or probably even greater. Hundreds of megabits. There are fiber operators selling such bandwidth, and Google Fiber sells it cheap. It does not need to be symmetrical for most applications — more on that later.
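Rough arithmetic for the streaming modes above; the refresh rate and codec compression ratio are assumptions, and real numbers vary a great deal with content and latency targets.

    def stream_mbps(width, height, fps=90, bits_per_pixel=24, compression=100):
        return width * height * bits_per_pixel * fps / compression / 1e6

    viewport_only = 2 * stream_mbps(3840, 2160)             # "4K per eye", just the field of view
    full_hemisphere = 2 * stream_mbps(3840 * 3, 2160 * 3)   # assume the hemisphere is ~9x the pixels
    print(f"viewport ~{viewport_only:.0f} Mbps, hemisphere ~{full_hemisphere:.0f} Mbps")

Even the viewport-only case lands in the hundreds of megabits, and the full hemisphere runs to several gigabits, which is why this starts out as an in-building or fiber application.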

Surrogates, etc.

At this point, you might be thinking of the not-very-exciting Bruce Willis movie “Surrogates,” where everybody just lay in bed all day controlling surrogate robots that were better looking versions of themselves. Those robot bodies passed on not just VR but touch and smell and taste — the works — by a neural interface. That’s science fiction, but a subset could be possible today.

Local robots

One place you can easily get that bandwidth is within a single building, or perhaps even a town. Within a short distance, it is possible to get very low latency, and in a neighbourhood you can get millisecond latency from the network. Low latency from the video codec means the codec can’t compress as efficiently, but that is fine if you have lots of spare megabits to burst when the view moves, which you do.

So who would want to operate a VR robot that’s not that far from them? The disabled, and in particular the bedridden, which includes many seniors at the end of their lives. Such seniors might be trapped in bed, but if they can sit up and turn their heads, they could get a quality VR experience of the home they live in with their family, or the nursing home they move to. With the right data pipes, they could also be in a nursing home but get a quality VR experience of being in the homes of nearby family. They could have multiple robots in houses with stairs to easily “move” from floor to floor.

What’s interesting is we could build this today, and soon we can build it pretty well.

What do others see?

One problem with using VR headsets with telepresence is that a camera pointed at you sees you wearing a giant headset. That’s of limited use. Highly desired would be software that, using cameras inside the headset looking at the eyes, and a good captured model of the face, digitally removes the headset in a way that doesn’t look creepy. I believe such software is possible today with the right effort. It’s needed if people want VR-based conferencing with real faces.

One alternative is to instead present an avatar, that doesn’t look fully real, but which offers all the expression of the operator. This is also doable, and Philip Rosedale’s “High Fidelity” business is aimed at just that. In particular, many seniors might be quite pleased at having an avatar that looks like a younger version of themselves, or even just a cleaned up version of their present age.

Another alternative is to use fairly small and light AR glasses. These could be small enough that you don’t mind seeing the other person wearing them, and you are able to see the direction of their eyes, at most behind a tinted screen. That would provide less of a sense of being there, but also might provide a more comfortable experience.

For those who can’t sit up, experiments are needed to see if a system can be made that isn’t nausea-inducing, as I suspect wearing VR that shifts your head angle will be. Anybody tried that?

Of course, the bedridden will be able to use VR for virtual space meetings with family and friends, just as the rest of the world will use them — still having these problems. You don’t need a robot in that case. But the robot gives you control of what happens on the other end. You can move around the real world and it makes a big difference.

Such systems might include some basic haptic feedback, allowing things like handshakes or basic feelings of touch, or even a hug. Corny as it sounds, people do interpret being squeezed by an actuator with emotion if it’s triggered by somebody on the other side. You could build the robot to accept a hug (arms around the screen) and activate compressed air pumps to squeeze the operator — this is also readily doable today.

Barring medical advances, many of us may sadly expect to spend some of our last months or years bedridden or housebound in a wheelchair. Perhaps we will adopt something like this, or even grander. And of course, even the able-bodied will be keen to see what can be done with VR telepresence.

Deadlines approaching for Singularity U summer program and accelerator

The highlight and founding program of Singularity University, where I am chair of computing, is our summer program, now known as the Global Solutions Program. 80 students come from all over the world (only a tiny minority will be from the USA) to learn about the hottest rapidly changing technologies, and then join together with others to kickstart projects that have the potential to use those technologies to solve the world’s biggest problems.

This year is the 2nd year of a Google scholarship program, which means the program is free for those who are accepted. About 50 slots go to those scholarships; the other 30 go to winners of national competitions to attend. You can apply both ways. That means you can expect a class of great rising and already risen stars. I don’t like to exaggerate, but almost everybody who goes through it finds it life-changing.

If you are at a point where you are ready to do something new and big, and you want to understand how technology that keeps changing faster and faster works and how it can change the world and your world, look into it.

Learn about it and apply.

Also closing on Feb 19 is our accelerator program for existing or nascent startups. Applicants get $100K in seed funding, office space at Nasa Research Park and more through our network. You can read about it or Apply.