Robocars European Tour: London, Milan, Lecce, Vienna, Budapest

I’ll be back and forth to Europe in the next month giving a number of talks, mostly about robocars. Catch me at the following events:

  • Wired 2013 UK in London, where 4 Singularity U speakers will do an hour, including me — Oct 17-18. Looks like a great speaker list.
  • Frontiers of Interaction in Milan, Oct 25 — Design, Technology and Interactive.
  • TEDx Lecce in Lecce (boot heel of Italia) on Oct 26 — a major TEDx event with many international speakers.
  • Pioneers Festival in Vienna, Oct 30-31. Reports say this event is great, with an amazing venue. I’ll be interviewed on EFF topics and car topics there.

Singularity University Summit (Europe)

And the big event is the Singularity University Europe Summit, a combination of the popular Singularity Summit series and the Singularity University Program. Most of our great faculty will be there for two days in Budapest, November 15-16, at the Franz Liszt Academy of Music in the heart of town. Readers of this blog can get a 10% discount by using the promo code “Bradbudapest” when registering. Expect a mini-reunion of a number of our European alumni there. To toot our own horn, the majority of folks who come out of our programs call it the best program they’ve ever attended.

Enough with the Trolley problem, already

More and more often in mainstream articles about robocars, I am seeing variations on the classic 1960s “Trolley Problem.” The latest is this article on the Atlantic website. In the classic Trolley Problem, you see a train hurtling down the track, about to run over 5 people, and you can switch the train to another track where it will kill one person. There are a number of variations, meant to examine our views on the morality and ethics of letting people die vs. actively participating in their deaths, even deliberately killing some to save others.

Often this is mapped into the robocar world by considering a car which is forced to run over somebody, and has to choose who to run over. Choices suggested include deciding between:

  • One person and two
  • A child and an adult
  • A person and a dog
  • A person without right-of-way vs others who have it
  • A deer vs. adding risk by swerving around it into the oncoming lane
  • The occupant or owner of the car vs. a bystander on the street
  • The destruction of an empty car vs. injury to a person who should not be on the road, but is.

I don’t want to pretend that this isn’t a morbidly fascinating moral area, and it will indeed affect the law, liability and public perception. And at some point, programmers will have to evaluate these scenarios in their work. What I reject is the suggestion that this is anywhere near the top of the list of important issues and questions. I think it’s high on the list of questions that are interesting for philosophy-class debate, but that’s not the same as reality.

In reality, such choices are extremely rare. How often have you had to make such a decision, or heard of somebody making one? Ideal handling of such situations is difficult to decide, but there are many other, far more common issues to decide as well.

Secondly, in the rare situations where a human encounters such a moral dilemma, that person does not sit there and have an inner philosophical dialogue about which is the most moral choice. Rather, they go with a quick gut reaction, based on their character and their past thinking about such situations. Or it may not be based on those very well at all, since the decision must be made quickly. A robot is likewise incapable of a deep internal philosophical debate in the moment, and as such the robots will also make decisions based on their “gut,” which is to say the way they were programmed, well in advance of the event.

Focus on the trolley problem creates, with some irony, a meta-trolley problem. If people (especially lawyers advising companies, or lawmakers) start expressing the view that “we can’t deploy this technology until we have a satisfactory answer to this quandary,” then they face the reality that, if the technology is indeed life-saving, people who could have been saved will die through their advised inaction, all in order to be sure of saving the right people in rare, complex situations. Of course, the problem itself speaks mostly about the difference between failure to save and overt action to harm.

I suspect companies will make very conservative decisions here, as advised by their lawyers, and they will mostly base things on the rules of the road. If there is a scenario where the car would hit somebody who actually has the right-of-way, the teams will look for a solution to that. They won’t go around a blind corner so fast that they could hit a slow car or cyclist. (Humans go around blind corners too fast all the time, and usually get away with it.) They won’t swerve into oncoming lanes, even ones that appear to be empty, because society will heavily punish a car that deliberately leaves its right-of-way and ends up hurting somebody. If society wants a different result here, it will need to clarify the rules.

The hard fact of the liability system is that a car facing 5 jaywalking pedestrians that swerves into the oncoming lane and hits a solo driver who was properly in her lane will face huge liability for having left its lane, while if it hits the surprise jaywalkers, the liability is likely to be much less, or even zero, due to their personal responsibility. The programmers normally won’t be making that decision; the law already makes it. When they find cases where the law and precedent don’t offer any guidance, they will probably take the conservative path, and also push for the law to give that guidance. The situations will be so rare, however, that a reasonable judgement will be not to wait on getting an answer.
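
To make that concrete, here is a minimal sketch in Python of what such a precomputed, rules-of-the-road-first policy could look like. It is purely illustrative: the Maneuver type, the scenario and the risk numbers are hypothetical inventions of mine, not anyone’s actual code.

    # Purely illustrative sketch: every field, option and number here is
    # hypothetical, not anyone's real code or real risk estimate.
    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        leaves_right_of_way: bool  # would this exit our legal lane/path?
        collision_risk: float      # estimated probability of hitting something

    def choose_maneuver(options):
        """Conservative, rules-of-the-road-first choice: never leave the
        legal right-of-way while any legal option exists; among the
        remaining candidates, minimize estimated collision risk."""
        legal = [m for m in options if not m.leaves_right_of_way]
        candidates = legal or options
        return min(candidates, key=lambda m: m.collision_risk)

    # Hypothetical scenario: jaywalkers ahead, oncoming lane appears clear.
    options = [
        Maneuver("brake hard in lane", leaves_right_of_way=False, collision_risk=0.3),
        Maneuver("swerve into oncoming lane", leaves_right_of_way=True, collision_risk=0.1),
    ]
    print(choose_maneuver(options).name)  # -> brake hard in lane

The point of the toy example is that legality is checked before risk: the lower-risk but illegal swerve is never selected while a legal option exists, which is the liability-driven behavior described above.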

Real human driving does include a lot of breaking the law. There is speeding of course. There’s aggressively getting your share in merges, 4-way stops and 3-point turns. And a whole lot more. Over time, the law should evolve to deal with these questions, and make it possible for the cars to compete on an equivalent level with the humans.

Swerving is particularly troublesome as an answer, because the cars are not designed to drive on the sidewalk, the shoulder or in the oncoming lane. Oh, some effort will be put into that, but these “you should not be doing this” situations will not get anywhere near the care and testing that ordinary driving in your proper right-of-way will get. As such, while the vehicles will have very good confidence in detecting obstacles in the places they should go, they will not be nearly as sure about their perception of obstacles where they can’t legally go. A car won’t be as good at identifying pedestrians on the sidewalk because it should never, or almost never, drive on the sidewalk. It will instead be very good at identifying pedestrians in crosswalks or on the road. Faced with the option of avoiding something by swerving onto the sidewalk, programmers will have to consider that the car can’t be quite as confident that this illegal move is safe, even if the sidewalk is in fact perfectly clear to the human eye. (Humans are general-purpose perception systems and can identify things on the sidewalk as readily as they can spot them on the road.)
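
As a purely hypothetical sketch of that asymmetry (the region names and thresholds below are assumptions of mine, not values from any real system), the policy might simply demand much higher perception confidence before using territory the car is not normally tested in:

    # Illustrative only: region names and thresholds are assumptions, not
    # values from any real system. The idea: demand far higher perception
    # confidence before entering territory the car was never tested to use.
    CLEARANCE_THRESHOLD = {
        "own_lane": 0.95,        # well-tested territory: the normal bar
        "oncoming_lane": 0.999,  # rarely used: demand near-certainty
        "sidewalk": 0.9999,      # almost never used: a higher bar still
    }

    def may_enter(region, perceived_clear_confidence):
        """Allow a swerve into `region` only when perception confidence
        that the region is clear exceeds that region's threshold."""
        return perceived_clear_confidence >= CLEARANCE_THRESHOLD[region]

    print(may_enter("sidewalk", 0.99))  # False: 99% clear isn't enough there
    print(may_enter("own_lane", 0.99))  # True: the same confidence is fine here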

We also have to understand that humans have so many accidents that, as a society, we’ve come to accept them as a fact of driving, and have built a giant insurance system to arrange financial compensation for the huge volume of torts created. If we tried to resolve every car accident in the courts instead of through insurance, we would vastly increase the cost of accidents. In some places, governments have moved to no-fault claim laws because they realize that battling over something that happens so often is counterproductive, especially when, from the standpoint of the insurers, it changes nothing to tweak which insurance company pays on a case-by-case basis. In New Zealand, they went so far as to simply eliminate liability in accidents, since in all cases the government health or auto insurance always paid every bill, funded by taxes. (This does not stop people from having to fight the Accident Compensation Corporation to get their claims approved, however.)

While the insurance industry’s total size will dwindle if robocars reduce accident rates, there are still lots of insurance programs out there that handle much smaller risks just fine, so I don’t believe insurance is going away as a solution to this problem, even if it gets smaller.

Locking devices down too hard, and other tales of broken phones

One day I noticed my nice 7-month-old Nexus 4 had a thin crack on the screen. I’m not sure where it came from, but my old Nexus One had had a similar crack, and when the screen was on you barely saw it and the phone worked fine, so I wasn’t scared — until I saw that the crack stopped the digitizer from recognizing my finger in a band in the middle of the screen. A band which included dots from my “unlock” code.

And so, while the phone worked fine, you could not unlock it. That was bad news, because with Android 4.3 the Android team had done a lot of work to make sure locked phones stay secure if people randomly pick them up. As I’ll explain in more detail, you really can’t unlock it. And while it’s locked, it won’t respond to USB commands either. I had enabled debugging some time ago, but either that doesn’t work while the phone is locked, or that state had been reset in a system update.

No unlocking meant no backing up the things that Google doesn’t back up for you. It backs up a lot these days, but there are still dozens of settings, lots of app data, logs of calls and texts, your app screen layout and much more that’s lost.
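
For reference, here is roughly what capturing that extra data looks like when a phone can still be unlocked. This is a sketch only, assuming the Android SDK’s adb tool is installed and USB debugging was authorized before disaster struck:

    # Sketch only: drives "adb backup" (available since Android 4.0) from
    # Python. Assumes the SDK's adb tool is on PATH and the phone is
    # unlocked with USB debugging authorized, which is exactly the state
    # my broken phone could no longer reach.
    import subprocess

    # Writes installed apps, their data and shared storage to one .ab file.
    subprocess.run(
        ["adb", "backup", "-apk", "-shared", "-all", "-f", "phone-backup.ab"],
        check=True,
    )

Note that adb backup also requires confirming the backup on the device’s own screen, so even this path needs a working, unlocked display.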

I could repair the phone — but when LG designed this phone they merged the digitizer and the screen, so the repair is $180, and at most shops the parts take weeks to come in. The problem is, you can now buy a new Nexus 4 for just $199 (a truly great price for an unlocked phone), or the larger model I have for $249. Since the broken phone still has some uses, it makes much more sense to buy a new one than to repair the old, except to recover that lost data. But more to the point, it’s been 7 months and there are newer, hotter phones out there! So I eventually got a new phone.

But first I restored functionality on the N4 by doing a factory wipe. That’s possible without the screen, and the wiped phone has no lock code. It’s actually possible to use quite a bit of the phone. Typing is a pain, since a few letters on the right don’t register, but you can get them by rotating the screen. You would not want to use this long-term, but many apps are quite usable, such as maps and in particular eBook reading — for cheap, I have a nice small eBook reader. And you can make and receive calls. (Even on the locked phone I could receive a call somebody made to me — it was the only thing it could do.) In addition, by connecting a Bluetooth mouse and keyboard, I could use the phone fully — this was essential for setting the phone up again, where the dead band on the touchscreen would have made it impossible.

One of my security maxims is “Every security system ends up blocking legitimate users, often more than it blocks out the bad guys.” I got bitten by that.