Understanding the huge gulf between the Tesla Autopilot and a real robocar, in light of the crash
It's not surprising that there is huge debate about the fatal Tesla autopilot crash revealed to us last week. The big surprise to me is actually that Tesla and MobilEye stock seem entirely unaffected. For many years, one of the most common refrains I heard in discussions about robocars was, "This is all great, but the first fatality and it's all over." I never believed it would all be over, but I also didn't expect barely a blip.
There have been lots of blips in the press and online, of course, but much of the coverage rests on some pretty wrong assumptions. Tesla's autopilot is only a distant cousin of a real robocar, which explains why the fatality is no big deal for the field, but the press coverage shows that people don't know that.
Tesla's autopilot is really a fancy cruise control. It combines several key features from the ADAS (Advanced Driver Assistance) world, such as adaptive cruise control, lane-keeping and forward collision avoidance, among others. All these features have been in cars for years, and they are also combined in similar products in other cars, both commercial offerings and demonstrated prototypes. In fact, Honda promoted such a function over 10 years ago!
Tesla's autopilot primarily uses the MobilEye EyeQ3 camera, combined with radars and some ultrasonic sensors. It doesn't have a lidar (the gold standard in robocar sensors) and it doesn't use a map to help it understand the road and environment.
Most importantly, it is far from complete. There are tons of things it's not able to handle. Some of those limitations are known, some are unknown. Because of this, it is designed to work only under constant supervision by a driver. Tesla drivers get this explained in detail in their manual and when they turn on the autopilot.
ADAS cars are declared not to be self-driving cars in many state laws
This is nothing new -- many cars have features that help with driving (including the component features like cruise control, each available on its own) which are not good enough to drive the car and are only supposed to augment an alert driver, not replace one. Because car companies have been selling things like this for years, when the first robocar laws were drafted, they made sure there was a carve-out in the laws so that their systems would not be subject to the robocar regulations companies like Google wanted.
The Florida law, similar to other laws, says:
The term [Autonomous Vehicle] excludes a motor vehicle enabled with active safety systems or driver assistance systems, including, without limitation, a system to provide electronic blind spot assistance, crash avoidance, emergency braking, parking assistance, adaptive cruise control, lane keep assistance, lane departure warning, or traffic jam and queuing assistant, unless any such system alone or in combination with other systems enables the vehicle on which the technology is installed to drive without the active control or monitoring by a human operator.
The Tesla's failure to see the truck was not surprising
There has been a lot of writing (and I did some of it) about the particulars of the failure of Tesla's technology, and what might be done to fix it. That's an interesting topic, but it misses a key point. Tesla's system did not fail. It operated within its design parameters, and according to the way Tesla describes it in its manuals and warnings. The Tesla system, not being a robocar system, has tons of things it does not properly detect. A truck crossing the road is just one of them. It's also poor on stopped vehicles and many other situations.
Tesla could (and in time, will) fix the system's problem with cross traffic. (MobilEye itself has that planned for its EyeQ4 chip coming out in 2018, and freely admits that the EyeQ3 Tesla uses does not detect cross traffic well.) But fixing that problem would not change what the system is, and would not change the need for constant monitoring that Tesla has always declared it to have. People must understand that the state of the art in camera systems is not today anywhere near the level needed for a robocar. That's why nearly all advanced robocar research projects use LIDAR. There are those (including Tesla and MobilEye) who hope the day might come soon when a camera can do it, but that day is not yet here. As such, any camera-based car is going to make mistakes like these. Fix this one and there will be another. The Tesla autopilot failed to see the truck, and that was an error, a tragic one, but not a failure of the autopilot. It was an expected limitation, one of many. The system performed as specified, and as described in the Tesla user manual and the warnings to drivers. I will say this again: the Tesla autopilot did not fail, it made an error expected under its design parameters.
The problem of getting too good -- and punishing that
No, the big issue here is not what the Tesla autopilot can't handle, but the opposite. The issue is that it's gotten good enough that people are mistaking it for a self-driving car, and they are taking their eyes off the road. Stories abound of people doing e-mail, and there is an allegation that Brown, the deceased driver of this Tesla, might have been watching a movie. Movie or not, Brown was not paying attention, as he could easily have seen the truck and braked in time to avoid hitting it.
Tesla warns pretty explicitly not to do that. Brown was a fairly skilled guy and he also should have known not to do that (if he did.) But as the Tesla has gotten better, there is no question that people are not heeding the warning and getting a bit reckless. And Tesla knows this of course.
This brings up all sorts of issues. Does it matter that Tesla knows people are ignoring the warnings, so long as it delivers them sternly? Is there a duty of care to warn people that there is a danger they will ignore the warnings? Is there a duty of care to make sure (with technology) that they don't ignore them?
Perhaps you see the problem here -- the better the system gets, the more likely it is to make people complacent. People are stupid. They get away with something a few times and they imagine they will always get away with it. Older supervised products like cruise control needed your attention fairly often; there was no way you could watch a movie when using them. Cruise control needs your intervention to steer a few times a minute. Early autopilot-like systems need your intervention every few minutes. But Tesla's autopilot got good enough that on open highways, it might not need your intervention for hours, or the entire highway part of a trip. Eventually it will get to not needing it for days, or even a month. Who would not be tempted to look away for a little while, and then a lot, if it gets there -- or gets to needing only one intervention a year?
Our minds are bad at statistics. We human drivers have a tiny accident perhaps once every 10 years on average, and one that is reported to insurance about every 25 years. We have a fatal accident about every 8,000 years of average driving, closer to 20,000 years on the highway. A rate of just one serious intervention every year sounds amazingly trustworthy. It seems like you took a bigger risk just getting on the road. But in fact, it's very poor performance. Most people agree that to be a true robocar, we're going to need a car that makes an accident-causing mistake less often than a human, perhaps even twice as good or better. And the Tesla isn't even remotely close. Even Google, which is much, much closer, isn't there yet.
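To make those statistics concrete, here is a minimal back-of-envelope sketch (in Python) using only the rough intervals quoted above. The numbers are the article's approximations, and treating every serious intervention as a mistake the car would otherwise have made on its own is an illustrative assumption, not measured data.

```python
# A back-of-envelope comparison using only the rough figures quoted above.
# These are the article's approximations, not precise statistics.

human_years_per_minor_accident = 10      # a "tiny" accident roughly every 10 years
human_years_per_insurance_claim = 25     # an insurance-reported accident every ~25 years
human_years_per_fatality = 8000          # a fatal accident every ~8,000 driving-years

# Hypothetical system needing one serious, accident-avoiding intervention per year,
# counted as if each such intervention were a mistake the car would have made on its own.
system_years_per_serious_mistake = 1

for label, human_interval in [
    ("minor accident", human_years_per_minor_accident),
    ("insurance claim", human_years_per_insurance_claim),
    ("fatality", human_years_per_fatality),
]:
    ratio = human_interval / system_years_per_serious_mistake
    print(f"vs. the human {label} rate: the system errs roughly {ratio:.0f}x more often")
```

Even under these generous assumptions, the "one intervention a year" car comes out an order of magnitude worse than an ordinary human driver's record for minor accidents, and thousands of times worse on the fatality benchmark.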
The incremental method
But we want systems to get better. It seems wrong to have to say that the better a system gets, the more dangerous it is. That a company should face extra liability for making the system better than the others. That's not the direction we want to go. It's definitely not the way that all the car companies want to go. They want to build their self-driving car in an evolutionary incremental way. They want to put supervised autopilots out on the road, and keep improving them until one day they wake up and say, "Hey, our system hasn't needed an intervention on this class of roads for a million miles!" That day they can consider making it a real robocar for those roads. That's different from companies like Google, Uber, Apple, Zoox and other non-car companies who want to aim directly for the final target, and are doing so with different development and testing methods.
Other views on the complacency issue
It should be noted that most other automakers who have made similar products have been much keener on using tools to stop drivers from getting complacent and failing to supervise diligently. Some make you touch the wheel every few seconds. Some have experimented with cameras to watch the driver's eyes. GM has announced a "super cruise" product for higher-end Cadillacs several years running, but each year has pulled back on shipping it, not feeling it has sufficient "countermeasures" to prevent accidents like the Tesla one.
Back in 2012-2013, Google famously tested a car quite a bit superior to the Tesla with its own employees. Even though Google was working on a car that would not need supervision on certain routes, it required those employees (regular employees, not people on the car team) to nonetheless pay attention. They found that in spite of the warnings, about a week into commuting, some of these employees would do things that made the car team uncomfortable. This is what led Google to decide to make a prototype with no steering wheel, setting the bar higher for the team, requiring the car to handle every situation and not depend on an unreliable human.
Putting people at risk
Tesla drivers are ignoring warnings and getting complacent, putting themselves and others at risk. But cars are full of features that enable this. The Tesla, like many other sports cars, can accelerate like crazy. It can drive 150mph. And every car maker knows that people take their muscle cars and speed in them, sometimes excessively and recklessly. We don't blame the car makers for making cars capable of this, or for knowing it will happen. Car makers put in radios, knowing folks will get distracted fiddling with them. Car makers know many of their customers will drive their cars drunk, and kill lots of people. Generally, the law and society do not want to blame car makers for not requiring a breath alcohol test. People drive sleepy, and half-blind from cataracts, and we always blame the driver for being reckless.
There is some difference between enabling a driver to take risks, and encouraging complacency. Is that enough of a difference to change how we think about this? If we change how we think, how much will we slow down the development of technology that in the long term will prevent many accidents and save many lives?
In an earlier post I suggested Tesla might use more "countermeasures" to assure drivers are being diligent. Other automakers have deployed or experimented with such systems, or even held back on releasing their similar products because they want more countermeasures. At the same time, many of the countermeasures are annoying, and people worry they might discourage use of the autopilots. Indeed, my proposal, under which a driver who fails to pay attention loses the autopilot for the rest of the day, and eventually permanently, would frighten customers. They love their autopilot and would value it less if they had to worry about losing it. But such people are also the sort of people who are making the mistake of thinking a car that needs intervention once a year is essentially the same as a robocar.
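For illustration only, here is a minimal sketch of the kind of escalating lockout policy proposed above. The threshold, field names and function are hypothetical, not anything Tesla has actually built.

```python
from dataclasses import dataclass

@dataclass
class AttentionRecord:
    """Hypothetical per-driver tally of ignored attention prompts."""
    lifetime_strikes: int = 0

# Illustrative threshold only; a real countermeasure would need careful tuning.
PERMANENT_LOCKOUT_AFTER = 5    # repeated lapses eventually disable the feature for good

def on_ignored_attention_prompt(record: AttentionRecord) -> str:
    """Decide what happens when a driver ignores an attention prompt."""
    record.lifetime_strikes += 1
    if record.lifetime_strikes >= PERMANENT_LOCKOUT_AFTER:
        return "disable autopilot permanently"
    # First response under the proposal: lose the autopilot for the rest of the day.
    return "disable autopilot for the rest of the day"
```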
The unwritten rules of the road
I have often mused on the fact that real driving involves breaking the rules of the road all the time, and this accident might be such a situation. While the final details are not in, it seems this intersection might not be a very safe one. It is possible, by the math of some writers, that the truck began its turn unable to see the Tesla, because there is a small crest in the road 1200' out. In addition, some allege the Tesla might have been speeding as fast as 90mph -- which is not hard to believe since the driver had many speeding tickets.
Normally, if you make an unprotected left turn, your duty is to yield right-of-way to oncoming traffic. Normally, the truck driver would be at fault. But perhaps the road was clear when he started turning, and the Tesla only appeared once he was in the turn? Perhaps the super wide median offers an opportunity the truck driver didn't take, to pause after starting the turn to check again, and the truck driver remains at fault for not double checking with such a big rig.
Ordinarily, if a truck did turn and not yield to oncoming traffic, no accident would happen. That's because the oncoming vehicle would see the truck, and the road has a lot of space for that vehicle to brake. Sure, the truck is in the wrong, but the driver of an oncoming vehicle facing a truck would have to be insane to proudly assert their right-of-way and plow into the truck. No sane and sober human driver would ever do that. As such, the intersection is actually safe, even without sufficient sightlines for a slow truck and fast car.
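A rough calculation shows why an attentive oncoming driver normally makes even this intersection safe. This is a minimal sketch assuming the figures discussed here (a crest roughly 1,200 feet out and a speed of 90mph, both from reports and allegations rather than official findings) plus generic reaction-time and braking values:

```python
# Rough sight-line arithmetic using the figures mentioned in this discussion.
# The crest distance and speed come from press reports and allegations;
# the reaction time and deceleration are generic textbook assumptions.

MPH_TO_FPS = 5280 / 3600          # feet per second per mph

speed_mph = 90                    # alleged Tesla speed (an allegation, not a finding)
crest_distance_ft = 1200          # reported distance from the crest to the intersection
reaction_time_s = 1.5             # typical perception-reaction time (assumed)
decel_fps2 = 0.7 * 32.2           # ~0.7 g of braking on dry pavement (assumed)

v = speed_mph * MPH_TO_FPS
stopping_distance = v * reaction_time_s + v ** 2 / (2 * decel_fps2)
time_to_intersection = crest_distance_ft / v

print(f"{speed_mph} mph is about {v:.0f} ft/s")
print(f"Time from the crest to the intersection: {time_to_intersection:.1f} s")
print(f"Stopping distance (reaction + braking): {stopping_distance:.0f} ft")
print(f"Could an attentive driver stop in time? {stopping_distance < crest_distance_ft}")
```

Under those assumptions the stopping distance comes out around 600 feet, roughly half the reported sight distance, which is why only an inattentive driver plows into the truck.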
Because of that, real world driving involves stealing right-of-way all the time. People cut people off, nudge into lanes and get away with it because rational drivers yield the ROW that belongs to them. Sometimes this is even necessary to make the roads work.
With the Tesla driver inattentive, the worst happened. The Tesla's sensors were not good enough to detect the truck and brake for it, and the human wasn't insane, but wasn't looking at the road, so he didn't do the job of compensating for the autopilot's inadequacy. The result was death.
Measuring the safety of the AutoPilot
Tesla touted a misleading number, saying their cars had gone 130 million miles on autopilot before this first fatality, against one fatality per 90 million miles for human drivers. The reality is that on limited-access freeways, the U.S. number for human drivers is one fatality per 180 million miles, not the 90 million they cited, and the AutoPilot is used mostly on freeways, where it functions best.
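The comparison is simple arithmetic. Here is a minimal sketch using only the approximate figures in this paragraph; a single fatality is, of course, far too little data to draw strong statistical conclusions either way.

```python
# The mileage figures discussed above, side by side (all approximate, and a
# single fatality cannot support strong statistical conclusions either way).

autopilot_miles_per_fatality = 130e6      # Tesla's quoted Autopilot miles before the fatality
us_all_roads_miles_per_fatality = 90e6    # the overall U.S. figure Tesla cited
us_freeway_miles_per_fatality = 180e6     # rough figure for limited-access freeways

print(f"Autopilot:           one fatality in {autopilot_miles_per_fatality / 1e6:.0f}M miles")
print(f"U.S., all roads:     one in {us_all_roads_miles_per_fatality / 1e6:.0f}M miles")
print(f"U.S., freeways only: one in {us_freeway_miles_per_fatality / 1e6:.0f}M miles")
print("Beats the freeway benchmark:",
      autopilot_miles_per_fatality > us_freeway_miles_per_fatality)
```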
Tesla will perhaps eventually publish numbers to help judge the performance of the autopilot. Companies participating in the California self-driving car registration program are required to publish their numbers. Tesla participates but publishes "zero" every time because the AutoPilot does not qualify as a self-driving car.
Here are numbers we might like to see, perhaps broken down by class of road, weather conditions, lighting conditions and more:
- Number of "safety" disengagements per mile, where a driver had to take the controls for a safety reason (as opposed to just coming to a traffic light or turn.)
- Number of safety disengagements which, after further analysis in simulator, would have resulted in an accident if the driver had not intervened.
- Number of "late" safety disengagements, or other indications of driver inattention.
- Number of disengagements triggered by the system rather than the driver.
The problem is that Tesla doesn't really have a source for many of these numbers. Professional safety drivers always log why they disengage, and detailed car logs allow more data to be built up. Tesla would need to record more data to report these figures.
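To illustrate the kind of records that would make those metrics computable, here is a minimal sketch. The field names, categories and summary function are hypothetical, not Tesla's or California's actual reporting format.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Disengagement:
    """One hypothetical disengagement record."""
    triggered_by: str          # "driver" or "system"
    safety_related: bool       # taken for a safety reason, not a routine light or turn
    late: bool                 # the takeover came later than it should have
    would_have_crashed: bool   # per later simulator analysis of the logged data

def summarize(log: List[Disengagement], total_miles: float) -> Dict[str, float]:
    """Compute per-mile rates like the ones listed above."""
    safety = [d for d in log if d.safety_related]
    return {
        "safety_disengagements_per_mile": len(safety) / total_miles,
        "simulated_crashes_per_mile": sum(d.would_have_crashed for d in safety) / total_miles,
        "late_disengagements_per_mile": sum(d.late for d in safety) / total_miles,
        "system_triggered_per_mile": sum(d.triggered_by == "system" for d in log) / total_miles,
    }
```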
Tesla with LIDAR
Folks in Silicon Valley have spotted what appear to be official Tesla test cars with LIDARs on them (one a research LIDAR and one looking more like a production unit.) This would be a positive sign for them on the path to a full robocar. Tesla's problem is that until about 2018, there is not a cost-effective LIDAR they can ship in a car, and so their method of gathering data by observing customers drive doesn't work. They could equip a subset of special cars with LIDARs and offer them to a special exclusive set of customers -- Tesla drivers are such rabid fans they might even pay for the privilege of getting the special test car and trying it out. They would still need to supervise, though, and that means finding a way around the complacency issue.
Comments
Andrew Clough
Fri, 2016-07-08 14:23
Stocks
I'm not sure the lack of stock price movement is surprising. This was bound to happen sooner or later and it doesn't seem much worse happening now than 12 months down the road. Stock markets aren't omniscient (they obviously couldn't tell which way the Brexit vote was going) but I expect them to have already priced in stuff as easy to predict as this.
But anyways I think your post did a good job of putting the situation in perspective.
Alex "some writ...
Fri, 2016-07-08 17:20
Thank you, and correction...
Another excellent piece.
Also, the crest is approx 600 feet west of the impact point.
Thx,
Alex Roy
brad
Fri, 2016-07-08 17:26
Source?
Several articles have said that it's 1200' away. Also, a truck driver sits high off the ground, and a truck is also high. Even with 600', an alert driver coming over the crest and seeing the truck would brake, and even if they could not stop in time, they would slow down, giving the truck time to clear the intersection. At 600' it certainly seems possible the truck driver could have begun his turn thinking the oncoming lane was empty.
Stuart Lynne
Sat, 2016-07-09 01:41
too fast?
From the various reports the Tesla was travelling at 85mph...
That would imply, I think, that the Tesla was perhaps 300-400 yards away from the intersection when the truck started turning. He may have thought he had time to get across (if he saw the car, he may have assumed it was travelling at the posted speed.)
The initial distance when the truck started turning would also have made it that much harder for the Tesla's sensors to detect. Although if, as reported, no brakes were ever applied, it would seem that it never saw (or recognised) the truck.
Would it make sense for Tesla to limit the auto-pilot mode to a safer maximum speed?
brad
Sat, 2016-07-09 09:45
Make sense
Difficult question. Slower is always safer, but society has not required cruise controls to be speed limited because of that. The driver is in charge and sets the speed, and takes the tickets.
The problem was not the speed. Had the driver been paying attention, he would have been able to stop for the truck even at a high speed. He does not seem to have been paying attention. That's the cause of most accidents, actually, not paying attention. The only real issue caused by the autopilot is the concern that the autopilot, being too good, led him to not pay attention in a major way, possibly even watching a movie.
James D
Sat, 2016-07-09 18:04
Still not buying it
I think we agree about a lot of things. Forgive me for only talking about the points where we disagree.
So why is it surprising that the stock effect was only a blip? Maybe it’s surprising because the robocar community has obsessed over this event for so long that it’s been magnified in their imaginations. I don’t find it surprising, and the Tesla drivers I’ve talked to don’t find it surprising. The people who seem most surprised by it have all been Tesla critics and robot car pundits. Clearly there are various lines of thought on this topic.
However, when you encounter a surprising fact it’s often a sign that your model of reality is broken. If critics are surprised then this is a nice chance to rethink their analysis of the public’s responses and improve their predictions.
As for what this fatal accident says about autopilot - reasoning by anecdote is flawed. It is especially flawed when the most salient information about the anecdote is not available. Let me just point that out and move on.
Tesla drivers get complacent. So do drivers of all other cars. So do airline pilots and train engineers and surgeons and bomb disposal experts. It’s human nature to become complacent and if you’re going to fight that you have to pick your battles carefully. Maybe this is a good place to plant that flag, but any argument that doesn’t include a lot of numbers is not going to make that case. Tesla is encouraging complacency? By being good? By not punishing people or restricting the feature? There are restrictions on the use of autopilot, which appear to me to be well considered. It restricts usability by road type and speed and by driving conditions. It demands occasional confirmation responses from the driver and will shut down if the driver appears inattentive for more than a short period of time. Those restrictions have been carefully tightened some since the feature was first introduced and I wouldn’t be surprised if they get tightened again. Autopilot notices if you don’t have your hands on the wheel for an extended period, and it will notice if you do not respond when it prompts you to take action via display messages or audible alerts. I drove 400 miles with it yesterday on a great highway with ideal driving conditions in relatively light traffic and, while it did 95% of the driving, I still had to interact with it around a hundred times. The idea that you can leave it to its own devices for long periods of time is just wrong.
To make the case that autopilot’s current restrictions are inadequate you need carefully thought out alternatives and data to show that your interventions result in increased safety. Half baked policies rarely work in the real world. Tesla probably has that data, and clearly they spend a lot of time thinking about this issue. I wish they would share that data and their thinking because I’m really interested. But I can also understand their reticence in a world of fervid clickbait headlines and gigantic, nervous competitors praying for their demise.
And speaking of tight-lipped companies: I’m even more interested in what Google is up to, but they talk a lot less than Tesla does and they show much less than Tesla does. It’s possible that Tesla has become our barometer for these technologies because they are actually out there. Autopilot is not Level 4, but there’s no Level 4 out there anywhere that people other than the developers can see, so of course we extrapolate from what we can get access to because the developers certainly aren’t providing any detail. Google is MIA on this and I’m sure they are experiencing some of the downsides to that policy.
By the way, I haven’t seen evidence that many people besides bloggers and media pundits are confusing autopilot for self-driving functionality. Certainly I’ve never personally encountered someone who was familiar with the difference between these two options who was confused about Tesla’s offering. But we see the two juxtaposed a lot because Tesla exists, and available Level 4 information is, so far, just hype, blog postings, and Ted Talks.
As for measuring safety, Tesla is clearly doing that. They’ve said they do it and they’ve released small amounts of information that corroborate it. Additionally we know the cars are thoroughly instrumented and that they report a lot of data in real time and in response to unexpected events. Whether and how autopilot stacks up against human drivers is not known outside of Tesla for now. This recent accident affects that only minimally since a single event does not produce useful statistics. The assertion that their statement about going 130 million miles without a fatality is misleading is on weak ground. For one thing, it’s a simple fact. It’s also a fact that you get one fatality for 90 million miles of driving in the U.S. Various sources have pointed out that statistics vary for different roads, and that autopilot is intended for, and presumably mainly used on roads that have relatively low fatality rates. But it’s also true that many roads where autopilot is used have much higher fatality rates than the quoted death-per-90MM figure. Rural high speed roads have accident rates up to 10x higher than limited access highway fatality rates and autopilot can certainly be used on these kind of roads. Even if usage on these roads is only 10% or 20% of autopilot usage it will have a large impact on the fatality rates. Notably this recent accident was on just such a high speed rural road, where fatality rates can exceed one every 20 million miles. Tesla has that data. The rest of us don’t. What is publicly known is not sufficient to either support or refute Tesla’s statements.
Finally, I can see why lidar is a very appealing sensor for these kinds of applications, and why it used to be seen as nearly indispensable. It provides a kind of data that allows usefully independent corroboration of other sensor inputs, in addition to providing data that was, until recently, very hard to extract from other sensors. But available lidar sensors also have a lot of limitations including cost, resolution, range, reliability, and maintainability. The advances in neural networks in the last few years, combined with the cost and quality of camera sensors, are rapidly eroding lidar’s previous advantages. Personally I would welcome lidar, or even better, high frequency coherent radar, which overcomes many of lidar’s deficiencies. But after studying the advances of the last few years I no longer believe it is a requirement. Recent advances in neural networking for perception, and for higher level situation analysis and decision making, point to capabilities that change the equation radically.
brad
Sun, 2016-07-10 00:17
Blips
As I said, I was never one of those to say the first accidents would destroy the industry, but I never met anybody predicting the opposite until Elon told Fortune that they did not disclose the crash because it was not material. (Note that Tesla’s disclosures previously did warn that a serious crash could have a material result, but disclosures are full of all sorts of CYA warnings.)
In spite of the reports, I know people who read their E-mail in their Teslas and otherwise disobey the warning, and there are many stories of people I don’t know who also do the same.
Google isn’t that tight-lipped, actually. They just don’t have a product to ship, so you don’t see it. They give talks at conferences all the time and reveal quite a bit, and they also do their incident reports with more detail than Tesla or anybody else.
I do agree it’s mostly media who we saw confuse the two — but how could we see anybody else but bloggers and pundits? You don’t see the writings of ordinary users much.
The difference between cameras and LIDARs is that LIDAR works today, in terms of detecting obstacles with the reliability necessary. Cameras don’t. One can make guesses about when cameras will, and make a bet that it’s soon, but it’s a bet. There is no bet required to know that LIDAR does detect the obstacles nor much of a bet that it will be economical soon. Indeed, radar would be great, but that’s also a bet right now.
James D
Tue, 2016-07-12 20:13
Blips
Sorry if my comment came across as suggesting that you believed it would be armageddon. I meant to suggest that seeing it unlikely to be only a blip was an error. In the world most of the public lives in car accidents are, sadly, a common tragedy. In that world car builders are generally considered to be diligent in their responsibilities and the likelihood that a fatal accident was caused by gross negligence on the part of a car maker is not high. Would you expect GM's stock price to take a dip due to a single accident? This is perhaps not obvious, but car accidents are frequently understood to happen during a confluence of unfortunate choices and circumstances. That an accident occurs is not a smoking gun for liability for any party and often causes are not well understood until after extensive analysis, and sometimes not even then. That Tesla would be presumed to be at fault, liable, and that such liability would be large enough to threaten their viability is just understood to be the unlikely situation, not the likely one. That one might presume otherwise is, I suggest, a symptom of having over-thought the feared scenario for too long.
If you happen to have a good source of information on Google's technology I would appreciate it if you could share. I've been following this field closely for ten years and feel like I've not seen anything beyond superficial marketing and PR statements with regards to the functioning and actual capabilities of their system. Additionally, they are reluctant, bordering on compulsively so, to make any forward-looking statements of any detail.
And yes, lidar exists, but not in a state that is appropriate for deployment in very large fleets of cost sensitive daily use vehicles. That problem is being worked on. The camera problem is also being worked on. It used to be obvious that lidar would win that competition but I think that outcome is now in doubt.
brad
Tue, 2016-07-12 22:49
Not like a GM accident
The point is that you may know it was driver error, and I might know this, but the public -- based on press and blog traffic -- doesn't.
But more to the point, if GM had an accident and it was revealed that NHTSA and the NTSB were both investigating the accident looking for corporate responsibility (they don't deal with driver error) and the SEC was investigating, then I would certainly expect a blip in their stock. Tesla continues up (though not as up today as the market, so possibly a minor blip.)
Of course it's also true that even if Tesla pays a big fine and cripples functionality in the autopilot, it's still going to sell all the cars it makes. Most investors are worried if it can make enough, not if it can sell enough.
For Google info, aside from what I have posted here, look for videos of talks by Chris Urmson, Nathaniel Fairfield, Dave Ferguson and others.
James D
Tue, 2016-07-12 20:19
material
Also, since you seem to have not heard this: Musk did not comment that the accident was not disclosed because it was immaterial. Musk stated that it was not disclosed and that it was immaterial, not that it was not disclosed because it was immaterial. He later clarified that they could not have disclosed that the accident was autopilot related before the offering because they did not know that autopilot was engaged during the accident until after the offering.
brad
Sun, 2016-07-10 12:26
Stock changes
I will also add that whatever opinion one might have on whether such a crash would affect the stock price, I am baffled that there is also no blip from the fact that Tesla is facing both a NHTSA and NTSB investigation over this crash. Everything tells me that the market would normally discount the stocks due to the uncertainty over these investigations -- no matter how much you might think there is nothing that can happen, you can't predict what the agencies will do. In fact, it makes me wonder if Tesla wasn't on an upward trend for other reasons, and was kept flat by discounting around these events. Or if people, not seeing it drop, decided to pick some up because they had new confidence in it because of the lack of drop.
cxed
Sun, 2016-07-10 11:03
The problem is bad goals
Thanks for your excellent analysis, Brad.
I know I'm in the minority, but I've always said the right way to evolve to full autonomy isn't with supercruise, which demands that people drive without driving. The cars know where they are and what the roads are like. On some roads the chance of an accident is infinitesimal. Let people do their email on those roads and only those roads. That would encourage people to demand roads designed and managed for autonomous driving (no nasty left-turning trucks to confuse things). I'm thinking about something like a freeway HOV lane that is separate from other lanes. Once certified for hands-off driving, the dream could really be real right now. People say that accommodating true autonomous systems with infrastructure changes is cost-prohibitive. I say that wasting your life driving a car is way more cost-prohibitive, yet it's the norm. Also, clearly we're capable of serious infrastructure improvements if the reward is great enough (e.g. railroads in the 19th century and paved roads and interstate freeways in the 20th). It's not like our road infrastructure couldn't use a complete overhaul anyway.
On the topic of this accident (perhaps), if the speed is 90mph and the "driver" is watching a video, I'm not very sympathetic. One of the great promises of autonomous cars is that you can cut the normal travel speed in half and yet still massively increase your productive life. If I was given the chance to make all future car trips at 1/2 the speed I'd normally drive them, but I could read a book or whatever, I'd absolutely take that offer. Why someone would set the thing to 90 or 85mph and think to not pay attention is crazy. If he set it to 45mph, that'd be more understandable and he'd probably be alive to explain the crash.
Glenn Mercer
Tue, 2016-07-12 15:24
(Yet another) opinion on the "why no share price blip" puzzle
I am no expert in what drives share prices (I wish I knew someone who was!), but if I step back from the share price to the general news coverage (sort of one degree of separation) I think a reason for no media explosion (no "Hindenburg Moment") around the Florida event is the nature of the crash. That is, "unfortunate driver on highway drives into truck, and is killed." That is tragic of course, but I think the "HM" comes when the car involved in the incident kills someone else (in another car, or worse, a pedestrian). That scenario invokes all sorts of Frankenstein/Frankencar, Our Robot Masters, death-by-AI SkyNet imagery. Throw in that a Tesla costs 3x as much as the average vehicle and we can get some class resentment as well. So while I do think a true HM would damage the autonomous-vehicle trajectory, and maybe the TSLA price, I think we haven't seen one yet, and with luck we won't. As long as the perception is "dumb rich guy kills himself in his expensive toy" (please don't flame me, I am not calling the driver dumb, or rich, or the Tesla a toy, I am commenting on the possible public perception!) then media outrage is muted, and to the extent that moves the share price, the share price bump is muted. Watch out, however (and I hate being this morbid) if the car hurtles off the road and nails an innocent bystander.
brad
Tue, 2016-07-12 16:30
It would be worse
While I agree, hurting a bystander would get more attention and criticism -- and you could make it worse and have it be a child -- the lack of blip here tells me that while there will be a blip in that situation, it seems less and less likely it would be a Hindenberg moment. Hindenberg moments do happen, but they are pretty rare. The worst I see now would be the destruction of a particular project, or a particular technological method.
Glenn Mercer
Wed, 2016-07-13 11:04
Alternative to Hindenburg (sic)
I agree they are rare, and I agree the impact (my bad choice of words!) might just be on a project or a method. Perhaps more likely than a full HM is a slow near-death-by-liability-lawsuits, as happened to general aviation until the Feds gave it some immunity. See:
https://www.wikiwand.com/en/General_Aviation_Revitalization_Act
But again, I don't really know what I am talking about.
Yet I am just concerned that putative "Silicon Valley over-confidence" (probably a meme as opposed to a reality) will, if it is not careful, stroll right into the American trial-lawyer meat grinder (a reality, not a meme!). I believe it is possibly counter-productive, in light of this, for Certain People to make statements that appear (sic) to "blame the victim" -- juries do NOT like that (even when the blame is accurately assigned).
Remember that Toyota paid some $3 billion in recalls and fines for its sudden unintended acceleration problem, which in virtually every case could be averted by either shifting the car into neutral or holding down the power button long enough to turn off the engine. And Toyota didn't dare breathe a peep about this being the drivers' fault.
Oh well, fretting ain't gonna make anything better, so I will stop fretting.
Isabel Draves
Wed, 2016-07-13 06:17
Yep
As a bystander, I agree that a perception of "a car-obsessed YouTube show-off with a history of testing his car's limits, killing himself by watching a movie while driving 85 mph" would not cause eager investors to worry about an investigation. Also, I recall a somewhat similar accident where no autopilot was used, years ago on route two in Concord MA, where a truck driver ignored his red light as a woman drove across the highway with the right of way, which is just to say crashes happen and people know that.
The problem will occur when someone is using autopilot as prescribed and dies or kills someone else. But even that, people are already expecting.
brad
Wed, 2016-07-13 13:41
That might be what it is
It might be "a car obsessed...." but we don't know that for sure, and the public certainly doesn't know it for sure, and it might not even be true.