Tesla's use of the phrase "beta test."

Some of the reaction to the story of the lawsuit against Tesla came from Tesla's declaration that Autopilot is a product in "beta test."

I don't think that's actually true. I think it's a misuse of that phrase by Tesla to communicate something that is true -- "This product isn't finished, expect it to have bugs."

The problem is that almost no software product is ever "finished." And even products declared finished almost always have bugs.

Silicon Valley has gotten into a bad habit because of this, which might be called "perpetual beta." One of the most famous examples was Gmail, which declared itself to be in beta even after many years and hundreds of millions of "beta" users. Tons of projects today never reach "version 1.0," which used to mean release and the end of the first beta.

A beta test is normally an attempt to try an almost-finished product on a limited number of real users, to see how they react and what problems they find. It can happen before a product is released, and it also happens on new releases: while most users stay with the "stable" or "release" version, a few users -- usually not paying -- agree to try the unstable beta version, with more features and more bugs. They are expected to report promptly any problems they find. People participate in beta tests both to get the software free and to get early access to new features they need.
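For readers who have never lived that release cycle, the convention is simple enough to sketch in code. Here is a minimal illustration in Python using the `packaging` library's PEP 440 version ordering; the version numbers and the `pick_channel` helper are hypothetical, chosen only to show how a beta release sits between the current stable release and the finished next one.

```python
# A minimal sketch of the "stable vs. beta channel" convention.
# Uses the real `packaging` library (pip install packaging).
# All version numbers here are hypothetical, for illustration only.
from packaging.version import Version

stable = Version("1.0")   # the "release" version most users run
beta = Version("1.1b3")   # pre-release: more features, more bugs

# PEP 440 ordering: a beta of the next release sorts after the
# current stable release, but before the final 1.1.
assert stable < beta < Version("1.1")

def pick_channel(channel: str) -> Version:
    """Users opt in to the beta channel; everyone else gets stable."""
    return beta if channel == "beta" else stable

print(pick_channel("stable"))  # 1.0
print(pick_channel("beta"))    # 1.1b3
```

The key property is the ordering: a beta is newer than stable, but explicitly not the finished next release -- which is exactly the meaning "perpetual beta" erodes.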

At least, that's what a beta test used to mean.

Now, it is true that Tesla drivers are part of the testing of Tesla Autopilot, both when they report bugs, and when their cars report them directly. But that's true of every product today, even after release. A proper beta tester is actually deliberately trying to find bugs in the system, stressing it. And a beta tester does not use the test product for mission critical or life critical things.

Alpha testing, for those who don't know, is the internal testing done by employees or certain very close associates before beta. Beta has to be with real users. No battle plan survives first contact with the user.

Several readers have remarked in anger that Tesla is "beta testing" on unsuspecting members of the public. Not simply on Tesla drivers, but on other people sharing the road. That would be beyond the normal scope of beta testing.

Tesla Autopilot costs a lot of money. Nonetheless, I suspect most customers buy it, though recently it became included in the price of the car. Tesla does have an "early access" program for some features for a limited number of customers. That is actual beta testing.

As I said, Tesla calls it beta testing because they are looking for ways to make it clear to drivers, "This will crash if you don't watch it carefully." Because, while they would never admit it, they are aware that there is a risk of people getting complacent. They knew long ago that they would get sued like this, and they want every defence against the "failure to warn of defect" doctrine in product liability. If you are a beta tester, it's very hard to claim they failed to warn you that there were bugs.

Testing in general

This doesn't answer the complaint that some people don't like the idea of car driving technology being tested on public roads at all, beta or otherwise. They don't even like it with trained safety drivers, and they don't like it at all with untrained Tesla owners.

Tesla's answer to that is to cite their accident record. I am not sure their statistics are given properly, and they refuse to give me the real numbers in spite of many attempts. However, it is still clear that Autopilot testing is not especially dangerous to either drivers or other road users. It's not, for example, several times as dangerous; and Tesla says that on average it's safer.
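To make the concern about the statistics concrete, here is a back-of-the-envelope sketch. Every number in it is invented for illustration -- none are Tesla's real figures -- but it shows how two fleets with identical per-road-type safety can post very different headline rates when one of them accumulates mostly highway miles, as Autopilot does.

```python
# A hedged back-of-the-envelope sketch of why raw "crashes per
# million miles" comparisons can mislead. All numbers below are
# invented for illustration; they are NOT Tesla's real figures.

# Hypothetical baseline crash rates per million miles, by road type.
HIGHWAY_RATE = 1.0   # highways are much safer per mile
CITY_RATE = 4.0      # city streets have far more conflicts per mile

# Suppose Autopilot miles are almost all highway miles...
autopilot_miles = {"highway": 9.0e6, "city": 1.0e6}
# ...while ordinary human driving mixes in far more city miles.
manual_miles = {"highway": 4.0e6, "city": 6.0e6}

def headline_rate(miles: dict) -> float:
    """Crashes per million miles implied purely by the road-type mix."""
    crashes = (miles["highway"] * HIGHWAY_RATE
               + miles["city"] * CITY_RATE) / 1e6
    total_millions = sum(miles.values()) / 1e6
    return crashes / total_millions

print(f"Autopilot-style mix: {headline_rate(autopilot_miles):.2f} per M miles")
print(f"Manual-style mix:    {headline_rate(manual_miles):.2f} per M miles")
# Even with identical per-road-type safety, the highway-heavy mix
# looks roughly twice as safe (1.3 vs. 2.8 per million miles here).
# A fair comparison must control for where the miles happen.
```

This is why the raw numbers, without a breakdown by road type, don't settle the question either way.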

The public does have a right to control how much risk people and companies expose the general public to. All driving exposes the others on (or near) the road to risk. The question is, how much risk, and for what goal? We accept a fairly high amount of risk from ordinary driving for the goal of "getting places faster and more conveniently." We accept the high risk of putting student drivers on the road for the goal of turning them into better drivers later in life. How much risk should we tolerate for the goal of "improving a product to the point that risk in the future is vastly less"?

Comments

I sometimes hear complaints about FSD programs along the lines of, "Why don't they fully test it before they put it on the road with the rest of us? Why are we expected to be guinea pigs?" Relatively few people understand that it's impossible to "fully test" such systems without putting them into the real world, or that there's a real urgency to getting FSD systems developed -- every year of delayed deployment means thousands of preventable deaths.

There's also lots of ignorance about the difference between driver-assist features and a full-blown FSD system. Whenever someone crashes while driving with Autopilot, people vow they'll never get into one of those "self-driving cars." And there's little appreciation of the fact that many different versions of both types of system are in development, with different levels of capability achieved.
