Driving without a map is another example of being cheap rather than being safe
There was a lot of buzz yesterday about a publication by a team at MIT's CSAIL about their research on driving without a map.
Rather than describing a big breakthrough, what is described is fairly similar to the work done in the first two DARPA Grand Challenges, which by 2005 had the winning cars driving a 150-mile course through desert roads with just GPS waypoints. (The CMU teams did try to do some rough mapping in the 4-hour window after getting the waypoints.) The reasoning goes: humans can drive without a map, so why can't our robots?
I have discussed the merits of driving without a map before. Any car that can drive without a map is a great resource for building a map -- why would you want to throw away the useful information from another car that drove the road before you, along with the ability to process that information with as much CPU as you want and to see it from different vantage points? It's a huge win.
It is also important to understand that cars that drive with maps must still function when their map is wrong, because the world changes underneath the map. A car is still more capable with an old, wrong map (whose flaws, as it turns out, are obvious) than if it makes zero use of prior information. Here's a post on this.
But I want to discuss the real flaw in this logic, which I see manifest in several other areas of development.
This is the wrong time to make robocar driving cheaper
The mistake is the natural instinct everybody has to do things at lower cost. If something is expensive, like mapping, bandwidth or LIDARs, we all automatically want to think of how we might eliminate it, or make it cheaper. That's a good instinct, and in a number of years, that will be an important thing to do. Today, if there is a choice between being cheap and being safe sooner, then being safe is generally the way to go.
For all the teams trying to win the robocar market, getting to market is paramount, and that means getting to the level of safety necessary to go to market, and operating at that level of safety. The early prototypes and pilot projects should not decide to make their vehicle less safe just to save money. Almost any amount of money.
That's why you've seen most cars using a $75,000 Velodyne LIDAR. That cost is too high, though everybody knows that by production time, much cheaper LIDARs will be available (they already are). But some think even $5,000 LIDARs are too expensive, and hope to drive with just cameras. I say it would be crazy to give up any truly useful sensing ability just to save $5,000 in the first five years of operation.
The logic is different for taxi fleets and cars-for-sale. Car makers are used to living in a world where adding $5,000 to the parts cost of a car adds $10,000 or more to the retail price and puts it in an entirely different car class. The instinct to be cheap has been pushed into them hard. For a taxi fleet, an extra $5K per taxi is not a big deal in the early years, when the issue is getting out there and getting customers, not being the cheapest. Over the 250,000 mile life of a taxi, it's 2 cents extra per mile. When you are taking taxi rides down from $2/mile to under $1 by eliminating the 30-90 cent/mile driver, that's not a dealbreaker.
Once there is a large market, with many competitors, then you can start competing on price. Then the ability to be a few cents cheaper will make the difference. In the early days, being safer, sooner is what will make the difference.
The article cited above makes another strange mistake. It believes that the size of maps is a big cost issue.
“Maps for even a small city tend to be gigabytes; to scale to the whole country, you’d need incredibly high-speed connections and massive servers,” says Teddy Ort of CSAIL.
Oh no! Gigabytes! If you can even find a drive that small any more. Cars will easily be able to store the base maps of 99.9% of the places they are going to go, updating them from time to time when connected by wifi or similar. If a car needs to drive a road entirely outside its predicted range, it need only download those streets as it gets close to them, plus updates for any streets along its specific route. The data needs are quite modest, easily handled with 3G networks, let alone 5G.
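A rough back-of-envelope sketch of the claim, in Python. The per-kilometre map size and the driving speed here are illustrative assumptions, not figures from the article or from any real mapping system:

# Back-of-envelope: bandwidth needed to stream map data just ahead of a car
# driving roads it has never cached. All numbers are assumptions.
MB_PER_KM = 10        # assumed size of detailed map data per km of road
SPEED_KMH = 60        # assumed driving speed

mb_per_hour = MB_PER_KM * SPEED_KMH      # map data consumed per hour of driving
mbps_needed = mb_per_hour * 8 / 3600     # sustained megabits per second required

print(f"~{mb_per_hour} MB/hour, ~{mbps_needed:.1f} Mbps sustained")
# ~600 MB/hour, about 1.3 Mbps -- comfortable on 3G, trivial on LTE or 5G,
# and only needed for the small fraction of roads not already stored in the car.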
If a car should somehow need to transfer terabytes of data, it has an ability that ordinary computers don't -- it's a robot, and can drive when empty to a location where it can do that, even if it doesn't have wifi where it otherwise parks. It can even drive to a drive-swap station if truly desired. Amusingly, if you want to send a petabyte across town, the cheapest and fastest way to do it might be a robocar carrying a box of drives.
The same error is made by MobilEye with their REM mapping plan. With REM, they have proudly announced that they have made map data and updates from cars super-small, so they can be constantly updated over mobile networks. That's not a bad thing per se, but betting that bandwidth is going to be expensive has rarely been the right bet. Obviously, if you can use smaller files with zero cost in capability or safety, do it. But there's usually a tradeoff.
Mapping is expensive. In particular, since the AI needed to fully understand the road is not yet ready, most teams want human review of the maps generated by their AIs. The actual driving of the roads to gather the data is expensive if you pay people to do it, but once you have a large enough fleet of cars out there, I don't think the cost will be a major burden. Some day -- but not on day one -- an ability to do the rarely used roads that are uneconomical to map will be very worthwhile. But remember, those roads are the uneconomical ones, so demand is inherently low, and the people who want to drive them will be willing to manually drive them once for a mapping pass.
As noted, the most common instance of this error has been the effort by Tesla and a few other teams to build cars that don't use LIDAR. This is a risky bet. To drive without LIDAR requires a real computer vision breakthrough. That breakthrough will come, but nobody knows when, and even when it does, I believe the breakthrough cameras plus LIDAR will still be a bit better than the cameras alone. Nobody who uses LIDAR is ignoring computer vision. If you win the bet, you may be a bit cheaper. That might help you if you are Tesla, selling cars to end-users. But it's a risky bet, because if you lose, you are way behind.
Comments
gwern
Tue, 2018-05-08 20:43
Permalink
Maybe it would be easier to
Maybe it would be easier to make the point about 'penny wise, pound cheap' by explicitly including mortality as a cost of operation.
So for example, let's say each death costs $10m (lawsuits & settlement, legal expenses, reputation, investigation, delays... statistical value of life tends to be $5-10m, so this seems like a good guess.). A roughly human-equivalent self-driving car might have a fatal accident every, what was the estimate for humans, 1 million miles? A taxi-style car might get 0.5m miles before needing to be junked. If that car costs $100k to make and operate because of a fancy LIDAR and expensive mapping infrastructure, its cost per mile is ($100k + 0.5*1*10m)/0.5m = $10.2/mile. Suppose you save $30k off that $100k manufacturing by using cameras only but this comes at the cost of doubling risk. (Possibly too optimistic given the current Uber/Tesla record.) Now your cost per mile is ($70k + 1*1*10m)/0.5m = $20.14/mile. The manufacturing savings double your cost. In other words, the mortality cost is the overwhelmingly dominant cost and the manufacturing costs are a minor input. (Indeed, even if the LIDAR could be scrapped with no effect on safety, the cost saving is only to $10.14/mile.) I think you have to get to well above 1 death/10 million miles before decreases in car manufacturing cost really start to move the needle.
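Here is a minimal sketch of that cost model in Python, using gwern's illustrative numbers (none of these parameters are claims about real fleets):

def cost_per_mile(vehicle_cost, deaths_per_mile, cost_per_death=10e6,
                  vehicle_life_miles=0.5e6):
    """Lifetime cost per mile: hardware plus expected mortality cost."""
    expected_deaths = deaths_per_mile * vehicle_life_miles
    return (vehicle_cost + expected_deaths * cost_per_death) / vehicle_life_miles

# gwern's two scenarios: full sensor suite vs. cameras only with doubled risk
lidar_and_maps = cost_per_mile(vehicle_cost=100e3, deaths_per_mile=1 / 1e6)
cameras_only = cost_per_mile(vehicle_cost=70e3, deaths_per_mile=2 / 1e6)
print(f"${lidar_and_maps:.2f}/mile vs ${cameras_only:.2f}/mile")
# -> $10.20/mile vs $20.14/mile: the $30k hardware saving is swamped.

(With the roughly one-fatality-per-80-million-mile human baseline Brad cites in his reply below, both figures shrink to well under a dollar per mile, but under this model the ordering is unchanged: doubling the risk still costs more than the $30k it saves.)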
brad
Tue, 2018-05-08 23:47
Permalink
Some math problems
First of all, a fatality is roughly every 80 million miles. Modern insurance costs about 6 cents/mile, but it doesn't pay all the costs of accidents. Robocar companies will probably pay a larger portion.
But I refer not to the marginal risk cost, but the threshold cost. For every team, there is a day when the board of directors listens to the presentation on safety and says, "OK, we are good to release." That's a binary day, and it factors in the risks to the whole company and the program from serious accidents, which probably are more than the risk to the public.
Mårten Thornberg
Thu, 2018-05-10 16:51
Permalink
The value of life
Today, as long as manufacturers have the eyes of the world on them, the tolerance for accidents is very low. However, in a future where human drivers are a minority, perhaps not even allowed on public roads, the acceptable accident rate will be governed by economic arguments like these. That is, it will be up to policymakers to set the bar for what accident rate is tolerable. That might be a bit dangerous, since there will probably always be pressure from the manufacturers to reduce the marginal cost and thus to lower the bar.
James Salsman
Fri, 2018-05-11 16:39
Permalink
Mårten, those pressures will
Mårten, those pressures will be offset by insurance costs, as they are in all insurance markets.
Mårten Thornberg
Sat, 2018-05-12 05:16
Permalink
Insurance cost
Assuming there is something to insure against! The real costs of an accident are an externality. Unless legislators regulate the market to properly internalize those costs, the cost of an accident to the operator will be way too low. Today it's less of a problem, since in every car there is a human driver who stakes his own life on not getting into an accident. The incentive to avoid a crash for a robocar operator will be a lot less (i.e. only the cost of repairing the car), unless there is proper legislation. The problem is, there will be pressure on legislators from the industry not to internalise those costs. (Typically the industry lobbies for less regulation; it's in their interest to make accidents as cheap as possible.)
Anthony
Sat, 2018-05-12 07:20
Permalink
Externality?
In what way is the real cost of an accident an externality? I assume the rise of robocars will mean a movement away from suing drivers under whatever the state's motor vehicle tort system is and toward suing manufacturers for products liability. Manufacturers will have much deeper pockets than most drivers, and eventually the requirement for "drivers" (really, operators) of driverless robocars to maintain insurance might go away completely, possibly replaced by a requirement for manufacturers to provide insurance or proof of ability to pay out $X in claims through self-insurance.
Mårten Thornberg
Sat, 2018-05-12 09:11
Permalink
An externality is:
An externality is: "a consequence of an industrial or commercial activity which affects other parties without this being reflected in market prices, such as the pollination of surrounding crops by bees kept for honey." (from google's dictionary). Another common example is pollution.
A car accident may cause damage to other people and property. If the operator (and by extension the manufacturer, since the operator of the robocar is their customer) doesn't have to pay the full cost of the damages caused, those costs are externalities. Proponents of market economies often suggest that the way to fix this is by creating laws/taxes/regulations that ensure operators pay the full cost of their business operation -- in this case the property damage, the hospital costs, pain and suffering, reduced income due to long-term or permanent injury, and so on. The different legal costs you mention are examples of such regulations.
For example, LIDAR makes the cars safer, but the LIDAR sensor is expensive. It becomes cheaper for the manufacturers to remove the LIDAR if the increase in accident rate (and associated legal and material costs) is lower than the cost of using the LIDAR sensors. The manufacturers will choose the cheaper solution, since they always try to minimise marginal cost.
The major problem is that there are two ways for the industry to lower the costs (for them) associated with accidents.
1. Make the cars safer.
2. Lobby the government to change laws in such a way that accidents become less expensive for them (i.e. so that they do not pay the full cost the accidents cause). The industry naturally doesn't want to pay all the costs associated with their operations if they can avoid it.
I dare say most people would agree that solution 2 is undesirable. As long as they don't pay the full cost for the accidents, there will not be enough incentive to make the cars as safe as they should be.
Today the situation is a little different. Although costs associated with car accidents are still externalities, there is a human driver in every car who also puts his own life at stake when driving. So there is a very real and powerful incentive for every driver to do his best to avoid accidents. That will not be the case in the future, when the owner/operator doesn't sit in the car. I also suspect private ownership of cars will be less common, and people will tend to use taxi services instead. In that case you also have parties with very unequal economic, legal and political power: the operators of the robocars will be big businesses, while the victims are likely to be ordinary private individuals.
Anthony
Sat, 2018-05-12 10:20
Permalink
Where is the externality, not what does the word mean
The part I don't understand is why you think people don't pay the full cost of their "accidents." You might mean because people are generally underinsured, but robocar companies likely will have deep pockets. You might mean because of no-fault liability laws and tort reform limits, but products liability suits are generally exempt from no-fault laws and even most tort reform limits.
Anthony
Sat, 2018-05-12 10:50
Permalink
externalities
"creating laws/taxes/regulations that ensure that the operators pay for the full cost of their business operation"
I guess this is the part I don't understand. We already have laws that require people to pay for the injuries that they cause to others. In some states there are laws which *create* externalities (such as "no-fault" laws which limit the ability to sue someone who causes a motor vehicle accident), but these laws generally don't apply in the case where you sue the manufacturer of the car rather than the operator.
When you are hit by a robocar because the robocar did the wrong thing, maybe you sue the operator under your state's tort system (maybe not if it's a no-fault state), but you *definitely* sue the manufacturer under the theory of negligence or strict products liability. If lack of LIDAR was the cause of the crash, you sue the manufacturer for not having LIDAR, under the theory that lack of LIDAR was negligent or was a design defect.
Mårten Thornberg
Sat, 2018-05-12 21:15
Permalink
The law is different in every
The law is different in every jurisdiction, and whether one thinks all the costs from an accident are fully internalised right now doesn't really matter in this case. The point is that in the future there might be a constant pressure on lawmakers to make accidents less expensive for the robocar operator. That in turn means the externalities might grow, and the incentive to create safe cars will become smaller than it should be.
brad
Sat, 2018-05-12 21:34
Permalink
Possibly
But frankly, this doesn't make regulation the answer, as pressure on lawmakers is even more effective at getting them both to weaken regulations to improve company profits and to strengthen them to keep new players out of the space.
Anthony
Sun, 2018-05-13 06:13
Permalink
laws
Yes, laws are different everywhere. Under the common law, as far as I'm aware, there's no externality so long as the manufacturers have deep enough pockets. Lawmakers of course can introduce externalities, and it's possible that they will.
Overall I expect the move from operator liability to manufacturer liability will make things much better. Right now, if you get permanently injured in a car crash and the driver is not extremely wealthy and only has the state-required minimum insurance, you're almost certain not to receive a payout equal to even your monetary losses, let alone your pain and suffering. Back injuries which ruin lives are fairly common, and usually go uncompensated. Now, change the defendant from some average-income schmoe with state-mandated minimum insurance to a large manufacturer, and you're much more likely to get the large settlement you deserve.
Lawmakers might be tempted to change that. And, frankly, it might make sense to do so. When you drive a car on the roads you assume the risk that others are going to make mistakes. If lawmakers do decide to limit liabilities, though, they should concurrently introduce regulations, because that does create externalities.
Mårten Thornberg
Sun, 2018-05-13 14:21
Permalink
What you said seems
What you said seems contradictory to me:
* "as far as I'm aware, there's no externality"
* "Right now [...] you're almost certain not to receive a payout equal to even your monetary losses, let alone your pain and suffering."
If whoever caused the accident does not pay for all the damages caused, that is a classic example of an externality. That means the car operator has less economic incentive to reduce the risk of accidents than he should have. If the operator does pay for all the damages, then it will be more economical for him to drive safely and choose a safer car in the future, and the manufacturers will have more incentive to produce safer cars. I.e. the "invisible hand" ensures the manufacturers make the cars as safe as they should be.
brad
Sun, 2018-05-13 14:42
Permalink
You are correct
It is quite probable that the damages should be adjusted up. However, they should be adjusted up for all accidents, not just those caused by robots. That might be politically difficult.
One big factor which keeps damages low is the insurance industry, which, after all, pays for almost all of them. They are motivated to keep them low. Because they handle effectively all car crashes, they have a great deal of power to do so. Robocar companies will mostly self-insure, and will also want to keep payouts low (after customer confidence gets high enough).
One way the insurance industry keeps payouts low is that they are often on both sides of the table, and they have also built a large system so that you don't get more than is covered. While they do not directly mind if the total damages exceed their coverage amount, that excess is much more difficult to collect. Somebody hits you, and their insurance offers you a nice tidy sum without the need to sue. Want more? You have to sue.
Anthony
Tue, 2018-05-08 21:20
Permalink
LIDAR
"To drive without LIDAR requires a real computer vision breakthrough." Musk's argument is that a real computer vision breakthrough is needed anyway, at least if you want to be able to drive in adverse conditions like heavy rain or snow.
Interestingly, if Uber's fatal crash winds up being the result of the car having to ignore the LIDAR detection of the pedestrian because of LIDAR being prone to false positives, I think this will be a good example of why LIDAR is just a crutch. If it's not reasonable (because of false positives) to slam on the brakes every single time LIDAR detects something in front of the car that *could* be a person, then the LIDAR system isn't really adding to the safety of the system, is it?
As far as cost vs. safety, right now Tesla's way of playing it safe is to do the vast majority of their on-the-road data collection in "shadow mode." So it's not really about them being cheap rather than safe. In fact I'd say their strategy is probably safer, and definitely safer if they add on LIDAR *after* developing a system which can operate without it.
brad
Tue, 2018-05-08 23:44
Permalink
It's a bet
I don't think LIDAR is a crutch, and neither does Waymo, which is attaining great success with it. Of course, Waymo also has the most advanced neural nets and computer vision too.
One thing LIDAR is very good for is avoiding the kind of false negative under discussion. You don't need to know what the lidar points standing 5 feet tall on the road are; you know to stop for them. Your vision will help you, but in the end the decision to stop (or at least slow) is the right one. It is true that LIDAR is not going to help as much with a small pile of debris on the road.
But the breakthrough I speak of is getting vision to 99.99999%. Vision does not need to get that good to help LIDAR, but it does need to be that good to drive on its own. Because when you're not sure, you can slow, you can stop, you can ask the passenger for help, you can ask the remote control center for help. Sometimes the LIDAR resolution is poor enough that it's not sure. But it's never wrong on the question of a person standing in front of you. It says something is there.
Anthony
Wed, 2018-05-09 05:16
Permalink
Low resolution points, as I understand it
A set of points is not necessarily a single connected object. It's also not necessarily a solid object. So you can't stop every time you see some points which *might* be a solid object standing 5 feet tall on the road (especially when it's not even in your lane). Not if you want to produce an autonomous vehicle which individuals will want to buy, anyway.
Yes, Waymo doesn't think LIDAR is a crutch, and maybe they'll ultimately be proven right about it. But Waymo also seems to be building a different product than Tesla. I assume Tesla is hoping to capture the individual consumer market, which has its own unique challenges that make Tesla's approach make more sense.
It says something is there? There's always something there. The question is whether that something is a solid object capable of causing significant damage to the vehicle (or human). LIDAR can't answer that by itself.
brad
Wed, 2018-05-09 11:12
Permalink
There is not always something there
Stray points show up on lidar, but not frame after frame. Every 100ms you have a new frame. LIDAR tells you where your target is, so your vision software doesn't have to understand the whole world. You don't have to ask, "Show me all the people in this frame." You ask, "Is this specific collection of pixels a person?"
Even if you think most of the understanding is going to come from computer vision, computer vision on RGBZ (colours and depth) is more capable than computer vision on RGB.
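A minimal sketch of what that division of labour can look like in code. This is a hypothetical pipeline, not any particular team's software: the lidar proposes a 3D cluster, the camera crop around its projection goes to a classifier, and the metric range comes along for free.

import numpy as np

def project_to_image(points_xyz, K):
    """Project Nx3 lidar points (already in the camera frame) to pixel coordinates."""
    uvw = points_xyz @ K.T            # pinhole projection with 3x3 intrinsics K
    return uvw[:, :2] / uvw[:, 2:3]   # divide by depth to get (u, v)

def classify_lidar_cluster(cluster_xyz, image_rgb, K, classifier):
    """Ask the vision model about one lidar-proposed region, not the whole frame."""
    uv = project_to_image(cluster_xyz, K)
    u0, v0 = np.floor(uv.min(axis=0)).astype(int)   # bounding box of the projection
    u1, v1 = np.ceil(uv.max(axis=0)).astype(int)
    crop = image_rgb[v0:v1, u0:u1]                  # "is this collection of pixels a person?"
    rng = float(np.linalg.norm(cluster_xyz, axis=1).min())  # range comes straight from the lidar
    return classifier(crop), rng      # classifier is a placeholder for whatever network you use

The same projection is what makes the RGBZ point above concrete: the lidar returns that land inside the crop can be splatted into a fourth, depth channel before the crop goes to the network.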
Anthony
Wed, 2018-05-09 16:21
Permalink
You make it sound like you
You make it sound like you only have a single frame from a single camera. But in fact you have several frames over time, from multiple cameras in different locations. Detecting an object in the roadway, with better-than-human results, is easy. The hard part is identifying what the object is, and LIDAR doesn't help much with that.
brad
Wed, 2018-05-09 22:10
Permalink
Identifying with LIDAR
Actually, LIDAR is not too bad at identifying. It can have trouble telling 2 pedestrians from 3, but do you really need to tell that difference to know not to hit either?
Pedestrians are identifiable from their speed and direction, as well as their shape and the way they have moving arms and legs. Ditto bikes. Not perfect, but reasonably doable.
And again, if there is something that could be a pedestrian, and you don't have solid confirmation that it isn't one (or some other obstacle), the right thing to do is slow or stop. You don't need to know precisely what it is. Some things that show up as ghosting on lidar, like exhaust, are not found where pedestrians are, nor do they persist in the same way.
Anthony
Thu, 2018-05-10 20:37
Permalink
My comment was about LIDAR
My comment was about LIDAR "by itself." Yes, when combined with a good AI, you can make good guesses. Except when you guess wrong, which is apparently what Uber did.
In what scenario would LIDAR show something that could be a pedestrian, but cameras would not, right up until the point where it's too late to stop? If there is such a situation, and it's not avoidable (like driving at night with no headlights), then yeah, LIDAR is safer than no-LIDAR.
If not, then LIDAR isn't safer. It's just possibly, arguably, easier.
I think it comes down to your software. Either it's close enough to perfect that your crash rate is lower than that of a hypothetical human driver who always drives defensively and follows the laws of the road (you have to exclude, at the very least, drunk drivers and speeders from the average crash rate, I think). Or it isn't, and, in my opinion, your vehicle shouldn't be operated without a human driver in control of it.
If we can get to that point without LIDAR, we should. Even if we get to that point with LIDAR first, we should still try to do it without LIDAR, because LIDAR is expensive. Especially if you don't own the patents.
Anthony
Thu, 2018-05-10 20:44
Permalink
Note: Replace "follows the
Note: Replace "follows the laws of the road" with "tries to follow the laws of the road." I think that's the target. If a robocar can do a better job than the average drivers license holder who *tries* to drive the way we were taught to drive, then they should replace humans as quickly as possible. Maybe even throw in a few negligible violations, like going a little bit over the speed limit and rolling through a stop sign once in a while (when you have enough visibility that coming to a complete stop is dumb except that there might be a cop or a camera watching).
brad
Thu, 2018-05-10 21:07
Permalink
LIDAR is expensive
LIDAR is only temporarily expensive. It's just electronics. Once you make anything electronic in automotive quantities it becomes cheap. Even if it spins.
The best cars will use every sensor that makes sense, and they will all become cheap when made in quantity 100 million.
Why discard the ability to have superhuman vision? Google's first cars, like many others, drove entirely with LIDAR, but that was in the era before lidar/camera fusion was fairly easy, when deep neural networks were just experiments. LIDAR's two big advantages -- depth perception and ambient-lighting invariance -- are too good to give away. (You can get the latter with an infrared camera, I guess.)
LIDAR will see things in absolute blackness, including where the headlights are not pointed. LIDAR can't be fooled by a picture on a billboard or the side of a bus. LIDAR will not care if the subject is in the dappled shadow of a tree and turned into a cloud of light and dark spots. LIDAR will not confuse a close child with a more distant adult. But don't drive with just it. Drive with everything, unless something is redundant and might just confuse you.
Anthony
Fri, 2018-05-11 17:38
Permalink
when?
By all means companies should add on LIDAR once it gets cheap enough to justify the minimal boost in safety (from human-level to superhuman-level, I guess). When it gets to that point, probably the regulators (and/or fears of "design defect" products liability lawsuits) will require them to. I think autonomous vehicles will be widespread for a long time before that, though. We won't go from 1000 to 100 million robocars overnight.
> LIDAR will not confuse a close child with a more distant adult.
You'd have to have a pretty damn stupid system to confuse a close child with a more distant adult. Even without taking advantage of parallax, you have a map of the terrain and can tell how far away the person is just by looking at how much ground there is before the point where their feet touch it.
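For what it's worth, here is a minimal sketch of that flat-ground trick, assuming a level road, a known camera height, and a simple pinhole model (all the numbers are made up for illustration):

import math

def range_from_feet_row(feet_row, horizon_row, camera_height_m, focal_length_px):
    """Estimate distance to a person from the image row where their feet meet the road.

    Flat-ground assumption: the further below the horizon the feet appear,
    the closer the person must be."""
    angle_below_horizon = math.atan((feet_row - horizon_row) / focal_length_px)
    return camera_height_m / math.tan(angle_below_horizon)

# camera 1.4 m above the road, 1000 px focal length, feet 50 px below the horizon
print(range_from_feet_row(feet_row=550, horizon_row=500,
                          camera_height_m=1.4, focal_length_px=1000))
# ~28 m -- no parallax needed, as long as the road really is flat.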
brad
Sat, 2018-05-12 00:31
Permalink
One would hope
But some people are trying to build systems with just neural nets on the camera. Those nets don't really have these concepts. They do draw a bounding box, and the software looking at that can use the location of the box to estimate the distance to the obstacle.
But my main point is that LIDAR just never is confused about how far away something is or how big it is. Computer vision has to figure that out. Stereo peters out in value fairly soon. And of course the crazy folks want to drive without maps.
You can make mistakes. You have to go at least a million miles without making a serious mistake. LIDAR is able to do that, on the things it is good at. Computer vision is not. That might change. But it's hard to see how vision+depth won't be a superior strategy.
Note as well that with more advanced steerable beam lidars, you can concentrate your points on the things of interest in your field, and get much higher resolution 3D images of them. I outline some techniques in my patent on this. With such an approach, if you have an obstacle that you are having trouble understanding with either lidar or camera, you can ask a lidar to spend more of its points on it (within eye safety and laser heat limits) and then become more sure of what it is. You can also do this with vision of course, and in fact since most CNNs can't handle the full resolution of modern digital cameras, digital zooming is easy.
Anthony
Sat, 2018-05-12 08:39
Permalink
LIDAR
> LIDAR just never is confused about how far away something is or how big it is
I'm really not sure what you mean by that. It requires intelligence to convert a bunch of pixels in spacetime into an advanced concept like how big "something" is. That intelligence could be flawed. Especially if the resolution is low. Five pixels could be five small objects, or one big object, or something in between. Those object(s) could be pieces of plastic, human beings, birds, cars, particles of car exhaust, anything, or even nothing (any one or few of those pixels might be noise). Sure, using artificial intelligence and/or a long list of scenarios, combined with multiple readings over time, the car can determine what it's most likely to be, most importantly separating it between things that can be run into and things that can't. But that's the part that can be flawed.
I don't think you can sell a car which slams on the brakes every time it sees five pixels which might possibly be a human in the car's lane. So you have to introduce intelligence which tells the car when those five pixels are almost definitely not a human (or other object for which the brakes must be slammed on for). And that intelligence might be flawed.
brad
Sat, 2018-05-12 18:08
Permalink
Not confused
Of course, the software needs to interpret the point cloud. But when it is trying to figure out what the point cloud is, there will be no doubt about where in 3-space it is. In addition, points which are adjacent in 2-space (i.e. in an image) but far apart in 3-space will never be conflated.
A car is not going to react to just 5 pixels, at least not to slam on the brakes. It might slow a bit at the sign of a stalled car out at the outer range of its sensor, just as you would. A few distant returns (what we call pixels) followed by more returns suggesting something large, distant and not moving at the speed of traffic, would be a sign for caution. As you got closer you would learn more, and brake or continue on.
There will be tuning thresholds on false pos/neg for small items like birds, pieces of plastic, leaves, trash, car exhaust. There should not be the same level of thresholds for big things -- people, vehicles, couches. A person is not going to be just 5 returns. They might be at the outer range of your sensor, but as they get closer they will be scores and hundreds of points, combined with distance information to feed into your camera image, and you will not have a significant chance of confusing the person for a false positive.
Anthony
Sun, 2018-05-13 05:52
Permalink
Hundreds of points at what
Hundreds of points at what distance, with which device?
If it turns out that Uber *did* confuse a person for a false positive, I guess the chance *was* significant.
brad
Sun, 2018-05-13 14:57
Permalink
Depends on the device
For the Velodyne 64, the spec claims 0.09 degree resolution, but I think it's not quite that good. At that resolution, a pedestrian subtending 2 degrees of width could return about 20 points per line, probably on 10-15 lines on the 64-line model, and double that on the 128-line model. That's at 60 feet away. A more foveal LIDAR that concentrates points could do it much further away. 60 feet is not far enough to stop from 40 mph, of course, but one could slow quite a bit, or swerve if the road is open. However, you identify things well before then.
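A rough recreation of that arithmetic. The pedestrian dimensions and the roughly 0.4 degree vertical channel spacing of a 64-line unit are assumptions; the 0.09 degree azimuth figure is the one quoted above:

import math

def returns_on_pedestrian(range_m, width_m=0.5, height_m=1.7,
                          azimuth_res_deg=0.09, vertical_res_deg=0.4):
    """Rough count of lidar returns on a pedestrian-sized target at a given range."""
    width_deg = math.degrees(2 * math.atan(width_m / 2 / range_m))
    height_deg = math.degrees(2 * math.atan(height_m / 2 / range_m))
    return width_deg / azimuth_res_deg, height_deg / vertical_res_deg

per_line, lines = returns_on_pedestrian(range_m=18.3)   # about 60 feet
print(f"~{per_line:.0f} points per line on ~{lines:.0f} scan lines")
# roughly 17 points per line across about 13 lines, consistent with the
# "20 points per line on 10-15 lines" estimate above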
Mårten Thornberg
Wed, 2018-05-09 10:13
Permalink
What happens if the map data is outdated
How does a car that depends on detailed map data handle situations where the map is outdated? E.g. one morning there might be road maintenance being performed in one of the lanes. That is a fairly common situation.
brad
Wed, 2018-05-09 11:08
Permalink
When the map is wrong
https://ideas.4brad.com/robocars-driving-when-map-wrong
Kivi Shapiro
Wed, 2018-05-09 19:09
Permalink
Sending a petabyte across town
Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.
brad
Wed, 2018-05-09 22:11
Permalink
That is indeed the old idiom
That is indeed the old idiom I was referencing. But now it's 8TB hard drives instead of tapes, and it's a robot.
Jan
Sun, 2018-05-13 06:19
Permalink
Dear Brad, thank you for this
Dear Brad, thank you for this post. It is very impressive and I mostly agree with your points. I also went and read your post about mapping from December 2014, which is still valid today. Great work.
Now, I am generally a big fan of Tesla, but I agree with you that they are taking a huge bet by not relying on LIDAR for the foreseeable future. On the other hand, I can see the financial constraints they have at the moment. But even with these constraints and the decision not to use depth (i.e. LIDAR) for now, I feel that they still have more potential than they are currently using. I am referring to mapping. Why are they not taking advantage of mapping more than they do?
Sure, the most precise mapping is done with LIDAR tech, but it is still possible to do it with cameras. There are a couple of companies (for example lvl5.com and Mobileye) that base their business model on mapping with vision only. This mapping information could also be used by cars that use only vision to locate themselves and show the most frequented path forward. So I don't understand why they are not using it. This approach worked for a long time while they had the collaboration with Mobileye (we know they were using vision-based mapping successfully). Why do I think they are not using it?
1) The recent deadly crash in California, where a Tesla on Autopilot mistook lane markings and crashed head-on into a divider, despite this being a heavily frequented road with lots of Teslas passing each day. It could easily have been prevented by mapping.
2) Lots of videos on YouTube of Tesla Autopilot getting confused when the lane markings are not clear, at crossroads, at curbs, and on sharp curves.
3) A recent video even shows a Tesla on Autopilot drifting to the left into the lane of oncoming traffic, like a wrong-way driver. That would be unthinkable if mapping were used.
There must be a rationale behind Tesla not using HD mapping, but I can't get my head around it. Maybe it has to do with cost (as you mention above, although not in the financial sense).
So you say that mapping is expensive because of:
- the cost of having a human evaluate the mapping data. But if you require a minimum of 5 passes of the same road by different cars as the basis for a proper map, and only the spots where the mapping data or the driving path differs significantly between those passes need to be revised, then it should not be so expensive to do. Or you wait for these "vague spots" to be smoothed out by the next 10 passes...
- the cost of the mapping vehicle with a driver inside. This cost is zero for Tesla, as it would be mapping while the driver is driving anyway. This would also quickly and automatically extend the scope and volume of the map.
The only reason I can think of is that Elon is extremely convinced that his first-principles approach of betting on the rapid advancement of neural networks dominates the whole effort, and that the whole processing power of the system is focussed on this. In other words, doing mapping and advanced vision-based driving simultaneously could be too expensive in terms of processing power for the Nvidia GPUs built into the Tesla cars. But again, as it worked quite well in the past with Mobileye, I am really puzzled here.
Would love to hear your opinion about possible reasons for the Tesla strategy, even if it probably is speculation at this point.
Thanks, Jan.
brad
Sun, 2018-05-13 14:45
Permalink
Won't speak to Tesla's motivations too much
Since I am not there. But they began with the goal of seeing what they could build into today's production automobile, and LIDAR is not an option for such a vehicle today.
I really do suspect there is a fair bit of Google envy out there. As in, "Google is doing it this way, Google is really really smart and started years ago, so if we do it that way, they will beat us. So to win, we have to try a different approach that they are not doing. If that works, we rule."