Nvidia simulator and Safety Force Field and other news from GTC


This week I am at the Nvidia GPU Technology Conference, which has become a significant conference for machine learning, robots and robocars.

Here is my writeup on a couple of significant announcements from Nvidia -- a new simulation platform and a "safety force field" minder for robocar software, along with radar localization and Volvo parking projects.

News from Nvidia GTC


The radar localization looks to be a great addition. I'm surprised it is not already part of localization, but then maybe localization is already pretty good except in circumstances like you mentioned of snow on the road or fog. Obviously physical location is not all of localization; lane markings can't be treated as though they are permanent fixtures, so LIDAR/vision is needed to locate within a lane.

Selling/booking car parking spots is something I've thought about in passing before. We have a minor product called SignUp which reserves sessions on workstations in public libraries and university labs (you are automatically logged out when your session is finished if someone is booked after you). I suggested that car parking might be similarly suited to a booking system.
If local councils are not keen on people selling their slots there are plenty of privately owned car parking lots/buildings who could run it as a service.
Such a system would seem to be most useful for irregular trips such as a large sporting/musical event. People might be happy to pay a premium to have a guaranteed park, without the stressful 15 minute hunt in heavy traffic. Congestion presumably could be lessened as people drive directly to their destination. Obviously integration with GPS/Google Maps etc would be a requirement.
Once all known carparks are full, the area could be restricted to residents and mass transport.
Hmm while writing this I get the feeling you've already done this topic a few years ago :)
Interesting also your Nvidia shareholding.
Based only on driverless vehicles (including delivery bots) I bought some of the following a few years ago for a 20 year hold: Nvidia, Google, Amazon, Boeing, Alibaba.
The first two for obvious reasons. Amazon and Alibaba because even if delivery bots enable mum and dad stores to compete with Amazon, I think Amazon and Alibaba will get in there early and big. Boeing because I think any advances in ground traffic convenience will make not just air travel, but also tourism, more attractive. Time will tell.

There are already many thousands of parking lots that will let you reserve space using a wide variety of parking apps.

Of course, for lots, they would rather not be full. That means they set their price too low. They would rather be almost full at a high price than full at a lower price.

This is why I think Tesla has one big advantage over most of the other companies. They have billions of miles of real world data from a wide range of locations, times, conditions, etc.

It's highly unlikely they're actually using that data as well as some other companies would. But without that amount of data I don't think the problem is solvable except in tightly geofenced locations.

Tesla isn't alone there, though. Some other car companies likely have lots and lots of data, but just don't brag about it as much.

It is often said Tesla has that data, but quite a few bits of information suggest that they don't. That Teslas are not transmitting this sort of data back to HQ (people have put sniffers on the connection) and they aren't using it. They do get data on accidents, but not things like video, at least as far as those trying to figure it out believe.

They're surely not sending back live 24/7 video of every mile, if that's what you're asking.

I'm not sure what the value of post-perception testing is. What exactly are you testing? The basic logic is simple. Don't crash into anything. At least, not anything significant. If you had data on how humans/animals/traffic-lights/road-hazards/etc behave you could make a good post-perception test, maybe. But if you had accurate data on how things behaved, you would have coded the logic correctly already. If there's a 0.0001% chance that there's a human standing in your lane (or about to enter your lane), you do something to avoid crashing into them. The trick is figuring out when there's a 0.0001% chance that there's a human standing in your lane (or about to enter your lane).

Pre-perception is the way to go. Specifically, using actual recordings of real world pre-perception data. And with that, you don't just test "will I crash into anything." You test whether humans are recognized as humans and bicycles are recognized as bicycles and how early they are recognized. You test whether the system correctly predicts what routes the "objects" are going to take. When things happen that your system predicted had zero chance of happening even though it should have predicted there was a chance of it happening, you investigate why.
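One concrete way to run that kind of test is to score how early each object in a recording is correctly recognized. A minimal sketch, assuming a hypothetical format where each frame is a set of ground-truth object IDs and a set of detected object IDs:

```python
# Hypothetical sketch: measuring how early each object is recognized
# when replaying recorded frames through a perception stack. The
# per-frame sets of object IDs are simplified stand-ins for real
# ground-truth annotations and detector output.

def detection_latency(ground_truth, detections):
    """For each object id, frames between its first appearance in the
    ground truth and its first correct detection (None = never found)."""
    first_seen = {}
    first_detected = {}
    for frame, objects in enumerate(ground_truth):
        for obj_id in objects:
            first_seen.setdefault(obj_id, frame)
    for frame, objects in enumerate(detections):
        for obj_id in objects:
            if obj_id in first_seen:
                first_detected.setdefault(obj_id, frame)
    return {obj_id: (first_detected[obj_id] - seen
                     if obj_id in first_detected else None)
            for obj_id, seen in first_seen.items()}
```

Tracking this number over new recordings, rather than against a static test set, is what keeps it an honest indicator.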

Perhaps most importantly, you are constantly feeding new recordings into the system. You can run old recordings as a regression test, but regression tests are not an indicator of how reliable the system is. As you correctly point out, engineers will simply fix all the problems against a static test until they perform perfectly. (But they'd no doubt introduce new problems while doing so, unless you are constantly feeding them with new, real-world tests.)

The large car manufacturers have a significant advantage here. But I think I can see Waymo's strategy, and it might work: Build thousands of cars. Deploy them, with safety drivers, as a taxi service in strategic areas (read: Arizona) at a price that can compete with Uber/Lyft without losing money too quickly, and then gradually expand from there.

Uber has 18,000 contract drivers and 1,000 employees in Arizona. If Waymo can get 2,000 cars that do 40,000 miles a year, that's 80,000,000 miles/year of real world data.

Is that enough? Probably not. I still think Tesla has the better plan. Get people to collect data for you for free. Tens of millions of miles a day. Unfortunately (for them, and for society), that means foregoing LIDAR. But when and if LIDAR becomes affordable (and it's probably inevitable), it'll be much easier for Tesla to install LIDAR in the 1,000+ cars/day they're producing than it will be for Waymo to ramp up production from 10 cars/day to 1,000+ cars/day.

The perception and post-perception systems are fairly divisible. And the post-perception art is quite involved. The perception part I refer to is the system that distills raw sensor data into a simpler form. After that, you have the systems that try to make meaning of it, and assign predictions to all obstacles and where they are going. And from that you have the path planning and execution of the plan.

But there are two main kinds of errors. The perception system can fail to segment its view of the environment or identify an object, and you do a lot of testing of that to make it better and better. But it's only modestly useful to test how good it is at identifying simulated things. You really need to know how good it is at identifying real things. There's no substitute for getting out there and capturing real sensor data of the real world and seeing how well the perception system figures out what it is.

The processing after that is fairly orthogonal. Not perfectly orthogonal, so you will do some full pre-perception simulation. The most important thing there is dealing with the way the view changes when you move while reacting to things in the world.

But it's key to test what the car does once it has seen (imperfectly) what's out in the world. That's why your simulated perception system won't report the precise ground truth for most of its tests, though it will for some. You will set it to make the mistakes your perception system makes: missing objects, misclassifying objects, getting the wrong location for objects, winking in and out on objects, complete sensor shutdowns, noisy responses, etc.
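A minimal sketch of what that error injection could look like, assuming a toy object-list format (`cls`, `x`, `y`) and made-up error rates; a real simulator would tune these rates to match the measured error profile of its own perception stack:

```python
import random

# Hypothetical sketch of a post-perception simulator feeding the
# planner imperfect object lists, mimicking the error modes a real
# perception stack exhibits. The error rates are invented defaults.

def degrade(objects, rng, p_drop=0.05, p_misclass=0.02, pos_noise=0.3):
    """Return a copy of the ground-truth object list with injected
    perception errors: missed objects, wrong classes, noisy positions."""
    out = []
    for obj in objects:
        if rng.random() < p_drop:          # missing object
            continue
        o = dict(obj)
        if rng.random() < p_misclass:      # misclassified object
            o["cls"] = "unknown"
        o["x"] += rng.gauss(0, pos_noise)  # wrong location
        o["y"] += rng.gauss(0, pos_noise)
        out.append(o)
    return out
```

Seeding the random generator makes a given error pattern reproducible, which matters when you want to re-run the exact scenario after a fix.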

Then you get to test things like what you will do when you encounter situations like cut-ins, surprise pedestrians, and literally a million different traffic patterns. You can do this orders of magnitude faster, so even if you decide it's not as useful, you are still going to be doing much more of it.

Both are useful. But teams are testing their perception system on real world data all the time; that's what all real road driving does. What they are not testing all the time is how the car reacts to rare situations on the road.

Teams that are all vision based have a different view, since most of their questions today are about whether their vision perception system works. LIDAR tools pretty much always detect there is something there (once it gets close enough), and their concern is more about figuring out what it is and where it's going.

"There's no substitute for getting out there and capturing real sensor data of the real world and seeing how well the perception system figures out what it is."

Absolutely. But the car that captures real sensor data of the real world doesn't have to be running the same software that you're testing.

"LIDAR tools pretty much always detect there is something there (once it gets close enough) and their concern is more about figuring what it is and where it's going."

I guess I'm confused what "post perception" means. Figuring out what it is is perception, isn't it?

Figuring out where it's going isn't exactly perception. In many cases figuring out what it is is equivalent to figuring out where it might be going. In the case of adult pedestrians, perceiving things like "gait, body symmetry, and foot placement" (https://economictimes.indiatimes.com/magazines/panache/self-driving-cars-get-smarter-they-can-recognise-and-predict-pedestrian-movements/articleshow/67975471.cms) is crucial to figuring out where it might be going.

More importantly, no simulation is going to be useful for testing this. Only real world data on how things move can help here.

You can simulate at many levels of perception. Pure post-perception simulation means the simulator hands you the same things your perception engine did. "We detect a pedestrian at these coordinates with this confidence." Possibly "moving this direction at this speed." Or it may ask you to figure out the direction and speed. Or you could have the simulator segment the sensor data and say, "We detect this group of points or that group of pixels, you figure out what they are."

The main point is that since it's less valuable to test your segmenters, recognizers and classifiers on synthetic data, you bypass that, and it saves you immense amounts of time so you can simulate vastly more miles.

In what way is it useful to test using synthetic data at all? Shouldn't you pretty much always use real world data?

I guess another way to ask this is, how do you know if the car did the right thing in response to the simulation? What are you testing?

Well, any pre-perception simulation (other than a replay of logs) is synthetic data. To my surprise, people are even getting advantages training with synthetic data (for how they perform on real world data.)

You have to use simulators for some things, like accidents.

"other than a replay of data"

Replaying data, either pre- or post- perception data, can be useful. Much more useful than synthetic data.

It's also possible to replay data from accidents. These can be accidents that happen accidentally. Or it can be "accidents" (crashes) that happen intentionally (with crash-test dummies, for instance). Admittedly, this is data that's hard to come by (in the first instance) or expensive to come by (in the second), but it's also much more useful than synthetic data.

That said, synthetic data will be more useful for testing the designs of other cars once we've figured out how to build a self-driving car, because part of building that first self-driving car is figuring out how the real world (particularly the human/animal controlled aspects of the real world) operates. It's easy to realistically simulate basic physics. It's not easy to realistically simulate human behavior. And testing a car against an unrealistic simulation decreases safety.

I should also point out that a test using replay of data has to be done right. In some instances you might have to let the car know that you're using replayed data. Otherwise, you're teaching it the wrong thing to do when, for instance, the brakes fail. Another alternative, which is trickier but usually better, is to give the car the replayed data and not ask it what to do, but rather ask it what it's "seeing" and what it predicts will happen.
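That "ask what it sees and what it predicts" style of replay can be sketched as a probe that never closes the control loop. Here `perceive` and `predict` are hypothetical stand-ins for the real stack's stages:

```python
# Hypothetical sketch of probe-style log replay: feed recorded frames
# through perception and prediction only, logging what the stack
# believed at each step, without ever acting on a plan (so the replayed
# sensor data never contradicts the car's commanded motion).

def replay_and_probe(frames, perceive, predict):
    """Return the stack's beliefs (objects and predictions) per frame."""
    beliefs = []
    for frame in frames:
        objects = perceive(frame)
        beliefs.append({"objects": objects,
                        "predictions": predict(objects)})
    return beliefs
```

Because no plan is executed, the replayed log stays valid end to end, sidestepping the divergence problem that full closed-loop replay runs into.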

Again, the logic part is fairly simple. Don't hurt anyone. Don't damage things. Follow the law. Get to your destination. (And don't destroy humanity, which of course is the 0th law.) The car itself will mostly follow simple rules of physics. The tough part is figuring out what the stuff outside the car is, what it is going to do, and how it will affect the operation of the car.

Replay of data can be good for testing perception of course, and that the planning system makes the same decisions as it did before (Regression testing.) However, as soon as the system decides on a different plan, it's hard to run replayed data because the sensor data does not show the car responding as the computer is commanding it to.

I saw one project that uses an interesting approach for vision data. They capture a wider field of view than needed, and so during playback, they can tolerate the vehicle pose and other factors deviating a little bit from the original recording, by approximating the new camera view. Obviously this has limits but it apparently has some value. You can also do this to some extent with LIDAR.
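The wide-capture trick can be illustrated in one dimension: record a scanline wider than the camera model needs, then during playback crop a window shifted by the small yaw difference between the replayed and recorded pose. A toy sketch (the function name and pixels-per-degree linearization are assumptions; a real system would warp the image properly rather than just shifting it):

```python
# Hypothetical sketch of replaying recorded vision data when the
# simulated vehicle's pose deviates slightly from the recorded pose:
# capture a wider view than needed, then crop an offset window.

def replay_view(wide_row, view_width, px_per_degree, d_yaw):
    """Pick the slice of a recorded scanline matching a small yaw offset
    (d_yaw, in degrees) from the originally recorded heading."""
    margin = (len(wide_row) - view_width) // 2
    shift = int(round(d_yaw * px_per_degree))
    if abs(shift) > margin:
        raise ValueError("pose deviated beyond the recorded margin")
    start = margin + shift
    return wide_row[start:start + view_width]
```

The raised error is the "obviously this has limits" part: once the replayed trajectory drifts past the recorded margin, the log can no longer be used and you must fall back to re-rendering.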

But for a true simulation you need to re-render so that the sensor data matches what happens when the vehicle steers/brakes etc.

I guess I'm not sure why you need to use a "true simulation." Is it just because it's easy?

I guess in the case of a government mandated test, you have to have something relatively easy. But for actual development, you're not just testing the decisions. As I said above (several times, in several different ways), the decisions are easy. Don't crash into anything. (#)

Perception is the tricky part. Prediction is the tricky part. Recognition and classification (if you don't consider that part of perception) is the tricky part.

It's not too hard to run replayed data and ask the car what it sees and what it predicts is going to happen. Adding in what it would do in such a situation would be a little bit more work, as it requires a bit of counterfactual "reasoning" (the car has to "pretend" that the controls are responsive) but it's probably worth it to build that bit of counterfactual reasoning. And it's probably not too hard. Temporarily shut off any parts of the software that react to the controls not being responsive. (Unless you're testing those parts of the software! Maybe simulation is the best way to handle that, as you're not going to easily get much real world data about what happens when the brakes or the power steering fails.)

(#) Consider the Uber fatality. There was an error in the decision. The car recognized that it was about to crash into someone and didn't hit the brakes. But that was, in essence, an intentionally introduced wrong decision. You didn't need testing to detect that error. The tougher to fix errors were ones of perception (the car didn't classify a pedestrian as a pedestrian) and prediction (the car didn't predict that she was going to move into the car's lane until she actually did). Synthetic data simulations aren't highly likely to detect these errors. It was a scenario (woman walking her bike across the street at night outside of a crosswalk) that no one thought to program or test.

Or consider the Waymo crash with the bus. The problem was prediction. Not decisionmaking. (You could argue it was decisionmaking, by arguing that the car should always yield even if it predicts there's zero chance the bus won't yield. But that would probably be overly cautious, and a synthetic data test isn't likely to catch that unless the synthetic data test completely ignores human behavior and simply has all other vehicles act randomly, which would promote less safety.)

Perception is very important, and you test it as many ways as you can. Decision making is not easy. Consider the reports people make about cars waiting too long to make unprotected lefts, having trouble doing merges, deciding what to do in unusual situations. Figuring out how to drive in places you must be aggressive to make progress. These are all matters of the planner. The perception is presumably working just fine. Likewise when it comes to mitigating and avoiding accidents. In Uber's fatality, clearly their perception system needed improving, but so did their evasive moves.

There's tons to test with decision making and path execution. And tons to test with perception. But you can do quite a lot testing them independently. Perception is best tested on real world data. Motion plans are well tested on real world situations but those can be imported from the real world and synthesized.

It's much more common, I will wager, for a car to encounter a situation it has never seen before than to encounter a road obstacle it has never seen before. And for the latter, again you want real data. Perception problems include things like figuring out if it's a bird that you should not slow down for, or a baby you should brake hard for. You can only do that with real data, but you also can't stick a live baby on the road, so people still simulate perception tests.

How are you going to fix the problem with unprotected lefts and trouble doing merges with synthetic data?

If your perception is perfect, and you know exactly how other cars (and pedestrians) behave, then the only thing you need to know is how much risk and law-breaking is acceptable. From there the task is simple.

Part of the problem with unprotected lefts in particular is that humans take risks, and break laws, that a robocar company probably wouldn't. A test isn't going to resolve that. Either people need to accept that robocars are going to be *much* more cautious than humans, or people need to accept that robocars are going to (rarely) cause injuries and deaths in order to (significantly) cut down on driving time. (Personally I'd argue for increased safety. I can deal with slower, safer rides in a robocar.)

Another problem is with prediction. It's a guessing game whether or not the pedestrian on the corner is going to cross in front of your unprotected left, whether she has the right of way or not. Tests can help with that somewhat, but only if you're using real data. Using synthetic data just gives you the *wrong* information about how often a pedestrian is going to do that. Using synthetic data *causes* the car to be either too safe or too risk-taking.

So, even in those two situations, I don't see how a test with wholly synthetic data can help.

As for evasive moves helping with the Uber crash, no, evasive moves were disabled. As I said earlier, yes, the decision to not hit the brakes was a mistake. But it was an intentional mistake.

For unusual (really, unexpected) situations, wholly synthetic data won't help you. If you don't expect it, you won't test for it. Real-world data will help a lot, on the other hand. With a few billion real world miles, there's very little that you haven't already encountered. (With a few tens of millions, especially if it's all in Arizona, there's a lot.) If you can shadow drive for 5 billion real world miles without encountering any significant, unexpected situation, you've probably built a robocar. If you can't, you probably haven't.

For all of it you want real world data. Maybe you mix real world data with synthetic data. But if it's wholly synthetic, I don't see the point.

I strongly believe that shadow driving is the way to go. Not only does it give you data to test against, it also gives you data on how humans have solved problems like unprotected lefts (and four-way stops, which Google learned early on are not handled in the real world the way they're handled in the motor vehicle statutes). Shadow driving is how humans learn how to drive at level 5. It's how robocars will learn how to drive at level 5 as well.

The problem in a merge or unprotected left is not that you don't see the other vehicles and identify where and what they are. The problem is you're unsure about what they will do. So you can create better models for a merge, and try them out in post-perception simulation, including post-perception simulations of vehicles doing all sorts of things you don't expect.

And when I say "post perception" I really mean "post classification." Watching a vehicle as it weaves, turns, puts on blinkers, slows and speeds up is part of perception, but it's done after the sensor data is turned into an object list.

You can observe a lot of drivers and build models of how they respond too, and then use those in your simulation. You will have a model of a driver who is aggressive in the merge, or timid, at different thresholds. You can learn how to spot them and how to deal with them, and test it and see what works.
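Such driver models can be as simple as a single threshold parameter. A hypothetical sketch, where `aggression` controls how small a gap a simulated driver will accept before refusing to yield to a merging car (the class name and scaling are invented for illustration, not drawn from any real simulator):

```python
# Hypothetical sketch of a parameterized driver model for
# post-perception merge simulation. An "aggression" value in [0, 1]
# sets the time gap below which this simulated driver closes ranks
# rather than letting the merging car in.

class MergeDriver:
    def __init__(self, aggression):
        self.aggression = aggression  # 0 = timid, 1 = aggressive

    def yields_to_merge(self, gap_seconds):
        """Aggressive drivers demand a big gap before yielding;
        timid ones yield to almost any gap."""
        required_gap = 1.0 + 3.0 * self.aggression  # invented scaling
        return gap_seconds >= required_gap
```

Sweeping `aggression` across a population of simulated drivers lets you test a merge policy against the whole spectrum from timid to pushy before trying it on a real road.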

I am not talking about the exact Uber car in the Uber crash. I am saying how simulation of that, post-perception, can help you learn what sort of evasive moves make sense, so that you could implement them. That they didn't trust their system enough to allow them is orthogonal.

Also, I am thinking more about swerving, which also would have worked there, even at the point where braking no longer could, since the road was empty. Swerving of course comes with risks -- the pedestrian might also try to evade and possibly move the same direction. You can test all this in sim.

The Uber event was not an unexpected or unusual situation. In fact, one reason we're all so upset at Uber is it's perhaps one of the most basic and expected types of pedestrian impact situations!

But again, there are two types of unexpected here. One is a shape you've never seen before and can't classify as to what it is. The other is motion of that object in a way you've never seen before. They are connected, but rather loosely. Something that looks very much like a pedestrian or car can easily move in some unexpected way. Something that you can't identify might move in a straight line. (Of course, how it moves is sometimes part of how you identify it, which is why it's never perfectly simple.)

This is why pre-perception simulation, while not useless, often does not justify its much higher cost. All you're doing is testing if your classifiers can identify something you've seen before (because you drew it.) It is true that you can do synthetic things like rotating and articulating and changing what you draw to find things you have never seen, and that's useful -- but only to a point.

Aerospace/DoD/FAA Simulation
Michael DeKort on Medium promoting Dactle

Another possibility is to mix real data with synthetic data. For example, mix the real behavior of a pedestrian crossing the street outside a crosswalk with synthetic data of everything else. That solves a lot of the parts about the car not responding as the computer is commanding it to, although it does make even the real data a bit fake because the pedestrian probably would have responded differently if the car had responded differently.

(I still think the important thing to monitor is what the car is recognizing and predicting, though. Presumably it's going to be smart enough to hit the brakes in any case. (Unless Uber executives are involved. :P) The question is, at what point does it recognize the situation? At what point does it predict that the pedestrian is in fact about to cross in front of it?)

Why are there no leaks from OEM developers or insiders about perception errors or failures during non-public testing? It does not need to be corner cases, just outright misses. Does the term "true redundancy" need defining? One never hears about the total scope of difficulties of all sensors together in testing.
Is fusion, more so than redundancy, the bigger issue?
If Mobileye speaks of its 100% surround vision plus REM plus RSS approach as viable, could that solution be closer to 99.99999% than a fusion system at this time?
The cost factor need not be factored into the discussion.

With 1.5 billion vehicles currently on roads per Nvidia and VIO near 12 years, estimating up to 10 years (per the MIT talk this week by Dr. Shashua) until L4 vehicles are totally unleashed outside of geo-fenced or pre-mapped spaces for all roads, with cost scaling and affordability implied, makes one scratch the head.
One can talk about looking over horizons, maps, compute power, AI in algos, open source, simulators, new sensors, datasets, scaling of costs (be it original or repair), development costs, security, regulatory bodies, legalities, geography, weather, public acceptance... one could list a gazillion more... I have not heard anything insightful other than it's about $$$.

The executive committee of the new Daimler and BMW collaboration on AVs will pick suppliers.
Guess we will know soon.

Is the AV dilemma ultimately about the transfer of legal responsibility from a human driver to another actor like AI or algorithm actors/hw-sw actors, aggregators like car mfgs-suppliers, etc., which all are governed by select human input?

How and when does the transfer begin?

As AV development pivots to new partnerships to accommodate the complexity and cost of these emerging platforms, and independence may no longer exist due to legal costs more so than development costs, are there such vectors that can be labeled as the safe standard for regulatory bodies outside the realm of vested statistics?

Is competition truly an arbitrator?

Is collaboration truly an arbitrator?

Is cost truly an arbitrator?

Are the vectors of standardization for safety testing better served by first focusing on legalities?

I am afraid open-source mantras do not have superiority in reducing randomness or the infinite scope of possibilities.

All that changes are timelines.

Everybody new to the field always talks about liability like it's a complex or big issue. It's actually pretty boring. Everybody knows that liability will fall upon the developers/vendors/operators of robocars, not on their passengers or even occasional drivers. Several major companies have declared it directly; the rest all know it.

So the only issue is how to make the vehicles as safe as possible to reduce the total liability. There are some issues at making sure that liability in any given accident does not go crazy out of whack with where it is today. But, just as it is today with insurance, the cost of the liability will be bundled in with the cost of the vehicle or ride.

energy versus time

Is the Hyundai Mobis announcement today at KINTEX another endorsement of cameras, though possibly a loss for Mobileye?

The development budget for the Intel-Mobileye camera-vision-only approach would be interesting to know.

Greetings, new commenter. You're posting all sorts of comments that contain language from the field but don't seem to say very much, or just quote articles. I hope you will move to saying more concrete things (and also adopt a pseudonym instead of "anonymous"). If this is some sort of spam test, know that meaningless posts with links just get deleted or blocked.

Does the new Intel EyeC radar development project in automotive alter the economics of ADAS or SDV sensor suites?

Localization technology with data collection requirements built into every vehicle starting at the earliest possible date would do more to accelerate safety and quality of life than one can imagine. And the technology pays for itself and then some. Why this concept receives little attention is a shame.

The global real-time map could be built and up to date while the AV tech is maturing.

Governments, municipalities, corporations and citizens access benefits immediately that are not debatable.

And parking spaces or signage are low on the benefit list.

Available ADAS bundled with localization alone would save lives.

AEB, ACC, FCW, IHC, LC, LDW, LKA, LC, TJA, TSR, etc all entering a zone or condition known for certain nuances act on past intelligence as well as present.

And I have not even left the ADAS arena for other benefits, but can :-) and will.

Localization with data and currently available ADAS is the assistant that is deliverable as we speak that best serves the largest population.

The 2nd LC above should be PCAM.

ANONYMOUS, if Carmera localization data, or Intel REM data, or Netradyne data provides other non-exclusive or exclusive insight [if applied as you claim] beyond SDV localization, including trajectories, road features, the physical environment, and the obvious (e.g. crest of a hill), this revelation is obscure despite reading your "DoD" post several times.

SAIC's Kotei Big Data HD map is well along in China, and SAIC does use Intel REM as one of the data sources. SAIC's multi-source acquisition process is also developing with DeepMap. SAIC will have the complete highway network in China covered, plus 33,000 km in urban environments, by 2019 per the OpenAutoDrive Jul 2018 meeting.

I look forward to the Toyota Carmera PoC primarily to see how the camera vision technology distinguishes itself.

1.4 billion people
200 million private vehicles
22.5 million cars estimated to sell in 2019
possibly 150,000km of expressways
Toyota sold 1.47m in 2018

The ID Roomzz by VW will "launch initially" in China in 2021 with Level 4 IQ.Drive. Does China lead the SDV space for the most L4 production model series on the motorways in 2021?

“In 2021, we will put a pilot fleet of 500 BMW iNEXT vehicles with Level 4 and 5 functionality on the roads. The necessary technical requirements and changes to international regulations and liability laws are currently in progress.”

Furthermore, the BMW boss also said that the next-level technology for autonomous driving will arrive "past 2024", a fruit of the collaboration between Daimler and BMW.

BMW is working on creating different levels of autonomy that can be purchased by customers, as the biggest hurdle for the Bavarians is government regulation, not technology. At the introduction of the production iNEXT, the car will have a Level 3 autonomy.

Intel states today that the software is on target to be finished by the end of the year.

"Software development is expected to be completed by the end of this year."

What would be the result of a Waymo and Intel alliance?

What would such an alliance yield as an advantage beyond the obvious?

I doubt Waymo thinks that MobilEye has anything they don't have, other than relationships with automakers.

Governor Cuomo said. "I met with Mobileye today to discuss the possible application of their technology for New York's mass transit system. The MTA spends millions of dollars on navigational tools, and we want to look beyond the handful of companies who essentially have a monopoly on the rail system to develop a navigation program capable of supporting the 21st century transit system New Yorkers need and deserve."

Intel Capital Israel's Managing Director Yair Shoham, who joined TriEye's board, added: "As the automotive industry transitions to autonomous driving, demand for sensor technologies is expected to grow rapidly. TriEye technology has the potential to enhance traditional camera functionalities by increasing performance in low visibility conditions in a way that complements vision-based camera sensor technologies. Intel Capital is delighted to support the TriEye team as it works to deliver on its vision."


Intel, in collaboration with 10 industry leaders in automotive and autonomous driving technology, today published “Safety First for Automated Driving,” a framework for the design, development, verification and validation of safe automated passenger vehicles (AVs). The paper builds on Intel’s model for safer AV decision-making known as Responsibility-Sensitive Safety (RSS).


This publication summarizes widely known safety by design and verification and validation (V&V) methods of SAE L3 and L4 automated driving. This summary is required for maximizing the evidence of a positive risk balance of automated driving solutions compared to the average human driving performance. There is already a vast array of publications focusing on only specific subtopics of automated driving. In contrast, this publication promotes a comprehensive approach to safety relevant topics of automated driving and is based on the input of OEMs, tiered suppliers and key technology providers. The objective of this publication is to systematically break down safety principles into safety by design capabilities, elements and architectures and then to summarize the V&V methods in order to demonstrate the positive risk balance. With Level 3 and 4 automated driving systems still under development, this publication represents guidance for potential methods and considerations in the development and V&V. This publication is not intended to serve as a final statement or minimum or maximum guideline or standard for automated driving systems. Instead, the intent of this publication is to contribute to current activities working towards the industry-wide standardization of automated driving.

Conclusion and Outlook

This publication provides an overview of widely known safety by design and verification and validation (V&V) methods of SAE L3 and L4 automated driving, thereby creating a foundation for the development of automated driving solutions that lead to fewer hazards and crashes compared to the average human driver. An initial step involves deriving twelve principles from the overall goal of achieving this positive risk balance. Based on the twelve principles for achieving the positive risk balance, this publication devises a possible overall structure for the safety by design and the validation and verification process.

This publication establishes traceability between the top-level twelve principles and specific existing elements by introducing capabilities of automated driving based on the three dependability domains of safety by the intended functionality, functional safety, and cyber security. A generic architecture is outlined to connect the elements, and the architecture of the development examples is discussed. The suggested V&V approach combines safety by design and testing with the main strategies applied in V&V to solve the previously described challenges. The strategy addresses the key challenges, but it also shows that it is impossible to guarantee absolute safety with 100% confidence. Thus, there will still be some residual risks. Field monitoring is obligatory in order to iteratively learn and improve the systems.

In addition to the two main pillars of safety by design and V&V that underpin the twelve principles, Appendix B proposes a framework for deep neural network (DNN) safety development. DNNs can be used to implement safety-related elements for automated driving. The framework includes recommendations for the safety-related design and artifact generation for each of the following three phases: the definition phase, the specification phase, and the development and evaluation phase. In addition, guidance is given regarding the deployment management aspects of a DNN with an emphasis on real-time field monitoring.

Further steps include requesting feedback on this publication from all over the world in order to further develop it into an overall valid and accepted international standard. This is not a one-off publication but should be viewed as a first version. The next version should expand the V&V process to include defined solutions with the necessary detail. This could be described via confidence levels and a combination of various testing methods and test results. The next version is intended to be put forward as a proposal for international standardization.


One would think REM mapping would be used in these combined platforms.

"...It will be easier to develop laws and regulations governing a fleet of robotaxis than for privately-owned vehicles. A fleet operator will receive a limited license per use case and per geographic region and will be subject to extensive reporting and back-office remote operation. In contrast, licensing such cars to private citizens will require a complete overhaul of the complex laws and regulations that currently govern vehicles and drivers. ...

Our ADAS programs – more than 34 million vehicles on roads today – provide the financial “fuel” to sustain autonomous development activity for the long run. ...

Build on our Road Experience Management (REM)™ crowdsourced automatic high-definition map-making to address the scale issue. Through existing contracts with automakers, we at Mobileye expect to have more than 25 million cars sending road data by 2022...."


Esri said its customers will be able to visualize and analyze real-time HD map and location data streaming from vehicle-based sensors, enabling a new type of dynamic map on the Esri platform. Mobileye will begin publishing data into the ArcGIS platform over the coming months.


BMW and Mobileye collect 1.5 billion km of data per day via their fleets to build maps, per an Amnon Shashua interview at TechCrunch. The VW and Ford contributions via REM build a rather large data pool when ramped up, and WITH "3 more contracts underway" beyond BMW, VW, Nissan, and Ford, there will be lots of REM data for OEMs to exchange if so desired. Sneaky suspicion Daimler could be one of the other "3".

At 20,000 miles per year, REM harvests about 200 MB per year, at a transmission cost up to the cloud of less than $1.00 (10 kB of data per km).
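A back-of-the-envelope check of those numbers (the 10 kB/km upload rate is Mobileye's published figure; the rest is just arithmetic): at 10 kB per km, 200 MB actually corresponds to 20,000 km per year, while 20,000 miles would be closer to 320 MB.

```python
KB_PER_KM = 10          # Mobileye's published REM upload rate
KM_PER_MILE = 1.609344

def rem_upload_mb(distance, unit="km"):
    """Annual REM upload volume in MB for a given yearly driving distance."""
    km = distance * KM_PER_MILE if unit == "miles" else distance
    return km * KB_PER_KM / 1000.0  # using 1 MB = 1000 kB

annual_km_mb = rem_upload_mb(20_000, "km")       # 200.0 MB
annual_mi_mb = rem_upload_mb(20_000, "miles")    # ~322 MB
```

So the quoted 200 MB figure fits a 20,000 km year, suggesting the mileage in the comment was in kilometers.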

Mobileye hopes to bring MaaS to the US in 2021.


Is Aptiv the first Tier 1 from overseas? Aptiv is based in Shanghai with SAIC, NIO, and BMW.

Of Pony.ai's workforce of 80, over 1/3 hail from Tsinghua University.

SDV tests for Pony: "demonstrating 39 capabilities across six categories of tests, and completing 10 days of “holistic” safety and operations evaluations."

"The complexity is good for effective data collection. For example, we have much more data related to bicycle riding in China”

China's license plate registry for ICVs (intelligent-connected vehicles) needs a public website.

April 18: China Builds Site to Test Autonomous Cars in Highway Conditions. A 26-kilometer-long testing site in Shandong province.

According to data released by ["www.autoinfo.org.cn"] before January 5, a total of 101 licenses for autonomous vehicle road tests had been issued in China to 32 companies related to self-driving technology, including Internet firms, OEMs and car-sharing platforms, across 14 cities.

Baidu and Pony.AI lead in mileage.

> Baidu obtained over 50 licenses; most OEMs hold no more than 3 licenses per company.

Beijing leads with 60 licenses, followed by Chongqing and Shanghai, which had gained 8 and 11 licenses respectively (?).

Beijing: 60 licenses to 10 companies, 44 routes totaling 123 km, with tested mileage exceeding 153,600 km by Dec 2018.
The 10 companies: Baidu (Ford), Tencent, IdriverPlus, Pony.ai, DiDi, NIO, BAIC BJEV, Daimler, Audi, and one other.

Shanghai: first were SAIC Motor and NIO, then Baidu, Audi, BMW, Mercedes-Benz, BAIC BJEV, TuSimple, Momenta, Pony.ai, and Panda Auto. A 37.2-km-long test road.

Chongqing, with complex terrain, has Changan Automobile, Baidu, FAW Group, Dongfeng Motor (PSA), GAC Group, Geely (Volvo), Foton Auto, and NEV rental platform Panda Auto.

Changchun - FAW

Shenzhen - Tencent

Fuzhou - Baidu, King Long

Baoding - Baidu

Changsha - Baidu (late 2019)

Tianjin - Baidu

Wuxi - Audi/ SAIC

A Samsung patent published in late April in Europe reveals their SDV work; the invention covers a movement prediction system for many types of moving road entities.

Bloomberg reported in Oct 2018 that Intel started work on internal radar and lidar in Q1 2018 stating "Six months ago it repurposed some of Intel’s photonics engineers and tapped Intel engineers previously working on micro-radars ... and put them on creating digital radars for a future generation of Mobileye chips ... "There is lots and lots of potential for building the next generation sensor way beyond the current science of lidars” said Shashua.

The lidar software stack appears to be led by Anna Petrovskaya since November 2018, after Intel bought the company she co-founded.

Localization and mapping IP are in her domain, but I see no breakthrough in patents so far from Intel, except maybe a teleoperator's headset; but that is future, not present as you state, and nowhere near any aid to ADAS localization.

Setting aside that REM and RealSense are seemingly merging together along with radar and lidar development, where is a breakthrough with existing tech coming from, if "existing" as you state, since that implies software, partnership, policy, etc.?

I can see Great Britain adopting RSS, though the Netherlands not yet.

Not sure if Daniel D. Ben-Dayan Rubin is the Senior Research Scientist for RSS, but he could be, and I see nothing in his patents.

Intel's discussion in the Vision Zero doc of L2+ with RSS, REM, and surround vision ["require a surround (camera-based due to the required resolution)"] does not explain much and is no help.

ST stated in a news release that the EyeQ5 SDK and applications should have shipped in H2 2018, yet the software's feature set is unreported? Hard to believe all that is known is a Linux build of some flavor supporting basics like Adaptive AUTOSAR.

Toyota's partner Cortica's CEO states there is no one close to Mobileye, though Cortica says it will catch them in the future. Not sure what he means by "no one close to," though.

Still, I can see no breakthrough in hardware-based ADAS, localization and mapping, other than an unannounced Vision Zero L2+ ADAS capability.

Apple patenting radar for road surfaces makes sense, as does Samsung taking a page from the Gary Marcus AI debate (object symbolism in addition to AI). GM patenting a retrofit of SDV technology is nothing.

Hitting 99.999999% in object detection would be close though.
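A rough sense of what eight nines would mean in practice, assuming (strongly) that per-frame detections are independent; the 30 fps frame rate here is an illustrative assumption, not a figure from any vendor:

```python
p_miss = 1.0 - 0.99999999          # eight nines -> ~1e-8 miss rate per detection
fps = 30                           # assumed camera frame rate
detections_per_hour = fps * 3600   # 108,000 detection attempts per camera-hour

# Expected missed detections per camera-hour under the independence assumption
expected_misses = detections_per_hour * p_miss   # ~0.001 per hour
```

Roughly one miss per thousand camera-hours per camera; fleet-wide, misses would still occur daily, which is why redundancy across sensor modalities matters.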

Musk takes a shot at anyone using lidar as a primary sensor, as well as lidar HD maps. (Apple?)

“LIDAR is a fool’s errand,” Elon Musk said. “Anyone relying on LIDAR is doomed. Doomed! [They are] expensive sensors that are unnecessary. It’s like having a whole bunch of expensive appendices. Like, one appendix is bad, well now you have a whole bunch of them, it’s ridiculous, you’ll see.”

“LiDAR is lame,” Elon replies to a comment about ‘slam’ on the tech during the presentation. “We’re gonna dump LiDAR, mark my words. That’s my prediction.” LiDAR in cars is “stupid”.

“You only need radar in the forward direction because you’re going really fast,” Musk said.

False and foolish: HD lidar maps and LiDAR, the final word from Musk on two no-gos for FSD cars.

Why is radar okay, then?

The problem with lidar is that it's expensive.

good question [disregard investor day umbrella]

Have an opinion though on Musk's and Karpathy's "incrementalism" and "Neural Networks, Data" thoughts around lidar.

“We have to solve passive optical image recognition extremely well in order to be able to drive in any environment and in any conditions,” Musk said. “At the point where you’ve solved it really well, what is the point in having active optical, which means lidar? In my view, it’s a crutch... that will drive companies towards a hard corner that’s hard to get out of.”

Tesla patent for localization (similar to Intel approach)

/Technologies for vehicle positioning/

Application Number: 15610785 Application Date: 01.06.2017
Publication Number: 20180364366 Publication Date: 20.12.2018

Lidar commentary. Link posted only because of the opinion on the page.


Who will manufacture Lumotive's lidar?

Intel may NOT be developing lidar, but rather a unique radar.

And sensor failover or redundancy is still not a subject of conversation, though Musk speaks of compute failover.

Never heard whether Musk has ever offered an opinion on steering wheels versus remote teleoperators.

Does Tesla use an Nvidia-like MapStream, or an Intel-REM-like "sparse" yet accurate localization and mapping technology?

Wonder if this IP is similar to AEye's, which Intel and Airbus have invested in. AEye tested the limits of its IP at 1 km, though production range is 200 m.

JR0101452 has details
[Intel EGI develops the next generation of high performance long range scanning LIDAR systems. This project involves cutting edge electro-optic systems, HW, FPGA/ASIC, mechanics, FW and SW.

We are looking for an experienced system validation engineer for the LIDAR validation team].

Within 10 days, two new and separate collaborations announce mapping platforms utilizing satellite imagery. Coincidence? Unlikely.

Is Google kicking themselves? I cannot believe Google could not partner with Amazon, Boeing, HERE and Intel to create an alliance like the JPM/BH/Amazon initiative.

Could Intel extract safety functionality similar to Vision Zero ADAS AEB/APB from the Mapbox / REM Roadbook partnership and include it with all L2+ ADAS kits, with both REM and APB? Aftermarket "8 Connect" customers would not receive this OEM-installed advantage though.

Was this what the DoD post was about?

“Mapbox’s live location platform collects almost 250 million miles of anonymized road data daily, creating highly precise and dynamic geospatial maps. It integrates more than 130 data sources to capture valid addresses, places, and points of interest globally... The maps render at 60 frames per second, delivering a live video-like feed to the end user. ... The Vision SDK leverages connected cameras, working in conjunction with real-time traffic and navigation, to bring live visual context to the platform. Thus, developers on Android or iOS can create a heads-up display experience in their native apps. These ‘eyes at the edge’ detect vehicles, pedestrians, cyclists, construction sites, school zones, and live environmental conditions with more detail than ever before... Overall, the platform fundamentally enhances opportunities for application developers to deliver highly customizable experiences that push the creative envelope.”

And when you read the Medium blog post from Mapbox about REM (May 30, 2018), new safety functionality could easily be added to L2+ by adding REM to a kit.

The Mapbox platform could easily be a key actor in a creative alliance.

Mapbox did release this Vision Software Development Kit (SDK) not long ago.

Hard to believe it took until 2019 for NNs to process satellite imagery for road trajectories and some semantic road features at a practical sub-10-12 cm conversion process, even if not from TerraSAR-X or WorldView-3 data sources. Is 10 cm accuracy extracted from satellite imagery really that bleeding-edge and computationally demanding a task for NNs? Did no one attempt to use high-resolution satellite images three years ago for a PoC?

Mobileye sets up LIDAR.AI laser sensor division.

Eonite has been integrated into a new division set up by Mobileye called LIDAR AI.

Eonite is the third startup bought or invested in by Mobileye (or Intel's autonomous car division) in the past two years. The names of the other two companies have not been reported.

Eonite Perception was acquired last November by Mobileye's parent company Intel Corp. for several tens of millions of dollars, sources inform "Globes".

"LIDAR needs to come down in cost significantly," he said. LIDAR is based on time-of-flight calculation of lasers sent out to map the trajectory of the road. "It's all based on a burst of laser, and then steering it, and it's costly," he observed.

Using Intel's technology for what's known as silicon photonics, the company will provide a different approach, which he described as "mimicking radar using coherent light."

"If you can imagine it, you can put all of the laser functions on a single chip," he said. "That tells you there is a horizon for new technology that's going to make this possible."
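Context for the "burst of laser" remark: conventional pulsed lidar ranges by time-of-flight, r = c·t/2, while the FMCW ("coherent light") approach described here infers the same delay from the beat frequency between outgoing and returning light. A minimal time-of-flight sketch:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_range_m(round_trip_s):
    """Pulsed lidar: target range from the round-trip time of a laser pulse."""
    return C * round_trip_s / 2.0

# A target 200 m away returns the pulse in ~1.33 microseconds
t = 2.0 * 200.0 / C
r = tof_range_m(t)
```

The timing precision this demands (tens of picoseconds per centimeter of range) is part of why pulsed systems are costly, and why moving the measurement into the frequency domain on a photonics chip is attractive.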




In an Apple patent titled “Depth perception sensor data processing,” the sensor data processing system collects data using both passive and active sensor devices, and uses the data to generate an overall map of the environment immediately around the vehicle, using a "confidence" algorithm that can allow a car's computer to get just enough data from sensors to perform ....

Localization without real-time maps with intelligent data attached is what you mean, I assume?

There is a limit to the immediate value camera vision can extract from the surroundings.

Nvidia, Toyota and Intel all reference an intelligent map partner or map layer, which is where localization extracts additional value.


check Aurora's design manifesto "Approach to Development"

With Aurora, Toyota, Baidu, Daimler, Volvo, Bosch, ZF and more all using the DRIVE Xavier platform, does Apple, Waymo, Intel, BMW, Ford, Drive.ai etc all just plateau?

VW using 14 cameras, 11 lasers and seven radars in this week's Hamburg SDV L4 trial launch exposes little insight into its hardware/software partners. No mention of the compute platform.

Commenting on whether any OEMs, and which Tier 1s and Tier 2s, partner with Apple is another point in time worth annotating.

If Baidu launched noticeably more SDV cars in the US, would that indicate anything?

Reading Philip Koopman's research, like "The Human Filter Pitfall" on autonomous system safety and AEB, seems to imply localization requires a map as a sensor, and not just geometry.

Aurora's comment / We are building a series of safety measures which identify changes in this information and will ultimately allow vehicles to adapt before maps are updated /

suggests Aurora will struggle just as much as Intel.

Have not seen any review of Nvidia's approach to localization, mapping or this SFF.

Not sure what you are describing.

"The Human Filter Pitfall" (autonomous system safety)

Sorry. Yes, requires a map. Did not see the "without" prior to original response.

Camera to be used for mapping by Netradyne, working with Hyundai, like Toyota TRI-AD and Intel.

Is Lidar losing appeal for SDV maps?

Cannot believe Chinese OEMs, via Baidu, NavInfo, etc., do not all migrate to camera-based HD maps as well.

Could lidar lose the battle to cameras to map the unmapped 98% to 99% of global roads?

Would be a great story if the mapping arena creates a new dynamic in the 2019 SDV development space, and is one of the lead paradigm shifts that occurs.

Something simply feels like the SDV world is rethinking localization.

very minor observation, but "sensor technologies" seems to be of note.

The Emerging Growth Incubation (EGI) group at Intel is led by Mobileye's Sagi Ben Moshe, who is Senior Vice President, Sensor Technologies at Mobileye and also Vice President, Emerging Growth Incubation (EGI) Group, Intel.

Moshe led Intel RealSense.

EGI is focused on / VR/AR devices, smartphones, drones & autonomous driving /.

Also, / Intel EGI develops the next generation of high performance long range scanning LIDAR systems /

Also. / Job ID: JR0097624 Job Category: Engineering Primary Location: Petach-Tiqwa, IL Other Locations: Job Type: Experienced Hire Senior Automation Test Engineer Job Description
Intel EyeC is seeking for a Senior Automation Test engineer to lead the Automation tests throughout validation and system integration of RADAR. Experience in Plan, design, develop and execute automation infrastructure. Server, Database, automation tests, Analysis tools, GUI.
You will lead the automation test requirements of all product validation phases Post-Silicon, Antenna and Full Product End to End and analysis tools for SW regression tests.
You will work with system integration, validation and SW engineers side by side during complex RADAR development.
You will be at the forefront of a new era of computing, communication and automotive involving state-of-the-art technologies, and will be in charge to develop new state of the art methods for validating these complex products to a very high level of quality.
EyeC organization is engaged in the development of products based on millimeter Waves technologies, and new innovations are created to enable exciting new capabilities and use cases. The organization develops a new Radar sensor based on a disruptive architecture and a state of the art solution delivering best in class performance. The organization includes highly skilled engineers in all areas of System, Si, RF, HW, Algorithms, SW&FW, Production, etc.
The group is located in Petach Tikva. /

Intel+Mobileye working on a branded lidar/radar suite?


Japanese and Korean OEMs select cameras for mapping. China next.

Best description of the project:

Intel EyeC is seeking for a Senior System Integration and Validation Architect with an extensive experience in integrating and validating Radar Systems throughout the entire process of a new product development.
The validation will follow the product development phases of Simulations and Pre-Silicon, Silicon level, Board level and Full Product End to End.
As the validation architect, you will define and design a complex Integration and validation setups using state of the art RADAR test equipment's, design RADAR simulators to cover both lab tests and field tests toward RADAR product validation vs product goals, covering both functional, performance, power, stability of a full product.

camera will be used for mapping.

Lidar is a new line of business.

From Glassdoor, which spells out the EyeC group:

EyeC Radar: The EyeC organization is working on delivering innovative products for Autonomous vehicles that enable true redundancy for safety and a more comfortable driving experience.

Localization and mapping is becoming a 2019 headline act.

redundancy for safety and a more comfortable driving experience.

Intel competing in the radar space must be an outcome of unusual IP from R&D. Radar has been around forever. EyeC has to be about edge computing, AI, programmability, 2D/3D, etc., but what possible IP warrants this undertaking? RealSense VIO / V-SLAM is not revolutionary.

Your combined localization-and-mapping value seems to be a possible additional impetus beyond redundancy, based on some unknown IP.

This news still

Hiring for EyeC as early as Sep 2017, possibly August.

Toyota TRI-AD's selection of a camera-based solution for HD maps has to be a statement to the auto industry, but I'm not entirely sure what all is encompassed. Even though still a PoC with Carmera, imagine the internal dialogue inside DeepMap or Civil Maps or Aurora or Baidu sizing up the PoC.

The Dynamic Map Platform company merged with Ushr (GM's SDV map supplier), and it looks like SoftBank will be a Carmera partner/investor as mapping begins in H2 2019, per press referencing SoftBank.

"DMP and Softbank create real-time map generation using 5G for the future era of autonomous driving"

see "Developments to Date and Future Plans at Dynamic Map Platform" pdf as well.

Feb. 22, 2019: DMP demonstration experiment on the high-definition 3D map "Dynamic Map" for autonomous driving.

This does make mapping slightly different now.

Confused: Softbank/DMP use lidar, Carmera/Toyota use camera.

Not sure if Toyota TRI-AD AMP / Carmera is a DMP/Softbank partner now, unless AMP has a multitude of localization layers like HERE.

Toyota TRI-AD AMP to use satellite imagery to build maps.

TRI-AD, Maxar Technologies and NTT DATA collaborate.

