Nvidia simulator and Safety Force Field and other news from GTC


This week I am at the Nvidia GPU Technology Conference, which has become a significant conference for machine learning, robots and robocars.

Here is my writeup on a couple of significant announcements from Nvidia -- a new simulation platform and a "safety force field" minder for robocar software, along with radar localization and Volvo parking projects.

News from Nvidia GTC


The radar localization looks to be a great addition. I'm surprised it is not already part of localization, but then maybe localization is already pretty good except in circumstances like you mentioned, such as snow on the road or fog. Obviously physical location is not all of localization; lane markings can't be treated as though they are permanent fixtures, so LIDAR/vision is needed to locate within a lane.

Selling/booking car parking spots is something I've thought about in passing before. We have a minor product called SignUp which reserves sessions on workstations in public libraries and university labs (you are automatically logged out when your session is finished if someone is booked following you). I suggested that car parking might be similarly suited to a booking system.
If local councils are not keen on people selling their slots there are plenty of privately owned car parking lots/buildings who could run it as a service.
Such a system would seem to be most useful for irregular trips such as a large sporting/musical event. People might be happy to pay a premium to have a guaranteed park, without the stressful 15 minute hunt in heavy traffic. Congestion presumably could be lessened as people drive directly to their destination. Obviously integration with GPS/Google Maps etc would be a requirement.
Once all known carparks are full, the area could be restricted to residents and mass transport.
Hmm while writing this I get the feeling you've already done this topic a few years ago :)
Interesting also your Nvidia shareholding.
Based only on driverless vehicles (including delivery bots) I bought some of the following a few years ago for a 20-year hold: Nvidia, Google, Amazon, Boeing, and Alibaba.
The first two for obvious reasons. Amazon and Alibaba because even if delivery bots enable mum-and-dad stores to compete with Amazon, I think Amazon and Alibaba will get in there early and big. Boeing because I think any advances in ground traffic convenience will actually make not just air travel but also tourism more attractive. Time will tell.

There are already many thousands of parking lots that will let you reserve space using a wide variety of parking apps.

Of course, lot operators would rather not be completely full. A full lot means they set their price too low. They would rather be almost full at a high price than full at a lower price.
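The economics are easy to see with a toy example (all the prices, occupancy figures, and the function here are hypothetical, purely for illustration):

```python
# Toy revenue comparison for a hypothetical 100-space lot over an
# 8-hour day. A lot that always fills up has priced below demand.

def daily_revenue(price_per_hour: float, occupied_spaces: int,
                  hours: float = 8.0) -> float:
    """Gross parking revenue for one day."""
    return price_per_hour * occupied_spaces * hours

full_at_low_price = daily_revenue(price_per_hour=2.0, occupied_spaces=100)
almost_full_at_high_price = daily_revenue(price_per_hour=3.0, occupied_spaces=90)

# The almost-full lot earns more, and still has spare capacity to sell
# to someone willing to pay a premium for a guaranteed spot.
assert almost_full_at_high_price > full_at_low_price
```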

This is why I think Tesla has one big advantage over most of the other companies. They have billions of miles of real world data from a wide range of locations, times, conditions, etc.

It's highly unlikely they're actually using that data as well as some other companies would. But without that amount of data I don't think the problem is solvable except in tightly geofenced locations.

Tesla isn't alone there, though. Some other car companies likely have lots and lots of data, but just don't brag about it as much.

It is often said Tesla has that data, but quite a few bits of information suggest that they don't: Teslas are not transmitting this sort of data back to HQ (people have put sniffers on the connection), and they aren't using it. They do get data on accidents, but not things like video, at least as far as those trying to figure it out believe.

They're surely not sending back live 24/7 video of every mile, if that's what you're asking.

I'm not sure what the value of post-perception testing is. What exactly are you testing? The basic logic is simple. Don't crash into anything. At least, not anything significant. If you had data on how humans/animals/traffic-lights/road-hazards/etc behave you could make a good post-perception test, maybe. But if you had accurate data on how things behaved, you would have coded the logic correctly already. If there's a 0.0001% chance that there's a human standing in your lane (or about to enter your lane), you do something to avoid crashing into them. The trick is figuring out when there's a 0.0001% chance that there's a human standing in your lane (or about to enter your lane).
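A minimal sketch of that "simple" decision logic, assuming the hard part away (the threshold constant and function name are made up; producing the probability estimate from sensor data is the genuinely hard problem):

```python
# Sketch of the "don't crash into anything" rule: act whenever the
# estimated chance of a person in the lane is non-negligible.
# The threshold is illustrative, echoing the 0.0001% figure above.

HUMAN_IN_LANE_THRESHOLD = 1e-6  # 0.0001%

def should_avoid(p_human_in_lane: float) -> bool:
    """Trigger avoidance if the risk exceeds the tolerance threshold."""
    return p_human_in_lane >= HUMAN_IN_LANE_THRESHOLD

assert should_avoid(1e-6)       # the 0.0001% case: take action
assert not should_avoid(1e-12)  # effectively zero: carry on
```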

Pre-perception is the way to go. Specifically, using actual recordings of real world pre-perception data. And with that, you don't just test "will I crash into anything." You test whether humans are recognized as humans and bicycles are recognized as bicycles and how early they are recognized. You test whether the system correctly predicts what routes the "objects" are going to take. When things happen that your system predicted had zero chance of happening even though it should have predicted there was a chance of it happening, you investigate why.

Perhaps most importantly, you are constantly feeding new recordings into the system. You can run old recordings as a regression test, but regression tests are not an indicator of how reliable the system is. As you correctly point out, engineers will simply fix all the problems against a static test until they perform perfectly. (But they'd no doubt introduce new problems while doing so, unless you are constantly feeding them with new, real-world tests.)

The large car manufacturers have a significant advantage here. But I think I can see Waymo's strategy, and it might work: Build thousands of cars. Deploy them, with safety drivers, as a taxi service in strategic areas (read: Arizona) at a price that can compete with Uber/Lyft without losing money too quickly, and then gradually expand from there.

Uber has 18,000 contract drivers and 1,000 employees in Arizona. If Waymo can get 2,000 cars that do 40,000 miles a year, that's 80,000,000 miles/year of real world data.

Is that enough? Probably not. I still think Tesla has the better plan. Get people to collect data for you for free. Tens of millions of miles a day. Unfortunately (for them, and for society), that means foregoing LIDAR. But when and if LIDAR becomes affordable (and it's probably inevitable), it'll be much easier for Tesla to install LIDAR in the 1,000+ cars/day they're producing than it will be for Waymo to ramp up production from 10 cars/day to 1,000+ cars/day.

The perception and post-perception systems are fairly divisible. And the post-perception art is quite involved. The perception part I refer to is the system that distills raw sensor data into a simpler form. After that, you have the systems that try to make meaning of it, and assign predictions to all obstacles and where they are going. And from that you have the path planning and execution of the plan.

But there are two main kinds of errors. The perception system can fail to segment its view of the environment or identify an object, and you do a lot of testing of that to make it better and better. But it's only modestly useful to test how good it is at identifying simulated things. You really need to know how good it is at identifying real things. There's no substitute for getting out there and capturing real sensor data of the real world and seeing how well the perception system figures out what it is.

The processing after that is fairly orthogonal. Not perfectly orthogonal, so you will do some full pre-perception simulation. The most important thing there is dealing with the way the view changes when you move in reaction to things in the world.

But it's key to test what the car does once it has seen (imperfectly) what's out in the world. That's why your simulated perception system won't report the precise ground truth for most of its tests, though it will for some. You will set it to make mistakes of the sort your perception system makes: missing objects, misclassifying objects, getting the wrong location for objects, winking in and out on objects, complete sensor shutdowns, noisy responses, etc.
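A sketch of how such error injection might look. The error types follow the list above; the rates, class names, and labels are invented for illustration, not taken from any real system:

```python
import random
from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # e.g. "pedestrian", "bicycle", "car"
    x: float     # metres, vehicle frame
    y: float

def inject_errors(truth, rng,
                  drop_rate=0.02,        # missed objects / winking out
                  misclass_rate=0.01,    # wrong label
                  position_sigma=0.3):   # metres of location error
    """Corrupt ground-truth detections with perception-style mistakes,
    so the planner is exercised against imperfect input."""
    labels = ["pedestrian", "bicycle", "car", "unknown"]
    out = []
    for d in truth:
        if rng.random() < drop_rate:
            continue  # object missed entirely this frame
        label = d.label
        if rng.random() < misclass_rate:
            label = rng.choice([l for l in labels if l != d.label])
        out.append(Detection(label,
                             d.x + rng.gauss(0.0, position_sigma),
                             d.y + rng.gauss(0.0, position_sigma)))
    return out
```

Turning all the rates to zero hands the planner perfect ground truth, which matches the "it will for some" tests above.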

Then you get to test things like what you will do when you encounter situations like cut-ins, surprise pedestrians, and literally a million different traffic patterns. You can do this orders of magnitude faster, so even if you decide it's not as useful, you are still going to be doing much more of it.

Both are useful. But teams are testing their perception system on real world data all the time; that's what all real road driving does. What they are not testing all the time is how the car reacts to rare situations on the road.

Teams that are all vision based have a different view, since most of their questions today are about whether their vision perception system works. LIDAR tools pretty much always detect there is something there (once it gets close enough) and their concern is more about figuring out what it is and where it's going.

"There's no substitute for getting out there and capturing real sensor data of the real world and seeing how well the perception system figures out what it is."

Absolutely. But the car that captures real sensor data of the real world doesn't have to be running the same software that you're testing.

"LIDAR tools pretty much always detect there is something there (once it gets close enough) and their concern is more about figuring what it is and where it's going."

I guess I'm confused what "post perception" means. Figuring out what it is is perception, isn't it?

Figuring out where it's going isn't exactly perception. In many cases figuring out what it is is equivalent to figuring out where it might be going. In the case of adult pedestrians, perceiving things like "gait, body symmetry, and foot placement" (https://economictimes.indiatimes.com/magazines/panache/self-driving-cars-get-smarter-they-can-recognise-and-predict-pedestrian-movements/articleshow/67975471.cms) is crucial to figuring out where it might be going.

More importantly, no simulation is going to be useful for testing this. Only real world data on how things move can help here.

You can simulate at many levels of perception. Pure post-perception simulation means the simulator hands you the same things your perception engine did. "We detect a pedestrian at these coordinates with this confidence." Possibly "moving this direction at this speed." Or it may ask you to figure out the direction and speed. Or you could have the simulator segment the sensor data and say, "We detect this group of points or that group of pixels, you figure out what they are."
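One plausible shape for what a post-perception simulator hands the driving stack, mirroring the two variants just described (the class and field names are illustrative, not any real API):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PerceivedObject:
    """One entry in the object list a post-perception simulator feeds
    to the stack under test."""
    kind: str                      # "pedestrian", "bicycle", ...
    position: Tuple[float, float]  # metres, vehicle frame
    confidence: float              # detector confidence, 0..1
    # None means the simulator withholds motion and the stack must
    # estimate direction and speed itself.
    velocity: Optional[Tuple[float, float]] = None

# "We detect a pedestrian at these coordinates with this confidence,
# moving this direction at this speed."
tracked = PerceivedObject("pedestrian", (12.0, -1.5), 0.93, velocity=(0.0, 1.2))

# Or the harder variant: position only, motion left to the stack.
untracked = PerceivedObject("pedestrian", (12.0, -1.5), 0.93)
```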

The main point is that since it's less valuable to test your segmenters, recognizers and classifiers on synthetic data, you bypass that, and it saves you immense amounts of time so you can simulate vastly more miles.

In what way is it useful to test using synthetic data at all? Shouldn't you pretty much always use real world data?

I guess another way to ask this is, how do you know if the car did the right thing in response to the simulation? What are you testing?

Well, any pre-perception simulation (other than a replay of logs) is synthetic data. To my surprise, people are even getting advantages training with synthetic data (for how they perform on real world data.)

You have to use simulators for some things, like accidents.

"other than a replay of data"

Replaying data, either pre- or post- perception data, can be useful. Much more useful than synthetic data.

It's also possible to replay data from accidents. These can be accidents that happen accidentally. Or it can be "accidents" (crashes) that happen intentionally (with crash-test dummies, for instance). Admittedly, this is data that's hard to come by (in the first instance) or expensive to come by (in the second), but it's also much more useful than synthetic data.

That said, synthetic data will be more useful for testing the designs of other cars once we've figured out how to build a self-driving car, because part of building that first self-driving car is figuring out how the real world (particularly the human/animal-controlled aspects of it) operates. It's easy to realistically simulate basic physics. It's not easy to realistically simulate human behavior. And testing a car against an unrealistic simulation decreases safety.

I should also point out that a test using replay of data has to be done right. In some instances you might have to let the car know that you're using replayed data. Otherwise, you're teaching it the wrong thing to do when, for instance, the brakes fail. Another alternative, which is trickier but usually better, is to give the car the replayed data and not ask it what to do, but rather ask it what it's "seeing" and what it predicts will happen.

Again, the logic part is fairly simple. Don't hurt anyone. Don't damage things. Follow the law. Get to your destination. (And don't destroy humanity, which of course is the 0th law.) The car itself will mostly follow simple rules of physics. The tough part is figuring out what the stuff outside the car is, what it is going to do, and how it will affect the operation of the car.

Replay of data can be good for testing perception of course, and that the planning system makes the same decisions as it did before (Regression testing.) However, as soon as the system decides on a different plan, it's hard to run replayed data because the sensor data does not show the car responding as the computer is commanding it to.

I saw one project that uses an interesting approach for vision data. They capture a wider field of view than needed, and so during playback, they can tolerate the vehicle pose and other factors deviating a little bit from the original recording, by approximating the new camera view. Obviously this has limits but it apparently has some value. You can also do this to some extent with LIDAR.
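The geometry of that trick can be sketched with a flat small-angle approximation (the resolution, field-of-view numbers, and function are illustrative, not from the project described):

```python
# Capture a wider horizontal field of view than the stack consumes.
# On replay, if the simulated vehicle's heading deviates slightly from
# the recorded one, slide the crop window instead of re-rendering.

def replay_crop(frame_width_px: int, capture_fov_deg: float,
                output_fov_deg: float, yaw_offset_deg: float):
    """Return (left, right) pixel columns of the crop, or None if the
    pose deviation exceeds what the extra margin can absorb."""
    px_per_deg = frame_width_px / capture_fov_deg
    out_px = output_fov_deg * px_per_deg
    center = frame_width_px / 2 + yaw_offset_deg * px_per_deg
    left, right = center - out_px / 2, center + out_px / 2
    if left < 0 or right > frame_width_px:
        return None  # drifted outside the recorded view: must re-render
    return int(left), int(right)

# 120-degree capture feeding a 90-degree stack leaves 15 degrees of
# slack on each side before the approximation breaks down.
assert replay_crop(1920, 120, 90, 0.0) == (240, 1680)
assert replay_crop(1920, 120, 90, 20.0) is None
```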

But for a true simulation you need to re-render so that the sensor data matches what happens when the vehicle steers/brakes etc.

I guess I'm not sure why you need to use a "true simulation." Is it just because it's easy?

I guess in the case of a government mandated test, you have to have something relatively easy. But for actual development, you're not just testing the decisions. As I said above (several times, in several different ways), the decisions are easy. Don't crash into anything. (#)

Perception is the tricky part. Prediction is the tricky part. Recognition and classification (if you don't consider that part of perception) is the tricky part.

It's not too hard to run replayed data and ask the car what it sees and what it predicts is going to happen. Adding in what it would do in such a situation would be a little bit more work, as it requires a bit of counterfactual "reasoning" (the car has to "pretend" that the controls are responsive) but it's probably worth it to build that bit of counterfactual reasoning. And it's probably not too hard. Temporarily shut off any parts of the software that react to the controls not being responsive. (Unless you're testing those parts of the software! Maybe simulation is the best way to handle that, as you're not going to easily get much real world data about what happens when the brakes or the power steering fails.)

(#) Consider the Uber fatality. There was an error in the decision. The car recognized that it was about to crash into someone and didn't hit the brakes. But that was, in essence, an intentionally introduced wrong decision. You didn't need testing to detect that error. The tougher to fix errors were ones of perception (the car didn't classify a pedestrian as a pedestrian) and prediction (the car didn't predict that she was going to move into the car's lane until she actually did). Synthetic data simulations aren't highly likely to detect these errors. It was a scenario (woman walking her bike across the street at night outside of a crosswalk) that no one thought to program or test.

Or consider the Waymo crash with the bus. The problem was prediction. Not decisionmaking. (You could argue it was decisionmaking, by arguing that the car should always yield even if it predicts there's zero chance the bus won't yield. But that would probably be overly cautious, and a synthetic data test isn't likely to catch that unless the synthetic data test completely ignores human behavior and simply has all other vehicles act randomly, which would promote less safety.)

Perception is very important, and you test it as many ways as you can. Decision making is not easy. Consider the reports people make about cars waiting too long to make unprotected lefts, having trouble doing merges, deciding what to do in unusual situations. Figuring out how to drive in places you must be aggressive to make progress. These are all matters of the planner. The perception is presumably working just fine. Likewise when it comes to mitigating and avoiding accidents. In Uber's fatality, clearly their perception system needed improving, but so did their evasive moves.

There's tons to test with decision making and path execution. And tons to test with perception. But you can do quite a lot testing them independently. Perception is best tested on real world data. Motion plans are well tested on real world situations but those can be imported from the real world and synthesized.

It's much more common, I will wager, for a car to encounter a situation it has never seen before than to encounter a road obstacle it has never seen before. And for the latter, again you want real data. Perception problems include things like figuring out if it's a bird that you should not slow down for, or a baby you should brake hard for. You can only do that with real data, but you also can't stick a live baby on the road, so people still simulate perception tests.

How are you going to fix the problem with unprotected lefts and trouble doing merges with synthetic data?

If your perception is perfect, and you know exactly how other cars (and pedestrians) behave, then the only thing you need to know is how much risk and law-breaking is acceptable. From there the task is simple.

Part of the problem with unprotected lefts in particular is that humans take risks, and break laws, that a robocar company probably wouldn't. A test isn't going to resolve that. Either people need to accept that robocars are going to be *much* more cautious than humans, or people need to accept that robocars are going to (rarely) cause injuries and deaths in order to (significantly) cut down on driving time. (Personally I'd argue for increased safety. I can deal with slower, safer rides in a robocar.)

Another problem is with prediction. It's a guessing game whether or not the pedestrian on the corner is going to cross in front of your unprotected left, whether she has the right of way or not. Tests can help with that somewhat, but only if you're using real data. Using synthetic data just gives you the *wrong* information about how often a pedestrian is going to do that. Using synthetic data *causes* the car to be either too safe or too risk-taking.

So, even in those two situations, I don't see how a test with wholly synthetic data can help.

As for evasive moves helping with the Uber crash, no, evasive moves were disabled. As I said earlier, yes, the decision to not hit the brakes was a mistake. But it was an intentional mistake.

For unusual (really, unexpected) situations, wholly synthetic data won't help you. If you don't expect it, you won't test for it. Real-world data will help a lot, on the other hand. With a few billion real world miles, there's very little that you haven't already encountered. (With a few tens of millions, especially if it's all in Arizona, there's a lot.) If you can shadow drive for 5 billion real world miles without encountering any significant, unexpected situation, you've probably built a robocar. If you can't, you probably haven't.

For all of it you want real world data. Maybe you mix real world data with synthetic data. But if it's wholly synthetic, I don't see the point.

I strongly believe that shadow driving is the way to go. Not only does it give you data to test against, it also gives you data on how humans have solved problems like protected lefts (and four-way stops, which Google learned early on are not handled in the real world the way they're handled in the motor vehicle statutes). Shadow driving is how humans learn how to drive at level 5. It's how robocars will learn how to drive at level 5 as well.

The problem in a merge or unprotected left is not that you don't see the other vehicles and identify where and what they are. The problem is you're unsure about what they will do. So you can create better models for a merge, and try them out in post-perception simulation, including post-perception simulations of vehicles doing all sorts of things you don't expect.

And when I say "post perception" I really mean "post classification." Watching a vehicle as it weaves, turns, puts on blinkers, slows and speeds up is part of perception, but it's done after the sensor data is turned into an object list.

You can observe a lot of drivers and build models of how they respond too, and then use those in your simulation. You will have a model of a driver who is aggressive in the merge, or timid, at different thresholds. You can learn how to spot them and how to deal with them, and test it and see what works.
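A sketch of such parameterized driver models (the thresholds and probabilities here are invented; in practice they would be fitted from observed real-world drivers):

```python
import random
from dataclasses import dataclass

@dataclass
class DriverModel:
    """Parametric merge behaviour for a simulated human driver."""
    name: str
    min_gap_s: float          # smallest time gap the driver will take
    yield_probability: float  # chance of yielding in a contested merge

AGGRESSIVE = DriverModel("aggressive", min_gap_s=0.8, yield_probability=0.2)
TIMID = DriverModel("timid", min_gap_s=3.0, yield_probability=0.9)

def accepts_gap(driver: DriverModel, gap_s: float) -> bool:
    """Does this driver take a merge gap of gap_s seconds?"""
    return gap_s >= driver.min_gap_s

def yields_to_merge(driver: DriverModel, rng: random.Random) -> bool:
    """Sampled outcome when our car contests the merge."""
    return rng.random() < driver.yield_probability

# An aggressive driver takes a 1-second gap a timid one refuses.
assert accepts_gap(AGGRESSIVE, 1.0) and not accepts_gap(TIMID, 1.0)
```

Your planner can then be run against populations of these models at different thresholds to see which merge strategy works across the range.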

I am not talking about the exact Uber car in the Uber crash. I am saying how simulation of that, post-perception, can help you learn what sort of evasive moves make sense, so that you could implement them. That they didn't trust their system enough to allow them is orthogonal.

Also, I am thinking more about swerving, which also would have worked there, even at the point where braking no longer could, since the road was empty. Swerving of course comes with risks -- the pedestrian might also try to evade and possibly move the same direction. You can test all this in sim.

The Uber event was not an unexpected or unusual situation. In fact, one reason we're all so upset at Uber is it's perhaps one of the most basic and expected types of pedestrian impact situations!

But again, there are two types of unexpected here. One is a shape you've never seen before and can't classify as to what it is. The other is motion of that object in a way you've never seen before. They are connected, but rather loosely. Something that looks very much like a pedestrian or car can easily move in some unexpected way. Something that you can't identify might move in a straight line. (Of course, sometimes how it moves is part of how you identify it, which is why it's never perfectly simple.)

This is why pre-perception simulation, while not useless, often does not justify its much higher cost. All you're doing is testing if your classifiers can identify something you've seen before (because you drew it.) It is true that you can do synthetic things like rotating and articulating and changing what you draw to find things you have never seen, and that's useful -- but only to a point.

Aerospace/DoD/FAA Simulation
Michael DeKort on Medium promoting Dactle

Another possibility is to mix real data with synthetic data. For example, mix the real behavior of a pedestrian crossing the street outside a crosswalk with synthetic data of everything else. That solves a lot of the parts about the car not responding as the computer is commanding it to, although it does make even the real data a bit fake because the pedestrian probably would have responded differently if the car had responded differently.

(I still think the important thing to monitor is what the car is recognizing and predicting, though. Presumably it's going to be smart enough to hit the brakes in any case. (Unless Uber executives are involved. :P) The question is, at what point does it recognize the situation? At what point does it predict that the pedestrian is in fact about to cross in front of it?)

Why are there no leaks from OEM developers or insiders about perception errors or failures during non-public testing? It doesn't need to be corner cases, just outright misses. Does the term "true redundancy" need defining? One never hears about the total scope of difficulties of all sensors together in testing.
Is fusion, more so than redundancy, the bigger issue?
If Mobileye speaks of its 100% surround vision plus REM plus RSS approach as viable, could that solution be closer to 99.99999% than a fusion system at this time?
The cost factor need not be factored into the discussion.

With 1.5 billion vehicles currently on roads per Nvidia, and vehicles in operation averaging near 12 years, estimating up to 10 years (per the MIT talk this week by Dr. Shashua) until L4 is totally unleashed outside of geo-fenced or pre-mapped spaces on all roads, with cost scaling and affordability implied, makes one scratch the head.
One can talk about looking over horizons, maps, compute power, AI in algorithms, open source, simulators, new sensors, datasets, scaling of costs (original or repair), development costs, security, regulatory bodies, legalities, geography, weather, public acceptance... one could list a gazillion more... but I have not heard anything insightful other than that it's about $$$.

The executive committee of the new Daimler and BMW collaboration on AVs will pick suppliers.
Guess we will know soon.

Is the AV dilemma ultimately about the transfer of legal responsibility from a human driver to other actors, like AI/algorithm or hw-sw actors and aggregators like car manufacturers and suppliers, all of which are governed by select human input?

How and when does the transfer begin?

As AV development pivots to new partnerships to accommodate the complexity and cost of these emerging platforms, and independence may no longer exist due to legal costs more than development costs, are there vectors that can be labeled as the safe standard for regulatory bodies outside the realm of vested statistics?

Is competition truly an arbiter?

Is collaboration truly an arbiter?

Is cost truly an arbiter?

Are the vectors of standardization for safety testing better served by first focusing on legalities?

I am afraid open-source mantras do not have superiority in reducing randomness or the infinite scope of possibilities.

All that changes are timelines.

Everybody new to the field always talks about liability like it's a complex or big issue. It's actually pretty boring. Everybody knows that liability will fall upon the developers/vendors/operators of robocars, not on their passengers or even occasional drivers. Several major companies have declared it directly; the rest all know it.

So the only issue is how to make the vehicles as safe as possible to reduce the total liability. There are some issues at making sure that liability in any given accident does not go crazy out of whack with where it is today. But, just as it is today with insurance, the cost of the liability will be bundled in with the cost of the vehicle or ride.

energy versus time

Is the Hyundai Mobis announcement today at KINTEX another endorsement of cameras, though possibly a loss for Mobileye?

The development budget for Intel-Mobileye's camera-vision-only approach would be interesting to know.

Greetings, new commenter. You're posting all sorts of comments that contain language from the field but don't seem to say very much, or just quote articles. I hope you will move to saying more concrete things (and also adopt a pseudonym instead of "anonymous"). If this is some sort of spam test, know that meaningless posts with links just get deleted or blocked.

Does the new Intel EyeC radar development project in automotive alter the economics of ADAS or SDV sensor suites?

Localization technology with data collection requirements built into every vehicle starting at the earliest possible date would do more to accelerate safety and quality of life than one can imagine. And the technology pays for itself and then some. Why this concept receives little attention is a shame.

The global real-time map could be built and up to date while the AV tech is maturing.

Governments, municipalities, corporations and citizens access benefits immediately that are not debatable.

And parking spaces or signage are low on the benefit list.

Available ADAS bundled with localization alone would save lives.

AEB, ACC, FCW, IHC, LC, LDW, LKA, LC, TJA, TSR, etc., all entering a zone or condition known for certain nuances, act on past intelligence as well as present.

And I have not even left the ADAS arena for other benefits, but can :-) and will.

Localization with data and currently available ADAS is the assistant that is deliverable as we speak that best serves the largest population.

The 2nd LC above should be PCAM.

ANONYMOUS: if Carmera localization data, or Intel REM data, or Netradyne data provides other non-exclusive or exclusive insight [if applied as you claim] beyond SDV localization, including trajectories, road features, the physical environment, and the obvious (e.g. the crest of a hill), this revelation is obscure despite my reading your "DoD" post several times.

The SAIC Kotei Big Data HD map is well along in China, and SAIC does use Intel REM as one of the data sources. SAIC's multi-source acquisition process is also being developed with DeepMap. SAIC will have the complete highway network in China covered plus 33,000 km in urban environments by 2019, per the OpenAutoDrive Jul 2018 meeting.

I look forward to the Toyota Carmera PoC, primarily to see how the camera-vision technology distinguishes itself.

1.4 billion people
200 million private vehicles
22.5 million cars est. to sell in 2019
possibly 150,000km of expressways
Toyota sold 1.47m in 2018

The ID Roomzz by VW will "launch initially" in China in 2021 with Level 4, IQ.Drive. Does China lead the SDV space for most L4 production model series on the motorways in 2021?

Is Aptiv the first Tier 1 from overseas? Aptiv is based in Shanghai with SAIC, NIO, and BMW.

Of Pony.ai's workforce of 80, over 1/3 hail from Tsinghua University.

SDV tests for Pony: "demonstrating 39 capabilities across six categories of tests, and completing 10 days of “holistic” safety and operations evaluations."

"The complexity is good for effective data collection. For example, we have much more data related to bicycle riding in China”

China license plates registry for ICV (intelligent-connected vehicle) needs a public website.

April 18. China Builds Site to Test Autonomous Cars in Highway Conditions
26-kilometer-long testing site in Shandong province.

According to data released by ["www.autoinfo.org.cn"] before January 5, a total of 101 licenses for autonomous vehicle road tests had been issued in China to a total of 32 companies related to self-driving technology, including Internet firms, OEMs and car-sharing platforms, across 14 cities in China.

Baidu and Pony.AI lead in mileage.

Baidu obtained over 50 licenses; most OEMs hold no more than 3 licenses per company.

Beijing with 60 licenses, followed by Chongqing and Shanghai who had gained 8 and 11 licenses respectively. ????

Beijing: 60 licenses to 10 cos., 44 routes, 123 km, tested mileage exceeding 153,600 km by Dec 2018.
The 10 cos.: Baidu (Ford), Tencent, IdriverPlus, Pony.ai, DiDi, NIO, BAIC BJEV, Daimler, Audi, and …

Shanghai: first were SAIC Motor and NIO; then Baidu, Audi, BMW, Mercedes-Benz, BAIC BJEV, TuSimple, Momenta, Pony.ai, and Panda Auto. A 37.2-km-long road.

Chongqing, with complex terrain, has 7 cos.: Changan Automobile, Baidu, FAW Group, Dongfeng Motor (PSA), GAC Group, Geely (Volvo), Foton Auto, and NEV rental platform Panda Auto.

Changchun - FAW

Shenzhen - Tencent

Fuzhou - Baidu, King Long

Baoding - Baidu

Changsha has Baidu late 2019

Tianjin - Baidu

Wuxi - Audi/ SAIC

Localization without real-time maps with intelligent data attached is what you mean, I assume?

There is a limit to what camera vision can extract from the surroundings of immediate value.

Nvidia and Toyota and Intel all reference an intelligent map partner or map layer which is where localization extracts additional value.


check Aurora's design manifesto "Approach to Development"

With Aurora, Toyota, Baidu, Daimler, Volvo, Bosch, ZF and more all using the DRIVE Xavier platform, do Apple, Waymo, Intel, BMW, Ford, Drive.ai etc. all just plateau?

VW using 14 cameras, 11 lasers and seven radars in this week's Hamburg SDV L4 trials launch offers little insight into hw-sw partners. No mention of a compute platform.

Noting which OEMs, Tier 1s and Tier 2s partner with Apple is another point in time worth annotating.

If Baidu launched noticeably more SDV cars in the US, would that indicate anything?

Reading Philip Koopman's research, like

The Human Filter Pitfall for autonomous system safety

on AEB seems to imply localization requires a map as a sensor, and not just geometry.

Aurora's comment / We are building a series of safety measures which identify changes in this information and will ultimately allow vehicles to adapt before maps are updated /

suggests Aurora will struggle just as much as Intel.

Have not seen any review of Nvidia approach to localization, mapping or this SFF.

Not sure what you are describing.

The Human Filter Pitfall for autonomous system safety

Sorry, yes, it requires a map. Did not see the "without" before my original response.

Camera to be used for mapping by Netradyne, working with Hyundai, like Toyota TRI-AD and Intel.

Is Lidar losing appeal for SDV maps?

Cannot believe the Chinese OEMs, via Baidu, Navinfo, etc., do not all migrate to camera-based HD maps as well.

Could lidar lose the battle to cameras to map the unmapped 98% to 99% of global roads?

Would be a great story if the mapping arena creates a new dynamic in the 2019 SDV development space, and is one of the lead paradigm shifts that occurs.

Something simply feels like the SDV world is rethinking localization.

very minor observation, but "sensor technologies" seems to be of note.

Emerging Growth Incubation (EGI) group at Intel is led by Mobileye's Mr. Sagi Ben Moshe who is Senior Vice President, Sensor Technologies at Mobileye and also
Vice President, Emerging Growth Incubation (EGI) Group, Intel.

Ben Moshe led Intel RealSense.

EGI is / VR/AR devices, smartphones, drones & autonomous driving / focused.

Also, / Intel EGI develops the next generation of high performance long range scanning LIDAR systems /

Also. / Job ID: JR0097624 Job Category: Engineering Primary Location: Petach-Tiqwa, IL Other Locations: Job Type: Experienced Hire Senior Automation Test Engineer Job Description
Intel EyeC is seeking for a Senior Automation Test engineer to lead the Automation tests throughout validation and system integration of RADAR. Experience in Plan, design, develop and execute automation infrastructure. Server, Database, automation tests, Analysis tools, GUI.
You will lead the automation test requirements of all product validation phases Post-Silicon, Antenna and Full Product End to End and analysis tools for SW regression tests.
You will work with system integration, validation and SW engineers side by side during complex RADAR development.
You will be at the forefront of a new era of computing, communication and automotive involving state-of-the-art technologies, and will be in charge to develop new state of the art methods for validating these complex products to a very high level of quality.
EyeC organization is engaged in the development of products based on millimeter Waves technologies, and new innovations are created to enable exciting new capabilities and use cases. The organization develops a new Radar sensor based on a disruptive architecture and a state of the art solution delivering best in class performance. The organization includes highly skilled engineers in all areas of System, Si, RF, HW, Algorithms, SW&FW, Production, etc.
The group is located in Petach Tikva. /

Intel+Mobileye working on a branded Lidar/Radar suite ?


Japanese and Korean OEMs select cameras for mapping. China next.

Best description of the project:

Intel EyeC is seeking for a Senior System Integration and Validation Architect with an extensive experience in integrating and validating Radar Systems throughout the entire process of a new product development.
The validation will follow the product development phases of Simulations and Pre-Silicon, Silicon level, Board level and Full Product End to End.
As the validation architect, you will define and design a complex Integration and validation setups using state of the art RADAR test equipment's, design RADAR simulators to cover both lab tests and field tests toward RADAR product validation vs product goals, covering both functional, performance, power, stability of a full product.
You will be at the forefront of a new era of computing, communication and automotive involving state-of-the-art technologies, and will be in charge to develop new state of the art methods for validating these complex products to a very high level of quality.
EyeC is engaged in the development of products based on millimeter Waves technologies, and new innovations are created to enable exciting new capabilities and use cases. The organization develops a new Radar sensor based on a disruptive architecture and a state of the art solution delivering best in class performance. The organization includes highly skilled engineers in all areas of System, Si, RF, HW, Algorithms, SW&FW, Production, etc.
The group is located in Petach Tikva.

camera will be used for mapping.

Lidar a new line of business

From Glassdoor, which spells out the EyeC group:

EyeC Radar: The EyeC organization is working on delivering innovative products for Autonomous vehicles that enable true redundancy for safety and a more comfortable driving experience.

Localization and mapping is becoming a 2019 headline act.

redundancy for safety and a more comfortable driving experience.

Intel competing in radar space must be an outcome of unusual IP from R&D. Radar has been around forever. EyeC has to be about edge computing, AI, programmability, 2D/3D etc., but what possible IP warrants this undertaking. RealSense VIO / V-SLAM is not revolutionary.

Your localization and mapping combined value seems to be a possible additional impetus beyond redundancy, based on some unknown IP.

This news still

Hiring for EyeC as early as Sep 2017, possibly August.

Toyota TRI-AD's selection of a camera-based solution for HD maps has to be a statement to the auto industry, though I'm not entirely sure what all it encompasses. Even though still a PoC with Carmera, imagine the internal dialogue inside DeepMap or Civil Maps or Aurora or Baidu sizing up the PoC.

The Dynamic Map Platform company merged with Ushr (GM's SDV map), and it looks like Softbank will be a Carmera partner/investor as mapping begins in H2 of 2019, per press referencing Softbank.

"DMP and Softbank create real-time map generation using 5G for the future era of autonomous driving"

see "Developments to Date and Future Plans at Dynamic Map Platform" pdf as well.

Feb. 22, 2019. DMP
Demonstration experiment on high-definition 3D map "Dynamic Map" for autonomous driving.

This does make mapping slightly different now.

Confused: Softbank/DMP use lidar, Carmera/Toyota use camera.

Not sure if Toyota TRI-AD AMP / Carmera is a DMP/Softbank partner now, unless AMP has a multitude of localization layers like HERE.

Softbank, Toyota and Denso, with a 14 percent interest in Uber, may incline Uber to look at Carmera and DMP. It was just announced that Uber's head of visualization, Nico Belmonte, is joining Mapbox as GM of Maps.

Uber was a large partner with Mapbox. Is Softbank looking at Mapbox?

Maps again with cameras - Ambarella and Momenta Unveil HD Mapping

Using CV22AQ, Momenta is able to use a single monocular camera input to generate two separate video outputs, one for vision sensing (perception of lanes, traffic signs, and other objects), and another for feature point extraction for self-localization and mapping (SLAM) and optical flow algorithms.

So that is 4 new intros in 60 days

blimey bloke.

Intel post from SAE World Congress today.

"While RSS was originally envisioned for AVs, we can apply it to ADAS solutions *NOW WITH IMMEDIATE IMPACT*. This is what I believe is the next revolution in ADAS."

"With a safety model that is fully measurable, interpretable and enforceable, we wondered: *WHY WAIT* for AVs to experience the life-saving benefits of this new reality? Let's find a way to allow human drivers to benefit from RSS"
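The RSS model referenced in these quotes rests on a closed-form safe-following-distance rule: assume the lead car brakes as hard as physically reasonable, while the following car reacts only after a response time and then brakes gently. A minimal sketch of that published longitudinal formula, with illustrative parameter values that are my assumptions, not Mobileye's calibrated numbers:

```python
def rss_min_gap(v_rear, v_front, rho=1.0, a_max_accel=3.5,
                b_min_brake=4.0, b_max_brake=8.0):
    """Minimum safe longitudinal gap (m) per the RSS formula.

    v_rear, v_front: speeds (m/s) of the following and lead vehicles.
    rho: response time (s); a_max_accel: worst-case acceleration of the
    follower during the response; b_min_brake: the follower's gentle
    braking; b_max_brake: the lead car's hardest braking.
    Parameter defaults are illustrative assumptions only.
    """
    v_resp = v_rear + rho * a_max_accel      # follower's speed after responding
    gap = (v_rear * rho                      # distance covered while reacting
           + 0.5 * a_max_accel * rho ** 2    # ...while possibly accelerating
           + v_resp ** 2 / (2 * b_min_brake)  # follower's braking distance
           - v_front ** 2 / (2 * b_max_brake))  # lead car's braking distance
    return max(0.0, gap)                     # never negative

# Highway example: both cars at 30 m/s (~108 km/h)
print(round(rss_min_gap(30.0, 30.0), 1))  # → 115.8
```

The point of the quotes above is that this gap is measurable with today's ADAS sensors, which is why RSS can be enforced on human-driven cars without waiting for full autonomy.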

OK, so for Mobileye the first L2++ Vision Zero ADAS platform arrives commercial-facing rather than consumer-facing, via Ford and VW. Bundling 8 or more cameras in a production vehicle is no minor commitment, so maybe a van or truck arrives first.

Pseudo-autonomy = Vision Zero ADAS = driver monitoring, surround-vision, REM, RSS, forward camera. Not a revelation outside the new emphasis on driver monitoring. Will Intel develop this subsystem internally, purchase technology, or leave it open to the marketplace?

A USA-based OEM announcing RSS support in 2019 for a future USA production series would be a noteworthy event.

Curiously, the new safety tech proposal to be made mandatory by the EU for vehicles by 2022 includes driver monitoring.

At WCX, Intel's comments on pedestrian behavior, movements and trajectory stood out as an intentional nuance, as did the absence of comments on driver monitoring.

Europe 2022
For cars, vans, trucks, and buses: warning of driver drowsiness and distraction (e.g. smartphone use while driving), intelligent speed assistance, reversing safety with camera or sensors, and data recorder in case of an accident (‘black box’).

For cars and vans: lane-keeping assistance, advanced emergency braking, and crash-test improved safety belts.

For trucks and buses: specific requirements to improve the direct vision of bus and truck drivers and to remove blind spots, and systems at the front and side of the vehicle to detect and warn of vulnerable road users, especially when making turns.

Hyundai Mobis going in-house has to create problems in Israel

Hyundai Mobis, announced during its meeting at the KINTEX Seoul Motor Show that the company would be the first in Korea to secure the global-calibre ‘deep learning-based high-performance image recognition technology’ for recognising vehicles, pedestrians and geographical features of the road by the end of the year, and begin to mass-produce front camera sensors supporting autonomous driving in 2022.

The ‘deep learning-based image recognition technology' consists of ‘image recognition artificial intelligence' that uses the automation technique to learn image data. If Hyundai Mobis acquires the technology this year, the company will possess most of the software and hardware technologies applied to autonomous driving camera sensors. In particular, it is planning to elevate the object recognition performance, which is the essence of the image recognition technology, to a level equal to that of global leaders.

“The deep learning computing technology, capable of learning a trillion units of data per second, is greatly improving the quality and reliability of image recognition data,” said Mr. Lee Jin-eon, Head of the Autonomous Driving Development Department of Hyundai Mobis, at this meeting. He added, “The amount of manually collected data used to determine the global competitiveness of autonomous driving image recognition, but not anymore.”

To apply the deep learning technology to cameras, Hyundai Mobis will also reinforce collaboration with Hyundai Motor Company. The company is planning to apply the deep learning-based image recognition technology not only to the front camera sensors for autonomous driving, but also to the 360° Surround View Monitor (SVM) through joint development with global automakers.

If the image recognition technology for detecting objects is applied to the Surround View Monitor, which has been used for parking assistance, automatic control will become possible, involving emergency braking to prevent front and broadside collisions during low-speed driving. Hyundai Mobis is planning to secure differentiated competitiveness in cameras and diversify its product portfolios by expanding the application of the image recognition technology.

In addition, the company will combine this image recognition technology with the radar that it already proprietarily developed, enhance the performance of sensors through data convergence (sensor fusion) between cameras and radars, and enhance its technological competitiveness in autonomous driving.

To this end, Hyundai Mobis doubled the number of image recognition researchers in the technical centres in Korea and abroad over the past 2 years. Currently, it will increase the number of test vehicles, used exclusively for image recognition, among the 10 or more ‘M.Billy' autonomous driving test cars, which Hyundai Mobis is operating around the world, from 2 to 5 by the end of this year. The company is also planning to increase investment in related infrastructure by 20% each year.

Hyundai Mobis seems intentionally vague in this announcement, though most industry veterans could figure it out after several conversations with others.

The description is indeed confusing.
Alluding to "joint development with global automakers" stands out as peculiar if the technology were physically part of a sensor or compute platform. Carmera or Toyota developing a custom ASIC or custom FPGA infused with software does not sound reasonable.

And a VPU processes; it does not "learn".

Scanning one of the better sources for image sensors called Image Sensors World generates no real insight.

The description does not fit Allegro.ai. A neophyte like myself imagines IBM PowerAI Vision for Automotive as an exemplary fictitious reference model and wonders if this is a translation issue. For that matter, Amazon DeepLens or Google Inception are similar.

The news release makes no sense, especially given the technology appears not ready to ship to customers yet.

And because of economics, an edge computing addition makes no sense. The compute platform handles ADAS and SDV host algorithms.

Marrying an AI image detection framework onto a neural net accelerator platform and automating object detection does warrant excitement.

Bundling object detection and image recognition as a plug-in algorithm in a camera sensor package is not what one would interpret from news release.

ODaaS - Google Inception, Amazon DeepLens, IBM PowerAI Vision.

Mobileye told VentureBeat that close to a million cars are funneling mapping data back to Mobileye’s cloud platform, in addition to 20,000 aftermarket units. Jan 2019

If the average driver drove 1,000 unique miles per year, that is a billion miles of mapped motorways.
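The back-of-envelope math is worth making explicit; the figures are this thread's assumptions, not Mobileye's numbers:

```python
# Fleet-mapping estimate from the Mobileye quote above (assumed figures).
cars = 1_000_000               # "close to a million cars" feeding REM data
unique_miles_per_car = 1_000   # assumed unique miles driven per car per year

total_miles = cars * unique_miles_per_car
print(f"{total_miles:,} miles per year")  # → 1,000,000,000 miles per year

# Caveat: fleet routes overlap heavily, so the count of unique *road*
# miles mapped would be far lower than total miles driven.
```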

But what good is it just sitting there?

Does the BMW and GM blockchain interest in the Mobility Open Blockchain Initiative (MOBI) save Intel's SDV program in time?

Or does Toyota change things?

Speaking of Mobileye's biggest advantage, Tong Lifeng believes it is the core machine-vision algorithms; there is an algorithm team of more than 800 people in Israel.

In Israel, there is a 100-person L4 self-driving team (source: EEAsia).

Only 100 on the SDV AV team?

Wonder how many at Aurora or Waymo

Always useful to see headcount comparison, even if skewed.

Toyota is 1900+ in TRI group.

Aurora now has about 200 employees, per Recode.

No breakthroughs here but "intersection detection" could become part of a presentation by the Vision Zero ADAS AEB / LKA improvement.

Intelligent Driving Laboratory: Emphasis on learning from traffic accidents, simulation and safety verification.

The Intelligent Driving Laboratory was established in early 2018. It is dedicated to research on key autonomous-driving technologies, to the deep integration of autonomous driving with intelligent transportation, smart cities and other future directions, and to end-to-end technology research in which each area reinforces the others.

Wu Xiangbin, Director of Intel China Research Institutes Intelligent Driving Laboratory

According to Wu Xiangbin, director of the lab, its current research emphasizes learning from driving accidents: through automated in-depth accident analysis, automatic scene reconstruction and key-scene library generation, integrated with the autonomous-vehicle simulation tool, it accelerates performance-simulation iteration and safety verification of the driving algorithms, and finally generalizes from them.

For autonomous-driving road simulation, the laboratory has built random tests for routine scenes, key-scene tests, and adaptability tests for high-risk areas. The test variables include extreme weather, lighting conditions, environmental visibility, road geometry and other combined conditions. In addition, Intel has sponsored an open-source driving simulator.

For vehicle-road coordination, traffic efficiency and safety are improved through intelligent real-time, full-view video analysis of key traffic scenarios. Wu Xiangbin also said that intersections currently account for 50% of the world's traffic accidents, so the laboratory will start with intersections in its research on intelligent traffic junctions.

Meanwhile, the Intel intelligent connected vehicle university cooperative research center officially launched in November 2018. As its connecting hub, the Intelligent Driving Laboratory will cooperate over the next three to five years with six research teams, including Tsinghua University, the Institute of Automation of the Chinese Academy of Sciences and Tongji University, to promote large-scale practical deployment of autonomous driving. The center will study security, open data sets, human-machine interfaces/regulation and policy, advanced algorithms and architectures, networked vehicles and intelligent infrastructure.

The riddle is solved.

StradVision, which possesses deep learning-based camera image detection technology per Hwang Jae-ho

All the information can be referenced back to May 2018, when the radar projects started. Very detailed background information if you look.

Use both headlines for details:
Hyundai Mobis Aims to Develop All Autonomous Driving Sensors by 2020

Hyundai Mobis Invests in an AI Sensor Startup for Developing Deep Learning Cameras

This may explain why Intel EyeC radar group started hiring in August 2018.

Is EyeC radar a 4D imaging RADAR like Arbe's?

Do the talent wars cause the design of mmWave 4D imaging RADAR to linger like 5G smartphone modems?


More inclined to believe Daimler and not BMW, and Bosch loses out.

If a combined camera and radar suite is key to success, as Mobis states, Intel's motivation for developing EyeC Radar could be either economic or performance optimization.

"can implement optimal performance for autonomous driving only by securing all of the three technologies (perception, decision and control)"

StradVision has won two orders from a Tier-1 company to supply StradVision’s object detection software for a premium German automotive manufacturer

Mobileye design or program wins in 2023-2024 will take a punch, given likely losses of design wins to Hyundai Mobis.

April 11. 2019
"How to Run Millions of Self Driving Car Simulations on GCP"
on YouTube

CFIUS objected to Navinfo investing in HERE, so Toyota TRI-AD's
Autonomous Mapping Platform (AMP) ambition to open-source an HD map is great fodder for conspiracists.

For that matter, how and when Waymo, the German auto industry, GM, Apple, Intel+Mobileye, or Tesla react is hard to predict.

Quote about TRI-AD in interview in March 2019

"It is well known that Toyota is developing an open source autonomous driving HD map with the intention of grabbing 20 billion Markets"

SLAMcore, and Perceptin.io founder Shaoshan Liu, have been in the VIO and V-SLAM space for some time. Is Intel RealSense R&D also in agreement about the possible disruptive capability?

If the OEMs knew, where is the press on this story?


"The majority of modern visual SLAM systems are based on tracking a set of points through successive camera frames and using these to triangulate their 3D position; while simultaneously using estimated point locations to calculate the camera pose that could have observed them. If computing position without first knowing location initially seems unlikely, finding both simultaneously looks like an impossible chicken and egg problem. In fact, by observing a sufficient number of points it is possible to solve for both structure and motion concurrently. By carefully combining measurements of points made over multiple frames it is even possible to retrieve accurate position and scene information with just a single camera."
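The triangulation step described in that quote can be illustrated with the simplest possible case (a toy sketch, not any particular SLAM system): for a point tracked between two rectified views separated by a known baseline, depth follows directly from the pixel disparity, Z = f·b/d.

```python
def triangulate_depth(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth of a point tracked across two rectified views: Z = f*b/d.

    focal_px: focal length in pixels; baseline_m: camera translation in
    metres between the two views; x_*_px: the point's horizontal image
    coordinate in each view. Values below are made up for illustration.
    """
    disparity = x_left_px - x_right_px  # how far the point shifted (px)
    if disparity <= 0:
        raise ValueError("point must shift between views to triangulate")
    return focal_px * baseline_m / disparity

# A point at 700 px focal length and 0.5 m baseline, shifting 10 px:
print(triangulate_depth(700, 0.5, 400, 390))  # → 35.0 (metres)
```

Real visual SLAM solves the general version of this for arbitrary camera motion, jointly estimating the camera pose and many point depths, but the geometric core is the same: baseline plus parallax yields structure.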

V-SLAM still needs GNSS to initialize, so Toyota TRI-AD needs that tech, as well as OTA tech. Not sure if Mobileye 8 Connect has GNSS inside, though it has some type of wireless/OTA tech to expedite REM collection transfer. Intel does have both GNSS and modems in its in-house portfolio. Not sure how much of 8 Connect is Intel inside, though.

I wonder how many domain experts from the auto industry work in Israel for Intel.

/ The Intel Capital president said the semiconductor giant will continue competing with Nvidia in Israel as it makes strides in autonomous vehicle technology led by Mobileye.

"We are well ahead of everyone else, and though we may not have fully autonomous cars until at least 2021, we will get there first," Brooks said. /

From 2019 Intel Capital Global Summit on April 1, 2019.

"Bloomberg quotes Erez Dagan, executive VP of strategy for Intel's Mobileye unit, as having said at conference in Detroit that Ford has signed on to join Mobileye's Road Experience Management, or REM, platform."

SAE World Congress reporters could pry out more from Intel this week.

VW and Ford could be ironing out another partnership beyond vans and trucks, or even Argo.ai.

Mobileye China presentation in recent EEAsia article shows Ford as well, so it could be a China arrangement.

Comments from Argo.ai or Civil Maps may not be proper here so do not expect any. Volkswagen and Ford are still in talks per Ford CEO Hackett yesterday.

The CES 2019 slide #7 could be a new VW project or a reference to the Israel project.

Today's Ford CTO Ken Washington interview of Mobileye at SAE is available as a video replay.

With today's re-alignment of Ford, with Jim Farley as president of new businesses, technology & strategy starting May 1, I personally do not expect much until then for legal reasons.

Will the EyeC engineering samples rollout coincide with EyeQ6?

EyeQ6 2022 production estimate.
Unveil / fab production / auto production model year:
EyeQ4 - Mar 2015 / Q2 2016 / Q1 2018
EyeQ5 - Jan 2018 / Jan 2019 / Q1 2020
EyeQ6 - est. Q4 2019 / est. Q3 2020 / est. Q1 2022

Does EyeQ5 go to production in H1 of next year? The previous post has a typo.
Bit of a stretch, so I doubt it.

H2 likely given SDK and on-die Atom.

"Toyota's budget and technology resources are inexhaustible, and it resolutely refuses to adopt Mobileye's technology. Toyota has always claimed that it could achieve better results independently than those of Mobileye, and therefore has no need to tie itself to the Israeli company's closed system. Since Toyota has close ties with a number of Japanese tier-1 suppliers, it can mobilize the awesome development resources of the entire Japanese industry for its needs."

Shie Mannor leads a new Technion effort (Oct 9, 2018), but on November 7 Mannor joined Ford as well, to lead a new SAIPS division funded with $12.5m, designing a decision-making system for Ford SDVs. Mannor had joined SAIPS in August 2018.

Ford bought SAIPS in 2016.
Yes, 2016.

Welcome to academia.

Toyota, Cortica, Perceptive Automata. SLAMcore, Renesas .......

The Toyota SDV Lexus demo in late 2019, and the 2020 rollout of a pilot, is on its way.

Intel has a true competitor.

Intel needs to show a breakthrough in the SDV space.

Though Intel+Mobileye is not standing still, I cannot imagine what a breakthrough would be in localization, mapping and data collection outside of RSS adoption.

