Nvidia simulator and Safety Force Field and other news from GTC
Submitted by brad on Wed, 2019-03-20 10:05
This week I am at the Nvidia GPU Technology Conference, which has become a significant conference for machine learning, robots and robocars.
Here is my writeup on a couple of significant announcements from Nvidia -- a new simulation platform and a "safety force field" minder for robocar software, along with radar localization and Volvo parking projects.
Comments
MPH
Thu, 2019-04-25 07:05
Toyota TRI-AD / SoftBank / NTT / Carmera / Maxar / DMP
Toyota TRI-AD / SoftBank / NTT / Carmera / Maxar / DMP PoCs and projects bypass Lidar in the mapping framework.
MPH
Thu, 2019-04-25 10:36
TRI-AD, Maxar, NTT: Google watching closely?
Google watching closely?
MPH
Thu, 2019-04-18 06:37
SoftBank, Toyota, Denso with Uber - Belmonte exit
SoftBank, Toyota, and Denso, with their 14 percent interest in Uber's self-driving unit, may incline Uber to look at Carmera and DMP. It was also just announced that Uber's head of visualization, Nico Belmonte, is joining Mapbox as GM of Maps.
MPH
Thu, 2019-04-18 06:49
Uber was a large partner with Mapbox. Is SoftBank
Uber was a large partner with Mapbox. Is SoftBank looking at Mapbox?
MPH
Tue, 2019-04-23 06:40
Sony delays 8-Mpixel Automotive CMOS Sensor
Sony delay - wonder whether Toyota, Nvidia, or Intel design-change requests are the primary or secondary reason.
MPH
Tue, 2019-04-16 06:26
Maps again with cameras - Ambarella and Momenta Unveil HD Mapping
Maps again with cameras - Ambarella and Momenta Unveil HD Mapping
Using CV22AQ, Momenta is able to use a single monocular camera input to generate two separate video outputs, one for vision sensing (perception of lanes, traffic signs, and other objects), and another for feature point extraction for self-localization and mapping (SLAM) and optical flow algorithms.
So that is 4 new intros in 60 days
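For readers who want the gist of that dual-output split, here is a minimal sketch, assuming nothing about Momenta's actual pipeline: one monocular frame drives two independent paths, a perception stand-in and the feature/optical-flow extraction used for SLAM and self-localization. All function choices below are illustrative assumptions, not the CV22AQ implementation.

```python
# Illustrative sketch only -- not Momenta's CV22AQ pipeline. One monocular
# frame feeds two independent outputs: a perception path and a
# feature-point/optical-flow path for SLAM and self-localization.
import cv2

def process_frame(prev_gray, gray):
    # Path 1: vision sensing. A Canny edge map stands in for the
    # proprietary lane/sign/object networks.
    perception_out = cv2.Canny(gray, 100, 200)

    # Path 2: feature-point extraction plus sparse optical flow,
    # the raw material for SLAM and ego-motion estimation.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    tracked = nxt[status.ravel() == 1]  # keep successfully tracked points
    return perception_out, tracked
```

The point is the topology: one camera, two independent consumers of the same frames.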
MPH
Wed, 2019-04-10 18:26
blimey bloke.
blimey bloke.
Intel post from SAE World Congress today.
"While RSS was originally envisioned for AVs, we can apply it to ADAS solutions *NOW WITH IMMEDIATE IMPACT*. This is what I believe is the next revolution in ADAS."
"With a safety model that is fully measurable, interpretable and enforceable, we wondered: *WHY WAIT* for AVs to experience the life-saving benefits of this new reality? Let's find a way to allow human drivers to benefit from RSS"
OK, so for Mobileye the first L2++ Vision Zero ADAS platform may be a commercial-facing rather than a consumer-facing production model, arriving first via Ford and VW. Bundling 8 or more cameras into a production vehicle is no minor commitment, so maybe a van or truck arrives first.
MPH
Thu, 2019-04-11 10:08
Vision Zero ADAS, driver monitoring, EU 2022 req
Pseudo-autonomy = Vision Zero ADAS = driver monitoring, surround vision, REM, RSS, forward camera. Not a revelation outside the new emphasis on driver monitoring. Will Intel develop this subsystem internally, purchase technology, or leave it open to the marketplace?
A US-based OEM announcing RSS support in 2019 for a future US production series would be a noteworthy event.
Curiously, the new safety tech package proposed to be made mandatory by the EU for vehicles by 2022 includes driver monitoring.
At WCX, Intel's comments on pedestrian behavior, movements and trajectory stood out as an intentional nuance, as did the absence of comments on driver monitoring.
Europe 2022
For cars, vans, trucks, and buses: warning of driver drowsiness and distraction (e.g. smartphone use while driving), intelligent speed assistance, reversing safety with camera or sensors, and data recorder in case of an accident (‘black box’).
For cars and vans: lane-keeping assistance, advanced emergency braking, and crash-test improved safety belts.
For trucks and buses: specific requirements to improve the direct vision of bus and truck drivers and to remove blind spots, and systems at the front and side of the vehicle to detect and warn of vulnerable road users, especially when making turns.
Anonymous
Mon, 2019-04-08 14:23
Hyundai Mobis going in-house
Hyundai Mobis going in-house has to create problems in Israel
Hyundai Mobis announced during its meeting at the KINTEX Seoul Motor Show that the company would be the first in Korea to secure the global-calibre ‘deep learning-based high-performance image recognition technology’ for recognising vehicles, pedestrians and geographical features of the road by the end of the year, and begin to mass-produce front camera sensors supporting autonomous driving in 2022.
The ‘deep learning-based image recognition technology’ consists of ‘image recognition artificial intelligence’ that uses the automation technique to learn image data. If Hyundai Mobis acquires the technology this year, the company will possess most of the software and hardware technologies applied to autonomous driving camera sensors. In particular, it is planning to elevate the object recognition performance, which is the essence of the image recognition technology, to a level equal to that of global leaders.
“The deep learning computing technology, capable of learning a trillion units of data per second, is greatly improving the quality and reliability of image recognition data,” said Mr. Lee Jin-eon, Head of the Autonomous Driving Development Department of Hyundai Mobis, at this meeting. He added, “The amount of manually collected data used to determine the global competitiveness of autonomous driving image recognition, but not anymore.”
To apply the deep learning technology to cameras, Hyundai Mobis will also reinforce collaboration with Hyundai Motor Company. The company is planning to apply the deep learning-based image recognition technology not only to the front camera sensors for autonomous driving, but also to the 360° Surround View Monitor (SVM) through joint development with global automakers.
If the image recognition technology for detecting objects is applied to the Surround View Monitor, which has been used for parking assistance, automatic control will become possible, involving emergency braking to prevent front and broadside collisions during low-speed driving. Hyundai Mobis is planning to secure differentiated competitiveness in cameras and diversify its product portfolios by expanding the application of the image recognition technology.
In addition, the company will combine this image recognition technology with the radar that it already proprietarily developed, enhance the performance of sensors through data convergence (sensor fusion) between cameras and radars, and enhance its technological competitiveness in autonomous driving.
To this end, Hyundai Mobis doubled the number of image recognition researchers in its technical centres in Korea and abroad over the past 2 years. It will also increase the number of test vehicles used exclusively for image recognition, among the 10 or more ‘M.Billy’ autonomous driving test cars Hyundai Mobis operates around the world, from 2 to 5 by the end of this year. The company is also planning to increase investment in related infrastructure by 20% each year.
Anonymous
Mon, 2019-04-08 16:41
Hyundai Mobis seems intentionally allusive in this announcement
Hyundai Mobis seems intentionally allusive in this announcement, though most industry veterans could figure it out after several conversations with others.
MPH
Mon, 2019-04-08 17:54
The description is indeed confusing
The description is indeed confusing.
Alluding to "joint development with global automakers" stands out as peculiar if the technology were physically part of a sensor or compute platform. Carmera or Toyota developing a custom ASIC or custom FPGA infused with software does not sound reasonable.
MPH
Mon, 2019-04-08 18:06
And a VPU processes and does not "learn"
And a VPU processes and does not "learn"
MPH
Tue, 2019-04-09 06:09
Scanning one of the better
Scanning one of the better sources for image sensors, Image Sensors World, generates no real insight.
MPH
Tue, 2019-04-09 10:20
ODaaS - Google Inception, Amazon DeepLens, IBM PowerAI Vision
The description does not fit Allegro.ai. A neophyte like myself imagines IBM PowerAI Vision for Automotive as an exemplary fictitious reference model and wonders if this is a translation issue. For that matter, Amazon DeepLens and Google Inception are similar.
The news release makes no sense, especially given the technology appears not ready to ship to customers yet.
And because of economics, an edge computing addition makes no sense. The compute platform handles ADAS and SDV host algorithms.
Marrying an AI image detection framework onto a neural net accelerator platform and automating object detection does warrant excitement.
Bundling object detection and image recognition as a plug-in algorithm in a camera sensor package is not what one would interpret from the news release.
Anonymous
Tue, 2019-04-09 12:22
close to a million cars funneling mapping data to Mobileye
Mobileye told VentureBeat that close to a million cars are funneling mapping data back to Mobileye’s cloud platform, in addition to 20,000 aftermarket units (Jan 2019).
If the average driver drove 1,000 unique miles per year, that is a billion miles of mapped motorways (1,000,000 cars × 1,000 miles).
But what good is it just sitting there?
Does the BMW and GM blockchain interest in the Mobility Open Blockchain Initiative (MOBI) save the Intel SDV program in time?
Or does Toyota change things?
Anonymous
Tue, 2019-04-09 13:22
Only 100 on the L4 SDV team?
Speaking of Mobileye's biggest advantage, Tong Lifeng believes it is the core machine-vision algorithms, with an algorithm team of more than 800 people in Israel.
In Israel, there is a 100-person L4 self-driving team (source: EEAsia).
Only 100 on the SDV AV team?
Wonder how many at Aurora or Waymo
Always useful to see headcount comparison, even if skewed.
Toyota is 1900+ in TRI group.
MPH
Tue, 2019-04-09 18:22
Aurora now has about 200 employees
Aurora now has about 200 employees, per Recode.
MPH
Mon, 2019-04-15 15:56
China Intelligent Driving Lab stat on intersections
No breakthroughs here, but "intersection detection" could become part of a presentation on the Vision Zero ADAS AEB/LKA improvements.
ICRI-IACV
Intelligent Driving Laboratory: Emphasis on learning from traffic accidents, simulation and safety verification.
The Intelligent Driving Laboratory was established in early 2018. It is dedicated to research on key autonomous driving technologies, to the deep integration of autonomous driving with intelligent transportation, smart cities and other future directions, and to end-to-end technology research in which each reinforces the other.
Wu Xiangbin, Director of the Intel China Research Institute's Intelligent Driving Laboratory
According to Wu Xiangbin, director of the lab, current research emphasizes learning from driving accidents: through automatic in-depth accident analysis, automatic scene reconstruction and key-scene library generation, all integrated with the autonomous-vehicle simulation tool, the lab accelerates performance simulation iteration and safety verification of the driving algorithm, and finally generalizes from the results.
For road simulation of autonomous driving, the laboratory has built random tests for routine scenes, key-scene tests, and adaptability tests for high-risk areas. The test variables include extreme weather, lighting conditions, environmental visibility, road geometry characteristics and other combined conditions. In addition, Intel has sponsored an open-source autonomous driving simulator.
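As a toy illustration of that combinational testing, here is a sketch of how such a scenario matrix could be enumerated. The variable names and values are my own assumptions for illustration, not the lab's actual scene library.

```python
# Toy sketch of combinational scenario generation for simulation testing.
# The variables mirror those named above (weather, lighting, visibility,
# road geometry); the values are invented for illustration.
from itertools import product

WEATHER    = ["clear", "rain", "snow", "fog"]
LIGHTING   = ["day", "dusk", "night"]
VISIBILITY = ["high", "low"]
GEOMETRY   = ["straight", "curve", "intersection", "merge"]

def scenario_library():
    """Yield every combination as one candidate test scenario."""
    for w, li, v, g in product(WEATHER, LIGHTING, VISIBILITY, GEOMETRY):
        yield {"weather": w, "lighting": li, "visibility": v, "geometry": g}

print(sum(1 for _ in scenario_library()))  # 4 * 3 * 2 * 4 = 96 scenarios
```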
For vehicle-road coordination, traffic efficiency and safety are improved through intelligent real-time, full-view video analysis of key traffic scenarios. Wu Xiangbin also said that intersections currently account for 50% of the world's traffic accidents, so the laboratory will start with intersections in studying the construction of intelligent traffic intersections.
At the same time, an Intel joint university cooperative research centre on intelligent connected vehicles was officially launched in November 2018. Acting as the connecting hub, the Intelligent Driving Laboratory will cooperate with six research teams, including Tsinghua University, the Institute of Automation of the Chinese Academy of Sciences and Tongji University, over the next three to five years to promote large-scale practical deployment of autonomous driving. The centre will study security, open data sets, human-machine interface/regulation and policy, advanced algorithms and architectures, networked vehicles and intelligent infrastructure.
MPH
Wed, 2019-05-08 21:28
Mobileye "24 mil cars mapping by 2022" and new OEMs
Mobileye will have "24 million cars mapping and sending data by 2022".
In addition, more are possible via the "number of OEM contracts in pipeline" (from today's Investor Day), as the 24 million does not include the new OEM pipeline.
Note that at the OS kickoff this week in the UK, interviews state several million already. So in 2 1/2 years, current contracts minimally add an additional 22 million vehicles with REM harvesting capability for mapping.
Shashua officially announces Ford with Nissan, BMW, and VW.
Also, Intel going "all in" in TaaS as a "Full Service Provider" for global RoboTaxi market. Intel building a "full stack MaaS" global solution.
Some slides as well.
Investor Day video
https://youtu.be/V55ZhZJ2FFM
Anonymous
Mon, 2019-05-13 11:52
24M vehicles by 2023, not 2022
REM contracts with OEMs: over 24M vehicles by 2023, not 2022.
source: press kit pdf
MPH
Tue, 2019-04-09 14:23
Hyundai Mobis, StradVision, Hwang Jae-ho
The riddle is solved.
StradVision "possesses deep learning-based camera image detection technology," per Hwang Jae-ho.
All the information can be referenced back to May 2018, when the radar projects started. Very detailed background information if you look.
Use both headlines for details:
Hyundai Mobis Aims to Develop All Autonomous Driving Sensors by 2020
Hyundai Mobis Invests in an AI Sensor Startup for Developing Deep Learning Cameras
This may explain why Intel EyeC radar group started hiring in August 2018.
MPH
Thu, 2019-04-18 06:02
Intel EyeC radar
Is EyeC radar a 4D imaging radar like Arbe's?
Do the talent wars cause the design of mmWave 4D imaging radar to linger, as with 5G smartphone modems?
MPH
Tue, 2019-04-09 15:08
If Daimler and not BMW, did Bosch lose out
More inclined to believe Daimler and not BMW, and Bosch loses out.
If a combined camera and radar suite is key to success, as Mobis states, Intel's motivation for developing EyeC radar could be either economics or performance optimization.
"can implement optimal performance for autonomous driving only by securing all of the three technologies (perception, decision and control)"
StradVision has won two orders from a Tier-1 company to supply StradVision’s object detection software for a premium German automotive manufacturer.
Mobileye design or program wins in 2023-2024 will take a punch, given likely design-win losses to Hyundai Mobis.
MPH
Fri, 2019-04-12 15:31
NEW Google Cloud simulation for Cruise SDV
April 11, 2019
"How to Run Millions of Self Driving Car Simulations on GCP"
on YouTube
Anonymous
Sat, 2019-08-24 18:35
toyota vision issues
hard to swallow
http://image-sensors-world.blogspot.com/2019/08/chesearchinchina-on-automotive-vision.html?m=1
MPH
Wed, 2019-04-03 11:38
CFIUS objected to Navinfo
CFIUS objected to Navinfo investing in HERE, so Toyota TRI-AD's Autonomous Mapping Platform (AMP) ambition to open-source an HD map is great fodder for conspiracists.
For that matter, how and when Waymo, the German auto industry, GM, Apple, Intel+Mobileye, or Tesla react is hard to predict.
MPH
Sun, 2019-04-07 13:29
SLAMcore and PerceptIn
Quote about TRI-AD from an interview in March 2019:
"It is well known that Toyota is developing an open-source autonomous driving HD map with the intention of grabbing a 20 billion market."
SLAMcore and PerceptIn founder Shaoshan Liu have both been in the VIO and V-SLAM space for some time. Is Intel RealSense R&D also in agreement on the possible disruptive capability?
If the OEMs knew, where is the press on this story?
MPH
Sun, 2019-04-07 15:33
SLAMcore explains v-slam
"The majority of modern visual SLAM systems are based on tracking a set of points through successive camera frames and using these to triangulate their 3D position; while simultaneously using estimated point locations to calculate the camera pose that could have observed them. If computing position without first knowing location initially seems unlikely, finding both simultaneously looks like an impossible chicken and egg problem. In fact, by observing a sufficient number of points it is possible to solve for both structure and motion concurrently. By carefully combining measurements of points made over multiple frames it is even possible to retrieve accurate position and scene information with just a single camera."
MPH
Sun, 2019-04-07 17:45
GNSS
V-SLAM still needs GNSS to initialize with, so Toyota TRI-AD needs that tech, as well as OTA tech. Not sure if Mobileye 8 Connect has GNSS tech inside, though it has some type of wireless/OTA tech to expedite REM collection transfer. Intel has both GNSS and modems in its in-house portfolio, though. Not sure how much of 8 Connect is Intel inside.
Anonymous
Mon, 2019-04-08 08:48
I wonder how many domain
I wonder how many domain experts from the auto industry work in Israel for Intel.
The Intel Capital president said the semiconductor giant will continue competing with Nvidia in Israel as it makes strides in autonomous vehicle technology led by Mobileye.
"We are well ahead of everyone else, and though we may not have fully autonomous cars until at least 2021, we will get there first," Brooks said.
From 2019 Intel Capital Global Summit on April 1, 2019.
MPH
Wed, 2019-04-10 13:19
Ford joins Mobileye REM crowd-sourcing
"Bloomberg quotes Erez Dagan, executive VP of strategy for Intel's Mobileye unit, as having said at a conference in Detroit that Ford has signed on to join Mobileye's Road Experience Management, or REM, platform."
SAE World Congress reporters could pry out more from Intel this week.
VW and Ford could be ironing out another partnership beyond vans and trucks, or even Argo.ai.
Mobileye China presentation in recent EEAsia article shows Ford as well, so it could be a China arrangement.
Comments from Argo.ai or Civil Maps may not be proper here so do not expect any. Volkswagen and Ford are still in talks per Ford CEO Hackett yesterday.
The CES 2019 slide #7 could be a new VW project or a reference to the Israel project.
MPH
Wed, 2019-04-10 13:52
Ford CTO Ken Washington interviewing Mobileye
Today's Ford CTO Ken Washington interview of Mobileye at SAE is available as a video replay.
With the re-alignment of Ford today, with Jim Farley as president of new businesses, technology & strategy starting May 1, I personally do not expect much until then, for legal reasons.
Anonymous
Thu, 2019-04-11 14:29
Will EyeC engineering samples
Will the EyeC engineering samples rollout coincide with EyeQ6?
EyeQ6 2022 production estimate
Unveil / fab production / auto production (model year):
EyeQ4: unveiled Mar 2015; fab production Q2 2016; auto production Q1 2018
EyeQ5: unveiled Jan 2018; fab production Jan 2019; auto production Q1 2020
EyeQ6: unveiled est. Q4 2019; fab production est. Q3 2020; auto production est. Q1 2022
Anonymous
Thu, 2019-04-11 15:07
Does EyeQ5 go product H1 of
Does EyeQ5 go to production in H1 of next year? The previous post has a typo.
A bit of a stretch, so I doubt it.
H2 likely given SDK and on-die Atom.
MPH
Fri, 2019-04-12 09:23
domain experts
"Toyota's budget and technology resources are inexhaustible, and it resolutely refuses to adopt Mobileye's technology. Toyota has always claimed that it could achieve better results independently than those of Mobileye, and therefore has no need to tie itself to the Israeli company's closed system. Since Toyota has close ties with a number of Japanese tier-1 suppliers, it can mobilize the awesome development resources of the entire Japanese industry for its needs."
Anonymous
Thu, 2019-04-11 18:15
SAIPS / FORD / INTEL / Shie Mannor
Shie Mannor leads the new "Technion and Intel Joint Center for Artificial Intelligence," founded on Oct 9, 2018. But on November 7, Mannor joined Ford as well, to lead a new SAIPS division, funded with $12.5M, designing a decision-making system for Ford SDVs. Mannor had joined SAIPS in August 2018.
Ford bought SAIPS in 2016. Yes, 2016.
Welcome to academia.
Anonymous
Fri, 2019-04-12 06:58
Toyota, Cortica, Perceptive Automata. SLAMcore
Toyota, Cortica, Perceptive Automata, SLAMcore, Renesas...
A Toyota SDV Lexus demo in late 2019 and a 2020 pilot rollout are on the way.
Intel has a true competitor.
Anonymous
Fri, 2019-04-12 07:19
Intel needs to show
Intel needs to show a breakthrough in the SDV space.
MPH
Fri, 2019-04-12 14:36
saving lives and preventing accidents
Though Intel+Mobileye is not standing still, I cannot imagine what a breakthrough would be in localization, mapping and data collection, outside of RSS adoption.
MPH
Fri, 2019-04-12 17:27
ZF coPILOT debut today
2019-Apr-12
ZF coPILOT debut today, available 2021. Needs 2 winters of testing?
MPH
Sat, 2019-04-13 08:55
free addition of Intel APB added to AEB is
Not a breakthrough, but the free addition of Intel APB to AEB will be bundled with the tricam.
Using Autonomous Vehicle Technology to Make Roads Safer Today
Technologies Developed for Fully Autonomous Vehicles Can Improve the Advanced Driver Assistance Systems Already in Wide Use
Editorial
January 8, 2019
By Professor Amnon Shashua
Safety has always been our North Star. We view it as a moral imperative to pursue a future with autonomous vehicles (AV), but to not wait for it when we have the technology to help save more lives today.
We fundamentally also believe that everything we do must scale, and we constantly search for the best ways to match our technology to market needs. Founded on the idea that we could use computer vision technology to help save lives on the road, Mobileye became a pioneer in advanced driver assistance systems (ADAS). These capabilities are now scaling up to become the building blocks for a fully autonomous vehicle.
More: Intel at CES 2019 | Autonomous Driving at Intel | Mobileye News
The same is also true in reverse. New technologies developed specifically for AVs are enabling greater scale of advanced driving assistance systems and bringing a new level of safety to roads.
AV Technology Raises ADAS to the Next Level
There are five commonly accepted levels of vehicular autonomy. (Zero is no autonomy.) ADAS systems fall into levels 1 and 2, while levels 3 to 5 are degrees of autonomy ranging from autonomy in some circumstances to full autonomy with no human intervention.
While level 1 and 2 cars can be bought today, cars with varying degrees of autonomy are still in development. We know self-driving cars are technically possible. But the true challenge to get them out of the lab and onto the roads lies in answering more complex questions, like those around safety assurance and societal acceptance. To that end, we have been innovating around the more difficult enablers of AV technology such as mapping and safety.
This technology envelope that we’ve designed around the AV will take ADAS to the next level.
At Mobileye, we developed Road Experience Management™ (REM™) technology to crowdsource the maps needed for AVs – what we call the global Roadbook™. We are now harnessing those maps to improve the accuracy of ADAS features. An example of this is the work that Volkswagen and Mobileye are continuing in their efforts to materialize a L2+ proposition combining the front camera and Roadbook technologies, and leveraging the previously announced data harvesting asset. The ongoing development activity is targeting a broad operational envelope L2+ product addressing mass market deployment.
We also developed the technology-neutral Responsibility-Sensitive Safety (RSS) mathematical approach to safer AV decision-making, which is gaining traction as industry and governments alike have announced plans to adopt RSS for their AV programs and help us work toward development of an industry standard for AV safety. For example, China ITS Alliance – the standards body under the China Ministry of Transportation – has approved a proposal to use RSS as the framework for its forthcoming AV safety standard; Valeo adopted RSS for its AV program and agreed to collaborate on industry standards; and Baidu announced a successful open-source implementation of RSS in Project Apollo.
"Today, we are taking RSS technology back into our ADAS lab and proposing its use as a proactive augment to automatic emergency braking (AEB). We call this automatic preventative braking (APB). Using formulas to determine the moment when the vehicle enters a dangerous situation, APB would help the vehicle return to a safer position by applying small, barely noticeable preventative braking instead of sudden braking to prevent a collision.
If APB were installed in every vehicle using an affordable forward-facing camera, we believe this technology can eliminate a substantial proportion of front-to-rear crashes resulting from wrong driving decision-making. And if we add surround camera sensing and the map into the equation so that preventative braking can be applied in more situations, we can hope to eliminate nearly all collisions of this nature."
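For context on the "formulas" mentioned above, here is a hedged sketch of the kind of check APB could be built on, using the RSS safe longitudinal distance from the Shalev-Shwartz et al. RSS paper. The parameter values are illustrative assumptions, not Mobileye's calibration.

```python
# Sketch of an RSS-style safe longitudinal distance check (parameters are
# illustrative assumptions, not Mobileye's). Units: m, s, m/s, m/s^2.
def rss_safe_longitudinal_distance(v_rear, v_front, rho=0.5,
                                   a_accel_max=3.0, b_brake_min=4.0,
                                   b_brake_max=8.0):
    """Minimum gap so the rear car can always stop in time: during its
    response time rho it may accelerate at a_accel_max, then it brakes at
    only b_brake_min, while the front car may brake at up to b_brake_max."""
    v_rear_worst = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_rear_worst ** 2 / (2 * b_brake_min)
         - v_front ** 2 / (2 * b_brake_max))
    return max(d, 0.0)

# The APB idea as described: when the actual gap falls below this bound,
# apply gentle preventative braking instead of waiting for an AEB hard stop.
print(rss_safe_longitudinal_distance(v_rear=25.0, v_front=25.0))  # ~61.6 m
```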
brad
Sat, 2019-04-13 10:34
Press releases
Please just don't post press releases.
MPH
Sat, 2019-04-13 12:57
Intel currently testing APB
The amount of time needed to develop and test APB for L2 or L2+ ADAS and incorporate the technology could affect the EyeQ6 feature-set completion deadline.
MPH
Sat, 2019-04-13 13:15
Mobileye’s "FPGA solutions" from LinkedIn
"Autonomous Vehicle platform software team supporting Mobileye’s EyeQ and FPGA solutions"
San Jose group.
MPH
Sat, 2019-04-13 14:19
For redundancy, EyeC radar UWB 79 GHz
EyeC is most likely an ultra-wideband 79 GHz radar (see Yole):
"new 79 GHz ... used for Simultaneous Localization And Mapping (SLAM) providing accurate distance information to detected objects in real time. It would be helpful to complement geo-localization technologies for autonomous driving especially in urban canyon conditions where GNSS technologies show some accuracy issues. Another advantage of 79 GHz Radar is the mitigation of interference issues that could happen when the streets will be loaded with Radars embedded in the cars."
Mobileye hiring focus on "sensors" and "peripheral devices".
"integration of bleeding edge sensors and other peripheral devices into the hardware & software platform" JR0100377
Since EyeC is for redundancy, and thus a future breakthrough of totally separate systems, I cannot find any current breakthrough possible. RSS in ADAS L2 is all I see.
MPH
Sun, 2019-04-14 17:14
RSS fleet testing predates May 2018 per Intel press
RSS live fleet testing predates May 2018 per Intel press. Announced in October of 2017, and research begun in mid 2016 ("we have been working on RSS 2 1/2 years"), so hard to determine how much BMW saw early on given May 2017 tie-up. Vision Zero APB appears Dec 2018, and suggested in ADAS in Jan 2019. Not sure what a breakthrough could be.
MPH
Sun, 2019-04-14 18:23
typo JUL 2016 for BMW and RSS research
Typo correction: JUL 2016 for BMW.
BMW had to have advanced insight of RSS one would think.
RSS live fleet testing predates May 2018 per Intel press. Announced in October of 2017, and research begun in mid 2016 ("we have been working on RSS 2 1/2 years"), so hard to determine how much BMW saw early on given Jul 2016 tie-up. Vision Zero APB appears Dec 2018, and suggested in ADAS in Jan 2019. Not sure what a breakthrough could be.
MPH
Mon, 2019-04-15 06:36
Monet Technologies.
Toyota and SoftBank's launch of Monet Technologies, with Honda now on board, plus another pending investment in Uber, could propel Waymo to new partnerships.
A Google purchase of HERE instead of a HERE IPO?
MPH
Sun, 2019-04-14 05:41
Google Maps platform
Google Maps platform with REM is a thought, but since Intel chose AWS to host REM, that ends the discussion, not to mention that ADAS is not involved. As the map companies all jockey to be relevant, survival seems pinned to SDV. Why not tether existence to both safety and SDV? Not sure how, though.
MPH
Mon, 2019-04-15 10:46
Japan SIP-adus conference
Nothing of help in the Japan SIP-adus conference notes about high-definition digital road maps.
MPH
Tue, 2019-04-23 14:49
what new do clients want
Sony to delay automotive 7.42MP sensor production to 2020, Nikkei reports.
The IMX324 had Intel input, so will the delay affect an Intel SoC debut or production?
MPH
Tue, 2019-04-16 08:42
Airbus and ZF to develop end-to-end autonomous driving
The 2019 SDV arena is being dominated by mapping news, suggesting an industry in flux over the localization platform. Has sensor fusion become an issue? Open-source data has limits. Did ZF stumble across issues, or is the flying-drone world entering the field?
“ZF AD Environment”
Airbus provides its unique, highly precise Ground Control Points (GCPs), which serve as an independent data source to improve and validate accuracy. Based on an aerial and space-borne approach, the GCPs complement ZF semantic maps and will be integrated as foundation layers into the “ZF AD Environment” – an enhanced HD maps solution ZF will present soon – where all information needed for autonomous driving will be implemented in a cloud-based system.
MPH
Tue, 2019-04-16 14:09
How does Bosch respond to countryman ZF's plans
With ZF well along with “ZF AD Environment” and coPILOT, and surely Nvidia Mapstream, how does Bosch respond?
MPH
Tue, 2019-04-16 17:58
BMW CFO No Plans to Develop Compact Vehicle With Rival
At the Shanghai Auto Show, Nicolas Peter poured cold water on rumors that BMW was about to deepen its alliance with Daimler.
“We have no plans to develop a smaller car together with a German competitor,” Peter said.
MPH
Wed, 2019-04-17 14:07
Apple's Lidar leak ambitions almost sound
Apple's Lidar leak ambitions almost sound reactionary [seeking "revolutionary design" and form factor worries in 2019].
MPH
Tue, 2019-04-16 15:41
Jan 28 2019 PoC / Project proposal by Airbus
Radar imaging satellites like TerraSAR-X are able to acquire images with very high absolute geo-location accuracy, due to the availability of precise orbit information. By using multiple stereo images in a radargrammetric process, so-called Ground Control Points (GCPs) can be extracted. GCPs are precisely measured landmarks giving the exact position on the earth. These GCPs are derived from pole-like structures along the road, e.g. street lights, signs or traffic lights, since these objects have high backscatter in the radar image and are therefore easily identifiable in multiple images. By using a stack of multiple TerraSAR-X images, a dense point cloud of GCPs with an accuracy of less than 10 centimeters can be automatically extracted.
However, in order to make use of this high positional accuracy for the use case of autonomous driving, the link between landmarks like street lights identified from mobile mapping data and the coordinates of the respective GCPs needs to be established. The goal of this project is to find and implement an algorithm for the automatic matching of 3D point clouds of GCPs extracted by radar space geodesy against in-situ LIDAR mobile mapping data derived from a car acquisition. A precise matching process would enable the generation of an accurate data basis as an indispensable foundation for highly automated and autonomous driving.
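To make the matching problem concrete, here is a toy sketch (my assumption of an approach, not Airbus's algorithm): associate each LIDAR-derived landmark with its nearest GCP, then fit the rigid transform that best aligns the two clouds, i.e. a single ICP-style iteration. A production pipeline would iterate this with outlier rejection.

```python
# Toy sketch: one ICP-style iteration aligning LIDAR landmark centres to
# satellite-derived GCPs (nearest-neighbour match + Kabsch rigid fit).
# This is an illustrative assumption, not the project's actual algorithm.
import numpy as np

def align_once(lidar_pts, gcp_pts):
    """lidar_pts: (N, 3) landmark centres; gcp_pts: (M, 3) GCP positions."""
    # 1. Associate each LIDAR landmark with its nearest GCP.
    d2 = ((lidar_pts[:, None, :] - gcp_pts[None, :, :]) ** 2).sum(-1)
    matched = gcp_pts[d2.argmin(axis=1)]

    # 2. Kabsch: rotation R and translation t minimizing the squared
    #    distance between transformed LIDAR points and their matches.
    mu_l, mu_g = lidar_pts.mean(0), matched.mean(0)
    H = (lidar_pts - mu_l).T @ (matched - mu_g)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_l
    return lidar_pts @ R.T + t  # LIDAR cloud expressed in the GCP frame
```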
Nowhere near a breakthrough.
The Carmera and ZF/Airbus PoCs now seem to have been rolled out too early for press.
MPH
Wed, 2019-04-24 05:47
Cornell student discovery may surprise Lidar industry
https://arxiv.org/pdf/1812.07179.pdf
"Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving."
MPH
Sat, 2019-04-27 14:43
CEO at talk about
Intel ME CEO at March 2019 MIT talk
re RSS "so we basically remove the decisions outside of the statistical machine-learning domain"
Jerusalem
"Second, it has lots and lots of pedestrians. All the ultra-orthodox there and they don't respect the road, so they'll move in and out. So moving in narrow streets. So right now, we are 90% of the type of scenarios that we want to be in. We have another about four or five months to finish that. Then the car would be handling the most difficult scenarios a car could ever handle. So some of those scenarios would be even scary for a human driver. If you come and rent a car and drive in those areas that we drive, at some point, you'll simply stop the car and get out of the car. So it's really scary.
... Level 4 can include also, in case you see someone like a policeman waving a traffic sign or something like that, you stop, and then a teleoperator makes the decision. Some are building teleoperations where you have one teleoperator per 10 vehicles. Because the teleoperator is not supposed to drive the vehicle, or avoid accidents...Teleoperator, when the car stops, asking for guidance, then the teleoperator gets into action.
So in a level 4, you have this flexibility to handle these edge cases through a teleoperation. Level 5, I think will become much, much later. I think level 4 will take us at least a decade of maturing, practicing, going from one teleoperator for 10 vehicles to one to 100 vehicles at the end of the maturity cycle. And then you can let go of teleoperators at all for level 5. Yeah?
then the jump is going to be to level 4 or 5. And level 4 or 5 is going to start purely robo-taxi, not passenger cars, because of cost. With robo-taxis, you can build a business model of ride hailing. And we did all the math, all the business calculations, because we are going to do this in Tel Aviv as a commercial business. You can have a system that costs tens of thousands of dollars on top of the cost of the car and still make a very profitable business of ride hailing because there's no driver in the loop
AMNON SHASHUA: Sorry. When I mentioned--the purpose of the teleoperator is not to avoid accidents. With all the stuff that I talked about in terms of validating the perception, validating the driving policy, the autonomous car should never cause an accident. The teleoperator gets engaged when the car stops and is confused of what to do. There are multiple choices, and the human operator tells the car what choice to take, or tell the car, stay there and I'll send some help to take you off the road."
MPH
Thu, 2019-05-02 12:54
Hail FLA w/SB 932 passes, thunderstorm clouds brewing
Hail FLA w/SB 932 passes, thunderstorm clouds brewing.
Redundancy is required for an ASIL-D system. Up-to-date maps for FLA are a task.
Anonymous
Mon, 2019-05-13 10:10
Foretellix
Foretellix will integrate an open-source version of Intel's Responsibility-Sensitive Safety (RSS) into its software.
MPH
Wed, 2019-05-15 05:45
Bill Ford Jr. Heads To Israel Looking For New Tech
Among others, Ford will meet with representatives from Karamba Security Ltd., a cybersecurity company for autonomous vehicles, and automotive chipmaker Mobileye.
Late last year, Ford Motor invested $12.5 million in its Israeli subsidiary, SAIPS AC Ltd., to establish a new unit that will focus on designing a decision-making system for autonomous vehicles.
MPH
Tue, 2019-05-21 20:39
Ford tech event in Tel Aviv on June 12
A Ford spokesperson said that the company will hold an event in Tel Aviv on June 12, and that more information will be available around that time.