

CES 2020
Engines Powering L2+ to L4
Mobileye in Numbers

EyeQ shipped
• Over 54M EyeQs shipped to date
• 46% CAGR in EyeQ shipping since 2014

In 2019:
• 33 design wins, 28M units over life
• 4 high-end L2+ wins with 4 major EU and Chinese OEMs
• 47 running programs globally across 26 OEMs
• 16 product launches, including the industry-first 100° camera with Honda and the VW high-volume launch (Golf, Passat)
Mobileye Solution Portfolio
Covering the Entire Value Chain
L1-L2 ADAS (today) – Driver assistance
Front vision sensing. Front camera SoC & SW: AEB, LKA, ACC, and more.

L2+/L2++ (today) – Conditional autonomy
Scalable proposition for front vision sensing. May also include: REM HD map, driver monitoring, surround vision, redundancy, "Vision Zero" (RSS for ADAS).

Data and REM® Mapping
Crowdsourcing data from ADAS for HD mapping for AV and ADAS. Providing the smart-city ecosystem with safety/flow insights and foresights.

L4/L5 Mobility-as-a-Service (2022) – Full autonomy
Full-service provider owning the entire MaaS stack. SDS to MaaS operators.

L3/4/5 passenger cars (2025) – Consumer autonomy
SDS as a product: SDS to OEMs, chauffeur mode. Scalable robotaxi SDS design for a better position in the privately owned car segment.
The ADAS Segment
Visual Perception Evolution
L2+ - The Next Leap in ADAS
The opportunity
L2+ global volume expectation (M): growing from 3.6 to 13, a 63% CAGR (source: Wolfe Research, 2019).

L2+ common attributes
• Multi-camera sensing: from multi-camera front sensing to full surround
• HD maps
• L2+ functionalities range from everywhere, all-speed lane centring to everywhere, all-speed conditional hands-free driving

L2+ – significant added value in comfort, not only safety
• Higher customer adoption and willingness to pay
• Significantly higher ASP: 3-15x more than legacy L1-L2
• System complexity leads to a high technological barrier
Mobileye Scalable Solution for L2+
Camera-based 360° sensing is the enabler for the next leap in ADAS
360° camera sensor suite
• Affordability allows mass adoption in ADAS
• Full 3D environmental model
• Algorithmic redundancy

Lean compute platform
• Entire system running on 2x EyeQ® 5H
• 46 TOPS, 54W
• 3rd-party programmability

REM™ HD Maps
• First in the industry to offer "HD Maps Everywhere"
• High refresh rate

Driving Policy layer
• RSS-based, with formal safety guarantees
• Prevention-driven system for ADAS
L2+ Business Status
More than 70% of the L2+ systems running today are powered by Mobileye’s technology
For example: Nissan ProPilot™ 2.0, VW Travel Assist™, Cadillac Super Cruise™, BMW KaFAS 4.
Additional 12 active programs with L2+ variants and 13 open RFQs.
Next Generation ADAS
Unlocking "Vision Zero" with RSS for Human Drivers

ADAS today
• AEB, LKA | emergency-driven
• ESC/ESP | prevention-driven

ADAS future potential
• AEB, LKA, ESC | all in one
• Application of brakes, longitudinally & laterally
• Prevention-driven system with formal guarantees – Vision Zero
• Scalable surround CV system
• RSS jerk-bounded braking profile, longitudinal & lateral (see the sketch below)
• Standard fitment / rating
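For illustration only, a minimal sketch of what a jerk-bounded braking profile means kinematically: deceleration ramps in at a bounded jerk instead of being applied as a step. The function name and parameter values below are hypothetical, not Mobileye's.

```python
# Illustrative jerk-bounded braking profile (hypothetical parameters, not Mobileye's).
# Deceleration builds up at a bounded jerk, then holds, instead of a step brake input.

def braking_profile(v0=20.0, a_max=6.0, jerk=2.5, dt=0.01):
    """Simulate a jerk-limited stop from v0 [m/s]; returns (stop_time_s, stop_distance_m)."""
    t, v, a, dist = 0.0, v0, 0.0, 0.0
    while v > 0.0:
        a = min(a_max, a + jerk * dt)   # deceleration grows by at most `jerk` m/s^3
        v = max(0.0, v - a * dt)
        dist += v * dt
        t += dt
    return t, dist

stop_time, stop_dist = braking_profile()
print(f"stopped after {stop_time:.2f} s over {stop_dist:.1f} m")
```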
Under the Hood of Mobileye’s
Computer Vision
The Motivation Behind Surround CV

The goal
A full-stack, camera-only AV whose MTBF for a sensing mistake leading to an RSS violation is about 10^4 hours of driving (a failure rate of roughly 10^-4 per hour).

Why
• ~10^-4: human probability of injury per hour of driving
• ~10^-6: human probability of fatality per hour of driving
• ~10^-7 per hour (an MTBF of 10^7 hours, with safety margins): the desired failure rate of the overall sensing system, i.e., driving 10M hours without a safety-critical error

To meet the 10^7-hour MTBF, we break the sensing down into two independent sub-systems:

MTBF = 10^7 ≈ MTBF_1 (10^3.5) × MTBF_2 (10^3.5)

A critical per-sub-system MTBF of roughly 10^3.5 to 10^4 ≈ 10,000 hours (with safety margins) is plausible (the arithmetic is sketched below).

The challenge
An MTBF of 10^4 hours still requires an extremely powerful surround vision system: it is equivalent to driving 2 hours a day for 10 years without a safety-critical sensing mistake.
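A minimal sketch of the redundancy arithmetic above, assuming the two sensing sub-systems fail independently and at a constant rate (the variable names are illustrative):

```python
# Sketch of the redundancy arithmetic quoted above, assuming the two sensing
# sub-systems (e.g., camera-only and radar/lidar-only) fail independently.
def failure_rate_per_hour(mtbf_hours):
    # Constant-failure-rate assumption: rate is simply 1 / MTBF.
    return 1.0 / mtbf_hours

mtbf_subsystem_1 = 10 ** 3.5   # ~3,162 hours for one sub-system
mtbf_subsystem_2 = 10 ** 3.5   # ~3,162 hours for the other sub-system

# Both must fail in the same hour for the combined sensing system to fail:
combined_rate = failure_rate_per_hour(mtbf_subsystem_1) * failure_rate_per_hour(mtbf_subsystem_2)
combined_mtbf_hours = 1.0 / combined_rate

print(f"combined MTBF ≈ {combined_mtbf_hours:,.0f} hours")   # ~10,000,000 hours (10^7)
```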
Mobileye’s Sensing has Three Demanding Customers
• Sensing state for Driving Policy, under the strict rule of independence and redundancy
• Smart agent for harvesting: localization and dynamic information for the REM-based map
• ADAS products working everywhere, in all conditions, on millions of vehicles
Comprehensive CV
Environmental Model
Four general categories:

Road Semantics
Road-side directives (TFL/TSR), on-road directives (text, arrows, stop-lines, crosswalks) and their Driving Path (DP) association.

Road Boundaries
Any delimiter / 3D structure / semantics of the drivable area, both laterally (FS) and longitudinally (general objects/debris).

Road Users
360° detection of any movable road user, and the actionable semantic cues these users convey (light indicators, gestures).

Road Geometry
All driving paths, whether explicitly, partially or implicitly indicated, their surface profile and surface type.
Redundancy in
the CV Subsystem
To satisfy an MTBF of ~10^4 hours of driving for the CV sub-system, multiple independent CV engines overlap in their coverage of the four categories.

This creates internal redundancy layers for both detection and measurements: appearance-based and geometry-based engines, working in both 2D and 3D, yielding robust sensing.
Object Detection
Generated and solidified using 6 different engines: scene segmentation (NSS), VIDAR, 3DVD, full-image detection, wheels, top-view FS.

2D-to-3D Measurements
Generated and solidified using 4 different engines: VIDAR, visual road model, Range Net, map world model (REM).
Full Image Detection
Two dedicated 360° stitching engines for completeness and coherency of the unified object map:
• Inter-camera tracking – vehicle signature: a very close (part-of) vehicle in the field of view, its face & limits (e.g., handing off between the front-right and rear-right cameras)
• Object signature network
Range Net
Metric physical range estimation that dramatically improves measurement quality using novel methods.
[Chart: range vs. frame, comparing the Range Net output with a traditional classifier output]
Pixel-level Scene Segmentation
• Redundant to the object-dedicated networks
• Catches extremely small visible fragments of road users
• Also used for detecting "general objects"
Surround Scene Segmentation with Instances
[Images: segmented views from the front-left, front-right, rear-left and rear-right cameras]
Road Users – Open Door
Uniquely classified, as it is extremely common, critical, and has no ground intersection.

Road Users – VRU
Baby strollers and wheelchairs are detected through a dedicated engine on top of the pedestrian detection system.
Parallax Net
The Parallax Net engine provides accurate structure understanding by assessing residual elevation (flow) relative to the locally governing road surface (homography); a rough geometric sketch follows.
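A rough sketch of the underlying geometric idea, not Mobileye's implementation: warp the previous frame by the road-plane homography and measure the residual flow, which stays near zero for on-plane pixels and grows for structure that leaves the road surface. The function `residual_parallax`, the placeholder frames and the identity homography are assumptions; the OpenCV calls are standard.

```python
import cv2
import numpy as np

# Illustrative only: residual flow w.r.t. the road-plane homography highlights
# anything that rises above the locally governing road surface.
def residual_parallax(prev_gray, curr_gray, road_homography):
    """prev/curr: uint8 grayscale frames; road_homography: 3x3 plane-induced homography."""
    h, w = curr_gray.shape
    # Warp the previous frame as if the whole scene lay on the road plane.
    prev_warped = cv2.warpPerspective(prev_gray, road_homography, (w, h))
    # Dense flow between the warped previous frame and the current frame:
    flow = cv2.calcOpticalFlowFarneback(prev_warped, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # On-plane pixels have ~zero residual flow; elevated structure does not.
    return np.linalg.norm(flow, axis=2)

# Usage with placeholder data (real inputs: camera frames plus an estimated homography):
prev = np.zeros((480, 640), np.uint8)
curr = np.zeros((480, 640), np.uint8)
H = np.eye(3)
elevation_cue = residual_parallax(prev, curr, H)
```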
VIDAR
“Visual Lidar”: DNN-based Multi-view Stereo
Redundant to the appearance and measurement engines; also handles "rear-protruding" objects, which hover above the object's ground plane.

VIDAR input: front-left, front-right, main, parking-left, parking-right, rear-left and rear-right cameras.
VIDAR output: DNN-based multi-view stereo (dense depth).
Road Users from VIDAR
Leveraging the lidar processing module for stereo camera sensing ("VIDAR"):
dense depth image from VIDAR → high-res pseudo-lidar → upright obstacle "stick" extraction → object detection → obstacle classification (a minimal back-projection sketch follows).
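A minimal sketch of the pseudo-lidar step, assuming a pinhole camera model with known intrinsics: every pixel of the dense depth image is back-projected into a 3D point cloud that lidar-style processing can consume. The function name, intrinsics and random depth map below are placeholders, not Mobileye's values.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a dense depth image [m] into an Nx3 point cloud (camera frame)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # keep pixels with valid depth

# Placeholder intrinsics and depth map; real inputs would come from the VIDAR network.
depth = np.random.uniform(1.0, 80.0, size=(480, 640))
cloud = depth_to_pseudo_lidar(depth, fx=1000.0, fy=1000.0, cx=320.0, cy=240.0)
print(cloud.shape)   # (N, 3) pseudo-lidar points
```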
Obstacle classification
E.g., how to differentiate a double-parked car from a traffic jam, using cues from the environment (a toy illustration follows the list):
• Behavior of other road users
• What's in front of the object
• Object location
• Open door
• Emergency lights
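Purely to illustrate how such cues might combine, here is a hypothetical rule-of-thumb; the function name, cue fields and thresholds are invented for this sketch and are not Mobileye's classifier.

```python
# Hypothetical illustration of combining scene cues; not Mobileye's actual logic.
def is_double_parked(vehicle):
    """vehicle: dict of boolean/contextual cues extracted by the perception stack."""
    strong_cues = vehicle.get("open_door") or vehicle.get("hazard_lights")
    # A stopped car with free space ahead of it, while adjacent traffic keeps moving,
    # looks like a double-parked car rather than the tail of a traffic jam.
    context_cues = (vehicle.get("space_ahead_m", 0.0) > 10.0
                    and vehicle.get("adjacent_traffic_moving", False)
                    and vehicle.get("near_curb", False))
    return bool(strong_cues or context_cues)

print(is_double_parked({"hazard_lights": True}))                                    # True
print(is_double_parked({"space_ahead_m": 2.0, "adjacent_traffic_moving": False}))   # False
```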
Road Users Semantics
• Head/pose orientation
• Pedestrian posture/gesture
• Vehicle light indicators
• Emergency vehicle / personnel classification
• Emergency vehicle light indicators
• Pedestrian understanding
Road Users Semantics – Pedestrian Gesture Understanding
• Come closer
• You can pass
• Stop!
• On the phone
The full unedited 25-minute ride is available on Mobileye's YouTube channel:
https://www.youtube.com/watch?v=hCWL0XF_f8Y&t=15s
REM Mapping and Data
REM Process
1. Harvesting – collecting road and landmark data through EyeQ-equipped vehicles (also available via retrofit solutions)
2. Anonymizing and encrypting REM data
3. Aggregation – generating the HD crowdsourced RoadBook for autonomous driving
4. Map tiles distributed to the car
5. Localizing – localizing the car within 10 cm accuracy in the RoadBook (an illustrative alignment sketch follows)
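As a rough illustration of the localization step only: aligning landmarks observed by the camera to landmarks in the map tile is, at its core, a rigid alignment problem. The sketch below is a textbook 2D Kabsch/Procrustes fit with known correspondences and invented coordinates; it is not the RoadBook algorithm.

```python
import numpy as np

# Illustrative 2D rigid alignment of observed landmarks to map landmarks
# (textbook Kabsch/Procrustes fit; assumes correspondences are already known).
def localize(observed_xy, map_xy):
    """Return (R, t) mapping vehicle-frame landmark points onto map coordinates."""
    obs_c = observed_xy - observed_xy.mean(axis=0)
    map_c = map_xy - map_xy.mean(axis=0)
    u, _, vt = np.linalg.svd(obs_c.T @ map_c)
    r = (u @ vt).T
    if np.linalg.det(r) < 0:                      # keep a proper rotation
        vt[-1] *= -1
        r = (u @ vt).T
    t = map_xy.mean(axis=0) - r @ observed_xy.mean(axis=0)
    return r, t

# Placeholder data: the vehicle sees three landmarks that exist in the map tile.
map_pts = np.array([[10.0, 5.0], [12.0, 9.0], [15.0, 4.0]])
theta = np.deg2rad(3.0)
true_r = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
obs_pts = (map_pts - np.array([11.0, 6.0])) @ true_r.T   # what the camera "sees"
r_est, t_est = localize(obs_pts, map_pts)
print(np.allclose(r_est @ obs_pts.T + t_est[:, None], map_pts.T, atol=1e-6))  # True
```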
REM Volumes
Harvesting
• Harvesting agreements with 6 major car makers
• Over 1M harvesting vehicles in the EU by 2020 and over 1M in the US by 2021
• Collecting 6 million km per day from serial-production vehicles such as the Volkswagen Golf and Passat, BMW 5 Series and 3 Series, Nissan Skyline, and more

Localization
• 3 additional major OEMs
• Programs for using RoadBook™ for L2+: 2 OEMs each at three milestones on the chart timeline

[Chart: harvesting vehicle volumes, 2018-2022, growing from ~1M to ~14M vehicles]
REM-data Aggregation
RSD Coverage Global Snapshot
REM Milestones
• Mapping all of Europe by Q1 2020
• Mapping most of the US by end of 2020
REM for Autonomous Driving
Already operational and proving to be a true segment game changer.

For roads above 45 mph
• Maps created in a fully automated process TODAY
• Contain all the static, dynamic, and semantic layers needed for a fully autonomous drive

For roads below 45 mph
• Semi-automated process
• Full automation in 2021

[Image: REM map of the Las Vegas freeway, Interstate 15]
REM in China
• Data harvesting agreements in China complying with regulatory constraints
• Strategic collaboration with SAIC Motor for REM data harvesting, accelerating AV development for passenger vehicles in China
• Harvesting data in China as part of a collaboration with NIO on L4 – synergy between robotaxi and consumer AV
• JV agreement with Unigroup to enable the collection, processing, and monetization of data in China
The Smart Cities
Opportunity
Mobileye Data Services
Product Portfolio
Infrastructure Asset Inventory
• Automated, AI-powered road asset surveying
• Efficient asset management, precise GIS data and change detection
• Strategic collaboration with Ordnance Survey (UK)

Pavement Condition Assessment
• Automated surveying & assessment of road conditions
• Efficient road maintenance with precise GIS data of surface distress

Dynamic Mobility Mapping
• Near real-time & historical data on movement in the city; dynamic mobility GIS datasets
• Evidence-based urban planning improvements
Infrastructure Asset Inventory
Pavement Condition Assessment
• 5-level score: 0 – excellent condition, requires no repair
• [Video frames: the cracks-and-potholes harvester in action; road condition score – poor (5)]
RSS Driving Policy and Driving Experience
The Driving Policy Challenge
• Do we allow an accident due to a "lapse of judgement" of the Driving Policy?
• Should the occurrence of a "lapse of judgement" be measured statistically?
• It all boils down to a formal definition of "what it means to be careful"

Safety is a technological layer living outside of machine learning. It is like "ethics" in AI – a set of rules.

There is a need for "regulatory science and innovation"; technological innovation alone is not sufficient.
What is RSS?
A formal model for safety that provides mathematical guarantees that the AV will never cause an accident.
http://arxiv.org/abs/1708.06374
The Method
01 Defining reasonable boundaries on the behavior of other road users
02 Within the boundaries specified by RSS, one must always assume the worst-case behavior of other agents
03 The boundaries capture the common sense of reasonable assumptions that human drivers make
04 Any action beyond the defined boundaries is not reasonable to assume
For Example
Ego car A is following car B on a single-lane straight road.

The Goal: an efficient policy for A that guarantees not to hit B in the worst case.
The Implementation: a safe distance for A that avoids hitting B in the worst case, under a reasonable assumption on B's maximum braking (a sketch of the safe-distance formula follows).
The Policy: define a Dangerous Situation (a time is dangerous if the distance is non-safe) and a Proper Response (as long as the time is dangerous, brake until stopped).
The Guarantees: proof by induction; more complex situations (n agents) need a proof of "no conflicts" (efficiently verifiable).
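A minimal sketch of the longitudinal safe-distance formula for this following scenario, as defined in the RSS paper linked above; the function name and the parameter values are illustrative, not prescribed ones.

```python
# Longitudinal safe distance between ego (rear, A) and the lead car (B), following
# the definition in the RSS paper linked above. Parameter values are illustrative.
def rss_safe_longitudinal_distance(v_rear, v_front, rho=1.0,
                                   a_accel_max=3.0, a_brake_min=4.0, a_brake_max=8.0):
    """
    v_rear, v_front : current speeds of A and B [m/s]
    rho             : A's response time [s]
    a_accel_max     : worst-case acceleration of A during the response time [m/s^2]
    a_brake_min     : braking A is guaranteed to apply afterwards [m/s^2]
    a_brake_max     : worst-case (hardest) braking assumed for B [m/s^2]
    """
    v_rear_after = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_rear_after ** 2 / (2 * a_brake_min)
         - v_front ** 2 / (2 * a_brake_max))
    return max(d, 0.0)

# A and B both traveling at 30 m/s (~108 km/h):
print(f"{rss_safe_longitudinal_distance(30.0, 30.0):.1f} m")
```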
More Complex Situations
RSS sets the boundaries of reasonable assumptions for all driving scenarios. What is it reasonable to assume about B in the scenarios below?

Multiple geometry: if B can brake at B's minimum braking without violating right-of-way, B will brake; otherwise A must stop.
Lateral maneuvers: if B can brake laterally at B's minimum lateral braking, B will brake laterally; otherwise A must brake laterally.
Occlusions: the assumed maximum velocity of B dictates the maximum speed for A (an illustrative kinematic sketch follows).
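A toy kinematic reading of the occlusion rule, capturing only part of it (it ignores the unseen agent's own motion model and uses hypothetical parameters; it is not the formal RSS derivation): the ego car should drive no faster than a speed from which it can stop within the distance where an occluded road user could first appear.

```python
import math

# Toy kinematic illustration of the occlusion rule above (hypothetical parameters,
# not the formal RSS derivation): drive no faster than a speed from which the ego
# car can stop within the distance where an occluded road user could first appear.
def max_speed_for_occlusion(d_visible, rho=0.5, a_brake=4.0):
    """Largest v satisfying v*rho + v^2 / (2*a_brake) <= d_visible."""
    return a_brake * (-rho + math.sqrt(rho ** 2 + 2.0 * d_visible / a_brake))

# An occluding parked truck limits the clear view to 15 m ahead:
v = max_speed_for_occlusion(15.0)
print(f"max speed ≈ {v:.1f} m/s ({v * 3.6:.0f} km/h)")
```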
In Summary
• Assuming cooperative behavior on the roadway is the key to drivability and "human-like" driving
• A formal definition of the "reasonable assumptions" provides mathematical guarantees for safety
• The parameters dictate the cautiousness/utility tradeoff and allow a transparent and concise regulatory framework

RSS adheres to 5 principles:
01 Soundness – full compliance with the common sense of human driving
02 Completeness – covering all driving scenarios by always assuming the worst case under the reasonable assumptions
03 Usefulness – a policy for efficient and not overly conservative driving
04 Transparency – the model should be a white box
05 Efficiently Verifiable – proof of guarantee by induction, ensuring no butterfly effect
Industry Acceptance
The RSS is gaining global acceptance as an Automated Vehicle Safety Standard
Previously announced adoptions of RSS:

Safety First for Automated Driving (SaFAD)
Together with 11 industry leaders, we established an industry-wide definition of safety with the SaFAD white paper, based on RSS definitions. Companies involved: BMW, Daimler, Audi, VW, FCA, Aptiv, Continental, HERE, Baidu, Infineon.

IEEE to define a formal model for AV safety, with Intel-Mobileye leading the workgroup
The new standard will establish a formal mathematical model for safety inspired by RSS principles.
Industry Acceptance
The RSS is gaining global acceptance as an Automated Vehicle Safety Standard
The China ITS Industry Alliance (C-ITS) is to formally approve an RSS-based standard.
The standard, "Technical Requirement of Safety Assurance of AV Decision Making", has been released to the public and will take effect in March 2020.
• The world's first standard based on RSS
• Proof point that RSS can handle one of the world's most challenging driving environments: China
• The world's first proposed parameter set that defines the balance between safety and usefulness
The Path to Becoming an End-to-End
Mobility-as-a-Service Provider
MaaS Business Status
Mobileye is forging driverless MaaS as a near-term revenue-generating channel.

Tel Aviv
> The JV to bring robotaxi MaaS to Tel Aviv is officially signed
> Deploying and testing in Tel Aviv during this year
> Establishing the regulatory framework in Israel

China – NIO
> This year Mobileye will start using the NIO ES8 for AV testing and validation
> In 2022, launching a next-gen platform with Mobileye's L4 tech offered to consumers in China
> A robotaxi variant will be launched exclusively for our robotaxi fleets

Paris – RATP
> RATP and Mobileye partnered with the City of Paris to deploy a driverless mobility solution
> The first EU city where testing with Mobileye's AV will start this year

South Korea – Daegu
> Daegu City and Mobileye announce today a partnership to start testing robotaxi MaaS in South Korea this year
> Deployment during 2022
Our Self-Driving-System
HW Generations
EPM 52
> In deployment
> Up to 2x EQ5H
> Up to 7x 8MP + 4x 1.3MP cameras
> Up to 48 TOPS

EPM 59
> Deployment in Q2 2020
> Up to 6x EQ5H, with an additional 2-3 for FOP
> Up to 216 TOPS

EPM 6
> Deployment in 2023
> Single EQ6H to support E2E functionality, with an additional EQ6H for FOP
> E2E support in all aspects – fusion, policy, control
> Up to 220 TOPS
Main Takeaways
01 L2+ is a growing new category of ADAS where surround CV unlocks considerable value at volume-production cost.
02 Realizing (safe) L4 and unlocking the full potential of L2+ require surround CV of standalone (end-to-end) quality.
03 L2+ requires HD-maps-everywhere across a growing set of use cases (types of roads); L4 requires HD maps; consumer AV requires HD-maps-everywhere. Automation at scale is enabled by crowd-sourced data (REM).
04 Crowd-sourced data from ADAS-enabled vehicles (REM) unlocks great value for smart cities.
05 To unlock the value of automation, there is a need for "regulatory science" (RSS).
06 The road to consumer AV goes through robotaxi MaaS.
Thank You!