
Multimodal sensing-based
camera applications
Miguel Bordallo1, Jari Hannuksela1, Olli Silvén1 and Markku Vehviläinen2
1 University of Oulu, Finland
2 Nokia Research Center, Tampere, Finland
Jari Hannuksela, Olli Silvén
Machine Vision Group, Infotech Oulu
Department of Electrical and Information Engineering
University of Oulu, Finland
Outline
Introduction
• Modern mobile device with multiple sensors
Vision-based User Interfaces
Sensor data fusion system
Application case implementations
• Motion-based image browser
• Motion sensor assisted panorama imaging
Conclusions/Summary
Introduction
• More and more applications and
features are being crammed into
handhelds
• Causes usability complications given
the constraints of current mobile UIs
• Increased computing power not
harnessed for UIs
• Keypad- and pointer-based UIs and/or
touchscreens in current devices
– The user's hand obstructs the view
– Require two-handed operation
Modern mobile device with multiple sensors
• The phone includes a touch screen, GPS, accelerometers, a light
sensor and a proximity sensor
• Two cameras: a low-resolution one for video calls and a
high-resolution one for photography and video capture
• Newer phones will include magnetometers and gyroscopes
Motivation for vision-based user interfaces
Allow recognition of the context
- Detect the user's actions
- Recognize the environment
Provide 3D information
Provide interactivity
- Real-time feedback
- Single-handed use
Limitations of vision-based UIs
Fast movements + low light = difficult conditions
The solution: sensor data fusion
Fusing the data obtained by several sensors
• The ambient light sensor determines the
illumination conditions
• Video analysis detects ego-motion
and analyzes the context
• Accelerometers provide complementary
motion data
Video analysis
- Every frame divided into regions
- Selection of feature blocks
- Estimation of block displacements
- Analysis of uncertainty
- Result: a 4-parameter motion model (see the sketch below)
- X and Y translation, zoom (Z) and rotation (r)
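A minimal sketch of this block-based analysis, assuming grayscale numpy frames; the function names, block size, search range and texture threshold are illustrative assumptions, not the actual implementation. Textured blocks are selected, their displacements are found with a small SAD search, and the 4-parameter similarity model (X, Y, zoom, rotation) is fitted by least squares.

    import numpy as np

    def select_feature_blocks(frame, block=16, min_var=100.0):
        """Return top-left corners of blocks with enough texture (variance)."""
        h, w = frame.shape
        return [(x, y)
                for y in range(0, h - block, block)
                for x in range(0, w - block, block)
                if frame[y:y+block, x:x+block].var() > min_var]

    def block_displacement(prev, curr, x, y, block=16, search=8):
        """Estimate one block's displacement with an exhaustive SAD search."""
        ref = prev[y:y+block, x:x+block].astype(np.float32)
        best, best_dxy = np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if yy < 0 or xx < 0 or yy + block > curr.shape[0] or xx + block > curr.shape[1]:
                    continue
                sad = np.abs(curr[yy:yy+block, xx:xx+block].astype(np.float32) - ref).sum()
                if sad < best:
                    best, best_dxy = sad, (dx, dy)
        return best_dxy

    def fit_similarity(points, displacements):
        """Least-squares fit of (tx, ty, zoom, rotation) to block displacements.
        Model: p' = s*R(r)*p + t, linearised as
        dx = a*x - b*y + tx, dy = b*x + a*y + ty with a = s*cos(r) - 1, b = s*sin(r)."""
        A, rhs = [], []
        for (x, y), (dx, dy) in zip(points, displacements):
            A.append([x, -y, 1, 0]); rhs.append(dx)
            A.append([y,  x, 0, 1]); rhs.append(dy)
        a, b, tx, ty = np.linalg.lstsq(np.asarray(A, float), np.asarray(rhs, float), rcond=None)[0]
        return tx, ty, np.hypot(a + 1, b), np.arctan2(b, a + 1)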
Sensor data fusion: Accelerometers
Sensor data fusion
Model the device movement as follows:
Define a state vector: position, speed, acceleration
Define a measurement model
Apply Kalman filtering, adding the accelerometer values:
state prediction + state correction (sketched below)
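A one-axis sketch of the described filter, assuming a constant-acceleration model in which the video analysis observes position (integrated displacement) and the accelerometer observes acceleration; the matrices and noise values are illustrative tuning choices rather than the actual implementation.

    import numpy as np

    dt = 1.0 / 30.0                                # frame interval
    F = np.array([[1.0, dt, 0.5 * dt**2],          # constant-acceleration state transition
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    H = np.array([[1.0, 0.0, 0.0],                 # video analysis observes position
                  [0.0, 0.0, 1.0]])                # accelerometer observes acceleration
    Q = np.eye(3) * 1e-3                           # process noise (tuning parameter)
    R = np.diag([0.5, 0.2])                        # measurement noise: camera, accelerometer

    def kalman_step(x, P, z):
        """One prediction + correction step; z = [camera_position, accel_acceleration]."""
        # State prediction
        x = F @ x
        P = F @ P @ F.T + Q
        # State correction
        y = z - H @ x                              # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + K @ y
        P = (np.eye(3) - K @ H) @ P
        return x, P

    # Example: fuse one frame's camera-based position with the accelerometer reading
    x, P = np.zeros(3), np.eye(3)
    x, P = kalman_step(x, P, np.array([0.8, 0.05]))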
Application cases
• Sensor data fusion method applied in two applications
– Implemented on a Nokia N900 mobile phone
• Motion-based image browser
– Allows browsing large images and maps with one-handed operation
– Works under different lighting conditions
• Sensor-assisted panorama imaging
– Stitches panorama images in real time from video frames
– Increased robustness against fast movements and textureless
frames
Motion-based image browser
Uses the fusion model (accelerometers + video
analysis) to generate commands
• Scroll up/down/left/right
• Zoom in/out
The light sensor decides:
• whether the camera should be turned on
• weighting factors and uncertainties
• Three modes are defined (sketched below):
• Good image quality (video analysis + accelerometer correction)
• Bad image (accelerometers have increased contribution)
• No image (only accelerometers are used)
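A sketch of how the mode selection and command mapping could look, assuming simple lux thresholds and a linear blend of the camera and accelerometer motion estimates; the names, thresholds and weights are assumptions, not the N900 implementation.

    def select_mode(lux):
        """The light sensor decides how much the camera estimate can be trusted."""
        if lux > 50:
            return "good_image", 0.8      # video analysis dominates, accelerometers correct
        if lux > 5:
            return "bad_image", 0.3       # accelerometers get an increased contribution
        return "no_image", 0.0            # camera off, only accelerometers are used

    def browse_command(cam_motion, acc_motion, lux, dead_zone=2.0):
        """Blend the two (dx, dy, dz) estimates and map them to a browsing command."""
        mode, w = select_mode(lux)
        dx, dy, dz = (w * c + (1 - w) * a for c, a in zip(cam_motion, acc_motion))
        if abs(dz) > dead_zone:
            return "zoom_in" if dz > 0 else "zoom_out"
        if max(abs(dx), abs(dy)) > dead_zone:
            if abs(dx) > abs(dy):
                return "scroll_right" if dx > 0 else "scroll_left"
            return "scroll_down" if dy > 0 else "scroll_up"
        return "idle"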
Motion-based image browser II
Sensor-assisted panorama imaging
• Based on video analysis
• Guides the user with instructions
• Panoramas of more than 360 degrees
• Real-time registration
• Real-time frame evaluation and selection
• Real-time frame correction
• Increased robustness via sensor-data integration
Panorama imaging: Sensor uses
Registration
• Uses the sensor fusion model to compute camera
motion (see the sketch below)
• Increased robustness against fast movements
and frames with little or smooth texture
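A sketch of the fallback idea behind this: when the visual estimate is missing or unreliable (fast motion, little texture), the sensor-predicted motion is used or blended in. The names and confidence threshold are illustrative assumptions.

    def register_frame(visual_motion, visual_confidence, sensor_motion, min_conf=0.5):
        """Return the motion used to place the new frame on the panorama canvas."""
        if visual_motion is None or visual_confidence < min_conf:
            return sensor_motion          # fast movement or smooth texture: trust the sensors
        # Otherwise blend, weighting the visual estimate by its confidence
        w = visual_confidence
        return tuple(w * v + (1 - w) * s for v, s in zip(visual_motion, sensor_motion))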
Panorama: Sensor uses II
Selection
• Uses accelerometer data to detect blur
• Detects unwanted shake/tilt
• Integrated into the scoring system (sketched below)
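A sketch of how the accelerometer data could enter the frame score, assuming a sharpness measure is already available; the weights and tilt limit are illustrative assumptions.

    def frame_score(sharpness, accel_during_exposure, tilt_deg,
                    blur_weight=0.5, tilt_limit=10.0):
        """Higher score = better panorama candidate; 0 rejects the frame."""
        if abs(tilt_deg) > tilt_limit:
            return 0.0                                    # unwanted tilt: reject the frame
        accel_magnitude = sum(a * a for a in accel_during_exposure) ** 0.5
        blur_penalty = blur_weight * accel_magnitude      # strong motion during exposure -> blur
        return max(0.0, sharpness - blur_penalty)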
Summary
• Vision-based interfaces offer high interactivity with
one-handed operation
• They still have several limitations
• Sensor fusion improves motion estimation, adding
robustness against fast movements and dark
conditions
• The framework can be included in several
applications (e.g. as a part of Motion Estimation API)
Conclusions
• We have presented a sensor fusion framework that
fuses video analysis with motion sensors
(accelerometers + magnetometers + gyroscopes)
• We have presented two application cases that make
use of sensor data fusion and integration
• The applications presented are by no means the only
ways to apply vision or multiple sensors; further
research may reveal new and interesting possibilities
Thank you!
Any questions?