Object Detection, Tracking and Fusion using both Infrastructure and Vehicle-based Sensors

Status: In Progress

Lead Researcher(s)

Tinghan Wang

Henry Liu
Professor of Civil and Environmental Engineering, College of Engineering, and Research Professor, Engineering Systems, University of Michigan Transportation Research Institute

Huei Peng
Roger L. McCarthy Professor of Mechanical Engineering, College of Engineering, and Director of Mcity

Project Team

Tinghan Wang

Project Abstract

One challenge for self-driving cars is obstacles or road users that cannot be perceived because they are occluded by, for example, buildings, trees, or mountains. An autonomous vehicle relying only on its own sensors may become aware of such a hazard too late and end up in a high-risk situation. V2V communication is a potential solution, but many road obstacles (e.g., an animal or a box) are not connected. A more robust solution is roadside sensors (e.g., cameras and Lidar) deployed where the line of sight is obstructed, such as at intersections and sharp curves. Using advanced communication techniques such as 5G, infrastructure sensors and vehicles can share either raw sensor data or processed results. Incorporating infrastructure sensors into a CAV's perception system provides safer and more timely perception, improving the safety of self-driving cars and accelerating their deployment in real traffic.
This project aims to 1) develop robust object detection, tracking, and fusion algorithms using roadside cameras and Lidar, with a roadside server equipped with a powerful GPU or other computing units serving as the edge computing platform; 2) develop accurate fusion algorithms that combine infrastructure and vehicle-based sensors at the raw-data level and/or the object level; and 3) develop a self-driving system to validate and demonstrate the infrastructure-sensor-enhanced perception system. A minimal sketch of the object-level fusion idea is given after this paragraph.
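For illustration only, the Python sketch below shows one way object-level fusion of infrastructure and vehicle detections could look: roadside detections reported in a map frame are transformed into the ego-vehicle frame and merged with the onboard detection list, keeping roadside objects that the vehicle's own sensors missed. The function names, coordinate conventions, and the 2-meter association gate are assumptions made for this sketch, not the project's actual algorithms.

import numpy as np


def to_vehicle_frame(points_map, ego_xy, ego_yaw):
    """Transform 2D object centers from the map frame into the ego-vehicle frame."""
    c, s = np.cos(-ego_yaw), np.sin(-ego_yaw)
    rot = np.array([[c, -s], [s, c]])
    return (points_map - ego_xy) @ rot.T


def fuse_object_lists(vehicle_dets, infra_dets_map, ego_xy, ego_yaw, gate=2.0):
    """Merge two detection lists (N x 2 arrays of object centers, in meters).

    Vehicle detections are kept as-is; an infrastructure detection is added
    only when no vehicle detection lies within `gate` meters of it, so objects
    occluded from the vehicle but seen by the roadside sensor are recovered.
    """
    infra_dets = to_vehicle_frame(infra_dets_map, ego_xy, ego_yaw)
    fused = list(vehicle_dets)
    for det in infra_dets:
        if len(vehicle_dets) == 0 or np.min(
            np.linalg.norm(vehicle_dets - det, axis=1)
        ) > gate:
            fused.append(det)  # object visible only to the infrastructure sensor
    return np.asarray(fused)


if __name__ == "__main__":
    # Hypothetical scene: ego vehicle at (10, 5) heading north (yaw = pi/2).
    ego_xy, ego_yaw = np.array([10.0, 5.0]), np.pi / 2
    vehicle_dets = np.array([[3.0, 0.5]])                   # seen by onboard sensors
    infra_dets_map = np.array([[10.2, 8.1], [25.0, 5.0]])   # seen by roadside sensor
    print(fuse_object_lists(vehicle_dets, infra_dets_map, ego_xy, ego_yaw))

In this toy example, the first roadside detection associates with the onboard detection and is dropped, while the second, occluded object is added to the fused list; a full system would use tracking and a principled association step rather than a fixed distance gate.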

Project Outcome

The expected outcomes of this project are 1) robust object detection, tracking, and fusion algorithms using roadside cameras and Lidar, with a roadside server equipped with a powerful GPU or other computing units serving as the edge computing platform; 2) accurate fusion algorithms that combine infrastructure and vehicle-based sensors at the raw-data level and/or the object level; and 3) a self-driving system that validates and demonstrates the infrastructure-sensor-enhanced perception system.


BUDGET YEAR: 2020
IMPACT: SAFETY