
Object Detection, Tracking and Fusion using both Infrastructure and Vehicle-based Sensors

Status: In Progress

Lead Researcher(s)

Shaobing Xu

Project Team

Project Abstract

One challenge for self-driving cars is obstacles or road users hidden from view by, e.g., buildings, trees, and hills. An autonomous vehicle relying only on its own sensors may become aware of such a hazard too late and end up in a high-risk situation. V2V communication is a potential solution, but many road obstacles (e.g., an animal or a box) are not connected. A more robust solution is roadside sensors (e.g., cameras and Lidar) deployed at locations with obstructed lines of sight, such as intersections and sharp curves. Using advanced communication techniques such as 5G, infrastructure sensors and vehicles can share either raw sensor data or processed results. Incorporating infrastructure sensors into a CAV's perception system provides earlier and more complete perception, which improves the safety of self-driving cars and accelerates their deployment in real traffic.
This project aims to 1) develop robust object detection, tracking, and fusion algorithms using roadside cameras and Lidar, with a roadside server equipped with a powerful GPU or other computing units serving as the edge computing platform; 2) develop accurate algorithms for fusing infrastructure and vehicle-based sensors at the raw-data level and/or the object level; and 3) develop a self-driving system to validate and demonstrate the infrastructure-sensor-enhanced perception system.
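To illustrate the object-level fusion in aim 2), the sketch below shows one simple approach: roadside detections (reported in a world frame) are transformed into the vehicle frame using the vehicle's pose, then associated with onboard detections by nearest-neighbor matching within a gating distance; unmatched roadside objects are the occluded ones the vehicle could not see on its own. This is a minimal illustration, not the project's actual algorithm; the function names, the 2-D point representation, and the 2.0 m gate are assumptions made for the example.

```python
import math

def to_vehicle_frame(obj_xy, veh_pose):
    """Transform a point from the roadside (world) frame into the
    vehicle frame, given the vehicle pose (x, y, heading in radians)."""
    vx, vy, yaw = veh_pose
    dx, dy = obj_xy[0] - vx, obj_xy[1] - vy
    c, s = math.cos(-yaw), math.sin(-yaw)
    return (c * dx - s * dy, s * dx + c * dy)

def fuse_object_level(roadside_dets, vehicle_dets, veh_pose, gate=2.0):
    """Object-level fusion sketch: associate roadside detections
    (world frame) with onboard detections (vehicle frame) by nearest
    neighbor within a gating distance. Roadside objects with no onboard
    match are assumed occluded and are added to the fused list,
    extending the vehicle's effective field of view."""
    fused = list(vehicle_dets)
    for det in roadside_dets:
        p = to_vehicle_frame(det, veh_pose)
        d = min((math.hypot(p[0] - q[0], p[1] - q[1]) for q in vehicle_dets),
                default=float("inf"))
        if d > gate:  # not matched onboard: likely an occluded object
            fused.append(p)
    return fused
```

A real system would replace the nearest-neighbor step with a proper data-association method (e.g., Hungarian matching over track gates) and feed the fused objects into a tracker, but the coordinate transform and association structure are the same.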

Project Outcome

This project aims to 1) develop robust object detection, tracking, and fusion algorithms using roadside cameras and Lidar, with a roadside server equipped with a powerful GPU or other computing units serving as the edge computing platform; 2) develop accurate algorithms for fusing infrastructure and vehicle-based sensors at the raw-data level and/or the object level; and 3) develop a self-driving system to validate and demonstrate the infrastructure-sensor-enhanced perception system.


BUDGET YEAR: 2020-01-01
IMPACT: SAFETY