Efficient Human-in-the-Loop Computer Vision Algorithms to Create Datasets of Rare Traffic Events from Video

Status: Complete

Lead Researcher(s)

Walter Lasecki
Assistant Professor of Electrical Engineering and Computer Science, College of Engineering; Assistant Professor of Information, School of Information

Jason Corso
Associate Professor of Electrical Engineering and Computer Science, College of Engineering

Project Team

Project Abstract

This project creates hybrid-intelligence pipelines for efficiently reconstructing 3D scenes from 2D video using crowdsourcing and computer vision. The research proposes a human-in-the-loop system that efficiently combines annotations from human workers with computation from machines to build a dataset of vehicle-crash videos. Recruited crowd workers will annotate the important objects in each video, such as crashed vehicles, and provide measurements of those objects' dimensions. Computer vision tools will be developed to scale up the annotation process.
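As a purely illustrative sketch of the kind of human/machine division of labor described above, the snippet below assumes crowd workers annotate bounding boxes on a few keyframes and a simple automatic step (here, linear interpolation) fills in the frames between them. The function name, box format, and interpolation strategy are hypothetical and are not the project's actual algorithms.

```python
# Hypothetical human-in-the-loop annotation sketch: humans label a few
# keyframes, and a cheap machine step propagates labels to the rest.
# All names and logic here are illustrative, not the project's method.

def interpolate_boxes(keyframes):
    """Fill in bounding boxes between human-annotated keyframes.

    keyframes: dict mapping frame index -> (x, y, w, h) boxes from
    crowd workers. Returns a dict covering every frame in the range.
    """
    frames = sorted(keyframes)
    dense = dict(keyframes)
    for a, b in zip(frames, frames[1:]):
        box_a, box_b = keyframes[a], keyframes[b]
        for f in range(a + 1, b):
            t = (f - a) / (b - a)  # fractional position between keyframes
            dense[f] = tuple(
                (1 - t) * va + t * vb for va, vb in zip(box_a, box_b)
            )
    return dense

# Two human-annotated keyframes; the machine fills frames 1-3.
human_labels = {0: (10.0, 20.0, 50.0, 30.0), 4: (18.0, 24.0, 50.0, 30.0)}
dense_labels = interpolate_boxes(human_labels)
```

The design point is that human effort is spent only where machine inference is unreliable, which is the cost-balancing idea the project's algorithms aim to formalize.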

Project Outcome

(1) Novel algorithms for combining human and machine intelligence in a computationally and cost-efficient manner, balancing compute time against human effort; (2) a working system for extracting fine-grained scene information about rare traffic events from videos, usable as training data in a range of connected and autonomous vehicle applications; (3) initial models of human and computer vision errors on the fine-grained parameter extraction tasks needed for 4D (3D + time) reconstruction; and (4) a proof-of-concept pipeline that uses non-expert crowds to generate sets of annotations/measures from varied (2D) videos using a lattice.


BUDGET YEAR: 2017
IMPACT: SAFETY
RESEARCH CATEGORY: SIMULATION & TESTING