ABOUT THE SYSTEM

Key features of our system

High-Resolution Analysis

High Robustness

High Accuracy

Large-Scale Image Analytics

Semantic Segmentation

Depth Estimation

3D Driving Environment Estimation

Fusion of Deep Learning Detection and Explainable Machine Learning

System Overview

This system extracts and understands the driving visual environment along roads from street view images (e.g., Google Street View panoramas) and videos captured by cameras mounted on vehicles. Semantic segmentation and depth estimation are first performed to obtain a class label and a depth value for each pixel in the images or videos. An orthographic transformation is then applied to convert the 2D images into a 3D representation that reflects the driving visual view in the real world. Based on the proposed system, the following information can be generated from street view images and videos:


The system can be applied to street-level images and videos collected at different types of road facilities, such as freeways, arterials, intersections, bike lanes, and sidewalks.
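To make the pipeline concrete, the sketch below shows one common way to lift per-pixel segmentation and depth outputs into a labeled 3D point set: standard pinhole back-projection. It is a minimal illustration under assumed camera intrinsics and a hypothetical class ID, not the system's actual orthographic transformation or implementation.

import numpy as np

def back_project(depth, labels, fx, fy, cx, cy):
    # Lift a per-pixel depth map into 3D camera-frame points, carrying the
    # semantic class predicted for each pixel along with its point.
    #   depth  : (H, W) depth in meters (output of the depth estimation step)
    #   labels : (H, W) semantic class IDs (output of the segmentation step)
    #   fx, fy, cx, cy : pinhole intrinsics (assumed calibrated)
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points, labels.reshape(-1)

# Toy inputs standing in for the real network outputs.
depth = np.full((480, 640), 10.0)              # every pixel 10 m away
labels = np.zeros((480, 640), dtype=np.int32)  # 0 = "road" (hypothetical ID)
points, point_labels = back_project(depth, labels, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(points.shape, point_labels.shape)        # (307200, 3) (307200,)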

Buildings

75+ Miles of Roads

15,000 Street View Images

1+ Publication

2+ Funded Projects

THE TEAM

The ones who make this happen

Dr. Mohamed Abdel-Aty

P.E., F.ASCE, Trustee Chair

Dr. Yina Wu

Research Associate Professor

Dr. Qing Cai

Research Assistant Professor

Ou Zheng

Software Engineer

OUR PERFORMANCE

Accuracy

90%+

Scalability Level

85%+

Robustness

80%+

OUR EXAMPLES

What we've done for safety

Detection 1
Detection 2
SHAP Value