Overview
This course provides a detailed understanding of perception algorithms used in autonomous driving systems. Participants will explore essential concepts such as lane detection, scene detection, and object detection using camera and lidar sensor data. The course also covers simulation with CARLA and ROS, with a focus on the practical skills needed to implement and evaluate perception algorithms in a simulated environment.
Objectives
By the end of this course, learners will be able to:
- Understand the role and challenges of perception in autonomous driving.
- Implement lane detection algorithms using computer vision techniques.
- Perform scene detection and understanding.
- Detect and track static and dynamic objects using cameras and lidar.
- Understand sensor models and their limitations.
- Establish a ROS-CARLA bridge for communication and data exchange.
- Develop and evaluate perception algorithms in the CARLA simulator.
- Analyze and optimize the performance of perception systems.
Prerequisites
- Modern C++
- Python programming
- Basic image and signal processing
- Linear algebra
Course Outline
- Role of perception in autonomous driving systems
- Object detection, tracking, classification, and scene understanding
- Overview of sensors: cameras, lidar, radar, and ultrasonic sensors
- Perception challenges: lighting conditions, occlusions, sensor noise
- Introduction to the CARLA simulator and setting up the CARLA-ROS environment
- Camera calibration and image rectification
- Color space transformations and edge detection
- Hough transform for line detection (see the lane detection sketch after this outline)
- Lane marking detection and tracking
- Curve fitting and lane modeling
- Practical exercises on implementing lane detection algorithms in CARLA
- Semantic segmentation and scene labeling
- Deep learning for scene understanding
- Road segmentation, obstacle detection, and free space estimation
- Scene context analysis for perception enhancement
- Practical exercises using deep learning models for scene detection
- Object detection using traditional methods such as HOG and Haar features
- Deep learning-based object detection with YOLO and SSD
- Object tracking using Kalman filters and particle filters (see the Kalman filter sketch after this outline)
- 3D object detection using lidar point clouds
- Sensor fusion for enhanced object detection and tracking
- Practical implementation of detection and tracking algorithms in CARLA
- Camera models and image formation
- Lens distortion and calibration techniques
- Lidar principles and point cloud processing
- Sensor calibration and error management
- Practical sensor calibration exercises using CARLA
- Overview of the Robot Operating System (ROS)
- Establishing a ROS-CARLA bridge for sensor data communication
- Publishing and subscribing to sensor data in ROS (see the ROS subscriber sketch after this outline)
- Controlling CARLA simulations using ROS messages
- Hands-on exercises in data exchange using the ROS-CARLA bridge
- Implementing perception algorithms in ROS within CARLA
- Evaluating perception algorithms in dynamic scenarios
- Analyzing results and identifying performance bottlenecks
- Optimizing perception algorithms for real-time performance
- Practical implementation of complete perception pipelines in CARLA
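
The lane detection module builds on a classical pipeline of edge detection followed by the Hough transform. As a rough illustration of the kind of exercise participants implement, here is a minimal OpenCV sketch; the region-of-interest polygon and Hough parameters are illustrative assumptions, not course-provided values.

```python
import cv2
import numpy as np

def detect_lane_lines(bgr_image):
    """Classical lane line detection: grayscale -> Canny edges -> Hough transform."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only a trapezoidal region of interest in front of the vehicle.
    h, w = edges.shape
    roi = np.zeros_like(edges)
    polygon = np.array([[(0, h), (w, h),
                         (int(0.55 * w), int(0.6 * h)),
                         (int(0.45 * w), int(0.6 * h))]], dtype=np.int32)
    cv2.fillPoly(roi, polygon, 255)
    masked = cv2.bitwise_and(edges, roi)

    # Probabilistic Hough transform returns line segments as (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=100)
    return lines

# Example usage (image path is a placeholder):
# frame = cv2.imread("camera_frame.png")
# print(detect_lane_lines(frame))
```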
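For object tracking, the course covers Kalman and particle filters. The sketch below is a minimal constant-velocity Kalman filter in NumPy that illustrates the predict/update cycle on 2D position detections; the noise parameters and state layout are illustrative choices, not part of the course material.

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal 2D constant-velocity Kalman filter. State: [x, y, vx, vy]."""

    def __init__(self, dt=0.1, process_noise=1.0, measurement_noise=1.0):
        self.x = np.zeros(4)                       # state estimate
        self.P = np.eye(4) * 100.0                 # state covariance
        self.F = np.eye(4)                         # state transition (constant velocity)
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))                  # we observe position only
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * process_noise
        self.R = np.eye(2) * measurement_noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        z = np.asarray(z, dtype=float)
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R    # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

# Example: track a detection drifting to the right.
# kf = ConstantVelocityKalman(dt=0.1)
# for t in range(5):
#     kf.predict()
#     print(kf.update([1.0 + 0.1 * t, 2.0]))
```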
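The ROS-CARLA bridge exposes simulator sensors as ROS topics. The sketch below shows a plain rospy (ROS 1) subscriber for a camera stream; the topic name and ego vehicle role name depend on how the bridge is configured, so treat them as placeholders rather than fixed course values.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def image_callback(msg):
    # Convert the ROS Image message to an OpenCV BGR array for processing.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    rospy.loginfo("Received frame %dx%d at t=%.3f",
                  frame.shape[1], frame.shape[0], msg.header.stamp.to_sec())

def main():
    rospy.init_node("carla_camera_listener")
    # Topic name is an assumption; it depends on the bridge and ego vehicle configuration.
    rospy.Subscriber("/carla/ego_vehicle/rgb_front/image", Image, image_callback)
    rospy.spin()

if __name__ == "__main__":
    main()
```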