Simulated RGB and LiDAR Image based Training of Object Detection Models in the Context of Autonomous Driving
Keywords:
Object detection, Synthetic data, RGB, LiDAR, Autonomous driving, Deep learning, Computer vision

Abstract
Object detection, which gives cars the ability to perceive their environment, has drawn increasing attention. For good performance, object detection algorithms typically need huge datasets, which are frequently labeled manually; this procedure is expensive and time-consuming. A simulated environment, in which one has complete control over all parameters, instead allows for automated image annotation. Carla, an open-source project created exclusively for the study of autonomous driving, is one such simulator. This study examines whether object detection models that can recognize real traffic objects can be trained on automatically annotated simulator data from Carla. The experimental findings demonstrate that fine-tuning a model trained on Carla's data with some real data is encouraging. The Yolov5 model trained from pre-trained Carla weights exhibited improvements across all performance metrics compared to one trained exclusively on 2000 Kitti images. While it did not reach the performance level of the 6000-image Kitti model, the enhancements were substantial: the mAP0.5:0.95 score saw an approximate 10% boost, with the largest improvement in the Pedestrian class. Furthermore, a substantial performance boost can be achieved by training a base model with Carla data and fine-tuning it with a smaller portion of the Kitti dataset. Carla LiDAR images also show potential for reducing the volume of real images required while maintaining respectable model performance. Our code is available at: https://tinyurl.com/3fdjd9xb.
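The two-stage approach described in the abstract (pre-train on Carla, then fine-tune on a subset of Kitti) can be sketched with the standard YOLOv5 training interface. This is a minimal illustration, not the authors' exact configuration: the dataset YAML names (`carla.yaml`, `kitti_subset.yaml`) and the hyperparameter values are hypothetical placeholders, and the commands assume a local clone of the ultralytics/yolov5 repository.

```shell
# Assumes the ultralytics/yolov5 repo is cloned and two dataset YAMLs
# (carla.yaml, kitti_subset.yaml) exist; both names are hypothetical
# placeholders for datasets you prepare yourself.

# Stage 1: train a base model on automatically annotated Carla images,
# starting from COCO-pretrained yolov5s weights.
python train.py --img 640 --batch 16 --epochs 100 \
    --data carla.yaml --weights yolov5s.pt --name carla_base

# Stage 2: fine-tune the Carla-trained weights on a smaller portion of
# the Kitti dataset.
python train.py --img 640 --batch 16 --epochs 50 \
    --data kitti_subset.yaml \
    --weights runs/train/carla_base/weights/best.pt --name kitti_finetune
```

The key design point is that stage 2 resumes from the stage-1 checkpoint (`--weights runs/train/carla_base/weights/best.pt`) rather than from generic pretrained weights, which is what lets a smaller real-image set recover much of the performance of a larger one.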
Copyright (c) 2023 Norsk IKT-konferanse for forskning og utdanning
This work is licensed under a Creative Commons Attribution 4.0 International License.