InstLane Dataset and Geometry-Aware Network for Instance Segmentation of Lane Line Detection
Published: 2024-07-28
Volume: 16, Issue: 15, Page: 2751
ISSN: 2072-4292
Container-title: Remote Sensing
Language: en
Short-container-title: Remote Sensing
Author:
Cheng Qimin 1, Ling Jiajun 1, Yang Yunfei 2, Liu Kaiji 1, Li Huanying 1, Huang Xiao 3
Affiliation:
1. School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
2. Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
3. Department of Environmental Sciences, Emory University, Atlanta, GA 30322, USA
Abstract
Despite impressive progress, obtaining appropriate data for instance-level lane segmentation remains a significant challenge. This limitation hinders the refinement of granular lane-related applications such as lane line crossing surveillance, pavement maintenance, and management. To address this gap, we introduce a benchmark for lane instance segmentation called InstLane. To the best of our knowledge, InstLane constitutes the first publicly accessible instance-level segmentation benchmark for lane line detection. The complexity of InstLane stems from the fact that the original data are procured using laterally mounted cameras, as opposed to traditional front-mounted sensors. InstLane encapsulates a range of challenging scenarios, enhancing the generalization and robustness of lane line instance segmentation algorithms. In addition, we propose GeoLaneNet, a real-time, geometry-aware lane instance segmentation network. Within GeoLaneNet, we design a finer localization of lane proto-instances based on geometric features to counteract the omissions and multiple detections that non-maximum suppression (NMS) commonly produces in dense lane scenarios. Furthermore, we present a scheme that employs a larger receptive field to achieve deeper perceptual learning of lane structure, thereby improving detection accuracy, and we introduce an architecture based on partial feature transformation to expedite the detection process. Comprehensive experiments on InstLane demonstrate that GeoLaneNet can run up to twice as fast as current state-of-the-art methods, reaching 139 FPS on an RTX 3090 with a mask AP of 73.55%, at a modest cost in AP while maintaining comparable accuracy. These results underscore the effectiveness, robustness, and efficiency of GeoLaneNet for autonomous driving.
Funder
National Natural Science Foundation of China