Lidar Road Marking Segmentation

Mohamed Khaled
Feb 11, 2024 · 5 min read


Detecting road markings in 3D point clouds is a critical challenge for autonomous vehicles. These markings help the car determine where the lanes are; without them, the autonomous system loses its ability to plan its motion. Corrupted lane-marking detections can throw off the entire autonomy pipeline, or at the very least reduce its robustness.

Right is the front cam, left is the LiDAR point cloud with the lane markings marked in red

The classical approach to road marking segmentation is to use a clustering algorithm to extract the ground plane and then recover the road markings from it via post-processing. A significant limitation of this approach is that the ground plane contains diverse terrains beyond asphalt. The post-processing relies on asphalt having a characteristic intensity level and on markings having a noticeably higher one; varied terrains disrupt this distinction and cause the post-processing to falter. Consequently, the algorithm might mistakenly detect lane markings on non-drivable surfaces, a serious risk for autonomous vehicles: the vehicle could believe there are lanes where there are none, creating potential dangers during navigation.

GitHub repo for the project.

To address these challenges, we have chosen to break down the problem into two distinct sub-problems:

  1. Drivable Surface Extraction: extracting only the asphalt part of the road meant for cars, excluding elements like sidewalks and pavements.
  2. Road Markings Segmentation Algorithm: employing our segmentation algorithm on the drivable surface to precisely extract the road markings.

By effectively tackling the first sub-problem and extracting the drivable surface accurately, we pave the way for optimal performance in the following step of road markings segmentation.
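The decomposition above can be sketched end to end. Note that both stage functions below are simplified stand-ins (a height cut instead of Cylinder3D inference, a fixed intensity cut instead of our per-ring thresholding); they only illustrate how the two stages chain together.

```python
import numpy as np

def extract_drivable_surface(points):
    """Stage 1 stand-in: the real pipeline runs Cylinder3D here.
    For illustration we simply keep near-ground points (z below -1 m)."""
    return points[points[:, 2] < -1.0]

def threshold_markings(points):
    """Stage 2 stand-in: the real pipeline thresholds per LiDAR ring.
    For illustration we keep points whose intensity (column 3) exceeds 0.5."""
    return points[points[:, 3] > 0.5]

def segment_road_markings(points):
    """Chain the two sub-problems: drivable surface first, markings second."""
    return threshold_markings(extract_drivable_surface(points))
```

The key point is the ordering: stage 2 only ever sees points that stage 1 kept, so errors in drivable surface extraction propagate directly into the marking output.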

Drivable Surface Extraction (Cylinder3D model)

Traditional algorithms built on handcrafted features often struggle to generalize across different scenes and environments. Wanting a more adaptive approach, we decided to use a deep learning model, since well-trained deep-learning models excel at generalization and can adapt to varied driving scenes, enhancing the robustness of drivable surface extraction. We selected the nuScenes LiDAR Segmentation dataset for its inclusion of the drivable surface class. Our choice of the Cylindrical 3D Convolution Network (Cylinder3D) was influenced by its strong performance on the nuScenes LiDAR Segmentation Challenge leaderboard at the time of our experiment, as well as by the availability of open-source code.

Cylinder3D starts by voxelizing the point cloud into cylindrical voxels, a departure from traditional cubic voxelization. It then employs a UNet-like architecture to segment the voxels, assigning each a specific class. The distinctive contribution of the paper is this cylindrical voxelization method, which sets it apart from conventional approaches.

Cylinder3D Architecture (from paper)

Cylindrical voxelization enhances model performance by ensuring a more uniform distribution of points across voxels. In contrast, cubic voxelization often concentrates points in a small percentage of the voxels, leading to significant information loss: during voxelization each voxel's class is determined by majority voting, so points whose class differs from the majority are discarded. A more even distribution of points across voxels therefore minimizes this loss, and that is exactly what cylindrical voxels provide, contributing to improved model accuracy and robustness.
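To make the idea concrete, here is a minimal numpy sketch of cylindrical voxelization: each point's (x, y) is converted to polar coordinates (ρ, φ), and the three cylindrical coordinates are binned into voxel indices. The bin counts and ranges below are illustrative defaults, not the configuration used in the Cylinder3D paper.

```python
import numpy as np

def cylindrical_voxel_indices(points, rho_bins=480, phi_bins=360, z_bins=32,
                              rho_max=50.0, z_min=-4.0, z_max=2.0):
    """Map each (x, y, z) point to a cylindrical voxel index (i, j, k).

    i indexes radial distance rho, j indexes azimuth phi, k indexes height z.
    Points beyond the ranges are clamped into the outermost bins.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x ** 2 + y ** 2)     # radial distance from the sensor
    phi = np.arctan2(y, x)             # azimuth angle in [-pi, pi]
    i = np.clip((rho / rho_max * rho_bins).astype(int), 0, rho_bins - 1)
    j = np.clip(((phi + np.pi) / (2 * np.pi) * phi_bins).astype(int), 0, phi_bins - 1)
    k = np.clip(((z - z_min) / (z_max - z_min) * z_bins).astype(int), 0, z_bins - 1)
    return np.stack([i, j, k], axis=1)
```

Because the azimuth bins fan out with distance, far-away regions (where LiDAR points are sparse) get larger voxels, which is what evens out the point-per-voxel distribution compared to fixed-size cubes.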

The proportion of non-empty cells at different distances between cylindrical and cubic partitions (from paper)

The image below illustrates the model’s performance on nuScenes data. The drivable surface class is in light blue and, as shown in the image, the model identifies the drivable surface well.

Segmentation output of Cylinder3D model

Road Markings Segmentation Algorithm

With the drivable surface successfully extracted, the next step is to figure out how to obtain the road markings. We started by visualizing and exploring the data to determine the best approach. In the image below, only the drivable surface is visualized, colored by intensity, and the road markings are clearly visible. This observation motivated us to proceed with extracting them.

Greyscale visualization of the drivable surface intensity

At first, we went for a straightforward approach: applying Otsu’s thresholding technique to the entire drivable surface. However, the output did not meet our expectations. After some experimentation, we discovered that each laser scanner in the LiDAR senses intensity with a slight variation. For example, laser_1 might register an intensity of 0.9 for a white road mark while laser_2 records 0.7, likely a result of the differing characteristics of the individual laser scanners. The variation is also visible in the image above.

Our approach instead computes a separate threshold for each ring, where a ring is the set of points originating from the same laser scanner in the LiDAR. Within each ring, we again use Otsu’s method to determine the optimal threshold for distinguishing asphalt from road markings. The nuScenes LiDAR comprises 32 laser scanners, resulting in a point cloud structure with 32 rings, and the dataset makes things easy for us by providing each point in a simple format: x, y, z, intensity, ring_id. This makes it straightforward to gather the points belonging to each laser scanner, since each scanner forms its own ring.
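The per-ring thresholding can be sketched as follows. For self-containment, Otsu’s method is implemented directly in numpy here (libraries such as scikit-image provide an equivalent `threshold_otsu`); the assumed array layout is the nuScenes one described above: x, y, z, intensity, ring_id.

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of a 1-D intensity distribution."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                      # weight of the low class
    w1 = hist.sum() - w0                      # weight of the high class
    cum = np.cumsum(hist * centers)
    m0 = cum / np.maximum(w0, 1e-12)          # mean of the low class
    m1 = (cum[-1] - cum) / np.maximum(w1, 1e-12)  # mean of the high class
    between = w0 * w1 * (m0 - m1) ** 2
    return centers[np.argmax(between)]

def segment_markings_per_ring(points):
    """points: (N, 5) array laid out as x, y, z, intensity, ring_id.
    Returns a boolean mask, True for points classified as road marking."""
    intensity = points[:, 3]
    ring_id = points[:, 4].astype(int)
    mask = np.zeros(len(points), dtype=bool)
    for ring in np.unique(ring_id):
        sel = ring_id == ring
        if sel.sum() < 2:
            continue                          # too few points to threshold
        t = otsu_threshold(intensity[sel])
        mask[sel] = intensity[sel] > t        # markings are the bright class
    return mask
```

Because each ring gets its own threshold, a scanner that reports systematically dimmer intensities still separates its own asphalt from its own markings, which is exactly what the global threshold failed to do.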

The following showcases the output of our method applied to an entire scene in nuScenes:

Left is front cam, right is the point cloud where the red points are the road marks

You can access our code and run it on different scenes at https://github.com/MohamedElhadidy0019/Lidar-Road-Marking-Segmentation. The ‘landmarks.py’ file contains our approach for road marking extraction.

I hope you enjoyed exploring this new approach in the article. If you have any more questions or need further assistance, feel free to ask. Happy coding!
