3rd Workshop on Maritime Computer Vision (MaCVi)


USV-based Obstacle Segmentation


Quick Start

  1. Download the LaRS dataset from the dataset page.
  2. Train your model on the LaRS training set.
  3. Upload a .zip file with predictions on the upload page.

Overview

MaCVi 2025's USV challenges feature the LaRS dataset. LaRS focuses on scene diversity and covers a wide range of environments, including inland waters such as lakes, canals and rivers. To perform well on LaRS, you need to build a robust model that generalizes well to a wide variety of situations.

For the Obstacle Segmentation track, your task is to develop a semantic segmentation method that classifies each pixel of an input image into one of three classes: sky, water or obstacle. You may train your method on the LaRS training set, which has been designed specifically for this use case. You may also use additional publicly available data to train your method; in this case, please disclose it during the submission process.

Task

Create a semantic segmentation method that classifies the pixels in a given image into one of three classes: sky, water or obstacle. An obstacle is everything that the USV can crash into or that it should avoid (e.g. boats, swimmers, land, buoys).

Dataset

LaRS consists of over 4000 USV-centric scenes captured in a variety of aquatic domains. It includes per-pixel panoptic masks for water, sky and different types of obstacles. At a high level, obstacles are divided into i) dynamic obstacles, which are objects floating in the water (e.g. boats, buoys, swimmers), and ii) static obstacles, which cover all remaining obstacle regions (e.g. shoreline, piers). Dynamic obstacles are further categorized into 8 classes: boat/ship, row boat, buoy, float, paddle board, swimmer, animal and other.

This challenge is based on the semantic segmentation sub-track of LaRS: the annotations are semantic segmentation masks in which all obstacle categories are merged into a single "obstacle" class.
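To make this label collapse concrete, the sketch below reduces per-category labels to the three semantic classes. The label IDs (SKY_ID, WATER_ID) and the 0/1/2 output encoding are hypothetical placeholders; consult the official LaRS documentation for the actual encoding.

```python
import numpy as np

# Hypothetical label IDs -- check the official LaRS documentation for real values.
SKY_ID, WATER_ID = 1, 2   # assumed IDs in the per-category annotation

def panoptic_to_semantic(label_map: np.ndarray) -> np.ndarray:
    """Collapse per-category labels into the 3-class semantic task.
    Output encoding (assumed): 0 = obstacle, 1 = water, 2 = sky."""
    semantic = np.zeros_like(label_map)      # everything defaults to obstacle
    semantic[label_map == WATER_ID] = 1
    semantic[label_map == SKY_ID] = 2
    return semantic
```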

Evaluation metrics

The LaRS evaluation protocol is designed to score predictions in a way that is meaningful for practical USV navigation. Methods are evaluated in terms of:

  1. Water-edge segmentation (static obstacles) is evaluated by the quality of the predicted boundary between water and static obstacles, measured as the per-pixel classification accuracy within a narrow belt around the ground-truth water/static-obstacle boundary (denoted μ).
  2. Dynamic obstacle detection is evaluated in terms of the number of true positive (TP), false positive (FP) and false negative (FN) detections, summarized by precision (Pr), recall (Re) and F1-score (F1). An obstacle is considered detected (TP) if the predicted obstacle-class segmentation covers a sufficient fraction (>70%) of the ground-truth obstacle mask; predicted segmentation blobs outside GT obstacle masks count as FP detections (see the sketch after this list).
  3. Segmentation quality is evaluated in terms of the mean IoU (intersection over union) between the predicted and ground-truth segmentation masks (mIoU).
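As a rough illustration of the detection criterion, the sketch below counts TP, FP and FN on a single image using the >70% coverage rule. Treating each leftover connected component of the prediction as one FP is an assumption made for illustration; the official evaluation code may differ.

```python
import numpy as np
from scipy import ndimage

def count_detections(pred_obstacle: np.ndarray, gt_masks: list[np.ndarray],
                     coverage_thr: float = 0.7):
    """pred_obstacle: boolean HxW mask of pixels predicted as 'obstacle'.
    gt_masks: list of boolean HxW masks, one per GT dynamic obstacle."""
    tp = fn = 0
    covered = np.zeros_like(pred_obstacle, dtype=bool)
    for gt in gt_masks:
        coverage = (pred_obstacle & gt).sum() / max(gt.sum(), 1)
        if coverage > coverage_thr:   # sufficient coverage of the GT mask
            tp += 1
        else:
            fn += 1
        covered |= gt
    # Predicted blobs that do not touch any GT obstacle count as false positives.
    leftover = pred_obstacle & ~covered
    _, fp = ndimage.label(leftover)
    return tp, fp, fn
```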

To determine the winner of the challenge, we use an aggregate metric, Q (Quality) = mIoU × F1, which combines general segmentation quality, measured by the mIoU, with detection quality, measured by the F1 score. In case of a tie, the F1 score will be considered.
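The final ranking metric is then a simple product of the two scores; a minimal sketch:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / max(tp + fp, 1)   # Pr
    recall = tp / max(tp + fn, 1)      # Re
    return 2 * precision * recall / max(precision + recall, 1e-9)

def quality(miou: float, tp: int, fp: int, fn: int) -> float:
    # Q = mIoU x F1, the aggregate metric used to rank submissions
    return miou * f1_score(tp, fp, fn)
```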

Furthermore, we require every participant to report the speed of their method, measured in frames per second (FPS), along with the hardware used for benchmarking. Lastly, you should indicate which datasets (including those used for pretraining) you used during training.
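One simple way to obtain such an FPS number is shown below, assuming a PyTorch model; the warm-up iterations, batch size of 1 and synchronization points are illustrative choices, not challenge requirements.

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, images, device="cuda", warmup=10):
    """images: list of CHW tensors; processed one at a time (batch size 1)."""
    model.eval().to(device)
    images = [img.to(device) for img in images]
    sync = torch.cuda.synchronize if device.startswith("cuda") else (lambda: None)
    for img in images[:warmup]:        # warm-up to exclude one-off setup costs
        model(img.unsqueeze(0))
    sync()
    start = time.perf_counter()
    for img in images:
        model(img.unsqueeze(0))
    sync()                             # wait for queued GPU work to finish
    return len(images) / (time.perf_counter() - start)
```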

Participate

To participate in the challenge, follow these steps:

  1. Download the LaRS dataset (see the LaRS webpage).
  2. Train a semantic segmentation model on the LaRS training set. You can also use additional publicly available training data, but must disclose it during submission.
    • Note: the mmsegmentation-macvi repository may be a good starting point for developing your model. It contains scripts for training and inference on LaRS.
  3. Generate segmentation predictions on the LaRS test images.
    • The predictions should be stored as .png files with color-coded predictions (a packaging sketch follows this list):
      • sky: [90, 75, 164]
      • water: [41, 167, 224]
      • obstacle: [247, 195, 37]
    • The names of the .png files should match the names of the test images.
    • Also note the performance of your method (in FPS) and the hardware used.
  4. Create a submission .zip archive with your predictions.
    • The prediction .png files should be placed directly in the root of the .zip file (no extra directories).
    • Refer to the example submission file (lars_seg_example.zip) for additional information.
  5. Upload your .zip file along with all the required information here. You need to register in order to submit your results.
    • After submission, your results will be evaluated on the server. This may take 10 minutes or more; refresh the dashboard page to see the results. The dashboard will also display errors for failed submissions (hover over the error icon). You may evaluate at most one submission per day (per challenge track); failed attempts do not count towards this limit.
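A minimal packaging sketch for steps 3 and 4, assuming your model produces class-index masks (here 0 = sky, 1 = water, 2 = obstacle; that index order and all paths are assumptions, while the RGB colors and the flat .zip layout come from the instructions above):

```python
import zipfile
from pathlib import Path

import numpy as np
from PIL import Image

# Colors from the challenge instructions; class order (sky, water, obstacle) is assumed.
PALETTE = np.array([[90, 75, 164],    # sky
                    [41, 167, 224],   # water
                    [247, 195, 37]],  # obstacle
                   dtype=np.uint8)

def save_color_prediction(class_map: np.ndarray, out_path: Path) -> None:
    """class_map: HxW array of class indices in {0, 1, 2}."""
    Image.fromarray(PALETTE[class_map]).save(out_path)

def pack_submission(pred_dir: Path, zip_path: Path) -> None:
    """Place all .png predictions directly in the root of the .zip archive."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for png in sorted(pred_dir.glob("*.png")):
            zf.write(png, arcname=png.name)  # no extra directories

# Example usage (paths are placeholders):
# pack_submission(Path("predictions"), Path("lars_seg_submission.zip"))
```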

Terms and Conditions

In case of any questions regarding the challenge datasets or submission, please join the MaCVi Support forum.