3rd Workshop on Maritime Computer Vision (MaCVi)
USV-based Obstacle Segmentation
Quick Start
- Download the LaRS dataset on the dataset page.
- Train your model on the LaRS training set.
- Upload a .zip file with predictions on the upload page.
Overview
MaCVi 2025's USV challenges feature the LaRS dataset. LaRS focuses on scene diversity and covers a wide range of environments, including inland waters such as lakes, canals and rivers.
To perform well on LaRS, one needs to build a robust model that generalizes well to various situations.
For the Obstacle Segmentation track, your task is to develop a semantic segmentation method that classifies
the pixels of a given input image into one of three classes: sky, water or obstacle. You may train your method on
the LaRS training set, which has been designed specifically for this use case. You may also use additional publicly
available data to train your method; in that case, please disclose it during the submission process.
Task
Create a semantic segmentation method that classifies the pixels in a given image into one of three classes:
sky, water or obstacle. An obstacle is anything that the USV
could crash into or should avoid (e.g. boats, swimmers, land, buoys).
Dataset
LaRS consists of 4000+ USV-centric scenes captured in various aquatic domains. It includes per-pixel panoptic masks for water, sky and different types of obstacles. At a high level, obstacles are divided into (i) dynamic obstacles, which are objects floating in the water (e.g. boats, buoys, swimmers), and (ii) static obstacles, which are all remaining obstacle regions (shoreline, piers). Additionally, dynamic obstacles are categorized into 8 obstacle classes: boat/ship, row boat, buoy, float, paddle board, swimmer, animal and other.
This challenge is based on the semantic segmentation sub-track of LaRS: the annotations include semantic segmentation masks, where all obstacles are assigned into a single "obstacle" class.
Evaluation metrics
The LaRS evaluation protocol is designed to score predictions in a way that is meaningful for practical USV navigation.
Methods are evaluated in terms of:
- Water-edge segmentation (static obstacles) is evaluated by the quality of the predicted boundary between water and
static obstacles, measured as the per-pixel classification accuracy within a narrow belt around the ground-truth
water-static-obstacle boundary (μ).
- Dynamic obstacle detection is evaluated in terms of the number of true positive (TP), false
positive (FP) and false negative (FN) detections, summarized by precision (Pr), recall
(Re) and F1-score (F1). An obstacle is considered
detected (TP) if the predicted segmentation coverage of the obstacle class inside the GT obstacle mask is
sufficient (>70%). Predicted segmentation blobs outside GT obstacle masks are counted as FP detections.
- Segmentation quality is evaluated in terms of mean IoU (intersection over union) between the
predicted and ground-truth segmentation masks (mIoU).
To determine the winner of the challenge, we use the aggregate quality metric Q = mIoU × F1, which combines general segmentation quality (mIoU) with
dynamic obstacle detection quality (F1). In case of a tie, F1 will be considered.
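As an informal illustration of how these metrics fit together (this is not the official evaluation code), the sketch below computes mIoU, the coverage-based detection F1 and the aggregate Q score. The class IDs (0 = sky, 1 = water, 2 = obstacle), the `gt_instances` list of per-obstacle boolean masks and the `num_fp` input are assumptions of this sketch.

```python
# Informal sketch of the scoring logic (NOT the official evaluation code).
# Assumes hypothetical class IDs: 0 = sky, 1 = water, 2 = obstacle.
import numpy as np

def mean_iou(pred, gt, num_classes=3):
    """Mean IoU between HxW integer masks over the three classes."""
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union > 0:
            ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))

def detection_f1(pred, gt_instances, num_fp, coverage_thr=0.7):
    """A GT dynamic obstacle counts as detected (TP) if more than 70% of
    its pixels are predicted as obstacle. `num_fp` stands in for the
    number of predicted obstacle blobs outside GT obstacle regions,
    which the official protocol derives from the predictions."""
    obstacle = pred == 2  # hypothetical obstacle class ID
    tp = sum((obstacle & inst).sum() / inst.sum() > coverage_thr
             for inst in gt_instances)
    fn = len(gt_instances) - tp
    precision = tp / (tp + num_fp) if tp + num_fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Aggregate challenge score:
# Q = mean_iou(pred, gt) * detection_f1(pred, gt_instances, num_fp)
```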
Furthermore, we require every participant to report the speed of their method measured in frames per
second (FPS), together with the hardware used for benchmarking. Lastly, you should indicate which datasets
(including those used for pretraining) you used during training.
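A minimal way to obtain the reported FPS figure could look like the sketch below; `model` and `test_images` are placeholders for your own inference pipeline and test data, not part of the challenge tooling.

```python
# Rough FPS measurement sketch; `model` and `test_images` are placeholders
# for your own inference pipeline and test data.
import time

def measure_fps(model, test_images, warmup=5):
    # Warm-up runs so that one-time costs (e.g. CUDA context creation,
    # kernel compilation) do not skew the measurement.
    for img in test_images[:warmup]:
        model(img)
    start = time.perf_counter()
    for img in test_images:
        model(img)
    elapsed = time.perf_counter() - start
    return len(test_images) / elapsed

# fps = measure_fps(model, test_images)
# Report this value together with the hardware (e.g. GPU model) used.
```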
Participate
To participate in the challenge follow these steps:
- Download the LaRS dataset (LaRS webpage).
- Train a semantic segmentation model on the LaRS training set. You can
also use additional publicly available training data, but must disclose it during submission.
- Note: the mmsegmentation-macvi repository may be a good
starting point for developing your model. It contains scripts for training and inference on LaRS.
- Generate segmentation predictions on the LaRS test images.
- The predictions should be stored as .png files with color-coded predictions (see the conversion sketch after this list):
  - sky: [90, 75, 164]
  - water: [41, 167, 224]
  - obstacle: [247, 195, 37]
- The names of the .png files should match the names of the test images.
- Also note the performance of your method (in FPS) and the hardware used.
- Create a submission .zip archive with your predictions.
- The prediction .png files should be placed directly in the root of the .zip file (no extra directories).
- Refer to the example submission file (lars_seg_example.zip) for additional information.
- Upload your .zip file along with all the required information on the upload page. You need to
register in order to submit your results.
- After submission, your results will be evaluated on the server. This may take 10 minutes or more. Please refresh
the dashboard page to see the results. The dashboard will also display potential errors in case of failed
submissions (hover over the error icon). You may evaluate at most one submission per day (per challenge track).
Failed attempts do not count towards this limit.
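To illustrate the expected submission format, here is a minimal sketch that converts per-pixel class predictions into the color-coded .png files listed above and packs them into a flat .zip archive. The class-index convention (0 = sky, 1 = water, 2 = obstacle) and the helper names are assumptions of this sketch, not part of the official tooling.

```python
# Sketch: convert class-index predictions to color-coded .png files and
# pack them into a flat submission .zip. The class-index convention
# (0 = sky, 1 = water, 2 = obstacle) is an assumption of this sketch.
import zipfile
from pathlib import Path

import numpy as np
from PIL import Image

PALETTE = np.array([
    [90, 75, 164],    # sky
    [41, 167, 224],   # water
    [247, 195, 37],   # obstacle
], dtype=np.uint8)

def save_prediction(class_mask, out_path):
    """class_mask: HxW array of class indices -> color-coded RGB .png.
    The .png file name must match the corresponding test image name."""
    Image.fromarray(PALETTE[class_mask]).save(out_path)

def build_submission(pred_dir, zip_path="submission.zip"):
    """Place all prediction .png files directly in the archive root."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for png in sorted(Path(pred_dir).glob("*.png")):
            zf.write(png, arcname=png.name)  # no extra directories
```

Remember that the .png file names must match the test image names and that the files must sit directly in the archive root.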
Terms and Conditions
- Submissions must be made before the deadline as listed on the dates page.
Submissions made after the deadline will not count towards the final results of the challenge.
- Submissions are limited to one per day per challenge. Failed submissions do not count towards
this limit.
- The winner is determined by the Q metric described in the evaluation metrics section above (F1 is considered in case
of a tie).
- You are allowed to use additional publicly available data for training, but you must disclose it at the time of
upload. This also applies to pre-training.
- In order for your method to be considered for the winning positions and included in the results paper, you will
be required to submit a short report describing your method. More information regarding this will be
released towards the end of the challenge.
- Note that we (as organizers) may upload models for this challenge, but we do not compete for a winning position
(i.e. our models do not count on the leaderboard and merely serve as references). Thus, even if your method is worse
(in any metric) than one of the organizers' models, you are still encouraged to submit it. Methods that were submitted
as part of the MaCVi 2024 challenge will be marked on the leaderboards.
In case of any questions regarding the challenge datasets or submission, please join the
MaCVi Support forum.