2019 IEEE GRSS Data Fusion Contest
Large-Scale Semantic 3D Reconstruction
The Contest: Goals and Organization
The 2019 Data Fusion Contest, organized by the Image Analysis and Data Fusion Technical Committee (IADF TC) of the IEEE Geoscience and Remote Sensing Society (GRSS), the Johns Hopkins University (JHU), and the Intelligence Advanced Research Projects Activity (IARPA), aims to promote research in semantic 3D reconstruction and stereo using machine intelligence and deep learning applied to satellite images.
The global objective is to reconstruct both a 3D geometric model and a segmentation of semantic classes for an urban scene. Incidental satellite images, airborne lidar data, and semantic labels are provided to the community. The 2019 Data Fusion Contest will consist of four parallel and independent competitions, corresponding to four diverse tasks:
- Track 1: Single-view Semantic 3D Challenge
- Track 2: Pairwise Semantic Stereo Challenge
- Track 3: Multi-view Semantic Stereo Challenge
- Track 4: 3D Point Cloud Classification Challenge
Scientific papers describing the best entries (as quantified by accuracy metrics derived from the confusion matrix) will be included in the Technical Program of IGARSS 2019, presented in an oral Invited Session, and published in the IGARSS 2019 Proceedings.
Competition Phases
The contest aims to promote innovation in semantic 3D reconstruction and stereo algorithms, as well as to provide objective and fair comparisons among methods. The ranking is based on quantitative accuracy metrics computed with respect to undisclosed test samples. Participants will be given a limited time to submit their semantic 3D maps once the test data are released. The contest will consist of two phases:
- Phase 1: Participants are provided with training data (which includes ground truth) and validation data (without ground truth) to train and validate their algorithms. Participants can submit prediction results for the validation set to the Codalab evaluation server to receive feedback on their performance. The top 10 submissions will be displayed on the leaderboard.
- Phase 2: Participants receive the test data set (without the corresponding ground truth) and submit their semantic 3D maps within two weeks of the release of the test data set, along with a short description of the approach used. After evaluation of the results, eight winners are announced. They will then have one month to write 4-page, IEEE-formatted manuscripts for inclusion in the IGARSS proceedings; each manuscript describes the addressed problem, the proposed method, and the experimental results.
Calendar:
- January 9th: Contest opening; release of training and validation data
- February 7th: Validation server with public leaderboard opens
- March 7th: Release of test data; test server opens
- March 22nd: Deadline for submission of results; the submission server is closed
- March 26th: Short description of the approach sent to iadf_chairs@grss-ieee.org (using the IGARSS paper template)
- March 29th: Winner announcement
The Data
In the contest, we provide the Urban Semantic 3D (US3D) dataset, a large-scale public dataset including multi-view, multi-band satellite images and ground-truth geometric and semantic labels for two large cities [1]. The US3D dataset includes incidental satellite images, airborne lidar, and semantic labels covering approximately 100 square kilometers over Jacksonville, Florida and Omaha, Nebraska, United States. For the contest, we provide training and test datasets for each challenge track, comprising approximately twenty percent of the US3D data.
- Incidental Satellite Images: For the contest, WorldView-3 panchromatic and 8-band visible and near-infrared (VNIR) images are provided courtesy of DigitalGlobe. The source data consist of 26 images collected between 2014 and 2016 over Jacksonville, Florida, and 43 images collected between 2014 and 2015 over Omaha, Nebraska, United States. Ground sampling distance (GSD) is approximately 35 cm for panchromatic and 1.3 m for VNIR images; all VNIR images are pan-sharpened. Satellite images are provided as geographically non-overlapping tiles, into which the airborne lidar data and semantic labels are projected. Unrectified images (for Tracks 1 and 3) and epipolar rectified image pairs (for Track 2) are provided as TIFF files.
- Airborne lidar data are used to provide the ground-truth geometry. The aggregate nominal pulse spacing (ANPS) is approximately 80 cm. Point clouds are provided for Track 4 as ASCII text files with the format {x, y, z, intensity, return number}. Training data derived from the lidar include ground-truth above-ground-level (AGL) height images for Track 1, pairwise disparity images for Track 2, and digital surface models (DSMs) for Track 3, all provided as TIFF files.
- Semantic labels are provided as TIFF files for each geographic tile in Tracks 1-3 and as ASCII text files in Track 4. The semantic classes in the contest include buildings, elevated roads and bridges, high vegetation, ground, and water.
We provide all the above datasets for the training regions only. For the validation and test regions, only satellite images are provided in Tracks 1-3 and only lidar point clouds are provided in Track 4. The ground truth for the validation and test sets remains undisclosed and will be used for evaluation of the results. The training and test sets for the contest include dozens of images for each geographic 500 m × 500 m tile: 111 tiles for the training set, 10 tiles for the validation set, and 10 tiles for the test set.
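As a quick-start illustration (not part of the official tooling), the raster tiles and ASCII point clouds can be read with standard Python libraries. The file names below are hypothetical and the point-cloud delimiter is an assumption; verify both against the released archives.

```python
import numpy as np
import tifffile

# Hypothetical tile names for illustration only; check the naming
# conventions of the released archives.
rgb = tifffile.imread("JAX_Tile_004_RGB.tif")  # pan-sharpened image, H x W x 3
agl = tifffile.imread("JAX_Tile_004_AGL.tif")  # above-ground heights in meters
cls = tifffile.imread("JAX_Tile_004_CLS.tif")  # semantic labels, integer codes

# Track 4 point clouds: one ASCII row per point,
# {x, y, z, intensity, return number}. The delimiter is an assumption.
pts = np.loadtxt("JAX_Tile_004_PC3.txt", delimiter=",")
xyz, intensity, return_num = pts[:, :3], pts[:, 3], pts[:, 4]
```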
Register for the contest to get data
RGB images and ground truth for disparities and semantics for the training region are shown in Fig. 1. Point clouds and 3D semantic labels for the training region are shown in Fig. 2.
Fig. 1 From left to right: Stereo correspondence with seasonal appearance differences, ground truth disparities and semantic labels
Reference:
[1] Bosch, M.; Foster, G.; Christie, G.; Wang, S.; Hager, G.D.; Brown, M.: Semantic Stereo for Incidental Satellite Images. In: Proc. IEEE Winter Conference on Applications of Computer Vision (WACV), 2019.
Challenge Tracks
Track 1: Single-view semantic 3D
For each geographic tile, an unrectified single-view image is provided. The objective is to predict semantic labels and normalized-DSM (nDSM) above-ground heights. Participants in Track 1 are expected to submit 2D semantic maps and AGL maps in raster format (similar to the TIFF files of the training set). Performance is assessed using the pixel-wise mean Intersection over Union (mIoU), for which true positives must have both the correct semantic label and a height error of less than 1 meter. We call this metric mIoU-3.
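To make the metric concrete, here is a minimal NumPy sketch of mIoU-3 (not the official scoring code): a pixel counts as a true positive for a class only if the predicted label is correct and the height error is below the threshold, so label-correct but height-wrong pixels still enlarge the union. The same function applies to Track 2 (threshold of 3 pixels on disparities) and Track 3 (1 meter on DSM heights).

```python
import numpy as np

def miou3(pred_cls, pred_z, gt_cls, gt_z, classes, thresh=1.0):
    """mIoU-3 sketch: true positives need the correct semantic label
    AND an nDSM/disparity/DSM error below `thresh`."""
    ious = []
    for c in classes:
        gt_c, pred_c = (gt_cls == c), (pred_cls == c)
        tp = np.sum(gt_c & pred_c & (np.abs(pred_z - gt_z) < thresh))
        union = np.sum(gt_c | pred_c)  # also counts label-correct, height-wrong pixels
        if union > 0:
            ious.append(tp / union)
    return float(np.mean(ious))
```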
Track 2: Pairwise semantic stereo
For each geographic tile, a pair of epipolar rectified images is given. The objective is to predict semantic labels and stereo disparities. Participants in Track 2 are expected to submit 2D semantic maps and disparity maps in raster format (similar to the TIFF files of the training set). Performance is assessed using mIoU-3 with a threshold of 3 pixels on the disparity values.
Track 3: Multi-view semantic stereo
Given multi-view images for each geographic tile, the objective is to predict semantic labels and a DSM. Unrectified images are provided with RPC metadata already adjusted using the lidar, so that registration is not required for evaluation and solutions can focus on image selection, correspondence, semantic labeling, and multi-view fusion. Since this track relies on RPC metadata, which may not be familiar to everyone, the provided baseline includes simple Python code to manipulate RPCs for epipolar rectification and triangulation. Participants in Track 3 are expected to submit 2D semantic maps and DSMs in raster format (similar to the TIFF files of the training set). Performance is assessed using mIoU-3 with a threshold of 1 meter on the DSM Z values.
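For orientation, the heart of the RPC camera model is small enough to sketch: image row and column are each a ratio of two cubic polynomials in normalized ground coordinates. The dictionary layout below is hypothetical and the 20-term ordering follows the standard RPC00B convention; verify both against the provided baseline code rather than treating this as the contest implementation.

```python
import numpy as np

def rpc_poly(coef, L, P, H):
    """Cubic RPC polynomial in normalized lon (L), lat (P), height (H).
    Term order follows the RPC00B convention (verify against your reader)."""
    terms = np.array([
        1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
        P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3, P*H*H,
        L*L*H, P*P*H, H**3,
    ])
    return float(np.dot(coef, terms))

def project(rpc, lon, lat, height):
    """Ground-to-image projection with a rational polynomial camera.
    `rpc` is a hypothetical dict of offsets, scales, and 20-coefficient arrays."""
    # Normalize ground coordinates with the RPC offsets and scales.
    L = (lon - rpc["lon_off"]) / rpc["lon_scale"]
    P = (lat - rpc["lat_off"]) / rpc["lat_scale"]
    H = (height - rpc["height_off"]) / rpc["height_scale"]
    # Row and column are ratios of cubic polynomials, then denormalized.
    row = rpc_poly(rpc["line_num"], L, P, H) / rpc_poly(rpc["line_den"], L, P, H)
    col = rpc_poly(rpc["samp_num"], L, P, H) / rpc_poly(rpc["samp_den"], L, P, H)
    return (row * rpc["line_scale"] + rpc["line_off"],
            col * rpc["samp_scale"] + rpc["samp_off"])
```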
Track 4: 3D point cloud classification
For each geographic tile, lidar point cloud data are provided. The objective is to predict a semantic label for each 3D point. Participants in Track 4 are expected to submit 3D semantic predictions in ASCII text files (similar to the text files of the training set). Performance is assessed using mIoU.
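Plain mIoU can be computed from a confusion matrix, as in this minimal sketch (again, an illustration rather than the official scoring code):

```python
import numpy as np

def miou(pred, gt, num_classes):
    """Mean IoU over classes from per-point integer labels."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (gt, pred), 1)          # confusion matrix: rows = truth
    tp = np.diag(conf)
    union = conf.sum(0) + conf.sum(1) - tp  # TP + FP + FN per class
    valid = union > 0                       # skip classes absent from the tile
    return float((tp[valid] / union[valid]).mean())
```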
Baseline methods
Baseline solutions are provided for each challenge track to help participants get started quickly and better understand the data and its intended use. Deep learning models are provided for image semantic segmentation (Tracks 1, 2, and 3), point cloud semantic segmentation (Track 4), single-image height prediction (Track 1), and pairwise stereo disparity estimation (Tracks 2 and 3). Each model is implemented in Keras with TensorFlow; the models, the Python code to train them, and the Python code for inference are all provided. A baseline semantic MVS solution (for Track 3), implemented in Python, is also provided to demonstrate the use of RPC metadata for basic tasks such as epipolar rectification and triangulation.
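The baselines themselves are available after registration. Purely for orientation, a tile-wise semantic segmentation model in Keras can be as small as the sketch below; this toy encoder-decoder is not the contest baseline, which is considerably deeper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def tiny_fcn(num_classes, channels=3):
    """A deliberately small fully convolutional encoder-decoder."""
    inp = layers.Input(shape=(None, None, channels))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(num_classes, 1, activation="softmax")(x)  # per-pixel class scores
    return models.Model(inp, out)

model = tiny_fcn(num_classes=5)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```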
Register for the contest to get baselines
Fig. 2 (Left) Point cloud data involved in the 2019 Data Fusion Contest and (right) 3D semantic labels
Results, Awards, and Prizes:
The following eight teams will be declared winners:
- The first and second ranked teams in the Single-view Semantic 3D Challenge
- The first and second ranked teams in the Pairwise Semantic Stereo Challenge
- The first and second ranked teams in the Multi-view Semantic Stereo Challenge
- The first and second ranked teams in the 3D Point Cloud Classification Challenge
The authors of the eight winning submissions will:
- Present their manuscripts in an oral Invited Session dedicated to the Contest at IGARSS 2019
- Publish their manuscripts in the Proceedings of IGARSS 2019
- Be awarded IEEE Certificates of Recognition. The award ceremony will take place during the Technical Committees and Chapter Chairs Dinner at IGARSS 2019, Yokohama, Japan in July 2019
The authors of the first-ranked team in each of the four tracks will receive a special prize at IGARSS 2019.
Five selected teams, namely the first-ranked teams of the Single-view Semantic 3D, Pairwise Semantic Stereo, and Multi-view Semantic Stereo Challenges, and the first and second ranked teams of the 3D Point Cloud Classification Challenge, will co-author journal papers (with a limit of 3 co-authors per submission) summarizing the outcome of the Contest, to be submitted to IEEE JSTARS. To maximize impact and promote the potential of semantic 3D reconstruction and stereo applied to satellite images, the open-access option will be used for this journal submission.
The costs of open-access publication and of the winners' participation in the Technical Committees and Chapter Chairs Dinner at IGARSS 2019 will be covered by the GRSS. The winning-team prize is kindly sponsored by the IGARSS 2019 Team.
The rules of the game:
- Data can be requested by registering for the Contest. Participants must read and accept the Contest Terms and Conditions.
- Participants in the contest are expected to submit:
- 2D semantic maps and nDSM/disparity/DSM maps in raster format (similar to the TIFF files of the training set) for Tracks 1, 2, and 3
- 3D semantic predictions in ASCII text files (similar to the text files of the training set) for Track 4
These results will be submitted to the Codalab competition websites for evaluation:
Track 1: https://competitions.codalab.org/competitions/20208
Track 2: https://competitions.codalab.org/competitions/20212
Track 3: https://competitions.codalab.org/competitions/20216
Track 4: https://competitions.codalab.org/competitions/20217
- Ranking among the participants will be based on:
- mIoU-3 for Tracks 1, 2, and 3
- mIoU for Track 4
- Institutional or business E-mail accounts should be used for registration.
- Only one E-mail account is allowed per team.
- Each team may submit at most ten entries per challenge during the test phase.
- The deadline for result submission is March 22, 2019, 23:59 UTC – 12 hours (e.g., March 22, 2019, 7:59 in New York City, 13:59 in Paris, or 19:59 in Beijing). The submission server will be open from March 7, 2019.
- Each team must submit a short, 2-page paper describing the approach used by March 26, 2019. Please send the paper to iadf_chairs@grss-ieee.org using the IGARSS paper template. Only one submission per team is allowed in the Contest; should multiple entries from the same team be received, only the best submission will be considered.
- For the eight winners, the internal deadline for full-paper submission is April 26, 2019, 23:59 UTC – 12 hours (e.g., April 26, 2019, 7:59 in New York City, 13:59 in Paris, or 19:59 in Beijing). The IGARSS full-paper submission deadline is May 27, 2019.
Failure to follow any of these rules will automatically invalidate the submission: the manuscript will not be evaluated and the team will be disqualified from the prize award.
Participants in the Contest are requested not to submit an extended abstract to IGARSS 2019 by the corresponding conference deadline in January 2019. Only the contest winners (the participants with the eight best-ranking submissions) will submit a 4-page paper describing their approach to the Contest by April 26, 2019. The received manuscripts will be reviewed by the Award Committee of the Contest, and the reviews will be sent to the winners. The winners will then submit the final versions of their full papers to the IGARSS Data Fusion Contest Invited Session by May 27, 2019, for inclusion in the IGARSS Technical Program and Proceedings.
Acknowledgements
The IADF TC chairs would like to thank IARPA and the Johns Hopkins University Applied Physics Laboratory for providing the data and the IEEE GRSS for continuously supporting the annual Data Fusion Contest through funding and resources.
The data are provided for the purpose of participation in the 2019 Data Fusion Contest. Participants acknowledge that they have read and agree to the following Contest Terms and Conditions:
- The owners of the data and of the copyright on the data are DigitalGlobe, IARPA and Johns Hopkins University.
- Any dissemination or distribution of the data packages by any registered user is strictly forbidden.
- The data can be used in scientific publications subject to approval by the IEEE GRSS Image Analysis and Data Fusion Technical Committee and by the data owners on a case-by-case basis. To submit a scientific publication for approval, the publication shall be sent as an attachment to an e-mail addressed to iadf_chairs@grss-ieee.org.
- In any scientific publication using the data, the data shall be identified as “grss_dfc_2019” and shall be referenced as follows: “[REF. NO.] 2019 IEEE GRSS Data Fusion Contest. Online: http://www.grss-ieee.org/community/technical-committees/data-fusion”.
- Any scientific publication using the data shall include a section “Acknowledgement”. This section shall include the following sentence: “The authors would like to thank the Johns Hopkins University Applied Physics Laboratory and IARPA for providing the data used in this study, and the IEEE GRSS Image Analysis and Data Fusion Technical Committee for organizing the Data Fusion Contest.”
- Any scientific publication using the data shall refer to the following papers:
- [Bosch et al., 2019] Bosch, M.; Foster, G.; Christie, G.; Wang, S.; Hager, G.D.; Brown, M.: Semantic Stereo for Incidental Satellite Images. In: Proc. IEEE Winter Conference on Applications of Computer Vision (WACV), 2019.
- [Le Saux et al., 2019] Le Saux, B.; Yokoya, N.; Hänsch, R.; Brown, M.; Hager, G.D.; Kim, H.: 2019 IEEE GRSS Data Fusion Contest: Semantic 3D Reconstruction [Technical Committees]. IEEE Geoscience and Remote Sensing Magazine, March 2019.