The 2015 IEEE GRSS Data Fusion Contest, organized by the Image Analysis and Data Fusion Technical Committee (IADF TC), opened on December 20, 2014. The extended submission deadline was May 7, 2015. Participants submitted open-topic manuscripts to two separate tracks, the 2D Contest and the 3D Contest, using the LiDAR and color data released for the competition.
A total of 31 teams worldwide participated in the Contest. Evaluation and ranking were conducted by the Award Committee. The winners of each track are reported below, along with the abstracts of the submitted papers.
2D Contest
1st Place
Title: Shared feature representations of LiDAR and optical images: trading sparsity for semantic discrimination
Authors: Manuel Campos-Taberner (1), Adriana Romero (2), Carlo Gatta (3), Gustau Camps-Valls (1)
Affiliations: (1) Universitat de Valencia, Spain; (2) Universitat de Barcelona, Spain; (3) Universitat Autònoma de Barcelona, Spain.
E-mails: manuel.campos@uv.es, adriana.romero@ub.edu, cgatta@cvc.uab.es, gustau.camps@uv.es
Abstract: This paper studies the level of complementary information conveyed by extremely high resolution LiDAR and optical images. We pursued this goal following an indirect approach via unsupervised spatial-spectral feature extraction, using a recently presented unsupervised convolutional neural network trained to enforce both population and lifetime sparsity in the feature representation. We derived independent and joint feature representations and analyzed their sparsity scores and discriminative power. Interestingly, the results revealed that the RGB + LiDAR representation is no longer sparse, and that the derived basis functions merge color and elevation, yielding a set of more expressive colored edge filters. The joint feature representation is also more discriminative when used for clustering and topological data visualization.
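As a rough illustration of the sparsity scores discussed in the abstract, the Python sketch below computes population and lifetime sparsity for a generic activation matrix. It is not the authors' network or metric; the use of the Hoyer sparseness measure and the toy activation data are assumptions made purely for illustration.

```python
import numpy as np

def hoyer_sparseness(v, eps=1e-12):
    """Hoyer sparseness of a vector: 1 = maximally sparse, 0 = fully dense."""
    n = v.size
    l1 = np.abs(v).sum()
    l2 = np.sqrt((v ** 2).sum()) + eps
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

def sparsity_scores(features):
    """features: (n_samples, n_units) activation matrix.

    Population sparsity: mean sparseness of each sample's activation vector
    (few units fire per sample).
    Lifetime sparsity: mean sparseness of each unit's response across samples
    (each unit fires for few samples).
    """
    population = np.mean([hoyer_sparseness(row) for row in features])
    lifetime = np.mean([hoyer_sparseness(col) for col in features.T])
    return population, lifetime

# Toy comparison: hypothetical activations for RGB-only vs. joint RGB+LiDAR features.
rgb_feats = np.maximum(0, np.random.randn(1000, 64))
joint_feats = np.maximum(0, np.random.randn(1000, 64) + 0.5)
print("RGB only:  pop=%.3f life=%.3f" % sparsity_scores(rgb_feats))
print("RGB+LiDAR: pop=%.3f life=%.3f" % sparsity_scores(joint_feats))
```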
2nd Place
Title: Benchmarking classification of Earth-observation data: from learning explicit features to convolutional networks
Authors: Adrien Lagrange (1), Bertrand Le Saux (1), Anne Beaupere (1), Alexandre Boulch (1), Adrien Chan-Hon-Tong (1), Stephane Herbin (1), Hicham Randrianarivo (1), Marin Ferecatu (2)
Affiliations: (1) Onera – The French Aerospace Lab, F-91761 Palaiseau, France; (2) CNAM – Cedric, 292 rue St-Martin, 75141 Paris, France
E-mails: adrien.lagrange@onera.fr, bertrand.le_saux@onera.fr, alexandre.boulch@onera.fr, adrien.chan_hon_tong@onera.fr, Stephane.Herbin@onera.fr, hicham.randrianarivo@onera.fr, marin.ferecatu@cnam.fr
Abstract: In this paper, we address the task of semantic labeling of multisource earth-observation (EO) data. Specifically, we benchmark several concurrent methods from the last 15 years, ranging from expert classifiers, spectral support-vector classification, and high-level features to deep neural networks. We establish that (1) combining multisensor features is essential for retrieving some specific classes, (2) in the image domain, deep convolutional networks achieve significantly better overall performance, and (3) transfer learning from large general-purpose image sets is highly effective for building EO data classifiers.
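For readers who want a concrete starting point, the sketch below illustrates one of the simplest baselines at the "explicit features" end of the spectrum mentioned in the abstract: per-pixel fusion of color and LiDAR elevation fed to a linear SVM. It is a minimal illustration with random stand-in data, not the authors' benchmark code; the feature choice and class set are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def stack_multisensor_features(rgb, dsm):
    """Stack per-pixel optical and LiDAR-derived features.

    rgb: (H, W, 3) color image, dsm: (H, W) LiDAR elevation raster.
    Returns an (H*W, 4) feature matrix (3 color bands + elevation).
    """
    h, w, _ = rgb.shape
    return np.concatenate([rgb.reshape(h * w, 3),
                           dsm.reshape(h * w, 1)], axis=1)

# Toy data standing in for the contest rasters and a sparse ground-truth mask.
rgb = np.random.rand(100, 100, 3)
dsm = np.random.rand(100, 100)
labels = np.random.randint(0, 3, size=(100, 100))   # e.g. building / road / other
mask = np.random.rand(100, 100) < 0.05              # labeled training pixels

X = stack_multisensor_features(rgb, dsm)
y = labels.reshape(-1)
train = mask.reshape(-1)

clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X[train], y[train])
pred = clf.predict(X).reshape(100, 100)              # dense semantic labeling
```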
3D Contest
1st Place
Title: Aerial laser scanning and imagery data fusion for road detection in city scale
Authors: Anh-Vu Vo (1), Linh Truong-Hong (1), Debra F. Laefer (1, 2)
Affiliations: (1) Urban Modelling Group, School of Civil, Structural and Environmental Engineering, University College Dublin, Ireland; (2) Earth Institute, University College Dublin, Ireland
E-mails: anh-vu.vo@ucdconnect.ie, linh.truonghong@ucd.ie, debra.laefer@ucd.ie
Abstract: This paper presents a workflow, including a novel algorithm, for road detection from dense LiDAR data fused with high-resolution aerial imagery. Using a supervised machine learning approach, point clouds are first classified into one of three groups: building, ground, or unassigned. Ground points are further processed by a novel algorithm to extract a road network. The algorithm exploits the high variance of slope and height of the point data in the direction orthogonal to road boundaries. Applied to a 40-million-point dataset, the proposed approach successfully extracted a complex road network with an F-measure of 76.9%.
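The following Python sketch conveys the general idea of exploiting local height variability to flag road candidates on a rasterized ground-point grid. It is a drastically simplified illustration, not the authors' algorithm; the cell size, window size, variance threshold, and toy point cloud are placeholder assumptions.

```python
import numpy as np

def rasterize_ground_points(points, cell=1.0):
    """Bin ground points (x, y, z) into a regular grid of mean heights."""
    xy = points[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    shape = ij.max(axis=0) + 1
    sums = np.zeros(shape)
    counts = np.zeros(shape)
    np.add.at(sums, (ij[:, 0], ij[:, 1]), points[:, 2])
    np.add.at(counts, (ij[:, 0], ij[:, 1]), 1)
    height = np.full(shape, np.nan)
    valid = counts > 0
    height[valid] = sums[valid] / counts[valid]
    return height

def road_candidates(height, var_thresh=0.05, win=2):
    """Flag cells whose local height variance is low (flat, smooth surfaces).

    Roads tend to be locally planar, while their boundaries (curbs, vegetation,
    parked vehicles) show a sharp rise in height and slope variance.
    """
    h, w = height.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(win, h - win):
        for j in range(win, w - win):
            patch = height[i - win:i + win + 1, j - win:j + win + 1]
            if np.isnan(patch).any():
                continue
            mask[i, j] = np.var(patch) < var_thresh
    return mask

# Toy usage with synthetic ground points (x, y in metres, z in metres).
pts = np.random.rand(10000, 3) * np.array([100.0, 100.0, 1.0])
grid = rasterize_ground_points(pts, cell=1.0)
roads = road_candidates(grid, var_thresh=0.05)
```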
2nd Place
Title: IEEE Data Fusion Contest: Geospatial 2D and 3D object-based classification and 3D reconstruction of iso-containers depicted in a LiDAR data set and aerial imagery of a harbor
Authors: Dirk Tiede (1), Sebastian d’Oleire-Oltmanns (1), Andrea Baraldi (1, 2)
Affiliations: (1) Department of Geoinformatics – Z GIS, University of Salzburg, Austria; (2) Dept. of Agricultural Engineering and Agronomy, University of Naples Federico II, Portici (NA), Italy
E-mails: dirk.tiede@sbg.ac.at, sebastian.doleire-oltmanns@sbg.ac.at, andrea6311@gmail.com
Abstract: Within the 2015 IEEE GRSS Data Fusion Contest, an extremely high resolution LiDAR point cloud had to be fused with multi-spectral aerial imagery. We propose an innovative geospatial 2D and 3D object-based classification system capable of counting two populations of ISO containers (estimated on the basis of their standard sizes) located in the harbor area depicted by the two test datasets. The novelty of the proposed classification system is twofold. First, it combines inductive (bottom-up, data-driven) and deductive (top-down, prior rule-based) inference mechanisms, where the latter initializes the former in a hybrid inference framework. Second, it includes feedback loops, which increase its robustness to changes in input data and augment its degree of automation. The geospatial outcome consists of tangible vector objects, which allow per-container statistics to be estimated together with a detailed reconstruction of the 3D scene in a GIS environment.
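To make the size-based container counting idea concrete, here is a minimal Python sketch that estimates how many containers a classified stack object holds, given its footprint area (from the imagery) and its height above ground (from the LiDAR). The nominal dimensions come from the ISO container standard, but the volumetric counting rule and the example figures are illustrative assumptions, not the authors' procedure.

```python
# Nominal outer dimensions of standard ISO containers (metres).
CONTAINER_TYPES = {
    "20ft": (6.06, 2.44, 2.59),   # length, width, height
    "40ft": (12.19, 2.44, 2.59),
}

def estimate_container_count(footprint_area_m2, stack_height_m, kind="40ft"):
    """Estimate the number of containers in a classified container-stack object.

    footprint_area_m2: 2D footprint area of the object (imagery classification);
    stack_height_m: object height above ground (LiDAR).
    Counts are rounded to whole containers per layer and whole layers per stack.
    """
    length, width, height = CONTAINER_TYPES[kind]
    per_layer = max(1, round(footprint_area_m2 / (length * width)))
    layers = max(1, round(stack_height_m / height))
    return per_layer * layers

# Example: a 30 m x 12.2 m stack, 5.2 m tall, assumed to hold 40 ft units.
print(estimate_container_count(30.0 * 12.2, 5.2, "40ft"))
```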