Multi-modal, Multi-scale Representation Learning for Satellite Imagery Analysis Just Needs a Good ALiBi
Abstract
Vision foundation models have proven effective at encoding satellite imagery into representations suitable for downstream tasks; however, building models that operate across multiple spatial resolutions and modalities remains challenging. This paper presents Scale-ALiBi, a linear-bias transformer attention mechanism that applies a spatial encoding bias to relationships between image patches at different ground sample distance (GSD) scales. We implement Scale-ALiBi over a dataset of aligned high- and low-resolution optical and low-resolution SAR satellite imagery using a triple-contrastive and reconstructive architecture, show an improvement on the GEO-Bench benchmark, and publicly release the newly curated dataset.
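To make the idea concrete, the sketch below shows how an ALiBi-style linear attention bias over patch distances might be extended with a scale term. This is a minimal illustration under assumptions, not the paper's implementation: the names (`scale_alibi_bias`), the additive combination of ground distance and log-GSD ratio, and the per-head slope schedule are all illustrative choices.

```python
import torch


def alibi_slopes(num_heads: int) -> torch.Tensor:
    # Geometric sequence of per-head slopes, as in the original ALiBi formulation.
    start = 2 ** (-8.0 / num_heads)
    return torch.tensor([start ** (i + 1) for i in range(num_heads)])


def scale_alibi_bias(coords: torch.Tensor, gsd: torch.Tensor, num_heads: int) -> torch.Tensor:
    """Illustrative Scale-ALiBi-style bias (an assumption, not the paper's exact form).

    coords: (N, 2) patch-center coordinates in a shared ground reference frame (e.g. metres)
    gsd:    (N,)   ground sample distance of each patch's source image (metres/pixel)
    Returns a (num_heads, N, N) additive attention bias.
    """
    # Spatial term: pairwise ground distance between patch centres.
    spatial = torch.cdist(coords, coords)                       # (N, N)
    # Scale term: absolute log-ratio of ground sample distances, so patches
    # from the same resolution incur no extra penalty.
    scale = (gsd.log()[:, None] - gsd.log()[None, :]).abs()     # (N, N)
    dist = spatial + scale                                      # combined penalty (assumed additive)
    slopes = alibi_slopes(num_heads).to(coords.dtype)
    return -slopes[:, None, None] * dist[None]                  # larger distance -> stronger negative bias


# The bias would be added to the attention logits before softmax:
#   attn = softmax(q @ k.transpose(-2, -1) / d**0.5 + bias)
```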
Architecture
Figure. The Scale-ALiBi architecture. Read the full paper here.
Datasets
Alongside the Scale-ALiBi model, we release the dataset used for training. The dataset is generated by processing Sentinel-1 and Sentinel-2 images and segmenting them into 256x256-pixel XYZ tiles. Sentinel-2 true-color images are segmented directly, while Sentinel-1 images undergo scaling and band manipulation to produce 8-bit optical-like representations. High-resolution NAIP images are matched to the same tiles, and the next zoom level down is included as well. Because of NAIP's coverage constraints, the dataset is geographically limited to the continental U.S. and Puerto Rico, with smaller regions selected for coverage based on diversity and scale. Several dataset sizes are available; see below for download links.
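The sketch below illustrates two of the steps described above: standard slippy-map XYZ tile indexing (including the parent-to-children relation used to pair a tile with the next zoom level down) and a Sentinel-1-to-8-bit composite. It is not the released pipeline; the dB clipping ranges and the VV/VH/difference band recipe are assumptions made for illustration.

```python
import math
import numpy as np


def lonlat_to_tile(lon: float, lat: float, zoom: int) -> tuple[int, int, int]:
    # Standard Web-Mercator XYZ (slippy-map) tile indexing.
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return zoom, x, y


def children(z: int, x: int, y: int) -> list[tuple[int, int, int]]:
    # The four tiles one zoom level down covering the same footprint,
    # used to pair a tile with its higher-resolution children.
    return [(z + 1, 2 * x + dx, 2 * y + dy) for dy in (0, 1) for dx in (0, 1)]


def sar_to_uint8(vv_db: np.ndarray, vh_db: np.ndarray) -> np.ndarray:
    """Illustrative Sentinel-1 'optical-like' composite (assumed band recipe).

    Clips backscatter in dB to fixed ranges and stacks VV, VH, and their
    difference as an 8-bit, 3-channel image with the same tile layout as
    the optical data.
    """
    def rescale(band: np.ndarray, lo: float, hi: float) -> np.ndarray:
        return np.clip((band - lo) / (hi - lo), 0.0, 1.0) * 255.0

    r = rescale(vv_db, -25.0, 0.0)
    g = rescale(vh_db, -30.0, -5.0)
    b = rescale(vv_db - vh_db, 0.0, 15.0)
    return np.stack([r, g, b], axis=-1).astype(np.uint8)
```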