
Fig. 1 shows the improved training phase.

As opposed to V1, we considered imagery and reference data from the whole European continent. All imagery was downloaded through Sentinel Hub APIs. We used the AI4Boundaries and EuroCrops label datasets as sources of ground truth, and sampled locations to obtain samples stratified across geographies and crop types.
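To give an idea of what such stratified sampling can look like, here is a minimal sketch with geopandas. The file name and the country and crop_type columns are placeholders for illustration only, not the actual layout of the AI4Boundaries or EuroCrops data.

```python
import geopandas as gpd

# Hypothetical reference file with one row per labelled parcel; the column
# names "country" and "crop_type" are placeholders for illustration.
parcels = gpd.read_file("reference_parcels.gpkg")

SAMPLES_PER_STRATUM = 50  # illustrative budget per (country, crop type) pair

def stratified_sample(gdf, samples_per_stratum):
    """Draw an equal number of parcels from every (country, crop_type) stratum."""
    groups = gdf.groupby(["country", "crop_type"], group_keys=False)
    return groups.apply(
        lambda g: g.sample(n=min(samples_per_stratum, len(g)), random_state=42)
    )

sampled = stratified_sample(parcels, SAMPLES_PER_STRATUM)
# The centroids of the sampled parcels can then serve as training-chip locations.
sample_locations = sampled.geometry.centroid
```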
To reduce the sources of noise present in the training dataset, we applied two new steps: co-registration of the image series and automatic curation of the reference data.
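One common way to co-register a time series is to estimate the shift of each acquisition against a reference frame with phase cross-correlation. The sketch below, using scikit-image and SciPy on a single-band (time, height, width) array, only illustrates the idea; it is not necessarily the exact procedure used in FD V2.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def coregister_series(series: np.ndarray, reference_index: int = 0) -> np.ndarray:
    """Align every frame of a (time, height, width) single-band series to the
    frame at `reference_index` using phase cross-correlation.
    Illustrative sketch, not the exact FD V2 implementation."""
    reference = series[reference_index]
    aligned = np.empty_like(series)
    for t, frame in enumerate(series):
        # Estimated (row, col) translation between the reference and this frame.
        offset, _error, _phase = phase_cross_correlation(reference, frame)
        aligned[t] = nd_shift(frame, shift=offset, order=1, mode="nearest")
    return aligned
```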
Finally, we added a super-resolution layer to our U-net architecture to produce estimates of extent and boundaries at a 4x smaller pixel size than the input image: for example, given the four Sentinel-2 bands B02, B03, B04, and B08 at 10 m pixel size, the network estimates parcel boundaries at 2.5 m.
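A common way to add such a super-resolution output is a sub-pixel convolution (PixelShuffle) head on top of the decoder. The PyTorch sketch below is purely illustrative: the channel counts and the unet_backbone placeholder are assumptions, not the actual FD V2 architecture.

```python
import torch
import torch.nn as nn

class SuperResolutionHead(nn.Module):
    """Upsample decoder features by 4x with sub-pixel convolution and
    predict the three output maps (extent, boundary, distance)."""

    def __init__(self, in_channels: int, out_channels: int = 3, scale: int = 4):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, out_channels * scale**2, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),  # (C*16, H, W) -> (C, 4H, 4W) for scale=4
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.head(features)

# Example: 256x256 patches of the 10 m bands (B02, B03, B04, B08) become
# 1024x1024 predictions at 2.5 m pixel size.
# `unet_backbone` is a placeholder for a U-net producing 64-channel features:
# features = unet_backbone(torch.randn(1, 4, 256, 256))   # -> (1, 64, 256, 256)
# outputs = SuperResolutionHead(64)(features)             # -> (1, 3, 1024, 1024)
```

Keeping the upsampling in a final sub-pixel layer means most of the computation still runs at the 10 m input resolution.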
The neural network is then trained following hyper-parameter optimization. And ta-da, the weights of the V2 architecture are ready for inference.
Fig. 2. Schematic representation of the inference phase of FD V2. New or improved steps are shown in red.

Fig. 2 shows a schematic of the inference phase. Given the area-of-interest (AOI) and time-of-interest (TOI), we forward a batch processing API request to Sentinel Hub, which scales the download of the desired input imagery.
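For readers unfamiliar with the Batch Processing API, the sketch below shows roughly what such a request can look like. The field names follow the public Sentinel Hub Batch API documentation, while the token, AOI, time range, evalscript, and bucket path are placeholders; treat it as an illustration rather than a ready-to-run recipe.

```python
import requests

# Illustrative Batch Processing API request; all values below are placeholders.
API_URL = "https://services.sentinel-hub.com/api/v1/batch/process"
OAUTH_TOKEN = "<token from the Sentinel Hub OAuth client-credentials flow>"
EVALSCRIPT = "<evalscript returning the B02, B03, B04 and B08 bands>"
AOI_GEOJSON = {"type": "Polygon", "coordinates": [[[13.3, 45.4], [13.9, 45.4],
                                                   [13.9, 45.9], [13.3, 45.9],
                                                   [13.3, 45.4]]]}

payload = {
    "processRequest": {
        "input": {
            "bounds": {"geometry": AOI_GEOJSON},
            "data": [{
                "type": "sentinel-2-l2a",
                "dataFilter": {"timeRange": {"from": "2021-03-01T00:00:00Z",
                                             "to": "2021-10-31T23:59:59Z"}},
            }],
        },
        "evalscript": EVALSCRIPT,
    },
    "tilingGrid": {"id": 1, "resolution": 10.0},   # 10 m UTM-based grid cells
    "output": {"defaultTilePath": "s3://<bucket>/fd-v2/<tileName>/<outputId>.tif"},
    "description": "FD V2 input imagery",
}

headers = {"Authorization": f"Bearer {OAUTH_TOKEN}"}
batch = requests.post(API_URL, json=payload, headers=headers).json()
requests.post(f"{API_URL}/{batch['id']}/start", headers=headers)  # launch the job
```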
For each cell of the grid used by the service, we optionally co-register the images to reduce misalignments, normalize them to center the reflectance values within the sensitive range of the deep network, and finally feed them to the U-net with the pre-trained weights.
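As an example of such a normalization (the exact scheme used in FD V2 is not detailed here), per-band percentile clipping followed by standardization could look like this:

```python
import numpy as np

def normalize_reflectance(bands: np.ndarray,
                          lower: float = 1.0,
                          upper: float = 99.0) -> np.ndarray:
    """Clip each band of a (bands, height, width) reflectance stack to the given
    percentiles and rescale it to zero mean and unit variance, so the values fall
    in the range the network is sensitive to. Illustrative choice only."""
    out = np.empty_like(bands, dtype=np.float32)
    for i, band in enumerate(bands):
        lo, hi = np.percentile(band, [lower, upper])
        clipped = np.clip(band, lo, hi).astype(np.float32)
        out[i] = (clipped - clipped.mean()) / (clipped.std() + 1e-6)
    return out
```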
The network outputs estimates of the extent, the boundary, and the distance to the boundary of each detected agricultural parcel for every input timestamp of the time series. We then merge these estimates temporally and spatially to obtain a single image, which is contoured to derive the polygons outlining each agricultural parcel.
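A minimal sketch of this merging and vectorization step, assuming the per-timestamp extent probabilities are stacked in a (time, height, width) array: the temporal merge is a simple median and the contouring is done with rasterio. The actual FD V2 merging rules and threshold may differ.

```python
import numpy as np
import rasterio.features
from shapely.geometry import shape

def merge_and_vectorize(extent_series: np.ndarray, transform, threshold: float = 0.5):
    """Merge per-timestamp extent probabilities (time, height, width) into a
    single map with a temporal median, threshold it, and turn the connected
    regions into parcel polygons in the CRS given by `transform`."""
    merged = np.median(extent_series, axis=0)
    mask = (merged > threshold).astype(np.uint8)
    return [
        shape(geom)
        for geom, value in rasterio.features.shapes(
            mask, mask=mask.astype(bool), transform=transform
        )
        if value == 1
    ]
```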
Polygons estimated for each cell of the grid are then harmonized and merged to obtain a single vector map covering the input AOI.
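The sketch below illustrates one way such harmonization could work, assuming each cell's parcels come as a GeoDataFrame in a common CRS: polygons from different cells that overlap substantially (the same parcel detected twice in the cell overlap area) are dissolved into one geometry. The actual FD V2 post-processing may be more involved.

```python
import geopandas as gpd
import pandas as pd

def merge_grid_cells(cell_frames: list[gpd.GeoDataFrame],
                     min_overlap: float = 0.5) -> gpd.GeoDataFrame:
    """Combine per-cell parcel polygons into one vector map, dissolving
    duplicates that overlap by more than `min_overlap` of the smaller polygon.
    Illustrative sketch, not the exact FD V2 harmonization."""
    parcels = pd.concat(cell_frames, ignore_index=True)
    sindex = parcels.sindex  # spatial index to find candidate overlaps quickly
    merged, used = [], set()
    for i, geom in enumerate(parcels.geometry):
        if i in used:
            continue
        group = geom
        for j in sindex.query(geom, predicate="intersects"):
            if j == i or j in used:
                continue
            other = parcels.geometry.iloc[j]
            smaller = min(geom.area, other.area)
            if smaller > 0 and geom.intersection(other).area / smaller >= min_overlap:
                group = group.union(other)
                used.add(j)
        used.add(i)
        merged.append(group)
    return gpd.GeoDataFrame(geometry=merged, crs=cell_frames[0].crs)
```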
