CropHarvest Challenge 2022

Download

A sample notebook and examples for downloading the data from Zenodo are available here.

Train, validation, and test data are available for download via the site above. By downloading this data, participants attest that the test data will not be used in any way (i.e., for training, validation, or otherwise). The Kenya maize test set will be used as the test set for the challenge.
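As a minimal sketch, the data can also be fetched programmatically with the cropharvest Python package; the helper below follows the package README at the time of writing, but the sample notebook linked above remains the canonical download path:

```python
from cropharvest.datasets import CropHarvest

# Downloads the features and labels from Zenodo into the given root
# directory (if not already present) and constructs the benchmark
# train/test datasets, which include the Kenya maize task.
datasets = CropHarvest.create_benchmark_datasets("data")

for dataset in datasets:
    print(dataset)  # task name and number of instances
```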


CropHarvest

Remote sensing datasets pose a number of interesting challenges to machine learning researchers and practitioners, from domain shift (spatially, semantically and temporally) to highly imbalanced labels. In addition, the outputs of models trained on remote sensing datasets can contribute to positive societal impacts, for example in food security and climate change. However, there are many barriers that limit the accessibility of satellite data to the machine learning community, including a lack of large labeled datasets as well as an understanding of the range of satellite products available, how these products should be processed, and how to manage multi-dimensional geospatial data. To lower these barriers and facilitate the use of satellite datasets by the machine learning community, we present CropHarvest---a satellite dataset of nearly 90,000 geographically-diverse samples with agricultural labels.



Visualizations (Annotated Images)

Each sample is a 12-month time series spanning April to March, in which each monthly timestep contains 18 features representing values aggregated over a 30-day window, drawn from four remote sensing sources: Sentinel-2 optical multispectral imagery (11 bands), Sentinel-1 synthetic aperture radar (2 bands), ERA5 climatological variables (2 bands), and SRTM DEM topographic variables (2 bands), plus NDVI derived from Sentinel-2 (1 band). Each sample is labeled either maize or not-maize, where not-maize may be a crop other than maize (e.g., wheat) or a non-crop landcover (e.g., water or forest).
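As a rough sketch of how a single sample is laid out (the array shape follows the description above, but the band ordering here is an assumption and should be checked against the cropharvest package):

```python
import numpy as np

# One sample: 12 monthly timesteps x 18 features.
sample = np.zeros((12, 18))

# Assumed grouping of the 18 features by source (ordering is illustrative):
s2_optical = sample[:, 0:11]   # Sentinel-2 multispectral (11 bands)
s1_sar     = sample[:, 11:13]  # Sentinel-1 VV/VH backscatter (2 bands)
era5       = sample[:, 13:15]  # ERA5 temperature / precipitation (2 bands)
topo       = sample[:, 15:17]  # SRTM elevation / slope (2 bands)
ndvi       = sample[:, 17]     # NDVI derived from Sentinel-2 (1 band)
```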

The plots below show the mean and standard deviation in each band for the maize and not-maize samples in the training set. Note that the topographic variables (slope and elevation) are constant in time.
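A minimal sketch of how such per-class band statistics can be computed, assuming the training features are stacked into an array `X` of shape (n_samples, 12, 18) with binary labels `y`:

```python
import numpy as np

# Assumed shapes: X is (n_samples, 12 months, 18 bands), y is (n_samples,)
# with 1 = maize and 0 = not-maize. Placeholder data stands in for the
# real training set here.
X = np.random.rand(500, 12, 18)
y = np.random.randint(0, 2, size=500)

for label, name in [(1, "maize"), (0, "not-maize")]:
    class_x = X[y == label]
    band_mean = class_x.mean(axis=0)  # (12, 18): per-month mean of each band
    band_std = class_x.std(axis=0)    # (12, 18): per-month spread of each band
    print(name, band_mean.shape, band_std.shape)
```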


Evaluation metrics

We use F1 score as our main quantitative evaluation metric. The F1 score is computed as follows:
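In terms of true positives (TP), false positives (FP), and false negatives (FN):

$$
\mathrm{precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{recall} = \frac{TP}{TP + FN}
$$

$$
F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}
$$

For binary labels this is what `sklearn.metrics.f1_score(y_true, y_pred)` computes.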

Results Submission

Registration

We are now hosting our challenge on Codalab. The competition page can be found here (CropHarvest challenge). Each participating team is required to register for the challenge: fill out the registration form here (registration form) and register on the competition page.

*Make sure your Codalab account email matches one of the member emails in the registration form. Each team can only register once per challenge track.

Codalab submission

All registered teams can evaluate their results on Codalab and publish them on the leaderboard. We will use the evaluation function provided in the CropHarvest package for scoring. The submission file should be a compressed .zip archive of a folder named result/ that contains all prediction NetCDF files (.nc extension). Follow this notebook to learn more.
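A minimal packaging sketch, assuming one prediction array per test file; the file name and variable layout below are placeholders, and the linked notebook defines the exact format the evaluation function expects:

```python
import shutil
from pathlib import Path

import numpy as np
import xarray as xr

out_dir = Path("result")
out_dir.mkdir(exist_ok=True)

# Placeholder predictions; replace with your model's outputs, and match
# the file/variable naming shown in the submission notebook.
preds = np.random.rand(100)
xr.DataArray(preds, dims="instance", name="prediction").to_netcdf(
    out_dir / "kenya_maize_predictions.nc"
)

# Zip the result/ folder itself, producing submission.zip for Codalab.
shutil.make_archive("submission", "zip", root_dir=".", base_dir="result")
```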

This label order will be strictly followed during evaluation.

Each team may make up to 6 submissions per day and 60 submissions in total.

Final submission and prize reward

The Codalab leaderboard will be closed after the deadline. Top-ranked teams on the CropHarvest challenge track will be invited via email to provide their final submission for the prize reward. The final submission should include a detailed report of the method and the code necessary to reproduce the results. If the submitted results cannot be reproduced with the provided code, the corresponding team will not be considered for the prize reward. The final submission should be a compressed .zip archive that contains the following materials:

  • submission/

      • result/ (the prediction .nc files)

      • code/ (the training and inference code for the method)

      • models/ (the pretrained model, if applicable, and the final model)

      • challenge_report.pdf (detailed description of the method)


To be considered as a valid submission for the prize reward, all submissions must satisfy the following requirements:

  • Model size is limited to fewer than 150M parameters in total (see the parameter-count sketch after this list).

  • The F1 score derived from the "result/" folder in the final submission must match the F1 score on the leaderboard.

  • Predictions in "result/" in the final submission must be reproducible with the resources in "code/" and "models/".

  • The training process of the method must be reproducible, and the retrained model should achieve similar performance.

  • The test set is off-limits in training.

  • For fairness, teams must specify in their challenge_report.pdf which public datasets were used for training or pre-training their models. Results generated from models that use private datasets, and results without such details, will be excluded from the prize evaluation.
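A minimal sketch for checking the parameter budget, assuming a PyTorch model (the toy architecture below is illustrative only):

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    # Count every parameter, trainable or not, toward the 150M limit.
    return sum(p.numel() for p in model.parameters())

# Illustrative model: flattened 12 x 18 input, one binary maize output.
model = nn.Sequential(nn.Linear(12 * 18, 256), nn.ReLU(), nn.Linear(256, 1))
assert count_parameters(model) < 150_000_000, "over the parameter budget"
```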

The prize award will be granted to the top 3 teams on the challenge-track leaderboard that provide a valid final submission.


NOTE: since our challenge deadline will now be after the paper submission deadline, challenge papers will no longer be accepted or included in the workshop proceedings.

References

CropHarvest: A global dataset for crop-type classification.

Gabriel Tseng, Ivan Zvonkov, Catherine Nakalembe, Hannah Kerner

University of Maryland, NASA Harvest, MILA

Citation:

Tseng, G., Zvonkov, I., Nakalembe, C. L., & Kerner, H. (2021). CropHarvest: A global dataset for crop-type classification. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.