Agriculture-Vision Prize Challenge

The 3rd Agriculture-Vision Prize Challenge aims to encourage research in developing novel and effective algorithms for agricultural pattern recognition from aerial images. Submissions will be evaluated and ranked by model performance.

This year we will be hosting two challenge tracks based on two different datasets: Agriculture-Vision and CropHarvest.


The first track will be based on the Agriculture-Vision dataset. An example from this dataset, which was published at CVPR 2020, is shown in Figure 1. The Agriculture-Vision dataset consists of 2,643 RGB-NIR aerial images of farmlands across the US, captured between 2017 and 2019. Image sizes range from 1367×1573 to 33292×34300 pixels, with a resolution of up to 10 cm/pixel. A total of 12 patterns commonly found in farmlands, including weeds, end-rows, nutrient deficiency, and others, are carefully annotated by agronomy experts. To the best of our knowledge, this is the first large-scale high-resolution (LSHR) aerial farmland dataset for visual pattern recognition.
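Since the dataset provides RGB and NIR channels per tile, a common first step is combining them into a single 4-channel model input. A minimal sketch (the function name and image shapes below are illustrative, not part of the dataset's tooling):

```python
import numpy as np

# Stack separate RGB and NIR images into one 4-channel array.
def stack_rgb_nir(rgb: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Stack an (H, W, 3) RGB image and an (H, W) NIR image into (H, W, 4)."""
    if nir.ndim == 2:
        nir = nir[..., np.newaxis]  # add a channel axis
    return np.concatenate([rgb, nir], axis=-1)

# Toy example with a 512x512 tile:
rgb = np.zeros((512, 512, 3), dtype=np.uint8)
nir = np.zeros((512, 512), dtype=np.uint8)
print(stack_rgb_nir(rgb, nir).shape)  # (512, 512, 4)
```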

In 2020, the competition focused on early-season imagery; in 2021, this was extended to cover the full season. In 2022, we will release 10,000 raw images, including multiple views of the same field over the season, to promote self-supervised learning. Data will be available for download via the AWS Registry of Open Data, enabling participants to train in the cloud without moving large amounts of data to a local machine if desired. The supervised component of the data is already publicly available, and the additional raw images will be made public upon acceptance of the workshop.


The second will be a satellite-based global crop type classification task based on data provided by the University of Maryland and NASA Harvest. Remote sensing datasets pose a number of interesting challenges to machine learning researchers and practitioners, from domain shift (spatial, semantic, and temporal) to highly imbalanced labels. In addition, the outputs of models trained on remote sensing datasets can contribute to positive societal impacts, for example in food security and climate change. However, many barriers limit the accessibility of satellite data to the machine learning community, including a lack of large labeled datasets, as well as limited understanding of the range of satellite products available, how these products should be processed, and how to manage multi-dimensional geospatial data. To lower these barriers and facilitate the use of satellite datasets by the machine learning community, we present CropHarvest, a satellite dataset of nearly 90,000 geographically diverse samples with agricultural labels. This data is already available online via Zenodo.
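One common mitigation for the label imbalance noted above is weighting each class by its inverse frequency during training. A minimal sketch (illustrative only; not part of the CropHarvest tooling):

```python
import numpy as np

# Inverse-frequency class weights: rare classes get proportionally
# larger weights, so the loss is not dominated by the majority class.
def inverse_frequency_weights(labels: np.ndarray) -> dict:
    """Weight each class by total / (num_classes * class_count)."""
    classes, counts = np.unique(labels, return_counts=True)
    weights = counts.sum() / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

labels = np.array([0, 0, 0, 1])  # class 0 is 3x as frequent as class 1
print(inverse_frequency_weights(labels))  # class 1 gets 3x the weight of class 0
```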

Both challenges will be hosted on CodaLab (Agriculture-Vision, CropHarvest), the platform used in past Agriculture-Vision workshops. Participants in each challenge will be provided with training and validation sets and evaluated on a held-out test set.

Challenge prizes (total: $10,000 USD)

Track 1

    • 1st Place: $2500 USD

    • 2nd Place: $1500 USD

    • 3rd Place: $1000 USD

Track 2

    • 1st Place: $2500 USD

    • 2nd Place: $1500 USD

    • 3rd Place: $1000 USD

Challenge Datasets


The Agriculture-Vision challenge is a multi-class semantic segmentation task using high-resolution aerial imagery to identify key agronomic patterns of interest. This year’s track is similar to last year's supervised challenge; models must be trained using only the provided dataset.
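Segmentation tracks like this one are typically scored with mean intersection-over-union. A sketch of the standard mIoU is below; note the official metric for this challenge is a modified mIoU (defined on the challenge page) that accounts for pixels carrying multiple overlapping labels, which this sketch does not handle:

```python
import numpy as np

# Standard mean IoU over single-label prediction and target masks.
def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:  # class absent from both masks; skip it
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

pred = np.array([0, 1, 1, 2])
target = np.array([0, 1, 2, 2])
print(round(mean_iou(pred, target, 3), 3))  # 0.667
```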

Download and Submission

Please refer to the <Agriculture-Vision Challenge> page for examples and instructions.

CropHarvest

The CropHarvest dataset is a global remote sensing dataset built from a variety of agricultural land-use datasets and remote sensing products.

Download and Submission

Please refer to the <CropHarvest Challenge> page for examples and instructions.

Results Submission

We will be hosting our challenges on CodaLab. The competition CodaLab pages can be found here (Agriculture-Vision, CropHarvest). Each participating team is required to register for the challenge: fill out the registration form here (registration form), then register on the competition page.

*Make sure your CodaLab account email matches one of the member emails in the registration form. Each team may register only once per challenge track.

Prizes will be awarded to the top 3 teams on the leaderboard for each challenge track that provide a valid final submission.

Each team is allowed 6 submissions per day per challenge track, and 60 submissions per track in total.

To be considered valid for the prize award, all submissions must satisfy the following requirements:

§ Model size is limited to 150M parameters in total.

§ The metrics derived from the "results/" folder in the final submission must match the metrics on the leaderboard.

§ Predictions in "results/" in the final submission must be reproducible with the resources in "code/" and "models/".

§ The training process of the method must be reproducible, and the retrained model should achieve similar performance.

§ The test set is off-limits.

§ Results generated by models trained on any other datasets will be excluded from prize evaluation. Using publicly available pre-trained weights (e.g., ImageNet, COCO) is acceptable.
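The 150M-parameter cap above can be checked before submitting. A framework-agnostic sketch is below (the helper is illustrative, not an official tool; in PyTorch the equivalent is `sum(p.numel() for p in model.parameters())`):

```python
import numpy as np

PARAM_LIMIT = 150_000_000  # 150M parameter cap from the challenge rules

def parameter_count(weights) -> int:
    """Total number of scalar parameters across a model's weight arrays."""
    return sum(int(np.prod(w.shape)) for w in weights)

# Toy example: one 3x3 conv (3 -> 64 channels) plus its bias vector.
weights = [np.zeros((64, 3, 3, 3)), np.zeros((64,))]
n = parameter_count(weights)
print(n, n <= PARAM_LIMIT)  # 1792 True
```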

Important Dates

Challenge related:

Challenge opens to the public: Feb 3, 2022 (11:59PM PDT)

Challenge paper submission deadline [proceedings]*: March 9, 2022 (11:59PM PDT)

*If submitting to workshop proceedings.

Challenge results submission deadline: June 3, 2022 (11:59PM PDT)

Challenge report submission deadline [non-proceedings]+: June 10, 2022 (11:59PM PDT)

+If submitting for prize winnings only.

Challenge awards announcements: June 19/20, 2022

NOTE: the final results submission occurs after the paper submission deadline. Teams wishing to submit challenge papers to the workshop proceedings must submit their results and papers by the paper deadline in early March. All teams may continue to improve their models through the final results submission deadline in June; prize awards will be based on the final results submitted at that time. Teams placing in the top 3 who wish to be eligible for prize awards must then submit a final report by the final report deadline.