Agriculture-Vision Challenge 2022


Agriculture-Vision

The “Agriculture-Vision Dataset” consists of RGB-NIR aerial images of farmlands across the US, captured between 2017 and 2019 at a resolution of up to 10 cm/pixel. A total of 8 patterns commonly found in farmlands, including weeds, end-rows, nutrient deficiency, and others, are carefully annotated by agronomy experts. Below is an example of a full field which was annotated and used to generate the tiles that compose the challenge dataset.

Download

To download this year's challenge datasets, you will need to install the AWS CLI.

Each field image in the supervised dataset has a file name in the format of (field id)_(x1)-(y1)-(x2)-(y2).(jpg/png). Each field id uniquely identifies the farmland that the image is cropped from, and (x1, y1, x2, y2) is a 4-tuple indicating the position at which the image is cropped from that field. Please refer to our paper for more details regarding how we construct the dataset.
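
For example, below is a minimal sketch, assuming Python and a purely hypothetical file name, of how the field id and crop coordinates can be recovered from this naming scheme:

# Hypothetical file name following the (field id)_(x1)-(y1)-(x2)-(y2).(jpg/png) pattern
filename = "1AD334EV2_1923-4242-2435-4754.jpg"

# Strip the extension, then separate the field id from the crop coordinates
stem, ext = filename.rsplit(".", 1)
field_id, coords = stem.split("_")
x1, y1, x2, y2 = (int(v) for v in coords.split("-"))

print(field_id, (x1, y1, x2, y2))  # -> 1AD334EV2 (1923, 4242, 2435, 4754)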

The supervised dataset can be downloaded with the following command:

aws s3 cp s3://intelinair-data-releases/agriculture-vision/cvpr_challenge_2021/supervised supervised --no-sign-request --recursive

By downloading this data, participants attest that the test data will not be used in any way (i.e., for training, validation, or otherwise).

Evaluation metrics

We use mean Intersection-over-Union (mIoU) as our main quantitative evaluation metric, which is one of the most commonly used measures in semantic segmentation datasets. The mIoU is computed as:

mIoU = (1/c) × Σ_c |P_c ∩ T_c| / |P_c ∪ T_c|

where c is the number of annotation types (c = 9 in our dataset, with 8 patterns + background), and P_c and T_c are the predicted mask and ground truth mask of class c respectively.

Since our annotations may overlap, we modify the canonical mIoU metric to accommodate this property. For pixels with multiple labels, a prediction of any one of those labels is counted as a correct pixel classification for each of them, and a prediction that matches none of the ground truth labels is counted as an incorrect classification for all ground truth labels.

Concretely, we construct the c×c confusion matrix M with the following rules:

For each prediction x and label set Y:

      1. If x ∈ Y, then M_{y,y} = M_{y,y} + 1 for each y in Y

      2. Otherwise, M_{x,y} = M_{x,y} + 1 for each y in Y

From this confusion matrix, the IoU of each class is computed as true_positive / (prediction + target - true_positive), and the final mIoU is the average across all classes.
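
For illustration only, below is a minimal sketch of this modified mIoU, assuming predictions come as a 2D integer label map and ground truth as a per-class boolean mask stack; the names and array layout are assumptions, and this is not the official evaluation code:

import numpy as np

NUM_CLASSES = 9  # background + 8 annotated patterns

def modified_miou(pred, target):
    # pred:   (H, W) integer array with predicted labels in [0, 8]
    # target: (NUM_CLASSES, H, W) boolean array; a pixel may carry multiple labels
    M = np.zeros((NUM_CLASSES, NUM_CLASSES), dtype=np.int64)
    pred_flat = pred.reshape(-1)
    target_flat = target.reshape(NUM_CLASSES, -1)
    for i, x in enumerate(pred_flat):
        ys = np.flatnonzero(target_flat[:, i])   # label set Y at this pixel
        if x in ys:
            M[ys, ys] += 1                       # rule 1: M_{y,y} += 1 for each y in Y
        else:
            M[x, ys] += 1                        # rule 2: M_{x,y} += 1 for each y in Y
    tp = np.diag(M).astype(np.float64)           # true positives per class
    prediction = M.sum(axis=1)                   # pixels counted as predictions of each class
    target_count = M.sum(axis=0)                 # pixels counted as ground truth of each class
    denom = prediction + target_count - tp
    iou = np.divide(tp, denom, out=np.full(NUM_CLASSES, np.nan), where=denom > 0)
    return np.nanmean(iou)                       # classes that never appear are ignored

The per-pixel loop keeps rules 1 and 2 explicit at the cost of speed; a vectorized version would group pixels by their label sets.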

Results Submission

Registration

We are now hosting our challenge on Codalab; the competition page is listed there as "Agriculture-Vision Challenge". Each participating team is required to register for the challenge. To register your team, fill out the registration form and register on the competition page.

*Make sure your Codalab account email matches one of the member emails in the registration form. Each team can only register once per challenge track.

Codalab submission

All registered teams can evaluate their results on Codalab and publish their results on the leaderboard. The submission file should be a compressed .zip file that contains all prediction images. All prediction images should be in png format, and the file names and image sizes should match the input images exactly. Each prediction image will be converted to a 2D numpy array with the following code:

numpy.array(PIL.Image.open('field-id_x1-y1-x2-y2.png'))

In the loaded numpy array, only 0-8 integer labels are allowed, and they represent the annotations in the following way:

    • 0 - background

    • 1 - double_plant

    • 2 - drydown

    • 3 - endrow

    • 4 - nutrient_deficiency

    • 5 - planter_skip

    • 6 - water

    • 7 - waterway

    • 8 - weed_cluster

IMPORTANT: as described in our paper, the "storm_damage" category will not be evaluated.

This label order will be strictly followed during evaluation.
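
Before zipping, it may help to sanity-check the predictions. Below is a minimal sketch, assuming Python with numpy and Pillow; the folder layout and the .jpg extension of the input images are assumptions:

import os
import numpy as np
from PIL import Image

PRED_DIR = "predictions"          # hypothetical folder holding the prediction PNGs
IMAGE_DIR = "test/images/rgb"     # hypothetical folder holding the corresponding input images

for name in os.listdir(PRED_DIR):
    if not name.endswith(".png"):
        continue
    pred = np.array(Image.open(os.path.join(PRED_DIR, name)))
    # Predictions must be 2D label maps containing only the integer labels 0-8
    assert pred.ndim == 2, f"{name}: prediction must be a single-channel label map"
    assert pred.min() >= 0 and pred.max() <= 8, f"{name}: labels must be integers in 0-8"
    # File names and image sizes must match the input images exactly
    image = Image.open(os.path.join(IMAGE_DIR, name.replace(".png", ".jpg")))
    assert pred.shape[::-1] == image.size, f"{name}: size mismatch with the input image"

The checked PNGs can then be compressed into a single .zip archive for upload.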

All teams can have 2 submissions per day and 20 submissions in total.

Final submission and prize reward

The Codalab leaderboard will be closed after the deadline. Top-tier teams in each challenge track will be invited by email to provide their final submission for the prize reward. The final submission should also include a detailed report of the method and the code necessary to reproduce the results. If the final submission result cannot be reproduced with the provided code, the participants of the respective submission will not be considered for the prize reward. The final submission should be a compressed .zip file that contains the following materials:

  • submission/

      • results/

          • (field id #1)_(x1)-(y1)-(x2)-(y2).png (label predictions that match the best mIoU on the leaderboard)

          • (field id #2)_(x1)-(y1)-(x2)-(y2).png

          • ... etc.

      • code/ (the training and inference code for the method)

      • models/ (pretrained model (if applicable) and the final model)

      • challenge_report.pdf (detailed description of the method)
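
Assuming the materials above are already arranged locally under a submission/ folder, a minimal packaging sketch in Python could look like this (the archive name is a placeholder):

import shutil

# Creates final_submission.zip containing the submission/ folder and everything under it
shutil.make_archive("final_submission", "zip", root_dir=".", base_dir="submission")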


To be considered as a valid submission for the prize reward, all submissions must satisfy the following requirements:

  • Model size is limited to below 150M parameters in total (a parameter-count sketch is given after this list).

  • The mIoU derived from the "results/" folder in the final submission should match the mIoU on the leaderboard.

  • Predictions in "results/" in the final submission can be reproduced with the resources in "code/" and "models/".

  • The training process of the method must be reproducible, and the retrained model should achieve similar performance.

  • The test set is off-limits in training.

  • For fairness, teams need to specify in their challenge_report.pdf which public datasets are used for training/pre-training their models. Results generated from models that use private datasets, as well as results submitted without these details, will be excluded from the prize evaluation.
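
For the parameter limit above, a minimal check sketch, assuming PyTorch (the model class is a placeholder, not part of the challenge code):

import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    # Total number of parameters, trainable and frozen alike
    return sum(p.numel() for p in model.parameters())

# model = MySegmentationModel()  # hypothetical model under submission
# assert count_parameters(model) < 150_000_000, "exceeds the 150M parameter limit"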

Prizes will be granted to the top 3 teams on the leaderboard in each challenge track that provide a valid final submission.


NOTE: since our challenge deadline will now be after the paper submission deadline, challenge papers will no longer be accepted or included in the workshop proceedings.

References

Agriculture-Vision: A Large Aerial Image Database for Agricultural Pattern Analysis

Mang Tik Chiu*, Xingqian Xu*, Yunchao Wei, Zilong Huang, Alexander Schwing, Robert Brunner, Hrant Khachatrian, Hovnatan Karapetyan, Ivan Dozier, Greg Rose, David Wilson, Adrian Tudor, Naira Hovakimyan, Thomas S. Huang, Honghui Shi

UIUC, IntelinAir, University of Oregon

The 1st Agriculture-Vision Challenge: Methods and Results

Mang Tik Chiu*, Xingqian Xu*, Kai Wang, Jennifer Hobbs, Naira Hovakimyan, Thomas S. Huang, Honghui Shi

UIUC, IntelinAir, University of Oregon