Agriculture-Vision Prize Challenge

The 2nd Agriculture-Vision Prize Challenge aims to encourage research in developing novel and effective algorithms for agricultural pattern recognition from aerial images. Submissions will be evaluated and ranked by model performance.

This year, we will be hosting two challenge tracks: a supervised track and a semi-supervised track. The top three performing submissions in each challenge track will receive prize rewards and presentation opportunities at our workshop.

NOTE: due to extra data collection, we have delayed the challenge to start on April 15 and end on June 5. We are also happy to increase the total cash prize to USD $20,000 with support from Intelinair, Microsoft, and Indigo.

Challenge prizes (total: USD $20,000)

Supervised track

    • 1st Place: USD $10,000

    • 2nd Place: USD $3,000

    • 3rd Place: USD $2,000

Semi-supervised track

    • 1st Place: USD $3,000

    • 2nd Place: USD $1,200

    • 3rd Place: USD $800

Dataset Download

Please refer to the Dataset page for information on how to download this year's challenge datasets.

Challenge track requirements

Supervised track

The supervised track is similar to last year's challenge. Models in the supervised track must be trained only on the provided supervised dataset, not the raw dataset. Nevertheless, other public datasets (except the Agriculture-Vision dataset and the dataset from last year's competition) can still be used for training/pre-training the model. Please refer to the Final Submission section for more details.

Semi-supervised track

In addition to the supervised dataset, participants may also use the raw image dataset for training. How the raw dataset is used is up to the participants. The same test set as in the supervised track will be used for evaluation.

We do not enforce any particular preprocessing of the raw dataset images. Participants are welcome to refer to our paper for how to convert these images into the format used in the supervised dataset.

Evaluation metrics

We use mean Intersection-over-Union (mIoU) as our main quantitative evaluation metric, which is one of the most commonly used measures in semantic segmentation datasets. The mIoU is computed as:

mIoU = (1/c) · Σ_c |P_c ∩ T_c| / |P_c ∪ T_c|

where c is the number of annotation types (c = 9 in our dataset, with 8 patterns + background), and P_c and T_c are the predicted mask and ground-truth mask of class c, respectively.

Since our annotations may overlap, we modify the canonical mIoU metric to accommodate this property. For pixels with multiple labels, a prediction matching any of the ground-truth labels is counted as a correct classification for each of those labels, and a prediction matching none of the ground-truth labels is counted as an incorrect classification for all of them.

Concretely, we construct the c × c confusion matrix M with the following rules:

For each prediction x and label set Y:

      1. If x ∈ Y, then M_{y,y} = M_{y,y} + 1 for each y in Y

      2. Otherwise, M_{x,y} = M_{x,y} + 1 for each y in Y

The mIoU is finally computed as true_positive / (prediction + target - true_positive), averaged across all classes.
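
As an illustration, the following is a minimal sketch of how the modified confusion matrix and the resulting mIoU could be accumulated. It assumes single-label predictions per pixel, per-class boolean ground-truth masks, and that the prediction and target counts are taken as the row and column sums of the matrix; it is not the official evaluation code.

import numpy as np

NUM_CLASSES = 9  # background + 8 annotation patterns

def update_confusion(conf, pred, labels):
    """Accumulate the modified confusion matrix for one image.

    conf   : (9, 9) integer array, updated in place.
    pred   : (H, W) integer array of predicted labels in [0, 8].
    labels : (9, H, W) boolean array; labels[c] is the (possibly
             overlapping) ground-truth mask of class c.
    """
    # True where the predicted label is contained in the pixel's label set.
    hit = np.take_along_axis(labels, pred[None, :, :], axis=0)[0]

    for y in range(NUM_CLASSES):
        gt_y = labels[y]
        # Rule 1: the prediction is in the label set, so count a correct
        # classification for every label the pixel carries.
        conf[y, y] += np.count_nonzero(gt_y & hit)
        # Rule 2: the prediction matches none of the pixel's labels, so count
        # an error against every label the pixel carries.
        miss_y = gt_y & ~hit
        for x in range(NUM_CLASSES):
            conf[x, y] += np.count_nonzero(miss_y & (pred == x))

def mean_iou(conf):
    # Per class: IoU = true_positive / (prediction + target - true_positive),
    # taking the diagonal, row sums, and column sums of the matrix.
    tp = np.diag(conf).astype(float)
    prediction = conf.sum(axis=1)
    target = conf.sum(axis=0)
    iou = tp / np.maximum(prediction + target - tp, 1)
    return iou.mean()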

Results Submission

Registration

We are now hosting our challenge on Codalab. The competition pages can be found here (supervised / semi-supervised) (links to be released on April 20). Each participating team is required to register for the challenge. To register your team, fill out the registration form here (link to be released in late April), then register on the competition page.

*Make sure your Codalab account email matches one of the member emails in the registration form. Each team can only register once per challenge track.

Codalab submission

All registered teams can evaluate their results on Codalab and publish them on the leaderboard. The submission file should be a compressed .zip file that contains all prediction images. All prediction images should be in PNG format, and their file names and image sizes should match the input images exactly. Each prediction image will be converted to a 2D numpy array with the following code:

numpy.array(PIL.Image.open('field-id_x1-y1-x2-y2.png'))

In the loaded numpy array, only 0-8 integer labels are allowed, and they represent the annotations in the following way:

    • 0 - background

    • 1 - double_plant

    • 2 - drydown

    • 3 - endrow

    • 4 - nutrient_deficiency

    • 5 - planter_skip

    • 6 - water

    • 7 - waterway

    • 8 - weed_cluster

IMPORTANT: following our paper, the "storm_damage" category will not be evaluated.

This label order will be strictly followed during evaluation.
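
For reference, here is a small sketch of how a prediction map can be written so that it round-trips through the loading code above. The file name, tile size, and array contents below are placeholders rather than real test-set values; all such PNG files are then packed into a single .zip archive for upload.

import numpy as np
from PIL import Image

# Hypothetical prediction for one tile: an (H, W) array of labels in [0, 8].
pred = np.zeros((512, 512), dtype=np.uint8)

# Save as an 8-bit, single-channel PNG. The file name must exactly match the
# corresponding input image; the name below is only a placeholder.
Image.fromarray(pred, mode="L").save("field-id_x1-y1-x2-y2.png")

# Sanity check: read the file back the same way the evaluation server does.
loaded = np.array(Image.open("field-id_x1-y1-x2-y2.png"))
assert loaded.shape == pred.shape and loaded.min() >= 0 and loaded.max() <= 8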

All teams can have 2 submissions per day and 20 submissions in total.

Final submission and prize reward

The Codalab leaderboard will be closed after the deadline. Top-tier teams in each challenge track will be invited through email to provide their final submission for the prize reward. The final submission will be a compressed .zip file that contains the following materials:

  • submission/

      • results/

          • (field id #1)_(x1)-(y1)-(x2)-(y2).png (label predictions that match the best mIoU on the leaderboard)

          • (field id #2)_(x1)-(y1)-(x2)-(y2).png

          • ... etc.

      • code/ (the training and inference code for the method)

      • models/ (pretrained model (if applicable) and the final model)

      • challenge_report.pdf (detailed description of the method)


To be considered as a valid submission for the prize reward, all submissions must satisfy the following requirements:

  • Total model size must be below 150M parameters (a quick parameter-count check is sketched after this list).

  • The mIoU derived from the "results/" folder in the final submission should match the mIoU on the leaderboard.

  • Predictions in "results/" in the final submission can be reproduced with the resources in "code/" and "models/".

  • The training process of the method must be reproducible, and a retrained model should achieve similar performance.

  • The test set is off-limits in training.

  • For fairness, teams must specify in their challenge_report.pdf which public datasets were used for training/pre-training their models. Results generated from models that use private datasets, as well as results without such details, will be excluded from prize evaluation.
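
Regarding the parameter limit, the following is a minimal sketch of how model size could be checked before submission (PyTorch assumed; the stand-in model is hypothetical and should be replaced with the network you intend to submit).

import torch

def count_parameters(model: torch.nn.Module) -> int:
    # Count all parameters, trainable or not.
    return sum(p.numel() for p in model.parameters())

# Hypothetical stand-in model; replace with the network you intend to submit.
model = torch.nn.Conv2d(in_channels=4, out_channels=9, kernel_size=3, padding=1)

n_params = count_parameters(model)
assert n_params < 150_000_000, f"model has {n_params:,} parameters"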

The prize award will be granted to the top 3 teams for each challenge track on the leaderboard that provide a valid final submission.


NOTE: since our challenge deadline will now be after the paper submission deadline, challenge papers will no longer be accepted or included in the workshop proceedings.

Important Dates

Challenge related:

Challenge opens to public: Apr 15, 2021

Challenge results submission deadline: Jun 5, 2021

Challenge awards announcement and prize winner presentations: Jun 20, 2021