Agriculture-Vision Prize Challenge
The 1st Agriculture-Vision Prize Challenge aims to encourage research into novel and effective algorithms for agricultural pattern recognition from aerial images. Submissions will be evaluated and ranked by model performance. The top three submissions will receive prize rewards and an opportunity to present at our workshop.
Our workshop will be held on 06/14/2020.
Please check our GitHub repo for more details regarding the challenge dataset, methods, and results.
1st Place: USD $5000
2nd Place: USD $3000
3rd Place: USD $2000
Other perks (possible internship opportunity)
We use mean Intersection-over-Union (mIoU) as our main quantitative evaluation metric, one of the most commonly used measures for semantic segmentation datasets. The mIoU is computed as:

mIoU = (1/c) * Σ_c ( |P_c ∩ T_c| / |P_c ∪ T_c| )

where c is the number of annotation types (c = 7 in our dataset: 6 patterns plus background), and P_c and T_c are the predicted mask and the ground-truth mask of class c, respectively.
Since our annotations may overlap, we modify the canonical mIoU metric to accommodate this property. For pixels with multiple labels, a prediction of any one of those labels is counted as a correct classification for every such label, while a prediction matching none of the ground-truth labels is counted as an incorrect classification for all of them.
Concretely, we construct the c × c confusion matrix M with the following rules.
For each pixel with predicted class x and ground-truth label set Y:
If x ∈ Y, then M_y,y = M_y,y + 1 for each y in Y
Otherwise, M_x,y = M_x,y + 1 for each y in Y
The mIoU is finally computed per class as true_positive / (prediction + target - true_positive), where true_positive is the diagonal entry M_c,c, prediction is the sum of row c, and target is the sum of column c; the per-class IoUs are then averaged across all classes.
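As an illustration, the rules above can be implemented with a straightforward per-pixel loop. This is a sketch only, assuming the ground truth is given as one boolean mask per class; the names `update_confusion` and `miou` are ours, not part of the official evaluation code:

```python
import numpy as np

NUM_CLASSES = 7  # background + 6 annotation patterns in this dataset

def update_confusion(conf, pred, labels):
    """Accumulate one image into the c x c confusion matrix.

    conf:   (c, c) int array, rows = prediction, columns = target.
    pred:   (H, W) int array of predicted classes.
    labels: (c, H, W) bool array; labels[k] is the ground-truth mask of
            class k. Masks may overlap, so a pixel can carry several labels.
    """
    for i in range(pred.shape[0]):
        for j in range(pred.shape[1]):
            x = pred[i, j]
            ys = np.nonzero(labels[:, i, j])[0]  # label set Y of this pixel
            if x in ys:
                # Counted as correct for every label the pixel carries.
                for y in ys:
                    conf[y, y] += 1
            else:
                # Counted as incorrect for every label the pixel carries.
                for y in ys:
                    conf[x, y] += 1
    return conf

def miou(conf):
    """Mean IoU: tp / (prediction + target - tp), averaged over classes."""
    tp = np.diag(conf).astype(float)
    prediction = conf.sum(axis=1)  # row sums
    target = conf.sum(axis=0)      # column sums
    union = prediction + target - tp
    iou = np.where(union > 0, tp / np.maximum(union, 1), 0.0)
    return iou.mean()
```

The pixel loop is written for clarity; a vectorized version would be preferable for full-size images.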
Note: We have updated our result submission policy. A competition server and an official leaderboard will be hosted on Codalab. All participating teams are required to register for the challenge on Codalab, and the prize rewards will be distributed according to the public leaderboard. The submission details are described below.
We are now hosting our challenge on Codalab. The competition page can be found here. Each participating team is required to register for the challenge. To register your team, fill out the registration form here, then register on the competition page.
*Make sure your Codalab account email matches one of the member emails in the registration form. Each team can only register once.
This evaluation server will remain open to encourage further research in Agriculture-Vision. No registration is required.
All registered teams can evaluate their results on Codalab and publish them on the leaderboard. The submission file should be a compressed .zip file that contains all prediction images. All prediction images must be in PNG format, and the file names and image sizes must match the input images exactly. Each prediction image will be loaded as a 2D numpy array of integer class labels.
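For illustration, a minimal loader consistent with this format might look as follows (a sketch assuming Pillow; `load_prediction` is our name, and the official evaluation code may differ):

```python
import numpy as np
from PIL import Image

def load_prediction(path):
    """Load a prediction PNG as a 2D numpy array of integer class labels."""
    arr = np.array(Image.open(path))
    assert arr.ndim == 2, "prediction must be a single-channel image"
    assert 0 <= arr.min() and arr.max() <= 6, "only integer labels 0-6 are allowed"
    return arr
```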
In the loaded numpy array, only 0-6 integer labels are allowed, and they represent the annotations in the following way:
0 - background
1 - cloud_shadow
2 - double_plant
3 - planter_skip
4 - standing_water
5 - waterway
6 - weed_cluster
This label order will be strictly followed during evaluation.
Each team is allowed 2 submissions per day and 20 submissions in total.
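To avoid format mistakes, predictions can be packed into the .zip archive programmatically. The following is a sketch, assuming Pillow; `write_submission` is our name, and the file name used in the usage example below is made up:

```python
import io
import zipfile

import numpy as np
from PIL import Image

def write_submission(predictions, out_zip="submission.zip"):
    """Pack prediction arrays into a Codalab submission archive.

    predictions: dict mapping the exact input image file name to a
                 2D uint8 array of class labels in 0-6.
    """
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, label in predictions.items():
            assert label.ndim == 2 and label.max() <= 6
            buf = io.BytesIO()
            Image.fromarray(label.astype(np.uint8), mode="L").save(buf, format="PNG")
            zf.writestr(name, buf.getvalue())
    return out_zip
```

Usage: `write_submission({"1234_0-0-2-2.png": label_array})` writes a single-file archive whose entry name matches the corresponding input image.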
Final submission and prize reward
The Codalab leaderboard will be closed after the deadline. Top-tier teams will be invited by email to provide their final submission for the prize reward. The final submission should be a compressed .zip file that contains the following materials:
results/ (the label predictions that match the best mIoU on the leaderboard, one (field id)_(x1)-(y1)-(x2)-(y2).png file per input image)
code/ (the training and inference code for the method)
models/ (the pretrained model, if applicable, and the final model)
challenge_report.pdf (a detailed description of the method)
Note: Submission terms can be found here.
To be considered as a valid submission for the prize reward, all submissions must satisfy the following requirements:
The model must contain at most 150M parameters in total.
The mIoU derived from the "results/" folder in the final submission should match the mIoU on the leaderboard.
Predictions in "results/" in the final submission can be reproduced with the resources in "code/" and "models/".
The training process of the method must be reproducible, and the retrained model should achieve similar performance.
The test set must not be used in training.
For fairness, teams must specify in their challenge_report.pdf which public datasets were used for training or pre-training their models. Results generated from models trained on private datasets, as well as results lacking these details, will be excluded from prize evaluation. (Results using private datasets can still be included in the report.)
The prize award will be granted to the top 3 teams on the leaderboard that provide a valid final submission.
Challenge reports and papers
All participants are welcome to submit their reports and share their findings.
We plan to publish a challenge paper summarizing the methods and results of this challenge. Reports emailed to us before 04/15/2020 may be included in our challenge paper, provided the reported method is noteworthy and novel. Authors of included reports may also be invited as co-authors of the challenge paper and may edit the parts of the challenge paper they contribute to.
Alternatively, participants can submit papers related to this challenge to our workshop paper track before the workshop paper deadline 03/30/2020.