Agriculture-Vision Prize Challenge

The 4th Agriculture-Vision Prize Challenge aims to encourage research into novel and effective algorithms for agricultural pattern recognition from aerial images. Submissions will be evaluated and ranked by model performance.


Introduction to the Datasets:

For this year's CVPR Challenge, we are introducing a combination of two distinct datasets:


1. Extended Agriculture-Vision Dataset: This is a collection of 105 GB of raw, unprocessed agricultural images. These images are unique in that they span four spectral channels: red, green, blue, and near-infrared (NIR). Crucially, this dataset is unlabeled, providing rich ground for semi-supervised learning experimentation (see the loading sketch below).


2. Original Agriculture-Vision Dataset: This dataset is a compilation of the processed and labeled images used in previous CVPR challenges. It serves as a foundational dataset with structured annotations.

Download labeled data from AWS 
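
The exact storage format of the extended dataset is not specified here; the original Agriculture-Vision release ships RGB and NIR as separate image files, so a minimal loading sketch under that assumption (the paths and function name are hypothetical) might look like:

    import numpy as np
    from PIL import Image

    def load_rgbn(rgb_path, nir_path):
        # Stack RGB and NIR into one (H, W, 4) float array scaled to [0, 1].
        # Assumes separate RGB and NIR files, as in the original release;
        # verify the extended dataset's actual layout before relying on this.
        rgb = np.asarray(Image.open(rgb_path).convert("RGB"), dtype=np.float32) / 255.0
        nir = np.asarray(Image.open(nir_path).convert("L"), dtype=np.float32) / 255.0
        return np.concatenate([rgb, nir[..., None]], axis=-1)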


Challenge Overview:

The core of this challenge is to strategically use both the extended, unlabeled Agriculture-Vision dataset and the labeled original Agriculture-Vision dataset. The aim is to enhance model performance by effectively applying semi-supervised learning techniques: participants are challenged to combine the breadth and variety of the unlabeled dataset with the structured annotations of the labeled dataset to achieve superior results.
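
One common semi-supervised recipe that fits this setup is pseudo-labeling: supervise on the labeled set while treating the model's confident predictions on unlabeled tiles as extra targets. A minimal PyTorch sketch; the names (model, labeled_batch, unlabeled_batch) and the threshold and weight values are illustrative assumptions, not part of the challenge:

    import torch
    import torch.nn.functional as F

    def ssl_step(model, optimizer, labeled_batch, unlabeled_batch, tau=0.9, lam=0.5):
        # Supervised loss on labeled images and masks.
        x_l, y_l = labeled_batch                   # (N,4,H,W) images, (N,H,W) masks
        loss_sup = F.cross_entropy(model(x_l), y_l)

        # Pseudo-labels from the model's own confident predictions.
        x_u = unlabeled_batch
        with torch.no_grad():
            probs = model(x_u).softmax(dim=1)
            conf, pseudo = probs.max(dim=1)        # per-pixel confidence and class

        # Unsupervised loss, masked to pixels above the confidence threshold.
        loss_unsup = F.cross_entropy(model(x_u), pseudo, reduction="none")
        mask = (conf >= tau).float()
        loss_unsup = (loss_unsup * mask).sum() / mask.sum().clamp(min=1.0)

        loss = loss_sup + lam * loss_unsup
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Stronger variants pair weak and strong augmentations (as in FixMatch) or add consistency regularization; the sketch only shows the skeleton.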


Evaluation Criteria:

The performance of the developed models will be evaluated on the validation and held-out test sets from the previous Agriculture-Vision datasets.



We anticipate that the incorporation of the additional unlabeled dataset will significantly enhance the outcomes, leading to more robust and efficient models. This challenge not only tests the limits of semi-supervised learning in agricultural vision but also paves the way for groundbreaking applications in sustainable agriculture practices.
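
For orientation, previous Agriculture-Vision challenges ranked entries by a modified mean Intersection-over-Union (mIoU) that accounts for pixels carrying multiple labels; consult the CodaLab page for the exact definition. A minimal sketch of the standard, unmodified mIoU:

    import numpy as np

    def mean_iou(pred, target, num_classes):
        # pred, target: integer arrays of the same shape, values in [0, num_classes).
        # The official metric is a modified mIoU; this is the plain version.
        ious = []
        for c in range(num_classes):
            inter = np.logical_and(pred == c, target == c).sum()
            union = np.logical_or(pred == c, target == c).sum()
            if union > 0:
                ious.append(inter / union)
        return float(np.mean(ious)) if ious else 0.0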


The challenge will be tracked on a CodaLab page, the platform used in past Agriculture-Vision workshops. Participants will be provided with the training and validation sets and evaluated on a held-out test set.

Challenge prizes: $5,000 total

Results Submission

We will be hosting our challenge on CodaLab. The competition CodaLab pages can be found here. Each participating team is required to register for the challenge; register your team on the competition page with your contact email.

*Make sure your CodaLab account email matches one of the team members' emails. Each team can only register once.

 

The prize award will be granted to the top 3 teams on the challenge-track leaderboard that provide a valid final submission.

Each team may make up to 10 submissions per day per challenge and 80 submissions per challenge in total.

To be considered valid for the prize award, a final submission must satisfy the following requirements:

§   The metrics derived from the "results/" folder in the final submission should match the metrics on the leaderboard.

§   Predictions in "results/" in the final submission must be reproducible with the resources in "code/" and "models/".

§   The training process of the method must be reproducible, and the retrained model should achieve similar performance.

§   The test set is off-limits.

§  Results generated from models trained on any other datasets will be excluded from prize evaluation. Using publicly available pre-trained weights (e.g., ImageNet, COCO) is acceptable.
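
Before uploading, the folder layout referenced above can be sanity-checked locally. A minimal sketch; the helper name is hypothetical, and only the three folder names come from the requirements:

    from pathlib import Path

    def check_submission(root):
        # Verify that the folders named in the submission requirements exist.
        missing = [d for d in ("results", "code", "models")
                   if not (Path(root) / d).is_dir()]
        if missing:
            raise FileNotFoundError(f"submission is missing folders: {missing}")
        print("submission layout looks OK")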

Important Dates 

Challenge related:

Challenge opens to the public: Feb 1, 2024 (11:59PM PDT)

 

Challenge paper submission deadline [proceedings]*: March 15, 2024 (11:59PM PDT)

* If submitting to workshop proceedings.

 

Challenge results submission deadline: June 3, 2024 (11:59PM PDT)

 

Challenge report submission deadline [non-proceedings]+: June 10, 2024 (11:59PM PDT)

+ If submitting for prize winnings only.

 

Challenge awards announcements: June 19/20, 2024


NOTE: the final results submission will occur after the paper submission deadline. Teams wishing to submit challenge papers to the workshop proceedings will need to submit their results and papers by the paper submission deadline in March. All teams may continue to improve their models through the final results submission deadline in June; prize awards will be based on the final results submitted at that time. Teams placing in the top 3 who wish to be eligible for the prize awards must then submit a final report by the report submission deadline.