PARC AI League Challenge: Mining Innovation Challenge (MIC)
Overview
The AI League Challenge for PARC 2025 focuses on Innovation in Mining, challenging teams to develop AI-driven solutions for the mining industry. The competition provides a platform for young innovators to explore AI applications in mining while gaining valuable research, collaboration, and presentation experience.
The competition consists of two phases: Qualifications Round and Finals. Below is a summary of the competition timeline and required deliverables at each stage.
Qualifications Round (Virtual)
Teams must complete six deliverables to be considered for the finals. The top 10 teams with the highest cumulative scores from all deliverables will advance.
Deliverables:
- Concept Paper – Outline the initial idea and problem statement.
- Proposal Report – Detailed explanation of the proposed AI solution.
- Proposal Presentation – Virtual presentation of the solution.
- Peer Review – Teams review and provide feedback on each other's work.
- Final Report – Comprehensive documentation of project development and findings.
- Final Presentation – Virtual presentation summarizing the project.
Finals (In-Person, July 2025 – Senegal)
The top 10 finalists will compete in person, where their AI models and research will be judged on two final deliverables.
Deliverables:
- Final Presentation – In-person presentation to the judging panel, including a PowerPoint deck, a poster, and a printed copy of the final report.
- Model Test Results – Evaluation of AI solution performance.
Awards
- 1st Place Prize: $2,000
Competition Timeline
- 3/24: Qualifying round begins; rules posted
- 4/5: Concept One-Pager & agreement due (disqualification if missed)
- 4/23: Proposal Report due (50 pts; -10 pts/day late)
- 4/26-4/27: Virtual Proposal Presentation (50 pts; 10 min max, penalties for exceeding time limit or lack of participation from all members)
- 5/7: Peer Review Feedback due (10 pts; required to receive full credit)
- 5/21: Final Report & Project Code due (100 pts; -10 pts/day late)
- 5/24-5/25: Virtual Final Presentation (50 pts; 10 min max, penalties for exceeding time limit or lack of participation from all members)
- 6/1: Top 10 finalists announced
- 6/9: Top 10 Finalists Team Registration Fee Due
- 6/29: Final Report & Project Code submission (optional; no additional scoring).
- 7/2025: Live presentations at PARC 2025 in Senegal (115 pts: 50 presentation, 15 poster, 50 model test).
Teams will be notified of any adjustments to these dates.
| Due | Activity | Scoring |
| --- | --- | --- |
| 3/24 | Qualifying Round: Competition Begins & Rules Posted | 0 pts |
| 4/5 | Concept One-Pager & Agreement Due | 0 pts |
| 4/23 | Proposal Report Due | 50 pts |
| 4/26 - 4/27 | Virtual Proposal Presentation | 50 pts |
| 5/7 | Submit Peer Review Feedback | 10 pts |
| 5/21 | Final Report and Project Code Due | 100 pts |
| 5/24 - 5/25 | Virtual Final Presentation for Qualifiers Round | 50 pts |
| 6/1 | Final Round: Top 10 Finalists Announced | 0 pts |
| 6/9 | Top 10 Finalists Team Registration Fee Due | 0 pts |
| 6/29 | Submission of Final Report and Project Code (Optional) | 0 pts |
| 7/2025 | Live Presentations at PARC 2025 Competition in Senegal | 115 pts |
Qualifying Rounds
Virtual Kickoff Event | 3/26/25
A virtual kickoff event will take place on March 26 to introduce the challenge and provide an opportunity for participating teams to ask questions. During this session, we will go over the competition details, rules, and expectations to ensure all teams are well-prepared. Attending the session is optional, but it will be recorded and made available for all teams to access at their convenience.
Concept Submission | Due: 4/5/25
Your team must select a paper from a top-tier conference or journal; each paper can only be used by one team. Use this tracker to enter your selected paper. Selection is on a first-come, first-served basis.
Your team must submit the concept one-pager by 4/5/2025 at 8 PM GMT to remain eligible for the competition. Failure to submit by the deadline will result in disqualification. Submit the concept one-pager as a Word document via email to info@parcrobotics.org and cc atapo@parcrobotics.org.
If you need ideas for machine learning fields of research, here is a non-comprehensive list of categories to explore:
- Computer Vision (CV): image classification, object detection, image-to-text, etc.
- Natural Language Processing (NLP): text summarization, question answering, etc.
- Meta-Learning: transfer learning, few-shot learning, etc.
Your Concept One-Pager must include the following:
- Title & Citation
  - Clearly state the title of the selected paper and provide a full citation, including authors, journal/conference name, and publication year.
- Summary of the Paper
  - Explain the key problem addressed in the paper.
  - Summarize the main findings, methods, and results.
- Relevance to the Competition
  - Describe how this paper relates to the competition theme of innovation in mining and its real-world applications.
- Your Team's Approach
  - Explain how your team plans to build on or apply the concepts from the paper.
  - Identify any modifications, improvements, or unique insights your team will explore.
- Potential Challenges & Next Steps
  - Highlight potential difficulties in implementing the paper's methods.
  - Outline the next steps your team will take to test and validate ideas.
Along with the concept one-pager, all teams are required to sign and submit an agreement stating that all work created and submitted for the AI League Competition is their intellectual property, and that they grant the Pan-African Robotics Competition (PARC) the right to showcase, reproduce, and distribute their work for promotional, educational, and archival purposes. Click here to view the agreement. Failure to sign and submit this agreement along with the concept one-pager by April 5, 2025, will result in disqualification from the competition.
Project Proposal Report | Due: 4/23/25 | Scoring Value: 50
The main goal of the project proposal is to build upon the existing results of the chosen model, e.g., a neural network. Submit the project proposal as a Word document via email to info@parcrobotics.org, cc atapo@parcrobotics.org and the email of your assigned peer review team (to be announced), by the due date of 4/23/25, 8 PM GMT. The proposal will be scored out of 50 points; click here to view the judging rubric and scoring scale. 10 points will be subtracted for each day that the submission is late. The proposal document should be six pages long, with each section on a separate page (11pt font, 1-inch margins). Teams must use the following sections to format their proposal:
- Section 1: Task Definition, Evaluation Protocol, and Data. Capture the intended task, dataset, and metrics, and include an associated reference.
- Section 2: Identify the learning model to be utilized, e.g., a neural network machine learning model, along with the associated references. A draft outline of how the model will be summarized in this section is required, along with a description of any figures or tables to be used. Model constraint: if a framework does not run relatively easily for your team after carefully reading the documentation on how to install, run, and retrain it, or if it is difficult to see where parameters and hyper-parameters might be changed, do not use that model.
- Section 3: Experiment Design. A table briefly sketching the research question(s), variables, and hypotheses, as described in the write-up requirements below, along with a bullet-point summary of the expected modifications/code needed to run your experiment.
- Section 4: Experimental Results and Discussion. A summary of the results that will be collected, and the tables and figures that you will use to present them. Describe how the results will be used to test your hypothesis, and what you expect to learn about your research question(s) if the hypotheses are (a) confirmed, (b) contradicted, or (c) not clearly confirmed or contradicted. We want to avoid outcome (c) - thinking about the possibility often helps improve experiment designs.
- Section 5: References. At least 1 page, with references for Sections 1, 2, and optionally Section 3.
- Section 6: Viability Test. This is intended to confirm that you will be able to work with your intended model. Provide output or a screenshot showing that you are able to run the model as provided in the framework that you are using. Also include the time needed to run the model on the test set, and the test set size for your task. The model does not need to be fit to the whole data. Provide a second output or screenshot that clearly shows that you are able to train the model for 1 epoch over the dataset. Also include the time required to train for the one epoch, and the number of training samples. (A sketch of how such timings might be collected follows this list.)
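For teams unsure how to collect the viability-test timings, here is a minimal sketch assuming PyTorch; the stand-in dataset and two-layer model are placeholders to replace with your own task's data and framework:

```python
# Minimal viability-test sketch: times one pass over a test set and one
# training epoch. The data, model, and loss are placeholders.
import time
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data; replace with your task's dataset.
X_train, y_train = torch.randn(1000, 20), torch.randint(0, 2, (1000,))
X_test, y_test = torch.randn(200, 20), torch.randint(0, 2, (200,))
train_loader = DataLoader(TensorDataset(X_train, y_train), batch_size=32)
test_loader = DataLoader(TensorDataset(X_test, y_test), batch_size=32)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())

# 1) Time inference over the full test set, and report the test set size.
start = time.perf_counter()
model.eval()
with torch.no_grad():
    for xb, _ in test_loader:
        model(xb)
print(f"Inference over {len(X_test)} test samples: "
      f"{time.perf_counter() - start:.2f}s")

# 2) Time one training epoch, and report the number of training samples.
start = time.perf_counter()
model.train()
for xb, yb in train_loader:
    optimizer.zero_grad()
    loss_fn(model(xb), yb).backward()
    optimizer.step()
print(f"One epoch over {len(X_train)} training samples: "
      f"{time.perf_counter() - start:.2f}s")
```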
Virtual Proposal Presentation | Scheduled for 4/26 – 4/27/25 | Scoring Value: 50
The proposal presentation will take place online on 4/26 or 4/27; teams may select their preferred time slot on a first-come, first-served basis. Each team is allotted 15 minutes: a maximum of 10 minutes to present a summary of their project work to date (this will be timed and may not exceed 10 minutes) and 5 minutes to answer questions from the judges. All team members should speak for at least 2 minutes. The presentation will be scored out of 50 points; click here to view the judging rubric and scoring scale. 2 points will be subtracted for every minute the team speaks beyond 10 minutes, and an additional 2 points for each team member who does not present for the minimum of 2 minutes.
Talking elements:
- Learning task and research question(s)
- Learning model
- Experiment design (including dataset)
- Preliminary results, how they relate to the research question(s)
Peer Review Process for Proposal Feedback | Due: 5/7/2025 | Scoring Value: 10 points
As part of the competition, each team will be randomly paired with another team for a peer review of their proposal. This process ensures that teams receive constructive feedback before submission. Each team is expected to provide a one-page peer review of their paired team’s proposal. The review should be constructive and focused on helping the team improve their work. Teams receive 10 points for sending the peer review to their paired team.
Your review should include the following sections:
- What Worked Well – Identify strengths in the proposal. Highlight well-explained sections, clear research questions, strong methodology, or effective use of figures and tables.
- Areas for Improvement – Provide constructive feedback on sections that need more clarity, additional details, or better organization. Suggest ways to improve explanations, experiment design, or presentation.
- Overall Suggestions – Give high-level recommendations on how the team can refine their proposal before final submission. This may include improving formatting, strengthening arguments, or adding missing references.
Be professional and thoughtful in your feedback. The goal is to help your peers strengthen their work while also improving your own analytical and review skills.
Peer Review Requirements & Deadlines:
- Proposal Submission for Review – Each team must submit their proposal to their paired team by April 23, 2025.
  - Failure to submit by this deadline will result in a 10-point deduction.
- Providing Feedback – Each team must review their paired team's proposal and provide constructive feedback by May 7, 2025 (8 PM GMT).
  - Failure to provide feedback by this deadline will result in a 10-point deduction.
Penalty & Automatic Points:
- If a team does not receive a proposal from their paired team, they will not be penalized for failing to provide feedback.
- If a team does not receive feedback on their proposal by May 7 (8 PM GMT), they will automatically receive 10 points to compensate for the inconvenience.
This peer review process is essential for improving the quality of proposals and ensuring teams refine their work before final submission.
Final Report and Project Code | Due: 5/21/25 | Scoring Value: 100
The report and code are expected to be prepared by the team as a whole, i.e., all participants should contribute to both the code and the final report. Note that for Part 1: Final Report Writeup, the technical depth, analysis, and clarity of the writing (including the design, placement, and formatting of figures and graphics) will be roughly equally weighted factors in the assigned score. For Part 2: Final Code/Implementation, the thoughtful use of existing operations built into the framework, code organization, and style will be factors in scoring.
Part 1: Final Report Writeup
The final project report, due 5/21/25 at 8 PM GMT, will be scored out of 50 points; click here to view the judging rubric and scoring scale. 10 points will be subtracted for each day that the submission is late. Submit the final report writeup as a Word document via email to info@parcrobotics.org and cc atapo@parcrobotics.org. Below are the required sections for the writeup, along with notes on the requirements for each section. The writeup must be 8-9 pages in length (11pt font, 1-inch margins), including all content such as figures, tables, graphics, and the single page of references. The following page lengths are encouraged, but may vary at the team's discretion.
1. Task Definition (Project Description), Evaluation Protocol, and Data (1 page)
Include one or two figures illustrating the task and how evaluation is performed. Include a reference for a paper or book defining the task and dataset, preferably from those who created the dataset in the form you are using. Cite this paper in your discussion, summarizing any other pertinent details of interest related to the task definition.
2. Neural Network / Machine Learning Model (2 pages)
- Neural Network Learning Model Summary
  - Remember to include the loss metric used in training the model.
- Focus on defining the model clearly, explaining its key parts.
- Use figures where they will aid understanding. Figures created by the team are preferred; where figures, tables, etc. are taken from other documents, they must be explicitly cited so this is clear.
- Focus your presentation on the parts of the algorithm that you will modify, to help motivate and provide context for your experiment.
- 1-3 reference(s) defining the model. Cite and summarize these in your discussion.
3. Experiment (2 pages)
- The research question(s) that your experiment addresses (but does not necessarily answer). Put another way: what do you hope to learn from the experiment?
- Design. Explicitly identify (organization in subsections and/or tables is fine):
  - Hypothesis: a falsifiable statement about the expected outcome of the experiment based on your understanding of the learning model, clearly motivated by your research question(s). This should be closely tied to the pertinent mathematical, algorithmic, and data/storage properties of the model associated with your research question(s).
  - Independent variables (Experimental Settings) that you will manipulate, e.g., hyper-parameters, model form, other learning parameters, etc.
  - Control variables (Biases and Modeling Assumptions) that will be held constant, but might alter the experiment outcome if they were not.
  - Dependent variables (Results Analysis) for observations made during and after systems are trained, e.g., performance metrics, learning curves, convergence intervals in number of epochs, etc.
- Methodology. Identify the specific implementation/code base used, any required data processing, and a summary of the modifications/code required for your available implementation to create and run the different conditions of your experiment.
- Requirements:
  - You must make use of a baseline that you will compare your modifications ('conditions') against. This can include modifications to the original model, a different dataset, or different experimental settings. The choice of baseline must be motivated by your research question(s) and the model(s) involved.
  - Your research question(s) can be simple, but must be focused on building understanding of a learning model. "Will A perform better than B?" asks for a single observation in isolation; it does not test expected behavior based on a formal understanding ('model'), and so is not a scientific research question.
  - The experiment should contain at least 3 conditions for one variable (not including grid search or other methods to tune hyper-parameters for each condition), to keep your effort focused and manageable in the available time frame. The conditions should be defined by changing one variable, e.g., network architecture, embedding size, different activation functions, etc. (A sketch of one way to structure such conditions follows this list.)
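As an illustration only (not a required structure), the sketch below runs three conditions that vary a single independent variable (hidden-layer width, a hypothetical choice) while the data, random seed, optimizer, and learning rate are held constant as control variables, with the final training loss recorded as the dependent variable; all names and values are placeholders:

```python
# Illustrative 3-condition experiment over one independent variable
# (hidden-layer width). All values are placeholders, not prescribed.
import torch
import torch.nn as nn

torch.manual_seed(0)
X, y = torch.randn(500, 10), torch.randint(0, 2, (500,))  # stand-in data

def run_condition(hidden_size, epochs=20):
    """Train one condition and return the final training loss."""
    torch.manual_seed(0)  # control variable: identical init randomness
    model = nn.Sequential(nn.Linear(10, hidden_size), nn.ReLU(),
                          nn.Linear(hidden_size, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # held constant
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()
    return loss.item()  # dependent variable

# Three conditions for the single variable; the first acts as the baseline.
for hidden_size in (16, 64, 256):
    print(f"hidden_size={hidden_size}: "
          f"final loss {run_condition(hidden_size):.4f}")
```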
4. Experimental Results and Discussion (2-3 pages)
Your analysis should read like a clear narrative - roughly, a well-guided tour of the results, whether they confirm or contradict your hypothesis, and what this tells you about your research question(s).
- Numeric results from your experiments in tables and/or figures. Include specific metric values wherever possible, e.g., at the top of bars in bar graphs (a small labeling sketch follows this list).
- Visualizations of results where helpful, e.g., learning curves, tables of metrics, etc.
- A discussion of whether these results support or contradict your hypothesis, how this informs your understanding of the original research question(s), and possible next steps.
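Where bar graphs are used, metric values can be printed at the top of each bar as suggested above. Here is a small matplotlib sketch with placeholder condition names and accuracy values:

```python
# Label each bar with its metric value; all numbers here are placeholders.
import matplotlib.pyplot as plt

conditions = ["baseline", "condition A", "condition B"]
accuracy = [0.71, 0.78, 0.75]  # placeholder metric values

fig, ax = plt.subplots()
bars = ax.bar(conditions, accuracy)
ax.bar_label(bars, fmt="%.2f")  # metric value at the top of each bar
ax.set_ylabel("Test accuracy")
ax.set_ylim(0, 1)
plt.savefig("results_bar.png")
```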
5. References (1 page)
Part 2: Final Code/Implementation
The final code/implementation, also due 5/21/25 at 8 PM GMT, will be scored out of 50 points; click here to view the judging rubric and scoring scale. Submit the final code/implementation via email to info@parcrobotics.org and cc atapo@parcrobotics.org. 10 points will be subtracted for each day that the submission is late. A .zip file containing your code, along with a README explaining how to install and run your system on a Linux/Mac/Windows machine, is required. If your code requires a GPU, make sure to include this requirement in your README. Participants are strongly encouraged to start from an existing framework. A (highly) partial list of possible frameworks includes:
- Tensorflow
- PyTorch
- Etc.

In choosing their research question(s)/topic(s), teams are strongly encouraged to download and play with a framework or two that look interesting and try some of the provided examples, as in the brief sketch below.
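A quick smoke test of this kind might look like the following (PyTorch shown purely as an example; any framework's provided examples serve the same purpose):

```python
# Verify the framework installs and runs before committing to it.
import torch

print(torch.__version__)           # confirm the installation
x = torch.randn(3, 3)
print(x @ x.T)                     # a basic tensor operation runs
print(torch.cuda.is_available())   # note GPU availability for your README
```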
Virtual Final Presentation | Scheduled for 5/24 – 5/25/25 | Scoring Value: 50
The final presentation will take place online on 5/24 or 5/25; teams may select their preferred time slot on a first-come, first-served basis. Each team is allotted 15 minutes: a maximum of 10 minutes to present a summary of their final work (this will be timed and may not exceed 10 minutes) and 5 minutes to answer questions from the judges. All team members should speak for at least 2 minutes.
The presentation will be scored out of 50 points; click here to view the judging rubric and scoring scale. 2 points will be subtracted for every minute the team speaks beyond 10 minutes, and an additional 2 points for each team member who does not present for the minimum of 2 minutes.
Talking elements:
- Learning task and research question(s)
- Learning model
- Experiment design (including dataset)
- Preliminary results, how they relate to the research question(s)
Top 10 Finalists
Top 10 Finalists Team Registration Fee | Due: 6/9/25
The top 10 finalists competing in Senegal must submit their $250 USD team registration fee to PARC by June 9, 2025, to secure their spot. This is a one-time payment covering the entire team’s registration. In return, PARC will provide each team with lodging, daily meals (breakfast, lunch, and dinner), transportation to and from the airport, and transportation between the dormitory and competition arena.
Teams are responsible for covering any additional expenses, including passport and visa fees, travel costs to Senegal, and any optional materials such as custom team t-shirts or a national flag to represent their home country at PARC.
Submission of Final Report & Code | Due: 6/29/25
Finalists have the option to submit the same final report and code/implementation from the qualifiers, or to submit an improved version. While updates and refinements are encouraged, this submission will not be scored, and no additional points will be awarded. There is no penalty for not submitting a new version; in that case, the team's qualifiers submission will be used by default. The team's final score will be calculated by adding their qualifying-round score to their final-round score from the live presentations in July.
Final Live Presentations | Live at PARC 2025 Competition | Scoring Value: 115
The final presentation will be done in person during PARC 2025. Each team will have a designated table and area to display their work. Teams are encouraged to bring a poster that visually summarizes the four key talking points from their presentation, as well as a printed copy of their final report. These deliverables are intended to help showcase the team's project to judges and the audience. The posters will be scored out of 15 points; click here to view the judging rubric and scoring scale. For the live presentations, each team is allotted 7 minutes for their PowerPoint presentation (this will be timed and may not exceed 7 minutes) and 5 minutes to answer questions from the judges. Each team member is required to speak for at least 1 minute. As with previous presentations, your final presentation should include:
- Learning task and research question(s)
- Learning model(s)
- Experiment design (including dataset and experiment settings modifications from the proposal)
- Results, how they relate to the research question
The presentation will be scored out of 50 points; click here to view the judging rubric and scoring scale. 2 points will be subtracted for every minute the team speaks beyond 7 minutes, and an additional 2 points for each team member who does not present for the minimum of 1 minute.
During the competition, each team’s model will be tested on unseen data to evaluate its real-world performance. The model will be scored out of 50 points, click here to view the judging rubric and scoring scale. While teams have based their work on a specific dataset, we will provide a new dataset that they have not encountered before to assess how well their model generalizes. The unseen data will be designed based on each team’s final report submitted during the qualifiers. Depending on the nature of their project, we may use open-source data or generate synthetic data to ensure it is relevant to their specific task. The audience and judges will observe the live testing to see the model’s capabilities in action. In their reports and presentations, teams will have already defined their evaluation metrics, such as accuracy or F1 score, and explained what they are aiming to achieve. This final test will determine how well their model meets those expectations when faced with new data.
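For illustration only, the snippet below computes accuracy and F1 (the kinds of metrics teams will have defined in their reports) using scikit-learn; the labels and predictions are placeholders standing in for the unseen data and a model's outputs:

```python
# Evaluate a model's predictions on unseen data; y_true and y_pred are
# placeholders for the new dataset's labels and the model's outputs.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 1, 0, 1, 0, 1, 1]  # labels of the unseen test data
y_pred = [0, 1, 0, 0, 1, 0, 1, 1]  # model predictions on that data

print(f"Accuracy: {accuracy_score(y_true, y_pred):.2f}")
print(f"F1 score: {f1_score(y_true, y_pred):.2f}")
```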
Rules and Regulations
We have established the following rules and regulations for the competition to ensure fair play and appropriate conduct among participants.
Code of Conduct
We expect all participants to act professionally and ethically, and to comply with the following code of conduct:
- Respect the rights, dignity, and privacy of other participants and individuals involved in the competition.
- Do not engage in any form of harassment, discrimination, or intimidation.
- Do not engage in any illegal or unethical behavior.
- Do not use the competition and its platforms to promote or advertise any products or services.
Disqualification
Participants may be disqualified from the competition for the following reasons:
- Violation of the rules and regulations of the competition.
- Providing false or misleading information.
- Engaging in any unethical or illegal behavior.
- Failure to comply with submission guidelines or deadlines.
- Failure to follow any other instructions or requirements provided by the competition organizers.
In case of disqualification, the participant will forfeit any prizes or rewards they may have been eligible to receive.
Frequently Asked Questions
1. Are teams required to use existing or open-source data, or can they collect their own?
Teams are free to use either existing/open-source data or collect their own for their project. However, for the unseen-data portion of the final competition at PARC 2025, we will provide open-source or synthetic data based on each team's qualifiers submission.
2. Are there any restrictions on which machine learning models teams can use?
No, teams are free to use any machine learning model of their choice.
3. Does the code need to fully work, or is a prototype sufficient?
Yes, the code is expected to work. We will test each team’s model with unseen data to evaluate its effectiveness.
4. Do all participants compete at PARC 2025?
No, only the top 10 teams from the qualifiers round will advance to compete at PARC 2025.
5. How much is the registration fee, and what does it cover?
There is no fee to participate in the Qualifiers Round, but there is a fee for the Top 10 Finalist teams that compete in Senegal. The $250 USD fee is a one-time payment per team that covers lodging, daily meals (breakfast, lunch, and dinner), airport transportation, and travel between the dormitory and competition arena.
6. What additional costs should teams cover?
Teams must pay for their own passport and visa fees, travel to Senegal, and any optional items like custom team t-shirts or a national flag.
7. When is the payment due, and what happens if we miss the deadline?
The fee is due by June 9, 2025. Teams that do not pay by the deadline will lose their finalist spot in the competition.
8. Who owns the intellectual property (IP) of the project?
The teams retain full ownership of their IP. However, by participating, they agree to allow PARC to showcase, reproduce, and distribute their work for promotional, educational, and archival purposes. A signed consent form confirming this must be submitted with the concept one-pager.
9. Is there a community where one can ask questions other than through email?
Yes, here is the link to the AI League’s Discord Server.
10. What do the winners receive?
The first-place team will receive a prize of $2,000 USD.
11. How many people can be on a team, and what is the age requirement?
Teams must have 2 to 6 members, and all participants must be 18 years or older to compete.
12. Where do teams indicate their choice of paper?
At this link.
13. Where are the recorded session videos located?
At this link.