Automated Programming Competition Platform with Leaderboard
Overview
Value: Automates student programming competitions with GitLab CI/CD and a transparent leaderboard on GitLab Pages to increase engagement and provide immediate feedback.
Problem: Running fair, scalable competitions with consistent, reproducible evaluation is time‑consuming and error‑prone when done manually. Students lack immediate feedback and motivation, and instructors spend significant time on grading, coordination, and sharing results.
Solution: Each student submits to a private repository. CI/CD executes tests, scores solutions, and writes a benchmark file (JSON/CSV). Artifacts are retained and a static leaderboard is built and published via GitLab Pages, giving transparent rankings while protecting privacy and enforcing deadlines.
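As a purely illustrative example of the scoring step, the sketch below shows a Python script a CI job might call to run the test harness and write a benchmark file. The file name benchmark.json and fields such as student_id, score, and runtime_s are assumptions rather than a fixed schema, and the test counts are placeholders.

```python
# write_benchmark.py -- illustrative sketch of a CI scoring step (schema is an assumption).
import json
import os
import time


def run_tests():
    """Placeholder for the real test harness; returns (passed, total, elapsed seconds)."""
    start = time.time()
    passed, total = 17, 20  # stand-in values; the real harness reports these counts
    return passed, total, time.time() - start


def main():
    passed, total, elapsed = run_tests()
    benchmark = {
        # CI_PROJECT_NAME and CI_COMMIT_SHORT_SHA are predefined GitLab CI/CD variables.
        "student_id": os.environ.get("CI_PROJECT_NAME", "local"),
        "commit": os.environ.get("CI_COMMIT_SHORT_SHA", "none"),
        "tests_passed": passed,
        "tests_total": total,
        "score": round(100.0 * passed / total, 2),
        "runtime_s": round(elapsed, 3),
    }
    # The CI job declares benchmark.json as an artifact so later jobs can collect it.
    with open("benchmark.json", "w") as fh:
        json.dump(benchmark, fh, indent=2)


if __name__ == "__main__":
    main()
```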
Who Benefits
Primary
- Students (BSc/MSc)
  - Immediate feedback on submissions
  - Transparent ranking and progress over time
  - Motivation through competition mechanics
  - Practice with Git and CI workflows
Secondary
- Lecturers, Supervisors, System Administrators
  - Automated, reproducible grading at scale
  - Central oversight of all repositories
  - Audit trail via artifacts and pipelines
  - Easier communication via issues and milestones
When to Use
- Courses or labs with programming assignments and objective tests
- Hackathons or practice competitions with 50–200 participants
- Tasks with deterministic, automated evaluation criteria
When Not to Use
- Assignments requiring subjective or manual evaluation
- Tasks without reliable automated tests or datasets
- Situations that require external collaboration during the competition
Process
- Prepare task template, tests, and scoring rules
- Provision private student repositories from the template
- Students implement solutions and push to GitLab
- CI/CD runs tests, generates benchmark JSON/CSV, uploads artifacts
- Pipeline updates a static leaderboard on GitLab Pages (see the sketch after this list)
- Close the competition, publish final results, archive repositories
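The leaderboard step referenced above could be handled by a companion job that collects the benchmark artifacts and renders a static page into public/, the directory GitLab Pages serves. The sketch below assumes the illustrative schema from the Overview and a benchmarks/ directory of collected JSON files; sorting by score with runtime as tie-breaker is one possible ranking rule, not a prescribed one.

```python
# build_leaderboard.py -- sketch: merge collected benchmark files into a static page under public/.
import json
import pathlib


def load_benchmarks(directory="benchmarks"):
    """Read all collected benchmark JSON files (one per submission)."""
    return [json.loads(p.read_text()) for p in pathlib.Path(directory).glob("*.json")]


def build_html(rows):
    # Rank by score (descending), then by runtime as a tie-breaker.
    rows = sorted(rows, key=lambda r: (-r["score"], r["runtime_s"]))
    lines = ["<table>", "<tr><th>Rank</th><th>ID</th><th>Score</th><th>Runtime (s)</th></tr>"]
    for rank, r in enumerate(rows, start=1):
        lines.append(
            f"<tr><td>{rank}</td><td>{r['student_id']}</td>"
            f"<td>{r['score']}</td><td>{r['runtime_s']}</td></tr>"
        )
    lines.append("</table>")
    return "\n".join(lines)


if __name__ == "__main__":
    out = pathlib.Path("public")  # GitLab Pages serves the "public" directory
    out.mkdir(exist_ok=True)
    (out / "index.html").write_text(build_html(load_benchmarks()))
```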
Requirements
People
- Project lead / Lecturer
- Teaching assistants / Supervisors
- System administrator (GitLab/Runner/Pages)
- Students
Data Inputs
- Test datasets and fixtures
- Evaluation scripts and scoring rules
- Course roster for repository provisioning
- Metadata for leaderboard (e.g., tie‑breakers)
Tools & Systems
- Git and GitLab
- GitLab CI/CD with Docker‑based Runner (per‑job isolation)
- Python and/or Java toolchains (task dependent)
- JSON/CSV parsers and leaderboard generation scripts
- GitLab Pages for static leaderboard
- Email or webhook notifications on submission and grading
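For the notification item above, one lightweight option is to POST a grading summary to a webhook from the CI job. The sketch below uses only the Python standard library; the NOTIFY_WEBHOOK_URL variable and the payload fields are assumptions.

```python
# notify.py -- sketch: POST a grading summary to a webhook (URL and payload are illustrative).
import json
import os
import urllib.request


def notify(webhook_url, payload):
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status


if __name__ == "__main__":
    url = os.environ["NOTIFY_WEBHOOK_URL"]  # assumed CI/CD variable
    notify(url, {"project": os.environ.get("CI_PROJECT_NAME"), "status": "graded"})
```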
Policies & Compliance
- Data protection and anonymization for published results
- Examination policies and deadline enforcement
- Repository retention and archival (1 year post‑competition)
- Protected branches to prevent late commits
Risks & Mitigations
- Cheating or collusion between participants
  - Private repositories per student
  - Plagiarism checks and randomized tests
- Unstable or flaky tests causing unfair results
  - Deterministic test harness and seed control
  - Clear documentation of scoring and environment
- Runner resource exhaustion and long queues
  - Per‑job timeouts and resource limits
  - Autoscaling or dedicated runners for the course
- Data privacy issues on the leaderboard
  - Anonymize with pseudonymous IDs
  - Publish only aggregated metrics
Getting Started
Prerequisites: GitLab group access, a Docker‑based GitLab Runner, and a GitLab Pages project or domain. Note that CI/CD is not yet set up; the project is currently in the planning phase.
- Create a template repository with the task description, tests, and a .gitlab-ci.yml that runs the tests, outputs a benchmark JSON/CSV, and stores it as an artifact
- Provision private repositories for each student from the template and grant instructor/TA access (see the API sketch after this list)
- Configure a job or companion project to build and publish a static leaderboard on GitLab Pages from the latest artifact
- Announce deadlines via milestones; use GitLab issues to track questions and incidents
- After the competition, publish final results, collect feedback, and archive repositories for one year
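For the provisioning step referenced above, a hedged sketch using GitLab's REST API (POST /projects and POST /projects/:id/members) via the requests library is shown below. The instance URL, group ID, roster format, and access level are assumptions, and populating each new repository with the template contents (for example by pushing the template) is omitted for brevity.

```python
# provision_repos.py -- sketch: create one private project per student and grant access.
# Endpoints are the documented GitLab REST API; group ID and roster format are illustrative.
import csv
import os

import requests

GITLAB = "https://gitlab.example.com/api/v4"  # assumed instance URL
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}
GROUP_ID = 42        # assumed course group
DEVELOPER = 30       # GitLab access level "Developer"


def create_project(name):
    r = requests.post(
        f"{GITLAB}/projects",
        headers=HEADERS,
        json={"name": name, "namespace_id": GROUP_ID, "visibility": "private"},
    )
    r.raise_for_status()
    return r.json()["id"]


def add_member(project_id, user_id):
    requests.post(
        f"{GITLAB}/projects/{project_id}/members",
        headers=HEADERS,
        json={"user_id": user_id, "access_level": DEVELOPER},
    ).raise_for_status()


if __name__ == "__main__":
    with open("roster.csv") as fh:  # assumed columns: username, user_id
        for row in csv.DictReader(fh):
            pid = create_project(f"competition-{row['username']}")
            add_member(pid, int(row["user_id"]))
```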
FAQ
Can we support multiple languages (e.g., Python and Java)?
Yes. Use language‑specific Docker images and per‑task CI jobs to run tests in isolated environments.
How do we enforce deadlines and prevent late commits?
Use protected branches, set the deadline as a milestone due date, and configure the pipeline to reject commits made after the deadline.
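One way to implement this check, sketched below under assumptions: compare GitLab's predefined CI_COMMIT_TIMESTAMP variable against a deadline stored in a project-level CI/CD variable (the name SUBMISSION_DEADLINE is illustrative) and fail the job for late commits.

```python
# check_deadline.py -- sketch: fail the pipeline for commits made after the deadline.
import os
import sys
from datetime import datetime

# CI_COMMIT_TIMESTAMP is a predefined GitLab variable in ISO 8601 format.
# SUBMISSION_DEADLINE is an assumed CI/CD variable and should include a UTC offset,
# e.g. "2024-06-30T23:59:59+02:00", so that both datetimes are timezone-aware.
commit_ts = os.environ["CI_COMMIT_TIMESTAMP"].replace("Z", "+00:00")
commit_time = datetime.fromisoformat(commit_ts)
deadline = datetime.fromisoformat(os.environ["SUBMISSION_DEADLINE"])

if commit_time > deadline:
    print(f"Commit at {commit_time} is after the deadline {deadline}; rejecting submission.")
    sys.exit(1)
print("Submission accepted: commit is within the deadline.")
```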
How can we anonymize the leaderboard?
Publish pseudonymous identifiers and avoid storing personally identifiable information in artifacts.
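One possible approach, sketched below: derive a stable pseudonym by keyed hashing (HMAC) of the student identifier with a secret kept as a masked CI/CD variable; the variable name LEADERBOARD_SALT is an assumption.

```python
# pseudonymize.py -- sketch: derive a stable pseudonymous ID for the public leaderboard.
import hashlib
import hmac
import os


def pseudonym(student_id: str, secret: bytes) -> str:
    """Same student + same secret -> same pseudonym; the real ID never appears in artifacts."""
    digest = hmac.new(secret, student_id.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"anon-{digest[:8]}"


if __name__ == "__main__":
    secret = os.environ["LEADERBOARD_SALT"].encode("utf-8")  # assumed masked CI/CD variable
    print(pseudonym("student42", secret))
```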
What if we need to integrate Moodle?
Optionally add webhooks or export grades from the benchmark to CSV for import into Moodle.
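A minimal sketch of such an export is shown below; the column names are illustrative and would need to be mapped to whatever layout the Moodle grade import expects for the course.

```python
# export_grades.py -- sketch: flatten benchmark JSON files into a CSV for grade import.
import csv
import json
import pathlib


def export(benchmark_dir="benchmarks", out_file="grades.csv"):
    rows = [json.loads(p.read_text()) for p in pathlib.Path(benchmark_dir).glob("*.json")]
    with open(out_file, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["student_id", "score"])
        writer.writeheader()
        for row in rows:
            writer.writerow({"student_id": row["student_id"], "score": row["score"]})


if __name__ == "__main__":
    export()
```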
Glossary
- Benchmark file: structured JSON/CSV written by CI with per‑submission metrics used to compute rankings.
- Artifact: files produced by a CI job and retained by GitLab for download or further jobs.
- Leaderboard: static site on GitLab Pages showing current rankings and metrics.