A.2: Course Project

The course project is a team-based, design-oriented assignment in which students develop, prototype, and analyse an AI-assisted proposal addressing a concrete real-world problem. In contrast to the case studies, which focus on analysing existing systems or practices, the course project emphasises making, evaluation through use, and critical reflection through implementation.

Projects may take analytical, tool-based, or design-oriented forms, but all must demonstrate a considered synthesis of responsibility and ethics, mechanisms and model behaviour, and datasets and data practices, reflecting how these dimensions interact in real-world AI applications. The emphasis is on reasoned experimentation, explicit assumptions, and reflective judgement rather than technical novelty or performance optimisation.

The project is completed in teams of 4–6 students. Tutorials provide structured opportunities for skills development, feedback, and team formation.


Project Structure & Deliverables

The course project consists of the following assessed components:


Milestones

Item | Description | Date | Individual (%) | Group (%)
--- | --- | --- | --- | ---
Project Team Formation Form | Early indication of potential project partners, as a 100-200 word statement, to support team formation and initial scoping. Submit via Moodle. | 2026.02.22 | – | –
[Mid-term Pecha Kucha Presentation] | Time-limited presentation of project framing, proposed AI-assisted approach, and preliminary considerations on data, mechanisms, and responsibility. | 2026.03.18 | – | 10
Crit-Style Peer Review (Post-Midterm) | Structured studio-style peer critique conducted during tutorial slots, focused on problem definition, methodological coherence, assumptions, and responsible AI positioning. Formative and non-graded. | 2026.03.19-24 | – | –
[Final Project Submission] | Submission of all core deliverables. Projects are presented through the screening of the vignette component. | 2026.04.22 | – | 35
Crit-Style Peer Review (Post-Final) | Reflective peer critique of completed projects (via tutorial slots), emphasising evaluative judgement, strengths, limitations, and comparative learning. Formative and non-graded. | 2026.04.23-28 | – | –
Individual Reflection | Individually assessed reflective statement on contributions, learning process, and key design decisions. | 2026.05.06 | 10 | –
Peer Assessment | Advisory peer feedback on group process and individual contributions, used to inform moderation where appropriate. | 2026.05.06 | – | –

Learning Focus

Through the course project, students are expected to demonstrate the ability to:

The course project serves as the capstone assignment for the course, integrating technical understanding, critical judgement, and professional communication into a single, coherent body of work.


Possible Directions

Methodology

CCAI 9012 encourages students to conduct their Course Project using programming, which enables systematic data collection and analysis and unlocks scalable approaches that would be cumbersome, if not intractable, to execute and control through chatbot/UI-based GenAI alone.

While programming is strongly encouraged, we recognise that skill levels vary across the student body, and we welcome projects in any format, provided they are grounded in:

Accordingly, it is also acceptable to construct a dataset manually through direct interaction with LLMs, provided you take appropriate precautions to avoid unintentionally biasing your results.

For students interested in developing programming-based solutions, a number of starter kits [link] are available. These include example code that already performs canonical ML and AI tasks. Often, simple modifications—such as swapping in a different text or image dataset—are sufficient to support exploration of new research questions.

Note: The documentation for starter kits is being actively expanded and refined.

Thinking Algorithmically

Whether you are generating data manually or programmatically, it is helpful to break down your problem and methods into modular mappings, such as:

By intelligently chaining these modules, and then running the resulting pipeline over each data point in a dataset, you can build powerful systems to investigate varied and complex questions.

For example, to study whether different LLMs provide different responses under different circumstances, you might:

  1. Define a set of prompts: text
  2. Feed them into multiple LLMs: text → text
  3. Compare and score their outputs: text → scalar (e.g., with human ratings or another model)
  4. Aggregate results over the dataset: vectors of scalars → scalars (summary statistics)
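The four steps above can be sketched as a small Python pipeline. Everything here is a placeholder: `model_a`, `model_b`, and `score` are hypothetical stand-ins for real LLM API calls and a real scoring module (human ratings or a judge model), included only to show how the text → text and text → scalar mappings chain together.

```python
# Sketch of the prompt-comparison pipeline, assuming placeholder modules.
from statistics import mean


def model_a(prompt: str) -> str:
    # Placeholder for a real LLM call (text -> text); swap in your provider's client.
    return f"Model A responds at length to: {prompt}"


def model_b(prompt: str) -> str:
    # Placeholder for a second, tersely-responding LLM.
    return f"B: {prompt}"


def score(response: str) -> float:
    # Placeholder scoring module (text -> scalar): here, just response length.
    # Replace with human ratings or an LLM-as-judge.
    return float(len(response))


def run_pipeline(prompts, models):
    """Run every prompt through every model, score the outputs,
    and aggregate per model (vector of scalars -> summary statistics)."""
    results = {}
    for name, model in models.items():
        scores = [score(model(p)) for p in prompts]
        results[name] = {"mean": mean(scores), "n": len(scores)}
    return results


prompts = ["What is AI?", "Explain bias in one sentence."]
summary = run_pipeline(prompts, {"A": model_a, "B": model_b})
```

The point is the shape, not the stubs: once each module has a clean input/output type, swapping in a real model or a better scorer changes one function, not the pipeline.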

Similarly, to study whether multimodal, image-generation–enabled LLMs exhibit professional, racial, or gender bias, you might:

  1. Design prompts describing roles or scenarios: text
  2. Generate corresponding images: text → image
  3. Analyse those images for attributes (e.g., perceived gender/race/occupation): image → text and/or image → scalar
  4. Quantify bias by aggregating these attributes: vectors (attribute scores across images) → scalars (bias metrics)
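Step 4 of this pipeline can likewise be made concrete. The sketch below assumes the image-analysis module (steps 2–3) has already produced a list of perceived-attribute labels per image; those labels are fabricated placeholders here. One simple bias metric is the total variation distance between the observed label distribution and the uniform distribution over the categories of interest.

```python
# Sketch of aggregating per-image attribute labels into a bias metric.
from collections import Counter


def bias_metric(labels, categories):
    """Total variation distance between the observed label distribution
    and the uniform distribution over `categories` (0 = perfectly balanced)."""
    counts = Counter(labels)
    n = len(labels)
    uniform = 1.0 / len(categories)
    return 0.5 * sum(abs(counts.get(c, 0) / n - uniform) for c in categories)


# Hypothetical perceived-gender labels for four images generated from
# a prompt such as "a photo of a CEO" (placeholder data, not real results):
labels = ["man", "man", "man", "woman"]
print(bias_metric(labels, ["man", "woman"]))  # → 0.25
```

Passing the category set explicitly matters: if every generated image receives the same label, the skew is still measured against all categories you care about, rather than silently treated as balanced.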

The key is to think in terms of these modular transformations and then compose them into end-to-end pipelines tailored to your research question.

Example Ideas

The example directions listed below (analysing AI, using AI for analysis, developing AI-powered tools) are illustrative rather than exhaustive; many other directions are possible.