Investigating Motivations and Perceptions of Undergraduate Students Using AI for Assignments

TILT Program: TILT SoTL Project

Principal investigator: Michael Filimowicz, university lecturer, Faculty of Communication, Art and Technology, School of Interactive Arts and Technology

Project team: Priscilla Y. Lo, research assistant; Shaghayegh Bahrami, TILT research assistant

Timeframe: May 2024 to February 2025

TILT Support: $5000 and up to 30 hours of TILT research assistant support.

Course addressed: IAT210 - Introduction to Game Studies

Final report: View Michael Filimowicz's final report (PDF)

Description: This project investigated undergraduate students' motivations and perceptions regarding AI use in academic assignments within an interdisciplinary online course that explicitly permits AI use under clear guidelines. Recognizing the increasing prevalence of AI tools in education and the gap in research on why students choose to use or avoid AI, the study examined two major assignments: a collaborative board game design project and an individual research essay about video games.

The research used mixed methods including analysis of student assignments and reflections, surveys, semi-structured interviews with eight students, and a focus group with teaching assistants. Seven distinct uses of AI emerged: generating intended content, shortcuts, advanced search engine functionality, assistance with tasks, brainstorming, improvement/feedback, and language support. Analysis revealed 42 non-exclusive motivations organized into seven thematic categories: productivity, disengagement, trust/quality of AI, peer influence, student support needs, learning/creativity considerations, and consequences of AI use.

Key findings showed that while students most commonly used "AI to generate intended content" and "AI as shortcuts", these purposes generated more negative sentiments than positive ones. Conversely, students viewed "AI as an assistant", "AI for brainstorming", and "AI for language support" most positively. Teaching assistants generally held more negative or neutral perspectives on student AI use, particularly noting problems when students used AI without critical evaluation, leading to nonsensical or hallucinated content in submissions. The research also revealed significant tensions around AI use in collaborative work, with students expressing concerns about fairness and effort equity when group members relied heavily on AI-generated content. Despite acknowledging AI's unreliability and its potential negative impacts on learning, many students reasoned that using AI remained more beneficial than avoiding it, often citing productivity pressures and the perceived competitive disadvantage of abstaining.

Questions addressed:

  • What are the purposes and motivations for students deciding to use AI for group projects and individual essay assignments?
  • What are students' perceptions and feelings about using AI for academic assignments?
  • What are educators' perceptions and feelings about AI usage for these assignments?
  • How do students navigate tensions between AI use and learning authenticity?

Knowledge sharing: Findings were shared with School faculty via email. The research team plans to develop publications if time permits, contributing to the growing body of literature on AI in higher education and informing evidence-based policies for AI use in academic settings. The project's findings were also presented at TILT's SoTL Thoughts session, "AI in the Classroom: Evidence-based Insights from SFU Colleagues."

Keywords: artificial intelligence in education, student motivations, academic integrity, AI literacy, undergraduate assignments, educational technology ethics, collaborative learning, language support, critical thinking, AI policy development