AI Technology: Threats and Opportunities for Assessment Integrity in Introductory Programming

Authors

  • Guttorm Sindre, NTNU, Norwegian University of Science and Technology, Norway

Keywords

AI chatbots, assessment integrity, cheating, programming

Abstract

Recent AI tools like ChatGPT have prompted worries that assessment integrity in education will be increasingly threatened. From the perspective of introductory programming courses, this paper poses two research questions: 1) How well does ChatGPT perform on various assessment tasks typical of a CS1 course? 2) How does this technology change the threat profile for various types of assessments? Question 1 is analyzed by trying out ChatGPT on a range of typical assessment tasks, including code writing, code comprehension and explanation, error correction, and code completion (e.g., Parson's problems, fill-in tasks, inline choice). Question 2 is addressed through a threat analysis of various assessment types, considering what AI chatbots would add relative to pre-existing assessment threats. Findings indicate that for simple questions, answers tend to be perfect and ready to use, though the student might need to do some rephrasing work if the task partly consists of images. For more difficult questions, solutions might not be perfect on the first try, but the student may be able to obtain a more precise answer via follow-up questions. The threat analysis indicates that chatbots might not introduce any entirely new threats but rather aggravate existing ones. The paper concludes with some thoughts on the future of assessment, reflecting that practitioners will likely use such bots in the workplace, meaning that students must also be prepared for this.
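For readers unfamiliar with the task categories named in the abstract, the following is a minimal, hypothetical sketch of two CS1-style assessment items of the kind the paper evaluates; the tasks and code below are illustrative assumptions, not taken from the paper itself.

    # Hypothetical CS1-style assessment tasks (illustrative only).

    # (1) Code-writing task: "Write a function that returns the sum of
    #     the even numbers in a list of integers."
    def sum_even(numbers):
        total = 0
        for n in numbers:
            if n % 2 == 0:
                total += n
        return total

    # (2) Error-correction task: the loop below skips the last element,
    #     the kind of defect a student (or a chatbot) would be asked to
    #     find and fix.
    def buggy_max(values):
        largest = values[0]
        for i in range(len(values) - 1):   # bug: should be range(len(values))
            if values[i] > largest:
                largest = values[i]
        return largest

    if __name__ == "__main__":
        print(sum_even([1, 2, 3, 4, 5, 6]))  # expected: 12
        print(buggy_max([3, 4, 9]))          # returns 4, not 9, because the last element is skipped

Tasks like these are typically trivial for a chatbot to answer when posed as plain text, which is part of what motivates the paper's threat analysis.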

Published

2023-11-28

How to Cite

[1] G. Sindre, "AI Technology: Threats and Opportunities for Assessment Integrity in Introductory Programming", NIKT, no. 4, Nov. 2023.