Publication:
AP® STEM Student Assessment of ChatGPT Prompt Responses

Date

2025-01-28

Published Version

The Harvard community has made this article openly available.

Citation

Friske, Zachary Michael. 2025. AP® STEM Student Assessment of ChatGPT Prompt Responses. Master's thesis, Harvard University Division of Continuing Education.

Abstract

This thesis examines the current accuracy of ChatGPT-generated responses to Advanced Placement® (AP®) free-response and multiple-choice questions from the high school courses AP® Physics 1 and AP® Calculus AB. It also compares how high school students at participating schools in the Dallas-Fort Worth (DFW) area of Texas evaluate ChatGPT responses to the algebra, precalculus, and calculus concepts covered in these courses. Studies of generative AI use in STEM high school classrooms across the United States are limited. Current literature suggests that generative AI programs like ChatGPT have the potential to supplement classroom instruction by providing personalized assistance and immediate access to subject-specific information; however, there is a noticeable gap in understanding how high school AP® students perceive and interact with AI-generated responses, particularly with respect to accuracy and effectiveness as a study aid. Based on a literature review undertaken as part of this thesis, a mixed-methods study was developed focusing on the perceptions of high school students interacting with ChatGPT-written responses to AP®-style free-response questions. The results show that while ChatGPT can offer detailed explanations and improve student understanding of complex topics, its generated responses can lack mathematical accuracy. Students generally viewed ChatGPT as a useful educational resource but often struggled to distinguish AI-generated responses from official solutions provided by College Board®. Lower-performing students were more likely to overestimate the accuracy and completeness of ChatGPT's outputs, potentially due to limited subject-matter understanding. This study highlights the importance of developing high school students' critical evaluation skills and suggests that integrating educational AI like ChatGPT into the classroom requires careful consideration of generative AI's limitations and its potential impact on learning outcomes.

Keywords

AP Calculus, AP Physics, ChatGPT, High School STEM, Mixed-Methods Research, Student Perceptions, Education, Mathematics education, Physics

Terms of Use

This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth in the Terms of Service.
