Revisiting automated essay scoring via the GPT artificial intelligence chatbot: A mixed methods study
| dc.contributor.author | Fianu E. | |
| dc.contributor.author | Boateng S. | |
| dc.contributor.author | Arku Z. | |
| dc.date.accessioned | 2025-03-06T18:11:43Z | |
| dc.date.accessioned | 2025-03-06T21:52:10Z | |
| dc.date.issued | 2024 | |
| dc.description.abstract | The study sought to statistically compare separate sets of scores of graded essays generated from an automated essay scoring (AES) system (ChatGPT) and a human grader, and further engage stakeholders (students, lecturers, and university management) in a discussion of the results of the analysis from the perspective of fairness, bias, consistency with human grading, ethical issues, and adoption. The study adopted a sequential explanatory mixed methods design. The quantitative approach involved the collection and analysis of essay scores, while the qualitative approach involved the use of interviews to ascertain stakeholder opinions of the quantitative results. The results of the quantitative study showed that the distribution of ChatGPT scores is the same across categories of age, gender, and ethnicity. Also, there was no statistically significant difference between ChatGPT scores and the scores of the human grader. The analysis of the responses from the interviews is thoroughly discussed. © 2024, IGI Global. | |
| dc.identifier.doi | 10.4018/979-8-3693-1310-7.ch008 | |
| dc.identifier.isbn | 979-836931311-4; 979-836931310-7 | |
| dc.identifier.uri | http://162.250.124.58:4000/handle/123456789/489 | |
| dc.publisher | IGI Global | |
| dc.source | Reshaping Learning with Next Generation Educational Technologies | |
| dc.title | Revisiting automated essay scoring via the GPT artificial intelligence chatbot: A mixed methods study | |
| dc.type | Book chapter |
