
Utilizing a pretrained language model (BERT) to classify preservice physics teachers' written reflections

Computer-based analysis of preservice teachers' written reflections could enable educational scholars to design personalized and scalable intervention measures to support reflective writing. Algorithms and technologies in the domain of research related to artificial intelligence have been found to be useful in many tasks related to reflective writing analytics, such as classification of text segments. However, mostly shallow learning algorithms have been employed so far. This study explores to what extent deep learning approaches can improve classification performance for segments of written reflections. To do so, a pretrained language model (BERT) was utilized to classify segments of preservice physics teachers' written reflections according to elements in a reflection-supporting model. Since BERT has been found to advance performance in many tasks, it was hypothesized to enhance classification performance for written reflections as well. We also compared the performance of BERT with other deep learning architectures and examined conditions for best performance. We found that BERT outperformed the other deep learning architectures and previously reported performances with shallow learning algorithms for classification of segments of reflective writing. BERT starts to outperform the other models when trained on about 20 to 30% of the training data. Furthermore, attribution analyses for inputs yielded insights into important features for BERT's classification decisions. Our study indicates that pretrained language models such as BERT can boost performance for language-related tasks in educational contexts such as classification.
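The fine-tuning setup described in the abstract can be illustrated with a short sketch. This is a minimal, hypothetical example assuming the Hugging Face transformers and datasets libraries, a German BERT checkpoint (bert-base-german-cased), and an invented label set and CSV layout; it is not the authors' actual code or data.

```python
# Sketch: fine-tune a pretrained BERT model to classify reflection segments.
# Checkpoint, label names, and file layout are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Hypothetical elements of a reflection-supporting model used as labels.
labels = ["circumstances", "description", "evaluation",
          "alternatives", "consequences"]
label2id = {name: i for i, name in enumerate(labels)}

checkpoint = "bert-base-german-cased"  # assumed German BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=len(labels))

# Hypothetical CSVs with one reflection segment and its label per row.
dataset = load_dataset("csv", data_files={"train": "train.csv",
                                          "test": "test.csv"})

def preprocess(batch):
    # Tokenize segments and map string labels to integer ids.
    enc = tokenizer(batch["segment"], truncation=True, max_length=256)
    enc["labels"] = [label2id[l] for l in batch["label"]]
    return enc

encoded = dataset.map(preprocess, batched=True,
                      remove_columns=dataset["train"].column_names)

args = TrainingArguments(output_dir="bert-reflections",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)

trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"],
                  eval_dataset=encoded["test"],
                  tokenizer=tokenizer)
trainer.train()
print(trainer.evaluate())
```

A model fine-tuned this way could then be probed with token-level attribution methods (e.g., integrated gradients) to inspect which input features drive its classification decisions, in the spirit of the attribution analyses mentioned in the abstract.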

Metadata
Author details:Peter Wulff, Lukas Mientus, Anna Nowak, Andreas Borowski
DOI:https://doi.org/10.1007/s40593-022-00290-6
ISSN:1560-4292
ISSN:1560-4306
Title of parent work (English):International Journal of Artificial Intelligence in Education
Publisher:Springer
Place of publishing:New York
Publication type:Article
Language:English
Date of first publication:2022/05/02
Publication year:2022
Release date:2023/12/07
Tag:Deep learning; NLP; Reflective writing; Science education
Volume:33
Number of pages:28
First page:439
Last Page:466
Organizational units:Mathematisch-Naturwissenschaftliche Fakultät / Institut für Physik und Astronomie
DDC classification:5 Naturwissenschaften und Mathematik / 53 Physik / 530 Physik
Peer review:Refereed
Publishing method:Open Access / Hybrid Open-Access
License:CC BY - Attribution 4.0 International