TY - JOUR
A1 - Wulff, Peter
A1 - Buschhüter, David
A1 - Westphal, Andrea
A1 - Nowak, Anna
A1 - Becker, Lisa
A1 - Robalino, Hugo
A1 - Stede, Manfred
A1 - Borowski, Andreas
T1 - Computer-based classification of preservice physics teachers’ written reflections
JF - Journal of Science Education and Technology
N2 - Reflecting in written form on one's teaching enactments has been considered a facilitator for teachers' professional growth in university-based preservice teacher education. Writing a structured reflection can be facilitated through external feedback. However, researchers have noted that feedback in preservice teacher education often relies on holistic rather than content-based, analytic feedback, because educators often lack the resources (e.g., time) to provide analytic feedback. Advances in computer technology can help to overcome this impediment to feedback on written reflections. Hence, this study sought to use natural language processing and machine learning techniques to train a computer-based classifier that classifies preservice physics teachers' written reflections on their teaching enactments in a German university teacher education program. To do so, a reflection model was adapted to physics education. The study then tested the extent to which the computer-based classifier could accurately classify the elements of the reflection model in segments of preservice physics teachers' written reflections. Multinomial logistic regression using word counts as predictors yielded acceptable average human-computer agreement (F1 score of 0.56 on a held-out test dataset), so that it might fuel further development toward an automated feedback tool that supplements existing holistic feedback for written reflections with data-based, analytic feedback.
KW - reflection
KW - teacher professional development
KW - natural language processing
KW - machine learning
Y1 - 2020
U6 - https://doi.org/10.1007/s10956-020-09865-1
SN - 1059-0145
SN - 1573-1839
VL - 30
IS - 1
SP - 1
EP - 15
PB - Springer
CY - Dordrecht
ER -

TY - JOUR
A1 - Levy, Jessica
A1 - Mussack, Dominic
A1 - Brunner, Martin
A1 - Keller, Ulrich
A1 - Cardoso-Leite, Pedro
A1 - Fischbach, Antoine
T1 - Contrasting classical and machine learning approaches in the estimation of value-added scores in large-scale educational data
JF - Frontiers in Psychology
N2 - There is no consensus on which statistical model estimates school value-added (VA) most accurately. To date, the two most common statistical models used to calculate VA scores are classical methods: linear regression and multilevel models. These models have the advantage of being relatively transparent and thus understandable for most researchers and practitioners. However, they are bound to certain assumptions (e.g., linearity) that might limit their prediction accuracy. Machine learning methods, which have yielded spectacular results in numerous fields, may be a valuable alternative to these classical models. Although big data is not new in general, it is relatively new in the realm of social sciences and education. New types of data require new data-analytic approaches. Such techniques have already evolved in fields with a long tradition of crunching big data (e.g., gene technology). The objective of the present paper is to apply these "imported" techniques to education data, more precisely VA scores, and to assess when and how they can extend or replace the classical psychometrics toolbox. The different models include linear and non-linear methods and extend classical models with the most commonly used machine learning methods (i.e., random forest, neural networks, support vector machines, and boosting). We used representative data from 3,026 students in 153 schools who took part in the standardized achievement tests of the Luxembourg School Monitoring Program in grades 1 and 3. Multilevel models outperformed classical linear and polynomial regressions as well as the different machine learning models. However, across all schools, the school VA scores from the different model types were highly correlated. Yet the percentage of disagreement relative to the multilevel models was not trivial, and the real-life implications for individual schools may still be dramatic depending on the model type used. Implications of these results and possible ethical concerns regarding the use of machine learning methods for decision-making in education are discussed.
KW - value-added modeling
KW - school effectiveness
KW - machine learning
KW - model comparison
KW - longitudinal data
Y1 - 2020
U6 - https://doi.org/10.3389/fpsyg.2020.02190
SN - 1664-1078
VL - 11
PB - Frontiers Research Foundation
CY - Lausanne
ER -
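
Note on the Wulff et al. record: the abstract names a concrete pipeline (word counts feeding a multinomial logistic regression, evaluated by macro-averaged F1 on a held-out test set). The Python sketch below illustrates that kind of pipeline with scikit-learn. It is not the authors' code; the reflection segments, labels, and category names are invented for illustration.

# Illustrative sketch, not the authors' pipeline: multinomial logistic
# regression over word-count features, macro F1 on a held-out test set.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical reflection segments, each labeled with an element of a
# reflection model (category names invented for this example).
segments = [
    "I started the lesson with a demonstration of the pendulum.",
    "The students seemed confused by the free-body diagram.",
    "Next time I would introduce the formula after the experiment.",
    "The group work phase took longer than planned.",
    "In hindsight, clearer instructions would have helped.",
    "I summarized the results on the blackboard.",
]
labels = ["description", "evaluation", "alternatives",
          "description", "alternatives", "description"]

X_train, X_test, y_train, y_test = train_test_split(
    segments, labels, test_size=0.33, random_state=42)

# Word counts as predictors; with the default lbfgs solver, scikit-learn
# fits a multinomial model for multi-class targets.
clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

# Average human-computer agreement reported as macro-averaged F1
# on the held-out data.
print(f1_score(y_test, clf.predict(X_test), average="macro"))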
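
Note on the Levy et al. record: as a rough illustration of the comparison the abstract describes, the sketch below operationalizes school VA scores as school-level mean residuals and contrasts a classical linear regression with a random forest. It is a simplification under stated assumptions, not the authors' method: the data are simulated (only the sample sizes come from the abstract), a single prior-achievement predictor stands in for the full covariate set, the multilevel model is omitted, and in-sample predictions are used for brevity where a real analysis would predict out of sample.

# Illustrative sketch on simulated data: VA scores as school-level mean
# residuals, contrasting linear regression with a random forest.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_students, n_schools = 3026, 153  # sample sizes taken from the abstract
df = pd.DataFrame({
    "school": rng.integers(0, n_schools, n_students),
    "grade1": rng.normal(500, 100, n_students),  # prior achievement
})
school_effect = rng.normal(0, 15, n_schools)     # simulated "true" VA
df["grade3"] = (0.7 * df["grade1"] + school_effect[df["school"]]
                + rng.normal(0, 50, n_students))

X, y = df[["grade1"]], df["grade3"]
for name, model in [("linear", LinearRegression()),
                    ("forest", RandomForestRegressor(random_state=0))]:
    residuals = y - model.fit(X, y).predict(X)
    # VA score per school: how much its students over- or under-perform
    # relative to what the model predicts from prior achievement.
    va = residuals.groupby(df["school"]).mean()
    df[f"va_{name}"] = df["school"].map(va)

# The abstract reports that VA scores from different model types
# correlate highly across schools; check the analogous correlation here.
print(df[["va_linear", "va_forest"]].corr().iloc[0, 1])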