TY - JOUR
A1 - Brunner, Martin
A1 - Keller, Ulrich
A1 - Wenger, Marina
A1 - Fischbach, Antoine
A1 - Lüdtke, Oliver
T1 - Between-School Variation in Students' Achievement, Motivation, Affect, and Learning Strategies
BT - Results from 81 Countries for Planning Group-Randomized Trials in Education
JF - Journal of Research on Educational Effectiveness
N2 - To plan group-randomized trials where treatment conditions are assigned to schools, researchers need design parameters that provide information about between-school differences in outcomes as well as the amount of variance that can be explained by covariates at the student (L1) and school (L2) levels. Most previous research has offered these parameters for U.S. samples and for achievement as the outcome. This paper and the online supplementary materials provide design parameters for 81 countries in three broad outcome categories (achievement, affect and motivation, and learning strategies) for domain-general and domain-specific (mathematics, reading, and science) measures. Sociodemographic characteristics were used as covariates. Data from representative samples of 15-year-old students stemmed from five cycles of the Programme for International Student Assessment (PISA; total number of students/schools: 1,905,147/70,098). Between-school differences as well as the amount of variance explained at L1 and L2 varied widely across countries and educational outcomes, demonstrating the limited generalizability of design parameters across these dimensions. The use of the design parameters to plan group-randomized trials is illustrated.
KW - student achievement
KW - motivation
KW - affect
KW - learning styles
KW - intraclass correlation
KW - large-scale assessment
KW - multilevel models
KW - design parameters
Y1 - 2017
U6 - https://doi.org/10.1080/19345747.2017.1375584
SN - 1934-5747
SN - 1934-5739
VL - 11
IS - 3
SP - 452
EP - 478
PB - Routledge, Taylor & Francis Group
CY - Abingdon
ER -
TY - GEN
A1 - Brunner, Martin
A1 - Keller, Ulrich
A1 - Wenger, Marina
A1 - Fischbach, Antoine
A1 - Lüdtke, Oliver
T1 - Between-school variation in students' achievement, motivation, affect, and learning strategies
BT - results from 81 countries for planning group-randomized trials in education
T2 - Postprints der Universität Potsdam : Humanwissenschaftliche Reihe
N2 - To plan group-randomized trials where treatment conditions are assigned to schools, researchers need design parameters that provide information about between-school differences in outcomes as well as the amount of variance that can be explained by covariates at the student (L1) and school (L2) levels. Most previous research has offered these parameters for U.S. samples and for achievement as the outcome. This paper and the online supplementary materials provide design parameters for 81 countries in three broad outcome categories (achievement, affect and motivation, and learning strategies) for domain-general and domain-specific (mathematics, reading, and science) measures. Sociodemographic characteristics were used as covariates. Data from representative samples of 15-year-old students stemmed from five cycles of the Programme for International Student Assessment (PISA; total number of students/schools: 1,905,147/70,098). Between-school differences as well as the amount of variance explained at L1 and L2 varied widely across countries and educational outcomes, demonstrating the limited generalizability of design parameters across these dimensions. The use of the design parameters to plan group-randomized trials is illustrated.
T3 - Zweitveröffentlichungen der Universität Potsdam : Humanwissenschaftliche Reihe - 465
KW - student achievement
KW - motivation
KW - affect
KW - learning styles
KW - intraclass correlation
KW - large-scale assessment
KW - multilevel models
KW - design parameters
Y1 - 2018
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-412662
IS - 465
ER -
TY - JOUR
A1 - Levy, Jessica
A1 - Mussack, Dominic
A1 - Brunner, Martin
A1 - Keller, Ulrich
A1 - Cardoso-Leite, Pedro
A1 - Fischbach, Antoine
T1 - Contrasting classical and machine learning approaches in the estimation of value-added scores in large-scale educational data
JF - Frontiers in Psychology
N2 - There is no consensus on which statistical model estimates school value-added (VA) most accurately. To date, the two most common statistical models used for the calculation of VA scores are two classical methods: linear regression and multilevel models. These models have the advantage of being relatively transparent and thus understandable for most researchers and practitioners. However, these statistical models are bound to certain assumptions (e.g., linearity) that might limit their prediction accuracy. Machine learning methods, which have yielded spectacular results in numerous fields, may be a valuable alternative to these classical models. Although big data is not new in general, it is relatively new in the realm of social sciences and education. New types of data require new data analytical approaches. Such techniques have already evolved in fields with a long tradition in crunching big data (e.g., gene technology). The objective of the present paper is to competently apply these "imported" techniques to education data, more precisely VA scores, and assess when and how they can extend or replace the classical psychometrics toolbox. The different models include linear and non-linear methods and extend classical models with the most commonly used machine learning methods (i.e., random forest, neural networks, support vector machines, and boosting). We used representative data of 3,026 students in 153 schools who took part in the standardized achievement tests of the Luxembourg School Monitoring Program in grades 1 and 3. Multilevel models outperformed classical linear and polynomial regressions, as well as different machine learning models. However, it could be observed that across all schools, school VA scores from different model types correlated highly. Yet, the percentage of disagreements as compared to multilevel models was not trivial and real-life implications for individual schools may still be dramatic depending on the model type used. Implications of these results and possible ethical concerns regarding the use of machine learning methods for decision-making in education are discussed.
KW - value-added modeling
KW - school effectiveness
KW - machine learning
KW - model comparison
KW - longitudinal data
Y1 - 2020
U6 - https://doi.org/10.3389/fpsyg.2020.02190
SN - 1664-1078
VL - 11
PB - Frontiers Research Foundation
CY - Lausanne
ER -
TY - JOUR
A1 - Levy, Jessica
A1 - Brunner, Martin
A1 - Keller, Ulrich
A1 - Fischbach, Antoine
T1 - Methodological issues in value-added modeling: an international review from 26 countries
JF - Educational Assessment, Evaluation and Accountability
N2 - Value-added (VA) modeling can be used to quantify teacher and school effectiveness by estimating the effect of pedagogical actions on students' achievement. It is gaining increasing importance in educational evaluation, teacher accountability, and high-stakes decisions. We analyzed 370 empirical studies on VA modeling, focusing on modeling and methodological issues to identify key factors for improvement. The studies stemmed from 26 countries (68% from the USA). Most studies applied linear regression or multilevel models. Most studies (i.e., 85%) included prior achievement as a covariate, but only 2% included noncognitive predictors of achievement (e.g., personality or affective student variables). Fifty-five percent of the studies did not apply statistical adjustments (e.g., shrinkage) to increase precision in effectiveness estimates, and 88% included no model diagnostics. We conclude that research on VA modeling can be significantly enhanced regarding the inclusion of covariates, model adjustment and diagnostics, and the clarity and transparency of reporting. What is the added value from attending a certain school or being taught by a certain teacher? To answer this question, the value-added (VA) model was developed. In this model, the actual achievement attained by students attending a certain school or being taught by a certain teacher is juxtaposed with the achievement that is expected for students with the same background characteristics (e.g., pretest scores). To this end, the VA model can be used to compute a VA score for each school or teacher, respectively. If actual achievement is better than expected achievement, there is a positive effect (i.e., a positive VA score) of attending a certain school or being taught by a certain teacher. In other words, VA models have been developed to "make fair comparisons of the academic progress of pupils in different settings" (Tymms 1999, p. 27). Their aim is to operationalize teacher or school effectiveness objectively. Specifically, VA models are often used for accountability purposes and high-stakes decisions (e.g., to allocate financial or personal resources to schools or even to decide which teachers should be promoted or discharged). Consequently, VA modeling is a highly political topic, especially in the USA, where many states have implemented VA or VA-based models for teacher evaluation (Amrein-Beardsley and Holloway 2017; Kurtz 2018). However, this use for high-stakes decisions is highly controversial, and researchers disagree about whether VA scores should be used for decision-making (Goldhaber 2015). For a more exhaustive discussion of the use of VA models for accountability reasons, see, for example, Scherrer (2011). Given the far-reaching impact of VA scores, it is surprising that there is a scarcity of systematic reviews of how VA scores are computed and evaluated, and how this research is reported. To this end, we review 370 empirical studies from 26 countries to rigorously examine several key issues in VA modeling, including (a) the statistical model (e.g., linear regression, multilevel model) that is used, (b) the model diagnostics and reported statistical parameters that are used to evaluate the quality of the VA model, (c) the statistical adjustments that are made to overcome methodological challenges (e.g., measurement error of the outcome variables), and (d) the covariates (e.g., pretest scores, students' sociodemographic background) that are used when estimating expected achievement. All this information is critical for meeting the transparency standards defined by the American Educational Research Association (AERA 2006). Transparency is vital for educational research in general and especially for highly consequential research, such as VA modeling. First, transparency is highly relevant for researchers. The clearer the description of the model, the easier it is to build upon the knowledge of previous research and to safeguard the potential for replicating previous results. Second, because decisions that are based on VA scores affect teachers' lives and schools' futures, not only educational agents but also the general public should be able to comprehend how these scores are calculated to allow for public scrutiny. Specifically, given that VA scores can have devastating consequences on teachers' lives and on the students they teach, transparency is particularly important for evaluating the methodology chosen to compute VA models for a given purpose. Such evaluations are essential to answer the question of the extent to which the quality of VA scores justifies basing far-reaching decisions on them for accountability purposes.
KW - Value-added modeling
KW - Literature review
KW - Primary and secondary education
KW - Teacher effectiveness
KW - School effectiveness
Y1 - 2019
U6 - https://doi.org/10.1007/s11092-019-09303-w
SN - 1874-8597
SN - 1874-8600
VL - 31
IS - 3
SP - 257
EP - 287
PB - Springer
CY - Heidelberg
ER -