For analyzing schools' instructional quality through student ratings, three conditions should be met: (1) adequate agreement of student ratings within schools, (2) systematic variability of student ratings between schools, and (3) sufficient reliability of the aggregated ratings. Using international PISA data (cycles 2000–2012; 81 countries, more than 55,300 schools, more than 1.3 million 15-year-olds), this study examines the extent to which these conditions hold for indicators of the quality dimensions of instruction (classroom management, cognitive activation, constructive support). To this end, we computed the agreement index rWG(J) as well as the intraclass correlations ICC(1) and ICC(2). The results showed that (1) the majority of instructional characteristics exhibited moderate or strong agreement within schools, (2) instructional characteristics, as perceived by students, differed systematically between schools, but (3) the reliability of the aggregated student ratings was insufficient in many countries. We discuss the results against the background of conventions for judging agreement, variability, and reliability at the school level.
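The intraclass correlations used in this line of research can be obtained from a one-way ANOVA decomposition: ICC(1) indexes between-school variability in individual ratings, and ICC(2) indexes the reliability of the aggregated school-mean ratings. A minimal sketch, assuming equal numbers of student raters per school; the function name and interface are illustrative, not the study's actual estimation procedure:

```python
import numpy as np

def icc_oneway(groups):
    """ICC(1) and ICC(2) from a one-way ANOVA on grouped ratings.

    groups: list of 1-D arrays, one per school (student ratings of one
    instructional characteristic). Assumes equal group size k; a sketch
    only, not the exact PISA-based procedure.
    """
    k = len(groups[0])                      # raters (students) per school
    n = len(groups)                         # number of schools
    grand = np.mean(np.concatenate(groups))
    group_means = np.array([g.mean() for g in groups])
    # Between-school and within-school mean squares
    ms_between = k * np.sum((group_means - grand) ** 2) / (n - 1)
    ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n * (k - 1))
    icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    icc2 = (ms_between - ms_within) / ms_between  # reliability of school means
    return icc1, icc2
```

Note that ICC(2) is the Spearman-Brown step-up of ICC(1) for k raters, which provides a quick consistency check on the two estimates.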
The aim of educational policy should be to provide a good education to all students. Thus, a key question arises regarding the extent to which key characteristics of school composition (proportion of students with a migration background, socioeconomic status [SES], prior school achievement, and achievement heterogeneity), instructional quality, school quality, and later school achievement are interrelated. The present study addressed this research question by examining school inspection data, official school statistics, and large-scale achievement data from all primary schools in Berlin, Germany (N = 343). The results of correlation and path analyses showed that school composition (average SES, average prior school achievement) predicted components of instructional quality (SES: classroom management, cognitive activation; achievement: cognitive activation, individual learning support). The relation between school composition characteristics and most components of school quality was close to zero. Contrary to our expectations, only the effect of school SES on later achievement was mediated by instructional quality.
In the 2008/09 school year, mixed-age learning (Jahrgangsübergreifendes Lernen, JÜL) was made compulsory in Berlin's school entry phase. However, not all schools adopted this reform. In this study, we examine the extent to which schools differ in characteristics of their student bodies depending on how quickly and comprehensively they implemented JÜL. We assumed that, given JÜL's goal of using heterogeneity productively for learning, the reform was particularly attractive to schools with a heterogeneous student body. Heterogeneity was operationalized via the proportions of children (a) whose first language is not German and (b) who are exempt from co-payments for learning materials. We further examined whether children's achievement in German and mathematics differed between schools. As expected, the results show that schools with a heterogeneous student body implemented JÜL quickly and sustainably. Over time, after controlling for the heterogeneity of the student body, no achievement differences between schools could be detected. The results are discussed with regard to the conditions under which schools implement reforms and how JÜL can affect educational outcomes.
Educational policy reforms differ in the breadth, depth, and sustainability with which they are realized. The present article addresses this topic using the implementation of mixed-age learning (Jahrgangsübergreifendes Lernen, JÜL) in Berlin as an example. JÜL was one of the central innovations in the redesign of school entry. Against this background, the first substudy examines how JÜL was implemented at schools in the school years 2007/08 through 2015/16. Data from the Berlin school statistics were combined into a longitudinal data set at the school level (N = 356). Latent profile analyses identified six implementation types that differ in the timing and duration of JÜL implementation. The proportion of JÜL classes among the classes of the school entry phase served as the indicator. The second substudy analyzes differences in school and instructional quality on the basis of data from the Berlin school inspection (N = 282). Analyses of variance (ANOVA) show (a) differences in favor of schools that implemented JÜL early and permanently, and (b) differences in favor of schools that combine three grade levels, rather than two, in their JÜL classes.
Effects of achievement differences for internal/external frame of reference model investigations
(2018)
Background
Achievement in math and achievement in verbal school subjects are more strongly correlated than the respective academic self-concepts. The internal/external frame of reference model (I/E model; Marsh, 1986, Am. Educ. Res. J., 23, 129) explains this finding by social and dimensional comparison processes. We investigated a key assumption of the model that dimensional comparisons mainly depend on the difference in achievement between subjects. We compared correlations between subject-specific self-concepts of groups of elementary and secondary school students with or without achievement differences in the respective subjects.
Aims
The main goals were (1) to show that effects of dimensional comparisons depend to a large degree on the existence of achievement differences between subjects, (2) to demonstrate the generalizability of findings over different grade levels and self-concept scales, and (3) to test a rarely used correlation comparison approach (CCA) for the investigation of I/E model assumptions.
Samples
We analysed eight German elementary and secondary school student samples (grades 3–8) from three independent studies (Ns 326–878).
Method
Correlations between math and German self-concepts of students with identical grades in the respective subjects were compared with the correlation of self-concepts of students having different grades using Fisher's Z test for independent samples.
Results
In all samples, correlations between math self-concept and German self-concept were higher for students having identical grades than for students having different grades. Differences in median correlations had small effect sizes for elementary school students and moderate effect sizes for secondary school students.
Conclusions
Findings generalized over grades and indicated a developmental aspect in self-concept formation. The CCA complements investigations within I/E-research.
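The correlation comparison approach (CCA) described in the Method section rests on Fisher's Z test for two independent correlations: the self-concept correlation in the identical-grades group is compared with that in the different-grades group. A minimal sketch, assuming only the two sample correlations and sample sizes are available; the function name and interface are illustrative:

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    """Two-sided test of H0: rho1 == rho2 for two independent samples.

    r1, n1: correlation and sample size in the first group
            (e.g., students with identical grades in both subjects).
    r2, n2: correlation and sample size in the second group
            (e.g., students with different grades).
    """
    z1 = math.atanh(r1)                 # Fisher r-to-z transform
    z2 = math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    # Two-sided p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p
```

For example, with r1 = .60 (n = 200) and r2 = .30 (n = 200), the test yields a significant difference, consistent with the pattern the study reports for students with versus without achievement differences.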
We assessed teacher educators' task perception and investigated its relationship with components of their professional identity and their teaching practice. Using data from 145 teacher educators, two different task perceptions were found: transmitters and facilitators. Teacher educators who were categorized as facilitators tended to demonstrate higher levels of self-efficacy, job satisfaction, and constructivist beliefs about teaching and learning, and to use more effective teaching strategies. The findings demonstrate that the teaching practices of teacher educators are rooted in their professional identity. © 2021 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
There is no consensus on which statistical model estimates school value-added (VA) most accurately. To date, the two most common statistical models used for the calculation of VA scores are two classical methods: linear regression and multilevel models. These models have the advantage of being relatively transparent and thus understandable for most researchers and practitioners. However, these statistical models are bound to certain assumptions (e.g., linearity) that might limit their prediction accuracy. Machine learning methods, which have yielded spectacular results in numerous fields, may be a valuable alternative to these classical models. Although big data is not new in general, it is relatively new in the realm of social sciences and education. New types of data require new data analytical approaches. Such techniques have already evolved in fields with a long tradition in crunching big data (e.g., gene technology). The objective of the present paper is to competently apply these "imported" techniques to education data, more precisely VA scores, and assess when and how they can extend or replace the classical psychometrics toolbox. The different models include linear and non-linear methods and extend classical models with the most commonly used machine learning methods (i.e., random forest, neural networks, support vector machines, and boosting). We used representative data of 3,026 students in 153 schools who took part in the standardized achievement tests of the Luxembourg School Monitoring Program in grades 1 and 3. Multilevel models outperformed classical linear and polynomial regressions, as well as different machine learning models. However, it could be observed that across all schools, school VA scores from different model types correlated highly. Yet, the percentage of disagreements as compared to multilevel models was not trivial and real-life implications for individual schools may still be dramatic depending on the model type used. 
Implications of these results and possible ethical concerns regarding the use of machine learning methods for decision-making in education are discussed.
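The agreement check described above — correlating school VA scores from different model types and counting how often schools would be classified differently — could be sketched as follows. The quartile-based bottom/middle/top classification is an assumption for illustration, not the study's actual decision rule:

```python
import numpy as np

def va_model_agreement(va_a, va_b, q=0.25):
    """Compare school VA scores from two model types on the same schools.

    Returns the Pearson correlation of the two score vectors and the
    share of schools classified differently (bottom q / middle / top q).
    The cut-offs are illustrative assumptions.
    """
    r = np.corrcoef(va_a, va_b)[0, 1]

    def classify(v):
        lo, hi = np.quantile(v, [q, 1 - q])
        return np.where(v < lo, -1, np.where(v > hi, 1, 0))

    disagree = np.mean(classify(va_a) != classify(va_b))
    return r, disagree
```

Even when the correlation is high, the disagreement share can be non-trivial, which is the pattern the abstract highlights: near-identical rankings overall, but consequential differences for individual schools.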
Value-added (VA) modeling can be used to quantify teacher and school effectiveness by estimating the effect of pedagogical actions on students’ achievement. It is gaining increasing importance in educational evaluation, teacher accountability, and high-stakes decisions. We analyzed 370 empirical studies on VA modeling, focusing on modeling and methodological issues to identify key factors for improvement. The studies stemmed from 26 countries (68% from the USA). Most studies applied linear regression or multilevel models. Most studies (i.e., 85%) included prior achievement as a covariate, but only 2% included noncognitive predictors of achievement (e.g., personality or affective student variables). Fifty-five percent of the studies did not apply statistical adjustments (e.g., shrinkage) to increase precision in effectiveness estimates, and 88% included no model diagnostics. We conclude that research on VA modeling can be significantly enhanced regarding the inclusion of covariates, model adjustment and diagnostics, and the clarity and transparency of reporting.
What is the added value of attending a certain school or being taught by a certain teacher? To answer this question, the value-added (VA) model was developed. In this model, the actual achievement attained by students attending a certain school or taught by a certain teacher is juxtaposed with the achievement expected for students with the same background characteristics (e.g., pretest scores). The VA model can thus be used to compute a VA score for each school or teacher. If actual achievement exceeds expected achievement, there is a positive effect (i.e., a positive VA score) of attending that school or being taught by that teacher. In other words, VA models have been developed to “make fair comparisons of the academic progress of pupils in different settings” (Tymms 1999, p. 27). Their aim is to operationalize teacher or school effectiveness objectively. Specifically, VA models are often used for accountability purposes and high-stakes decisions (e.g., to allocate financial or personnel resources to schools or even to decide which teachers should be promoted or dismissed). Consequently, VA modeling is a highly political topic, especially in the USA, where many states have implemented VA or VA-based models for teacher evaluation (Amrein-Beardsley and Holloway 2017; Kurtz 2018). However, this use for high-stakes decisions is highly controversial, and researchers disagree about whether VA scores should be used for decision-making (Goldhaber 2015). For a more exhaustive discussion of the use of VA models for accountability purposes, see, for example, Scherrer (2011).
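The core VA idea — actual minus expected achievement, aggregated per school — can be sketched with a simple covariate-adjustment regression. This is a deliberately naive illustration with a single pretest covariate, not any specific operational VA model:

```python
import numpy as np

def value_added_scores(pretest, posttest, school_ids):
    """Naive VA sketch: regress posttest on pretest, then average the
    residuals (actual minus expected achievement) per school.

    A positive mean residual means students at that school did better
    than expected given their pretest scores. Real VA models typically
    add further covariates and multilevel structure.
    """
    X = np.column_stack([np.ones_like(pretest), pretest])
    beta, *_ = np.linalg.lstsq(X, posttest, rcond=None)
    expected = X @ beta                 # predicted achievement
    residual = posttest - expected      # actual minus expected
    return {s: residual[school_ids == s].mean()
            for s in np.unique(school_ids)}
```

Because ordinary least squares residuals sum to zero, the resulting scores are relative: a school's VA is only positive or negative compared with the other schools in the same estimation sample.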
Given the far-reaching impact of VA scores, it is surprising that there is a scarcity of systematic reviews of how VA scores are computed and evaluated, and of how this research is reported. We therefore review 370 empirical studies from 26 countries to rigorously examine several key issues in VA modeling: (a) the statistical model (e.g., linear regression, multilevel model) that is used, (b) the model diagnostics and reported statistical parameters that are used to evaluate the quality of the VA model, (c) the statistical adjustments that are made to overcome methodological challenges (e.g., measurement error in the outcome variables), and (d) the covariates (e.g., pretest scores, students' sociodemographic background) that are used when estimating expected achievement.
All this information is critical for meeting the transparency standards defined by the American Educational Research Association (AERA 2006). Transparency is vital for educational research in general and especially for highly consequential research such as VA modeling. First, transparency matters for researchers: the clearer the description of the model, the easier it is to build upon previous research and to safeguard the potential for replicating previous results. Second, because decisions based on VA scores affect teachers' lives and schools' futures, not only educational agents but also the general public should be able to comprehend how these scores are calculated, allowing for public scrutiny. Specifically, given that VA scores can have devastating consequences for teachers and the students they teach, transparency is particularly important for evaluating the methodology chosen to compute VA models for a given purpose. Such evaluations are essential for answering the question of whether the quality of VA scores justifies basing far-reaching decisions on them for accountability purposes.
It is well-documented that academic achievement is associated with students' self-perceptions of their academic abilities, that is, their academic self-concepts. However, low-achieving students may apply self-protective strategies to maintain a favorable academic self-concept when evaluating their academic abilities. Consequently, the relation between achievement and academic self-concept might not be linear across the entire achievement continuum. Capitalizing on representative data from three large-scale assessments (i.e., TIMSS, PIRLS, PISA; N = 470,804), we conducted an integrative data analysis to address nonlinear trends in the relations between achievement and the corresponding self-concepts in mathematics and the verbal domain across 13 countries and 2 age groups (i.e., elementary and secondary school students). Polynomial and interrupted regression analyses showed nonlinear relations in secondary school students, demonstrating that the relations between achievement and the corresponding self-concepts were weaker for lower achieving students than for higher achieving students. Nonlinear effects were also present in younger students, but the pattern of results was rather heterogeneous. We discuss implications for theory as well as for the assessment and interpretation of self-concept.
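An interrupted (piecewise-linear) regression of self-concept on achievement is one way to capture a relation that is weaker below some achievement level than above it. A minimal sketch with a known breakpoint; the knot location, variable roles, and function interface are illustrative assumptions, not taken from the study:

```python
import numpy as np

def interrupted_regression(x, y, knot):
    """Piecewise-linear ("interrupted") regression with a known breakpoint.

    Fits y = b0 + b1*x + b2*max(x - knot, 0): b1 is the slope below the
    knot, and b1 + b2 is the slope above it. A positive b2 would indicate
    a stronger achievement/self-concept relation among higher achievers.
    """
    X = np.column_stack([np.ones_like(x), x, np.maximum(x - knot, 0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, lower slope, slope change at the knot]
```

In practice, the breakpoint itself can be treated as unknown and chosen by profiling the fit over candidate knots; the sketch above fixes it for simplicity.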