Keywords
- intelligence (3)
- Mixed-age learning (2)
- ability differentiation (2)
- academic self-concept (2)
- affect (2)
- age differentiation (2)
- childhood (2)
- design parameters (2)
- intraclass correlation (2)
- large-scale assessment (2)
- learning styles (2)
- motivation (2)
- multilevel models (2)
- student achievement (2)
- Aptitude (1)
- Assessment (1)
- Attribution theory (1)
- Bayesian reasoning (1)
- Domain differences (1)
- Dynamic Structural Equation Modeling (DSEM) (1)
- ESEM (1)
- Educational reform (1)
- Generalizability (1)
- Helplessness (1)
- Implementation success (1)
- Implementation of school reforms (1)
- Instruction (1)
- Instructional quality (1)
- Mixed-age learning (1)
- Linda problem (1)
- Literature review (1)
- Longitudinal analyses (1)
- Monty Hall (1)
- PISA (1)
- Primary and secondary education (1)
- Professional development (1)
- Professional identity (1)
- Programme for International Student Assessment (1)
- Openness to reform (1)
- SLODR (1)
- School effectiveness (1)
- Situation (1)
- Stability (1)
- Student perception (1)
- Student ratings (1)
- Teacher beliefs (1)
- Teacher education (1)
- Teacher educator (1)
- Teacher effectiveness (1)
- Teacher learning (1)
- Value-added modeling (1)
- Wason task (1)
- Student body composition with regard to first language (1)
- Student body composition with regard to exemption from learning-material copayments (1)
- academic achievement (1)
- adolescence (1)
- autoregressive wage growth (1)
- cognitive illusion (1)
- cohort differences (1)
- comparison (1)
- complex survey designs (1)
- cumulative advantage (CA) (1)
- dimensional comparisons (1)
- educational large-scale assessments (1)
- elementary school students (1)
- ethnic student composition (1)
- factor analysis (1)
- frame of reference (1)
- grade point average (1)
- hospital problem (1)
- human capital theory (1)
- implementation of school reform (1)
- individual (1)
- instructional quality (1)
- internal/external frame-of-reference model (1)
- late (1)
- life span research (1)
- lifespan (1)
- logical thinking (1)
- longitudinal data (1)
- machine learning (1)
- mathematics (1)
- measurement invariance (1)
- meta-analysis (1)
- model (1)
- nonlinear (1)
- nonlinear relations (1)
- openness to reform (1)
- participant data (1)
- personality traits (1)
- problem (1)
- reading (1)
- school composition (1)
- school effectiveness (1)
- school quality (1)
- socioeconomic status (1)
- socioeconomic student composition (1)
- statistical reasoning (1)
- value-added modeling (1)
- wage dynamics (1)
Three prerequisites should be met when schools' instructional quality is analyzed on the basis of student ratings: (1) adequate agreement among student ratings within schools, (2) systematic variability of student ratings between schools, and (3) sufficient reliability of the aggregated ratings. Using international PISA data (cycles 2000–2012; 81 countries, more than 55,300 schools, more than 1.3 million 15-year-olds), this study examines the extent to which this holds for indicators of the quality dimensions of teaching (classroom management, cognitive activation, constructive support). To this end, we computed the agreement index rWG(J) as well as the intraclass correlations ICC(1) and ICC(2). The results showed that (1) the majority of the instructional characteristics exhibited moderate or strong agreement within schools, and (2) instructional characteristics, as perceived by students, differed systematically between schools, but (3) the reliability of the aggregated student ratings was insufficient in many countries. We discuss the results against the background of conventions for judging agreement, variability, and reliability at the school level.
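The three indices named above follow standard formulas: rWG(J) compares the mean observed item variance against the variance of a uniform null distribution, while ICC(1) and ICC(2) derive from a one-way ANOVA of student ratings grouped by school. A minimal pure-Python sketch with made-up ratings (the schools, scale, and values are illustrative assumptions, not PISA data):

```python
from statistics import mean, variance

# Toy data: student ratings (1-4 Likert scale) of J = 2 instructional-quality
# items, grouped by school. Each inner list holds one student's J ratings.
schools = {
    "A": [[3, 4], [4, 4], [3, 3], [4, 3]],
    "B": [[1, 2], [2, 1], [2, 2], [1, 1]],
    "C": [[2, 3], [3, 3], [2, 2], [3, 2]],
}

def rwg_j(ratings, n_categories=4):
    """Within-group agreement index rWG(J) under a uniform null distribution."""
    J = len(ratings[0])
    var_null = (n_categories ** 2 - 1) / 12          # variance of the uniform null
    # mean observed (sample) variance across the J items
    s2 = mean(variance([r[j] for r in ratings]) for j in range(J))
    ratio = s2 / var_null
    return (J * (1 - ratio)) / (J * (1 - ratio) + ratio)

def icc(groups):
    """ICC(1) and ICC(2) from a one-way ANOVA on school-grouped scores."""
    k = mean(len(g) for g in groups)                 # (average) group size
    grand = mean(x for g in groups for x in g)
    ms_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups) / (len(groups) - 1)
    ms_within = (sum((x - mean(g)) ** 2 for g in groups for x in g)
                 / sum(len(g) - 1 for g in groups))
    icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    icc2 = (ms_between - ms_within) / ms_between
    return icc1, icc2

# Per-student item means, pooled per school, serve as the ANOVA input.
groups = [[mean(r) for r in ratings] for ratings in schools.values()]
icc1, icc2 = icc(groups)
agreement = {s: rwg_j(r) for s, r in schools.items()}
```

With these toy values, agreement within each school is high and between-school variance dominates, so both ICCs come out large; in real data, ICC(2) additionally depends on the number of students sampled per school.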
In the present paper we empirically investigate the psychometric properties of some of the most famous statistical and logical cognitive illusions from the "heuristics and biases" research program by Daniel Kahneman and Amos Tversky, who nearly 50 years ago introduced fascinating brain teasers such as the famous Linda problem, the Wason card selection task, and so-called Bayesian reasoning problems (e.g., the mammography task). In the meantime, a great number of articles have been published that empirically examine single cognitive illusions, theoretically explain people's faulty thinking, or propose and experimentally implement measures to foster insight and to make these problems accessible to the human mind. Yet these problems have thus far usually been empirically analyzed on an individual-item level only (e.g., by experimentally comparing participants' performance on various versions of one of these problems). In this paper, by contrast, we examine these illusions as a group and look at the ability to solve them as a psychological construct. Based on a sample of N = 2,643 Luxembourgish school students aged 16-18, we investigate the internal psychometric structure of these illusions (i.e., Are they substantially correlated? Do they form a reflective or a formative construct?), their connection to related constructs (e.g., Are they distinguishable from intelligence or mathematical competence in a confirmatory factor analysis?), and the question of which of a person's abilities can predict the correct solution of these brain teasers (by means of a regression analysis).
The aim of educational policy should be to provide a good education to all students. A key question thus concerns the extent to which central characteristics of school composition (proportion of students with a migration background, socioeconomic status [SES], prior school achievement, and achievement heterogeneity), instructional quality, school quality, and later school achievement are interrelated. The present study addressed this research question by examining school inspection data, official school statistics, and large-scale achievement data from all primary schools in Berlin, Germany (N = 343). The results of correlation and path analyses showed that school composition (average SES, average prior school achievement) predicted components of instructional quality (SES: classroom management, cognitive activation; achievement: cognitive activation, individual learning support). The relation between school composition characteristics and most components of school quality was close to zero. Contrary to our expectations, only the effect of school SES on later achievement was mediated by instructional quality.
Low-achieving students are at risk of experiencing a pattern of emotional, motivational, and cognitive deficits called school-related helplessness if they attribute their low achievement to low aptitude. Teachers' beliefs about the causes of students' low achievement are important sources of attributional information for students. In a sample of 2,117 German ninth-graders attending the lowest track and their 118 math and 129 German-language teachers, we tested whether teachers' beliefs about the extent to which aptitude causes achievement moderated the achievement-helplessness relation in students, and whether there were differences between math and German. Multilevel analyses revealed that low prior achievement predicted higher helplessness in both subjects, but the effect was stronger in math than in German. Teachers' beliefs amplified the achievement-helplessness relation in math but not in German. Results are discussed regarding domain-specific epistemological beliefs, and implications for research and practice are derived.
We assessed teacher educators' task perception and investigated its relationship with components of their professional identity and their teaching practice. Using data from 145 teacher educators, two different task perceptions were found: transmitters and facilitators. Teacher educators who were categorized as facilitators tended to demonstrate higher levels of self-efficacy, job satisfaction, and constructivist beliefs about teaching and learning, and to use more effective teaching strategies. The findings demonstrate that the teaching practices of teacher educators are rooted in their professional identity. © 2021 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Personality is a relevant predictor of important life outcomes across the entire lifespan. Although previous studies have suggested the comparability of the measurement of the Big Five personality traits across adulthood, the generalizability to childhood is largely unknown. The present study investigated the structure of the Big Five personality traits assessed with the Big Five Inventory-SOEP Version (BFI-S; SOEP = Socio-Economic Panel) across a broad age range spanning 11-84 years. We used two samples of N = 1,090 children (52% female, mean age = 11.87 years) and N = 18,789 adults (53% female, mean age = 51.09 years), estimating multigroup confirmatory factor analyses across four age groups (late childhood: 11-14 years; early adulthood: 17-30 years; middle adulthood: 31-60 years; late adulthood: 61-84 years). Our results indicated the comparability of the personality trait metric in terms of general factor structure, loading patterns, and the majority of intercepts across all age groups. Therefore, the findings suggest both a reliable assessment of the Big Five personality traits with the BFI-S even in late childhood and a vastly comparable metric across age groups.
In many countries, students are asked about their perceptions of teaching in order to make decisions about the further development of teaching practices on the basis of this feedback. The stability of this measurement of teaching quality is a prerequisite for the ability to generalize the results to other teaching situations. The present study aims to expand the extant empirical body of knowledge on the effects of situational factors on the stability of students' perceptions of teaching quality. Therefore, we investigate whether the degree of stability is moderated by three situational factors: the time between assessments, the subjects taught by teachers, and students' grade levels. To this end, we analyzed data from a web-based student feedback system. The study involved 497 teachers, each of whom conducted two student surveys. We examined the differential stability of student perceptions of 16 teaching constructs, operationalized as latent correlations between aggregated student perceptions of the same teacher's teaching. Tests of metric invariance indicated that student ratings provided measures of teaching constructs that were invariant across time, subjects, and grade levels. Stability was moderated to some extent by grade level, but not by the subjects taught or the time between surveys. The results provide evidence of the extent to which situational factors may affect the stability of student perceptions of teaching constructs. The generalizability of students' feedback results to other teaching situations is discussed.
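The core of such a stability analysis — aggregating student ratings to the teacher level at each of two survey occasions and correlating the aggregates — can be sketched as follows. The latent-variable modeling used in the study is beyond a short example, so this manifest analogue uses invented teachers and ratings:

```python
from statistics import mean

# Hypothetical feedback data: per teacher, student ratings of one teaching
# construct at two survey occasions (illustrative values, not the study's data).
ratings_t1 = {"T1": [3, 4, 3, 4], "T2": [2, 2, 3, 2], "T3": [4, 4, 5, 4], "T4": [3, 3, 2, 3]}
ratings_t2 = {"T1": [4, 4, 3, 3], "T2": [2, 3, 2, 2], "T3": [4, 5, 4, 4], "T4": [2, 3, 3, 2]}

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Aggregate to the teacher level, then correlate the aggregates across occasions:
# a manifest analogue of the latent stability coefficient.
teachers = sorted(ratings_t1)
agg1 = [mean(ratings_t1[t]) for t in teachers]
agg2 = [mean(ratings_t2[t]) for t in teachers]
stability = pearson(agg1, agg2)
```

Because the invented teachers keep their rank order between occasions, the resulting stability coefficient is high; moderation by grade level would correspond to computing this coefficient separately per subgroup and comparing.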
It is well-documented that academic achievement is associated with students' self-perceptions of their academic abilities, that is, their academic self-concepts. However, low-achieving students may apply self-protective strategies to maintain a favorable academic self-concept when evaluating their academic abilities. Consequently, the relation between achievement and academic self-concept might not be linear across the entire achievement continuum. Capitalizing on representative data from three large-scale assessments (i.e., TIMSS, PIRLS, PISA; N = 470,804), we conducted an integrative data analysis to address nonlinear trends in the relations between achievement and the corresponding self-concepts in mathematics and the verbal domain across 13 countries and 2 age groups (i.e., elementary and secondary school students). Polynomial and interrupted regression analyses showed nonlinear relations in secondary school students, demonstrating that the relations between achievement and the corresponding self-concepts were weaker for lower achieving students than for higher achieving students. Nonlinear effects were also present in younger students, but the pattern of results was rather heterogeneous. We discuss implications for theory as well as for the assessment and interpretation of self-concept.
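Interrupted (piecewise) regression of the kind described above estimates separate slopes below and above a knot in the achievement continuum. A minimal sketch with invented achievement/self-concept pairs, placing the knot at x = 0 (both the data and the knot location are assumptions for illustration):

```python
from statistics import mean

# Illustrative standardized achievement (x) and self-concept (y) pairs with a
# deliberately weaker relation below the knot, mimicking the reported pattern.
data = [(-2.0, -0.3), (-1.5, -0.2), (-1.0, -0.2), (-0.5, -0.1),
        (0.0, 0.0), (0.5, 0.3), (1.0, 0.55), (1.5, 0.9), (2.0, 1.2)]

def ols_slope(points):
    """Slope of a simple least-squares regression of y on x."""
    xs = [p[0] for p in points]
    mx, my = mean(xs), mean(p[1] for p in points)
    return (sum((x - mx) * (y - my) for x, y in points)
            / sum((x - mx) ** 2 for x in xs))

knot = 0.0
# The knot point belongs to both segments so the fitted lines meet there.
low = [p for p in data if p[0] <= knot]
high = [p for p in data if p[0] >= knot]
slope_low, slope_high = ols_slope(low), ols_slope(high)
```

A markedly smaller slope below the knot than above it is the signature of the self-protective flattening described in the abstract; polynomial regression would instead capture the same nonlinearity with a quadratic term.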
Value-added (VA) modeling can be used to quantify teacher and school effectiveness by estimating the effect of pedagogical actions on students’ achievement. It is gaining increasing importance in educational evaluation, teacher accountability, and high-stakes decisions. We analyzed 370 empirical studies on VA modeling, focusing on modeling and methodological issues to identify key factors for improvement. The studies stemmed from 26 countries (68% from the USA). Most studies applied linear regression or multilevel models. Most studies (i.e., 85%) included prior achievement as a covariate, but only 2% included noncognitive predictors of achievement (e.g., personality or affective student variables). Fifty-five percent of the studies did not apply statistical adjustments (e.g., shrinkage) to increase precision in effectiveness estimates, and 88% included no model diagnostics. We conclude that research on VA modeling can be significantly enhanced regarding the inclusion of covariates, model adjustment and diagnostics, and the clarity and transparency of reporting.
What is the added value from attending a certain school or being taught by a certain teacher? To answer this question, the value-added (VA) model was developed. In this model, the actual achievement attained by students attending a certain school or being taught by a certain teacher is juxtaposed with the achievement that is expected for students with the same background characteristics (e.g., pretest scores). To this end, the VA model can be used to compute a VA score for each school or teacher, respectively. If actual achievement is better than expected achievement, there is a positive effect (i.e., a positive VA score) of attending a certain school or being taught by a certain teacher. In other words, VA models have been developed to “make fair comparisons of the academic progress of pupils in different settings” (Tymms 1999, p. 27). Their aim is to operationalize teacher or school effectiveness objectively. Specifically, VA models are often used for accountability purposes and high-stakes decisions (e.g., to allocate financial or personnel resources to schools or even to decide which teachers should be promoted or discharged). Consequently, VA modeling is a highly political topic, especially in the USA, where many states have implemented VA or VA-based models for teacher evaluation (Amrein-Beardsley and Holloway 2017; Kurtz 2018). However, this use for high-stakes decisions is highly controversial, and researchers seem to disagree about whether VA scores should be used for decision-making (Goldhaber 2015). For a more exhaustive discussion of the use of VA models for accountability reasons, see, for example, Scherrer (2011).
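The logic of a basic VA score — regress posttest on pretest across all students, then average each school's residuals — can be sketched as follows. The school labels and scores are invented, and real VA models typically add further covariates plus shrinkage adjustments:

```python
from statistics import mean

# Hypothetical (pretest, posttest) achievement pairs per school.
schools = {
    "S1": [(48, 55), (52, 58), (50, 57)],
    "S2": [(60, 61), (58, 60), (62, 63)],
    "S3": [(45, 44), (47, 46), (46, 44)],
}

# Fit a pooled simple regression of posttest on pretest (expected achievement).
pooled = [p for students in schools.values() for p in students]
xs = [p[0] for p in pooled]
mx, my = mean(xs), mean(p[1] for p in pooled)
slope = (sum((x - mx) * (y - my) for x, y in pooled)
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

def va_score(students):
    """Mean gap between actual and expected posttest scores for one school."""
    return mean(post - (intercept + slope * pre) for pre, post in students)

va = {school: va_score(students) for school, students in schools.items()}
# A positive score means students did better than expected given their pretests.
best = max(va, key=va.get)
```

With these invented numbers, school S1 outperforms its pretest-based expectation while S3 underperforms; ranking schools on raw achievement alone would instead favor the school with the strongest intake (S2).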
Given the far-reaching impact of VA scores, it is surprising that there is a scarcity of systematic reviews of how VA scores are computed and evaluated, and how this research is reported. To address this gap, we review 370 empirical studies from 26 countries to rigorously examine several key issues in VA modeling, involving (a) the statistical model (e.g., linear regression, multilevel model) that is used, (b) the model diagnostics and reported statistical parameters that are used to evaluate the quality of the VA model, (c) the statistical adjustments that are made to overcome methodological challenges (e.g., measurement error of the outcome variables), and (d) the covariates (e.g., pretest scores, students’ sociodemographic background) that are used when estimating expected achievement.
All this information is critical for meeting the transparency standards defined by the American Educational Research Association (AERA 2006). Transparency is vital for educational research in general and especially for highly consequential research, such as VA modeling. First, transparency is highly relevant for researchers. The clearer the description of the model, the easier it is to build upon the knowledge of previous research and to safeguard the potential for replicating previous results. Second, because decisions that are based on VA scores affect teachers’ lives and schools’ futures, not only educational agents but also the general public should be able to comprehend how these scores are calculated to allow for public scrutiny. Specifically, given that VA scores can have devastating consequences for teachers’ lives and for the students they teach, transparency is particularly important for evaluating the methodology chosen to compute VA scores for a given purpose. Such evaluations are essential to answer the question of the extent to which the quality of VA scores justifies basing far-reaching decisions on them for accountability purposes.