The aim of educational policy should be to provide a good education to all students. Thus, a key question arises regarding the extent to which key characteristics of school composition (proportion of students with a migration background, socioeconomic status [SES], prior school achievement, and achievement heterogeneity), instructional quality, school quality, and later school achievement are interrelated. The present study addressed this research question by examining school inspection data, official school statistics, and large-scale achievement data from all primary schools in Berlin, Germany (N = 343). The results of correlation and path analyses showed that school composition (average SES, average prior school achievement) predicted components of instructional quality (SES: classroom management, cognitive activation; achievement: cognitive activation, individual learning support). The relation between school composition characteristics and most components of school quality was close to zero. Contrary to our expectations, only the effect of school SES on later achievement was mediated by instructional quality.
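The mediation logic described above (school composition → instructional quality → later achievement) can be sketched with two least-squares regressions. This is a minimal illustration on simulated school-level data; the variable names, effect sizes, and data are assumptions for the sketch, not the study's actual variables or results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 343  # number of schools, matching the study's sample size

# Simulated school-level data (illustrative only)
ses = rng.normal(size=n)                      # average school SES
quality = 0.5 * ses + rng.normal(size=n)      # instructional quality (mediator)
achievement = 0.4 * quality + 0.1 * ses + rng.normal(size=n)

def ols_coefs(X, y):
    """Least-squares coefficients of y regressed on X (first column = intercept)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Path a: predictor -> mediator
a = ols_coefs(np.column_stack([np.ones(n), ses]), quality)[1]
# Path b: mediator -> outcome, controlling for the predictor
b = ols_coefs(np.column_stack([np.ones(n), quality, ses]), achievement)[1]

indirect = a * b  # the mediated (indirect) effect of SES on achievement
print("indirect effect:", round(indirect, 3))
```

In a full path analysis the significance of `indirect` would typically be assessed with bootstrapped confidence intervals rather than a point estimate alone.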
Differentiation of intelligence refers to changes in the structure of intelligence that depend on individuals' level of general cognitive ability (ability differentiation hypothesis) or age (developmental differentiation hypothesis). The present article aimed to investigate ability differentiation, developmental differentiation, and their interaction with nonlinear factor analytic models in 2 studies. Study 1 comprised a nationally representative sample of 7,127 U.S. students (49.4% female; M_age = 14.51, SD = 1.42, range = 12.08-17.00) who completed the computerized adaptive version of the Armed Services Vocational Aptitude Battery. Study 2 analyzed the norming sample of the Berlin Intelligence Structure Test with 1,506 German students (44% female; M_age = 14.54, SD = 1.35, range = 10.00-18.42). Results of Study 1 supported the ability differentiation hypothesis but not the developmental differentiation hypothesis. Rather, the findings pointed to age dedifferentiation (i.e., higher correlations between different abilities with increasing age). There was evidence for an interaction between age and ability differentiation, with greater ability differentiation found for older adolescents. Study 2 provided little evidence for ability differentiation but largely replicated the findings for age dedifferentiation and the interaction between age and ability differentiation. The present results provide insight into the complex dynamics underlying the development of intelligence structure during adolescence. Implications for the assessment of intelligence are discussed.
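The ability differentiation hypothesis — that the general factor accounts for less subtest variance at higher ability levels, so subtests correlate more weakly among high-ability individuals — can be illustrated with a small simulation. Everything here (the data-generating model, the median split on the latent factor, the sample size) is an assumption for the sketch; the article itself uses nonlinear factor analytic models, not this shortcut.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000

# Latent general ability; specific (non-g) variance is larger above the median,
# which is the pattern the ability differentiation hypothesis predicts.
g = rng.normal(size=n)
noise_sd = np.where(g >= 0, 1.0, 0.5)
test1 = g + noise_sd * rng.normal(size=n)
test2 = g + noise_sd * rng.normal(size=n)

# Compare subtest correlations in the lower vs. upper ability half
low, high = g < 0, g >= 0
r_low = np.corrcoef(test1[low], test2[low])[0, 1]
r_high = np.corrcoef(test1[high], test2[high])[0, 1]
print(f"r (lower half) = {r_low:.2f}, r (upper half) = {r_high:.2f}")
```

Under this data-generating model the correlation is visibly weaker in the upper ability half; a real analysis would model the loadings as a continuous function of ability rather than splitting the sample.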
There is no consensus on which statistical model estimates school value-added (VA) most accurately. To date, the two most common statistical models used for the calculation of VA scores are two classical methods: linear regression and multilevel models. These models have the advantage of being relatively transparent and thus understandable for most researchers and practitioners. However, these statistical models are bound to certain assumptions (e.g., linearity) that might limit their prediction accuracy. Machine learning methods, which have yielded spectacular results in numerous fields, may be a valuable alternative to these classical models. Although big data is not new in general, it is relatively new in the realm of social sciences and education. New types of data require new data analytical approaches. Such techniques have already evolved in fields with a long tradition in crunching big data (e.g., gene technology). The objective of the present paper is to apply these "imported" techniques to education data, more precisely VA scores, and to assess when and how they can extend or replace the classical psychometric toolbox. The different models include linear and nonlinear methods and extend classical models with the most commonly used machine learning methods (i.e., random forest, neural networks, support vector machines, and boosting). We used representative data of 3,026 students in 153 schools who took part in the standardized achievement tests of the Luxembourg School Monitoring Program in grades 1 and 3. Multilevel models outperformed classical linear and polynomial regressions, as well as the different machine learning models. However, across all schools, VA scores from the different model types correlated highly. Yet the percentage of disagreements with the multilevel models was not trivial, and the real-life implications for individual schools may still be substantial depending on the model type used. Implications of these results and possible ethical concerns regarding the use of machine learning methods for decision-making in education are discussed.
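The classical linear-regression approach to VA that the paper takes as a baseline can be sketched in a few lines: regress later achievement on prior achievement, then average each school's residuals, so schools whose students score above the regression line receive positive VA. The simulated data, school sizes, and effect magnitudes below are assumptions for illustration, not the Luxembourg School Monitoring Program data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_schools, per_school = 153, 20  # school count follows the study; class size is illustrative
school = np.repeat(np.arange(n_schools), per_school)
school_effect = rng.normal(scale=0.3, size=n_schools)  # "true" VA in the simulation

# Simulated grade-1 and grade-3 standardized scores
grade1 = rng.normal(size=school.size)
grade3 = 0.7 * grade1 + school_effect[school] + rng.normal(scale=0.5, size=school.size)

# Classical VA: residuals from regressing grade-3 on grade-1, averaged per school
X = np.column_stack([np.ones_like(grade1), grade1])
beta, *_ = np.linalg.lstsq(X, grade3, rcond=None)
residuals = grade3 - X @ beta
va = np.array([residuals[school == s].mean() for s in range(n_schools)])

r = np.corrcoef(va, school_effect)[0, 1]
print("correlation of estimated VA with true school effects:", round(r, 2))
```

A multilevel model would instead treat the school effects as random intercepts, shrinking VA estimates for small schools toward zero; the machine learning variants the paper examines replace the linear prediction step while keeping the residual-aggregation logic.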