The Wisconsin Card Sorting Test (WCST) is used to test higher-level executive functions or switching, depending on the measures chosen in a study and its goal. Many measures can be extracted from the WCST, but how to assign them to specific cognitive skills remains unclear. Thus, the current study first aimed to identify which measures test the same cognitive abilities. Second, we compared the performance of mono- and multilingual children on the identified abilities, because there is some evidence that bilingualism can improve executive functions. We tested 66 monolingual and 56 multilingual (i.e., bi- and trilingual) primary school children (mean age = 109 months) with an online version of the classic WCST. A principal component analysis revealed four factors: problem solving, monitoring, efficient errors, and perseverations. Because the assignment of measures to factors is only partially coherent across the literature, we identified this as one of the sources of task impurity. In the second part, we ran regression analyses to test for group differences while controlling for intelligence as a predictor of executive functions and for confounding variables such as age, German lexicon size, and socioeconomic status. Intelligence predicted problem solving and perseverations. On the monitoring component (measured by the reaction times preceding a rule switch), multilinguals outperformed monolinguals, supporting the view that bi- or multilingualism can improve processing speed related to monitoring.
Answer Set Programming (ASP) is a paradigm for modeling and solving problems in knowledge representation and reasoning. There are plenty of results dedicated to studying the hardness of (fragments of) ASP. So far, these studies have resulted in characterizations in terms of computational complexity as well as in fine-grained insights presented in the form of dichotomy-style results, lower bounds for translations to other formalisms like propositional satisfiability (SAT), and even detailed parameterized complexity landscapes. A generic parameter in parameterized complexity originating from graph theory is the so-called treewidth, which in a sense captures the structural density of a program. Recently, there has been an increase in the number of treewidth-based solvers related to SAT. While there are translations from (normal) ASP to SAT, no reduction that preserves treewidth, or at least keeps track of the treewidth increase, is known. In this paper we propose a novel reduction from normal ASP to SAT that is aware of the treewidth and guarantees that a slight increase of treewidth is indeed sufficient. Further, we show a new result establishing that, when considering treewidth, already the fragment of normal ASP is slightly harder than SAT (under reasonable assumptions in computational complexity). This also confirms that our reduction probably cannot be significantly improved and that the slight increase of treewidth is unavoidable. Finally, we present an empirical study of our novel reduction from normal ASP to SAT, where we compare treewidth upper bounds that are obtained via known decomposition heuristics. Overall, our reduction works better with these heuristics than existing translations. (c) 2021 Elsevier B.V. All rights reserved.
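The empirical part compares treewidth upper bounds obtained via decomposition heuristics. A classic example of such a heuristic is min-degree elimination: repeatedly eliminate a minimum-degree vertex, turning its neighborhood into a clique; the largest neighborhood eliminated bounds the treewidth from above. The sketch below is a generic illustration of that heuristic, not the tooling used in the paper.

```python
def min_degree_width(edges):
    """Upper-bound the treewidth of an undirected graph with the
    min-degree elimination heuristic."""
    # Build adjacency sets.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    width = 0
    while adj:
        # Eliminate a vertex of minimum current degree.
        v = min(adj, key=lambda x: len(adj[x]))
        nbrs = adj.pop(v)
        width = max(width, len(nbrs))
        # Turn the eliminated vertex's neighborhood into a clique.
        for u in nbrs:
            adj[u].discard(v)
            adj[u] |= nbrs - {u}
    return width

# A cycle has treewidth 2; a path (a tree) has treewidth 1.
cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
path = [(0, 1), (1, 2), (2, 3)]
print(min_degree_width(cycle))  # 2
print(min_degree_width(path))   # 1
```

Applied to the primal graph of a program (or of a SAT encoding), such a heuristic yields the treewidth upper bounds that the paper's experiments compare across translations.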
The identification of vulnerabilities in IT infrastructures is a crucial step in enhancing security, because many incidents have resulted from already known vulnerabilities that could have been resolved. The initial identification of vulnerabilities must therefore be used to directly resolve the related weaknesses and mitigate attack possibilities. The nature of vulnerability information requires collecting and normalizing the information prior to any use, because it is widely distributed across different sources, each with its own format. Therefore, a comprehensive vulnerability model was defined, and the different sources were integrated into one database. Furthermore, different analytic approaches were designed and implemented in the HPI-VDB that directly benefit from the comprehensive vulnerability model, especially from the logical preconditions and postconditions.
First, different approaches to detecting vulnerabilities both in the IT systems of average users and in the corporate networks of large companies are presented. These approaches mainly focus on identifying all installed applications, since this is a fundamental step in the detection. The detection is realized differently depending on the target use case, taking into account the experience of the user as well as the layout and capabilities of the target infrastructure. Furthermore, a passive, lightweight detection approach was developed that utilizes existing information in corporate networks to identify applications.
In addition, two different approaches to representing the results with attack graphs are illustrated by comparing traditional attack graphs with a simplified graph version, which was also integrated into the database. The implementation of these use cases for vulnerability information pays particular attention to usability. Besides the analytic approaches, high data quality of the vulnerability information had to be achieved and guaranteed. The problems of receiving incomplete or unreliable vulnerability information are addressed with different correction mechanisms; the corrections can be carried out with correlation or lookup mechanisms against reliable sources or identifier dictionaries. Furthermore, a machine-learning-based verification procedure is presented that allows important characteristics to be derived automatically from the textual description of the vulnerabilities.
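Deriving a characteristic from a vulnerability's free-text description is, at its core, a text-classification problem. The sketch below illustrates the idea with a tiny multinomial naive Bayes classifier that guesses the attack vector (remote vs. local) from a description. All training texts and labels are invented; the actual HPI-VDB procedure and its feature set are not reproduced here.

```python
import math
from collections import Counter, defaultdict

# Invented training data: free-text descriptions labeled with the
# "attack vector" characteristic to be derived.
train = [
    ("remote attackers can execute arbitrary code via crafted packets", "remote"),
    ("allows remote code execution through a malformed http request", "remote"),
    ("local users can gain privileges via a setuid binary", "local"),
    ("a local attacker may escalate privileges using a race condition", "local"),
]

# Count word occurrences per class (multinomial naive Bayes).
word_counts = defaultdict(Counter)
class_counts = Counter()
vocab = set()
for text, label in train:
    class_counts[label] += 1
    for w in text.split():
        word_counts[label][w] += 1
        vocab.add(w)

def classify(text):
    """Pick the class with the highest log-probability,
    using add-one smoothing for unseen words."""
    scores = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("remote attackers can execute code via a crafted request"))
```

A production verification procedure would of course train on real vulnerability records and richer features, but the principle of scoring a description against per-characteristic word statistics is the same.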
The sharing economy (2020)
Purpose: Quantitative bibliometric approaches were used to statistically and objectively explore patterns in the sharing economy literature.
Design/methodology/approach: Journal (co-)citation analysis, author (co-)citation analysis, institution citation and co-operation analysis, keyword co-occurrence analysis, document (co-)citation analysis and burst detection analysis were conducted based on a bibliometric data set relating to sharing economy publications.
Findings: Sharing economy research is multi- and interdisciplinary. Journals focused upon products liability, organizing framework, profile characteristics, diverse economies, consumption system and everyday life themes. Authors focused upon profile characteristics, sharing economy organization, social connections, first principle and diverse economy themes. No institution dominated the research field. Keyword co-occurrence analysis identified organizing framework, tourism industry, consumer behavior, food waste, generous exchange and quality cue as research themes. Document co-citation analysis found research themes relating to the tourism industry, exploring public acceptability, agri-food system, commercial orientation, products liability and social connection. Most cited authors, institutions and documents are reported.
Research limitations/implications: The study did not exclusively focus on publications in top-tier journals. Future studies could run analyses relating to top-tier journals alone, and then run analyses relating to less renowned journals alone. To address the potential fuzzy results concern, reviews could focus on business and/or management research alone. Longitudinal reviews conducted over several points in time are warranted. Future reviews could combine qualitative and quantitative approaches.
Originality/value: We contribute by analyzing information relating to the population of all sharing economy articles. In addition, we contribute by employing several quantitative bibliometric approaches that enable the identification of trends relating to the themes and patterns in the growing literature.
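Keyword co-occurrence analysis, one of the bibliometric approaches used above, boils down to counting how often pairs of keywords appear together on the same publication. A minimal sketch, with invented keyword lists standing in for a real bibliometric data set:

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists, one per publication.
publications = [
    ["sharing economy", "tourism industry", "consumer behavior"],
    ["sharing economy", "consumer behavior", "food waste"],
    ["sharing economy", "tourism industry"],
    ["collaborative consumption", "consumer behavior"],
]

# Count each unordered keyword pair once per publication.
cooccurrence = Counter()
for keywords in publications:
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

for pair, count in cooccurrence.most_common(3):
    print(pair, count)
```

The resulting pair counts form the co-occurrence network whose densest clusters are read off as research themes (here, "sharing economy" with "consumer behavior" and with "tourism industry").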
Teacher self-efficacy and teacher interest are two key facets of teacher motivation that are important for high-quality teaching. Little is known about the relative strength of the effects of teacher self-efficacy and interest on teaching quality when compared with one another. We extend previous research on teacher motivation by examining the relations linking mathematics teachers' self-efficacy and interest with several relevant dimensions of teaching quality as perceived by teachers and students. Participants were 84 mathematics teachers (61.2% female) and their students (1718 students; 48.5% girls). Based on doubly latent multilevel models, we found that teacher-reported self-efficacy in instruction was positively related to teacher-reported cognitive activation, classroom management, and emotional support in mathematics classrooms. Teacher-reported educational interest showed positive associations with both student- and teacher-perceived emotional support. Future research is advised to focus more strongly on the unique relations between teachers' different motivational characteristics and relevant dimensions of teaching quality.
This study deals with the East Beni Suef Basin (Eastern Desert, Egypt) and aims to evaluate the source-generative potential, reconstruct the burial and thermal history, examine the most influential parameters on thermal maturity modeling, and improve on the models already published for the West Beni Suef to ultimately formulate a complete picture of the whole basin evolution.
Source rock evaluation was carried out based on TOC, Rock-Eval pyrolysis, and visual kerogen petrography analyses. Three kerogen types (II, II/III, and III) are distinguished in the East Beni Suef Basin, where the Abu Roash "F" Member acts as the main source rock, with good to excellent source potential, mainly oil-prone type II kerogen, and immature to marginally mature levels.
The burial history shows four depositional and erosional phases linked to the tectonic evolution of the basin. A hiatus (due to erosion or non-deposition) occurred during the Late Eocene-Oligocene in the East Beni Suef Basin, whereas the West Beni Suef Basin continued to subside.
Sedimentation began later (Middle to Late Albian) with lower rates in the East Beni Suef Basin compared with the West Beni Suef Basin (Early Albian). The Abu Roash "F" source rock exists in the early oil window with a present-day transformation ratio of about 19% and 21% in the East and West Beni Suef Basin, respectively, while the Lower Kharita source rock, which is only recorded in the West Beni Suef Basin, has reached the late oil window with a present-day transformation ratio of about 70%.
The magnitude of erosion and the heat flow have proportional and mutually reinforcing effects on thermal maturity.
We present three possible scenarios of basin modeling in the East Beni Suef Basin concerning the erosion from the Apollonia and Dabaa formations.
Results of this work can serve as a basis for subsequent 2D and/or 3D basin modeling, which are highly recommended to further investigate the petroleum system evolution of the Beni Suef Basin.
A numerical framework is developed to study the hysteresis of elastic properties of porous ceramics as a function of temperature. The developed numerical model is capable of employing experimentally measured crystallographic orientation distribution and coefficient of thermal expansion values. For realistic modeling of the microstructure, Voronoi polygons are used to generate polycrystalline grains. Some grains are considered as voids, to simulate the material porosity. To model intercrystalline cracking, cohesive elements are inserted along grain boundaries. Crack healing (recovery of the initial properties) upon closure is taken into account with special cohesive elements implemented in the commercial code ABAQUS. The numerical model can be used to estimate fracture properties governing the cohesive behavior through an inverse analysis procedure. The model is applied to a porous cordierite ceramic. The obtained fracture properties are further used to successfully simulate general non-linear macroscopic stress-strain curves of cordierite, thereby validating the model.
Sonority is a fundamental notion in phonetics and phonology, central to many descriptions of the syllable and various useful predictions in phonotactics. Although widely accepted, sonority lacks a clear basis in speech articulation or perception, given that traditional formal principles in linguistic theory are often exclusively based on discrete units in symbolic representation and are typically not designed to be compatible with auditory perception, sensorimotor control, or general cognitive capacities. In addition, traditional sonority principles also exhibit systematic gaps in empirical coverage. Against this backdrop, we propose the incorporation of symbol-based and signal-based models to adequately account for sonority in a complementary manner. We claim that sonority is primarily a perceptual phenomenon related to pitch, driving the optimization of syllables as pitch-bearing units in all language systems. We suggest a measurable acoustic correlate for sonority in terms of periodic energy, and we provide a novel principle that can account for syllabic well-formedness, the nucleus attraction principle (NAP). We present perception experiments that test our two NAP-based models against four traditional sonority models, and we use a Bayesian data analysis approach to test and compare them. Our symbolic NAP model outperforms all the other models we test, while our continuous bottom-up NAP model is in second place, along with the best performing traditional models. We interpret the results as providing strong support for our proposals: (i) the designation of periodic energy as the acoustic correlate of sonority; (ii) the incorporation of continuous entities in phonological models of perception; and (iii) the dual-model strategy that separately analyzes symbol-based top-down processes and signal-based bottom-up processes in speech perception.
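The intuition behind periodic energy as a sonority correlate can be sketched at the frame level: weight a frame's acoustic energy by how periodic the frame is, so that loud, voiced (vowel-like) frames score high and noise-like frames score low. The toy computation below uses a normalized autocorrelation peak as the periodicity measure; it is only an illustration of the idea, not the paper's actual extraction pipeline.

```python
import math
import random

def periodicity(frame, min_lag, max_lag):
    """Peak normalized autocorrelation over candidate pitch lags:
    close to 1 for periodic (voiced) frames, lower for noise."""
    best = 0.0
    for lag in range(min_lag, max_lag + 1):
        n = len(frame) - lag
        num = sum(frame[i] * frame[i + lag] for i in range(n))
        den = math.sqrt(sum(frame[i] ** 2 for i in range(n)) *
                        sum(frame[i + lag] ** 2 for i in range(n)))
        if den > 0:
            best = max(best, num / den)
    return best

def periodic_energy(frame, min_lag=50, max_lag=200):
    """Frame energy weighted by periodicity: high only for frames
    that are both loud and periodic."""
    energy = sum(s * s for s in frame)
    return energy * periodicity(frame, min_lag, max_lag)

random.seed(0)
# A sine with a 100-sample period (periodic, vowel-like) vs. white noise.
sine = [math.sin(2 * math.pi * i / 100) for i in range(800)]
noise = [random.uniform(-1, 1) for _ in range(800)]
print(periodic_energy(sine) > periodic_energy(noise))  # True
```

On this toy signal the sine frame scores far higher than the noise frame, mirroring how sonorous segments (vowels, nasals) would outrank obstruents along a continuous, signal-based scale.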
Stars are uniform spheres, but only to first order. The way in which stellar rotation and magnetism break this symmetry places important observational constraints on stellar magnetic fields, and factors in the assessment of the impact of stellar activity on exoplanet atmospheres. The spatial distribution of flares on the solar surface is well known to be nonuniform, but elusive on other stars. We briefly review the techniques available to recover the loci of stellar flares, and highlight a new method that enables systematic flare localization directly from optical light curves. We provide an estimate of the number of flares we may be able to localize with the Transiting Exoplanet Survey Satellite, and show that it is consistent with the results obtained from the first full sky scan of the mission. We suggest that nonuniform flare latitude distributions need to be taken into account in accurate assessments of exoplanet habitability.
Effective professional development programs (PDPs) rely on well-defined goals. However, recent studies on PDPs have not explored the goals from a multi-stakeholder perspective. This study identifies the most important learning goals of PDPs at science research institutions as perceived by four groups of stakeholders, namely teachers, education researchers, government representatives, and research scientists. Altogether, over 100 stakeholders from 42 countries involved in PDPs at science research institutions in Europe and North America participated in a three-round Delphi study. In the first round, the stakeholders provided their opinions on what they thought the learning goals of PDPs should be through an open-ended questionnaire. In the second and third rounds, the stakeholders assessed the importance of the learning goals that emerged from the first round by rating and ranking them, respectively. The outcome of the study is a hierarchical list of the ten most important learning goals of PDPs at particle physics laboratories. The stakeholders identified enhancing teachers' knowledge of scientific concepts and models and enhancing their knowledge of the curricula as the most important learning goals. Furthermore, the results show strong agreement between all the stakeholder groups regarding the defined learning goals. Indeed, all groups ranked the learning goals by their perceived importance almost identically. These outcomes could help policymakers establish more specific policies for PDPs. Additionally, they provide PDP practitioners at science research institutions with a solid base for future research and planning endeavors.