Entrepreneurship education research has a strong "output" focus on impact studies but pays much less attention to the "inside" or process perspective of how entrepreneurship education occurs. In particular, previous entrepreneurship curriculum research is scattered and has not provided a current and comprehensive overview of the curricular elements that constitute entrepreneurship education. To overcome this shortcoming, we aim to identify the teaching objectives, teaching contents, teaching methods, and assessment methods discussed in entrepreneurship curriculum research. To this end, we conducted a systematic literature review on the four entrepreneurship curriculum dimensions and collected all mentioned curriculum items. We used a two-stage coding procedure to identify the genuinely entrepreneurship-specific items. Among numerous items (also from business management and other subjects), we found 26 objectives, 34 contents, 11 teaching methods, and 7 assessment methods that were entrepreneurship-specific. Most of these items were addressed by only a few scholarly papers.
The Gutenberg-Richter (GR) and the Omori-Utsu (OU) laws describe earthquakes' energy release and temporal clustering and are thus of great importance for seismic hazard assessment. Motivated by experimental results that indicate stress-dependent parameters, we consider a combined global data set of 127 main shock-aftershock sequences and perform a systematic study of the relationship between main shock-induced stress changes and the associated seismicity patterns. For this purpose, we calculate space-dependent Coulomb stress changes (ΔCFS) and alternative receiver-independent stress metrics in the surroundings of the main shocks. Our results indicate a clear positive correlation between the GR b-value and the induced stress, in contrast to expectations from laboratory experiments, suggesting a crucial role of structural heterogeneity and strength variations. Furthermore, we demonstrate that aftershock productivity increases nonlinearly with stress, while the OU parameters c and p systematically decrease with increasing stress changes. Our partly unexpected findings can have an important impact on future estimates of aftershock hazard.
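For context on the two laws named above: Gutenberg-Richter states log10 N(≥M) = a − b·M, and Omori-Utsu gives the aftershock rate n(t) = K/(c + t)^p. The sketch below shows how a b-value is commonly estimated from a catalog via the standard Aki maximum-likelihood estimator; the synthetic magnitudes and the completeness magnitude are illustrative and are not the data set of the study.

```python
import numpy as np

def b_value_aki(magnitudes, m_c, dm=0.1):
    """Maximum-likelihood b-value (Aki estimator), with the usual
    dm/2 correction for magnitude binning."""
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

def omori_utsu_rate(t, K, c, p):
    """Omori-Utsu aftershock rate n(t) = K / (c + t)^p."""
    return K / (c + t) ** p

# Synthetic GR-distributed magnitudes with a known b-value of 1.0:
rng = np.random.default_rng(0)
mags = 2.0 + rng.exponential(scale=1.0 / np.log(10), size=100_000)
print(round(b_value_aki(mags, m_c=2.0, dm=0.0), 2))  # close to the true b = 1.0
```

Magnitudes above a completeness threshold m_c follow an exponential law with rate b·ln(10), which is why the estimator reduces to log10(e) divided by the mean magnitude excess.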
Providing students with efficient instruction tailored to their individual characteristics in the cognitive and affective domains is an important goal in research on computer-based learning. This is especially important when seeking to enhance students' learning experience, such as by counteracting boredom, a detrimental emotion for learning. However, studies comparing instructional strategies triggered by either cognitive or emotional characteristics are surprisingly scarce. In addition, little research has examined the impact of these types of instructional strategies on performance and boredom trajectories within a lesson. In the present study, we compared the effectiveness of an intelligent tutoring system that adapted variable levels of hint details to a combination of students' dynamic, self-reported emotions and task performance (i.e., the experimental condition) to a traditional hint delivery approach consisting of a progressive, incremental supply of details following students' failures (i.e., the control condition). Linear mixed models of time-related changes in task performance and the intensity of boredom over two 1-h sessions showed that students (N = 104) in the two conditions exhibited equivalent progression in task performance and similar trajectories in boredom intensity. However, a consideration of students' achievement levels in the analyses (i.e., their final performance on the task) revealed that higher achievers in the experimental condition showed a reduction in boredom during the first session, suggesting possible benefits of using emotional information to increase the contingency of the hint delivery strategy and improve students’ learning experience.
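To make the contrast between the two hint-delivery strategies concrete, here is a toy policy in the spirit of the experimental condition, choosing a hint detail level from self-reported emotion and recent performance. The thresholds, scales, and hint levels are invented for illustration and are not the tutoring system described above.

```python
def hint_level(boredom, recent_success_rate):
    """Toy adaptive policy: pick a hint detail level (0 = no hint,
    3 = fully worked solution) from self-reported boredom (0-1) and
    the fraction of recently solved tasks. Thresholds are illustrative."""
    if recent_success_rate < 0.3:
        # Struggling students get detailed help; bored ones get the most.
        return 3 if boredom > 0.5 else 2
    if recent_success_rate < 0.7:
        return 1  # partial hint
    return 0      # performing well: withhold hints

print(hint_level(boredom=0.8, recent_success_rate=0.1))  # prints 3
```

The control condition, by contrast, would simply increment the hint level after each failure regardless of the emotional self-report.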
Purpose
This study investigates the communication behavior of public health organizations on Twitter during the COVID-19 vaccination campaign in Brazil. It contributes to the understanding of the organizational framing of health communication by showcasing several instances of framing devices that borrow from (Brazilian) internet culture. The investigation of this case extends existing knowledge by providing a rich description of the organizational framing of health communication to combat misinformation in a politically charged environment.
Design/methodology/approach
The authors collected a Twitter dataset of 77,527 tweets and analyzed a purposeful subsample of 536 tweets that contained information provided by Brazilian public health organizations about COVID-19 vaccination campaigns. The data analysis was carried out quantitatively and qualitatively by combining social media analytics techniques and frame analysis.
Findings
The analysis showed that Brazilian health organizations used several framing devices that have been identified by previous literature such as hashtags, links, emojis or images. However, the analysis also unearthed hitherto unknown visual framing devices for misinformation prevention and debunking that borrow from internet culture such as “infographics,” “pop culture references” and “internet-native symbolism.”
Research limitations/implications
First, the identification of framing devices relating to internet culture adds to our understanding of the hitherto little-addressed framing of messages that combat misinformation. The case of Brazilian health organizations offers a novel perspective by introducing the notion of internet-native symbolism (e.g. humor, memes) and popular culture references for combating misinformation, including misinformation prevention. Second, this study introduces a frontier of political contextualization to misinformation research that relates not to the partisanship of the spreaders but to the political dilemmas of public organizations committed to providing accurate information to citizens.
Practical implications
The findings inform decision-makers and public health organizations about framing devices that are tailored to internet-native audiences and can guide strategies to carry out information campaigns in misinformation-laden social media environments.
Social implications
The findings of this case study expose the often-overlooked cultural peculiarities of framing information campaigns on social media. The report of this study from a country in the Global South helps to contrast several assumptions and strategies that are prevalent in (health) discourses in Western societies and scholarship.
Originality/value
This study uncovers unconventional and barely addressed framing devices of health organizations operating in Brazil, which provides a novel perspective to the body of research on misinformation. It contributes to existing knowledge about frame analysis and broadens the understanding of frame devices borrowing from internet culture. It is a call for a frontier in misinformation research that deals with internet culture as part of organizational strategies for successful misinformation combat.
This study focuses on three key aspects: (a) crude throat swab samples in a viral transport medium (VTM) as templates for RT-LAMP reactions; (b) a biotinylated DNA probe with enhanced specificity for LFA readouts; and (c) a digital semi-quantification of LFA readouts. Throat swab samples from SARS-CoV-2 positive and negative patients were used in their crude (no cleaning or pre-treatment) form for the RT-LAMP reaction. The samples were heat-inactivated but not subjected to any kind of nucleic acid extraction or purification. The RT-LAMP product (20 min processing time) was read out by an LFA approach using two labels: FITC and biotin. FITC was enzymatically incorporated into the RT-LAMP amplicon with the LF-LAMP primer, and biotin was introduced after RT-LAMP amplification using biotinylated DNA probes specific to the amplicon region. This assay setup with biotinylated DNA probe-based LFA readouts of the RT-LAMP amplicon was 98.11% sensitive and 96.15% specific. The LFA result was further analysed by a smartphone-based IVD device, wherein the T-line intensity was recorded. The LFA T-line intensity was then correlated with the qRT-PCR Ct value of the positive swab samples. A digital semi-quantification of RT-LAMP-LFA was achieved with a correlation coefficient of R2 = 0.702. The overall RT-LAMP-LFA assay time was 35 min, with an LoD of three RNA copies/µL (Ct 33). With these three advancements, the nucleic acid testing point-of-care technique (NAT-POCT) is exemplified as a versatile biosensor platform with great potential and applicability for the detection of pathogens without the need for sample storage, transportation, or pre-processing.
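The digital semi-quantification described above amounts to fitting a line between LFA T-line intensity and qRT-PCR Ct and reporting the coefficient of determination. A minimal sketch with made-up example values (the study's actual measurements and its R² = 0.702 are not reproduced here):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a least-squares line y ~ x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)       # ordinary least squares
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Illustrative (not real) data: lower Ct (more virus) -> stronger T-line.
ct = np.array([20, 24, 27, 30, 33])
intensity = np.array([0.95, 0.80, 0.55, 0.35, 0.12])
print(round(r_squared(ct, intensity), 3))
```

In practice the fitted line also lets a smartphone reader map a measured T-line intensity back to an approximate Ct value, which is what makes the readout semi-quantitative.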
The physiological dependence of animals on dietary intake of vitamins, amino acids, and fatty acids is ubiquitous. Sharp differences in the availability of these vital dietary biomolecules among different resources mean that consumers must adopt a range of strategies to meet their physiological needs. We review the emerging work on omega-3 long-chain polyunsaturated fatty acids, focusing predominantly on predator-prey interactions, to illustrate that the trade-off between the capacity to consume resources rich in vital biomolecules and the capacity for internal synthesis drives differences in consumer phenotype and fitness. This can then feed back to impact ecosystem functioning. We outline how a focus on vital dietary biomolecules in eco-evo-devo dynamics can improve our understanding of anthropogenic changes across multiple levels of biological organization.
This study addresses the question of whether observed changes in Arctic-midlatitude linkages during winter are driven by Arctic sea-ice decline alone or whether the increase of global sea surface temperatures plays an additional role. We compare atmosphere-only model experiments with ECHAM6 to ERA-Interim reanalysis data. The model sensitivity experiment is implemented as a set of four combinations of sea-ice and sea-surface-temperature boundary conditions. Atmospheric circulation regimes are determined and evaluated in terms of their cyclone and blocking characteristics and changes in frequency during winter. As a prerequisite, ECHAM6 reproduces the general features of circulation regimes very well. Tropospheric changes induced by the change of boundary conditions are revealed, and further impacts on the large-scale circulation up into the stratosphere are investigated. In early winter, the observed increase of atmospheric blocking in the region between Scandinavia and the Urals is primarily related to the changes in sea surface temperatures. During late winter, we find a weakened polar stratospheric vortex in the reanalysis that further impacts the troposphere. In the model sensitivity study, a climatologically weakened polar vortex occurs only if sea ice is reduced and sea surface temperatures are increased together. This response is delayed compared to the reanalysis. The tropospheric response during late winter is inconclusive in the model, which is potentially related to the weak and delayed response in the stratosphere. The model experiments do not reproduce the connection between early and late winter as interpreted from the reanalysis. Potentially explaining this mismatch, we identify a failure of ECHAM6 to reproduce the weakening of the stratospheric polar vortex through blocking-induced upward propagation of planetary waves.
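The sensitivity experiment described above is a 2×2 factorial design over the two boundary conditions. A short sketch enumerating the four runs (the labels are illustrative, not the experiment names used in the study):

```python
from itertools import product

# 2x2 factorial design: each run pairs one sea-ice state with one
# sea-surface-temperature state (illustrative labels).
ice_states = ["ice_present", "ice_reduced"]
sst_states = ["sst_present", "sst_increased"]

experiments = [f"{ice}+{sst}" for ice, sst in product(ice_states, sst_states)]
print(experiments)  # four distinct boundary-condition combinations
```

Comparing runs that differ in exactly one factor isolates the sea-ice and SST contributions, while the run with both changed probes their combined effect.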
This paper deals with the long-term behavior of positive operator semigroups on spaces of bounded functions and of signed measures, which have applications to parabolic equations with unbounded coefficients and to stochastic analysis. The main results are a Tauberian-type theorem characterizing the convergence to equilibrium of strongly Feller semigroups and a generalization of a classical convergence theorem of Doob. Neither of these results requires any kind of time regularity of the semigroup.
In real-world scene perception, human observers generate sequences of fixations to move image patches into the high-acuity center of the visual field. Models of visual attention developed over the last 25 years aim to predict two-dimensional probabilities of gaze positions for a given image via saliency maps. Recently, progress has been made on models for the generation of scan paths under the constraints of saliency as well as attentional and oculomotor restrictions. Experimental research has demonstrated that task constraints can have a strong impact on viewing behavior. Here, we propose a scan-path model for both fixation positions and fixation durations, which includes influences of task instructions and interindividual differences. Based on an eye-movement experiment with four different task conditions, we estimated model parameters for each individual observer and task condition within a fully Bayesian dynamical modeling framework, using a joint spatial-temporal likelihood approach with sequential estimation. The resulting parameter values demonstrate that model properties such as the attentional span are adjusted to task requirements. Posterior predictive checks indicate that our dynamical model can reproduce task differences in scan-path statistics across individual observers.
We consider a system of noninteracting particles on a line with initial positions distributed uniformly with density ρ on the negative half-line. We consider two different models: (i) each particle performs independent Brownian motion with stochastic resetting to its initial position with rate r, and (ii) each particle performs run-and-tumble motion, and with rate r its position is reset to its initial value while its velocity is simultaneously randomized. We study the effects of resetting on the distribution P(Q, t) of the integrated particle current Q up to time t through the origin (from left to right). We study both the annealed and the quenched current distributions, and in both cases we find that resetting induces a stationary limiting distribution of the current at long times. However, we show that the approach to the stationary state of the current distribution in the annealed and the quenched cases is drastically different for both models. In the annealed case, the whole distribution P_an(Q, t) approaches its stationary limit uniformly for all Q. In contrast, the quenched distribution P_qu(Q, t) attains its stationary form for Q < Q_crit(t), while it remains time dependent for Q > Q_crit(t). We show that Q_crit(t) increases linearly with t for large t. On the scale where Q ≲ Q_crit(t), we show that P_qu(Q, t) has an unusual large-deviation form with a rate function that has a third-order phase transition at the critical point. We have computed the associated rate functions analytically for both models. Using an importance sampling method that allows us to probe probabilities as tiny as 10^(-14000), we were able to compute this nonanalytic rate function numerically for the resetting Brownian dynamics and found excellent agreement with our analytical prediction.
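A brute-force Monte Carlo sketch of the annealed current for the resetting Brownian model (model (i)): initial positions are redrawn in each realization, the infinite half-line is truncated at a finite depth L, and the dynamics use a simple Euler-Maruyama step. All parameter values and the diffusion constant D are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def current_through_origin(rho, L, r, t, dt=1e-2, D=0.5):
    """One realization of Q(t): the number of resetting Brownian
    particles (started uniformly on [-L, 0)) found to the right of
    the origin at time t. L truncates the half-line for simulation."""
    n = rng.poisson(rho * L)          # annealed: redraw particle number
    x0 = -L * rng.random(n)           # uniform initial positions on [-L, 0)
    x = x0.copy()
    for _ in range(int(t / dt)):
        x = x + np.sqrt(2 * D * dt) * rng.standard_normal(n)
        reset = rng.random(n) < r * dt    # each particle resets w.p. r*dt
        x[reset] = x0[reset]              # back to its own initial position
    return int(np.sum(x > 0))

# Sampling many realizations approximates the annealed P_an(Q, t):
samples = [current_through_origin(rho=1.0, L=20.0, r=1.0, t=2.0)
           for _ in range(200)]
print(np.mean(samples))
```

A quenched estimate would instead fix one draw of x0 and average only over the Brownian noise; direct sampling like this cannot reach the far tails, which is why the study resorts to importance sampling.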