Ever since our first research into Alexander von Humboldt's stay in Spain, we had been surprised by the absence of any ensuing relationship between the Prussian scholar and the Spanish Crown and authorities. On undertaking new research, we found that he did in fact send his first work to Carlos IV from Rome, accompanied by a letter, presented here, expressing gratitude for the protection he had received during his American voyage and his submission to the Spanish Crown. This first literary fruit of his voyage, to which Alexander von Humboldt alluded in the letter, is the first instalment of his work Plantes Équinoxiales, Recueillies au Mexique, dans l’ile de Cuba, dans les provinces de Caracas, de Cumana etc., published in Paris in 1805.
Walking while concurrently performing cognitive and/or motor interference tasks is the norm rather than the exception during everyday life and there is evidence from behavioral studies that it negatively affects human locomotion. However, there is hardly any information available regarding the underlying neural correlates of single- and dual-task walking. We had 12 young adults (23.8 ± 2.8 years) walk while concurrently performing a cognitive interference (CI) or a motor interference (MI) task. Simultaneously, neural activation in frontal, central, and parietal brain areas was registered using a mobile EEG system. Results showed that the MI task but not the CI task affected walking performance in terms of significantly decreased gait velocity and stride length and significantly increased stride time and tempo-spatial variability. Average activity in alpha and beta frequencies was significantly modulated during both CI and MI walking conditions in frontal and central brain regions, indicating an increased cognitive load during dual-task walking. Our results suggest that impaired motor performance during dual-task walking is mirrored in neural activation patterns of the brain. This finding is in line with established cognitive theories arguing that dual-task situations overstrain cognitive capabilities resulting in motor performance decrements.
Nature and Culture
(2016)
This paper revolves around the inextricable link between nature and culture and the 'non-naturalness' of the former, the product of millennia of human intervention, subsumed under the term 'Anthropocene'. The French philosophers Bruno Latour and Philippe Descola have highlighted, albeit by different paths, the importance of this nexus for securing human survival; Bruno Latour centers his reflections on the politics of nature, while Philippe Descola emphasizes the ecological character of nature and culture. Both, however, leave aside the literatures of the world and their capacity to preserve the diverse designs for coexistence between man and nature and the notions of sustainability they contain. Noteworthy, too, is the inspiration Descola finds in the figure of the great polymath Alexander von Humboldt, who in the nineteenth century already bore witness to the inextricable relationship between nature and culture in countless testimonies, among them the Chimborazo, which, as a global tableau, is representative for understanding that nature has always been culture and that culture is unimaginable without nature.
This article discusses a previously unpublished letter from Alexander von Humboldt to Thomas Jefferson. The letter offers an illuminating insight into Humboldt's personal, political, and scientific networks and closes a gap in the Humboldt–Jefferson correspondence.
Regional differences in the uptake of preventive services in outpatient care
(2016)
The aim of this study was to analyze regional differences in the uptake of secondary preventive services in Germany at the district level. It was intended to close a gap in German research by including ecological factors alongside individual factors through a multilevel approach. At the ecological level, the effects of regional social deprivation, urbanization, and the density of outpatient physicians were analyzed. Variables at the individual level were sex and health status.
Three different databases were linked for this study. Data from INKAR for all 402 districts were used to compute regional social deprivation and urbanization. The Federal Register of Physicians (Bundesarztregister) provided the data basis for determining physician density. The claims data of all Associations of Statutory Health Insurance Physicians pursuant to § 295 SGB V supplied the figures for the uptake of the specific preventive services as well as for sex and health status. This made it possible to conduct a full census of all statutorily insured persons between 50 and 55 years of age who had consulted a physician in 2013 (N = 6.6 million). The independent variables regional social deprivation and urbanization, as well as the control variable health status, were constructed using factor analysis. To analyze the regional differences, a hierarchical multivariate regression was performed.
Around 80% of all secondary preventive services were taken up by women. Poorer health status was associated with a higher uptake rate. The results point to regional differences that vary by sex, with the independent variables showing only small effects. Contrary to the hypothesis, higher regional social deprivation was associated with higher uptake among both men and women. Urbanity was positively associated with uptake among men and negatively among women. The interaction of the two variables had no effect for men but a negative effect for women. Physician density was excluded from the final statistical model because the variable exhibited multicollinearity.
Existing theories cannot explain these results, which contradict previous research findings. Additional calculations suggest that the prevailing East–West differences confounded the results. Taking patients' age into account, it may be surmised that socialization into the uptake of secondary preventive services in the GDR still influences health behavior today. However, further research is needed to better understand the reasons for the regional differences in the uptake of secondary preventive services.
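The multilevel setup described above, individual-level predictors nested within district-level context variables, can be sketched as follows. This is an illustrative simulation with hypothetical variable names and effect sizes, not the study's data or its exact hierarchical model; it only shows how a district-level deprivation score and individual-level covariates can enter one design matrix:

```python
import numpy as np

rng = np.random.default_rng(42)
n_districts, n_per_district = 200, 100
n = n_districts * n_per_district

# District-level context variable (e.g. a social-deprivation factor score).
deprivation = rng.normal(size=n_districts)
district = np.repeat(np.arange(n_districts), n_per_district)

# Individual-level variables (hypothetical coding).
female = rng.integers(0, 2, size=n)
poor_health = rng.normal(size=n)

# Simulated uptake probability: women, deprived districts, and poorer
# health show higher uptake, mirroring the direction of the reported findings.
p = np.clip(0.2 + 0.3 * female + 0.05 * deprivation[district]
            + 0.04 * poor_health, 0.0, 1.0)
uptake = (rng.random(n) < p).astype(float)

# Linear probability model combining both levels in one design matrix.
X = np.column_stack([np.ones(n), female, deprivation[district], poor_health])
beta, *_ = np.linalg.lstsq(X, uptake, rcond=None)
print(dict(zip(["intercept", "female", "deprivation", "poor_health"],
               np.round(beta, 3))))
```

A closer analogue to the hierarchical regression the study reports would add random district intercepts (e.g. a mixed-effects model); the pooled sketch above merely illustrates how the two levels are combined.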
In recent years, entire industries and their participants have been affected by disruptive technologies, resulting in dramatic market changes and challenges to firms' business logic and thus their business models (BMs). Firms from mature industries are increasingly realizing that BMs that worked successfully for years have become insufficient to stay on track in today's “move fast and break things” economy. Firms must scrutinize the core logic that informs how they do business, which means exploring novel ways to engage customers and get them to pay. This can lead to a complete renewal of existing BMs or to the innovation of entirely new BMs.
BMs have emerged as a popular object of research within the last decade. Despite the popularity of the BM, the theoretical and empirical foundation underlying the concept is still weak. In particular, the innovation process for BMs has been developed and implemented in firms, but understanding of the mechanisms behind it is still lacking. Business model innovation (BMI) is a complex and challenging management task that requires more than just novel ideas. Systematic studies to generate a better understanding of BMI and support incumbents with appropriate concepts to improve BMI development are in short supply. Further, there is a lack of knowledge about appropriate research practices for studying BMI and generating valid data sets in order to meet expectations in both practice and academia.
This paper-based dissertation aims to contribute to research practice in the field of BM and BMI and foster better understanding of the BM concept and BMI processes in incumbent firms from mature industries. The overall dissertation presents three main results. The first result is a new perspective, or the systems thinking view, on the BM and BMI. With the systems thinking view, the fuzzy BM concept is clearly structured and a BMI framework is proposed. The second result is a new research strategy for studying BMI. After analyzing current research practice in the areas of BMs and BMI, it is obvious that there is a need for better research on BMs and BMI in terms of accuracy, transparency, and practical orientation. Thus, the action case study approach combined with abductive methodology is proposed and proven in the research setting of this thesis. The third result stems from three action case studies in incumbent firms from mature industries employed to study how BMI occurs in practice. The new insights and knowledge gained from the action case studies help to explain BMI in such industries and increase understanding of the core of these processes.
By studying these issues, the articles compiled in this thesis contribute conceptually and empirically to the recently consolidated but still growing literature on the BM and BMI. The conclusions and implications are intended to foster further research and improve managerial practices for achieving BMI in a dramatically changing business environment.
Dependency Resolution Difficulty Increases with Distance in Persian Separable Complex Predicates
(2016)
Delaying the appearance of a verb in a noun-verb dependency tends to increase processing difficulty at the verb; one explanation for this locality effect is decay and/or interference of the noun in working memory. Surprisal, an expectation-based account, predicts that delaying the appearance of a verb either renders it no more predictable or more predictable, leading respectively to a prediction of no effect of distance or a facilitation. Recently, Husain et al. (2014) suggested that when the exact identity of the upcoming verb is predictable (strong predictability), increasing argument-verb distance leads to facilitation effects, which is consistent with surprisal; but when the exact identity of the upcoming verb is not predictable (weak predictability), locality effects are seen. We investigated Husain et al.'s proposal using Persian complex predicates (CPs), which consist of a non-verbal element—a noun in the current study—and a verb. In CPs, once the noun has been read, the exact identity of the verb is highly predictable (strong predictability); this was confirmed using a sentence completion study. In two self-paced reading (SPR) and two eye-tracking (ET) experiments, we delayed the appearance of the verb by interposing a relative clause (Experiments 1 and 3) or a long PP (Experiments 2 and 4). We also included a simple Noun-Verb predicate configuration with the same distance manipulation; here, the exact identity of the verb was not predictable (weak predictability). Thus, the design crossed Predictability Strength and Distance. We found that, consistent with surprisal, the verb in the strong predictability conditions was read faster than in the weak predictability conditions. Furthermore, greater verb-argument distance led to slower reading times; strong predictability did not neutralize or attenuate the locality effects. 
As regards the effect of distance on dependency resolution difficulty, these four experiments present evidence in favor of working memory accounts of argument-verb dependency resolution, and against the surprisal-based expectation account of Levy (2008). However, another expectation-based measure, entropy, which was computed using the offline sentence completion data, predicts reading times in Experiment 1 but not in the other experiments. Because participants tend to produce more ungrammatical continuations in the long-distance condition in Experiment 1, we suggest that forgetting due to memory overload leads to greater entropy at the verb.
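The entropy measure referred to above is, in its usual formulation, the Shannon entropy of the empirical distribution over continuations estimated from the completion data. A minimal sketch with made-up completion counts (not the study's data; the Persian light verbs kardan "to do" and dâdan "to give" are used purely as placeholders):

```python
import math
from collections import Counter

def completion_entropy(completions):
    """Shannon entropy (in bits) of the empirical distribution of completions."""
    counts = Counter(completions)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical continuations produced at the pre-verb region.
short_cond = ["kardan"] * 18 + ["dâdan"] * 2                      # verb highly predictable
long_cond = ["kardan"] * 9 + ["dâdan"] * 6 + ["*ungrammatical*"] * 5

print(completion_entropy(short_cond))  # low entropy
print(completion_entropy(long_cond))   # higher entropy at greater distance
```

On the forgetting account sketched above, more ungrammatical continuations in the long-distance condition spread probability mass over more outcomes, yielding the higher entropy at the verb.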
When trying to extend the Hodge theory for elliptic complexes on compact closed manifolds to the case of compact manifolds with boundary, one is led to a boundary value problem for the Laplacian of the complex, which is usually referred to as the Neumann problem. We study the Neumann problem for a larger class of sequences of differential operators on a compact manifold with boundary. These are sequences of small curvature, i.e., bearing the property that the composition of any two neighbouring operators has order less than two.
Different systems for habitual versus goal-directed control are thought to underlie human decision-making. Working memory is known to shape these decision-making systems and their interplay, and is known to support goal-directed decision making even under stress. Here, we investigated if and how decision systems are differentially influenced by breaks filled with diverse everyday life activities known to modulate working memory performance. We used a within-subject design where young adults listened to music and played a video game during breaks interleaved with trials of a sequential two-step Markov decision task, designed to assess habitual as well as goal-directed decision making. Based on a neurocomputational model of task performance, we observed that for individuals with a rather limited working memory capacity video gaming as compared to music reduced reliance on the goal-directed decision-making system, while a rather large working memory capacity prevented such a decline. Our findings suggest differential effects of everyday activities on key decision-making processes.
We tested the influence of two light intensities [40 and 300 μmol PAR / (m2s)] on the fatty acid composition of three distinct lipid classes in four freshwater phytoplankton species. We chose species of different taxonomic classes in order to detect potentially similar reaction characteristics that might also be present in natural phytoplankton communities. From samples of the bacillariophyte Asterionella formosa, the chrysophyte Chromulina sp., the cryptophyte Cryptomonas ovata and the zygnematophyte Cosmarium botrytis we first separated glycolipids (monogalactosyldiacylglycerol, digalactosyldiacylglycerol, and sulfoquinovosyldiacylglycerol), phospholipids (phosphatidylcholine, phosphatidylethanolamine, phosphatidylglycerol, phosphatidylinositol, and phosphatidylserine) as well as non-polar lipids (triacylglycerols), before analyzing the fatty acid composition of each lipid class. High variation in the fatty acid composition existed among different species. Individual fatty acid compositions differed in their reaction to changing light intensities in the four species. Although no generalizations could be made for species across taxonomic classes, individual species showed clear but small responses in their ecologically-relevant omega-3 and omega-6 polyunsaturated fatty acids (PUFA) in terms of proportions and of per tissue carbon quotas. Knowledge on how lipids like fatty acids change with environmental or culture conditions is of great interest in ecological food web studies, aquaculture, and biotechnology, since algal lipids are the most important sources of omega-3 long-chain PUFA for aquatic and terrestrial consumers, including humans.
We examined the effects of argument-head distance in SVO and SOV languages (Spanish and German), while taking into account readers' working memory capacity and controlling for expectation (Levy, 2008) and other factors. We predicted only locality effects, that is, a slowdown produced by increased dependency distance (Gibson, 2000; Lewis and Vasishth, 2005). Furthermore, we expected stronger locality effects for readers with low working memory capacity. Contrary to our predictions, low-capacity readers showed faster reading with increased distance, while high-capacity readers showed locality effects. We suggest that while the locality effects are compatible with memory-based explanations, the speedup of low-capacity readers can be explained by an increased probability of retrieval failure. We present a computational model based on ACT-R built under the previous assumptions, which is able to give a qualitative account for the present data and can be tested in future research. Our results suggest that in some cases, interpreting longer RTs as indexing increased processing difficulty and shorter RTs as facilitation may be too simplistic: The same increase in processing difficulty may lead to slowdowns in high-capacity readers and speedups in low-capacity ones. Ignoring individual level capacity differences when investigating locality effects may lead to misleading conclusions.
Seit Jahrhunderten dienen die Körper der Frauen als Schlachtfelder. Doch erst vor 20 Jahren kam das Thema sexuelle Gewalt in bewaffneten Konflikten auf internationaler Ebene auf. Die Autorin untersucht den Beitrag der Vereinten Nationen zur Vorbeugung und Repression von sexueller Gewalt im Krieg. Ziel war es, eine Gesamtbestandaufnahme der ausgewählten Wege zum Schutz der Frauen vor sexueller Gewalt im Konflikt in den Bereichen 'Protection, Prevention und Prosecution' durchzuführen. Dies erfolgt anhand der Auswertung der Rechtsprechung des ICTY, ICTR, SCSL und des IStGH sowie der Durchführung der UN Action against sexual violence in conflict, der Arbeit der Human Rights Bodies und der afrikanischen Organisationen. Die Bekämpfung sexueller Gewalt im Krieg bleibt nach wie vor ein langwieriger Weg. Doch wo früher sachgerechte Normen gefehlt haben, wurden solide Grundlagen in den drei Bereichen geschaffen.
The gravitational field of a laser pulse of finite lifetime is investigated in the framework of linearized gravity. Although the effects are very small, they may be of fundamental physical interest. It is shown that the gravitational field of a linearly polarized light pulse is modulated as the norm of the corresponding electric field strength, while no modulations arise for circular polarization. In general, the gravitational field is independent of the polarization direction. It is shown that all physical effects are confined to spherical shells expanding with the speed of light, and that these shells are imprints of the spacetime events representing emission and absorption of the pulse. Nearby test particles at rest are attracted towards the pulse trajectory by the gravitational field due to the emission of the pulse, and they are repelled from the pulse trajectory by the gravitational field due to its absorption. Examples are given for the size of the attractive effect. It is recovered that massless test particles do not experience any physical effect if they are co-propagating with the pulse, and that the acceleration of massless test particles counter-propagating with respect to the pulse is four times stronger than for massive particles at rest. The similarities between the gravitational effect of a laser pulse and Newtonian gravity in two dimensions are pointed out. The spacetime curvature close to the pulse is compared to that induced by gravitational waves from astronomical sources.
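For orientation, the linearized-gravity framework mentioned above treats the metric as a small perturbation of flat spacetime; in the standard harmonic (Lorenz) gauge the trace-reversed perturbation sourced by the pulse's stress-energy obeys a wave equation:

```latex
g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1,
\qquad
\Box \bar{h}_{\mu\nu} = -\frac{16\pi G}{c^{4}}\, T_{\mu\nu},
\qquad
\bar{h}_{\mu\nu} = h_{\mu\nu} - \tfrac{1}{2}\,\eta_{\mu\nu}\,h .
```

The retarded solution of this wave equation propagates at the speed of light, which is consistent with the abstract's statement that all physical effects are confined to light-speed-expanding shells tied to the emission and absorption events.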
Children’s interpretations of sentences containing focus particles do not seem adult-like until school age. This study investigates how German 4-year-old children comprehend sentences with the focus particle ‘nur’ (only) by using different tasks and controlling for the impact of general cognitive abilities on performance measures. Two sentence types with ‘only’ in either pre-subject or pre-object position were presented. Eye gaze data and verbal responses were collected via the visual world paradigm combined with a sentence-picture verification task. While the eye tracking data revealed an adult-like pattern of focus particle processing, the sentence-picture verification replicated previous findings of poor comprehension, especially for ‘only’ in pre-subject position. A second study focused on the impact of general cognitive abilities on the outcomes of the verification task. Working memory was related to children’s performance in both sentence types, whereas inhibitory control was selectively related to the number of errors for sentences with ‘only’ in pre-subject position. These results suggest that children at the age of 4 years have the linguistic competence to correctly interpret sentences with focus particles, which, depending on specific task demands, may be masked by immature general cognitive abilities.
Recent research has indicated that university students sometimes use caffeine pills for neuroenhancement (NE; non-medical use of psychoactive substances or technology to produce a subjective enhancement in psychological functioning and experience), especially during exam preparation. In our factorial survey experiment, we manipulated the evidence participants were given about the prevalence of NE amongst peers and measured the resulting effects on the psychological predictors included in the Prototype-Willingness Model of risk behavior. Two hundred and thirty-one university students were randomized to a high prevalence condition (read faked research results overstating usage of caffeine pills amongst peers by a factor of 5; 50%), low prevalence condition (half the estimated prevalence; 5%) or control condition (no information about peer prevalence). Structural equation modeling confirmed that our participants’ willingness and intention to use caffeine pills in the next exam period could be explained by their past use of neuroenhancers, attitude to NE and subjective norm about use of caffeine pills whilst image of the typical user was a much less important factor. Provision of inaccurate information about prevalence reduced the predictive power of attitude with respect to willingness by 40-45%. This may be because receiving information about peer prevalence which does not fit with their perception of the social norm causes people to question their attitude. Prevalence information might exert a deterrent effect on NE via the attitude-willingness association. We argue that research into NE and deterrence of associated risk behaviors should be informed by psychological theory.
Filling the Silence
(2016)
In a self-paced reading experiment, we investigated the processing of sluicing constructions (“sluices”) whose antecedent contained a known garden-path structure in German. Results showed decreased processing times for sluices with garden-path antecedents as well as a disadvantage for antecedents with non-canonical word order downstream from the ellipsis site. A post-hoc analysis showed the garden-path advantage also to be present in the region right before the ellipsis site. While no existing account of ellipsis processing explicitly predicted the results, we argue that they are best captured by combining a local antecedent mismatch effect with memory trace reactivation through reanalysis.
Background
Overweight and obesity are increasing health problems that are not restricted to adults. Childhood obesity is associated with metabolic, psychological and musculoskeletal comorbidities. However, knowledge about the effect of obesity on foot function across maturation is lacking. Decreased foot function with disproportionate loading characteristics is expected for obese children. The aim of this study was to examine foot loading characteristics during gait in normal-weight, overweight and obese children aged 1-12 years.
Methods
A total of 10382 children aged one to twelve years were enrolled in the study. Finally, 7575 children (m/f: n = 3630/3945; 7.0 +/- 2.9 yr; 1.23 +/- 0.19 m; 26.6 +/- 10.6 kg; BMI: 17.1 +/- 2.4 kg/m²) were included in the (complete case) data analysis. Children were categorized as normal-weight (>= 3rd and < 90th percentile; n = 6458), overweight (>= 90th and < 97th percentile; n = 746) or obese (> 97th percentile; n = 371) according to the German reference system, which is based on age- and gender-specific body mass indices (BMI). Plantar pressure measurements were taken during gait on an instrumented walkway. Contact area, arch index (AI), peak pressure (PP) and force time integral (FTI) were calculated for the total, fore-, mid- and hindfoot. Data were analyzed descriptively (mean +/- SD), followed by ANOVA/Welch test (depending on homogeneity of variances) for group differences according to BMI categorization (normal-weight, overweight, obese) and for each age group from 1 to 12 yrs (post-hoc Tukey-Kramer/Dunnett's C; alpha = 0.05).
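For illustration, the BMI categorization described above can be sketched as a simple cut-off function (a hypothetical Python sketch; the age- and gender-specific percentile lookup of the German reference system is assumed to be given, and the function name is ours):

```python
def bmi_category(bmi_percentile: float) -> str:
    """Classify a child by its age- and gender-specific BMI percentile,
    using the cut-offs of the German reference system described above."""
    if bmi_percentile < 3:
        return "below normal-weight range"  # excluded from the three study groups
    elif bmi_percentile < 90:
        return "normal-weight"              # >= 3rd and < 90th percentile
    elif bmi_percentile < 97:
        return "overweight"                 # >= 90th and < 97th percentile
    else:
        return "obese"                      # > 97th percentile

# Usage
print(bmi_category(50.0))   # normal-weight
print(bmi_category(95.0))   # overweight
```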
Results
Mean walking velocity was 0.95 +/- 0.25 m/s, with no differences between normal-weight, overweight or obese children (p = 0.0841). Results show higher foot contact area, arch index, peak pressure and force time integral in overweight and obese children (p < 0.001). Obese children showed 1.48-fold (1-year-olds) to 3.49-fold (10-year-olds) the midfoot loading (FTI) of normal-weight children.
Conclusion
Additional body mass leads to higher overall load, with disproportionate impact on the midfoot area and longitudinal foot arch, showing characteristic foot loading patterns. Even the feet of one- and two-year-old children are significantly affected. Childhood overweight and obesity are not compensated for by the musculoskeletal system. To avoid excessive foot loading with the potential risk of discomfort or pain in childhood, prevention strategies should be developed and validated for children with a high body mass index and functional changes in the midfoot area. The presented plantar pressure values could additionally serve as reference data to identify suspicious foot loading patterns in children.
In the past, floods were managed mainly by flood control mechanisms. The focus was on reducing the flood hazard; the potential consequences were of minor interest. Nowadays, river flooding is increasingly seen from the risk perspective, including possible consequences. Moreover, the large-scale picture of flood risk has become increasingly important for disaster management planning, national risk assessments and the (re-)insurance industry. It is therefore widely accepted that risk-oriented flood management approaches at the basin scale are needed. However, large-scale flood risk assessment methods for areas of several 10,000 km² are still in early stages. Traditional flood risk assessments are performed reach-wise, assuming constant probabilities for the entire reach or basin. This might be helpful on a local basis, but where large-scale patterns are important, this approach is of limited use. Assuming a T-year flood (e.g. 100 years) for the entire river network is unrealistic and would lead to an overestimation of flood risk at the large scale. Additionally, due to the lack of damage data, the probability of peak discharge or rainfall is usually used as a proxy for damage probability to derive flood risk. With a continuous and long-term simulation of the entire flood risk chain, the spatial variability of probabilities could be considered and flood risk could be derived directly from damage data in a consistent way.
The objective of this study is the development and application of a full flood risk chain, appropriate for the large scale and based on long term and continuous simulation. The novel approach of ‘derived flood risk based on continuous simulations’ is introduced, where the synthetic discharge time series is used as input into flood impact models and flood risk is directly derived from the resulting synthetic damage time series.
The bottleneck at this scale is the hydrodynamic simulation. To find suitable hydrodynamic approaches for the large scale, a benchmark study with simplified 2D hydrodynamic models was performed. A raster-based approach with inertia formulation and a relatively high resolution of 100 m, in combination with a fast 1D channel routing model, was chosen.
To investigate the suitability of the continuous simulation of a full flood risk chain for the large scale, all model parts were integrated into a new framework, the Regional Flood Model (RFM). RFM consists of the hydrological model SWIM, a 1D hydrodynamic river network model, a 2D raster-based inundation model and the flood loss model FELMOps+r. Subsequently, the model chain was applied to the Elbe catchment, one of the largest catchments in Germany. For the proof of concept, a continuous simulation was performed for the period 1990-2003. Results were evaluated and validated as far as possible against observed data for this period. Although each model part introduced its own uncertainties, results and runtime were generally found to be adequate for the purpose of continuous simulation at the large catchment scale.
Finally, RFM was applied to a meso-scale catchment in the east of Germany to perform, for the first time, a flood risk assessment with the novel approach of ‘derived flood risk based on continuous simulations’. To this end, RFM was driven by long-term synthetic meteorological input data generated by a weather generator. A virtual time series of climate data covering 100 x 100 years was generated and served as input to RFM, yielding 100 x 100 years of spatially consistent river discharge series, inundation patterns and damage values. On this basis, flood risk curves and expected annual damage could be derived directly from damage data, providing a large-scale picture of flood risk. In contrast to traditional flood risk analyses, where homogeneous return periods are assumed for the entire basin, the presented approach provides a coherent large-scale picture of flood risk. The spatial variability of occurrence probability is respected. Additionally, data and methods are consistent. Catchment and floodplain processes are represented in a holistic way. Antecedent catchment conditions are implicitly taken into account, as are physical processes like storage effects, flood attenuation or channel-floodplain interactions and related damage-influencing effects. Finally, the simulation of a virtual period of 100 x 100 years, and consequently a large data set of flood loss events, enabled the calculation of flood risk directly from damage distributions. Problems associated with transferring probabilities of rainfall or peak runoff to probabilities of damage, as in traditional approaches, are bypassed.
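The final step of the approach, deriving a flood risk curve and the expected annual damage (EAD) directly from a long synthetic damage series, can be sketched as follows (a minimal illustration of the idea, not the RFM implementation; the function name and the Weibull plotting positions are our assumptions):

```python
import numpy as np

def risk_curve_and_ead(annual_damage):
    """Empirical exceedance probabilities and expected annual damage
    derived directly from a long synthetic series of annual flood damages."""
    d = np.sort(np.asarray(annual_damage, dtype=float))[::-1]  # largest first
    n = d.size
    # Weibull plotting positions: exceedance probability of the k-th largest damage
    p_exceed = np.arange(1, n + 1) / (n + 1)
    ead = d.mean()  # expected annual damage = mean of the annual damage series
    return p_exceed, d, ead

# Usage: 10,000 synthetic years, with flood damage occurring in ~10% of years
rng = np.random.default_rng(0)
damage = np.where(rng.random(10_000) < 0.1,
                  rng.lognormal(3.0, 1.0, 10_000), 0.0)
p, d, ead = risk_curve_and_ead(damage)  # (p, d) is the empirical risk curve
```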
RFM and the ‘derived flood risk approach based on continuous simulations’ have the potential to provide flood risk statements for national planning, reinsurance aspects and other questions where spatially consistent, large-scale assessments are required.
Ikonische Macht
(2016)
Images are part of the media public sphere. They construct society. How powerful are they? This study analyzes the social design of press photographs in daily newspapers. Fine-grained interpretations trace the design routines of editorial offices. It also shows how the interpretation of the images is contested in the course of publication. The author innovatively advances qualitative image analysis and at the same time makes an independent contribution to the discussion of the ‘power of images’.
In this dissertation, an electric field-assisted method was developed and applied to achieve immobilization and alignment of biomolecules on metal electrodes in a simple one-step experiment. Neither modifications of the biomolecule nor of the electrodes were needed. The two major electrokinetic effects that lead to molecule motion in the chosen electrode configurations were identified as dielectrophoresis and AC electroosmotic flow. To minimize AC electroosmotic flow, a new 3D electrode configuration was designed. Thus, the influence of experimental parameters on the dielectrophoretic force and the associated molecule movement could be studied. Permanent immobilization of proteins was examined and quantified absolutely using an atomic force microscope. By measuring the volumes of the immobilized protein deposits, the maximum number of proteins contained therein was calculated. This was possible since the proteins adhered to the tungsten electrodes even after the electric field was switched off. The permanent immobilization of functional proteins on surfaces or electrodes is one crucial prerequisite for the fabrication of biosensors.
Furthermore, the biofunctionality of the proteins must be retained after immobilization. Due to the chemical or physical modifications on the proteins caused by immobilization, their biofunctionality is sometimes hampered. The activity of dielectrophoretically immobilized proteins, however, was proven here for an enzyme for the first time. The enzyme horseradish peroxidase was used exemplarily, and its activity was demonstrated with the oxidation of dihydrorhodamine 123, a non-fluorescent precursor of the fluorescence dye rhodamine 123.
Molecular alignment and immobilization - reversible and permanent - was achieved under the influence of inhomogeneous AC electric fields. For orientational investigations, a fluorescence microscope setup, a reliable experimental procedure and an evaluation protocol were developed and validated using self-made control samples of aligned acridine orange molecules in a liquid crystal.
Lambda-DNA strands were stretched and aligned temporarily between adjacent interdigitated electrodes, and the orientation of PicoGreen molecules, which intercalate into the DNA strands, was determined. Similarly, the aligned immobilization of enhanced Green Fluorescent Protein was demonstrated exploiting the protein's fluorescence and structural properties. For this protein, the angle of the chromophore with respect to the protein's geometrical axis was determined in good agreement with X-ray crystallographic data. Permanent immobilization with simultaneous alignment of the proteins was achieved along the edges, tips and on the surface of interdigitated electrodes. This was the first demonstration of aligned immobilization of proteins by electric fields.
Thus, the presented electric field-assisted immobilization method is promising with regard to enhanced antibody binding capacities and enzymatic activities, which is a requirement for industrial biosensor production, as well as for general interaction studies of proteins.
Udmurt as an OV language
(2016)
This is the first study to investigate Hubert Haider's (2000, 2010, 2013, 2014) proposed systematic differences between OV and VO languages in a family other than Germanic. Its aim is to gather evidence on whether basic word order is predictive of further properties of a language. The languages under investigation are the Finno-Ugric languages Udmurt (as an OV language) and Finnish (as a VO language). Counter to Kayne (1994), Haider proposes that the structure of a sentence with a head-final VP is fundamentally different from that of a sentence with a head-initial VP, e.g., OV languages do not exhibit a VP-shell structure, and they do not employ a TP layer with a structural subject position. Haider's proposed structural differences are said to result in the following empirically testable differences:
(a) VP: the availability of VP-internal adverbial intervention and scrambling only in OV-VPs;
(b) subjects: the lack of certain subject-object asymmetries in OV languages, i.e., lack of the subject condition and lack of superiority effects;
(c) V-complexes: the availability of partial predicate fronting only in OV languages; different orderings between selecting and selected verbs; the intervention of non-verbal material between verbs only in VO languages;
(d) V-particles: differences in the distribution of resultative phrases and verb particles.
Udmurt and Finnish behave in line with Haider's predictions with regard to the status of the subject, the order of selecting and selected verbs, and the availability of partial predicate fronting. Moreover, Udmurt allows for adverbial intervention and scrambling, as predicted, whereas the status of these properties in Finnish could not be reliably determined due to obligatory V-to-T. There is also counterevidence to Haider's predictions: Udmurt allows for non-verbal material between verbs, and the distribution of resultative phrases and verb particles is essentially as free as the distribution of adverbial phrases in both Finno-Ugric languages. As such, Haider's theory is not falsified by the data from Udmurt and Finnish (except for his theory of verb particles), but neither is it fully supported.
Wettbewerbsrecht
(2016)
Competition law is primarily regulated in the Act Against Unfair Competition (UWG), which uses the notion of ‘fairness’ (Lauterkeit) to determine the permissibility of a business practice. To distinguish it from European competition law, which concerns antitrust matters, the field is therefore also referred to as the law of fair trading (Lauterkeitsrecht).
Of particular importance are the requirements of the European Unfair Commercial Practices Directive (2005/29/EC), which must always be observed when interpreting the UWG. This directive already led to a revision of the UWG in 2008; a fundamental revision of the UWG to further clarify the directive's implementation is currently pending.
Competition law is a subject of the specialized elective curriculum. Since the field is in constant flux, above all due to European influences, there is a great need for a textbook that is up to date yet consistently tailored to students' needs.
This outline presents current competition law in a student-friendly manner, with numerous overviews, diagrams, examples and a practice exam with model solution.
Advantages at a glance
- compact presentation of the law in force
- with many introductory cases and examples
- by the author of the companion volume Kartellrecht (antitrust law) in the same outline series
Target audience
Students and anyone seeking a straightforward introduction to competition law.
Software-as-a-Service (SaaS) offers several advantages to both service providers and users. Service providers can benefit from the reduction of Total Cost of Ownership (TCO), better scalability, and better resource utilization. On the other hand, users can use the service anywhere and anytime, and minimize upfront investment by following the pay-as-you-go model. Despite the benefits of SaaS, users still have concerns about the security and privacy of their data. Due to the nature of SaaS and the Cloud in general, the data and the computation are beyond the users' control, and hence data security becomes a vital factor in this new paradigm. Furthermore, in multi-tenant SaaS applications, the tenants become more concerned about the confidentiality of their data since several tenants are co-located onto a shared infrastructure.
To address these concerns, we start by protecting the data from the provisioning process onward, controlling how tenants are placed in the infrastructure. We present SecPlace, a resource allocation algorithm designed to minimize the risk posed by co-resident tenants. It enables the SaaS provider to control the resource (i.e., database instance) allocation process while taking the security of tenants into account as a requirement.
Due to the design principles of the multi-tenancy model, tenants share resources to some degree at both the application and infrastructure levels. Thus, strong security isolation must be in place. We therefore develop SignedQuery, a technique that prevents one tenant from accessing another's data. We use a signing scheme to create a signature over the tenant's request; the server then verifies the signature, recognizes the requesting tenant, and thus ensures that the data being accessed belongs to the legitimate tenant.
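The idea can be illustrated with a generic keyed-hash signature over a tenant's request (a Python sketch using HMAC-SHA256; the actual SignedQuery construction, message format and key management are not specified in the abstract, so the names below are hypothetical):

```python
import hashlib
import hmac

def sign_request(tenant_key: bytes, tenant_id: str, query: str) -> str:
    """Client side: sign the tenant's query with the tenant's secret key."""
    message = f"{tenant_id}|{query}".encode()
    return hmac.new(tenant_key, message, hashlib.sha256).hexdigest()

def verify_request(tenant_key: bytes, tenant_id: str, query: str,
                   signature: str) -> bool:
    """Server side: recompute the signature and compare in constant time."""
    expected = sign_request(tenant_key, tenant_id, query)
    return hmac.compare_digest(expected, signature)

# A request signed by tenant-42 verifies only for tenant-42
key = b"tenant-42-secret"
sig = sign_request(key, "tenant-42", "SELECT * FROM employees")
assert verify_request(key, "tenant-42", "SELECT * FROM employees", sig)
assert not verify_request(key, "tenant-7", "SELECT * FROM employees", sig)
```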
Finally, data confidentiality remains a critical concern because data in the Cloud resides outside users' premises and hence beyond their control. Cryptography is increasingly proposed as a potential approach to address this challenge. We therefore present SecureDB, a system designed to run SQL-based applications over an encrypted database. SecureDB captures the schema design and analyzes it to understand the internal structure of the data (i.e., the relationships between tables and their attributes). Moreover, we determine the appropriate partially homomorphic encryption scheme for each attribute, so that computation is possible even while the data is encrypted.
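Partially homomorphic encryption can be illustrated with a toy Paillier cryptosystem, which supports addition on ciphertexts (an educational sketch with tiny primes, not secure and not the SecureDB implementation):

```python
from math import gcd

# Toy Paillier parameters (tiny primes for illustration only -- NOT secure)
p, q = 293, 433
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
g = n + 1

def L(u: int) -> int:
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m: int, r: int) -> int:
    """Encrypt message m (< n) with randomizer r coprime to n."""
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts
c1, c2 = encrypt(42, 17), encrypt(58, 23)
assert decrypt((c1 * c2) % n2) == 42 + 58
```

A scheme of this kind allows, for example, SUM aggregates to be computed server-side over an encrypted column; other partially homomorphic schemes support multiplication instead of addition.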
To evaluate our work, we conduct extensive experiments with different settings. The main use case in our work is a popular open-source HRM application called OrangeHRM. The results show that our multi-layered approach is practical, provides enhanced security and isolation among tenants, and has moderate complexity in terms of processing encrypted data.
In the aftermath of the Napoleonic Wars, South and Central America freed themselves from colonial rule. This volume examines which factors have shaped statehood in the region since then. The comprehensive contributions of this edited volume focus in particular on the role of the military, of violence, of guerrilla movements, and of the USA as the continent's suspicious overseer. The development and status quo of political rule in Latin America are presented, enabling predictions about its future.
In many statistical applications, the aim is to model the relationship between covariates and some outcomes. The choice of an appropriate model depends on the outcome and the research objectives, such as linear models for continuous outcomes, logistic models for binary outcomes and the Cox model for time-to-event data. In epidemiological, medical, biological, societal and economic studies, logistic regression is widely used to describe the relationship between a response variable as binary outcome and explanatory variables as a set of covariates. However, epidemiologic cohort studies are quite expensive in terms of data management, since following up a large number of individuals takes a long time. Therefore, the case-cohort design is applied to reduce the cost and time of data collection. Case-cohort sampling draws a small random sample from the entire cohort, called the subcohort. The advantage of this design is that covariate and follow-up data are recorded only for the subcohort and all cases (all members of the cohort who develop the event of interest during follow-up).
In this thesis, we investigate the estimation in the logistic model for case-cohort design. First, a model with a binary response and a binary covariate is considered. The maximum likelihood estimator (MLE) is described and its asymptotic properties are established. An estimator for the asymptotic variance of the estimator based on the maximum likelihood approach is proposed; this estimator differs slightly from the estimator introduced by Prentice (1986). Simulation results for several proportions of the subcohort show that the proposed estimator gives lower empirical bias and empirical variance than Prentice's estimator.
Then the MLE in the logistic regression with a discrete covariate under the case-cohort design is studied, extending the approach of the binary covariate model. By proving asymptotic normality of the estimators, standard errors can be derived. A simulation study demonstrates the estimation procedure for the logistic regression model with a one-dimensional discrete covariate. Simulation results for several proportions of the subcohort and different choices of the underlying parameters indicate that the estimator developed here performs reasonably well. Moreover, a comparison between theoretical values and simulation results for the asymptotic variance of the estimator is presented.
The logistic regression is sufficient when the binary outcome is available for all subjects over a fixed time interval. In practice, however, observations in clinical trials are frequently collected over different time periods, and subjects may drop out or be lost to other causes during follow-up. Hence, logistic regression is not appropriate for incomplete follow-up data; for example, when an individual drops out of the study before the end of data collection, or when an individual has not experienced the event of interest by the end of the study. Such observations are called censored observations. Survival analysis is needed to handle these problems; moreover, the time to the occurrence of the event of interest is taken into account. The Cox model, proposed by Cox (1972), focuses on the hazard function, has been widely used in survival analysis and can effectively handle censored data. The Cox model is assumed to be
λ(t|x) = λ0(t) exp(β^Tx)
where λ0(t) is an unspecified baseline hazard at time t, x is the vector of covariates and β is a p-dimensional vector of coefficients.
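For illustration, the (negative log) partial likelihood on which estimation of β is based can be evaluated as follows (a minimal Python sketch using the Breslow convention and ignoring tied event times; not the thesis implementation):

```python
import numpy as np

def neg_log_partial_likelihood(beta, times, events, X):
    """Cox negative log partial likelihood (no tied event times assumed).
    times : observed times; events : 1 = event, 0 = censored; X : covariate matrix."""
    beta = np.atleast_1d(beta)
    eta = X @ beta                       # linear predictor beta^T x
    nll = 0.0
    for i in np.where(events == 1)[0]:
        at_risk = times >= times[i]      # risk set at the i-th event time
        nll -= eta[i] - np.log(np.exp(eta[at_risk]).sum())
    return nll

# With beta = 0 each event contributes log(size of its risk set)
times  = np.array([1.0, 2.0, 3.0])
events = np.array([1, 1, 1])
X      = np.array([[0.2], [-0.1], [0.4]])
val = neg_log_partial_likelihood(np.array([0.0]), times, events, X)  # = log 3 + log 2
```

In practice, β̂ is obtained by minimizing this function numerically, e.g. with Newton's method.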
In this thesis, the Cox model is considered from the viewpoint of experimental design. The estimability of the parameter β0 in the Cox model, where β0 denotes the true value of β, and the choice of optimal covariates are investigated. We give new representations of the observed information matrix In(β) and extend results of Andersen and Gill (1982) for the Cox model. In this way, conditions for the estimability of β0 are formulated. Under some regularity conditions, Σ is the inverse of the asymptotic variance matrix of the MPLE of β0 in the Cox model, and some properties of this asymptotic variance matrix are highlighted. Based on the results on asymptotic estimability, the calculation of locally optimal covariates is considered and illustrated with examples. In a sensitivity analysis, the efficiency of given covariates is calculated; efficiencies are then determined for neighborhoods of the exponential models. It turns out that, for fixed parameters β0, the efficiencies do not change very much for different baseline hazard functions. Some proposals for applicable optimal covariates and a calculation procedure for finding optimal covariates are discussed.
Furthermore, an extension of the Cox model in which time-dependent coefficients are allowed is investigated. In this situation, the maximum local partial likelihood estimator for estimating the coefficient function β(·) is described. Based on this estimator, we formulate a new procedure for testing whether a one-dimensional coefficient function β(·) has a prespecified parametric form, say β(·; ϑ). The score function derived from the local constant partial likelihood function at d distinct grid points is considered. It is shown that the distribution of the properly standardized quadratic form of this d-dimensional vector under the null hypothesis tends to a Chi-squared distribution. Moreover, the limit statement remains true when the unknown ϑ0 is replaced by the MPLE in the hypothetical model, and an asymptotic α-test is given by the quantiles or p-values of the limiting Chi-squared distribution. Finally, we propose a bootstrap version of this test. The bootstrap test is defined only for the special case of testing whether the coefficient function is constant. A simulation study illustrates the behavior of the bootstrap test under the null hypothesis and a particular alternative; it gives quite good results for the chosen underlying model.
References
P. K. Andersen and R. D. Gill. Cox's regression model for counting processes: a large sample study. Ann. Statist., 10(4):1100–1120, 1982.
D. R. Cox. Regression models and life-tables. J. Roy. Statist. Soc. Ser. B, 34:187–220, 1972.
R. L. Prentice. A case-cohort design for epidemiologic cohort studies and disease prevention trials. Biometrika, 73(1):1–11, 1986.
Leuchtkäfer & Orgelkoralle
(2016)
Glowing beetles and medusae, phosphorescent ocean waves, and corals turning to stone fascinated the naturalist Adelbert von Chamisso (1781–1838), hitherto portrayed primarily as a poet. Even more intensively than to zoological and geological phenomena, he devoted himself to the scientia amabilis, the amiable science of plants. This versatile talent wrote his Reise um die Welt (A Voyage Around the World, 1836), which to this day is considered one of the most stylistically accomplished and readable travel accounts. This study is dedicated specifically to Chamisso's natural-history research in the context of the three-year Rurik expedition and to the associated texts. Drawing on a comprehensive corpus of texts and materials, it brings questions from literary and cultural studies as well as the history of science to bear on his work, and answers them productively. Previously unexamined source material is brought to light for research on travel literature, prevailing theses are refuted, and sources from other crew members are examined comparatively. The study places Chamisso the naturalist in focus without losing sight of the poet, and addresses questions of the generation, networking and presentation of natural-historical knowledge in the texts, illustrations and materials of the expedition; overall it is as innovative for literary studies and history as for the interdisciplinary history of knowledge.
Form und Inhalt
(2016)
Background
Overweight and obesity are increasing health problems that are not restricted to adults only. Childhood obesity is associated with metabolic, psychological and musculoskeletal comorbidities. However, knowledge about the effect of obesity on the foot function across maturation is lacking. Decreased foot function with disproportional loading characteristics is expected for obese children. The aim of this study was to examine foot loading characteristics during gait of normal-weight, overweight and obese children aged 1-12 years.
Methods
A total of 10382 children aged one to twelve years were enrolled in the study. Finally, 7575 children (m/f: n = 3630/3945; 7.0 +/- 2.9yr; 1.23 +/- 0.19m; 26.6 +/- 10.6kg; BMI: 17.1 +/- 2.4kg/m(2)) were included for (complete case) data analysis. Children were categorized to normalweight (>= 3rd and <90th percentile; n = 6458), overweight (>= 90rd and <97th percentile; n = 746) or obese (>97th percentile; n = 371) according to the German reference system that is based on age and gender-specific body mass indices (BMI). Plantar pressure measurements were assessed during gait on an instrumented walkway. Contact area, arch index (AI), peak pressure (PP) and force time integral (FTI) were calculated for the total, fore-, mid-and hindfoot. Data was analyzed descriptively (mean +/- SD) followed by ANOVA/Welch-test (according to homogeneity of variances: yes/no) for group differences according to BMI categorization (normal-weight, overweight, obesity) and for each age group 1 to 12yrs (post-hoc Tukey Kramer/Dunnett's C; alpha = 0.05).
Results
Mean walking velocity was 0.95 ± 0.25 m/s, with no differences between normal-weight, overweight or obese children (p = 0.0841). Results show higher foot contact area, arch index, peak pressure and force-time integral in overweight and obese children (p < 0.001). Obese children showed 1.48 times (1-year-olds) to 3.49 times (10-year-olds) the midfoot loading (FTI) of normal-weight children.
Conclusion
Additional body mass leads to a higher overall load, with a disproportional impact on the midfoot area and the longitudinal foot arch, producing characteristic foot loading patterns. Even the feet of one- and two-year-old children are significantly affected. Childhood overweight and obesity are not compensated for by the musculoskeletal system. To avoid excessive foot loading, with its potential risk of discomfort or pain in childhood, prevention strategies should be developed and validated for children with a high body mass index and functional changes in the midfoot area. The presented plantar pressure values could additionally serve as reference data to identify suspicious foot loading patterns in children.
Buyer-seller negotiations have a significant impact on a company's profitability, which makes practitioners aim at maximizing their performance. One lever for increasing bargaining performance is to pursue a clearly defined aspiration, i.e. one's most desired outcome. In this context, the author explores the role of such aspirations in the three negotiation phases: preparation, bargaining, and striking a deal. She investigates determinants of aspirations, unintended consequences such as unethical bargaining behavior, and the consequences of overly ambitious aspirations. As a result, she not only closes existing gaps in negotiation research, but also derives valuable implications for practitioners.
The author examines the cultural identity development of Oromo-Americans in Minnesota, an ethnic group originally located within the national borders of Ethiopia. Earlier studies on language and cultural identity have shown that the degree of ethnic orientation of minorities commonly decreases from generation to generation. Yet oppression and a visible minority status were identified as factors delaying the process of de-ethnicization. Given that Oromos fled persecution in Ethiopia and are confronted with the ramifications of a visible minority status in the U.S., it can be expected that they have retained strong ties to their ethnic culture. This study, however, came to a more complex and theory-building result.
Gregor Imelauer examines what a competitive HR organization can look like against the background of globalization, digitalization, and hyperspecialization. He shows how personnel selection, one of the core HR processes, can be sourced intelligently, so that not only cost criteria but also long-term strategic implications are taken into account. The study is based on 15 interviews with top HR managers of large German companies and with renowned organizational consultants. The author develops critical success factors and concrete recommendations for action for the sourcing of HR processes and for HR work in a network.
Nathalie Hirschmann investigates how the private security industry seeks to establish itself within the system of security provision and how successful it is in doing so. Her analysis shows how the industry's tarnished image and the limited competencies attributed to it make it difficult, on the one hand, to stand alongside the police as an institutional guarantor of public security and, on the other, to present itself to customers and clients in a more professional light. Taking a theory-guided, content-analytical, and sociological-conceptual perspective, the study makes clear which cognitive and social expansion efforts the security industry has undertaken and where these reach their limits.
We consider a statistical inverse learning problem, where we observe the image of a function f through a linear operator A at i.i.d. random design points X_i, superposed with an additional noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of Af) and the inverse (estimation of f) learning problems. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on, or completes, previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent in n but also in the explicit dependence of the constant factor on the variance of the noise and the radius of the source condition set.
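The observation model described in this abstract can be sketched as follows (the notation here is a standard rendering of the setting, not taken verbatim from the paper):

```latex
% Statistical inverse learning: n i.i.d. design points X_i drawn from an
% unknown distribution; the target f is observed only through the operator A,
% corrupted by noise.
Y_i = (A f)(X_i) + \varepsilon_i, \qquad i = 1, \dots, n.
% Direct problem:  estimate Af from the pairs (X_i, Y_i).
% Inverse problem: recover f itself, which requires (regularized) inversion of A.
```

Spectral regularization methods then replace the ill-posed inversion of A by a family of approximate inverses indexed by a regularization parameter.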
Exhaustivity
(2016)
The dissertation proposes an answer to the question of how to model exhaustive inferences and what the meaning of the linguistic material that triggers these inferences is. In particular, it deals with the semantics of exclusive particles, clefts, and progressive aspect in Ga, an under-researched language spoken in Ghana. Based on new data coming from the author's original fieldwork in Accra, the thesis points to a previously unattested variation in the semantics of exclusives in a cross-linguistic perspective, analyzes the connections between the exhaustive interpretation triggered by clefts and the aspectual interpretation of the sentence, and identifies a cross-categorial definite determiner. It thereby sheds new light on several exhaustivity-related phenomena in both the nominal and the verbal domain and shows that the two domains are closely connected.
DNA origami nanostructures are a versatile tool to arrange metal nanostructures and other chemical entities with nanometer precision. In this way, gold nanoparticle dimers with a defined distance can be constructed, which can be exploited as novel substrates for surface-enhanced Raman scattering (SERS). We have optimized the size, composition and arrangement of Au/Ag nanoparticles to create intense SERS hot spots, with Raman enhancement of up to 10^10, which is sufficient to detect single molecules by Raman scattering. This is demonstrated using single dye molecules (TAMRA and Cy3) placed into the center of the nanoparticle dimers. In conjunction with the DNA origami nanostructures, novel SERS substrates are created, which can in the future be applied to the SERS analysis of more complex biomolecular targets, whose position and conformation within the SERS hot spot can be precisely controlled.
Sixteen new ionic liquids (ILs) with tetraethylammonium, 1-butyl-3-methylimidazolium, 3-methyl-1-octylimidazolium and tetrabutylphosphonium cations paired with 2-substituted 4,5-dicyanoimidazolate anions (substituent at C2 = methyl, trifluoromethyl, pentafluoroethyl, N,N′-dimethylamino and nitro) have been synthesized and characterized by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). The effects of cation and anion type and structure of the resulting ILs, including several room temperature ionic liquids (RTILs), are reflected in their crystallization, melting points and thermal decomposition. The ILs exhibited large liquid and crystallization ranges and formed glasses on cooling, with glass transition temperatures in the range of −22 to −71 °C. We selected one of the newly designed ILs because its anion is larger than common conventional IL anions and its strongly electron-withdrawing nitrile groups lead to an overall stabilization of the anion, which may in turn stabilize metal nanoparticles. Stable and well-separated iron and silver nanoparticles were obtained by the decomposition of the corresponding precursors Fe2(CO)9 and AgPF6, respectively, under N2 atmosphere in this newly designed, nitrile-functionalized 4,5-dicyanoimidazolate-anion-based IL. Very small, uniformly sized Fe nanoparticles of about 1.8 ± 0.6 nm were achieved without any additional stabilizers or capping molecules. Comparatively larger Ag nanoparticles were obtained through the reduction of AgPF6 by hydrogen gas. Additionally, the AgPF6 precursor was decomposed under microwave irradiation (MWI), fabricating nut-in-shell-like, that is, core-separated-from-shell, Ag nanostructures.